Dot Products and Approximations
1) Dot Products and Coordinates
Below is a vector $\vec{w}$ with unknown coordinates.
We can pick any two coordinates we like for another vector $\vec{u}$ and see the dot product $\vec{u} \cdot \vec{w}$ that results.
Question 1: Can you find the coordinates of $\vec{w}$?
Question 2: How many choices for $\vec{u}$ are needed to find the coordinates of $\vec{w}$?
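If you want to check your answers after experimenting, here is a small sketch. The hidden vector below is made up for illustration; the point is that dotting with each standard basis vector reads off one coordinate, so two choices of $\vec{u}$ suffice.

```python
import numpy as np

# A hypothetical hidden vector w (in the diagram its coordinates are unknown).
w = np.array([3.0, -2.0])

# Dotting w with a standard basis vector picks out one coordinate of w,
# so two choices of u are enough to recover both coordinates.
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

print(np.dot(e1, w))  # first coordinate of w
print(np.dot(e2, w))  # second coordinate of w
```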
2) Dot Products and Linear Combinations
Dot products can also be very useful for coordinates in non-standard coordinate systems.
In case you don't remember this from Math 4A, linear combinations are how we express vectors in non-standard coordinate systems.
Here are some examples.
- $\left[ \begin{array}{c} x \\ y \end{array} \right]_\alpha$ with respect to the standard $\mathbb{R}^2$ basis $\alpha = \left( \left[ \begin{array}{c} 1 \\ 0 \end{array} \right], \left[ \begin{array}{c} 0 \\ 1 \end{array} \right] \right)$ is $$ x\left[ \begin{array}{c} 1 \\ 0 \end{array} \right] + y\left[ \begin{array}{c} 0 \\ 1 \end{array} \right] = \left[ \begin{array}{c} x \\ 0 \end{array} \right] + \left[ \begin{array}{c} 0 \\ y \end{array} \right] = \left[ \begin{array}{c} x \\ y \end{array} \right]. $$
- $\left[ \begin{array}{c} a \\ b \end{array} \right]_\beta$ with respect to the non-standard $\mathbb{R}^2$ basis $\beta = \left( \left[ \begin{array}{c} 1 \\ 1 \end{array} \right], \left[ \begin{array}{c} 1 \\ -1 \end{array} \right] \right)$ is $$ a\left[ \begin{array}{c} 1 \\ 1 \end{array} \right] + b\left[ \begin{array}{c} 1 \\ -1 \end{array} \right] = \left[ \begin{array}{c} a \\ a \end{array} \right] + \left[ \begin{array}{c} b \\ -b \end{array} \right] = \left[ \begin{array}{c} a+b \\ a-b \end{array} \right]. $$
- $\left[ \begin{array}{c} a \\ b \\ c \end{array} \right]_\gamma$ with respect to the quadratic polynomial basis $\gamma = (x^2, x, 1)$ is $$ a \cdot x^2 + b \cdot x + c \cdot 1 = ax^2 + bx + c. $$
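The second example above can be checked numerically. The coordinates $a$ and $b$ below are sample values chosen for illustration; the computation just expands the linear combination.

```python
import numpy as np

# The non-standard basis beta from the second example.
b1 = np.array([1.0, 1.0])
b2 = np.array([1.0, -1.0])

a, b = 2.0, 3.0  # sample beta-coordinates (chosen for illustration)

# [a, b]_beta means a*b1 + b*b2, which lands at (a+b, a-b).
v = a * b1 + b * b2
print(v)  # [5. -1.]
```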
In the diagram below we have an unknown vector $\vec{w}$ again. But this time we also have two unknown (non-standard) basis vectors $\vec{u}$ and $\vec{v}$.
These vectors are orthogonal (perpendicular) to each other, but they are not unit length.
Question 3: Using just dot products, can you find the coefficients to express $\vec{w}$ as a linear combination of $\vec{u}$ and $\vec{v}$?
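If you want to verify an answer to Question 3 numerically, here is a sketch with made-up orthogonal vectors (the diagram's actual $\vec{u}$ and $\vec{v}$ are unknown, so these are stand-ins).

```python
import numpy as np

# Sample orthogonal (but not unit-length) basis vectors, plus a target w.
u = np.array([2.0, 1.0])
v = np.array([-1.0, 2.0])
w = np.array([4.0, 7.0])

# Because u is perpendicular to v, dotting w = a*u + b*v with u kills the
# v term: w.u = a*(u.u), so a = (w.u)/(u.u); similarly b = (w.v)/(v.v).
a = np.dot(w, u) / np.dot(u, u)
b = np.dot(w, v) / np.dot(v, v)

print(a, b)           # the coefficients
print(a * u + b * v)  # reproduces w
```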
Did you have fun answering those questions?
3) Application: Function Approximation and Accuracy
Do you remember discovering the irrational number $\pi$?
How many digits did you end up learning?
No matter how many digits you learn, you still never have perfect accuracy. But if you memorize 60 digits, then your level of accuracy is close to the ratio of the largest distances in the known universe to the smallest.
We would like to model (approximate) the function $f(x) = e^x$ on the interval $[-1,1]$ with polynomials. We'll start by limiting our degree to 2 (kind of like accuracy to 2 decimal places).
What do you think is the most accurate way to approximate $e^x$ with a quadratic function $ax^2 + bx + c$?
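One numeric way to explore this question is to compare the degree-2 Taylor polynomial at $x = 0$ with a quadratic chosen to be close over all of $[-1,1]$. The sketch below uses a grid least-squares fit as a stand-in for the integral-based projection discussed later.

```python
import numpy as np

# Compare two quadratic approximations of e^x on [-1, 1]:
# the Taylor polynomial at 0, and a least-squares fit over the interval.
x = np.linspace(-1.0, 1.0, 2001)
f = np.exp(x)

taylor = 1 + x + x**2 / 2          # 1 + x + x^2/2
c2, c1, c0 = np.polyfit(x, f, 2)   # best quadratic over the whole interval
fit = c2 * x**2 + c1 * x + c0

# Worst-case error over the interval, for each approximation.
print(np.max(np.abs(f - taylor)))  # larger, concentrated near the endpoints
print(np.max(np.abs(f - fit)))     # smaller overall
```

The Taylor polynomial wins near $x = 0$, but the fitted quadratic has a much smaller worst-case error over the full interval.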
4) Which Functions to Use
In the approximation above, the Taylor series taken at $x=0$ (also known as the Maclaurin series) was the best approximation if you get to zoom in on the point $(0,1)$ as much as you need to beat any other function. But the projection approximation, while not as close at $x=0$, did a much better job overall on the whole interval $[-1,1]$.
So why don't we learn this method instead of Taylor series? There are two reasons. One is that Taylor series are really important for convergence and have a lot of extensions in later math classes. The second reason is why I didn't show you any calculations for the second approximation: the monomials $1, x, x^2$ aren't orthogonal to each other in the sense of our problem, so some really tedious computations had to be done to produce orthogonal versions of them.
Why are those computations so tedious? Imagine learning the first 3 digits of $\pi$, and then, when you get to the 4th digit, being told that you have to go back and change some of the previous digits to keep getting more accurate. Yuck! Watch the polynomial coefficients below to see what I mean.
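The coefficient-shifting can be seen numerically. In the sketch below (a grid least-squares stand-in for the integral projection), refitting $e^x$ with a higher-degree monomial fit changes the earlier coefficients, unlike coordinates in an orthogonal basis, which would stay put.

```python
import numpy as np

# With plain monomials, the best-fit coefficients shift every time the
# degree goes up. Grid least squares on [-1, 1] approximates the
# integral-based projection.
x = np.linspace(-1.0, 1.0, 2001)
f = np.exp(x)

fits = {deg: np.polyfit(x, f, deg)[::-1] for deg in (1, 2, 3)}
for deg, coeffs in fits.items():
    print(deg, np.round(coeffs, 4))  # constant term first

# Notice the constant coefficient jumps when degree 2 is added, and the
# linear coefficient jumps when degree 3 is added.
```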
5) Sine and Cosine to the Rescue
What we will be studying in the near future is function approximation in the same sense as the previous example: how well an approximation stays close to a given function over the whole interval, not just at a single point.
It turns out that while monomial terms like $x^n$ are not very fun to work with in this sense, functions of the form $\sin(nx)$ and $\cos(nx)$ work incredibly well for this very thing, and there are a lot of applications.
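A quick numeric check hints at why these functions work so well: under the integral dot product on $[-\pi, \pi]$, distinct sines and cosines are orthogonal to each other, so no coefficient-shifting occurs. The sketch below approximates the integral with a Riemann sum.

```python
import numpy as np

# Check orthogonality of sin(nx) and cos(mx) on [-pi, pi] under the
# integral dot product, approximated by a Riemann sum on a fine grid.
x = np.linspace(-np.pi, np.pi, 100001)
dx = x[1] - x[0]

def dot(f, g):
    return np.sum(f * g) * dx  # approximates the integral of f*g

print(round(dot(np.sin(2 * x), np.sin(3 * x)), 4))  # ~0: orthogonal
print(round(dot(np.sin(2 * x), np.cos(2 * x)), 4))  # ~0: orthogonal
print(round(dot(np.cos(x), np.cos(x)), 4))          # ~pi: not zero with itself
```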