Big Picture: Linear Algebra 

The Big Picture of Linear Algebra 线性代数 (MIT OpenCourseWare, by the famous Prof Gilbert Strang)

A.X = B

A(m,n) = \begin{pmatrix} a_{11} &  a_{12} & \ldots & a_{1n}\\ a_{21} & a_{22} & \ldots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{m1}&a_{m2} &\ldots & a_{mn} \end{pmatrix}

A(m, n) is a matrix with m rows and n columns.
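As a quick numerical illustration of solving A.X = B, here is a minimal NumPy sketch (the 2×2 system below is my own toy example, not from the lecture):

```python
import numpy as np

# A toy 2x2 system A.X = B (illustrative values only)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([5.0, 10.0])

# Solve A.X = B for X (A must be square and invertible here)
X = np.linalg.solve(A, B)
print(X)                      # [1. 3.]
print(np.allclose(A @ X, B))  # True
```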

The 4 Fundamental Subspaces:

Column Space {C(A)}, dim = r = rank(A)

Row Space {C(A^{T})}, dim = r = rank(A)

Nullspace {N(A)} {\perp C(A^{T})}, dim = n - r

Left Nullspace {N(A^{T})} {\perp C(A)}, dim = m - r
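These four dimensions are easy to check numerically. A minimal sketch with NumPy/SciPy, using a toy 3×4 matrix of rank 2 (my own example):

```python
import numpy as np
from scipy.linalg import null_space

# Toy matrix with m = 3 rows, n = 4 columns; row 2 = 2 x row 1, so rank r = 2
A = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0],
              [0.0, 1.0, 1.0, 1.0]])
m, n = A.shape
r = np.linalg.matrix_rank(A)

print(r)                         # 2  -> dim C(A) = dim C(A^T) = r
print(null_space(A).shape[1])    # 2  -> dim N(A)   = n - r
print(null_space(A.T).shape[1])  # 1  -> dim N(A^T) = m - r
```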

Abstract Vector Spaces 向量空间

Any object satisfying these 8 axioms belongs to the algebraic structure of a Vector Space: e.g. vectors, polynomials, functions, …
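For reference, the 8 axioms (for any vectors u, v, w in V and scalars a, b in the field K) are:

1) u + (v + w) = (u + v) + w (associativity of addition)
2) u + v = v + u (commutativity of addition)
3) there is a zero vector 0 with v + 0 = v (additive identity)
4) every v has an inverse -v with v + (-v) = 0 (additive inverse)
5) a(bv) = (ab)v (compatibility of scalar multiplication)
6) 1.v = v (scalar multiplicative identity)
7) a(u + v) = a.u + a.v (distributivity over vector addition)
8) (a + b)v = a.v + b.v (distributivity over scalar addition)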

Note: “Vector Space” + “Linear Map” = the Category “\textbf{Vect}_{K}” (vector spaces over a field K, with linear maps as morphisms).

Eigenvalues & Eigenvectors (valeurs propres et vecteurs propres) 特征值/特征向量

[Note: “Eigen-” is German for “characteristic” 特征.]

Important Trick: (see the Monkeys & Coconuts Problem)
If a transformation A is linear, and the “before” state and “after” state of the “vector” v are the same (the status quo is kept), then v is an eigenvector with eigenvalue \boxed {\lambda = 1} :

\boxed {A.v = \lambda.v = 1.v  = v}
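A one-line NumPy check of this status-quo trick (the row-stochastic matrix below is my own example; its rows sum to 1, so the all-ones vector keeps the status quo):

```python
import numpy as np

# Each row sums to 1, so A leaves the all-ones vector v unchanged
A = np.array([[0.50, 0.50],
              [0.25, 0.75]])
v = np.array([1.0, 1.0])

print(A @ v)                      # [1. 1.]  -> A.v = v
print(np.allclose(A @ v, 1 * v))  # True     -> eigenvalue lambda = 1
```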

Try to compute:
{ \begin{pmatrix} a_{11} &  a_{12} & \ldots & a_{1n}\\ a_{21} & a_{22} & \ldots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n1}&a_{n2} &\ldots & a_{nn} \end{pmatrix} }^{1000000000}
is much harder than powering its diagonalized equivalent. If A is diagonalizable, A = P.D.P^{-1} with D diagonal, then A^{N} = P.D^{N}.P^{-1}, and powering the diagonal matrix D is easy:
{ \begin{pmatrix} b_{11} & 0 & \ldots & 0\\ 0 & b_{22} & \ldots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 &0 &\ldots & b_{nn} \end{pmatrix} }^{1000000000}

= { \begin{pmatrix} {b_{11}}^{1000000000}&0 &\ldots &0\\ 0 &{b_{22}}^{1000000000} &\ldots & 0\\ \vdots &\vdots & \ddots & \vdots\\ 0 &0 &\ldots &{b_{nn}}^{1000000000} \end{pmatrix}}

Note: This is the secret behind Google's computation with a diagonalized matrix of a billion rows & columns, where the entries b_{jk} encode the “PageRanks” (web links coming into a particular webpage and links going out from that webpage).
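A minimal NumPy sketch of this diagonalization trick (my own 2×2 example, assuming A is diagonalizable; np.linalg.eig supplies P and the diagonal entries of D):

```python
import numpy as np

# A small diagonalizable (column-stochastic) matrix -- illustrative only
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
N = 1000  # stand-in for 10^9; the principle is identical

# Eigendecomposition A = P.D.P^{-1}, with D = diag(eigenvalues)
eigvals, P = np.linalg.eig(A)

# A^N = P.D^N.P^{-1}: powering D just powers each diagonal entry
D_N = np.diag(eigvals ** N)
A_N = P @ D_N @ np.linalg.inv(P)

print(np.allclose(A_N, np.linalg.matrix_power(A, N)))  # True
```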

The Essence of the Determinant (*) (行列式)

(*) The determinant was invented by the ancient Chinese algebraists 李冶 / 朱世杰 / 秦九韶 in the 13th century (Jin 金 / Southern Song 南宋 / Yuan 元 dynasties) in 《天元术》 (the “Celestial Element Method”). The Japanese “和算” (wasan) mathematician 关孝和 (Seki Takakazu) developed it further, before the German mathematician Leibniz introduced the idea in Europe in the late 17th century; the name “Determinant” came later. The world, however, had to wait until the 19th century for the theory of the Matrix 矩阵 by J.J. Sylvester (private mathematics tutor of Florence Nightingale, the founder of modern nursing), which is closely linked to applications of the determinant.

[NOTE] In 金庸 (Jin Yong)'s wuxia novel 《神雕侠侣》 (The Return of the Condor Heroes), set in the early Yuan dynasty, 黄蓉 (Huang Rong) cracks the “determinant” (行列式) puzzle that had long baffled 瑛姑 (Ying Gu), the Royal Consort of the Dali Kingdom. Perhaps she was computing eigenvalues & eigenvectors? 🙂


Linear Algebra: Left & Right Eigenvectors and Eigenvalues

Let A be a matrix, x a vector, and λ an eigenvalue.

1. Right Eigenvectors and eigenvalues:

A.x = λx

Example:

A = \begin{pmatrix} 5 & -7 & 7 \\ 4 & -3 & 4 \\ 4 & -1 & 2 \end{pmatrix} and x = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}

Then,
A.x = \begin{pmatrix} 5\\ 5\\ 5 \end{pmatrix} = 5. \begin{pmatrix} 1\\ 1\\ 1 \end{pmatrix} = 5.x
So,
(right) eigenvalue = 5

2. Left Eigenvectors and eigenvalues:

x.A = λ.x (here x is a row vector)

However, be careful that:
x.A = \begin{pmatrix} 1 & 1 & 1 \end{pmatrix}.\begin{pmatrix} 5 & -7 & 7 \\ 4 & -3 & 4 \\ 4 & -1 & 2 \end{pmatrix} = \begin{pmatrix} 13 & -11 & 13 \end{pmatrix}
which is not a multiple of x, so (1 1 1) is not a left eigenvector of A.

If we want to find the left eigenvector associated with the eigenvalue 5, we find the corresponding (right) eigenvector of A^{T} and transpose it.
(https://en.m.wikipedia.org/wiki/Eigenvalues_and_eigenvectors#Applications)

This would lead us to see that:
\begin{pmatrix} -1 & 1 & -1 \end{pmatrix}.A = \begin{pmatrix} -5 & 5 & -5 \end{pmatrix} = 5.\begin{pmatrix} -1 & 1 & -1 \end{pmatrix}
So, in this example, the eigenvalue 5 has different left and right eigenvectors:
(-1 1 -1) & (1 1 1) respectively.

Remark 1: However, a nice fact about matrices is that the left and right eigenvalues always coincide, since A and A^{T} have the same characteristic polynomial: det(A - λI) = det(A^{T} - λI). So we simply say “eigenvalue” for short.
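The whole example, and Remark 1, can be verified with NumPy (a sketch using the 3×3 matrix A above):

```python
import numpy as np

# The matrix from the example above
A = np.array([[5.0, -7.0, 7.0],
              [4.0, -3.0, 4.0],
              [4.0, -1.0, 2.0]])

x_right = np.array([1.0, 1.0, 1.0])
x_left = np.array([-1.0, 1.0, -1.0])

print(A @ x_right)  # [ 5.  5.  5.]  = 5 * x_right
print(x_left @ A)   # [-5.  5. -5.]  = 5 * x_left

# A and A^T share the same eigenvalues (Remark 1):
print(np.sort(np.linalg.eigvals(A)))    # eigenvalues: -2, 1, 5
print(np.sort(np.linalg.eigvals(A.T)))  # same eigenvalues: -2, 1, 5
```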

Remark 2: Intuitive explanation of eigenvector and eigenvalue.

Ref:
《Math Bytes》 by Tim Chartier
Princeton University Press
[NLB #510 CHA]

Khan Academy

I find the Khan Academy Linear Algebra videos excellent. The founder/teacher Sal Khan has a genius for explaining this not-so-easy topic in modular videos, step by step, from 2-dimensional to 3-dimensional vectors, working with you by hand to compute eigenvalues and eigenvectors, and showing you what they mean graphically.

If you are taking a Linear Algebra course at university, or revising it, just go through all of Khan's short (5-20 min) videos on Linear Algebra here:

As a 138-lesson sequence:

http://theopenacademy.com/content/linear-algebra-khan-academy

or for random revision:

http://m.youtube.com/playlist?list=PLqXgtGPHph5ZLP-XMktk0Ggzte-xAxvI9

Google Linear Algebra

Google Search Engine & Linear Algebra:
1) Let M(n×n) be a matrix over n (say 1 billion) web pages:

\begin{pmatrix} m_{11} & m_{12} & \ldots & m_{1n}\\ m_{21} & m_{22} & \ldots & m_{2n}\\ \vdots & \vdots & m_{jk} & \vdots\\ m_{n1} & m_{n2} &\ldots & m_{nn} \end{pmatrix}

m_{jk} = \begin{cases} 1, & \text{if Page } j \text{ links to Page } k \\ 0, & \text{if not} \end{cases}

Note: This 0-or-1 PageRank is over-simplified. The actual PageRank is a fuzzy number between 0 and 1, based on Larry Page's patented PageRank formula, which takes into account the importance of the pages linked from and to, plus many other factors.

2) Let v(n) be the eigenvector of the n webpages' PageRanks a_k:
\begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_k \\ \vdots \\ a_n \end{pmatrix} \displaystyle \implies a_k = \sum_{j} m_{jk}
(summing the contributions of all pages j that point to page k)

3) Let λ be the eigenvalue: M.v = λ.v

4) Iterate t times: \boxed{(M^{t}).v = \lambda^{t}.v}

=> Page k is ranked more important if many important pages j point to it, and those pages j are themselves pointed to by other important pages, … (iterate t times).
=> The highest-ranked pages appear first in the search results list.
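A minimal power-iteration sketch of steps 1) to 4), on a toy 4-page web of my own invention (real PageRank also adds a damping factor, typically 0.85, which this sketch omits):

```python
import numpy as np

# links[j][k] = 1 if Page j links to Page k (the m_jk convention above)
links = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

# Each page splits its "vote" equally among its out-links;
# transposing makes M column-stochastic, so M.v sums in-links into each page
M = (links / links.sum(axis=1, keepdims=True)).T

# Power iteration: v <- M.v converges to the dominant eigenvector
# (eigenvalue 1 for a stochastic M) -- the PageRank vector
v = np.full(4, 0.25)
for _ in range(100):
    v = M @ v
    v /= v.sum()

print(np.round(v, 4))  # pages with more important in-links rank higher
```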