# The Universal Pattern Popping Up in Math, Physics and Biology

Key Points:

• Bus waiting-time pattern
• Biology: chicken eyes
• Physics: random matrices / eigenvalues / Riemann zeta function for prime numbers
• Universal pattern: a universal statistical curve (related to, but distinct from, the normal distribution curve)

https://www.quantamagazine.org/the-universal-pattern-popping-up-in-math-physics-and-biology-20180823/

# Big Picture: Linear Algebra

The Big Picture of Linear Algebra 线性代数 (MIT OpenCourseWare, by the famous Prof Gilbert Strang)

A.X = B

$A(m,n) = \begin{pmatrix} a_{11} & a_{12} & \ldots & a_{1n}\\ a_{21} & a_{22} & \ldots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{m1}&a_{m2} &\ldots & a_{mn} \end{pmatrix}$

A(m, n)  is a matrix of m rows, n columns

4 sub-Vector Spaces:
Column Space ${C(A)}$ , dim = r = rank (A)
Row Space ${C(A^{T})}$ , dim = r = rank (A)

Nullspace ${N(A)}$   ${\perp C(A^{T})}$ , dim = n – r

Left Nullspace ${N(A^{T})}$ ${\perp C(A)}$ , dim = m – r
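These four dimensions can be checked numerically with NumPy's SVD-based rank plus the rank-nullity theorem (a minimal sketch, using a small hypothetical 3×4 matrix of my own):

```python
import numpy as np

# A hypothetical 3x4 matrix (m = 3 rows, n = 4 columns) of rank 2:
# row 2 is twice row 1, so the rows are not all independent.
A = np.array([[1., 2., 3., 4.],
              [2., 4., 6., 8.],
              [1., 0., 1., 0.]])
m, n = A.shape
r = np.linalg.matrix_rank(A)   # dim C(A) = dim C(A^T) = r

print(r)       # rank r         = 2
print(n - r)   # dim N(A)       = n - r = 2
print(m - r)   # dim N(A^T)     = m - r = 1
```

The column space and row space always share the same dimension r, while the two null spaces pick up whatever dimensions are left over in n and m respectively.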

Abstract Vector Spaces 向量空间

Any object satisfying these 8 axioms belongs to the algebraic structure of a Vector Space, e.g. vectors, polynomials, functions, …

Note: "Vector Space" + "Linear Map" = the Category $\mathbf{Vect}_{K}$

Eigenvalues & Eigenvectors (valeurs propres et vecteurs propres) 特征值/特征向量

[ Note: “Eigen-” is German for Characteristic 特征.]

Important Trick: (see Monkeys & Coconuts Problem)
If a transformation A is linear, and the "before" and "after" states of the vector v remain the same (the status quo is kept), then the eigenvalue $\boxed {\lambda = 1}$:

$\boxed {A.v = \lambda.v = 1.v = v}$

Try to compute:
${ \begin{pmatrix} a_{11} & a_{12} & \ldots & a_{1n}\\ a_{21} & a_{22} & \ldots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n1}&a_{n2} &\ldots & a_{nn} \end{pmatrix} }^{1000000000}$
is more difficult than computing this diagonalized equivalent:
${ \begin{pmatrix} b_{11} & 0 & \ldots & 0\\ 0 & b_{22} & \ldots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 &0 &\ldots & b_{nn} \end{pmatrix} }^{1000000000}$

$= { \begin{pmatrix} {b_{11}}^{1000000000}&0 &\ldots &0\\ 0 &{b_{22}}^{1000000000} &\ldots & 0\\ \vdots &\vdots & \ddots & \vdots\\ 0 &0 &\ldots &{b_{nn}}^{1000000000} \end{pmatrix}}$

Note: This is the secret behind Google's computation with a diagonalized matrix of a billion rows and columns, where the $b_{jk}$ are the "PageRanks" (web links coming into a particular webpage and links going out of it).
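The diagonalization shortcut can be sketched in NumPy (a toy 2×2 example of my own, not Google's actual code): compute $M^k$ as $P D^k P^{-1}$, where raising the diagonal $D$ to the k-th power only requires raising each diagonal entry to the k-th power.

```python
import numpy as np

# A small diagonalizable matrix (hypothetical example)
M = np.array([[2., 1.],
              [1., 2.]])
k = 10

# Eigendecomposition M = P D P^{-1}  =>  M^k = P D^k P^{-1}
eigvals, P = np.linalg.eig(M)     # eigenvalues are 3 and 1
Dk = np.diag(eigvals ** k)        # diagonal power: just entry-wise powers
Mk = P @ Dk @ np.linalg.inv(P)

# Same answer as multiplying M by itself k times
assert np.allclose(Mk, np.linalg.matrix_power(M, k))
```

For a diagonal matrix the cost of one power is n entry-wise exponentiations, versus repeated full matrix multiplications for a general matrix, which is why diagonalizing first pays off at Google's scale.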

The Essence of the Determinant (*) (行列式)

(*) The determinant was invented by the ancient Chinese algebraists 李冶 / 朱世杰 / 秦九韶 in the 13th century (金 / 南宋 / 元 dynasties) in《天元术》(Tian Yuan Shu). The Japanese wasan (和算) mathematician 关孝和 (Seki Takakazu) developed it further, before the German mathematician Leibniz studied it in Europe, where it was later named the "Determinant". The world, however, had to wait until the 19th century for the theory of Matrices 矩阵 by J. J. Sylvester (private tutor in statistical mathematics to Florence Nightingale, the pioneer of modern nursing), which is closely linked to applications of the determinant.

[NOTE] In Jin Yong (金庸)'s wuxia novel 《神雕侠侣》 (The Return of the Condor Heroes), set in the early Yuan dynasty, Huang Rong (黄蓉) cracks the "determinant" problems that had long baffled Ying Gu (瑛姑), the royal consort of the Dali Kingdom. Perhaps she was computing eigenvalues & eigenvectors? 🙂

# Linear Algebra : Left & Right Eigenvectors and Eigenvalues

Let A be a matrix, x a vector, and λ an eigenvalue.

1. Right Eigenvectors and eigenvalues:

A.x = λx

Example:

$A = \begin{pmatrix} 5 & -7 & 7 \\ 4 & -3 & 4\\ 4 & -1 & 2 \end{pmatrix}$ and $x = \begin{pmatrix} 1\\ 1\\ 1 \end{pmatrix}$

Then,
$A.x = \begin{pmatrix} 5\\ 5\\ 5 \end{pmatrix} = 5. \begin{pmatrix} 1\\ 1\\ 1 \end{pmatrix} = 5.x$
So,
(right) eigenvalue = 5

2. Left Eigenvectors and eigenvalues:

x.A = λx

However, be careful that:
$x.A = \begin{pmatrix} 1 & 1 & 1 \end{pmatrix}.\begin{pmatrix} 5 & -7 & 7 \\ 4 & -3 & 4\\ 4 & -1 & 2 \end{pmatrix} = \begin{pmatrix} 13 & -11 & 13 \end{pmatrix}$

If we want to find the left eigenvector associated with the eigenvalue 5, we find the (right) eigenvector of $A^T$ for that eigenvalue, then transpose it.
(https://en.m.wikipedia.org/wiki/Eigenvalues_and_eigenvectors#Applications)

This would lead us to see that:
(-1 1 -1).A = (-5 5 -5) = 5. (-1 1 -1)
So, in this example, the eigenvalue 5 has different left and right eigenvectors:
(-1 1 -1) & (1 1 1) respectively.

Remark 1: However, a nice fact about matrices is that always:
left eigenvalue = right eigenvalue
(A and $A^{T}$ have the same characteristic polynomial), so we simply say "eigenvalue" for short.
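The worked example above is easy to verify numerically; a minimal NumPy sketch:

```python
import numpy as np

# The 3x3 matrix from the example above
A = np.array([[5., -7., 7.],
              [4., -3., 4.],
              [4., -1., 2.]])

# Right eigenvector for eigenvalue 5: A.x = 5.x
x = np.array([1., 1., 1.])
assert np.allclose(A @ x, 5 * x)

# Left eigenvector for the same eigenvalue 5: y.A = 5.y,
# i.e. y is a right eigenvector of A^T
y = np.array([-1., 1., -1.])
assert np.allclose(y @ A, 5 * y)
assert np.allclose(A.T @ y, 5 * y)
```

Note that the eigenvalue 5 is shared, while the left and right eigenvectors differ, exactly as in the worked example.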

Ref:
《Math Bytes》by Tim Chartier
Princeton University Press
[NLB #510 CHA]

Google Search Engine & Linear Algebra:
1) Let M be an n×n matrix for n (say, 1 billion) web pages:

$\begin{pmatrix} m_{11} & m_{12} & \ldots & m_{1n}\\ m_{21} & m_{22} & \ldots & m_{2n}\\ \vdots & \vdots & m_{jk} & \vdots\\ m_{n1} & m_{n2} &\ldots & m_{nn} \end{pmatrix}$

$m_{jk} = \begin{cases} 1, & \text{if Page } j \text{ links to Page } k \\ 0, & \text{if not} \end{cases}$

Note: This PageRank of 0s & 1s is over-simplified. The actual PageRank is a fuzzy number between 0 and 1, based on Larry Page's patented PageRank formula, taking into account the importance of the pages linked from and to, plus many other factors.

2) Let v(n) be the eigenvector of the n webpages' PageRanks $a_k$:
$\begin{pmatrix} a_1 \\ a_2 \\ \vdots\\ a_k \\ \vdots\\ a_n \end{pmatrix}$ $\displaystyle \implies a_k= \sum_{j}m_{jk}$
(summing over all pages j that link to page k)
i.e. page k is pointed to by the pages j.

3) Let λ be the eigenvalue: M.v = λ.v

4) Iterate n times: $\boxed{(M^{n}).v = \lambda{^n}.v}$

=> Page k is ranked more important if many important pages j point to it,
and those pages j are themselves pointed to by other important pages, … (iterate n times).
=> The highest-ranked pages appear first in the search list.
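Steps 1) to 4) amount to power iteration on the link matrix. Here is a toy sketch with a hypothetical 4-page web of my own (the real PageRank adds a damping factor and Larry Page's other refinements):

```python
import numpy as np

# Hypothetical link matrix: M[j][k] = 1 if page j links to page k
M = np.array([[0., 1., 1., 0.],
              [0., 0., 1., 0.],
              [1., 0., 0., 1.],
              [0., 0., 1., 0.]])

# Row-normalize so each page splits its "vote" among its outgoing links
M = M / M.sum(axis=1, keepdims=True)

# Power iteration: v converges to the dominant eigenvector,
# whose entries rank the pages
v = np.ones(4) / 4
for _ in range(100):
    v = M.T @ v

print(int(np.argmax(v)))   # page 2, linked to by every other page, ranks highest
```

Page 2 receives links from all three other pages, so repeated iteration concentrates the most "rank" there, which is exactly the intuition in the two "=>" conclusions above.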

# Eigenvector & Eigenvalue

1. A matrix (M) stretches & twists space.
2. A vector (v) is a distance along some direction.
3. M.v = v', the vector v stretched & twisted by M.

Some directions are special:
a) v stretched but not twisted = an Eigenvector;
b) the amount of stretch = a constant = the Eigenvalue (λ).

Let M the matrix, λ its eigenvalue,
v eigenvector.
By definition: M.v = λ.v
v = I.v (I identity matrix)
M.v = λI.v
(M – λI).v=0
As v is non-zero,
1. Determinant (M- λI) =0 => find λ
2. M.v = λ.v => find v
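A small numerical sketch of this two-step recipe, on a hypothetical 2×2 matrix of my own (for a 2×2 matrix, det(M − λI) = λ² − trace(M)·λ + det(M)):

```python
import numpy as np

M = np.array([[4., 1.],
              [2., 3.]])

# Step 1: solve det(M - λI) = 0.
# For a 2x2 matrix: λ^2 - trace(M)·λ + det(M) = 0
coeffs = [1., -np.trace(M), np.linalg.det(M)]
lambdas = np.roots(coeffs)
assert np.allclose(np.sort(lambdas), [2., 5.])   # the two eigenvalues

# Step 2: for each λ, solve M.v = λ.v (numpy's eig does both steps at once)
vals, vecs = np.linalg.eig(M)
for lam, v in zip(vals, vecs.T):
    assert np.allclose(M @ v, lam * v)
```

In practice one calls `np.linalg.eig` directly; the characteristic polynomial is spelled out here only to mirror step 1 of the derivation.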

Note 1: Why the name "Eigenvalue"?
From German: "Die dem Problem eigentümlichen Werte"
= "The values belonging to this problem"
=> Eigenwerte = Eigenvalues.
Eigenvalues are also called "characteristic values" or "autovalues".
"Eigen" in English = characteristic (but that word was already taken, e.g. the characteristic of a Field).

Note 2: The eigenvalues of Schrödinger's quantum equation are the allowed energy levels of an electron in orbit outside the nucleus (the squared eigenfunctions give the probability of finding the electron at each position).

Note 3: An excellent further explanation of eigenvectors and eigenvalues:

http://lpsa.swarthmore.edu/MtrxVibe/EigMat/MatrixEigen.html

# Solution 2 (Eigenvalue): Monkeys & Coconuts

Solution 2: Use the Linear Algebra eigenvalue equation: A.X = λ.X

A = S(x) = $\frac{4}{5}(x-1)$, where x = the number of coconuts

S(x)=λx

Since each iteration of the transformation leaves the coconut status 'unchanged', the eigenvalue λ = 1 (see the remark below):

$\frac{4}{5}(x-1)=x$
We get
x = – 4

Also, by recursion, after the fifth monkey: $S^5 (x)$ = $(\frac{4}{5})^5 (x-1)- (\frac{4}{5})^4-(\frac{4}{5})^3- (\frac{4}{5})^2- \frac{4}{5}$

$S^5 (x)$ = $(\frac{4}{5})^5 (x) - (\frac{4}{5})^5 - (\frac{4}{5})^4 - (\frac{4}{5})^3 - (\frac{4}{5})^2 - \frac{4}{5}$

For $S^5(x)$ to be a whole number, $5^5$ must divide $(x+4)$, since $S^5(x) + 4 = (\frac{4}{5})^5 (x+4)$.

Minimum positive x= – 4 mod ($5^{5}$ )= $5^{5} - 4$= 3,121 [QED]

Note: The meaning of the eigenvalue λ in a linear transformation is scaling by a factor of λ (lengthening or shortening by λ) after the transformation. Here

λ = 1, because the "before" and "after" states of transformation A are the SAME ("divide the coconuts by 5 with 1 left over").
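The eigenvalue answer is easy to brute-force check in Python (a minimal sketch of the "divide by 5 with 1 left over" process; the names `S` and `works` are my own):

```python
def S(x):
    """One monkey's turn: set aside 1 coconut, take 1/5 of the rest."""
    return 4 * (x - 1) // 5

def works(x):
    """True if all 5 monkeys can divide cleanly (x - 1 divisible by 5 each turn)."""
    for _ in range(5):
        if (x - 1) % 5 != 0:
            return False
        x = S(x)
    return True

assert works(3121)                                # the claimed answer
assert not any(works(x) for x in range(1, 3121))  # and it is the minimum
```

Starting from 3,121 coconuts, the piles after each monkey are 2,496 → 1,996 → 1,596 → 1,276 → 1,020, every division leaving exactly 1 coconut over, and no smaller positive count survives all five turns.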