Baking recipes made by AI | Google Cloud

AI Baking :
“Breakie” = Bread + Cookie
“Cakie” = Cake + Cookie

https://cloud.google.com/blog/topics/developers-practitioners/baking-recipes-made-ai

Remark:

By the way, Queen Marie Antoinette's famous (attributed) quote from the 1789 French Revolution: "Why don't the hungry peasants go eat Brioche?"

"Brioche" = 50% Bread + 50% Cake

Cake-like "high-class" bread "Brioche" made by my bread machine.
Butter-rich "Brioche"

Third-Generation Artificial Intelligence (AI 3.0)

Special-issue article by Tsinghua Academician Zhang Bo 张钹: "Toward the Third Generation of Artificial Intelligence" (full text included)

AI 1.0 = Knowledge-Driven 知识驱动 (Symbolic Computing, Expert Systems, 1950s – 1980s)

AI 1.0 Pioneers : Alan Turing (UK, Turing Machine), Prof John McCarthy (MIT, LISP), Prof Edward Feigenbaum (Stanford, USA, Rule-based Expert Systems), Prof Alain Bonnet (France, École Polytechnique, Expert Systems), etc.

[AI Winter : 10 years in 1990s]

AI 2.0 = Big-Data Driven 数据驱动 (Deep Learning, Machine Learning, Years 2000s – 2020)

AI 2.0 Pioneers : Prof Geoffrey Hinton (UK/Canada, Deep Learning), Prof Andrew Ng 吴恩达 (Stanford, USA; Google / Baidu 百度), Prof Yann LeCun (France), Demis Hassabis (UK, AlphaGo), etc.

AI 3.0 = Math & Algorithms 算法 + Computing Power 算力 + AI 1.0 + AI 2.0 (2020 – )

AI 3.0 Pioneers (potential) : Prof Andrew Yao 姚期智 (Turing Award winner, Tsinghua 清华, China), etc.

https://m.toutiaocdn.com/i6881872328318583303/?app=news_article&timestamp=1602329515&use_new_style=1&req_id=2020101019315501000807704914046A47&group_id=6881872328318583303&tt_from=android_share&utm_medium=toutiao_android&utm_campaign=client_share

Robin Li 李彦宏, Baidu CEO : the Internet's 3 episodes : PC → Mobile → AI

Robin Li's 李彦宏 Cambridge University speech

https://m.toutiaoimg.cn/a6805832038692684300/?app=news_article&is_hit_share_recommend=0&tt_from=android_share&utm_medium=toutiao_android&utm_campaign=client_share

Robin Li 李彦宏, Baidu CEO – Cambridge University Speech
《3 Waves of the Internet》:
1) PC-based (1997 –)

  • Search Webpages
  • 6-month software update cycle

2) Mobile-based (2010 -)

  • The “APP” is born
  • Eco-systems : eg. Apple App Store, Google Play Store
  • O2O (Online to Offline) : same-day hotel booking / restaurants / …
  • Software updated a few times a day

3) AI-based (2017 – now)

  • Voice recognition without keyboard input
  • Image recognition (eg. customer ePayment at McDonald’s)
  • Natural Language Processing (NLP) (eg. a salesman’s virtual assistant)

Cédric Villani Interview (in Chinese)

French Fields Medalist Cédric Villani's Chinese-language interview: he agonised over proving a math/physics theorem, until on the 1001st night, at 4 am, “as if God gave him a phone call – un coup de fil de Dieu”, the insight suddenly came…

He entered politics, joining Macron's young party “En Marche!” as an MP, and this year ran for Mayor of Paris.

In 2017 he introduced “Singapore Math” into French primary schools.

https://m.toutiaocdn.com/i6827717478164988419/?app=news_article_lite&timestamp=1589724887&req_id=2020051722144701001404813006329A50&group_id=6827717478164988419

The NLB Library has 13 copies of Cédric Villani's books for public loan.

MIT New Course (Prof Gilbert Strang) : Linear Algebra and Learning From Data

https://m.toutiaocdn.com/group/6643741079306764814/?app=news_article_lite&timestamp=1573147247&req_id=201911080120460100140261091245FA0A&group_id=6643741079306764814

[MIT OCW Online Course Videos]

https://ocw.mit.edu/courses/mathematics/18-065-matrix-methods-in-data-analysis-signal-processing-and-machine-learning-spring-2018/video-lectures/index.htm

[Full Video]

https://m.toutiaoimg.com/group/6735569795619488264/?app=news_article_lite&timestamp=1573149133&req_id=201911080152120100140470322F644EBF&group_id=6735569795619488264

Shing-Tung Yau 丘成桐 : Foundational Math, AI, and Big Data

AI and Big Data are Twins, their Mother is Math.

“AI 3.0” today, although impressive in “Deep Learning”, still uses “primitive” high-school-level Math, namely Statistics, Probability, and Calculus (Gradient Descent).

AI has not yet taken advantage of the power of post-modern Math invented since WW II, esp. the IT-related branches : Logic (Type Theory), Algebraic Topology (Homology, Homotopy), Linear Algebra, and Category Theory.

That is the argument of the Harvard Math Department chair Prof S. T. Yau 丘成桐 (the first Chinese Fields Medalist), who predicts that a future “AI 4.0” can be smarter and more powerful.

https://www.toutiao.com/group/6751615620304863755/?app=news_article_lite&timestamp=1572193294&req_id=2019102800213401000804710406682570&group_id=6751615620304863755

… Current AI deals with Big Data:

  1. A purely statistical, experience-oriented approach, not drawn from Big Data’s inherent mathematical structures (eg. Homology or Homotopy).
  2. The analytical result is environment-specific and lacks portability to other environments.
  3. A lack of effective algorithms, esp. from Algebraic Topology, which computes Homology / Cohomology using Linear Algebra (matrices).
  4. Limited by hardware speed (eg. GPU), AI is reduced to a layered-structure problem-solving approach – a simplified mathematical analysis, not a true Boltzmann Machine that finds the global optimum.
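Point 3 can be made concrete with a toy sketch (my own illustration, not from Prof Yau's article): computing the homology of a hollow triangle — a loop — using nothing but matrix ranks.

```python
import numpy as np

# Boundary matrix d1 of a hollow triangle (3 vertices, 3 edges, no face).
# Columns = edges (0-1, 0-2, 1-2); rows = vertices; entries are +/-1.
d1 = np.array([
    [-1, -1,  0],   # vertex 0
    [ 1,  0, -1],   # vertex 1
    [ 0,  1,  1],   # vertex 2
])

r1 = np.linalg.matrix_rank(d1)

# Betti numbers come from ranks alone (no faces, so rank(d2) = 0):
b0 = d1.shape[0] - r1        # number of connected components
b1 = d1.shape[1] - r1 - 0    # number of independent 1-dimensional "holes"

print(b0, b1)   # 1 1 : one component, one hole (the triangle is a loop)
```

The same rank computation scales to any simplicial complex, which is exactly why Homology is computable by Linear Algebra.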

Notes:

AI 1.0 : 1950s. Alan Turing and MIT’s John McCarthy (coined the term “AI”; inventor of the Lisp language).

AI 2.0 : 1970s/80s. “Rule-Based Expert Systems” using Fuzzy Logic.

[AI Winter : 1990s / 2000s. Failed ambitious Japanese “5th Generation Computer” based on Prolog-based “Predicate” Logic]

AI 3.0 : 2010s – now. “Deep Learning” by Prof Geoffrey Hinton, using primitive Math (Statistics, Probability, Calculus Gradient Descent)

AI 4.0 : Future. Using “Propositional Type” Logic, Topology (Homology, Homotopy), Linear Algebra, Category Theory.

Math Theorem Machine Proof (AI) : Prof Wu WenJun

Mathematicians mostly enjoy longevity: Hadamard (98), S. S. Chern 陈省身 (93), André Weil (92), Shimura (90), Wu Wenjun 吴文俊 (98)…

Wu WenJun 吴文俊 (1919 – 2017):

  1. Student of S. S. Chern
  2. French scholar: Algebraic Geometry
  3. Last Bourbaki member (the only Chinese one)
  4. Teacher / mentor (University of Strasbourg) of the great “hermit” (隐士) mathematician Grothendieck
  5. AI : Combined ancient Chinese algorithmic Math + IT = Machine Proof of Geometry Theorems 数学机械化 (“mechanisation of mathematics”) (*)

(Read more below …. )

https://m.toutiaocdn.com/group/6744186589083075083/?app=news_article_lite&timestamp=1570335915&req_id=201910061225140100140411492E29E038&group_id=6744186589083075083

Note (*) : In the 2010s, Machine Proof of general mathematics was enabled by the Russian Fields Medalist Vladimir Voevodsky with “Homotopy Type Theory” (HoTT) 同伦类型论, a concept borrowed from computer languages (“interfaces”) into Math: Logic (Types) + Topology (Homotopy) + IT

Math High-Pay Career in Data Science

https://towardsdatascience.com/data-science-jobs-with-their-salaries-171acd3bf9be

Gone are the days when Math graduates were destined to be low-paid teachers, as in the 1960s to 1990s.

Now a Math career is among the hottest high-pay jobs in the 2 key engines of the “4th Industrial Revolution”, ie Mind Automation by Machine Learning (on Big Data) and its ‘young mother’, Artificial Intelligence.

The 3 top jobs (in order of pay) :

1. Data Scientist (US $120K) :

Skills : Math (Stats, Linear Algebra, Calculus, Probability, & potentially Algebraic Topology / Homological Algebra) + A. I.

A Math PhD is preferred.

2. Data Architect / Engineer (US $100K)

Skill: Math + IT (especially in Big Data technologies).

3. Data Analyst (US $65K)

Skills : IT + Business + some Math (Stats).

A Programmer’s Regret: Neglecting Math at University – Adenoid Adventures

Advanced Programming needs Advanced Math: eg.

Video Game Animation: Verlet Integration

AI: Stats, Probability, Calculus, Linear Algebra

Search Engine : PageRank: Linear Algebra

Abstraction in programming (“Polymorphism”) : Monoid, Category, Functor, Monad

Program “Proof” : Propositions as Types, HoTT

https://awalterschulze.github.io/blog/post/neglecting-math-at-university/
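One concrete instance of the “Search Engine : PageRank : Linear Algebra” item above — a minimal power-iteration sketch over a hypothetical 3-page web (my own toy example, not from the linked post):

```python
import numpy as np

# Tiny 3-page web: column j holds the out-link probabilities of page j.
# Page 0 links to pages 1 and 2; page 1 links to 2; page 2 links back to 0.
M = np.array([
    [0.0, 0.0, 1.0],
    [0.5, 0.0, 0.0],
    [0.5, 1.0, 0.0],
])

d = 0.85                       # damping factor
n = M.shape[0]
rank = np.full(n, 1.0 / n)     # start with a uniform rank vector

# Power iteration: repeatedly apply the damped link matrix until it settles.
for _ in range(100):
    rank = (1 - d) / n + d * M @ rank

print(np.round(rank, 3))       # page 2 ends up the most "important"
```

The damping factor 0.85 is the value Brin and Page used; the fixed point of this iteration is an eigenvector problem, which is why PageRank is pure Linear Algebra.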

Abstraction: Monoid, Category

Category
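To make the Monoid abstraction concrete: a monoid is a set of values with an associative “combine” operation and an identity element, and one generic fold then works for every monoid. A minimal Python sketch (the `fold` helper and examples are my own, not from any linked material):

```python
from functools import reduce

# A monoid = (values, an associative "combine" operation, an identity element).
# The same generic fold works for ANY monoid: that is the abstraction's power.
def fold(combine, identity, xs):
    return reduce(combine, xs, identity)

# Numbers under addition form a monoid (identity 0) ...
print(fold(lambda a, b: a + b, 0, [1, 2, 3, 4]))      # 10
# ... strings under concatenation form a monoid (identity "") ...
print(fold(lambda a, b: a + b, "", ["mon", "oid"]))   # "monoid"
# ... and lists under concatenation form a monoid (identity []).
print(fold(lambda a, b: a + b, [], [[1], [2, 3]]))    # [1, 2, 3]
```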

Interview with “The Godfather of Deep Learning”, Prof Geoffrey Hinton

Key Points:

  • Neural Network research began in the ’70s.
  • AI was in vogue in the ’80s: mainly knowledge-based Expert Systems – an inference engine only, with NO self-learning capability.
  • “AI Winter” in the ’90s.
  • He could not get AI funding in the UK.
  • Refusing US military funding, he moved to the University of Toronto with Canadian funding for pure, basic AI research.
  • After 4 decades of perseverance in Neural Networks, he invented the “Deep Learning” algorithm using a new approach: machine ‘self-learning’ by training on Big Data, learning from the error between predicted and actual output via the 19th-century French mathematician Cauchy’s Calculus technique, “Gradient Descent”.
  • Hinton thanks Canada for the basic-research funding.
  • Now working for Google.

Notes: The success of Hinton:

  1. Cross-discipline of 3 skills : (Psychology + Math + IT) – Chinese proverb : 三个臭皮匠, 胜过一个诸葛亮 (“3 ‘smelly’ cobblers beat the smartest military strategist”, Zhuge Liang of the Chinese Three Kingdoms)
  2. Failure after failure, but with perseverance (4 decades)
  3. Courage (withstanding loneliness) but with vision (seeing light at the end of the tunnel)
  4. Looked for a conducive research environment : Canada’s basic-research funding
  5. Stick to his personal principle : Science for Peace of mankind, no ‘Military’ involvement.

[References] :

Gradient Descent in Neural Network (Video here) :

AI with Advanced Math helps in discovering new drugs

https://theconversation.com/i-build-mathematical-programs-that-could-discover-the-drugs-of-the-future-110689?from=timeline

Advanced mathematical methods combined with AI are a powerful tool:

  • Algebraic Topology (Persistent Homology)
  • Differential Geometry
  • Graph Theory

https://sinews.siam.org/Details-Page/mathematical-molecular-bioscience-and-biophysics-1

Math in Machine Learning

https://www.forbes.com/sites/quora/2019/02/15/do-you-need-to-be-good-at-math-to-excel-at-machine-learning/amp/

Certainly, having a strong background in mathematics (eg. Linear Algebra, Multivariable Calculus, Bayesian Probability, etc.) will make it easier to understand machine learning at a conceptual level.

“If the math seems tough, focus on the practical first, learn through analogies and by building something yourself.

But if the math comes easy, you’re starting with a solid foundation.”

Math for AI : Gradient Descent

Simplest explanation by Cheh Wu:

(4 Parts Video : auto-play after each part)

The Math theory behind Gradient Descent is “Multivariable Calculus”; the gradient-descent method itself goes back to Augustin-Louis Cauchy (19th century, France).

1. Revision: Dot Product of Vectors

https://www.khanacademy.org/math/linear-algebra/vectors-and-spaces/dot-cross-products/v/vector-dot-product-and-vector-length

2. Directional Derivative

3. Gradient Descent (opposite = Ascent)

https://www.khanacademy.org/math/multivariable-calculus/multivariable-derivatives/gradient-and-directional-derivatives/v/why-the-gradient-is-the-direction-of-steepest-ascent

Deep Learning with Gradient Descent:
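The chain above (dot product → directional derivative → step opposite to the gradient, the direction of steepest descent) can be sketched in a few lines of Python (my own toy example, not from the videos):

```python
import numpy as np

# Minimise f(x, y) = (x - 3)^2 + (y + 1)^2, whose gradient is known exactly.
def grad(p):
    x, y = p
    return np.array([2 * (x - 3), 2 * (y + 1)])

p = np.array([0.0, 0.0])   # starting point
lr = 0.1                   # learning rate (step size)

# Each step moves opposite to the gradient -- the direction of steepest descent.
for _ in range(200):
    p = p - lr * grad(p)

print(np.round(p, 4))      # converges to the minimum at (3, -1)
```

A neural network does exactly this, except the gradient of the loss is computed by backpropagation over millions of weights instead of a two-variable formula.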