Interview with “The Godfather of Deep Learning”, Prof Geoffrey Hinton

Key Points:

  • Hinton began his neural network research in the 1970s.
  • AI was in vogue in the 1980s, dominated by knowledge-based expert systems: inference engines with NO self-learning capability.
  • The “AI Winter” followed in the 1990s.
  • He could not get AI funding in the UK.
  • Refusing USA military funding, he moved to the University of Toronto, where Canadian funding supported pure basic AI research.
  • Over 4 decades of perseverance in neural networks, he developed the “Deep Learning” algorithm with a new approach: the machine ‘self-learns’ by training on Big Data, correcting itself from the variance between its output and the actual value via “Gradient Descent”, a calculus technique from the 19th-century French mathematician Cauchy (see the sketch after this list).
  • Hinton thanks Canada for Basic Research Funding.
  • Now working for Google.
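
A minimal sketch of gradient descent in Python, on a made-up toy task (fitting y = 2x + 1 from noisy samples): the machine repeatedly measures the error between its output and the actual value, then nudges its weights a small step against the error gradient.

```python
import random

# A minimal sketch of gradient descent on one linear neuron.
# The toy task (fit y = 2x + 1) is made up for illustration.
random.seed(0)
data = [(x, 2 * x + 1 + random.uniform(-0.1, 0.1)) for x in range(10)]

w, b = 0.0, 0.0   # start anywhere on the error surface
lr = 0.01         # learning rate: size of each downhill step

for step in range(2000):
    # Gradient of the mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # Cauchy's rule: move against the gradient to reduce the error
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # approaches 2 and 1
```

Deep learning applies this same downhill step, via backpropagation, to millions of weights at once.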

Notes: The success of Hinton:

  1. Cross-discipline of 3 skills: (Psychology + Math + IT) – Chinese proverb: 三个臭皮匠, 胜过一个诸葛亮 (“Three ‘smelly’ cobblers beat Zhuge Liang”, the smartest military strategist of the Chinese Three Kingdoms)
  2. Failures, but with perseverance (4 decades)
  3. Courage (withstanding loneliness), but with vision (seeing light at the end of the tunnel)
  4. Look for a conducive research environment: Canada’s basic research funding
  5. Stick to his personal principle: science for the peace of mankind, with no military involvement.

[References]:

Gradient Descent in Neural Networks (video here):

Slashing Deep Learning Time With Hashing

An old trick, hashing, takes advantage of the inherent sparsity in Big Data to cut computation time by about 90% while losing less than 1% accuracy (see the sketch below).
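
A minimal Python sketch of the idea, assuming a SimHash-style locality-sensitive hash rather than the paper’s exact algorithm: hash each neuron’s weight vector into a bucket once, then for each input compute only the neurons in the input’s matching bucket and skip the rest.

```python
import random

# Minimal sketch (assumed SimHash-style LSH, not the paper's exact
# algorithm): bucket each neuron by the hash of its weight vector,
# then compute only the neurons whose bucket matches the input's hash.
DIM, NEURONS, BITS = 16, 1000, 6

random.seed(0)
# Random hyperplanes; the sign of a dot product gives one hash bit
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def simhash(v):
    return tuple(sum(p * x for p, x in zip(plane, v)) >= 0 for plane in planes)

# Random layer weights stand in for a trained layer
weights = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NEURONS)]

# Pre-bucket every neuron once, by the hash of its weight vector
buckets = {}
for i, w in enumerate(weights):
    buckets.setdefault(simhash(w), []).append(i)

def sparse_forward(x):
    # Similar vectors tend to share a hash, so the input's bucket holds
    # the neurons most likely to fire strongly; the rest are skipped.
    return {i: max(0.0, sum(w * xi for w, xi in zip(weights[i], x)))
            for i in buckets.get(simhash(x), [])}

x = [random.gauss(0, 1) for _ in range(DIM)]
print(f"computed {len(sparse_forward(x))} of {NEURONS} neuron activations")
```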

In picture recognition, for example, most of the data is blank background (scenery, lighting); less than 10% is the striking pattern data that characterises the particular object: a zebra’s stripes, a tiger’s body lines, an elephant’s trunk, a sunflower… When such data is stored in a matrix of billions of rows and columns, 90% of the elements are 0.
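
A minimal sketch of that storage saving, with a made-up tiny matrix: keeping only the nonzero entries means memory and scanning scale with the informative ~10%, not the full grid of cells.

```python
# Minimal sketch: store a mostly-zero matrix as {(row, col): value},
# keeping only the nonzero "striking pattern" entries.
dense = [
    [0, 0, 0, 7],
    [0, 3, 0, 0],
    [0, 0, 0, 0],
]
sparse = {(r, c): v
          for r, row in enumerate(dense)
          for c, v in enumerate(row) if v}
print(sparse)  # {(0, 3): 7, (1, 1): 3}
print(len(sparse), "of", sum(len(row) for row in dense), "entries stored")
```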

https://m.phys.org/news/2017-06-scientists-slash-deep.html