Chockalingam Muthian

Turing Award 2018

Three researchers have won the 2018 Turing Award, known as the 'Nobel Prize of computing,' for conceptual and engineering breakthroughs in artificial intelligence (AI).


Yoshua Bengio, Geoffrey Hinton and Yann LeCun were named the recipients of the award on Tuesday by the Association for Computing Machinery (ACM).


About the recipients

Bengio is a professor at the University of Montreal in Canada, Hinton is a vice president and engineering fellow at Google and professor emeritus at the University of Toronto, and LeCun is a professor at New York University and vice president and chief AI scientist at Facebook.


Let us look at their key research areas in the field of AI.



Yoshua Bengio - Professor, Department of Computer Science and Operations Research, University of Montreal, and Canada Research Chair in Statistical Learning Algorithms.


Bengio’s research focuses mainly on unsupervised learning. The success of machine learning algorithms generally depends on data representation, and his hypothesis is that this is because different representations can entangle and hide, to varying degrees, the explanatory factors of variation behind the data. Although domain-specific knowledge can help in designing representations, learning with generic priors can also be used, and the quest for AI motivates the design of more powerful representation-learning algorithms that implement such priors. His review work in this area covers advances in unsupervised feature learning and deep learning, including probabilistic models, autoencoders, manifold learning, and deep networks. It also raises longer-term open questions about the appropriate objectives for learning good representations, about computing representations (i.e., inference), and about the geometrical connections between representation learning, density estimation, and manifold learning.
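As a concrete illustration of the autoencoder idea mentioned above, here is a minimal NumPy sketch that learns a compressed representation by reconstructing its input. The layer sizes, learning rate, and random toy data are illustrative assumptions, not values taken from Bengio's papers.

```python
# Minimal autoencoder sketch: learn a low-dimensional representation of the
# data by training an encoder/decoder pair to reconstruct the input.
# All sizes and hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 20))                # toy data: 256 samples, 20 features

d_in, d_hidden = 20, 5                        # compress 20 features down to 5
W_enc = rng.normal(scale=0.1, size=(d_in, d_hidden))
W_dec = rng.normal(scale=0.1, size=(d_hidden, d_in))
lr = 0.05

for step in range(1000):
    H = np.tanh(X @ W_enc)                    # encoder: hidden representation
    X_hat = H @ W_dec                         # decoder: reconstruction
    err = X_hat - X
    loss = np.mean(np.sum(err ** 2, axis=1))  # average reconstruction error

    # Gradients of the reconstruction loss (plain backpropagation).
    g_out = 2 * err / X.shape[0]
    g_W_dec = H.T @ g_out
    g_H = g_out @ W_dec.T
    g_W_enc = X.T @ (g_H * (1 - H ** 2))      # tanh derivative

    W_enc -= lr * g_W_enc
    W_dec -= lr * g_W_dec

print("reconstruction loss:", round(loss, 4))
```

The hidden activations H act as the learned representation; deeper and probabilistic variants such as denoising autoencoders build on the same reconstruction objective.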


His long-term goal is to understand the mechanisms giving rise to intelligence; understanding the underlying principles would deliver artificial intelligence, and he believes that learning algorithms are essential in this quest. He is a co-author of the well-known textbook Deep Learning, often described as the bible of the field.


His recent research papers can be found at this link - http://www.iro.umontreal.ca/~bengioy/yoshua_en/Highlights.html



Geoffrey Hinton - Hinton was a co-author of a highly cited 1986 paper that popularized the backpropagation algorithm for training multi-layer neural networks. He is viewed by some as a leading figure in the deep learning community and is referred to by some as the "Godfather of Deep Learning". The dramatic image-recognition milestone of AlexNet, designed by his student Alex Krizhevsky for the ImageNet challenge in 2012, helped to revolutionize the field of computer vision.


Geoffrey Hinton designs machine learning algorithms. His aim is to discover a learning procedure that is efficient at finding complex structure in large, high-dimensional datasets and to show that this is how the brain learns to see. He was one of the researchers who introduced the backpropagation algorithm and the first to use backpropagation for learning word embeddings. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning, products of experts and deep belief nets. His research group in Toronto made major breakthroughs in deep learning that have revolutionized speech recognition and object classification.
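As a toy illustration of the backpropagation algorithm mentioned above, the following NumPy sketch trains a small two-layer network on the XOR problem. The architecture, initialization, and learning rate are arbitrary choices for the sketch, not Hinton's original setup.

```python
# Toy illustration of backpropagation: a two-layer sigmoid network learning XOR.
# Hyperparameters are illustrative assumptions for the sketch.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1 = rng.normal(scale=1.0, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=(4, 1))
b2 = np.zeros(1)
lr = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the squared-error gradient layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))   # typically approaches [[0], [1], [1], [0]]
```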


Research Areas – Algorithms and Theory, General Science, Machine Intelligence, Machine Perception, NLP and Speech Processing.



Yann LeCun - a computer scientist working primarily in the fields of machine learning, computer vision, mobile robotics, and computational neuroscience. He is the Silver Professor of the Courant Institute of Mathematical Sciences at New York University and Vice President and Chief AI Scientist at Facebook.


He has published over 180 technical papers and book chapters on these topics as well as on neural networks, handwriting recognition, image processing and compression, and on dedicated circuits and architectures for computer perception. The character recognition technology he developed at Bell Labs is used by several banks around the world to read checks and was reading between 10 and 20% of all the checks in the US in the early 2000s. His image compression technology, called DjVu, is used by hundreds of web sites and publishers and millions of users to access scanned documents on the Web. Since the late 1980s he has been working on deep learning methods, particularly the convolutional network model, which is the basis of many products and services deployed by companies such as Facebook, Google, Microsoft, Baidu, IBM, NEC, AT&T and others for image and video understanding, document recognition, human-computer interaction, and speech recognition.
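To give a flavour of the convolutional network model referenced above, here is a small LeNet-style network sketched in PyTorch. It is an illustrative modern re-creation of the idea, with layer sizes following the classic LeNet-5 layout for 28x28 digit images, not LeCun's original Bell Labs implementation.

```python
# A small LeNet-style convolutional network for 28x28 digit images,
# sketched in PyTorch. Layer sizes follow the classic LeNet-5 layout;
# this is an illustrative re-creation, not the original implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LeNetStyle(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5, padding=2)   # 1x28x28 -> 6x28x28
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)             # 6x14x14 -> 16x10x10
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # -> 6x14x14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # -> 16x5x5
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)                           # class logits

model = LeNetStyle()
dummy = torch.randn(8, 1, 28, 28)    # batch of 8 fake digit images
print(model(dummy).shape)            # torch.Size([8, 10])
```

The key design choice is weight sharing: the same small convolutional filters are slid across the image, so the network learns translation-tolerant features with far fewer parameters than a fully connected net.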


He is working on a class of learning systems called Energy-Based Models, as well as on Deep Belief Networks. He is also working on convolutional nets for visual recognition and on a type of graphical model known as factor graphs.
