The Hong Kong University of Science and Technology
Introduction to Deep Learning
Professor Qiang Yang
Outline
• Introduction
• Supervised Learning
  – Convolutional Neural Network
  – Sequence Modelling: RNN and its extensions
• Unsupervised Learning
  – Autoencoder
  – Stacked Denoising Autoencoder
• Reinforcement Learning
  – Deep Reinforcement Learning
  – Two applications: Playing Atari & AlphaGo
Introduction
• Traditional pattern recognition models use hand-crafted features and a relatively simple trainable classifier.
• This approach has the following limitations:
  – It is very tedious and costly to develop hand-crafted features.
  – Hand-crafted features are usually highly dependent on one application and cannot be transferred easily to other applications.
[Figure: hand-crafted feature extractor → "simple" trainable classifier → output]
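The traditional pipeline above can be sketched in a few lines of plain Python. This is an illustrative toy only: the two hand-designed features (mean intensity and a crude horizontal-gradient "edge" feature) and the threshold classifier are hypothetical choices made for this example, not features from any real system.

```python
# Traditional pattern recognition sketch:
#   fixed hand-crafted feature extractor  ->  simple trainable classifier

def handcrafted_features(image):
    """Extract two fixed, hand-designed features from a 2-D image."""
    pixels = [p for row in image for p in row]
    mean_intensity = sum(pixels) / len(pixels)
    # Sum of absolute horizontal differences: a crude "edge" feature.
    edge_energy = sum(abs(row[i + 1] - row[i])
                      for row in image for i in range(len(row) - 1))
    return [mean_intensity, edge_energy]

def train_threshold_classifier(features, labels):
    """'Simple trainable classifier': learn a threshold on feature 0."""
    best_t, best_acc = None, -1
    for f in features:
        t = f[0]
        acc = sum((x[0] >= t) == y for x, y in zip(features, labels))
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Tiny toy data: "bright" images labeled 1, "dark" images labeled 0.
images = [[[9, 9], [9, 8]], [[8, 9], [9, 9]],
          [[1, 0], [0, 1]], [[0, 1], [1, 0]]]
labels = [1, 1, 0, 0]
feats = [handcrafted_features(im) for im in images]
threshold = train_threshold_classifier(feats, labels)
predictions = [int(f[0] >= threshold) for f in feats]
```

Note that all the modelling effort sits in `handcrafted_features`; the classifier itself is trivial, which is exactly the division of labour the slide criticizes.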
Deep Learning
• Deep learning (a.k.a. representation learning) seeks to learn rich hierarchical representations (i.e., features) automatically through a multi-stage feature learning process.
[Figure: low-level features → mid-level features → high-level features → trainable classifier → output. Feature visualization of a convolutional net trained on ImageNet (Zeiler and Fergus, 2013)]
Learning Hierarchical Representations
• Hierarchy of representations with increasing levels of abstraction; each stage is a kind of trainable nonlinear feature transform.
• Image recognition: pixel → edge → texton → motif → part → object
• Text: character → word → word group → clause → sentence → story
[Figure: low-level features → mid-level features → high-level features → trainable classifier → output, with increasing level of abstraction]
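The stacked stages above can be sketched minimally: each stage is a trainable linear map followed by a nonlinearity, and stages are composed so later stages operate on earlier stages' outputs. The weights and the ReLU nonlinearity here are illustrative assumptions (fixed by hand for the example); in practice all stages would be trained jointly, e.g. by backpropagation.

```python
# A hierarchy of trainable nonlinear feature transforms:
# raw input -> low-level -> mid-level -> high-level features.

def relu(x):
    """Elementwise nonlinearity applied after each linear stage."""
    return [max(0.0, v) for v in x]

def linear(weights, x):
    """One trainable linear map; `weights` is a list of weight rows."""
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in weights]

def stage(weights, x):
    """One stage: linear transform followed by a nonlinearity."""
    return relu(linear(weights, x))

# Three stacked stages (weights hand-picked for illustration only).
W1 = [[1.0, -1.0, 0.0], [0.0, 1.0, -1.0]]   # low-level (e.g. edge-like)
W2 = [[1.0, 1.0], [1.0, -1.0]]              # mid-level (e.g. motif-like)
W3 = [[0.5, 0.5]]                           # high-level (e.g. part-like)

x = [3.0, 1.0, 2.0]       # raw input (e.g. pixel values)
h1 = stage(W1, x)         # low-level features
h2 = stage(W2, h1)        # mid-level features
h3 = stage(W3, h2)        # high-level features
```

The key contrast with the traditional pipeline is that every `stage` is trainable, so the feature hierarchy itself is learned from data rather than designed by hand.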