
DEEP LEARNING TUTORIAL (sliding mode control and modern control slides)


DOCUMENT INFORMATION

Basic information

Format
Number of pages: 179
File size: 5.25 MB

Content

The best multiple-choice tests and pptx lecture slides for Medicine and Pharmacy subjects and other disciplines are available in the collection "tài liệu ngành Y dược hay nhất"; https://123doc.net/users/home/user_home.php?use_id=7046916. These are ppt lecture slides for the sliding mode control and modern control course, intended for students majoring in engineering technology and other fields. The collection also includes multiple-choice tests with detailed answer keys for each subject, helping students review on their own and do well in the sliding mode control and modern control course at the college and university level in engineering technology and other majors.

DEEP LEARNING TUTORIAL

Deep learning attracts lots of attention
• Google Trends: search interest in deep learning keeps rising (chart covering 2007, 2009, 2011, 2013, 2015)
• Deep learning obtains many exciting results
• The talks in this afternoon
• This talk will focus on the technical part

Outline
Part I: Introduction of Deep Learning
Part II: Why Deep?
Part III: Tips for Training Deep Neural Network
Part IV: Neural Network with Memory

Part I: Introduction of Deep Learning
• What people already knew in 1980s

Example Application
• Handwriting Digit Recognition: the machine takes an image and outputs "2"
• Input: x1 ... x256, one dimension per pixel (16 x 16 = 256; ink → 1, no ink → 0)
• Output: y1 ... y10, each dimension represents the confidence that the image is a particular digit (e.g. y1 = 0.1, y2 = 0.7, ..., y10 = 0.2, so the image is "2")
• In deep learning, the function mapping x to y is represented by a neural network

Element of Neural Network
• Neuron: z = a1*w1 + a2*w2 + ... + aK*wK + b, output a = σ(z)
• w1 ... wK are the weights, b is the bias, σ is the activation function

Neural Network
• Input layer → hidden layers (Layer 1 ... Layer L) → output layer, mapping x1 ... xN to y1 ... yM
• Each node is a neuron; "deep" means many hidden layers

Example of Neural Network
• A small worked forward pass with the sigmoid activation σ(z) = 1 / (1 + e^(-z)); with the slide's weights and biases the two first-layer neurons output 0.98 and 0.12 (a code sketch of this computation follows at the end of this outline)

Extension: "peephole" LSTM
• (Diagram: LSTM cells unrolled over time steps t and t+1, with input x, cell state c, hidden state h, and the gate signals z, zi, zf, zo producing the outputs yt and yt+1)

Other Simpler Alternatives
• Gated Recurrent Unit (GRU) [Cho, EMNLP'14]
• Structurally Constrained Recurrent Network (SCRN) [Tomas Mikolov, ICLR'15]
• Vanilla RNN initialized with the identity matrix + ReLU activation function [Quoc V. Le, arXiv'15]
• These outperform or are comparable with LSTM on different tasks

What is the next wave?
• Attention-based Model: a DNN/LSTM controller drives reading and writing heads over an internal memory (or information from the output), mapping input x to output y
• Already applied to speech recognition, caption generation, QA, visual QA
• End-To-End Memory Networks. S. Sukhbaatar, A. Szlam, J. Weston, R. Fergus. arXiv Pre-Print, 2015
• Neural Turing Machines. Alex Graves, Greg Wayne, Ivo Danihelka. arXiv Pre-Print, 2014
• Ask Me Anything: Dynamic Memory Networks for Natural Language Processing. Kumar et al. arXiv Pre-Print, 2015
• Neural Machine Translation by Jointly Learning to Align and Translate. D. Bahdanau, K. Cho, Y. Bengio. International Conference on Learning Representations, 2015
• Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. Kelvin Xu et al. arXiv Pre-Print, 2015
• Attention-Based Models for Speech Recognition. Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, Yoshua Bengio. arXiv Pre-Print, 2015
• Recurrent Models of Visual Attention. V. Mnih, N. Heess, A. Graves, K. Kavukcuoglu. NIPS, 2014
• A Neural Attention Model for Abstractive Sentence Summarization. A. M. Rush, S. Chopra, J. Weston. EMNLP 2015

Concluding Remarks
• Introduction of deep learning
• Discussing some reasons for using deep learning
• New techniques for deep learning: ReLU, Maxout; giving all the parameters different learning rates; Dropout
• Network with memory: recurrent neural network; long short-term memory (LSTM)

Reading Materials
• "Neural Networks and Deep Learning", written by Michael Nielsen, http://neuralnetworksanddeeplearning.com/
• "Deep Learning" (not finished yet), written by Yoshua Bengio, Ian J. Goodfellow and Aaron Courville, http://www.iro.umontreal.ca/~bengioy/dlbook/

Thank you for your attention!
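To make the neuron formula z = a1*w1 + ... + aK*wK + b and the sigmoid activation above concrete, here is a minimal NumPy sketch of one layer evaluated in matrix form, a = σ(Wx + b), as in the Matrix Operation appendix below. The input, weight, and bias values are assumptions chosen so the two outputs reproduce the 0.98 and 0.12 figures that survive from the slide's example; they are not taken verbatim from the deck.

```python
import numpy as np

def sigmoid(z):
    # Sigmoid activation from the slides: sigma(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

# Assumed example values (not from the original deck), picked so the layer
# outputs come out to roughly 0.98 and 0.12, matching the slide's figures.
x = np.array([1.0, -1.0])            # input vector (a1, a2)
W = np.array([[ 1.0, -2.0],          # row i holds the weights of neuron i
              [-1.0,  1.0]])
b = np.array([1.0, 0.0])             # one bias per neuron

# One layer of the network: z_i = sum_j W[i, j] * x[j] + b[i], a_i = sigma(z_i)
z = W @ x + b
a = sigmoid(z)
print(a)                             # -> approximately [0.98, 0.12]
```

Stacking several such layers, with the output a of one layer fed as the input x of the next, is exactly the "deep" network of the slides: many hidden layers between input and output.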
Acknowledgement
• Ryan Sun

Appendix: Matrix Operation
• The layer computation written in matrix form: a = σ(Wx + b), with W the weight matrix, b the bias vector, x the input and a the output (the same example values as before give outputs 0.98 and 0.12, here labelled y1 and y2)

Why Deep? – Logic Circuits
• A two-level circuit of basic logic gates can represent any Boolean function
• However, no one uses two levels of logic gates to build computers
• Using multiple layers of logic gates to build some functions is much simpler (fewer gates needed)

Boosting and Deep Learning
• Boosting: the input is fed to several weak classifiers and their outputs are combined
• Deep learning: the first layer acts as weak classifiers, the next layer combines them into boosted weak classifiers, and further layers boost those again (boosted boosted weak classifiers)

Maxout
• A Maxout unit outputs the max over a group of learnable linear functions of its input, so the activation function itself is learnable
• ReLU is a special case of Maxout
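The Maxout slide only states the relationship in words, so here is a small sketch of why ReLU is a special case of Maxout. All weights and inputs below are made-up illustrative values, not taken from the slides.

```python
import numpy as np

def maxout(x, W, b):
    # A maxout unit: compute k linear functions of the input (rows of W,
    # entries of b) and output the largest one.
    return np.max(W @ x + b)

x = np.array([2.0, -3.0])

# Generic maxout unit with two learnable pieces (placeholder values).
W = np.array([[ 0.5, 1.0],
              [-1.0, 0.2]])
b = np.array([0.1, -0.3])
print(maxout(x, W, b))

# ReLU as a special case: one piece is the ordinary linear unit w.x + b,
# the other is pinned to the constant zero function, so the maxout unit
# computes max(w.x + b, 0), i.e. ReLU applied to that neuron.
w_relu = np.array([1.0, 2.0])
b_relu = 0.5
W_special = np.vstack([w_relu, np.zeros_like(w_relu)])
b_special = np.array([b_relu, 0.0])
print(maxout(x, W_special, b_special))          # maxout output
print(max(w_relu @ x + b_relu, 0.0))            # plain ReLU, same value
```

When both pieces are learned rather than one being fixed at zero, the network effectively learns its own piecewise-linear activation function, which is the "learnable activation function" point on the slide.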

Date posted: 29/03/2021, 08:17
