- The science behind deep learning
- Building and training your own neural networks
- Privacy concepts, including federated learning
- Tips for continuing your pursuit of deep learning
About the Reader

For readers with high-school-level math and intermediate programming skills.

About the Author

Andrew Trask is a PhD student at Oxford University and a research scientist at DeepMind. Previously, Andrew was a researcher and analytics product manager at Digital Reasoning, where he trained the world's largest artificial neural network and helped guide the analytics roadmap for the Synthesys cognitive computing platform.

Table of Contents
- Introducing deep learning: why you should learn it
- Fundamental concepts: how do machines learn?
- Introduction to neural prediction: forward propagation
- Introduction to neural learning: gradient descent
- Learning multiple weights at a time: generalizing gradient descent
- Building your first deep neural network: introduction to backpropagation
- How to picture neural networks: in your head and on paper
- Learning signal and ignoring noise: introduction to regularization and batching
- Modeling probabilities and nonlinearities: activation functions
- Neural learning about edges and corners: intro to convolutional neural networks
- Neural networks that understand language: king - man + woman == ?
- Neural networks that write like Shakespeare: recurrent layers for variable-length data
- Introducing automatic optimization: let's build a deep learning framework
- Learning to write like Shakespeare: long short-term memory
- Deep learning on unseen data: introducing federated learning
- Where to go from here: a brief guide
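The chapter on neural networks that understand language hints at word-vector arithmetic such as king - man + woman. As a minimal sketch of that idea, the toy 4-dimensional vectors below are made-up illustrative values, not embeddings from the book; real embeddings are learned from text by a trained network:

```python
import math

# Made-up toy embeddings for illustration only; real word vectors are learned.
embeddings = {
    "king":  [0.9, 0.8, 0.1, 0.2],
    "man":   [0.1, 0.8, 0.1, 0.1],
    "woman": [0.1, 0.1, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8, 0.2],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Compute king - man + woman, element-wise.
query = [k - m + w for k, m, w in
         zip(embeddings["king"], embeddings["man"], embeddings["woman"])]

# Find the nearest word to the query vector, excluding "king" itself.
candidates = {w: v for w, v in embeddings.items() if w != "king"}
result = max(candidates, key=lambda w: cosine(candidates[w], query))
print(result)  # with these toy vectors, the nearest remaining word is "queen"
```

With these particular values the arithmetic lands exactly on the "queen" vector, which is the analogy the chapter title alludes to.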