Unsupervised Deep Learning in Python: Master Data Science and Machine Learning with Modern Neural Networks written in Python and Theano

Modern Deep Learning

When we talk about modern deep learning, we are often not talking about vanilla neural networks, but about newer developments - like using Autoencoders and Restricted Boltzmann Machines to do unsupervised pre-training.

Deep neural networks suffer from the vanishing gradient problem, and for many years researchers couldn’t get around it - that is, until new unsupervised deep learning methods were invented.

That is what this book aims to teach you.

Aside from that, we are also going to look at Principal Components Analysis (PCA) and t-Distributed Stochastic Neighbor Embedding (t-SNE), which are not only related to deep learning mathematically, but often are part of a deep learning or machine learning pipeline.

Mostly I am just ultra frustrated with the way PCA is usually taught! So I’m using this platform to teach you Principal Components Analysis in a clear, logical, and intuitive way without you having to imagine rotating globes and spinning vectors and all that nonsense.

One major component of unsupervised learning is visualization. We are going to do a lot of that in this book. PCA and t-SNE both help you visualize data from high dimensional spaces on a flat plane.
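To make that concrete, here is a minimal sketch of the PCA half of the pipeline, assuming scikit-learn is available (the book itself works in NumPy and Theano, and the choice of dataset here is mine):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

# 1797 handwritten digit images, each flattened to 64 dimensions
X, y = load_digits(return_X_y=True)

# Keep only the 2 directions of maximum variance, so the data
# can be plotted on a flat plane
pca = PCA(n_components=2)
X2 = pca.fit_transform(X)

print(X2.shape)                              # (1797, 2)
print(pca.explained_variance_ratio_.sum())   # fraction of variance kept
```

t-SNE exposes the same `fit_transform` interface via `sklearn.manifold.TSNE`; a scatter plot of `X2` colored by `y` shows the ten digit classes separating into clusters.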

Autoencoders and Restricted Boltzmann Machines help you visualize what each hidden node in a neural network has learned. One interesting feature researchers have discovered is that neural networks learn hierarchically. Take images of faces for example. The first layer of a neural network will learn some basic strokes. The next layer will combine the strokes into combinations of strokes. The next layer might form the pieces of a face, like the eyes, nose, ears, and mouth. It truly is amazing!
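As a taste of what an autoencoder is doing under the hood, here is a small sketch in plain NumPy: one sigmoid hidden layer with tied weights, trained by gradient descent to reconstruct its own input (the toy data and all the names here are my own illustration, not the book's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy data: 20-dimensional points that secretly live on 3 latent factors
S = rng.normal(size=(200, 3))
X = sigmoid(S @ rng.normal(size=(3, 20)))
N, D = X.shape
M = 5                                    # hidden layer size

W = rng.normal(scale=0.1, size=(D, M))   # tied weights: decoder uses W.T
bh = np.zeros(M)                         # hidden bias
bo = np.zeros(D)                         # output bias

def forward(X):
    Z = sigmoid(X @ W + bh)              # encode
    Xhat = sigmoid(Z @ W.T + bo)         # decode
    return Z, Xhat

mse_before = np.mean((forward(X)[1] - X) ** 2)

lr = 0.5
for epoch in range(300):
    Z, Xhat = forward(X)
    # Backpropagate the squared reconstruction error
    dXhat = (Xhat - X) * Xhat * (1 - Xhat)
    dZ = (dXhat @ W) * Z * (1 - Z)
    # W is used by both encoder and decoder, so its gradient has two terms
    W -= lr * (X.T @ dZ + dXhat.T @ Z) / N
    bh -= lr * dZ.mean(axis=0)
    bo -= lr * dXhat.mean(axis=0)

mse_after = np.mean((forward(X)[1] - X) ** 2)
print(mse_before, mse_after)             # reconstruction error should fall
```

Each column of `W` is one hidden node's "filter"; reshaping those columns back to image dimensions is how the stroke-like features described above get visualized.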

Perhaps this might provide insight into how our own brains take simple electrical signals and combine them to perform complex reactions.

We will also see in this book how you can “trick” a neural network after training it! You may think it has learned to recognize all the images in your dataset, but add some intelligently designed noise, and the neural network will think it’s seeing something else, even when the picture looks exactly the same to you!
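As a rough illustration of how such adversarial noise works (using a plain logistic-regression classifier and scikit-learn here, rather than a deep network, to keep the sketch short), the trick is to move every pixel a tiny step in the direction that hurts the model most:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

# Binary problem: digit 0 vs digit 1, pixels scaled to [0, 1]
X, y = load_digits(return_X_y=True)
mask = (y == 0) | (y == 1)
X, y = X[mask] / 16.0, y[mask]

clf = LogisticRegression(max_iter=1000).fit(X, y)
w = clf.coef_.ravel()

x = X[y == 1][0]                 # one image of a "1"
eps = 0.2                        # max change per pixel -- barely visible
x_adv = x - eps * np.sign(w)     # step against the class-1 direction

# The score for class 1 drops even though no pixel moved more than eps
print(clf.decision_function([x])[0], clf.decision_function([x_adv])[0])
```

For a deep network the recipe is the same, except the per-pixel direction comes from backpropagating the loss gradient all the way to the input (the "fast gradient sign" idea).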

So if the machines ever end up taking over the world, you’ll at least have some tools to combat them.

Finally, in this book I will show you exactly how to train a deep neural network so that you avoid the vanishing gradient problem - a method called “greedy layer-wise pretraining”.
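In outline, the greedy scheme trains one layer at a time: fit layer 1 as an autoencoder on the raw input, push the data through it, fit layer 2 as an autoencoder on those hidden activations, and so on. Here is a rough sketch using scikit-learn's `MLPRegressor` as a stand-in one-hidden-layer autoencoder (the helper name `pretrain_layers` and the layer sizes are my own illustration; the book does this in Theano):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def pretrain_layers(X, layer_sizes, seed=0):
    """Greedy layer-wise pretraining: each layer is trained as an
    autoencoder on the previous layer's output, then frozen."""
    weights = []
    H = X
    for M in layer_sizes:
        ae = MLPRegressor(hidden_layer_sizes=(M,), activation="logistic",
                          max_iter=500, random_state=seed)
        ae.fit(H, H)                  # learn to reconstruct the current layer
        W, b = ae.coefs_[0], ae.intercepts_[0]
        weights.append((W, b))
        H = sigmoid(H @ W + b)        # feed forward to the next layer's input
    return weights, H

rng = np.random.default_rng(0)
X = rng.random((100, 16))
weights, H = pretrain_layers(X, [8, 4])
print([W.shape for W, _ in weights])  # [(16, 8), (8, 4)]
print(H.shape)                        # (100, 4)
```

The pretrained weights then initialize a deep network that is fine-tuned end-to-end with backpropagation; because each layer starts from a sensible representation instead of random values, the gradients no longer vanish on their way down.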

“Hold up... what’s deep learning and all this other crazy stuff you’re talking about?”

If you are completely new to deep learning, you might want to check out my earlier books and courses on the subject:

Deep Learning in Python https://www.amazon.com/dp/B01CVJ19E8
Deep Learning in Python Prerequisites https://www.amazon.com/dp/B01D7GDRQ2

Much like how IBM’s Deep Blue beat world chess champion Garry Kasparov in 1997, Google’s AlphaGo made headlines when it beat world Go champion Lee Sedol in March 2016.

What was amazing about this win was that experts in the field didn’t think it would happen for another 10 years. The search space of Go is much larger than that of chess, meaning that existing techniques for playing games with artificial intelligence were infeasible. Deep learning was the technique that enabled AlphaGo to correctly predict the outcome of its moves and defeat the world champion.

46 pages, Kindle Edition

Published June 30, 2016


About the author

LazyProgrammer


Community Reviews

5 stars: 2 (18%)
4 stars: 5 (45%)
3 stars: 3 (27%)
2 stars: 1 (9%)
1 star: 0 (0%)

No one has reviewed this book yet.
