Friday, June 19, 2015

Research Blog: Inceptionism: Going Deeper into Neural Networks



Lovely blog post from Google. I never quite trust machine learning, because you never know what a model has actually learnt. Personally, I've always thought that visualising what a network has learnt would be a good way of understanding what it's doing. It looks like some people at Google have done a great job of exactly that. Not only that, the results look lovely.



From the post: Why is this important? Well, we train networks by simply showing them many examples of what we want them to learn, hoping they extract the essence of the matter at hand (e.g., a fork needs a handle and 2-4 tines), and learn to ignore what doesn’t matter (a fork can be any shape, size, color or orientation). But how do you check that the network has correctly learned the right features? It can help to visualize the network’s representation of a fork.

Indeed, in some cases, this reveals that the neural net isn’t quite looking for the thing we thought it was. For example, here’s what one neural net we designed thought dumbbells looked like:
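To make the idea concrete, here's a rough sketch of what "visualising the network's representation" of something can look like in practice: start from random noise and nudge the pixels so that a chosen class score goes up. This is my own illustration of the general idea, not Google's code; it assumes PyTorch and torchvision with a stock pretrained ImageNet model, and the model choice, class index, learning rate and step count are all just placeholders.

```python
# A minimal sketch of activation maximisation: optimise the input image so a
# chosen output class scores highly. Assumes PyTorch + torchvision; the model
# (ResNet-18) and all hyperparameters here are illustrative, not Google's setup.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

target_class = 543  # commonly listed as "dumbbell" in ImageNet label files; check your own labels

# Start from random noise and ascend the gradient of the class score
# with respect to the input pixels.
image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    score = model(image)[0, target_class]
    # Maximise the class score (minimise its negative), with a small L2
    # penalty so the pixel values don't blow up.
    loss = -score + 1e-4 * image.norm()
    loss.backward()
    optimizer.step()

# `image` is now a rough picture of what the network "thinks" the target class looks like.
```

With no further regularisation this tends to produce noisy, psychedelic images rather than anything recognisable; the point is simply that it lets you see what the network is actually responding to, dumbbells with phantom arms attached and all.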
