Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér.
This one is going to be a treat.
As you know all too well after watching at least a few episodes of this series, neural
networks offer us amazingly powerful tools against problems that we didn't stand a
chance of solving for a long, long time.
We are now in the golden age of AI and no business or field of science is going to remain
unaffected by this revolution.
However, this approach comes with a disadvantage compared to previous handcrafted
algorithms: it is harder to know what is really happening under the hood.
That is also, in a way, the advantage of neural networks, because they can deal with complexities
that we humans are not built to comprehend.
But still, it is always nice to peek within a neural network and see if it is trying to
learn the correct concepts that are relevant to our application.
Maybe later we'll be able to take a look into a neural network, learn what it is trying
to do, simplify it and create a more reliable, handcrafted algorithm that mimics it.
What's more, maybe neural networks will one day be able to write such a piece of code themselves.
So clearly, there is a lot of value to be had from these visualizations; however, this topic
is way more complex than one would think at first.
Earlier, we talked about a technique that we called activation maximization, which was
about trying to find an input that makes a given neuron as excited as possible.
Here you can see what several individual neurons have learned when I trained a network to recognize
wood patterns.
In this first layer, it is looking for colors, then in the second layer, some basic patterns
emerge.
As we look into the third layer, we see that it starts to recognize horizontal, vertical
and diagonal patterns, and in the fourth and fifth layers, it uses combinations of the
previously seen features, and as you can see, beautiful, somewhat symmetric figures emerge.
If you would like to see more on this, I put a link to a previous episode in the video
description.
Then a follow-up work on multifaceted neuron visualizations unveiled even
more beautiful and relevant results; a good example showed which neuron is
responsible for recognizing groceries.
A new Distill article on this topic by Christopher Olah and his colleagues
at Google has recently appeared.
Distill is a journal dedicated to publishing clear explanations of interesting phenomena
in machine learning research.
All their articles so far are beyond amazing, so make sure to have a look at the journal
as a whole. As always, the link is available in the video description.
They usually include web demos that you can play with; I'll show you one in a
moment.
This article gives a nice rundown of recent works in optimization-based feature visualization.
The optimization can take place in a number of different ways, but it generally means
that we start out with a noisy image and keep changing this image to maximize the
activation of a particular neuron.
This means that we slowly morph this piece of noise into an image that provides us information
on what the network has learned.
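To make that loop concrete, here is a minimal toy sketch. Everything in it is an assumption for illustration: the "neuron" is just a fixed linear filter `w`, so its gradient with respect to the input is `w` itself. Real feature visualization backpropagates through a trained deep network instead, but the gradient-ascent loop is the same idea.

```python
import numpy as np

# Toy activation maximization: gradient ascent on the input so that the
# activation of a made-up "neuron" grows. (Illustrative sketch only.)
rng = np.random.default_rng(0)

# Stand-in for a learned filter: a stripy +/-1 pattern.
w = np.sign(np.sin(np.linspace(0, 6 * np.pi, 64)))

def activation(x):
    return float(w @ x)          # how "excited" the toy neuron is

x = rng.normal(scale=0.1, size=64)   # start out with noise
before = activation(x)
for _ in range(100):
    x += 0.1 * w                 # gradient of w @ x w.r.t. x is just w
after = activation(x)            # much larger than `before`
```

After the loop, the noise has morphed into an input that strongly excites the neuron, which is exactly the information the visualization is after.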
It is indeed a powerful way to perform visualization, often more informative than simply
picking the images from the training set that excite a neuron the most: it unveils
exactly the information the neuron is looking for, not something that merely correlates
with it.
The article also goes beyond visualizing neurons in isolation, toward a more detailed
understanding of the interactions between these neurons.
After all, a neural network produces an output as a combination of these neuron activations,
so we might as well try to get a detailed look at how they interact.
Different regularization techniques to guide the visualization process towards more informative
results are also discussed.
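One simple regularizer, sketched here as an illustrative choice rather than any particular technique from the article, is an L2 penalty (weight decay) on the optimized image, which keeps the result from drifting into extreme pixel values:

```python
import numpy as np

# Same toy gradient-ascent loop as plain activation maximization, now
# with an L2 penalty on the image. (Illustrative sketch only.)
rng = np.random.default_rng(0)
w = np.sign(np.sin(np.linspace(0, 6 * np.pi, 64)))  # toy linear "neuron"

x = rng.normal(scale=0.1, size=64)   # start out with noise
lr, decay = 0.1, 0.05
for _ in range(200):
    # ascend w @ x while descending the penalty 0.5 * decay * ||x||^2
    x += lr * (w - decay * x)
# the penalty bounds the result: x can only approach w / decay
```

Without the `decay` term the image would grow without limit; with it, the optimization settles on a bounded image that still excites the neuron.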
You can also play with some of these web demos; for instance, this one shows the neuron activations
under different learning rates.
There is so much more in the article, so I urge you to read the whole thing. It doesn't take
that long, and it is a wondrous adventure into the imagination of neural networks.
How cool is that?
If you have enjoyed this episode, you can pick up some really cool perks on Patreon,
like early access, voting on the order of the next few episodes, or getting your name
in the video description as a key contributor.
This also helps us make better videos in the future, and we use part of these funds
to empower research projects and conferences.
Details are available in the video description.
Thanks for watching and for your generous support, and I'll see you next time!