Henry Lin and Max Tegmark have a fascinating new paper arguing that the success of deep learning in so many domains has deep connections to the fundamental laws of the universe. Both physics and deep learning take potentially enormous spaces of possible data sets and simplify them to a tiny set of outcomes, governed by just a few parameters. Luckily, they both simplify to much the same tiny subset.

As the authors put it:

> We will see below that neural networks perform a combinatorial swindle, replacing exponentiation by multiplication: if there are say n = 10^6 inputs taking v = 256 values each, this swindle cuts the number of parameters from v^n to v×n times some constant factor. We will show that the success of this swindle depends fundamentally on physics: although neural networks only work well for an exponentially tiny fraction of all possible inputs, the laws of physics are such that the data sets we care about for machine learning (natural images, sounds, drawings, text, etc.) are also drawn from an exponentially tiny fraction of all imaginable data sets. Moreover, we will see that these two tiny subsets are remarkably similar, enabling deep learning to work well in practice.
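The arithmetic behind the swindle is easy to check directly. A minimal sketch, using the n and v from the quote (the "constant factor" is ignored here):

```python
import math

# Numbers taken from the quoted passage.
n = 10**6   # number of inputs, e.g. pixels in an image
v = 256     # values each input can take

# A generic lookup table over all inputs needs ~v**n parameters.
# That number is astronomical, so we report its digit count instead.
digits_generic = n * math.log10(v)   # roughly 2.4 million digits
network_params = v * n               # 256,000,000

print(f"generic function: a number with ~{digits_generic:,.0f} digits of parameters")
print(f"neural network:   ~{network_params:,} parameters")
```

The point of the comparison is that v×n is a number you can store on a laptop, while v^n is a number whose decimal expansion alone would not fit in any physical memory.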
