The Romance of Hidden Components

It is amazing that deep learning works: billions of parameters simultaneously converge to a good solution without getting trapped in bad local minima. In convex optimization, the theory of gradient descent is clear, down to how many iterations convergence takes. But deep learning operates in a very different regime, and non-convexity is only part of the story: we simply do not have good intuitions about very high-dimensional spaces. Another reason to understand such spaces is the data itself. Is the high dimensionality of the data we observe merely nominal? The manifold hypothesis says it is. And if the high dimensions are just nominal, how do we find the real manifold on which the data resides? This post covers some properties of high-dimensional spaces, how we can extract the real data manifold from a high-dimensional description, and some connections to deep neural nets.
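As a minimal sketch of the "nominal vs. real" dimensionality question (not from the post itself, and only covering the linear case the manifold hypothesis generalizes): the toy example below embeds points that really live on a 2-dimensional plane into 100 nominal dimensions, then uses PCA to show how few directions actually carry variance.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

n_samples, intrinsic_dim, nominal_dim = 1000, 2, 100

# Latent coordinates on the true (here: linear) manifold.
latent = rng.normal(size=(n_samples, intrinsic_dim))

# Random linear embedding into the nominal 100-D space, plus a little noise.
embedding = rng.normal(size=(intrinsic_dim, nominal_dim))
data = latent @ embedding + 0.01 * rng.normal(size=(n_samples, nominal_dim))

pca = PCA(n_components=10).fit(data)
print(np.round(pca.explained_variance_ratio_, 3))
# Nearly all variance sits in the first 2 components:
# the other 98 dimensions are only nominal.
```

PCA can only reveal linear structure; curved manifolds need non-linear methods, which is exactly where the harder questions raised above begin.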

Properties of High Dimensional Spaces

… To be continued …
