A Hierarchical Bayesian Model for Learning Higher-order Structure in High-dimensional Datasets

Yan Karklin, Carnegie Mellon University


The goal of many machine learning techniques is to find an efficient representation of statistical regularities in the data. However, only a few learning algorithms can yield the large, distributed representations required in domains where the intrinsic dimensionality of the data is very high. One such approach, Independent Component Analysis (ICA), has been used successfully to derive efficient linear codes for natural images and speech, but as a linear model it is limited in the kind of statistical structure it can represent. Here we present a hierarchical Bayesian model that uses a sparse, distributed code to represent common patterns in the distributions of the data. The model, a non-linear generalization of ICA, is able to capture higher-order statistical structure and describe the non-stationary data distributions observed in many domains. Applied to natural images, our method recovers more abstract properties of the data, such as object location, scale, and texture. The learned higher-order representation may be useful for a variety of image processing tasks, such as image segmentation or texture classification. This approach could also contribute to the understanding of biological sensory systems by providing theoretical insight into the response properties and computational functions of early perception.
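To make the idea of a "non-linear generalization of ICA" concrete, here is a minimal generative sketch of one way such a two-layer model can be structured: a sparse higher-order layer sets the variances of the linear ICA-style coefficients, so the same basis can describe data whose distribution shifts from patch to patch. All dimensions, variable names, and the random bases `A` and `B` are illustrative assumptions, not details taken from the abstract; in the actual model the bases would be learned from data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the abstract)
n_pixels, n_coeffs, n_higher = 64, 64, 10

# A: linear basis, as in standard ICA.
# B: maps higher-order variables to log-variances of the coefficients.
# Both are random placeholders here; in the model they are learned.
A = rng.standard_normal((n_pixels, n_coeffs))
B = rng.standard_normal((n_coeffs, n_higher))

def sample_image():
    # Sparse higher-order causes (Laplacian prior)
    y = rng.laplace(scale=1.0, size=n_higher)
    # Higher-order variables modulate coefficient scales
    # multiplicatively: log(lambda) = B @ y
    lam = np.exp(B @ y)
    # Sparse linear coefficients with modulated scales --
    # this is where the model departs from plain linear ICA
    u = rng.laplace(scale=lam)
    # Linear synthesis of the data, as in ICA
    return A @ u, y

x, y = sample_image()
print(x.shape, y.shape)  # (64,) (10,)
```

Because the higher-order variables `y` control entire patterns of coefficient variances rather than individual pixel values, they naturally encode abstract properties such as texture or the scale of structure in a patch, which is the kind of representation the abstract describes.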

Abstract Author(s): Yan Karklin and Michael S. Lewicki