**Jeremy Lewi, Georgia Institute of Technology**

We show how prior knowledge about a neuron's conditional response function can be used to select optimal stimuli for fitting a parametric model of that response function. Our prior beliefs describe a submanifold in which we expect to find the model's optimal parameters. Stimuli are selected by maximizing the expected information the response provides about the model's parameters, where the expected information is computed using both the information gathered in previous trials and our prior knowledge. Our algorithm can be used to design optimal experiments for estimating high-dimensional receptive fields with a known parametric structure, e.g., Gabor functions in the case of V1 experiments.
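As a rough illustration of the information-maximization idea, under a Gaussian posterior N(mu, C) over the model parameters a common proxy for the expected informativeness of a unit-norm stimulus x is the posterior variance of its projection, x^T C x; maximizing this selects the top eigenvector of C. This is a simplified sketch, not the algorithm in the abstract, and the function name is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch (hypothetical): score a candidate stimulus x by x^T C x, the
# posterior variance along x. Over the unit sphere this quadratic form is
# maximized by the eigenvector of C with the largest eigenvalue.
def most_informative_stimulus(C):
    """Return the unit-norm stimulus maximizing x^T C x."""
    _, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
    return eigvecs[:, -1]            # top eigenvector, already unit norm

C = np.cov(rng.standard_normal((5, 200)))  # toy posterior covariance
x = most_informative_stimulus(C)
```

In practice the full criterion also accounts for the GLM nonlinearity and the stimulus power constraint; the quadratic form above is only the Gaussian-posterior core of that computation.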

We model a neuron’s response function as a generalized linear model (GLM) and update a Gaussian approximation of the posterior over the GLM’s full parameter space after each observation. We do not restrict the posterior’s support to the submanifold given by our prior because 1) the true receptive field may not lie exactly on the submanifold, but only near it in some statistical sense, and 2) we want to preserve a key log-concavity property of the posterior which guarantees it has no local maxima. Instead, we use our beliefs about the submanifold when choosing optimal stimuli. We estimate the expected information using models in the tangent space of the manifold at the projection of the maximum a posteriori (MAP) estimate of the parameters onto the manifold. Since the posterior is log-concave, the tangent space in the neighborhood of the MAP contains the models that are close to the submanifold and have high probability under the posterior. For example, if we are fitting a Gabor receptive field, the projection of the MAP is the best Gabor approximation of the MAP. To find receptive fields close to both the MAP and the submanifold, we perturb the parameters of this best-fit Gabor. The resulting changes in the receptive field can be approximated by linear combinations of the partial derivatives of the receptive field with respect to the Gabor’s parameters; the vector space spanned by these derivatives is the tangent space. By projecting the posterior into the tangent space, we restrict ourselves to a set of receptive fields that have high probability under the posterior and are, at least locally, close to the manifold. Once the posterior is projected into the tangent space, an existing algorithm can efficiently optimize the stimuli.
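The tangent-space construction can be sketched as follows. The Gabor parameterization, the finite-difference Jacobian, and the function names here are illustrative assumptions, not the authors' implementation; the step of fitting the best Gabor to the MAP estimate is taken as already done:

```python
import numpy as np

def gabor(theta, n=16):
    """Hypothetical 1-D Gabor receptive field.

    theta = (amplitude, center, width, frequency, phase)."""
    a, mu, sigma, f, phi = theta
    t = np.linspace(-1.0, 1.0, n)
    return a * np.exp(-0.5 * ((t - mu) / sigma) ** 2) * np.cos(2 * np.pi * f * t + phi)

def tangent_basis(theta, eps=1e-6):
    """Orthonormal basis for the tangent space at gabor(theta).

    Columns of the Jacobian are the partial derivatives of the receptive
    field with respect to each Gabor parameter (central differences);
    their span is the tangent space described in the text."""
    J = np.stack(
        [(gabor(theta + eps * e) - gabor(theta - eps * e)) / (2 * eps)
         for e in np.eye(len(theta))],
        axis=1,
    )                              # n x p Jacobian
    Q, _ = np.linalg.qr(J)         # orthonormalize the columns
    return Q

def project_posterior(mu, C, Q):
    """Project the Gaussian posterior N(mu, C) into the tangent space."""
    return Q.T @ mu, Q.T @ C @ Q

# Toy usage: pretend these are the parameters of the best Gabor fit to the MAP.
theta_map = np.array([1.0, 0.1, 0.3, 2.0, 0.5])
Q = tangent_basis(theta_map)
mu_t, C_t = project_posterior(gabor(theta_map), np.eye(16), Q)
```

The projected mean and covariance (`mu_t`, `C_t`) live in the low-dimensional tangent space, where the stimulus optimization can then be carried out.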

We present simulations showing that incorporating prior knowledge leads to faster convergence to the optimal parameter values. For comparison, our simulations also include maximally informative stimuli chosen without the prior knowledge, as well as random stimuli and conventional stimuli such as drifting gratings.

**Abstract Author(s):** Jeremy Lewi, Robert Butera, and Liam Paninski