Introduction to statistical models of neural spike train data

Lecturer: Hideaki Shimazaki; RIKEN Brain Science Institute. Email: shimazaki at brain.riken.jp

Course description:

This course will introduce statistical models of event data, i.e., point process models, and their applications to the analysis of neural spike trains. The models include homogeneous and inhomogeneous (rate-modulated) Poisson models and various history-dependent non-Poisson models (e.g., a renewal model). We will also study a popular parametric model in the field known as the point process-generalized linear model (GLM), which offers a tractable scheme for including stimulus input or motor output signals in the model. The course then covers statistical inference based on these models. Neurophysiologists often relate stimulus or behavior to the spike rates of individual neurons obtained by repeated measurements. We will review non-parametric estimation of the time-varying spike rate using a classical histogram or kernel smoother, and optimization of these methods under the assumption of a Poisson point process. Methods for parametric models (e.g., the point process-GLM) will cover standard procedures such as maximum likelihood estimation and goodness-of-fit tests. Emphasis is placed on validation of the models. Students will be exposed to important concepts such as model selection, resampling, and statistical tests. We will review how these statistical models are used to express a set of assumptions for testing specific features of neural activity relevant to information processing in neural coding studies. Toward the end of the course, students will learn how to decode a stimulus input or motor output from neural activity. The course will introduce a state-space model of the point process-GLM, and a recursive Bayesian filter/smoother to decode these signals. Practical applications of the decoding methods in neuroscience and neuroprosthetic studies will be reviewed.

In this course, you will learn:
point process theory, time-rescaling theorem, non-parametric density estimation, state-space model, model selection, smooth prior, Bayesian recursive filter, generalized linear model, maximum entropy model, higher-order interaction, introductory information geometry, Fisher information, Laplace approximation, expectation-maximization algorithm, maximum likelihood estimation, L2/L1 regularization.

Prerequisite: The course is designed for students with no previous experience in statistical analysis or neural data. A basic knowledge of calculus and statistics would help.

 

Lecture 0: Overview of the course, neural coding studies

The first lecture will introduce historical aspects of neural coding studies, with examples from sensory systems.

You will learn why statistical modeling studies have become important.

 

Lecture 1: Neural spike data and Point processes

Poisson point process, memoryless property, exponential distribution,

conditional intensity function

$ \lambda(t \mid H_t) = \lim_{\Delta \to 0} \frac{\Pr\left( \text{spike in } [t, t+\Delta) \mid H_t \right)}{\Delta} $

goodness-of-fit test, QQ-plot, KS-plot

Time-rescaling theorem
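The ideas above can be sketched in a few lines of Python: simulate a homogeneous Poisson process from exponential inter-spike intervals (the memoryless property), then apply the time-rescaling transformation and measure the discrepancy from the uniform distribution, as in a KS-plot. The rate and duration below are hypothetical, not values from the course.

```python
import numpy as np

rng = np.random.default_rng(0)

# Homogeneous Poisson process: i.i.d. exponential inter-spike intervals.
rate, duration = 20.0, 100.0                      # hypothetical rate (Hz), length (s)
isis = rng.exponential(1.0 / rate, size=int(rate * duration * 2))
spikes = np.cumsum(isis)
spikes = spikes[spikes < duration]

# Time-rescaling theorem: tau_k = integral of lambda(t) over the k-th ISI
# is i.i.d. Exp(1); for a homogeneous process this is simply rate * ISI.
tau = rate * np.diff(spikes)
z = 1.0 - np.exp(-tau)                            # should be Uniform(0, 1)

# Maximal discrepancy from the uniform CDF (the quantity behind a KS-plot).
z_sorted = np.sort(z)
n = len(z_sorted)
emp = (np.arange(1, n + 1) - 0.5) / n
ks = np.abs(z_sorted - emp).max()
print(f"{n} rescaled intervals, KS distance = {ks:.4f}")
```

For a correctly specified model the KS distance stays within the usual confidence band of order 1.36 / sqrt(n); a model that misses history dependence would show a visibly larger deviation.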

Lecture: Density estimation
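A minimal sketch of the two non-parametric estimators mentioned in the course description, i.e., the histogram (PSTH) and a Gaussian kernel smoother, applied to synthetic rate-modulated Poisson data. The sinusoidal rate, bin width, and bandwidth below are illustrative choices; the Poisson-based optimization covered in this lecture selects the bin width and bandwidth automatically.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: repeated trials of an inhomogeneous Poisson process
# with rate lambda(t) = 15 * (1 + sin(2 pi t)) Hz, generated by thinning.
def true_rate(t):
    return 15.0 * (1.0 + np.sin(2.0 * np.pi * t))

n_trials, duration, lam_max = 50, 1.0, 30.0
trials = []
for _ in range(n_trials):
    cand = np.sort(rng.uniform(0, duration, rng.poisson(lam_max * duration)))
    keep = rng.uniform(0, lam_max, len(cand)) < true_rate(cand)
    trials.append(cand[keep])
all_spikes = np.concatenate(trials)

# 1) Histogram estimator (PSTH): counts / (bin width * number of trials).
bin_w = 0.05
edges = np.arange(0.0, duration + bin_w, bin_w)
counts, _ = np.histogram(all_spikes, bins=edges)
psth = counts / (bin_w * n_trials)

# 2) Gaussian kernel smoother with bandwidth sigma (a free parameter).
sigma = 0.03
grid = np.linspace(0.0, duration, 200)
kernel = np.exp(-0.5 * ((grid[:, None] - all_spikes[None, :]) / sigma) ** 2)
rate_kernel = kernel.sum(axis=1) / (np.sqrt(2 * np.pi) * sigma * n_trials)

print(f"mean PSTH rate = {psth.mean():.1f} Hz (true mean rate = 15.0 Hz)")
```

Both estimators trade bias against variance through their single smoothing parameter, which is the optimization problem studied in this lecture.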

Further readings

Bayesian statistics

Uncertainty = randomness in the system

Uncertainty = loss of information

Lecture: State-space model
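The recursive Bayesian filter of this lecture can be previewed with a linear-Gaussian sketch: the point-process filter keeps the same predict/update recursion but replaces the Gaussian observation model with a Poisson (conditional-intensity) likelihood. All parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Latent state: a Gaussian random walk; observations: state plus noise.
T, q, r = 200, 0.01, 0.25                 # steps, state noise var, obs noise var
x = np.cumsum(rng.normal(0, np.sqrt(q), T))
y = x + rng.normal(0, np.sqrt(r), T)

# Recursive Bayesian filter (Kalman filter in the linear-Gaussian case).
m, v = 0.0, 1.0                           # prior mean and variance
means = np.empty(T)
for t in range(T):
    v_pred = v + q                        # predict: diffuse the posterior
    k = v_pred / (v_pred + r)             # gain: weight on the new observation
    m = m + k * (y[t] - m)                # update the posterior mean
    v = (1.0 - k) * v_pred                # update the posterior variance
    means[t] = m

rmse_raw = np.sqrt(np.mean((y - x) ** 2))
rmse_filt = np.sqrt(np.mean((means - x) ** 2))
print(f"RMSE: raw observations {rmse_raw:.3f}, filtered {rmse_filt:.3f}")
```

The smoother adds a backward pass over the stored filter estimates; in the point-process setting the Gaussian update step is typically replaced by a Laplace approximation to the Poisson posterior.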

Limitation

Higher-order interactions

Sparseness and higher-order interactions

Time-varying higher-order interactions

 

Recent topics in neuroscience

Further reading on trial-by-trial variability

 

Modeling study that incorporates trial-by-trial variability

Spike-triggered average and covariance
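A sketch of the spike-triggered average (STA), the mean stimulus segment preceding each spike, on synthetic data from a linear-nonlinear cascade. The filter shape, rectifying nonlinearity, and constants below are illustrative, not values from any particular dataset.

```python
import numpy as np

rng = np.random.default_rng(3)

# White-noise stimulus driving spikes through a rectified linear filter.
n, lags = 100_000, 20
stim = rng.normal(size=n)
true_filter = np.exp(-np.arange(lags) / 5.0)[::-1]   # largest weight at lag 0

# Causal filtering: drive[t] = sum_j w[j] * stim[t - j].
drive = np.convolve(stim, true_filter[::-1], mode="full")[:n]
p_spike = np.clip(0.05 * np.maximum(drive, 0.0), 0.0, 1.0)
spikes = rng.uniform(size=n) < p_spike

# STA: average the `lags`-long stimulus window ending at each spike time.
spike_idx = np.nonzero(spikes)[0]
spike_idx = spike_idx[spike_idx >= lags]
sta = np.mean([stim[i - lags + 1 : i + 1] for i in spike_idx], axis=0)

print(f"{len(spike_idx)} spikes; STA/filter correlation = "
      f"{np.corrcoef(sta, true_filter)[0, 1]:.2f}")
```

For Gaussian white-noise stimuli the STA recovers the direction of the linear filter; spike-triggered covariance extends this to multi-dimensional stimulus subspaces by examining the covariance of the same spike-triggered ensemble.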

Data set

Neural Signal Archive http://www.neuralsignal.org/

CRCNS http://crcns.org/data-sets/

Neural Prediction Challenge http://neuralprediction.berkeley.edu

 

References

Prof. Rob Kass

Prof. Liam Paninski course page