Fred Simard organized a Computational Neuroscience day by and for CAMBAM students, which took place on May 7, 2013. He came up with the brilliant idea of recording the workshop lectures on video, and we now also have a CAMBAM student YouTube account. Check out what we have on offer! (Adapted from the original post on INSIDE CAMBAM by Fred Simard.) Thank you, Fred, and all the great speakers!

 

Linear Regression, Optimization, Classification and Decoding – Part 1, by Greg Stacey

Speaker: Greg Stacey
Contact: richard.greg.stacey@mail.mcgill.ca

This video is the first of a two-part series on regression, optimization, classification, and decoding. In it you will learn the basics of linear regression, along with an explanation of the concept of a cost/loss function and of the gradient method used to optimize such a function.

You can find a code sample here:
RegressionExamples.m
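
For readers who want to experiment before opening RegressionExamples.m, here is a minimal, self-contained sketch of linear regression fit by gradient descent on a squared-error loss. All variable names and settings are illustrative and not taken from the sample file:

% Synthetic data: y = 2x + 1 plus noise
rng(0);
x = linspace(0, 1, 50)';
y = 2*x + 1 + 0.1*randn(size(x));

X = [ones(size(x)) x];             % design matrix with intercept column
w = zeros(2, 1);                   % initial weights [intercept; slope]
lr = 0.5;                          % learning rate (step size)

for iter = 1:1000
    err  = X*w - y;                % residuals
    grad = (X' * err) / numel(y);  % gradient of the mean squared error
    w = w - lr * grad;             % gradient descent step
end

% Compare with the closed-form least-squares solution
w_closed = X \ y;
fprintf('gradient descent: [%.3f %.3f], closed form: [%.3f %.3f]\n', ...
        w, w_closed);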

 

Linear Regression, Optimization, Classification and Decoding – Part 2, by Nathan Friedman

Speaker: Nathan Friedman
Contact: nathan.friedman2@mail.mcgill.ca

In this talk, the second in the series on linear regression, optimization, classification and decoding, I give a very brief overview of some machine learning classification algorithms. I explain what a linear classifier is and demonstrate both binary and multi-class classifiers. The algorithms presented are: Regularized Least Squares, Logistic Regression, Perceptron, Support Vector Machine, and Fisher’s Discriminant.
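
As a rough illustration of one of these algorithms, below is a generic textbook sketch of the perceptron update rule for a binary linear classifier; it is not code from the talk, and the data and names are made up:

rng(1);
N = 100;
X = [randn(N/2,2)+2; randn(N/2,2)-2];   % two roughly separable clusters
t = [ones(N/2,1); -ones(N/2,1)];        % labels in {+1, -1}
Xb = [X ones(N,1)];                     % append a bias term

w = zeros(3,1);
for epoch = 1:50
    for i = 1:N
        if t(i) * (Xb(i,:)*w) <= 0      % misclassified point
            w = w + t(i) * Xb(i,:)';    % perceptron update
        end
    end
end

pred = sign(Xb*w);
fprintf('training accuracy: %.2f\n', mean(pred == t));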

 

Dimensionality Reduction: PCA and Gaussian Process Factor Analysis, by Frederic Simard

Speaker: Frederic Simard
Contact: frederic.simard@mail.mcgill.ca

Website: www.atomsproducts.com

In this talk, you will learn the basics of dimensionality reduction. The first algorithm presented is principal component analysis (PCA), which is based on the variance in the data set. You will learn how to select a subset of dimensions while retaining most of the information in your data, for example in order to build a classifier. A quick presentation of Gaussian Process Factor Analysis follows. This algorithm extracts trajectories of the system state in a lower-dimensional space.

You can find code sample packages here:
Principal Components Analysis Code Sample
Gaussian Process Factor Analysis Code Sample
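
As a companion to the sample packages, here is a minimal PCA sketch using the eigendecomposition of the sample covariance matrix; the data and variable names are invented for illustration and do not come from the packages above:

rng(2);
X = randn(200, 2) * [3 1; 1 0.5];   % correlated 2-D data
Xc = X - mean(X, 1);                % center each dimension

C = cov(Xc);                        % sample covariance matrix
[V, D] = eig(C);                    % eigenvectors and eigenvalues
[vals, order] = sort(diag(D), 'descend');
V = V(:, order);                    % principal components, largest variance first

% Project onto the first principal component
scores = Xc * V(:, 1);
fprintf('variance explained by PC1: %.1f%%\n', 100*vals(1)/sum(vals));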

 

Information Theory and Neural Coding – Part 1, by Adam Schneider

Speaker: Adam Schneider
Contact: adam.schneider@mail.mcgill.ca
Collaborator: Mohsen Jamali

Information theory, developed by Claude Shannon in 1948, provides mathematically rigorous tools to quantify the precision with which a system's output carries information about its inputs, setting physical limits on a system's capacity for information transmission. In this talk I present a brief summary of the fundamental concepts underlying information theory, in the context of its application to neuronal signal processing.

A useful, well-documented MATLAB toolbox for calculating coherence and mutual information in neural systems can be found at www.chronux.org.
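
To make the core quantities concrete, here is a small toy MATLAB snippet computing the Shannon entropy and mutual information of a discrete stimulus/response pair. It is a generic illustration of the definitions, not part of the Chronux API:

% Joint probability table P(stimulus, response); rows = stimuli
Pxy = [0.3 0.1;
       0.1 0.5];
Px = sum(Pxy, 2);    % P(stimulus): responses summed out
Py = sum(Pxy, 1);    % P(response): stimuli summed out

% Entropy of the stimulus: H(X) = -sum_x p(x) log2 p(x)
Hx = -sum(Px .* log2(Px));

% Mutual information: I(X;Y) = sum_{x,y} p(x,y) log2( p(x,y) / (p(x)p(y)) )
I = sum(sum(Pxy .* log2(Pxy ./ (Px * Py))));

fprintf('H(X) = %.3f bits, I(X;Y) = %.3f bits\n', Hx, I);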

 

Information Theory and Neural Coding – Part 2, by Ashkan Golzar

Speaker: Ashkan Golzar
Contact: ashkan.golzar@mail.mcgill.ca
Collaborator: Mohsen Jamali

Following the introduction of the fundamental concepts of information theory in the previous part, in this second part we present three methods for calculating the mutual information between a stimulus and the neural signal: the direct method, the upper-bound method, and the lower-bound method. We discuss the advantages and disadvantages of each method, as well as its inherent assumptions. We also show how information theory can address central questions in the field of neural coding.
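
As one concrete example, the lower-bound estimate of the information rate can be computed from the stimulus-response coherence C(f) as I_LB = -integral of log2(1 - C(f)) df. The sketch below uses mscohere from MATLAB's Signal Processing Toolbox on a synthetic signal model; the signal, window, and parameter choices are assumptions for illustration only (Chronux provides its own multitaper coherence estimators):

fs = 1000;                             % sampling rate (Hz)
t = (0:1/fs:10)';                      % 10 s of data
stim = randn(size(t));                 % white-noise stimulus
resp = 0.8*stim + 0.6*randn(size(t));  % noisy "neural" response

% Magnitude-squared coherence between stimulus and response
[Cxy, f] = mscohere(stim, resp, hamming(512), 256, 512, fs);

% Lower bound on the information rate, in bits per second
I_lb = -trapz(f, log2(1 - Cxy));
fprintf('lower-bound information rate: %.1f bits/s\n', I_lb);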
