Mind The Music


(See also this article in Electronic Times)

Mind the Music was a project carried out at the University of Glasgow. The participating departments were the Department of Electronics & Electrical Engineering, the Department of Psychology, and the Department of Music. The goal of the project was to develop digital signal processing algorithms to analyse and classify the human electroencephalogram (EEG) while a subject was listening to or imagining a piece of music. In effect, we wanted to see whether we could build a system able to detect what someone is thinking about.

The EEG is a measurement of the voltages on the surface of the human scalp. These voltages are generated by the activity of the billions of neurones that make up the brain; they are of the order of microvolts, so special sensors are needed to measure them. In this project we used a 128-channel EEG system supplied by Electrical Geodesics Inc.

In the experimental phase of this project we recorded the EEG of several subjects while they listened to simple arpeggios and tones. We also asked the subjects to imagine hearing those same sounds and, at different (known) times, to perform a simple mental task (such as counting backwards from 100). These recordings were digitised and stored for later processing.

In the processing phase of this project our goal was to classify the recorded EEG data automatically. In other words, given a subject's recorded EEG, could we determine whether the subject was listening to music, imagining music, or performing the counting task?

To do this we employed a neural network classifier. First, however, we had to pre-process the raw EEG data, as the amount of data (128 channels sampled at 50 Hz over several seconds) was far too large to present directly to a standard neural classifier. We therefore parameterised the raw data by estimating a multichannel autoregressive model over a sliding time window, which reduced the number of free parameters by a factor of more than 1000. The autoregressive model parameters were then presented to the network for classification, with a portion of the recordings used to train the network by the standard back-propagation algorithm.
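To make the pipeline concrete, here is a minimal sketch of the idea in Python. The window length, model order, channel count, network size, and the synthetic "recordings" are all illustrative assumptions, not the project's actual settings or data; scikit-learn's MLPClassifier stands in for the project's neural classifier, and the multichannel autoregressive fit is a plain least-squares estimate rather than whatever estimator was used in the original work.

```python
# Sketch only: fit a multichannel AR model per sliding window, flatten the
# coefficients into a feature vector, and classify the features with a small
# neural network. All shapes and parameter values here are made up.
import numpy as np
from sklearn.neural_network import MLPClassifier


def fit_mvar(window, order=2):
    """Least-squares fit of a multichannel autoregressive model.

    window : (samples, channels) array for one sliding-window segment.
    Returns the AR coefficient matrices flattened into a feature vector.
    """
    T, C = window.shape
    # Each row of Z holds the `order` previous samples of every channel.
    Z = np.hstack([window[order - k:T - k] for k in range(1, order + 1)])
    Y = window[order:]
    A, *_ = np.linalg.lstsq(Z, Y, rcond=None)   # shape (order*C, C)
    return A.ravel()


def windowed_features(eeg, win=100, step=50, order=2):
    """Slide a window over a (samples, channels) recording and return one
    AR feature vector per window position."""
    return np.array([fit_mvar(eeg[s:s + win], order)
                     for s in range(0, eeg.shape[0] - win + 1, step)])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_channels, fs = 8, 50   # far fewer channels than the real 128-channel system
    # Fabricated stand-ins for the three conditions: listen / imagine / count.
    features, labels = [], []
    for label in (0, 1, 2):
        for _ in range(20):
            t = np.arange(10 * fs) / fs
            # Toy signal whose spectral content depends on the condition label.
            sig = np.sin(2 * np.pi * (4 + 3 * label) * t)[:, None]
            eeg = sig + 0.5 * rng.standard_normal((t.size, n_channels))
            features.append(windowed_features(eeg).mean(axis=0))
            labels.append(label)
    X, y = np.array(features), np.array(labels)

    # Small multilayer perceptron trained by back-propagated gradients.
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    clf.fit(X[::2], y[::2])                      # train on half the examples
    print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```

The dimensionality reduction comes from the AR step: each window of raw samples is replaced by a fixed-size block of model coefficients, so the classifier sees a short feature vector regardless of how long the recording is.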

The results we obtained were very encouraging: in some cases we achieved classification success rates of nearly 90%.

The project led to a successful Ph.D. thesis (Dr. Alex Duncan) and was supported in part by the U.K. Science & Engineering Research Council.