Neural networks are a form of machine learning loosely modeled on the structure and function of the human brain.

They consist of nodes connected by weighted edges, a structure analogous to neurons connected by synapses.

Inputs enter the network as floating-point numbers and are propagated from node to node along those weighted edges.
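As a concrete, if tiny, illustration, here is a minimal sketch of that forward propagation for a two-layer network. The layer sizes, random weights, and sigmoid activation are arbitrary choices for the example, not details of any particular system discussed here.

```python
import numpy as np

def sigmoid(x):
    """Squash each node's summed input into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical example: 3 input nodes, 4 hidden nodes, 2 output nodes.
# Each edge carries a weight; propagation is just repeated weighted
# sums followed by a nonlinearity.
rng = np.random.default_rng(0)
w_hidden = rng.normal(size=(3, 4))   # edges from input layer to hidden layer
w_output = rng.normal(size=(4, 2))   # edges from hidden layer to output layer

inputs = np.array([0.5, -1.2, 3.0])  # floating-point inputs

hidden = sigmoid(inputs @ w_hidden)  # propagate along the weighted edges
outputs = sigmoid(hidden @ w_output)
print(outputs)
```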

The inputs and outputs can be almost anything you want: the image-recognition features in Google Photos and Apple Photos are powered by neural networks; Microsoft's Skype Translator uses neural networks to perform real-time translation; and DeepMind used neural networks to create AlphaGo, the world's strongest Go player.

To date, however, this form of machine learning has seen very little use in the field of music information retrieval. Services like Pandora and Spotify provide algorithmic music recommendation, but their underlying databases are assembled through human labor and data mining, respectively.

Music is rich in information, from low-level features such as the key and time signature of a piece up to the sociocultural context in which its lyrics were written. Musicologists work to extract this information in meaningful ways, but there is far more music in the world than there are musicologists. Automated tools that ease their efforts could be of significant benefit.

The field of music information retrieval exists to provide these tools, and it has delivered some: automated beat detection works well, and key identification can be done with some effort. More complex tasks, though, remain out of reach: no algorithm reliably identifies the genre of a piece, and no machine can tell which instrument is playing as easily as a human can.
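To give a sense of what "works well" means in practice, the sketch below uses the open-source librosa library to estimate the tempo and beat positions of a recording. The file path is a placeholder, and this is just one common approach to beat tracking, not the only one.

```python
import librosa

# Hypothetical path; any audio file librosa can decode would do.
y, sr = librosa.load("song.wav")

# Estimate the overall tempo and the frame index of each detected beat,
# then convert those frame indices to timestamps in seconds.
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

print("Estimated tempo (BPM):", tempo)
print("First few beat times (s):", beat_times[:5])
```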