Spanish telecommunications engineers have devised a new method to generate sheet music from the sounds of individual notes, which the system can identify regardless of musician, instrument, or venue.
The research team, from the University of Jaen in Jaen, Spain, describes an automated system that determines the spectral pattern of an instrument’s musical notes. These patterns are used to build a harmonic dictionary, which is paired with a pattern-matching algorithm. The system then identifies each note and converts the information into a readable format. Given a WAV file of a recording, the software can produce a MIDI transcription.
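The paper itself is not reproduced here, and the authors' actual algorithm is based on spectral-pattern matching, not the simple pitch tracker below. Still, the overall WAV-to-MIDI idea — cut the audio into frames, estimate which note sounds in each frame, and map it to a MIDI note number — can be sketched roughly like this (all names and the autocorrelation pitch estimate are illustrative stand-ins, not the authors' method):

```python
import numpy as np

A4_MIDI, A4_HZ = 69, 440.0  # MIDI reference: note 69 = A4 = 440 Hz

def estimate_f0(frame, sr):
    """Naive autocorrelation pitch estimate (a stand-in for the
    authors' harmonic-dictionary matching)."""
    frame = frame - frame.mean()
    # Autocorrelation at non-negative lags only.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Search lags corresponding to roughly 80..1000 Hz.
    lo, hi = int(sr / 1000), int(sr / 80)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

def transcribe(samples, sr, frame_len=4096):
    """Return a list of (frame_index, MIDI note number) pairs."""
    notes = []
    for i in range(0, len(samples) - frame_len, frame_len):
        f0 = estimate_f0(samples[i:i + frame_len], sr)
        # Convert frequency to the nearest equal-tempered MIDI note.
        midi = int(round(A4_MIDI + 12 * np.log2(f0 / A4_HZ)))
        notes.append((i // frame_len, midi))
    return notes
```

A real transcriber would also detect note onsets and durations before writing the MIDI events; this sketch only labels fixed-length frames.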
Automatic music transcription could help musicologists analyze sound samples, recover musical content and separate varying audio sources, according to Julio José Carabias, co-author of the paper and a researcher from the Department of Telecommunications Engineering at the University of Jaen.
The method’s details were published in _IEEE Transactions on Audio, Speech, and Language Processing_. The system is adaptable, meaning it can interpret any instrument, from dulcimers to didgeridoos. As of now, it only works for one instrument at a time, but the researchers think the method can be scaled to include many instruments playing at once. Other musical transcription devices use databases and are trained to recognize specific notes — much like a spectrometer is trained to recognize the spectra of certain chemical compounds. But the Spanish device learns on its own, by creating its own dictionary.
A sound spectrum is a representation of a sound in terms of the amount of vibration at each individual frequency. The distribution of a note’s harmonic energy defines its spectral pattern. Using that information, the system builds a dictionary of sounds. It can then identify notes even when the instrument, musician, style of music, or recording conditions vary.
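The idea of a spectral pattern as a harmonic-energy distribution can be illustrated with a short sketch: take a frame's magnitude spectrum, read off the energy at the first few harmonics of the note's fundamental, and normalize. Stored patterns form the dictionary, and an incoming frame is labeled with the closest entry. This is a loose interpretation of the concept, not the authors' implementation, and every function and variable name here is invented for the example:

```python
import numpy as np

def spectral_pattern(frame, sr, f0, n_harmonics=8):
    """Normalized energy distribution over the first harmonics of f0."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    energies = []
    for k in range(1, n_harmonics + 1):
        # Energy in the FFT bin nearest harmonic k * f0.
        bin_idx = np.argmin(np.abs(freqs - k * f0))
        energies.append(spectrum[bin_idx] ** 2)
    energies = np.array(energies)
    return energies / energies.sum()

def best_match(frame, sr, dictionary):
    """Label a frame with the dictionary note whose pattern is closest."""
    best, best_dist = None, np.inf
    for note, (f0, pattern) in dictionary.items():
        dist = np.linalg.norm(spectral_pattern(frame, sr, f0) - pattern)
        if dist < best_dist:
            best, best_dist = note, dist
    return best
```

Because the dictionary is built from the recording's own notes rather than a pre-labeled database, this mirrors the article's point that no prior training corpus is required.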
“Another advantage of this method is that it does not require prior training with a musical database,” Carabias said.