Facebook Researchers Used AI To Create A Universal Music Translator
Is Facebook pumping up the volume on what AI can mean to the future of music? You can decide after having a look at what Facebook AI Research scientists have been up to.
A number of sites, including The Next Web, have reported that the researchers unveiled a neural network capable of translating music from one style, genre, and set of instruments to another.
You can check out their paper, “A Universal Music Translation Network,” by Noam Mor, Lior Wolf, Adam Polyak, and Yaniv Taigman of Facebook AI Research.
A video of the authors’ supplementary audio samples lets you hear what they did with material ranging from a symphony and a string quartet to sounds of Africa, Elvis and Rihanna tracks, and even human whistling.
In one example, they said they converted the audio of a Mozart symphony performed by an orchestra into audio in the style of a pianist playing Beethoven.
Basically, a neural network has been put to work to change the style of music. Listening to the samples, one wonders how the AI figures out how to carry the music from one style to another.
Does it involve matching pitch? Memorizing musical notes? Greene, reporting for The Next Web, said no: the approach is an “unsupervised learning method” using “high-level semantics interpretation.” You could say, Greene added, that “it plays by ear.” The method is unsupervised in that it does not rely on supervision in the form of matched samples between domains or musical transcriptions, the team said.
Greene also translated the jargon, explaining that this is “a complex method of auto-encoding that allows the network to process audio from inputs it’s never been trained on.”
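To make that description concrete, here is a minimal toy sketch of the idea the quote points at: one encoder shared across all musical domains produces a domain-agnostic code, and a separate decoder per target domain renders that code back into audio. This is only an illustration of the structure; the names, dimensions, and linear maps are assumptions for the sketch, and the actual system uses far larger WaveNet-style networks trained with additional losses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes -- stand-ins for real audio features and latents.
AUDIO_DIM, LATENT_DIM = 64, 8
DOMAINS = ["symphony", "piano", "whistling"]

# One encoder shared by every musical domain...
W_enc = rng.standard_normal((LATENT_DIM, AUDIO_DIM)) * 0.1
# ...and one decoder per target domain.
W_dec = {d: rng.standard_normal((AUDIO_DIM, LATENT_DIM)) * 0.1
         for d in DOMAINS}

def encode(audio):
    """Map audio features to a domain-agnostic latent code."""
    return np.tanh(W_enc @ audio)

def translate(audio, target_domain):
    """Auto-encode: shared encoder in, target domain's decoder out."""
    return W_dec[target_domain] @ encode(audio)

# A clip from any source -- even one never seen in training -- is squeezed
# through the shared latent code, then rendered by, say, the piano decoder.
clip = rng.standard_normal(AUDIO_DIM)
piano_version = translate(clip, "piano")
print(piano_version.shape)  # (64,)
```

Because the latent code is forced to be shared across domains, the encoder cannot simply memorize source material, which is one intuition for why unseen inputs can still be translated.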
In the bigger picture, this attempt to translate styles and instruments is another sure sign of AI and music intersecting in ways that could change the dismissive view of “machine” music as inferior and canned.