
You might not sound like the real deal, right? Now imagine your rendition using auto-tune software. Better, sure, but the result is still your voice. What if there was a way to turn your voice into something like a violin, or a saxophone, or a flute? Google Research’s Magenta team, which has been focused on the intersection of machine learning and creative tools for musicians, has been experimenting with exactly this. The team recently created an open-source technology called Differentiable Digital Signal Processing (DDSP). DDSP is a new approach to machine learning that enables models to learn the characteristics of a musical instrument and map them to a different sound. The process can lead to so many creative, quirky results. Try replacing a cappella singing with a saxophone solo, or a dog barking with a trumpet performance. The options are endless, and so are the sounds you can make.

This development is important because it enables music technologies to become more inclusive. Machine learning models inherit biases from the datasets they are trained on, and music models are no different. Many are trained on the structure of Western musical scores, which excludes much of the music from the rest of the world. Rather than following the formal rules of Western music, like the 12 notes on a piano, DDSP transforms sound by modeling frequencies in the audio itself. This opens up machine learning technologies to a wider range of musical cultures. We created a tool called Tone Transfer to allow musicians and amateurs alike to tap into DDSP as a delightful creative tool.
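To make "modeling frequencies in the audio itself" a little more concrete, here is a minimal sketch of the idea in Python. It is a toy illustration, not the Magenta DDSP library: the `harmonic_synth` function and its fixed 1/k harmonic roll-off are assumptions made up for this example, whereas a trained DDSP model learns the per-harmonic amplitudes and filtered noise of a target instrument from data, and then drives them with the pitch and loudness it extracts from your recording.

```python
# Toy sketch of DDSP-style tone transfer (not the Magenta DDSP library):
# take per-sample pitch (f0) and loudness controls, as would be extracted
# from a singer's voice, and render them with a simple harmonic synthesizer.
import numpy as np

SAMPLE_RATE = 16000

def harmonic_synth(f0_hz, amplitude, n_harmonics=20, sample_rate=SAMPLE_RATE):
    """Render audio from per-sample pitch and loudness controls (hypothetical helper)."""
    # Instantaneous phase is the cumulative sum of per-sample frequency.
    phase = 2.0 * np.pi * np.cumsum(f0_hz) / sample_rate
    audio = np.zeros_like(f0_hz)
    for k in range(1, n_harmonics + 1):
        # Fixed 1/k roll-off for illustration only; a trained model would
        # predict these per-harmonic amplitudes to match the target timbre.
        harmonic_gain = 1.0 / k
        # Silence harmonics above the Nyquist frequency to avoid aliasing.
        audible = (k * f0_hz) < (sample_rate / 2.0)
        audio += audible * harmonic_gain * np.sin(k * phase)
    return amplitude * audio

# Stand-in controls: a one-second glide from A3 to A4 with a gentle fade-out.
n = SAMPLE_RATE
f0 = np.linspace(220.0, 440.0, n)      # pitch curve in Hz
loudness = np.linspace(1.0, 0.2, n)    # loudness envelope
audio = harmonic_synth(f0, loudness)
print(audio.shape, audio.dtype)
```

Swapping the hand-written roll-off for harmonic amplitudes learned from, say, saxophone recordings is what lets the same pitch and loudness curves come out sounding like a different instrument, without any reference to notes or scores.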
