Get a look at the technology behind Google’s touchscreen AI synth [Watch]
Google has unveiled a touchscreen hardware synth that puts the company’s machine learning research directly into musicians’ hands. Unlike traditional synths, which combine waveforms to generate sound, the AI-assisted instrument uses Google’s NSynth machine learning technology to “interpret” a range of source sounds and generate entirely new ones.
NSynth enables Google’s synth to encode each sound as a series of numbers, mathematically blend those numbers into a novel series, and then convert the result back into audio. The output is genuinely new sound rather than a simple layering of the originals. Examples of the synth’s uncanny ability to create unique audio include a car engine combined with a sitar, and a bass sound paired with thunder, among various others.
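The “blend the numbers” step described above can be illustrated with a toy sketch. The real NSynth model uses a learned neural autoencoder to produce and decode these number series (embeddings); the tiny vectors and the `interpolate` helper below are purely hypothetical stand-ins to show the core idea of mixing two sounds in number space.

```python
import numpy as np

def interpolate(z_a: np.ndarray, z_b: np.ndarray, t: float) -> np.ndarray:
    """Blend two embeddings: t=0 returns z_a, t=1 returns z_b."""
    return (1.0 - t) * z_a + t * z_b

# Hypothetical embeddings for two source sounds, e.g. "sitar" and "car engine".
z_sitar = np.array([0.2, -1.1, 0.7])
z_engine = np.array([1.5, 0.3, -0.4])

# Halfway between the two; a real system would decode this back into audio.
z_mix = interpolate(z_sitar, z_engine, 0.5)
```

The point is that the new sound is computed from both inputs at once, which is why the result is neither a crossfade nor a simple layer of the originals.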
Those interested can experiment with the NSynth technology, and experience this unusual kind of machine learning first-hand, via Google’s web version of the synth.
The synth’s hardware lets users blend between four sound sources on its X/Y pad, and play and sequence sounds via MIDI, while “morphing between the sound sources in real time.”
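The X/Y pad behavior can be sketched as a bilinear blend: one sound sits at each corner, and the pad position weights all four at once. The `xy_morph` helper and the corner embeddings below are illustrative assumptions, not the device’s actual firmware.

```python
import numpy as np

def xy_morph(corners, x: float, y: float) -> np.ndarray:
    """Bilinearly blend four corner embeddings by pad position x, y in [0, 1].

    corners is ((top_left, top_right), (bottom_left, bottom_right)).
    """
    (tl, tr), (bl, br) = corners
    top = (1.0 - x) * tl + x * tr
    bottom = (1.0 - x) * bl + x * br
    return (1.0 - y) * top + y * bottom

# Hypothetical embeddings for four source sounds assigned to the pad corners.
corners = (
    (np.array([1.0, 0.0]), np.array([0.0, 1.0])),   # top-left, top-right
    (np.array([0.0, 0.0]), np.array([1.0, 1.0])),   # bottom-left, bottom-right
)

# Dragging a finger across the pad recomputes the blend continuously,
# which is what "morphing between the sound sources in real time" implies.
z_center = xy_morph(corners, 0.5, 0.5)
```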
Although Google will not market its AI synth commercially, it will release the technology as an open-source download on GitHub, where users will be able to add their own features.
H/T: DJ Mag