On March 21st, Google posted an interesting interactive Doodle dedicated to the Baroque composer Johann Sebastian Bach. Coders and designers at Google analyzed over 300 of Bach's musical pieces to create a unique software tool that can harmonize any single-voice melody you enter, in the style of the famous classical composer.
We are impressed! :)
Google Doodle – Johann Sebastian Bach AI ML Harmonizer
Key Features & Fun Facts
- The Doodle accepts a two-measure melody (single voice).
- Once you complete your melody of up to 16 notes (you can skip some slots, i.e. use rests or pauses), the auto-harmonizer works in the background to add additional voices (chords).
- If you click the amplifier box at the bottom right, you can replace the classical piano with an electronic synth sound. In our opinion, the piano version sounds much better: more realistic, more in style, more convincing.
- This is the first-ever Artificial Intelligence / Machine Learning Google Doodle. That only means we can expect more – who knows, maybe a weather-prediction machine is next? :)
- For cases where someone's PC / phone / tablet / smart TV / etc. isn't fast enough to run TensorFlow.js in the browser, the Doodle's model runs instead on Google's new Tensor Processing Units (TPUs) – yet another Doodle first!
Video: Google Doodle in action (our demo begins around 1:20)
When you mention Bach to someone today, many will immediately think of his (arguably) most famous and recognizable piece, the Toccata and Fugue in D minor (BWV 565), simply because it has a catchy, memorable intro and melody and has been popularized by various modern artists over the years.
However, as much as Bach is best known for his organ works in the form of many preludes, toccatas and fugues, he also wrote chorales, cantatas, passions, oratorios and chamber music for the instruments that were available and dominant at the time: harpsichord, violin, choirs, orchestras, and so on. Anyone who has had even a basic musical education has played at least one piece by Bach during practice. Fact!
Surely, one short article like this cannot do justice to the entire opus of the Master, which is why we encourage you to start your personal exploration with the Wikipedia page about Johann Sebastian Bach and then see how deep the rabbit hole goes. Warning: you will get lost!
Video: Story Behind Google’s Musical Chords Harmonizer Doodle
Anyway, given our basic musical education and general affection for classical music, we were impressed by this! The Doodle is, unfortunately, just a demonstration of what can be done with modern algorithms and computers: not random noise generation and "feeling lucky" hit-and-miss melodies, but educated-guess, human-like compositions that have "meaning" to us, fellow humans! :)
This brings us to more questions: what is the future of music creation, and of art in general? Is art an expression of humans because they have a need or desire to reflect on their own experience, a reaction to the environment they (we) live in, or just because? How can we distinguish true creativity when almost complete automation stands in the way? Where is the borderline (if any) in such a case?
On the other hand, we have to ask (and who will ask? that's a good question, too!) what machine-generated art will represent in the future. It is nothing new, though: computers have been used to generate art for decades, from the film industry (e.g. CGI, textures, animations) all the way down to electronic music and digital composers. But their creativity was never questioned until today, because they were always directed, selected and evolved by us, fellow humans. They were simply tools, nothing more, nothing less.
However, AI and ML algorithms nowadays offer much greater power in generating art that is instantly acceptable and pleasant. Of course, you need good input data for the models, and some talented engineers, because this can easily go sideways: one wrong algorithm choice, one wrong or excessive line of code, and you end up with junk instead of art. Or maybe a masterpiece.
What is the difference between the "non-intelligent" musical score harmonizers available today and the artificial-intelligence approach? How do they work at all, you might ask! In general, both accomplish the same task, but through different means. In the first case, we have to use our ears, and the process is manual almost all the way to the end. The AI-based approach offers greater flexibility and creativity in human-acceptable terms. For example, if we play the note D in the D minor scale, basic harmony theory gives us a plethora of chords we can construct over it: D minor, D major, Dmin7, Dsus4... (the list is very long and complex!). With traditional methods, the choice was always ours – computers only generated suggestions based on music theory.
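To make that "plethora of chords" concrete, here is a tiny sketch of what a traditional, theory-only harmonizer does: enumerate every chord (from a small, hand-picked set of qualities we chose for illustration – this is our own toy code, not the Doodle's) whose pitch classes contain a given melody note. The human then picks from the list.

```python
# Toy candidate-chord enumerator: list chords that contain a melody note.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# A few chord qualities, as semitone offsets from the chord root.
QUALITIES = {
    "maj":  (0, 4, 7),
    "min":  (0, 3, 7),
    "min7": (0, 3, 7, 10),
    "sus4": (0, 5, 7),
}

def chords_containing(melody_note):
    """Return every (root + quality) whose pitch classes include melody_note."""
    target = NOTES.index(melody_note)
    found = []
    for root in range(12):
        for name, offsets in QUALITIES.items():
            if target in {(root + o) % 12 for o in offsets}:
                found.append(NOTES[root] + name)
    return found

candidates = chords_containing("D")
print(len(candidates), "candidate chords contain D:", candidates)
```

Even with only four chord qualities, the note D appears in more than a dozen candidate chords (Dmaj, Dmin, Dmin7, Dsus4, Gmaj, Bmin7...), which shows why note-by-note harmonization still leaves all the real decisions to a human.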
But now let's introduce intelligence and intuition into the equation: if we know the preceding and succeeding notes in the melody, we can create more coherent harmonic movement than simple note-by-note voicing. And the most shocking part of it – without any human intervention! That's where the AI/ML part comes in handy: it can learn rules (and style) from existing pieces and then apply that knowledge to newly generated material it knew nothing about, because it didn't exist before. It's an educated-guess approach, but there's more to it, of course.
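The difference context makes can be shown with a deliberately crude sketch (our own assumption of the general idea, not the Doodle's actual model, which is a trained neural network): instead of harmonizing each note in isolation, score candidate chords by how many notes in a small window around the current position they cover.

```python
# Toy context-aware chord picker: neighbours act as a tiebreaker.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
TRIADS = {  # pitch-class sets of a few D-minor-friendly chords (our pick)
    "Dmin": {2, 5, 9}, "Gmin": {7, 10, 2}, "Amaj": {9, 1, 4},
    "Bbmaj": {10, 2, 5}, "Fmaj": {5, 9, 0}, "Cmaj": {0, 4, 7},
}

def pick_chord(melody, i, context=1):
    """Choose a chord for melody[i] using a window of neighbouring notes."""
    window = melody[max(0, i - context): i + context + 1]
    pcs = [NOTES.index(n) for n in window]
    # The current note must be in the chord; neighbours break ties.
    legal = [c for c in TRIADS if NOTES.index(melody[i]) in TRIADS[c]]
    return max(legal, key=lambda c: sum(pc in TRIADS[c] for pc in pcs))

melody = ["D", "F", "A", "G"]
print([pick_chord(melody, i) for i in range(len(melody))])
```

With context, the opening D–F–A arpeggio is recognized as one sustained D minor harmony rather than three unrelated chords; a real model like the Doodle's learns far richer versions of this from hundreds of Bach pieces instead of a hand-written score.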
The Doodle sketch is limited to 2 measures / 16 notes. This is done for several reasons, the most important being to keep melody complexity manageable across many devices and to shorten the analysis time for automatic harmonization. We also think this approach is very clever, because the algorithm gets good and relatively "simple" data to work with. It would be much harder to create a completely "random" melody generator that is both acceptable to humans and original at the same time.
Musical notes are like alphabet letters: there are countless ways to combine them, but only a small subset of all possible combinations forms meaningful words in a given language. The same rule applies in music, except there are vastly more combinations to generate. Yes, strictly speaking there are only 7 named notes: C, D, E, F, G, A, H (B); however, you also have octaves, scales, pitch alterations (#/b), chords, various tempos and measures, progressions, instrument arrangements, velocities, imperfect human interpretations... In short, music is more realistically compared to human speech than to a simple alphabet of words and sentences.
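A quick back-of-the-envelope calculation shows how fast even the Doodle's tiny input space explodes. The numbers below are our own toy assumptions (not Google's): suppose each of the 16 slots in the two-measure input holds either a rest or one of 24 pitches (two chromatic octaves).

```python
# Back-of-the-envelope: how many distinct two-measure inputs exist?
SLOTS = 16                   # 16 note positions in two measures
CHOICES_PER_SLOT = 24 + 1    # 24 assumed pitches, or a rest

melodies = CHOICES_PER_SLOT ** SLOTS
print(f"{melodies:.3e} possible melody inputs")
```

That is on the order of 10^22 possible inputs – far too many to precompute harmonies for, which is exactly why a model that generalizes from learned style beats any lookup table here.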
Again, we haven't answered our initial question here: what is the purpose of all this? What is the meaning of computer-generated art at a higher, more intelligent level than the simple algorithmic, randomized approach we were used to so far? Honestly, it is just too early to say. We don't know. We just repeat the question, hoping our own echo will reverberate back with the wisdom of time, carrying an answer along the way.
Machines still do not have self-awareness and reasoning the way humans do. Our environment, our biological constitution, our everything creates very specific needs: we eat, we sleep, we learn, we work, we create, we evolve. What will be the goal for computers or machines to grow, create, develop and – evolve? Unless, of course (let us sound audacious and arrogant here), we humans give them purpose? We don't know that yet. Maybe, if you hard-code that part into something (or someone) and leave it running for a million years, no intelligence (organic or not) that comes after will question that motive or know the answer to it, either. It will simply be assumed, like an axiom of existence itself.
Anyway, let's get back down to earth and stop over-thinking this invention. And, just to be sure, don't throw away your old DAW software, VST plugins, musical instruments and hardware equipment just yet! It is fun to have another approach to music and creativity, for one, and it's cool and exciting, for another.
This technology can help us better understand things in our surroundings and everyday life, e.g. Bach's (or anyone else's) music on a precise, mathematical level – something that used to be reserved for music theorists and composers who spent their entire lives studying others' work. Of course, it will not directly tell us why someone enjoys a particular style while others do not; that's up to the hard-wiring of our own neural nets, otherwise known as human brains. :)
Now you can emulate your favorite composer without even knowing the mathematical rules and MIDI machinery going on behind the scenes. And who knows, maybe it will inspire you to create the grand Toccata and Fugue of the 21st century and make you famous!
6 Comments
Thanks. Nice backgrounder and a couple of nice teases and links. I await the packaging of that Music ML as an app I could turn loose on my favorite composers.
– March 22nd, 2019

Thanks! :)
– March 23rd, 2019

Doesn't work with Internet Explorer.
– March 22nd, 2019

Hi, unfortunately (or thankfully, depending on who you ask), IE is no longer actively developed (only maintained), so I doubt it supports all the JavaScript features required to run it.
– March 23rd, 2019

Fascinating. I'm going to feed in some of the chorales Bach himself harmonized, to see if it can regenerate any of Bach's bass lines. Arnold Schoenberg was teaching a harmony class and told his students, as an exercise, to take one of Bach's chorales and reharmonize it. He looked at one student's work the next day and told her that she had to do the entire harmonization, not use JSB's bass line, and she apologized, saying she must have misunderstood the assignment. An observer in that class session later asked Schoenberg if he had memorized all of Bach's bass lines, and he responded, "No, I just knew that I couldn't have written that bass line."
– March 24th, 2019

Hi, that should be a really interesting experiment, because the neural network and model behind the Doodle were trained by reconstructing missing parts. More details are available in the links at the end of the article.
– March 24th, 2019