Machine Learning and Artists

I was in New York this past weekend for the 143rd International Audio Engineering Society (AES) Convention. The convention happens once a year and showcases the most recent technologies and developments from the world of audio. As a musician with a keen interest in music technology, I find it a great opportunity to see what's going on in the field and to look for clues about where the music industry is headed. To my surprise, the developments discussed at this event went far beyond the world of music, and I think they will affect creatives as a whole, in good ways and bad.

One of the most striking sessions at AES was the panel "Machine Learning and Music Recomposition: A Glimpse at the Future." To be honest, I did not know anything about machine learning before this session, other than that it is related to artificial intelligence. And I know that among my musician colleagues I'm not the only one at that level, because when I told them what I heard in this session, their eyes went wide.


So what is machine learning? Take Siri or Alexa: they can take the sounds of utterances and translate them into words. But how do they do it? For instance, when I say "would you too?", why doesn't Siri hear it as "would Jew too?"? The answer is that Siri is built on ever-evolving algorithms that statistically analyze language patterns and act on them. In other words, it weighs the possible word sequences and selects the one that occurs most often. Because "would you" is far more common than "would Jew", it picks "would you".
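The idea can be sketched in a few lines of Python: count how often word pairs appear in a reference text, then pick the transcription whose pairs are most common. This is a hypothetical toy, of course; real speech recognizers use vastly larger statistical models and more context than adjacent word pairs.

```python
from collections import Counter

# Tiny stand-in corpus; a real assistant learns from enormous text datasets.
corpus = ("would you like some tea would you come with me "
          "would you mind closing the door").split()

# Count how often each pair of adjacent words (a "bigram") occurs.
bigrams = Counter(zip(corpus, corpus[1:]))

def score(sentence):
    """Total bigram frequency of a candidate transcription."""
    words = sentence.lower().split()
    return sum(bigrams[pair] for pair in zip(words, words[1:]))

candidates = ["would you too", "would Jew too"]
best = max(candidates, key=score)
print(best)  # "would you too" wins: "would you" appears often, "would Jew" never
```

The recognizer never "understands" either sentence; it simply bets on the pattern it has seen most.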

In fact, for this blog post I ran a little experiment: I dictated the draft to Siri on my iPhone to see how well it would do. Strangely enough, it was very accurate, apart from a few mistakes.


Beyond machine learning, there's another concept called deep learning. Deep learning is based on "features": a feature is what we want the machine to learn. What's more interesting is that in deep learning, the algorithms choose the features, not humans. You could say the machine chooses what it wants to learn and what it does not, which is pretty wild. It gets even more complicated, because there can be multiple levels of features, each built on top of the last.
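As a rough illustration (my own toy sketch, not something shown at the panel), here is a tiny two-layer neural network. The hidden layer's weights are the first level of features, discovered by the training algorithm itself rather than designed by a person; the output layer builds a second level on top of them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR, which needs an intermediate level of features to solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)  # level-1 features (learned, not hand-picked)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)  # level 2, built on level 1

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5
for step in range(10000):
    h = sigmoid(X @ W1 + b1)    # level-1 feature activations
    out = sigmoid(h @ W2 + b2)  # final prediction combines level-1 features
    # Backpropagation: adjust every weight to shrink the error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(0)

print(np.round(out.ravel(), 2))  # typically close to [0, 1, 1, 0] after training
```

The point is not the math but who picked the features: nobody told the network what its hidden units should represent; the algorithm settled on them on its own.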

Building on these developments in machine learning, the panelists demonstrated a quick project. The task was to condense a four-minute song into a one-minute clip by chopping it down and creating seamless transitions. Both a computer and a human took "Dream On" by Aerosmith and condensed it. The results? The human won, but only by a small margin. In one part of the song, the machine actually made a noticeably more seamless transition than the human did. I think it's safe to say that machines will soon be able to handle audio editing tasks with minimal mistakes.
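The panel did not share their method in detail, but the core search can be sketched like this, assuming each second of audio has been reduced to a feature vector (here I just fabricate random vectors with one planted near-match): find the pair of far-apart moments that sound most alike, and splice the track there.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, min_cut = 240, 180  # 240 one-second frames (~4 min); must drop >= 3 min
frames = rng.normal(size=(n_frames, 12))  # fake per-second audio features
# Plant a near-identical pair so a seamless splice exists (frame 30 ~ frame 215).
frames[30] = frames[215] + rng.normal(scale=0.01, size=12)

# Search every sufficiently distant pair for the most similar-sounding one.
best, best_dist = None, np.inf
for i in range(n_frames):
    for j in range(i + min_cut, n_frames):
        d = np.linalg.norm(frames[i] - frames[j])
        if d < best_dist:
            best, best_dist = (i, j), d

i, j = best
shortened = np.vstack([frames[:i], frames[j:]])  # jump from frame i to frame j
print(best, len(shortened))  # finds (30, 215), leaving 55 frames: under a minute
```

A real system would compute spectral features from the actual audio and smooth the join with a crossfade, but the search idea is the same.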


This all sounds scary, and sure, some jobs will likely disappear as machines take on tasks that humans used to do. But I don't want to paint a dystopia. I'm optimistic that by letting machines handle the repetitive tasks, we can spend more time generating original content, making art, and finding new ways to express ourselves. Just as we already use computers, tablets, and other technology to make great art, we can use machine learning to raise the quality of our work. In other words, machines are only tools at our disposal; if we find ways to use them for our own purposes, we can reach great heights.

Have you ever used a product based on machine learning? Are you optimistic or pessimistic about the future of the arts with machine learning? Feel free to comment below!

Alper Tuzcu is a Berklee College of Music and Denison University alumnus, and a Boston-based guitarist, songwriter, and producer. His eclectic debut album 'Between 12 Waters', featuring 8 different vocalists, is available on Spotify. You can follow him on Instagram or Twitter @alpertuzcu, or visit his website at http://www.alpertuzcu.com
