Why is emotion important in computer music?

When I talk to people about my research and tell them that I study how to make computers create music that can express emotions, I always get the same reactions. Here are the top five:

  1. Computers can’t express emotions because they have no soul.
  2. Well, but when you make computer music there’s still a human creating the music, right?
  3. A computer program can’t feel emotions!
  4. Humans draw heavily on their experiences to write music; how can a computer do that?

And the most dreaded question by any researcher:

  5. What is the point?

To answer this huge question, let me start from the top: it is true that computers have no soul (as far as we can tell, at least), yet I’m also still waiting for evidence that we have one. Even assuming that we do, over the last 40 years people have been creating computer music with some very impressive results.
The reason we’re able to do so is that music is highly structured: each genre is defined by rules and stylistic conventions, and it is this structure that lets us formalize and categorize music.

So music is not only soul, but brain too.
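
To make that concrete, here is a deliberately naive sketch of rule-based composition (in Python; the “genre rules” are toy assumptions of mine, not those of any actual system): even a couple of simple constraints are enough for a program to produce well-formed, if boring, melodies.

```python
import random

# Toy "genre rules": stay in C major, start and end on the tonic,
# and move only by step or small leap. These are illustrative
# assumptions, not the rules of any real composition system.
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def compose_melody(length=8):
    melody = ["C"]  # rule: start on the tonic
    while len(melody) < length - 1:
        prev = C_MAJOR.index(melody[-1])
        # rule: the next note is at most a third away from the current one
        candidates = C_MAJOR[max(0, prev - 2):prev + 3]
        melody.append(random.choice(candidates))
    melody.append("C")  # rule: resolve back to the tonic
    return melody

print(compose_melody())  # e.g. ['C', 'E', 'D', 'F', 'E', 'G', 'E', 'C']
```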

Yet, in most cases, when you listen to computer music it can feel, for lack of a better word, “artificial”.

In some ways the second remark is quite true: after all, these programs have to be written by some human, unless we imagine the hypothetical emergence of a program that creates music out of nowhere.
Yet what people usually mean when they say this is that I, the composer, use the computer as a tool to write, develop, or render my music, and that is completely wrong. What I create are systems capable of independent music composition, requiring no input from me at all.

Of course these “computer composers” are also part of me, in a sense.

They act upon knowledge that I give them, although it’s important to note that there is a lot of research into having programs learn to make music by themselves, generally from a given corpus of pieces; the sketch below shows the idea in its simplest form.
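
Here is a minimal sketch of that corpus-based approach: a first-order Markov chain that learns note-to-note transitions from a (made-up) toy corpus and samples new melodies from them. Real research systems use far richer models, but the principle is the same: the program’s knowledge comes from existing pieces rather than from me.

```python
import random
from collections import defaultdict

# A toy training corpus; real systems would learn from actual pieces.
corpus = [
    ["C", "D", "E", "C", "G", "E", "C"],
    ["E", "F", "G", "E", "D", "C"],
]

# Learn the transitions: for each note, collect every note that
# followed it in the corpus (duplicates encode the distribution).
transitions = defaultdict(list)
for melody in corpus:
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)

def generate(start="C", length=8):
    note, out = start, [start]
    for _ in range(length - 1):
        note = random.choice(transitions[note])  # sample the learned distribution
        out.append(note)
    return out

print(generate())  # e.g. ['C', 'G', 'E', 'C', 'D', 'E', 'C', 'G']
```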

It is true that computers can’t feel emotions, at least not in the way we usually understand emotions. Yet the beauty of Affective Computing (the field that deals with recognizing, interpreting, processing, and simulating human emotions) is that we do have emotional relationships with our computers.

Who hasn’t shouted at their PC when it got stuck?

Just as we have emotions toward things that probably have none themselves, those things can influence us positively: User Experience studies are all about finding out how to create programs that make us feel good when we use them.
Music can definitely be a medium through which we express emotions, even if the composer is a computer.

What I think is important to understand, when we compare a human composer to a computer, is that we often hold the misconception that the way we compose music is the right way, and that computers should try to emulate it.
I dare you to pick two random artists and ask them about their creative process; I can assure you that they’d tell you two different things.

Why? Because we’re all different. So why do we expect computers to do things the way a human would?

Finally: what is the point?

If we can find out how computer music can express emotions, we are going to learn more about music, about the way we experience it and, ultimately, about ourselves.

And that, I think, is as good a goal as there can be.


PS: sorry for the lack of references; if you want to know more about any of these topics, leave a comment and I’ll elaborate.
