I wish this session could have been 3 hours instead of 1! The overarching theme was how artificial intelligence can intersect with art, how we respond to that in a human way (the tensions, fears, etc.), and how we distinguish and define who gets credit for the creations. The session was organized like this:
- Each person introduced themselves and their work
- Then discussed how AI is currently intersecting with art, the law, and neuroscience, and where it is likely to go in the future
- Spoiler alert: it's not going anywhere; it's only moving forward, so we should think about it and learn from it
Jessica Fjeld was the moderator for the panel and introduced some questions and context, as well as a model for how AI-generated artwork functions. Today, there are programs capable of producing art that is indistinguishable from art made by humans without the aid of technology.
Some contextual references presented to frame this discussion:
One argument is that generative art has been produced for centuries. An example is the Mozart Dice Game, which was published in 1792 and used the roll of two dice to randomly select small sections of music to compose an entire piece. While the claim that Mozart was the first to do this seems to be unprovable, the name stuck. You can download an app on your phone called Mozart Dice Game (it costs $1.99) and see the algorithm play out before your eyes. This raises a few questions:
Who is the author of the music created with the app on your phone?
- It couldn't be your phone, that's just the tool
- Is it the programmer who developed the app?
- Is it the user of the phone (i.e., you)?
- Is it Mozart (or another composer) from the 1700s who first defined the process of using dice to create music?
The point being, the computer or machine operates at the behest of many humans.
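The dice procedure described above can be sketched in a few lines of Python. The table contents here are placeholders, not the actual score: the 1792 game is commonly described as indexing 11 precomposed measures per slot of a 16-measure minuet by the sum of two dice.

```python
import random

# Placeholder measure table: one entry per dice sum (2..12) for each
# of the 16 measure slots. The real game maps these IDs to measures
# of printed sheet music.
NUM_MEASURES = 16
MEASURE_TABLE = [
    {roll: f"measure_{slot}_{roll}" for roll in range(2, 13)}
    for slot in range(NUM_MEASURES)
]

def roll_two_dice():
    """Sum of two six-sided dice, giving a value from 2 to 12."""
    return random.randint(1, 6) + random.randint(1, 6)

def compose():
    """Pick one precomposed measure per slot using dice rolls."""
    return [MEASURE_TABLE[slot][roll_two_dice()] for slot in range(NUM_MEASURES)]

piece = compose()
print(piece)
```

Each run yields one of 11^16 possible pieces, which is why the result sounds "composed" even though the selection is random: every candidate measure was written by a human in advance.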
This taxonomy chart is from Jessica Fjeld and Mason Kortz's A Legal Anatomy of AI-generated Art: Part I and was presented in the session:
Sarah Newman is currently an artist and researcher at MetaLAB at Harvard and has a background in philosophy and fine art. She explores difficult philosophical concepts through her work and installations. One of her installations was at SXSW and I went to it after the session. It's called The Future of Secrets, here's how it works:
- you type your secret on a computer in the room (hidden characters, so your secret remains anonymous)
- once you enter your secret, a small slip of paper with someone else's secret prints out of a machine
- there is a projection of the secrets on the wall
- there are headphones in the room playing the secrets aloud in automated voices, both typically masculine and typically feminine
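The exchange mechanic described above can be sketched as a simple store-and-swap: each new secret is saved anonymously, and a previously entered secret is returned in trade. This is a hypothetical sketch of the logic, not Newman's actual implementation, and the seed secret is invented for illustration.

```python
import random

class SecretExchange:
    """Anonymously trades each submitted secret for an earlier one."""

    def __init__(self, seed_secrets):
        # Seed with at least one secret so the very first visitor
        # still receives something in return.
        self.secrets = list(seed_secrets)

    def submit(self, secret):
        """Store the new secret; return a random earlier secret."""
        returned = random.choice(self.secrets)
        self.secrets.append(secret)
        return returned

exchange = SecretExchange(["I never learned to swim."])
received = exchange.submit("I still sleep with a nightlight.")
print(received)
```

Note that the returned secret is chosen before the new one is stored, so a visitor can never get their own secret back.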
A little bit more about the installation:
- It was inspired by the relationship between humans and machines
- She noticed an uncanny phenomenon: people connect with someone else's secrets and apply meaning and a narrative to them. Some even become convinced that the someone-else's secret that prints out is meant for them, as if the machine were a psychic, or as if the installation somehow connected to their mobile device upon entry, learned things about them, and produced a meaningful secret that wasn't their own but that they could relate to. People believe the machine knows more than it does.
Some things I asked Sarah after the panel ended:
- Did she have any expectations for how people would interact with and react to the installation? Her answer: not really, though she did expect people would be more hesitant or uncomfortable about sharing a secret. In hindsight, she realized that the people entering the room were volunteering, so their willingness made sense. She didn't expect people to attach a meaningful narrative to the secrets they received.