How Munch Museum is using AI to give its audiences new access to the history of art

Mark Samuels, December 4, 2023
Summary:
The museum dedicated to the work of prolific Norwegian artist Edvard Munch is embracing emerging technology and developing fresh opportunities.


Munch Museum is using Artificial Intelligence (AI) to create pioneering interactive experiences for local visitors and global audiences.

The museum in Oslo holds the world's most extensive collection dedicated to the Norwegian artist Edvard Munch. With 27,000 artworks, non-art objects, and writings, parts of which are displayed across 11 galleries on 13 floors, the museum is eager to show its collection to a wider audience.

Birgitte Aga, Head of Innovation and Research at Munch Museum (MUNCH), says that’s where technology specialist Tata Consultancy Services (TCS) is helping the museum to open access to its art. The two organizations are working together on a pioneering project that uses a Machine Learning (ML) algorithm to delve into Edvard Munch’s artistic processes and allow audiences to connect with that data-led insight in creative ways:

Using AI for MUNCH creates new opportunities to preserve the collection and to present it to our audiences, and for them to engage with it, in a more relevant way. AI opens up completely new opportunities to understand the artistry of Munch and to make correlations that we never knew existed before.

Aga describes Edvard Munch as a ferociously productive artist. The thousands of artworks in the museum’s collection include 7,000 drawings and sketches that show how he tested styles and often reworked paintings, such as The Scream. The museum is keen to make this artistic process visible to the public using the power of emerging technology:

What we need to do is make our collection more relevant to people. It’s a collection that is very rarely seen and is fragile. We have digital versions of the art on our website – you can go in and look at the art, but that’s not necessarily something that audiences would choose to do. We live in a society where audiences expect experiences, rather than just objects, and we have to work continually on mediating our collection.

Bringing art to life

In an attempt to increase interactivity and develop richer experiences for audiences, the museum is keen to show how Munch produced his art. The rise of generative AI applications, such as Midjourney and DALL-E, during the past year has demonstrated the creative potential of emerging technology to the public. Now, the museum is working with TCS to use ML to exploit interest in AI and to bring Munch’s creative processes to life:

We wanted to see how we could train an ML algorithm with Munch’s drawings. So, how would Munch create a painting? Where would he start? What would he create? What kinds of lines did he use? And then, at the same time, we wanted to enable an audience to come into a room, sit down, grab a piece of paper, start drawing, and the AI would guide them to draw in the style of Munch.

The museum approached TCS to collaborate on creating a real-time drawing experience based on Munch’s collection. For the past year, TCS and MUNCH have been working together on testing pioneering AI and ML technologies.

Aga describes the user interface, which is currently at the prototype stage, as “a back projection on a transparent surface”. When a user places a sheet of paper on the interface and starts drawing, their pen marks are met with a projected line from the machine-learning algorithm in real time:
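The article does not describe the system's internals, but the interaction it outlines, where each new pen mark is answered in real time by a projected suggestion from a model trained on Munch's drawings, can be sketched in broad strokes. The sketch below is purely illustrative: the function names are invented, and the linear extrapolation stands in for whatever trained model the MUNCH/TCS prototype actually uses.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def suggest_continuation(stroke: List[Point], steps: int = 3) -> List[Point]:
    """Stand-in for the trained model: extrapolate the stroke's current
    direction. A real system would replace this with a sequence model
    trained on digitized Munch drawings to propose style-consistent lines."""
    if len(stroke) < 2:
        return []
    (x0, y0), (x1, y1) = stroke[-2], stroke[-1]
    dx, dy = x1 - x0, y1 - y0
    return [(x1 + dx * i, y1 + dy * i) for i in range(1, steps + 1)]

# Simulated interaction loop: each new pen sample triggers a fresh suggestion.
stroke: List[Point] = []
overlay: List[Point] = []
for sample in [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]:
    stroke.append(sample)
    overlay = suggest_continuation(stroke)
    # In the installation, `overlay` would be back-projected onto the paper.
print(overlay)  # -> [(3.0, 1.5), (4.0, 2.0), (5.0, 2.5)]
```

The key design point the quote implies is latency: the suggestion must refresh on every pen sample, so whatever replaces the stand-in model has to run inference in real time.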

We created the basic prototype six months ago. What we need to focus on now is the heavy AI and machine-learning development part of the project to realize the potential of this user experience.

Creating accessible experiences

Work on the project continues apace. Aga says her team hopes to test a prototype of the interface with the public through spring and summer next year. The aim is to create a beta version by September 2024, with the long-term vision of the AI-led immersive experience sitting at the heart of an exhibition of Munch’s drawing archive. In the future, she aims to explore how the initiative can be used in other museum spaces:

We’ll be looking at how the interface can become a product that travels. So, we obviously tour a lot of the exhibitions around the world. And that’s something we would probably be looking at towards the end of 2024 and the beginning of 2025.

Aga believes that this kind of forward-thinking effort is crucial to MUNCH because the museum must find ways to diversify its audiences and income streams:

We want to make this experience accessible. There’s definitely an opportunity to commercialize this technology and the machine learning behind it to deal with hard technology problems that haven't been solved yet.

She says the museum is an inherently innovative organization that looks for creative ways to explore and present art. During the past two years, the museum has formalized its research structure and has been on the lookout for innovation partners. Aga says TCS and MUNCH started having conversations as part of this process and found commonalities in their research interests:

It’s quite an unusual partnership. We didn't have a kind of brief that said, ‘this is what we want’, but it's been a collaboration that has developed during the past six months or so. This is the first test of how we can work together and we’re eager to see where else this relationship can go.

The arrangement between MUNCH and TCS allows both parties to assess joint business opportunities and explore other art-based technology use cases. Aga says her team has already learned some key lessons from the initiative. Working with AI and ML is challenging – and it becomes even tougher as you contend with the rapid pace of change:

The technology is changing almost week by week, which creates both challenges and huge opportunities. For example, we hope we'll be able to speed up some of the processes associated with our project with developments in technology that might not have been feasible a year or even six months ago.
