Emotionally Aware Portraiture

Machine Intelligence Competition 2007

Simon Colton, Maja Pantic, Michel Valstar


The Machine Intelligence Competition is an annual event sponsored by Electrolux and organised by the British Computer Society as part of its SGAI International Conference on Artificial Intelligence. The competition awards a prize to the live demonstration of Artificial Intelligence software that shows the most progress towards machine intelligence. On December 11th 2007, the competition was held at Peterhouse, Cambridge. We entered our "Emotionally Enhanced Painting Fool" system, and we were lucky enough to win. Our team consisted of three researchers from the Department of Computing at Imperial College: Simon Colton, Maja Pantic and Michel Valstar (although unfortunately Maja was ill on the day of the competition). We hoped to show progress towards machine intelligence by demonstrating graphics software able to show appreciation while simulating the painting process.

We demonstrated a combination of two pieces of software. Firstly, we used software developed by Maja Pantic, Michel Valstar and other members of the vision group at Imperial to take a video sequence of someone expressing an emotion (such as smiling, frowning, looking surprised, etc.). The software then: detected the emotion; determined where the features of the face were; and found the image in the video sequence where the emotion was being expressed most strongly. This information was then passed to the second piece of software, namely The Painting Fool, which proceeded to paint a portrait of the person in the video sequence. It based the portrait on the image provided by the emotion-modelling software, and chose its art materials, colour palette and abstraction level according to the emotion being expressed. For instance, if it was told that the person was expressing happiness, it chose vibrant colours and painted in simulated acrylics in a slapdash way. If, on the other hand, it was told that the person was sad, it painted with pastels in muted colours. The Painting Fool was also able to use the facial-feature locations provided by the emotion-modelling software: when painting, it emphasised the eyes, nose and mouth in the picture, to try to capture a likeness.
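To give a flavour of the hand-over between the two systems, here is a minimal sketch of the emotion-to-style mapping described above. Everything in it is an illustrative assumption: the emotion labels, the style parameters and the function names are hypothetical, and do not come from the actual Painting Fool code.

```python
# Hypothetical sketch of the emotion-to-style mapping described in the text.
# All names and values here are illustrative assumptions, not the real system.

EMOTION_STYLES = {
    "happiness": {"medium": "acrylic", "palette": "vibrant", "stroke": "slapdash"},
    "sadness":   {"medium": "pastel",  "palette": "muted",   "stroke": "soft"},
    "disgust":   {"medium": "acrylic", "palette": "greys/greens/browns", "stroke": "fluid"},
}

def choose_style(emotion, feature_boxes):
    """Pick art materials from the detected emotion, and record the
    facial-feature regions (eyes, nose, mouth) to paint at higher detail."""
    default = {"medium": "pencil", "palette": "neutral", "stroke": "sketchy"}
    style = dict(EMOTION_STYLES.get(emotion, default))
    style["emphasis_regions"] = feature_boxes  # emphasised to capture a likeness
    return style

# Example: the second portrait, where the sitter was smiling.
print(choose_style("happiness", {"eyes": (120, 80, 60, 20)}))
```

The point of the sketch is simply that the vision software's two outputs (an emotion label and feature locations) are enough to drive the rendering choices: the label selects materials and palette, while the feature boxes mark the regions to be painted with extra care.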

To demo the system, we produced two portraits in the (strict) fifteen-minute slot that we were assigned. Firstly, we painted Michel looking disgusted, and then we painted a volunteer from the audience (Paulo), who smiled for the camera. The demonstration went fairly well, but there were a few hiccups. In particular, our projector broke just minutes before the start of the demonstration. Also, The Painting Fool painted the first portrait pretty badly, as it missed out a whole section of the face. Finally, during the video capture of Paulo, lots of people in the audience decided to take photographs, and the flashes from their cameras affected the captured video, so that the emotion-capture software thought that Paulo was expressing anger. But in the end, we showed the software working pretty well.


There has been some interest in emotionally enhanced auto-portraiture. Here are some links to a few portraits we have done:

With respect to the competition, it is worth remembering that we were only allowed fifteen minutes for the entire demonstration, so the painting time for each portrait was only a couple of minutes. This meant that we had to use fairly sketchy styles and produce small paintings which weren't brilliant...

Here is the image that was used for the first portrait and the portrait that was painted:


Notice that the emotion-tracking software correctly identified disgust as the emotion being expressed, and that The Painting Fool chose a colour palette of greys, greens and mottled browns to enhance the emotion. It also elongated the face, as in Edvard Munch's painting The Scream, and it used a fairly fluid painting style with simulated acrylics. Notice also how the features have been emphasised to gain a likeness (of sorts). It's not a brilliant portrait, but it does show some appreciation.

Remember that the second demonstration went wrong, because of the flashes from cameras during the videoing of the sitter. Here is the image that was used for the portrait and the portrait that was painted:


In an earlier trial run, Paulo smiled for the camera under much better lighting conditions. Here is the image that was used for the portrait, and the portrait that was painted:


OK, so this is hardly a great likeness of Paulo, as the painting style is so distinctive, and his eyes have been enlarged, etc. But it is a vibrant picture, and it does appreciate the fact that Paulo was smiling for the portrait.

In training for the demonstration, we worked with a few sitters and produced some more portraits, some of which were a little better than those from the live demonstration. We have put the portraits from the training stage here:

Video Footage of Our Demonstration

We took a video of our demonstration. While the production values (!) and sound quality are not the best (in particular, you cannot see us, only the projection from the computer), it does give you an indication of the software working. The video is available in:

Also, the slides we presented during the demo (as a big 34Mb PDF file) are here:

Finally, Mike Swain from the Daily Mirror newspaper interviewed Michel Valstar about the project:


We would like to thank the British Computer Society for organising this competition, in particular Max Bramer, John Gordon, Richard Ellis and Chris Needham. We would also like to thank Electrolux for sponsoring the event and for the prize money, in particular, Susan Hargreaves. We are extremely grateful to Margarita, Monica, Stavros and Uri from the Visual Information Processing group at Imperial, for being such patient subjects in the training stage for the software, and helping us to get the Emotionally Aware Painting Fool match fit!