Refik Anadol’s animations – meaningful moving parts recombine to generate art

Refik Anadol has made his name producing super-scale animations that rely on machine learning algorithms, a large cast of studio assistants, and meanings borrowed from the raw material consumed by the algorithms. His invited exhibition at the Serpentine in London, entitled Echoes of the Earth: Living Archive, takes over the whole of the Serpentine North Gallery and is totally eye-catching. The TL;DR is: go. Quickly. It closes April 7, 2024.

The images in the exhibit range from literal-seeming to highly abstract, and the animations which sequence them flow from one mode to the other. Everything is synthetic – the images are derived from applying machine learning algorithms to very large datasets of other, real images. On which more later. The results sometimes seem literal – as with this synthetic coral:

Coral-like image
Refik Anadol, Echoes of the Earth: Living Archive
Serpentine Gallery, London, 2024

Sometimes the images produced by the operation of the models are highly abstract, as in this candy-coloured image, which seems fluid-like and tissue-like, but does not obviously represent any particular thing:

Abstract multi-coloured particle-based image
Refik Anadol, Echoes of the Earth: Living Archive
Serpentine Gallery, London, 2024

The animations often look like they are driven by a fluid-dynamics simulation. They proceed at a comfortable pace. The colour palettes are harmonious, the surface lighting is gentle. Because of the palettes and palette transitions chosen, the sequences have a tendency to look “nice”.

The visuals aren’t always explicitly representational, although the models which generate them use real image captures as inputs. But even when they aren’t interpretable as specific things, the outputs all have a biomorphic flavour to them, reflecting the characteristics of their inputs. Some sequences evoke associations of clouds of organic tissues in perpetual motion and transition. As indeed real weather and real tissues are.

Sometimes the images flow from the purely abstract towards the representational, then back again. This gives the feeling of life-forms being created – and dissolving. The pace is slow enough that it feels graceful rather than frenetic.

As is the way with the current state of the art in synthetic, ML-based images of beings, there are times when the synthesis creates “errors”, as seen in this bird with two beaks, and a foot whose claw doesn’t grip the surface:

Model outputs are driven by visual similarity, not physical constraints – check the beak(s) and claws
Refik Anadol, Echoes of the Earth: Living Archive
Serpentine Gallery, London, 2024

What are we to make of this? The images being synthesised and streamed together explore the latent space of the models – creating images of possible beings. The models are based on very, very large datasets of actual images, but the model outputs are novel. Sometimes they create things which we instinctively understand could not exist in reality.

In a way, these model “errors” serve to highlight the preciousness of actual life and beings. Not all corners of this very effortfully produced, high-dimensional latent space are actually habitable. Visual representation is fundamentally a surface phenomenon, with only imperfect connections to mechanism.

Sometimes what is created is beautiful, as in this sequence of a lotus turning into a lotus:

There is treasure to be found in mining latent spaces, and Anadol and his team have certainly found some. Dreams, even deep ones, do sometimes turn to nightmares though. The potential for monstrosity, as well as beauty and impossibility, is equally latent in the technique.

The story of how the base data images are sourced, and how the resultant images are synthesised, is very much a part of the work – which is, of course, on trend. This display, which is part of the exhibit, dramatises the multitudes of base data images that went into creating the models:

The origin story is part of the work
Refik Anadol, Echoes of the Earth: Living Archive
Serpentine Gallery, London, 2024

The works are made possible by an impressive number and range of institutional collaborators, such as the Natural History Museum and the Smithsonian, who have collectively contributed millions of images from their own collections to be used as training data. Many of the works use what Anadol calls the Large Nature Model, a play on the Large Language Models (LLMs) of ChatGPT fame. (This model is, apparently, open-sourced – although, interestingly, I am not the only one who can’t find it on GitHub.) All the works seem to use NVIDIA StyleGAN2 as the underlying model framework.
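For the curious, here is what “exploring the latent space” can mean mechanically. This is a minimal, hypothetical sketch – not Anadol’s actual pipeline – of a latent-space walk of the kind a StyleGAN2-style generator supports; the generator itself is stubbed out, and the 512-dimensional latent size is an assumption borrowed from StyleGAN2’s defaults:

```python
# Hypothetical sketch of a latent-space walk. `generator` stands in for a
# trained model that maps a latent vector to an image; it is not defined here.
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between latent vectors z0 and z1."""
    z0n = z0 / np.linalg.norm(z0)
    z1n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(seed=0)
z_start = rng.standard_normal(512)  # latent size assumed, per StyleGAN2 defaults
z_end = rng.standard_normal(512)

# Nearby latents decode to similar images, so stepping between two points
# yields the smooth morphing sequences described above.
frames = [slerp(z_start, z_end, t) for t in np.linspace(0.0, 1.0, num=120)]
# images = [generator(z) for z in frames]
```

Points along such a path need not land in any “habitable” region of the space – which is one plausible account of the two-beaked bird above.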

This is the first of Studio Anadol’s installations that I’ve seen in person. In concept and style it has strong links to his blockbuster 2022–23 installation, Unsupervised, at MoMA in NYC. Unsupervised was a never-ending generative machine learning-based animation using, as its raw material, 200 existing works in MoMA’s collection. Freely contributed material and generative ML-driven animation are the common threads tying the two exhibits together.

The MoMA installation was accompanied by righteous chat about it being “on the blockchain” – but times change. The Serpentine exhibition also surfs the zeitgeist, but doesn’t mention crypto. Now the force being harnessed is the sharp wind of eco-anxiety. This is somewhat ironic, given the compute power – and energy – which must have been involved in creating the exhibition’s works. Anadol’s explanation of the intent and process behind the highly collective work – of which the Serpentine exhibition is just the first public manifestation – was launched at Davos last year: you can check out the promo here.

So? It’s an eyeful, in a good way, and it represents boss levels of effort and collaboration. But the outputs – although they are “nice” – make me melancholy. In exploring the latent corners of a many-thousand-dimensional model space built from a multitude of images of living things, what have we learned? That actual, working life is precious in its uniqueness? It’s a bold exploration, and it has certainly uncovered some beauty, but I don’t feel it has helped us, collectively, arrive anywhere. Yet.

Not OK, Google. My photos aren’t clutter.

Many of Google’s initiatives around ML and AI-augmented creativity are wonderful (more on that in another post). But even the best and brightest tech can struggle to make a good impression if it presents itself in a way that is lacking in social skills. And that’s the flavour I got from the conversation Google Photos just tried to have with me. Google Photos was trying very hard to show how clever it was, and how it was working diligently to please me, but it only succeeded in being annoying. (We’ve all been there.)

The more Google personifies itself in its self-presentation to its consumer user base, the more we expect of it – for better or worse. One doesn’t really expect an empty search box to have a lot of personality: all the magic is hidden behind the fourth wall of the screen. But when Google becomes an “I” and initiates dialogue with me, I start to expect more than it has yet learned how to deliver.

What happened? Well, it came to pass that after a period of absence, I needed to interact with Google Photos via my desktop. So I did. And what to my wondering eyes did appear but the following pile of notifications:

Google Photos notification

Google Photos’ off-key notifications

What’s not to like?

The first notification just missed the mark. That’s OK. ‘Rediscover this day’ invites me to engage with Google Photos as a place to hang out and retrospect. This is unlikely to succeed in the short term – but I don’t mind being asked.

It’s the next three notifications where the potential for harmony in our conversation totally evaporated:

  • Notifications 2 and 3 offer me the chance to look at stylised photos and collages – they tell me that Photos has done stuff I didn’t really want, when I wasn’t looking, and now it wants to take time out of my day to show it to me
  • Notification 4 offers me the chance to clear my clutter – Google insults my own images… and offers to get rid of them for me

There are reasons why each of these conversation-opening gambits is problematic. On the upside, in combination they manage to be quite funny.

Is it art if nobody’s looking?

The ‘look mum!’ notifications – offering me a collage and a ‘stylised photo’ – remind me of when my kid was in infant school. Every other day, something unasked for in the way of ‘art’ would come back from the school. Our fridge quickly got covered with ‘art’.

This didn’t make anybody happy. The kid disliked art, and didn’t much like having to take time out from better things in order to do it. For my part, I didn’t like having the ‘art’ on the fridge. The fridge is where I put my own stuff that I don’t quite know what to do with. Fortunately, my lovely childminder tipped me the wink that the right way to politely dispose of children’s ‘art’ was to take a digital photo of it.

 


Photo: pxhere.com

Which brings us nicely back round to Google’s ‘look mum!’ conversation starter about its collages and stuff. It’s stuff I don’t remember asking for, and don’t want.

Mind you, Google did not actually ask me to print out its work and put it on the fridge. Small blessing. Like my kid, pretty much all it wanted was my attention.

It’s kind of poignant. There is no schoolbag. There is no fridge. There is no conversation platform at all, really, except for the notifications channel – which I wasn’t seeing, because I was never hanging out in Google Photos, just using it as a service via its API, with my interactions driven by other apps.
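(For concreteness: “using it as a service via API” means something like the sketch below, which lists media items through the Google Photos Library REST API. This is a minimal, hypothetical example, not my actual setup; the access token is a placeholder you would obtain via an OAuth 2.0 flow with the photoslibrary.readonly scope.)

```python
# Minimal sketch of treating Google Photos as a headless service via its
# Library REST API, never visiting the app itself (or its notifications).
# ACCESS_TOKEN is a placeholder; obtain a real one via an OAuth 2.0 flow.
import requests

ACCESS_TOKEN = "ya29.placeholder"

resp = requests.get(
    "https://photoslibrary.googleapis.com/v1/mediaItems",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"pageSize": 25},
)
resp.raise_for_status()
for item in resp.json().get("mediaItems", []):
    print(item["filename"], item["mediaMetadata"]["creationTime"])
```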

So Google seems to have been busy doing this art thing for me silently all these years, but has had no way of telling me about it. Frustrating! That makes it all the more important that the conversation openers it uses in notifications really work for me: they offer the chance for us to engage directly, not mediated by another platform.

Magic vs meddling

I remember going to I/O Extended many years ago and listening, on my interaction-isolating headset, to a bunch of announcements about upcoming enhancements to Photos, whereby Google would enhance my photos in Photos if I uploaded them. Doing art behind my back when I’m not looking is a progression from this. Way back then, Google had total confidence that it was doing something I would really value. I found this mystifying at the time – and I still do. But I know I’m hypocritical about this – and the root causes of my hypocrisy are interesting.

The thing is, I don’t really mind my totally obvious, glaring faults being automagically corrected. Who wants red-eye? (Unless of course the picture actually is of a red-eyed monster – but that’s an edge case I am happy to throw overboard.) I generally accept that Photoshop’s suggestions about sharpening and level tweaking are spot on. I’d like to think I draw the line at automagic cropping of my images – but I really don’t. I actually appreciate Facebook’s increasingly subtle knowledge of how to display uploaded photos so their points of interest are visible and the composition is balanced.
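To make the distinction concrete, the corrections I happily accept are the kind a few unattended lines of image processing can perform. A sketch using the Pillow library – the filenames and parameter values are illustrative placeholders, not anyone’s actual pipeline:

```python
# Sketch of the uncontroversial, productivity-style corrections discussed
# above: normalise levels and lightly sharpen. No creative judgement involved.
from PIL import Image, ImageFilter, ImageOps

img = Image.open("holiday_snap.jpg")  # placeholder filename
img = ImageOps.autocontrast(img)      # "level tweaking"
img = img.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=3))  # gentle sharpening
img.save("holiday_snap_corrected.jpg")
```

Nothing in this pipeline decides what my photos mean – which is exactly why it doesn’t bother me.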

And here’s more seeming hypocrisy, in case – unlike me – you’re in short supply: I love, love, love my Pixel 2, which works tirelessly, near-instantly and invisibly to correct whatever faults in the universe and in myself might result in a less than satisfactory image. It just works. It’s like putting rose-tinted glasses on my photography ability. And I don’t mind one bit.

So why does Google Photos’ unwanted fridge art annoy me, while I love that my Pixel 2 makes me better than I really am? Both Photos and the Pixel 2 have similar collaboration models: the cleverness applied to my images happens without a lot of explicit interaction or negotiation. In one case, that’s fine. In the other, not so much. Why?

I think there are two interesting ways in which the Photos case is different:

  • timing – with Photos there is a lag between my initial act of creation and Photos’ contribution towards our collaboration
  • process and outcome – with Photos, what is being offered is creative collaboration rather than a productivity enhancement, but I am not sure what I get out of it if I am not actually involved in it

Timing influences our perception of causality

The fact that the Pixel 2’s assistance occurs near-instantaneously makes it seem more integrated: that’s basically just what it does. Its rose-tinted glasses seem to be always on. By contrast, the lag in Photos’ enhancement processing and art creation introduces a feeling of separation, almost alienation, between the action and the outcome.

Lag used to be normal. Back in the day, if you wanted photos you’d have to drop your film off for processing, and in due course pick it up again in the form of physical prints and developed negatives. Even if you did your own processing, it took time. But in this age of instant, lag is unusual.

Productivity and creativity augmentation need different collaboration protocols

I think of instant, automatic corrections and enhancements as a kind of productivity aid. A kind of spell-checker, or tax-form filler. (By the way, thanks for that tax-form reform thing, HMRC. It’s beautiful.)

The kind of augmentation that I want for a productivity task is something that frees me up from my own errors and limitations. I don’t necessarily want to have a conversation about it – though I might want to be asked to validate choices that the productivity agent isn’t sure about.

However, I have different needs and expectations about the type of collaboration model and conversation that works for creative applications (as I will be explaining in another post). The type of collaboration I have with Google Photos about our mutually produced artwork is a strange one.

I am not sure what inspires Google Photos to produce fridge art for me. When does it do it? Which assets does it choose to do it to? Who knows? So when it appears, it arrives disconnected from any sense of collaborative engagement and dialogue. Somewhere in the mists of history, I probably ticked a box about it, curious to see what it would do. But the rhythm of this conversation is too disjointed to feel like a meaningful dialogue.

By the time my fridge art was delivered, I’d forgotten I’d ordered it. And I didn’t have any fun making it.

Above all – don’t be rude

So I have some doubts about the way Google has gone about collaborating with me, by doing art with my photos when I am not looking. That’s one problem, and it’s an interesting one.

But there’s a different problem with my last notification, which offered to get rid of my clutter. To put it politely, I thought this astonishingly ill-conceived as a conversational gambit. Immediately after offering up a bunch of unwanted fridge art, Google Photos has the total effrontery to refer to one of my photos as ‘clutter’? Hello?

To me, this seems… unreasonably judgy. It was also funny: the photo it picked out as a piece of clutter that I should get rid of was a copy of a paper-based vacation request from my just-finished stint of contracting at Google. (You couldn’t make that up – and I didn’t.)

Organising my photos is something I might actively want to train Google Photos to do, or have it engage in dialogue with me about, while it learns and tries out various theories about what classifications I might find useful and pleasurable. But judging what is and is not clutter is a very personal thing. And based on this first attempt, I have no reason to believe that Google Photos is ready to make that judgement on my behalf in a way that reflects my own personal needs and desires.

Google Photos may well have read Marie Kondo’s The Life-Changing Magic of Tidying Up. It may also have read my vacation request and deemed it obsolete. But I think it hasn’t yet really understood me – or the book. And until it does, I am certainly not going to hand over the controls to any sort of autonomous de-cluttering service it might offer.

What use to me is a photo collection full of unwanted fridge art, while my own paperwork gets shredded?