When you can't escape an interface, it institutionalises you | Tim Murray-Browne

Grid of rendered blurred humanoid silhouettes in warm reds and cool greys, resembling softly melting self-portraits. Image: Tim Murray-Browne (Convergent Self Portraits).


What happens when we become so enmeshed with digital interfaces that they're no longer something we start and stop using, but more a persistent presence that we live with? What happens when this happens at the level of our bodies and changes how free we feel to move physically?

Hand-drawn divider of stick figure among straight lines and dots

Digital technologies can be used to transform our intentions, to narrow and focus them and to broaden and mutate them. Today we interact with that technology through conventional, designed interfaces. These are built upon layers of intellectual representations of human activity: documents, images, videos, profiles, likes, emojis, code. Every time you use software, you’re invited to represent your thoughts, your intentions, your wildest self, in terms of these structured datatypes that someone has invented and built. As a human, you need to adapt around them.

There is at least a whiff of neutrality to it all. For example, while the content on social media is personally tailored and distributed according to very secret algorithms, the boxes into which it is formatted are mostly the same for everybody. In our expressive offerings to the algorithm, we all express ourselves through the same uniform interfaces.

But an artefact of this uniformity may be a kind of mass conformity that we can’t quite see because we’re all partaking in it. Not just a conformity of ideas and ideology but of modes of expression and the importance we give them. We make little videos and we write little messages and we take photos and what doesn’t exist within this documentation can seem as unreal as things we thought but didn’t say aloud.

The tool grants us power but its way of seeing the world is projected back onto us. Through this bargain, it institutionalises us into thinking its way is the way.

Hand-drawn star divider

Back in 2011 I was making body-controlled interfaces for music – specifically the homebrew world of motion-capture unlocked by the repurposed Xbox Kinect camera.

Screenshot of the debug interface of the Impossible Alone installation by Tim Murray-Browne and Tiff Chan. It shows a bright green human silhouette and two cyan skeletal figures on a black background, indicating tracked body movement and pose matching. Surrounding panels display sliders and real-time metrics for speed, position, acceleration, and deviation, alongside abstract diagrams visualising movement relationships and system state.

Debug UI of IMPOSSIBLE ALONE. Image: Tim Murray-Browne

By 2014 I was using it to create an interactive dance performance for the stage with the dance artist Jan-Ming Lee, called This Floating World.

The Kinect unlocked full-body tracking for cheap without attaching things to the body. Tracking the whole body gave so much expressive potential. You could move freely. You didn't have to hold onto anything. It could give you skeleton tracking, a silhouette or a point cloud of a body.

This was more data than my brain could design for. I could easily code trigger zones, and parameters that go up and down as your hands go faster or slower - but these all see the body as a set of independent labelled particles moving through space. They're about body parts rather than the body as a whole. Essentially, my code was reducing the immense dynamic range of the body to these tiny little digital abstractions I'd just invented.
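To make the kind of abstraction concrete, here is a minimal sketch of the two patterns described above - a trigger zone on hand proximity and a continuous parameter driven by joint speed. The function names, the 15 cm threshold and the 5 m/s speed ceiling are illustrative assumptions, not the actual values from the work:

```python
import math

def hands_touching(left_hand, right_hand, threshold=0.15):
    """Trigger: fires when the two tracked hand joints (x, y, z in metres)
    come within a distance threshold. Threshold is an assumed value."""
    return math.dist(left_hand, right_hand) < threshold

def speed_parameter(prev_pos, curr_pos, dt, max_speed=5.0):
    """Continuous control: a joint's speed between frames, normalised to 0..1.
    max_speed is an assumed ceiling for normalisation."""
    speed = math.dist(prev_pos, curr_pos) / dt
    return min(speed / max_speed, 1.0)

# Two hand positions from a hypothetical skeleton frame, ~8 cm apart
print(hands_touching((0.1, 1.2, 2.0), (0.18, 1.21, 2.0)))  # → True
```

Note how both functions only ever see isolated labelled points - exactly the "independent particles" framing the paragraph describes.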

Now there's nothing intrinsically wrong with this. It's a lot of fun. But there's an interesting subtlety when the whole body is being tracked. If I hand you a button you can put it down. But if I make the touching together of your hands trigger a sound, you can’t escape, unless you leave or hide your hands. One day, rehearsing with Jan-Ming in the studio, she commented that she felt like the system was constraining her movement. I was trying to increase expressivity, but in some sense was having the opposite effect. What’s going on here?

Digital render of a human figure drawn as lines and dots as tracked by a Kinect, standing on a plane made of red dots, by Tim Murray-Browne

The embodied interface is a selective filter that reduces your expression into an abstract digital representation. But that relationship goes both ways. When our focus moves to the digital output, it’s the digital representation that matters, and the body is now subordinate to the representation. Whatever we do will be interpreted through the limits of that representation. Interaction projects the system's representation of the individual back onto them. We start to think through the limited range of this abstraction (abstraction as in: stripping away what we don't need for our model of the world).

I believe all interactive technology has this effect to some extent. But the more we entangle the interaction with our body, the less liberty our body has to move without consequence. It can become a kind of prison. Instead of controlling these representations as an extension of our agency, we become limited by their limitations. I think this is the fundamental mistake we make with technology: When you can’t escape it, you become institutionalised by it.

In one sense, the problem is not the interactive system but the fact that we’re fixated on its output and losing touch with all the untracked parts of ourselves.

Hand-drawn dancing stick figures

Jan-Ming and I took the body-to-paintstroke system from the dance work above and used it to create algorithmic portraits of how someone moves. We called this work Movement Alphabet.

Our drawing system extracted endpoints from the body's silhouette using a technique called pixel skeletons. It was glitchy with a real personality of its own. The relationship between movement and image was there, but with so much error and noise. Yet, in spite of this, we realised that the limiting factor was how someone feels when they’re confronted with a body-controlled interface, and the kinds of unnatural gestures that situation tends to elicit. So we focused less on the digital system and more on constructing a context to support someone in expressing themselves authentically in movement.
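The endpoint extraction can be sketched as follows: after thinning a silhouette to a one-pixel-wide skeleton, the endpoints are simply the skeleton pixels with exactly one skeleton neighbour. This is a generic reconstruction of that standard technique, not the project's actual code:

```python
import numpy as np

def skeleton_endpoints(skel):
    """Endpoints of a 1-pixel-wide binary skeleton: pixels that have
    exactly one skeleton neighbour under 8-connectivity."""
    padded = np.pad(skel.astype(np.uint8), 1)
    h, w = skel.shape
    # Sum the 8 neighbours of every pixel using shifted views of the padded array
    neighbours = sum(
        padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    return np.argwhere((skel > 0) & (neighbours == 1))

# A tiny L-shaped skeleton: the two tips are the endpoints
skel = np.zeros((5, 5), dtype=np.uint8)
skel[1, 1:4] = 1   # horizontal stroke
skel[1:4, 3] = 1   # vertical stroke
print(skeleton_endpoints(skel))  # two endpoints: (1, 1) and (3, 3)
```

On a noisy silhouette this produces spurious branches and flickering endpoints frame to frame - which is exactly the glitchy personality described above.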

We devised a journey, led blindfolded by a guide whispering in your ear, into a private pod where you’re invited to share significant memories, and then gently assisted to express these through movement. The camera is nestled in a corner and the guide has a little remote control to choose when that movement is transcribed into a portrait.

Black background with luminous pastel strokes arranged in clusters, loosely outlining a standing human silhouette and multiple fragmented traces, representing a person’s movement over time. Caption at the bottom reads: Movement Portrait of Maria / Jan Lee & Tim Murray-Browne 28 October 2016 20:53

The portrait that emerges is a fusion of our glitching algorithms, how the person moved, and when the guide pressed the button. They all look rather similar to each other - definitely a reductive representation. Each is more recognisable as our artwork than the participant they portray. But the participant is shielded from this narrow representation while they move. This removes the feedback loop and lets their mind remain focused on their human self.

When the participant emerges from the pod we hand them their printed portrait and a handwritten note of a few words they had mentioned when each one was taken. They are then led to some cushions to rest for ten minutes.

I found people to be much more attached to their portrait than if they had stood in front of a projector and moved with a feedback loop.

Hand-drawn stick figure with arrows going to and from a square box

Feedback loops are essential for learning (and pretty much the definition of interaction). But learning can also be toxic, like institutionalisation, like learned helplessness. Today, we have learnt how to express ourselves through little pieces of structured data. For many of us, the pursuit of our goals in life seems to depend upon doing this again and again.

I think AI-mediated interaction will change this. I don’t just mean talking to AI, as many of us are doing today, but bespoke AI-generated interfaces, both visual and embodied. Instead of working with representations designed by a human mind, AI forms its own messy representations statistically from whatever data we set it upon.

So, can we humans stop needing to think through the abstract realm of the machine, and instead have the machine think through the messy reality of the human?

Dancer: Catriona Robertson.

In 2021 I explored this idea with artist and AI researcher Panagiotis Tigas in the project Sonified Body. I wanted to create a dance-to-sound system and to use machine learning to escape my previous experiences of trapping the expressive body behind a digital materialisation of my own preconceptions of what a body is, how it moves, and how that might translate into sound.

We wrote some papers if you’re interested in all the detail, but the summary of it is this:

Digital render of a human figure drawn as lines and dots as tracked by a Kinect, standing on a plane made of red dots, by Tim Murray-Browne

Given the 25 joints of the Kinect skeleton representation (in 3D space so that’s 75 numbers), we train a model to find a way to compress these into 16 numbers and then decompress them back into the original skeleton. We trained the system on around 15 hours of recordings of me dancing. In other words, we’re saying: given what you can see in these recordings, find a way of representing this body as accurately as you can in just 16 parameters.

This is still a process of reduction. But we’re not saying what to reduce the body to or how to reduce it. We’re not saying which parts of the body are more important, nor introducing concepts like ‘upright’ or ‘open’. The representation emerges rather than being designed. It's called a latent representation. And while 16 degrees of freedom is a fraction of how the body can move, it’s a huge number to be simultaneously jostling with a computer. But it's not something you think about, or can really think about – you just play and get a feel for it.
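The compress-decompress structure described above is an autoencoder. Here is a minimal sketch of its shape - 75 skeleton numbers squeezed through a 16-dimensional latent bottleneck and back - using random placeholder weights where training on the 15 hours of dance recordings would learn real ones. The single linear layer per side and the tanh nonlinearity are illustrative assumptions, not the architecture from the papers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shapes from the text: 25 Kinect joints × 3 coordinates = 75 inputs,
# compressed to a 16-dimensional latent vector.
D_IN, D_LATENT = 75, 16

# Random placeholder weights standing in for what training would learn
W_enc = rng.normal(scale=0.1, size=(D_IN, D_LATENT))
W_dec = rng.normal(scale=0.1, size=(D_LATENT, D_IN))

def encode(skeleton_frame):
    """Compress a flattened skeleton frame (75,) to a latent vector (16,)."""
    return np.tanh(skeleton_frame @ W_enc)

def decode(latent):
    """Decompress a latent vector (16,) back to a skeleton frame (75,)."""
    return latent @ W_dec

frame = rng.normal(size=D_IN)   # one flattened skeleton frame
z = encode(frame)               # the 16 numbers the dancer actually drives
recon = decode(z)
print(z.shape, recon.shape)     # (16,) (75,)
```

Training would minimise the reconstruction error between `frame` and `recon`; the 16 numbers in `z` are the emergent latent representation the text describes.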

I hooked these 16 numbers up to a drum kit synth and some off-the-shelf effects. I did this quickly without thinking of musical ideas, aiming to keep the sound a raw and direct rendering of the representation. Then we invited dancers in to have a play.
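Hooking latent numbers to a synth is, at its simplest, a rescaling job: each of the 16 dimensions gets squashed into a 0..1 control range and assigned to a parameter. This is a generic sketch of that mapping; the parameter names and ranges are hypothetical, not the actual patch:

```python
def latent_to_synth(z, z_min, z_max):
    """Rescale each latent dimension into 0..1 and assign it to a
    (hypothetical) synth parameter, clamping out-of-range values."""
    names = [f"param_{i}" for i in range(len(z))]  # stand-ins for e.g. kick pitch, reverb mix
    norm = ((v - lo) / (hi - lo) if hi > lo else 0.0
            for v, lo, hi in zip(z, z_min, z_max))
    return dict(zip(names, (min(max(v, 0.0), 1.0) for v in norm)))

print(latent_to_synth([0.5, -1.0], [-2.0, -2.0], [2.0, 2.0]))
# → {'param_0': 0.625, 'param_1': 0.25}
```

The raw, direct quality described above comes from keeping this mapping as thin as possible - no musical logic between the latent space and the sound.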

I was very happy with the results. For me, the system offers a kind of openness I haven't managed to achieve through traditional coding. I don’t see the dancer get lost in my own ideas of what a body is and what it does. It’s a way of avoiding the bottleneck of designed abstractions, and instead working with representations that are derived directly from the body itself. And to be clear, we're still using the Kinect skeleton, which is a designed abstraction of the body. We haven’t really escaped the symbolic representation of the body, but we’ve taken a step to disentangle it from how we structure our interaction with the system.

Hand-drawn doodle of lines and curves

I'd like to hope Sonified Body anticipates a future where AI liberates us from the data-entry mode of being in the world that we suffer from today. But I imagine the intimacy brought by embodied interaction can be used just as readily for harm.

If we look around today and see a world of bodies contorted around phones, minds contorted by addictive designs, and spirits contorted by algorithmically mediated social lives, then we can at least take solace in reconnecting with the body as a site of truth. Just stop, take a breath, notice how you feel. But as the surveillance infrastructure that we mindlessly continue to expand starts to take a deeper interest in predicting our behaviour from our movements, we may find ourselves institutionalised in body as well as in mind.

And yet, for those of us worried about this, I don’t think it helps for us to exclude ourselves from the coming revolution and take refuge in a bubble of moral indignation. We need to build things for ourselves, to own our technologies and our means of expression, and to stay present to what it does to us and those we share it with. We need to have fun with it, and be ready to offer an alternative.

Tim
London, 28 February 2026

Based on a talk originally presented at Future Folk, Cecil Sharp House, London, 6 Feb 2026.