by Lawrence Lek
Lawrence Lek is an artist, filmmaker, and musician working in the fields of virtual reality and simulation.
Geomancer (2017) — an AI satellite returns to Earth, hoping to become an artist, by Lawrence Lek → @lawrencelek
JF: In an interview with the RCA, you once described architecture as “discovering a place to exist”, in reference to your site-specific installations, sculptures, and their digital incarnations. What is the relationship between the physical and non-physical worlds you create?
LL: I often think about how spaces for children play with scale. There’s this fun, toy-like aspect to these worlds, where children get a sense of power because they are relatively small. Play is often power disguised as some collective activity. You see other social interactions in the playground too, where kids cooperate or bully each other.
In my work, physical and non-physical worlds are organically linked. It’s an intuitive process, though, and one that is quite nuanced. I’ve always been interested in the prehistoric journey of dwelling – the way humans began creating collective architecture at the same time as agriculture and animal husbandry. Society develops in parallel with ways to exist within shelter. Somehow that prehistoric, pre-recorded history of space has always really interested me. How this relates to today is what everybody’s still figuring out. Too little time has passed with these technological developments to form a coherent understanding.
JF: The experience of child’s play or fantasy can be a vehicle for escapism. To what extent can virtual worlds provide asylum from physical reality? Do you feel that the psychological impact of world-building and the transformative potential of fictionalised realities can be a method for navigating societal architectures?
LL: Sure. I think there’s an instinctive desire for escapism, and sometimes there’s a survival-like desire to escape. Escapism is more like a psychological version of physical refuge. Or, take the simple example of a table. To an adult, it’s a work surface. But to a child, it could be a tent. It’s something to gather around, and also a place of refuge. In earthquake zones, people are told to shelter under a doorway or a table.
My interest in mythology and the founding principles of civilisation showed me how rational spaces have always coexisted with irrational or sacred space. One modern-day equivalent is video games. Fantasy or escapism are often shown in a negative light, but I think video games aren’t clear-cut escapism. Those worlds exist parallel to our reality, so they inform one another. I know you’re interested in alternative spaces or places beyond more objective or goal-oriented styles of existence, right?
JF: Definitely, and it is within the moulding and shaping of these alternative spaces and digital landscapes that an audience is invited to question their agency and the path they are on, which often involves a return to the past – seeing what came before in order to see what is unfolding or revealing itself. How does using gaming as a storytelling device, particularly with its potential for non-linear narratives, impact our sense of time and agency? And what is the role of memory in your work, which you addressed most specifically in Nepenthe Zone?
LL: It’s a huge subject. If we take cinema, for example, what I like about long-form work is that you can have memories from within the work itself. The work has an internal memory structure – by the end of the third act, you recall something from the first act. Of course, this is to do with dramatic structure, but having the possibility of developing memories from earlier in the work is something that short-form work simply doesn’t have time for. Similarly, in a novel or a play, there’s this memory of what has already happened. The most obvious trope, in cinema at least, is the flashback – a literal return to something that has gone before.
Then there’s cultural or historical memory, where the hero or the characters have come from somewhere else. An obvious example of this would be in a film that’s set just after the war. All the characters are somehow affected or traumatised by the war; the shadow hangs over everything – a silent and recent memory.
There is also the archaeological film, or archaeological-horror film, where somebody has unknowingly moved onto an ancient burial ground and reawakened a ghost from the past. There is this longer-term haunting memory – a social or societal memory – that is awakened in the viewer. I’m interested in all four of those types of memory: the viewer’s memory inside the work, recent world history, biographical memory, and social memory.
The way that games and film and text allow the artwork to mirror those non-linear timelines is particularly interesting. In the case of Nepenthe, I wanted to explore the fusion of cultural memory – of diasporic migration – and the journey of an individual.
JF: Memory is a quality that challenges our expectations of entering a space. The use of weather in your work subverts these expectations; the texture of weather is so often associated with memory – the fog, the haze, the clear sky. Are you using weather to generate atmosphere or to provoke memory?
LL: For me, the weather is predominantly about creating an atmosphere. This atmosphere taps into multiple senses, which helps memories form. In everyday life, when there’s a storm cloud over the city as you run for the bus, that makes places or situations more specific and easier to recall. The difference is that in CGI worlds, you can control the weather. It is a way to heighten a state of atmosphere that can trigger memories or create future ones. Also, there’s a romantic tradition of weather personifying moods, and of weather and landscape being part of a whole in all sorts of cultures. I’m interested in exploring that through my work too.
JF: And music shares this quality of generating atmosphere — obviously, you’ve got a CGI environment, where there is no sound, but with AIDOL, which feels like an album, what comes first — the music or the environment?
LL: With AIDOL, because it was more about AI and music, it was a symbiotic process of working on the virtual set and the soundtrack simultaneously. Early on, I decided I wanted the film to be structured like an album. In previous, shorter works I would often complete the soundtrack beforehand, whereas AIDOL is a feature-length film, so it had to be co-created at the same time. I was also looking at the difference between making a very sound- and music-driven film versus a music video. I was interested in that ambiguity – like in a musical, where the drama happens almost in service to the unfolding of the songs.
JF: There is a lot of debate around AI and creative agency. In Geomancer, the film’s narrator asks, “Is irrationality the main characteristic of consciousness?”, proposing art as humanity’s last refuge. For this reason, the ‘Bio-Supremacists’ in AIDOL want to suppress the creativity of ‘Synths’ — your term for AI. Day-dreaming is an integral part of the creative process, the imaginal realm where ideas are conceived. Recently, AI has proven pretty effective at reading the thoughts of subjects in fMRI scans. If AI became emotionally aware and capable of creative self-expression, what would a post-human consciousness look and feel like?
LL: What is interesting to me is that cognition or intelligence is not necessary to perform many different tasks. So if you think of this in terms of recent advances like ChatGPT, the core Alan Turing question – can machines think? – is not necessarily the same as the question of whether we can develop AI that is really good at helping humans automate certain tasks.
Then there is self-expression and the romantic idea of individualistic artistic truth as some kind of fundamental expression of the way someone sees the world, mediated through a particular medium or form. There’s a specific part of that romantic creative process – the making of a novel or a film – that can be automated without any intelligence or actual thinking in the loop. At the same time, no one agrees on what cognition or intelligence actually is. Cognitive biologists, AI experts, and philosophers of mind all have their own interpretations of how thinking works. And then there is the more emotional side of the spectrum – expression, empathy, all of that – and there’s no consensus on those either.
Part of my work is visualising what one of those possibilities might look like. And for me, the possibility is, “What if AI could have all four of those kinds of memory – the cultural, the historical, the personal, and the encyclopaedic?” My recall of facts and actual details is not great. Clearly, current AI is far superior to me at recalling facts and figures. We’re already uploading all of our memories anyway, so we’ve already handed over some of our agency. There’s this offshoring, or kind of offsetting – that’s a separate political question about how we are voluntarily or semi-voluntarily handing over many of these things.
I think what post-human consciousness might feel like is actually rather uncanny, for sure. And it probably has a lot of potential for empathy and connection. It’s usually talked about in a dissociative way, although in narratives about AI, particularly science fiction, there’s also this parent/child, master/slave relationship. My main interest is exploring the empathetic, subjective side.
JF: There is a fluidity that comes with creative thinking and there’s that sort of neuroplasticity that you get with machine-learning programs that is a bit like the psychedelic experience, dissolving rigid states of consciousness into more fluid states outside of a linear time frame. In those instances, humans often feel heightened states of empathy. Perhaps it is in these hallucinatory moments that machine learning programs will bridge the gap. What is your current position on integrating machine learning into artistic practice and the near-future of creativity?
LL: I think we’re at a very nascent stage of understanding what AI development is doing. Why are these hallucinations happening at this relatively early stage of AI development? Is it because of how aligned that process is with a dream state? If you’ve seen the visualisations or diagrams of the associations that AI or large language models have, it’s this dream-like cloud of possibility from which sentences – or rather, sequences of letters and words – get made. Of course, what that machine hallucination might look like often gets visualised as what look like strange images or strange texts to us as humans. I’m sure more potent forms of visualisation or communication might emerge from that.
JF: Extrasensory abilities are often reported during altered states such as psychedelics or lucid dreaming. What is your sense of hacking extrasensory perception and gaining additional skill sets through the experience of existing in the virtual worlds - altered states - that art and gaming can achieve?
LL: I’m not consciously thinking of the work I’m creating as improving or extending anything. Of course, 3D simulation is used for military training – PTSD therapy, simulations for fighter pilots, all sorts of medical simulations so doctors can practise surgery in VR. There are many ‘serious games’ that use the same technologies as gaming to create a safe working environment for people to perform specific tasks.
As for altered states, I think for anyone who’s had the experience of playing a video game for a long time, being immersed in this virtual world, it gives you a different system of agency. I used to play Tony Hawk’s Pro Skater a lot as a teenager. When you go outside, you think of all the surfaces as something you could skate on, or other impossible things. The physics of the world, of the real world, has been altered by your expectations of what might be possible in the virtual one.
I think it doesn’t even need to be a literal video game for this to happen. Take non-interactive forms of media: when you see people making TikTok videos on the street, they are clearly in another reality, a cinematic reality framed by their phone. Their smartphone is an extension, a prosthesis of their body, in a way that is integral to their sense of self and to what they are recording.
JF: I love the idea in Geomancer that AI gets a kick out of gambling because of a desire to break free of rigid datasets and play in a space of randomness, chance and chaos. Do you feel that the world, and those you create, are more prone to entropy or syntropy?
LL: In Geomancer, there’s a scene in the casino where my idea was that, for the super-intelligent AI, the thing that they might crave is gambling, because that allows a window into the irrational, into a form of judgement – not based on ideal outcomes, but pure chance.
Of course there is a long artistic history of chance-based processes, not just in terms of creative production. I was thinking of chance and gaming as a cultural product, like the people who play the lottery are generally the people who can’t afford to gamble, but they want to escape. It’s not like the escapism of a video game, but economic escapism from a financial situation. So there’s some combination of a much wider game being played besides the actual game at the roulette table or a hand of poker.
As for irrationality in this kind of entropy question, I feel that in recent years there’s evidence of instability and irrationality at an individual and sociopolitical level. If you look at the rise of mental health apps or mindfulness sessions, it’s quite telling that there’s a huge need for antidotes to a ‘rational’ or quantified world.
As far as I understand, a lot of classical economic theory is based on this idea of what is called a rational actor – somebody makes rational decisions, like buying low and selling high, and behaves according to a very predictable set of motivations and actions that have economic consequences. Clearly, if you look at crypto or whatever, this is not the case; it’s faith-based gambling. It’s got this pyramid scheme-like logic to it. It has a language, and it has all the hallmarks of a cult or belief system, except it’s tied to money. So what I mean by this is that I feel in the absence of any meaningful, rational things to do, or behaviours to perform, or things to believe in, unfortunately the world is so ripe for exploitation by the creation of these belief systems.
JF: Talking of systems, let’s talk about what led you to Sinofuturism. Can you describe the Sinofuturist universe that you have created and its vision of the future?
LL: Around 2016, when I was writing the script for Geomancer, I was surprised to find that there wasn’t actually much playful critical discourse around the subject of China and AI. It was mostly talked about either in terms of this geopolitical conflict between the East and West, or as a question of cultural appropriation or representation, especially in science fiction films.
I was looking at the parallels between Chinese industrialisation and AI, and was struck by this parallel portrayal: either AI would save us or destroy us, and, similarly, China would save us or destroy us. Take the ecological crisis as a parallel example. The Western world blames China for global pollution, but at the same time, so much industrial production has been outsourced to China that it is not as simple as the earlier anti-globalisation movement, which asked why Nike shoe production was being outsourced to Vietnam or the Ivory Coast – surely things should be fairer than this. Whereas that earlier talk of globalisation was much more centred on corporations, here it’s centred on the world’s largest country by population.
So I made a video essay called Sinofuturism that looks at these contradictions, these conflicting thoughts and opinions. One important thing I noticed is that many of the criticisms focused on the Enlightenment-centred sovereign individual of humanism. And so I thought, what if the avatar of Sinofuturism was an AI, as opposed to a sovereign human individual?
In that, I was looking at the work of Afrofuturists, who often use the figure of the superhuman robot or alien to overcome the historical loss of sovereignty over the physical body under slavery – the effects of which persist to this day in the disenfranchisement of many different people. I thought: if the Afrofuturist struggle was over their own body and the rights, freedoms, and literal ownership over that body, what might the Sinofuturist equivalent be – ownership of a kind of collective body, or a hive mind? Rather than a bias towards the individual, what about a focus on the collective, like the hive mind of countless workers, both human and nonhuman?
Sinofuturism emerged from this reading of what AI might be as a being in its own right, which also had a lot to do with how I saw the portrayal of Chinese industrialisation. I thought that we should consider the ideal Sinofuturist avatar as this hive mind. There’s no such thing as the self or the individual; we are all part of the collective. AI would be the ultimate example of this idea of a consciousness without a singular body, without a singular identity, and without any present rights or agency. This sounds very philosophical, but it’s dealt with in a much more playful way in the video itself. It’s something I think has a lot of resonance with different people too, because it speaks to different generations of the Chinese diaspora. I simply made it because I was surprised that it didn’t exist before.
I was talking about these ideas with my friend, the musician Steve Goodman, aka Kode9. We were talking a lot about speculative fiction and science fiction, but particularly about the idea of ‘hyperstition’ – creative work that goes beyond science fiction and, in certain cases, becomes a self-fulfilling prophecy.
Many technological ideas, like AI, the internet, or cyberspace, first existed as science fiction. I like this idea that creative artefacts influence industrial culture – computer scientists grow up reading William Gibson or whatever, and then 25 years later they’re building some crazy search engine. So this idea of self-fulfilling prophecy, in the case of science fiction and technology, is not just an abstract idea; very often, it’s a reality.
Image: Geomancer (2017) — an AI satellite returns to Earth, hoping to become an artist, by Lawrence Lek. Courtesy the artist Lawrence Lek and Sadie Coles HQ