By Eleanor Drage
AI is Not Objective! Artificial Intelligence and the Politics of the Observed World
Madame Sosostris, famous clairvoyante,
Had a bad cold, nevertheless
Is known to be the wisest woman in Europe,
With a wicked pack of cards.
And here is the one-eyed merchant, and this card,
Which is blank, is something he carries on his back,
Which I am forbidden to see.
- T.S. Eliot, The Waste Land
Horoscopes and microscopes
For millennia, humanity has sought truth through technology: from tarot cards and microscopes to photographs and fingerprinting. But the secularisation of society and the decline of the mystical mean that not all modes of accessing the future and casting clarity over the present are equally accepted. This is often blamed on the Enlightenment, when replicable experiments became the favoured way of creating proper scientific knowledge. The hope was that humanity was finally approaching an era in which, with the right technologies, everything would be knowable. New tools of scientific observation like the thermometer and the encyclopaedia were the vehicles for this change. These artefacts - made in the places of science and in the libraries and offices of philosophers - were seen as substantively different to spiritual forms of truth-telling. They became institutionally validated modes of classifying the world, Europe’s colonies, and citizens.
To be counted and taxonomised was to be brought into existence. This is why science is often described as ‘performative’: it creates the world through the very process of observing it. It decides what counts as a perfect specimen and what is atypical, and ascribes a value to both. Of course, not everyone was allowed to occupy this position of observational, God-like power. The scientist had to be a man, and he had to be white. The rational subject was, therefore, also a racialised and gendered one. A strange paradox, because the Enlightenment’s premise was also that scientists aren’t implicated in their experiments - so surely it wouldn’t matter who was doing the science?
It’s important to retrace the history of science, because it reminds us that the idea of ‘objective science’ actually evolves over time. Today, AI may seem to hold the keys to the known world, but perhaps this idea will yet change. In their book Objectivity, Lorraine Daston and Peter Galison show that new technologies change what scientists consider to be neutral or unbiased science. AI isn’t just the product of science; it is directing its future. That future is one where - if key opinion leaders in AI get their way - it frees humanity from mortality and fallibility, and society becomes more accurate, more predictable and less emotional. “May Elon Musk go to Mars”, says feminist philosopher Rosi Braidotti; I want to stay here and merge with nature rather than supersede it. I want to become compost, as feminist biologist Donna Haraway suggests. Haraway rejects the AI-enabled ‘posthuman’ that is so popular among technologists in favour of a return to ‘humus’ - organic matter. AI’s environmental cost is now widely known (every five questions you ask ChatGPT consumes roughly a litre of water), yet the industry is projected to receive $232 billion of investment by 2025. It is still the horse everyone is backing in the race to ‘improve’ humanity and peel back the mysteries of the universe.
Materiality matters
There are a number of reasons why the myth of AI’s omniscience remains popular. The most crucial is that AI is often seen as superhuman - the media rarely accompanies articles about AI with anything other than images of white robots with blue eyes. We need to deflate some of the hype by picturing AI instead as an assemblage of human data, hardware, mined materials, and hard work. We should remember that when AI appears to be talking to us, this is the product of a design choice and not an expression of what AI ‘is’. Real people select, process, and annotate its data; they engineer the algorithms; they decide which metrics to use; and they deploy it into the real world in conjunction with institutions that have their own politics and values. When AI observes the world, it looks through the eyes of those who have built it and of those who have paid to use it. The former often end up building their AI infrastructure on Google or Microsoft software, meaning that most AI products can be traced back to Big Tech. The latter are those who can afford to procure and deploy it; unsurprisingly, the most expensive contracts with AI service providers have been bought by clients including Microsoft and the military. The issue here isn’t that AI is biased so much as that a small group of companies in the West holds a monopoly over the production of AI.
Through the looking glass
Last year, I published a piece of research with my friend and colleague Dr Federica Frabetti about a package of technologies created by a company called Dataminr. We call it ‘protest recognition software’ because it was used to track, monitor and shut down Black Lives Matter protests in the USA in 2016. We were also concerned because investigative journalist Max Colbert had discovered that the UK government had spent as much as £5 million on Dataminr tools. This was at a time when the government was making its anti-protest stance clear through the ‘noisy bill’ and a suite of other measures designed to disincentivise and break up protests. From Extinction Rebellion and Just Stop Oil to the Chris Packham and Sarah Everard protests, these mobilisations were predominantly left-leaning, environmentally conscious and pro-justice - and, in the case of the latter two, included a high proportion of women and people of colour. Dataminr already had a history of helping police on the other side of the pond, in Baltimore and Los Angeles, to clamp down on BLM uprisings. They had taught their protest recognition software that a dangerous protest was a diverse, politically left and anti-government crowd.
Is this an example of ‘biased’ AI? I’m suspicious of the term bias, because it implies that something can be debiased by tweaking an algorithm or changing the dataset. You can implement all the algorithmic fairness metrics in the world, but if an AI company collaborates with powerful, unjust institutions, its products will still be likely to harm people who are already at risk of institutional violence. AI companies and their clients shape how the AI behaves and what its outputs are, whether that is the decision to arrest someone, deny them bail, or refuse them credit. As long as companies and clients put their bottom line above the need to promote justice, their technologies will too.
Seeing in low definition
There’s another way in which the term ‘bias’ fails to describe the harmful effects of AI. A decade ago, two Stanford computer scientists attempted to create an AI system that could decode a person’s sexuality just by looking at their face. To make this tool they scraped information from online dating websites without users’ consent (note: bad technology = unjust data harvesting practices). What the system was actually doing was associating sexuality with proxies like the tilt of someone’s head or their use of makeup. These things can be indicators of sexuality, but sexuality itself is mysterious, complicated, and often evolves over time. The engineers’ profound misunderstanding of the temporality of sexuality (erratic and wild - not stagnant, pre-ordained, and fixed) prompted a real scare among LGBTQ+ people, who feared AI might one day ‘out’ them. There are many issues here, not least that the practice of open experimentation in AI means that modern revivals of pseudoscience - like 19th-century phrenological attempts to read someone’s personality from their face - are common. Mark Zuckerberg showed the same urge to unravel humanity’s mysteries computationally when he wondered out loud whether friendship could be ‘solved’ using mathematics. Like friendship, sexuality is often surprising and always relational. Omise'eke Natasha Tinsley brings this to life in her book Ezili’s Mirrors, where she describes how the Ezili family of Haitian Vodou spirit forces embody shifting forms of sexual identity and practice.
My work is often in praise of life’s ambiguities and complexities. Like the blank card pulled by Madame Sosostris, some parts of life can’t be predicted. Computer science, which helps companies earn money from forecasting trends, often attempts to erase unpredictability. This is the case with AI, most of which is built on machine learning (ML) techniques. ML requires categorisation and binary code, leaving little room for the parts of life that evade classification or are characterised by ambiguity.
But it doesn’t have to be this way. Prototypes are being developed by radical organisations like the Indigenous Protocol and Artificial Intelligence Working Group, who are designing speculative futures in which computer code is interwoven with the DNA of humans and sea creatures. They’re not trying to create datafied approximations of people, but to foster responsible and responsive connections between species. Less speculative, perhaps, are current attempts to encourage ‘reparative’ algorithms - that is, AI systems that work in favour of the marginalised and make some attempt at compensating for today’s inequalities. This is real de-biasing, because it’s impossible to create a neutral system: every AI product has a politics and favours some people over others. If the status quo is an unjust world, using algorithms to replicate it will only compound existing problems. To be silent is to be complicit - we need AI that leans in the direction of justice. That relies on us debunking the idea that AI is objective, and instead making technologies that actively do good.
++
Eleanor Drage is a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, and a project leader on the Desirable Digitisation project exploring AI ethics. Her work has been covered by the BBC, Forbes, The Telegraph, The Guardian, Glamour Magazine and internationally. She is co-host of The Good Robot Podcast, on gender, feminism and technology, author of An Experience of the Impossible: The Planetary Humanism of European Women’s SF (Oct 2023), and co-editor of The Good Robot: Feminist Voices on the Future of Technology (Feb 2024) and Feminist AI: Critical Perspectives on Algorithms, Data and Intelligent Machines (Oct 2023).
AI is Not Objective! Artificial intelligence and the Politics of the Observed World
Madame Sosostris, famous clairvoyante,
Had a bad cold, nevertheless
Is known to be the wisest woman in Europe,
With a wicked pack of cards.
And here is the one-eyed merchant, and this card,
Which is blank, is something he carries on his back,
Which I am forbidden to see.
Horoscopes and microscopes
For millennia, humanity has sought truth through technology: from tarot cards and microscopes to photographs and fingerprinting. The secularisation of society and the decline of the mystical means that not all modes of accessing the future and casting clarity over the present are as equally accepted. This is often blamed on the Enlightenment, during which time replicable experiments were the favoured way of creating proper scientific knowledge. The hope was that humanity was finally approaching an era where with the right technologies, everything was knowable. New tools of scientific observation like the thermometer and the encyclopaedia were the vehicles for this change. These artefacts - which were made in the places of science and the libraries and offices of philosophers - were seen as substantively different to spiritual forms of truth-telling. They became institutionally validated modes of classifying the world, Europe’s colonies, and citizens.
To be counted and taxonomised was to be brought into existence. This is why science is often ‘performative’, which means that it creates the world through the process of observing it. It decides what is a perfect specimen, what is atypical, and ascribes a value to both. Of course, not everyone was allowed to be in this position of observational, God-like power. The scientist had to be a man, and he must be white. The rational subject was, therefore, also a racialised and gendered one. A strange paradox, because the Enlightenment’s premise was also that scientists aren’t implicated in their experiments, so surely it wouldn’t matter who's doing the science?
It’s important to retrace the history of science, because it reminds us that the idea of ‘objective science’ actually evolves over time. Today, AI may seem to hold the keys to the known world, but perhaps this idea will yet change. In their book Objectivity, Lorraine Daston and Peter Gallison show that new technologies actually change what scientists consider to be neutral or unbiased science. AI isn’t just the product of science, it is directing its future. That future is one where - if key opinion leaders in AI get their way - it frees humanity from mortality and fallibility. Society becomes more accurate and predictable and less emotional. “May Elon Musk go to Mars”, said feminist philosopher Rosi Braidotti, but I want to stay here and merge with rather than supercede nature. I want to become compost, as feminist biologist Donna Haraway suggests. Haraway rejects the AI-enabled ‘posthuman’ that is so popular among technologists in favour of returning to ‘Humus’ - organic matter. While AI’s environmental cost is now widely known (every 5 questions you ask ChatGPT needs 1 litre of water), the industry will receive $232 billion of investment by 2025. But it’s still the horse everyone is backing in the race to ‘improve’ humanity and peel back the mysteries of the universe.
Materiality matters
There are a number of reasons why the myth of AI’s omniscience remains popular. The most crucial one is that AI is often seen to be superhuman. While the media rarely accompanies articles about AI with anything other than images of white robots with blue eyes, we need to deflate some of the hype by picturing AI as an assemblage of human data, hardware, mined materials, and hard work. We should remember that when AI appears to be talking to us, that this is the product of a design choice and not an expression of what AI ‘is’. Real people select, process, and annotate its data, they engineer the algorithms, they decide on which metrics to use, and they deploy it into the real world in conjunction with institutions that have their own politics and values. When AI observes the world, it looks at it through the eyes of those who have built it and paid to use it. Often the former end up building their AI infrastructure using Google or Microsoft software, meaning that you can trace most AI products back to Big Tech. The latter are those who can afford to procure and deploy it; unsurprisingly, the most expensive contracts with AI service providers have been bought by clients including Microsoft and the military. The issue here isn’t that AI is biased, so much as a small group of companies in the West hold a monopoly over the production of AI.
Through the looking glass
Last year, I published a piece of research with my friend and colleague Dr Federica Frabetti about a package of technologies created by a company called Dataminr. We call it ‘protest recognition software’, because it was used to track, monitor and shut down Black Lives Matter protests in the USA in 2016. We were also concerned because investigative journalist Max Colbert had discovered that the UK government had spent as much as £5 million on Dataminr tools. This was at a time when the government were making their anti-protest stance clear through the ‘noisy bill’ and a suite of other measures designed to disincentivise and break-up protests. From Extinction Rebellion and Just Stop Oil to the Chris Packham and Sarah Everard protests, these mobilisations were predominantly left-leaning, environmentally-conscious and pro-justice crowds - and in the case of the latter two, with a high proportion of women and people of colour. Dataminr already had a history of helping the police on the other side of the pond in Baltimore and Los Angeles to clamp down on BLM uprisings. They had taught their protest recognition software to learn that a dangerous protest was a diverse, politically left and anti-government crowd.
Is this an example of ‘biased’ AI? I’m suspicious of the term bias, because it implies that something can be debiased by tweaking an algorithm or changing the dataset. You can implement all the algorithmic fairness metrics in the world, but if an AI company collaborates with powerful, unjust institutions, it will still be likely to harm people who are already at risk of institutional violence. AI companies and their clients affect how the AI behaves and what its outputs are, whether they be the decision to arrest someone or deny them bail or a credit loan. As long as companies and clients retain their desire to pursue their bottom line over the need to promote justice, technologies will too.
Seeing in low definition
There’s another way that the term ‘bias’ doesn’t fully describe harmful effects of AI. A decade ago, two Stanford computer scientists attempted to create an AI system that could decode a person’s sexuality just by looking at their face. To make this tool they scraped information from online dating websites without its users’ consent (note: bad technology = unjust data harvesting practices). In fact, what the system was actually doing was associating sexuality with proxies like the tilt of someone’s head or their use of makeup. These things can be indicators of sexuality, but sexuality itself is mysterious, complicated, and often evolves over time. The engineers’ profound misunderstanding of the temporality of sexuality (erratic and wild - not stagnant, pre-ordained, and fixed) prompted a real scare for LGBTQ+ people who feared AI might one day ‘out them’. There are many issues here, not least that the practice of open experimentation in AI often means that modern reprivals of pseudoscience - like 19th-century phrenological attempts to ascertain someone’s personality by looking at their face - are common. Mark Zuckerberg also showed this urge to unravel humanity’s mysteries computationally when he wondered out loud whether friendship could be ‘solved’ using mathematics. Like friendship, sexuality is often surprising and always relational. Omise'eke Natasha Tinsley brings this to life in her book Ezili’s Mirrors, where she describes how the Ezili family of Haitian Vodou spirit forces embody shifting forms of sexual identity and practice.
My work is often in praise of life’s ambiguities and complexities. Like the blank card pulled by Madame Sosostris, some parts of life can’t be predicted. Computer science, which helps companies to earn money from forecasting trends, often attempts to erase unpredictability. This is the case with AI, which is mostly based in machine learning (ML) techniques. ML requires categorisations and binary code, leaving little room for the parts of life which evade classification or are characterised by ambiguity.
But it doesn’t have to be this way. Prototypes are being developed by radical organisations like the The Indigenous Protocol and Artificial Intelligence Working Group, who are designing speculative futures where computer code is interwoven with the DNA of humans and sea creatures. They’re not trying to create datafied approximations of people, but instead foster responsible and responsive connections between species. Less speculative perhaps are current attempts to encourage ‘reparative’ algorithms - as in, AI systems that work in favour of the marginalised and make some attempt at compensating for the inequalities of today. This is real de-biasing, because it’s impossible to create a neutral system - every AI product has a politics and favours some people over others. If the status quo is an unjust world, using algorithms to replicate it will only compound existing problems. To be silent is to be complicit - we need AI that leans in the direction of justice. This relies on us debunking the idea that AI is objective, and instead making technologies that actively do good.
++
Eleanor Drage is a Senior Research Fellow at the University of Cambridge. Her work has been covered by the BBC, Forbes, The Telegraph, The Guardian, Glamour Magazine and internationally. She is the co-host of The Good Robot Podcast, and author of An Experience of the Impossible: The Planetary Humanism of European Women’s SF (Oct 2023), and co-editor of The Good Robot: Feminist Voices on the Future of Technology (Feb 2024), and Feminist AI: Critical Perspectives on Algorithms, Data and Intelligent Machines (Oct 2023).
Dr Eleanor Drage is Research Fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, a project leader at the Desirable Digitisation Project exploring AI ethics and co-host on The Good Robot Podcast, on gender, feminism and technologyauthor of An Experience of the Impossible: The Planetary Humanism of European Women’s SF (Oct 2023), and co-editor of The Good Robot: Feminist Voices on the Future of Technology (Feb 2024), and Feminist AI: Critical Perspectives on Algorithms, Data and Intelligent Machines (Oct 2023).
By Eleanor Drage
AI is Not Objective! Artificial intelligence and the Politics of the Observed World
Madame Sosostris, famous clairvoyante,
Had a bad cold, nevertheless
Is known to be the wisest woman in Europe,
With a wicked pack of cards.
And here is the one-eyed merchant, and this card,
Which is blank, is something he carries on his back,
Which I am forbidden to see.
Horoscopes and microscopes
For millennia, humanity has sought truth through technology: from tarot cards and microscopes to photographs and fingerprinting. The secularisation of society and the decline of the mystical means that not all modes of accessing the future and casting clarity over the present are as equally accepted. This is often blamed on the Enlightenment, during which time replicable experiments were the favoured way of creating proper scientific knowledge. The hope was that humanity was finally approaching an era where with the right technologies, everything was knowable. New tools of scientific observation like the thermometer and the encyclopaedia were the vehicles for this change. These artefacts - which were made in the places of science and the libraries and offices of philosophers - were seen as substantively different to spiritual forms of truth-telling. They became institutionally validated modes of classifying the world, Europe’s colonies, and citizens.
To be counted and taxonomised was to be brought into existence. This is why science is often ‘performative’, which means that it creates the world through the process of observing it. It decides what is a perfect specimen, what is atypical, and ascribes a value to both. Of course, not everyone was allowed to be in this position of observational, God-like power. The scientist had to be a man, and he must be white. The rational subject was, therefore, also a racialised and gendered one. A strange paradox, because the Enlightenment’s premise was also that scientists aren’t implicated in their experiments, so surely it wouldn’t matter who's doing the science?
It’s important to retrace the history of science, because it reminds us that the idea of ‘objective science’ actually evolves over time. Today, AI may seem to hold the keys to the known world, but perhaps this idea will yet change. In their book Objectivity, Lorraine Daston and Peter Gallison show that new technologies actually change what scientists consider to be neutral or unbiased science. AI isn’t just the product of science, it is directing its future. That future is one where - if key opinion leaders in AI get their way - it frees humanity from mortality and fallibility. Society becomes more accurate and predictable and less emotional. “May Elon Musk go to Mars”, said feminist philosopher Rosi Braidotti, but I want to stay here and merge with rather than supercede nature. I want to become compost, as feminist biologist Donna Haraway suggests. Haraway rejects the AI-enabled ‘posthuman’ that is so popular among technologists in favour of returning to ‘Humus’ - organic matter. While AI’s environmental cost is now widely known (every 5 questions you ask ChatGPT needs 1 litre of water), the industry will receive $232 billion of investment by 2025. But it’s still the horse everyone is backing in the race to ‘improve’ humanity and peel back the mysteries of the universe.
Materiality matters
There are a number of reasons why the myth of AI’s omniscience remains popular. The most crucial one is that AI is often seen to be superhuman. While the media rarely accompanies articles about AI with anything other than images of white robots with blue eyes, we need to deflate some of the hype by picturing AI as an assemblage of human data, hardware, mined materials, and hard work. We should remember that when AI appears to be talking to us, that this is the product of a design choice and not an expression of what AI ‘is’. Real people select, process, and annotate its data, they engineer the algorithms, they decide on which metrics to use, and they deploy it into the real world in conjunction with institutions that have their own politics and values. When AI observes the world, it looks at it through the eyes of those who have built it and paid to use it. Often the former end up building their AI infrastructure using Google or Microsoft software, meaning that you can trace most AI products back to Big Tech. The latter are those who can afford to procure and deploy it; unsurprisingly, the most expensive contracts with AI service providers have been bought by clients including Microsoft and the military. The issue here isn’t that AI is biased, so much as a small group of companies in the West hold a monopoly over the production of AI.
Through the looking glass
Last year, I published a piece of research with my friend and colleague Dr Federica Frabetti about a package of technologies created by a company called Dataminr. We call it ‘protest recognition software’, because it was used to track, monitor and shut down Black Lives Matter protests in the USA in 2016. We were also concerned because investigative journalist Max Colbert had discovered that the UK government had spent as much as £5 million on Dataminr tools. This was at a time when the government were making their anti-protest stance clear through the ‘noisy bill’ and a suite of other measures designed to disincentivise and break-up protests. From Extinction Rebellion and Just Stop Oil to the Chris Packham and Sarah Everard protests, these mobilisations were predominantly left-leaning, environmentally-conscious and pro-justice crowds - and in the case of the latter two, with a high proportion of women and people of colour. Dataminr already had a history of helping the police on the other side of the pond in Baltimore and Los Angeles to clamp down on BLM uprisings. They had taught their protest recognition software to learn that a dangerous protest was a diverse, politically left and anti-government crowd.
Is this an example of ‘biased’ AI? I’m suspicious of the term bias, because it implies that something can be debiased by tweaking an algorithm or changing the dataset. You can implement all the algorithmic fairness metrics in the world, but if an AI company collaborates with powerful, unjust institutions, it will still be likely to harm people who are already at risk of institutional violence. AI companies and their clients affect how the AI behaves and what its outputs are, whether they be the decision to arrest someone or deny them bail or a credit loan. As long as companies and clients retain their desire to pursue their bottom line over the need to promote justice, technologies will too.
Seeing in low definition
There’s another way that the term ‘bias’ doesn’t fully describe harmful effects of AI. A decade ago, two Stanford computer scientists attempted to create an AI system that could decode a person’s sexuality just by looking at their face. To make this tool they scraped information from online dating websites without its users’ consent (note: bad technology = unjust data harvesting practices). In fact, what the system was actually doing was associating sexuality with proxies like the tilt of someone’s head or their use of makeup. These things can be indicators of sexuality, but sexuality itself is mysterious, complicated, and often evolves over time. The engineers’ profound misunderstanding of the temporality of sexuality (erratic and wild - not stagnant, pre-ordained, and fixed) prompted a real scare for LGBTQ+ people who feared AI might one day ‘out them’. There are many issues here, not least that the practice of open experimentation in AI often means that modern reprivals of pseudoscience - like 19th-century phrenological attempts to ascertain someone’s personality by looking at their face - are common. Mark Zuckerberg also showed this urge to unravel humanity’s mysteries computationally when he wondered out loud whether friendship could be ‘solved’ using mathematics. Like friendship, sexuality is often surprising and always relational. Omise'eke Natasha Tinsley brings this to life in her book Ezili’s Mirrors, where she describes how the Ezili family of Haitian Vodou spirit forces embody shifting forms of sexual identity and practice.
My work is often in praise of life’s ambiguities and complexities. Like the blank card pulled by Madame Sosostris, some parts of life can’t be predicted. Computer science, which helps companies to earn money from forecasting trends, often attempts to erase unpredictability. This is the case with AI, which is mostly based in machine learning (ML) techniques. ML requires categorisations and binary code, leaving little room for the parts of life which evade classification or are characterised by ambiguity.
But it doesn’t have to be this way. Prototypes are being developed by radical organisations like the The Indigenous Protocol and Artificial Intelligence Working Group, who are designing speculative futures where computer code is interwoven with the DNA of humans and sea creatures. They’re not trying to create datafied approximations of people, but instead foster responsible and responsive connections between species. Less speculative perhaps are current attempts to encourage ‘reparative’ algorithms - as in, AI systems that work in favour of the marginalised and make some attempt at compensating for the inequalities of today. This is real de-biasing, because it’s impossible to create a neutral system - every AI product has a politics and favours some people over others. If the status quo is an unjust world, using algorithms to replicate it will only compound existing problems. To be silent is to be complicit - we need AI that leans in the direction of justice. This relies on us debunking the idea that AI is objective, and instead making technologies that actively do good.
++
Eleanor Drage is a Senior Research Fellow at the University of Cambridge. Her work has been covered by the BBC, Forbes, The Telegraph, The Guardian, Glamour Magazine and internationally. She is the co-host of The Good Robot Podcast, and author of An Experience of the Impossible: The Planetary Humanism of European Women’s SF (Oct 2023), and co-editor of The Good Robot: Feminist Voices on the Future of Technology (Feb 2024), and Feminist AI: Critical Perspectives on Algorithms, Data and Intelligent Machines (Oct 2023).
AI is Not Objective! Artificial intelligence and the Politics of the Observed World
Madame Sosostris, famous clairvoyante,
Had a bad cold, nevertheless
Is known to be the wisest woman in Europe,
With a wicked pack of cards.
And here is the one-eyed merchant, and this card,
Which is blank, is something he carries on his back,
Which I am forbidden to see.
Horoscopes and microscopes
For millennia, humanity has sought truth through technology: from tarot cards and microscopes to photographs and fingerprinting. The secularisation of society and the decline of the mystical means that not all modes of accessing the future and casting clarity over the present are as equally accepted. This is often blamed on the Enlightenment, during which time replicable experiments were the favoured way of creating proper scientific knowledge. The hope was that humanity was finally approaching an era where with the right technologies, everything was knowable. New tools of scientific observation like the thermometer and the encyclopaedia were the vehicles for this change. These artefacts - which were made in the places of science and the libraries and offices of philosophers - were seen as substantively different to spiritual forms of truth-telling. They became institutionally validated modes of classifying the world, Europe’s colonies, and citizens.
To be counted and taxonomised was to be brought into existence. This is why science is often ‘performative’, which means that it creates the world through the process of observing it. It decides what is a perfect specimen, what is atypical, and ascribes a value to both. Of course, not everyone was allowed to be in this position of observational, God-like power. The scientist had to be a man, and he must be white. The rational subject was, therefore, also a racialised and gendered one. A strange paradox, because the Enlightenment’s premise was also that scientists aren’t implicated in their experiments, so surely it wouldn’t matter who's doing the science?
It’s important to retrace the history of science, because it reminds us that the idea of ‘objective science’ actually evolves over time. Today, AI may seem to hold the keys to the known world, but perhaps this idea will yet change. In their book Objectivity, Lorraine Daston and Peter Gallison show that new technologies actually change what scientists consider to be neutral or unbiased science. AI isn’t just the product of science, it is directing its future. That future is one where - if key opinion leaders in AI get their way - it frees humanity from mortality and fallibility. Society becomes more accurate and predictable and less emotional. “May Elon Musk go to Mars”, said feminist philosopher Rosi Braidotti, but I want to stay here and merge with rather than supercede nature. I want to become compost, as feminist biologist Donna Haraway suggests. Haraway rejects the AI-enabled ‘posthuman’ that is so popular among technologists in favour of returning to ‘Humus’ - organic matter. While AI’s environmental cost is now widely known (every 5 questions you ask ChatGPT needs 1 litre of water), the industry will receive $232 billion of investment by 2025. But it’s still the horse everyone is backing in the race to ‘improve’ humanity and peel back the mysteries of the universe.
Materiality matters
There are a number of reasons why the myth of AI’s omniscience remains popular. The most crucial one is that AI is often seen to be superhuman. While the media rarely accompanies articles about AI with anything other than images of white robots with blue eyes, we need to deflate some of the hype by picturing AI as an assemblage of human data, hardware, mined materials, and hard work. We should remember that when AI appears to be talking to us, that this is the product of a design choice and not an expression of what AI ‘is’. Real people select, process, and annotate its data, they engineer the algorithms, they decide on which metrics to use, and they deploy it into the real world in conjunction with institutions that have their own politics and values. When AI observes the world, it looks at it through the eyes of those who have built it and paid to use it. Often the former end up building their AI infrastructure using Google or Microsoft software, meaning that you can trace most AI products back to Big Tech. The latter are those who can afford to procure and deploy it; unsurprisingly, the most expensive contracts with AI service providers have been bought by clients including Microsoft and the military. The issue here isn’t that AI is biased, so much as a small group of companies in the West hold a monopoly over the production of AI.
Through the looking glass
Last year, I published a piece of research with my friend and colleague Dr Federica Frabetti about a package of technologies created by a company called Dataminr. We call it ‘protest recognition software’, because it was used to track, monitor and shut down Black Lives Matter protests in the USA in 2016. We were also concerned because investigative journalist Max Colbert had discovered that the UK government had spent as much as £5 million on Dataminr tools. This was at a time when the government were making their anti-protest stance clear through the ‘noisy bill’ and a suite of other measures designed to disincentivise and break-up protests. From Extinction Rebellion and Just Stop Oil to the Chris Packham and Sarah Everard protests, these mobilisations were predominantly left-leaning, environmentally-conscious and pro-justice crowds - and in the case of the latter two, with a high proportion of women and people of colour. Dataminr already had a history of helping the police on the other side of the pond in Baltimore and Los Angeles to clamp down on BLM uprisings. They had taught their protest recognition software to learn that a dangerous protest was a diverse, politically left and anti-government crowd.
Is this an example of ‘biased’ AI? I’m suspicious of the term bias, because it implies that something can be debiased by tweaking an algorithm or changing the dataset. You can implement all the algorithmic fairness metrics in the world, but if an AI company collaborates with powerful, unjust institutions, it will still be likely to harm people who are already at risk of institutional violence. AI companies and their clients affect how the AI behaves and what its outputs are, whether they be the decision to arrest someone or deny them bail or a credit loan. As long as companies and clients retain their desire to pursue their bottom line over the need to promote justice, technologies will too.
Seeing in low definition
There’s another way that the term ‘bias’ doesn’t fully describe harmful effects of AI. A decade ago, two Stanford computer scientists attempted to create an AI system that could decode a person’s sexuality just by looking at their face. To make this tool they scraped information from online dating websites without its users’ consent (note: bad technology = unjust data harvesting practices). In fact, what the system was actually doing was associating sexuality with proxies like the tilt of someone’s head or their use of makeup. These things can be indicators of sexuality, but sexuality itself is mysterious, complicated, and often evolves over time. The engineers’ profound misunderstanding of the temporality of sexuality (erratic and wild - not stagnant, pre-ordained, and fixed) prompted a real scare for LGBTQ+ people who feared AI might one day ‘out them’. There are many issues here, not least that the practice of open experimentation in AI often means that modern reprivals of pseudoscience - like 19th-century phrenological attempts to ascertain someone’s personality by looking at their face - are common. Mark Zuckerberg also showed this urge to unravel humanity’s mysteries computationally when he wondered out loud whether friendship could be ‘solved’ using mathematics. Like friendship, sexuality is often surprising and always relational. Omise'eke Natasha Tinsley brings this to life in her book Ezili’s Mirrors, where she describes how the Ezili family of Haitian Vodou spirit forces embody shifting forms of sexual identity and practice.
My work is often in praise of life’s ambiguities and complexities. Like the blank card pulled by Madame Sosostris, some parts of life can’t be predicted. Computer science, which helps companies to earn money from forecasting trends, often attempts to erase unpredictability. This is the case with AI, which is mostly based in machine learning (ML) techniques. ML requires categorisations and binary code, leaving little room for the parts of life which evade classification or are characterised by ambiguity.
But it doesn’t have to be this way. Prototypes are being developed by radical organisations like the The Indigenous Protocol and Artificial Intelligence Working Group, who are designing speculative futures where computer code is interwoven with the DNA of humans and sea creatures. They’re not trying to create datafied approximations of people, but instead foster responsible and responsive connections between species. Less speculative perhaps are current attempts to encourage ‘reparative’ algorithms - as in, AI systems that work in favour of the marginalised and make some attempt at compensating for the inequalities of today. This is real de-biasing, because it’s impossible to create a neutral system - every AI product has a politics and favours some people over others. If the status quo is an unjust world, using algorithms to replicate it will only compound existing problems. To be silent is to be complicit - we need AI that leans in the direction of justice. This relies on us debunking the idea that AI is objective, and instead making technologies that actively do good.
++
Eleanor Drage is a Senior Research Fellow at the University of Cambridge. Her work has been covered by the BBC, Forbes, The Telegraph, The Guardian, Glamour Magazine and internationally. She is the co-host of The Good Robot Podcast, and author of An Experience of the Impossible: The Planetary Humanism of European Women’s SF (Oct 2023), and co-editor of The Good Robot: Feminist Voices on the Future of Technology (Feb 2024), and Feminist AI: Critical Perspectives on Algorithms, Data and Intelligent Machines (Oct 2023).
Dr Eleanor Drage is Research Fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, a project leader at the Desirable Digitisation Project exploring AI ethics and co-host on The Good Robot Podcast, on gender, feminism and technologyauthor of An Experience of the Impossible: The Planetary Humanism of European Women’s SF (Oct 2023), and co-editor of The Good Robot: Feminist Voices on the Future of Technology (Feb 2024), and Feminist AI: Critical Perspectives on Algorithms, Data and Intelligent Machines (Oct 2023).
By Eleanor Drage
AI is Not Objective! Artificial intelligence and the Politics of the Observed World
Madame Sosostris, famous clairvoyante,
Had a bad cold, nevertheless
Is known to be the wisest woman in Europe,
With a wicked pack of cards.
And here is the one-eyed merchant, and this card,
Which is blank, is something he carries on his back,
Which I am forbidden to see.
Horoscopes and microscopes
For millennia, humanity has sought truth through technology: from tarot cards and microscopes to photographs and fingerprinting. The secularisation of society and the decline of the mystical means that not all modes of accessing the future and casting clarity over the present are as equally accepted. This is often blamed on the Enlightenment, during which time replicable experiments were the favoured way of creating proper scientific knowledge. The hope was that humanity was finally approaching an era where with the right technologies, everything was knowable. New tools of scientific observation like the thermometer and the encyclopaedia were the vehicles for this change. These artefacts - which were made in the places of science and the libraries and offices of philosophers - were seen as substantively different to spiritual forms of truth-telling. They became institutionally validated modes of classifying the world, Europe’s colonies, and citizens.
To be counted and taxonomised was to be brought into existence. This is why science is often ‘performative’, which means that it creates the world through the process of observing it. It decides what is a perfect specimen, what is atypical, and ascribes a value to both. Of course, not everyone was allowed to be in this position of observational, God-like power. The scientist had to be a man, and he must be white. The rational subject was, therefore, also a racialised and gendered one. A strange paradox, because the Enlightenment’s premise was also that scientists aren’t implicated in their experiments, so surely it wouldn’t matter who's doing the science?
It’s important to retrace the history of science, because it reminds us that the idea of ‘objective science’ actually evolves over time. Today, AI may seem to hold the keys to the known world, but perhaps this idea will yet change. In their book Objectivity, Lorraine Daston and Peter Gallison show that new technologies actually change what scientists consider to be neutral or unbiased science. AI isn’t just the product of science, it is directing its future. That future is one where - if key opinion leaders in AI get their way - it frees humanity from mortality and fallibility. Society becomes more accurate and predictable and less emotional. “May Elon Musk go to Mars”, said feminist philosopher Rosi Braidotti, but I want to stay here and merge with rather than supercede nature. I want to become compost, as feminist biologist Donna Haraway suggests. Haraway rejects the AI-enabled ‘posthuman’ that is so popular among technologists in favour of returning to ‘Humus’ - organic matter. While AI’s environmental cost is now widely known (every 5 questions you ask ChatGPT needs 1 litre of water), the industry will receive $232 billion of investment by 2025. But it’s still the horse everyone is backing in the race to ‘improve’ humanity and peel back the mysteries of the universe.
Materiality matters
There are a number of reasons why the myth of AI’s omniscience remains popular. The most crucial one is that AI is often seen to be superhuman. While the media rarely accompanies articles about AI with anything other than images of white robots with blue eyes, we need to deflate some of the hype by picturing AI as an assemblage of human data, hardware, mined materials, and hard work. We should remember that when AI appears to be talking to us, that this is the product of a design choice and not an expression of what AI ‘is’. Real people select, process, and annotate its data, they engineer the algorithms, they decide on which metrics to use, and they deploy it into the real world in conjunction with institutions that have their own politics and values. When AI observes the world, it looks at it through the eyes of those who have built it and paid to use it. Often the former end up building their AI infrastructure using Google or Microsoft software, meaning that you can trace most AI products back to Big Tech. The latter are those who can afford to procure and deploy it; unsurprisingly, the most expensive contracts with AI service providers have been bought by clients including Microsoft and the military. The issue here isn’t that AI is biased, so much as a small group of companies in the West hold a monopoly over the production of AI.
Through the looking glass
Last year, I published a piece of research with my friend and colleague Dr Federica Frabetti about a package of technologies created by a company called Dataminr. We call it ‘protest recognition software’, because it was used to track, monitor and shut down Black Lives Matter protests in the USA in 2016. We were also concerned because investigative journalist Max Colbert had discovered that the UK government had spent as much as £5 million on Dataminr tools. This was at a time when the government were making their anti-protest stance clear through the ‘noisy bill’ and a suite of other measures designed to disincentivise and break-up protests. From Extinction Rebellion and Just Stop Oil to the Chris Packham and Sarah Everard protests, these mobilisations were predominantly left-leaning, environmentally-conscious and pro-justice crowds - and in the case of the latter two, with a high proportion of women and people of colour. Dataminr already had a history of helping the police on the other side of the pond in Baltimore and Los Angeles to clamp down on BLM uprisings. They had taught their protest recognition software to learn that a dangerous protest was a diverse, politically left and anti-government crowd.
Is this an example of ‘biased’ AI? I’m suspicious of the term bias, because it implies that something can be debiased by tweaking an algorithm or changing the dataset. You can implement all the algorithmic fairness metrics in the world, but if an AI company collaborates with powerful, unjust institutions, it will still be likely to harm people who are already at risk of institutional violence. AI companies and their clients affect how the AI behaves and what its outputs are, whether they be the decision to arrest someone or deny them bail or a credit loan. As long as companies and clients retain their desire to pursue their bottom line over the need to promote justice, technologies will too.
Seeing in low definition
There’s another way that the term ‘bias’ doesn’t fully describe harmful effects of AI. A decade ago, two Stanford computer scientists attempted to create an AI system that could decode a person’s sexuality just by looking at their face. To make this tool they scraped information from online dating websites without its users’ consent (note: bad technology = unjust data harvesting practices). In fact, what the system was actually doing was associating sexuality with proxies like the tilt of someone’s head or their use of makeup. These things can be indicators of sexuality, but sexuality itself is mysterious, complicated, and often evolves over time. The engineers’ profound misunderstanding of the temporality of sexuality (erratic and wild - not stagnant, pre-ordained, and fixed) prompted a real scare for LGBTQ+ people who feared AI might one day ‘out them’. There are many issues here, not least that the practice of open experimentation in AI often means that modern reprivals of pseudoscience - like 19th-century phrenological attempts to ascertain someone’s personality by looking at their face - are common. Mark Zuckerberg also showed this urge to unravel humanity’s mysteries computationally when he wondered out loud whether friendship could be ‘solved’ using mathematics. Like friendship, sexuality is often surprising and always relational. Omise'eke Natasha Tinsley brings this to life in her book Ezili’s Mirrors, where she describes how the Ezili family of Haitian Vodou spirit forces embody shifting forms of sexual identity and practice.
My work is often in praise of life’s ambiguities and complexities. Like the blank card pulled by Madame Sosostris, some parts of life can’t be predicted. Computer science, which helps companies to earn money from forecasting trends, often attempts to erase unpredictability. This is the case with AI, which is mostly based in machine learning (ML) techniques. ML requires categorisations and binary code, leaving little room for the parts of life which evade classification or are characterised by ambiguity.
But it doesn’t have to be this way. Prototypes are being developed by radical organisations like the The Indigenous Protocol and Artificial Intelligence Working Group, who are designing speculative futures where computer code is interwoven with the DNA of humans and sea creatures. They’re not trying to create datafied approximations of people, but instead foster responsible and responsive connections between species. Less speculative perhaps are current attempts to encourage ‘reparative’ algorithms - as in, AI systems that work in favour of the marginalised and make some attempt at compensating for the inequalities of today. This is real de-biasing, because it’s impossible to create a neutral system - every AI product has a politics and favours some people over others. If the status quo is an unjust world, using algorithms to replicate it will only compound existing problems. To be silent is to be complicit - we need AI that leans in the direction of justice. This relies on us debunking the idea that AI is objective, and instead making technologies that actively do good.
++
Eleanor Drage is a Senior Research Fellow at the University of Cambridge. Her work has been covered by the BBC, Forbes, The Telegraph, The Guardian, Glamour Magazine and internationally. She is the co-host of The Good Robot Podcast, and author of An Experience of the Impossible: The Planetary Humanism of European Women’s SF (Oct 2023), and co-editor of The Good Robot: Feminist Voices on the Future of Technology (Feb 2024), and Feminist AI: Critical Perspectives on Algorithms, Data and Intelligent Machines (Oct 2023).
AI is Not Objective! Artificial intelligence and the Politics of the Observed World
Madame Sosostris, famous clairvoyante,
Had a bad cold, nevertheless
Is known to be the wisest woman in Europe,
With a wicked pack of cards.
And here is the one-eyed merchant, and this card,
Which is blank, is something he carries on his back,
Which I am forbidden to see.
Horoscopes and microscopes
For millennia, humanity has sought truth through technology: from tarot cards and microscopes to photographs and fingerprinting. The secularisation of society and the decline of the mystical means that not all modes of accessing the future and casting clarity over the present are as equally accepted. This is often blamed on the Enlightenment, during which time replicable experiments were the favoured way of creating proper scientific knowledge. The hope was that humanity was finally approaching an era where with the right technologies, everything was knowable. New tools of scientific observation like the thermometer and the encyclopaedia were the vehicles for this change. These artefacts - which were made in the places of science and the libraries and offices of philosophers - were seen as substantively different to spiritual forms of truth-telling. They became institutionally validated modes of classifying the world, Europe’s colonies, and citizens.
To be counted and taxonomised was to be brought into existence. This is why science is often ‘performative’, which means that it creates the world through the process of observing it. It decides what is a perfect specimen, what is atypical, and ascribes a value to both. Of course, not everyone was allowed to be in this position of observational, God-like power. The scientist had to be a man, and he must be white. The rational subject was, therefore, also a racialised and gendered one. A strange paradox, because the Enlightenment’s premise was also that scientists aren’t implicated in their experiments, so surely it wouldn’t matter who's doing the science?
It’s important to retrace the history of science, because it reminds us that the idea of ‘objective science’ actually evolves over time. Today, AI may seem to hold the keys to the known world, but perhaps this idea will yet change. In their book Objectivity, Lorraine Daston and Peter Gallison show that new technologies actually change what scientists consider to be neutral or unbiased science. AI isn’t just the product of science, it is directing its future. That future is one where - if key opinion leaders in AI get their way - it frees humanity from mortality and fallibility. Society becomes more accurate and predictable and less emotional. “May Elon Musk go to Mars”, said feminist philosopher Rosi Braidotti, but I want to stay here and merge with rather than supercede nature. I want to become compost, as feminist biologist Donna Haraway suggests. Haraway rejects the AI-enabled ‘posthuman’ that is so popular among technologists in favour of returning to ‘Humus’ - organic matter. While AI’s environmental cost is now widely known (every 5 questions you ask ChatGPT needs 1 litre of water), the industry will receive $232 billion of investment by 2025. But it’s still the horse everyone is backing in the race to ‘improve’ humanity and peel back the mysteries of the universe.
Materiality matters
There are a number of reasons why the myth of AI’s omniscience remains popular. The most crucial one is that AI is often seen to be superhuman. While the media rarely accompanies articles about AI with anything other than images of white robots with blue eyes, we need to deflate some of the hype by picturing AI as an assemblage of human data, hardware, mined materials, and hard work. We should remember that when AI appears to be talking to us, that this is the product of a design choice and not an expression of what AI ‘is’. Real people select, process, and annotate its data, they engineer the algorithms, they decide on which metrics to use, and they deploy it into the real world in conjunction with institutions that have their own politics and values. When AI observes the world, it looks at it through the eyes of those who have built it and paid to use it. Often the former end up building their AI infrastructure using Google or Microsoft software, meaning that you can trace most AI products back to Big Tech. The latter are those who can afford to procure and deploy it; unsurprisingly, the most expensive contracts with AI service providers have been bought by clients including Microsoft and the military. The issue here isn’t that AI is biased, so much as a small group of companies in the West hold a monopoly over the production of AI.
Through the looking glass
Last year, I published a piece of research with my friend and colleague Dr Federica Frabetti about a package of technologies created by a company called Dataminr. We call it ‘protest recognition software’ because it was used to track, monitor and shut down Black Lives Matter protests in the USA in 2016. We were also concerned because the investigative journalist Max Colbert had discovered that the UK government had spent as much as £5 million on Dataminr tools. This was at a time when the government were making their anti-protest stance clear through the so-called ‘noisy’ protests bill and a suite of other measures designed to disincentivise and break up protests. From Extinction Rebellion and Just Stop Oil to the Chris Packham and Sarah Everard protests, these mobilisations were predominantly left-leaning, environmentally conscious, pro-justice crowds - and, in the case of the latter two, crowds with a high proportion of women and people of colour. Dataminr already had a history of helping police on the other side of the pond, in Baltimore and Los Angeles, to clamp down on BLM uprisings. They had trained their protest recognition software to treat a diverse, politically left, anti-government crowd as a dangerous protest.
Is this an example of ‘biased’ AI? I’m suspicious of the term bias, because it implies that something can be debiased by tweaking an algorithm or changing the dataset. You can implement all the algorithmic fairness metrics in the world, but if an AI company collaborates with powerful, unjust institutions, its products are still likely to harm people who are already at risk of institutional violence. AI companies and their clients shape how the AI behaves and what its outputs are used for, whether that is a decision to arrest someone, deny them bail, or refuse them credit. As long as companies and their clients put their bottom line ahead of the need to promote justice, their technologies will do the same.
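To make concrete what a fairness metric actually measures, here is a minimal, hypothetical sketch in Python of one common metric, demographic parity. The function name and toy data are my own, not drawn from any of the systems discussed here; the point is that such a number can come out perfectly ‘fair’ while saying nothing about who deploys the system or what the flagged people face downstream.

```python
# A minimal, hypothetical sketch of one common fairness metric: demographic parity.
# It only compares rates of positive predictions across two groups; it says nothing
# about whether those predictions are just, or how the client uses them.

def demographic_parity_gap(predictions, groups):
    """Return the absolute difference in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs (e.g. 1 = 'flag this person')
    groups: list of group labels, one per prediction (e.g. 'A' or 'B')
    """
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    values = list(rates.values())
    return abs(values[0] - values[1])

# Toy example: the gap is zero, so the metric is 'satisfied' -
# yet every flagged person may still face an unjust institution downstream.
preds  = [1, 0, 1, 0, 1, 0, 1, 0]
groups = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']
print(demographic_parity_gap(preds, groups))  # 0.0
```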
Seeing in low definition
There’s another way that the term ‘bias’ fails to fully describe the harmful effects of AI. In 2017, two Stanford computer scientists attempted to create an AI system that could decode a person’s sexuality just by looking at their face. To make this tool they scraped information from online dating websites without those users’ consent (a reminder that bad technology tends to rest on unjust data-harvesting practices). What the system was actually doing was associating sexuality with proxies like the tilt of someone’s head or their use of makeup. These things can be indicators of sexuality, but sexuality itself is mysterious, complicated, and often evolves over time. The engineers’ profound misunderstanding of the temporality of sexuality (erratic and wild - not stagnant, pre-ordained, and fixed) prompted a real scare among LGBTQ+ people, who feared AI might one day ‘out’ them. There are many issues here, not least that the culture of open experimentation in AI means that modern revivals of pseudoscience - like 19th-century phrenological attempts to ascertain someone’s personality by looking at their face - are common. Mark Zuckerberg showed the same urge to unravel humanity’s mysteries computationally when he wondered out loud whether friendship could be ‘solved’ using mathematics. Like friendship, sexuality is often surprising and always relational. Omise'eke Natasha Tinsley brings this to life in her book Ezili’s Mirrors, where she describes how the Ezili family of Haitian Vodou spirit forces embody shifting forms of sexual identity and practice.
My work is often in praise of life’s ambiguities and complexities. Like the blank card pulled by Madame Sosostris, some parts of life can’t be predicted. Computer science, which helps companies earn money from forecasting trends, often attempts to erase that unpredictability. This is the case with AI, which is mostly built on machine learning (ML) techniques. ML depends on categorisation and hard, often binary, labels, leaving little room for the parts of life that evade classification or are characterised by ambiguity.
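As a small illustration of that last point - a hypothetical sketch, not a description of any particular system - the final step of most classification pipelines collapses a spread of probabilities into a single hard label, and that is precisely where ambiguity gets discarded. The category names below are invented for illustration.

```python
# Hypothetical sketch: how a classifier's final step erases ambiguity.
# A model may be genuinely uncertain, but the pipeline still demands one label.

probabilities = {"category_a": 0.34, "category_b": 0.33, "category_c": 0.33}

# The standard move: take the most probable class, however slim the margin.
hard_label = max(probabilities, key=probabilities.get)

print(hard_label)     # 'category_a' - reported as if it were certain
print(probabilities)  # the near-three-way tie that the label hides
```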
But it doesn’t have to be this way. Prototypes are being developed by radical organisations like the Indigenous Protocol and Artificial Intelligence Working Group, who are designing speculative futures where computer code is interwoven with the DNA of humans and sea creatures. They’re not trying to create datafied approximations of people, but to foster responsible and responsive connections between species. Less speculative, perhaps, are current attempts to encourage ‘reparative’ algorithms - that is, AI systems that work in favour of the marginalised and make some attempt at compensating for today’s inequalities. This is real de-biasing, because it’s impossible to create a neutral system - every AI product has a politics and favours some people over others. If the status quo is an unjust world, using algorithms to replicate it will only compound existing problems. To be silent is to be complicit - we need AI that leans in the direction of justice. That relies on us debunking the idea that AI is objective, and instead making technologies that actively do good.
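To give one hedged example of what ‘leaning in the direction of justice’ can look like in code: below is a hypothetical sketch of a reparative adjustment that upweights training examples from an under-represented group, so that a model’s mistakes do not fall hardest on the people the data already neglects. The groups, weights and function name are illustrative assumptions, not a prescription.

```python
# Hypothetical sketch of a 'reparative' adjustment: give training examples from an
# under-represented group more weight, so the model is penalised more for getting
# them wrong. The groups and weighting scheme here are illustrative only.

def reparative_weights(group_labels):
    """Weight each example inversely to its group's frequency in the data."""
    counts = {}
    for g in group_labels:
        counts[g] = counts.get(g, 0) + 1
    total = len(group_labels)
    return [total / (len(counts) * counts[g]) for g in group_labels]

groups = ["majority"] * 8 + ["marginalised"] * 2
weights = reparative_weights(groups)
print(weights[0], weights[-1])  # majority examples ~0.62, marginalised examples 2.5

# Many ML libraries accept per-example weights of this kind (e.g. a sample_weight
# argument at training time), which is one concrete lever a reparative design could pull.
```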
++
Dr Eleanor Drage is a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, and a project leader on the Desirable Digitisation Project, which explores AI ethics. Her work has been covered by the BBC, Forbes, The Telegraph, The Guardian, Glamour Magazine and internationally. She is co-host of The Good Robot Podcast, on gender, feminism and technology; author of An Experience of the Impossible: The Planetary Humanism of European Women’s SF (Oct 2023); and co-editor of The Good Robot: Feminist Voices on the Future of Technology (Feb 2024) and Feminist AI: Critical Perspectives on Algorithms, Data and Intelligent Machines (Oct 2023).