The master’s tools will never dismantle the master’s house, said Audre Lorde in 1979 – the same year the Association for the Advancement of Artificial Intelligence was founded. More than four decades on, AI and algorithmic technologies are ubiquitous – from the mundane to the morbid, they exert a profound influence on our daily lives. Lorde’s words are as relevant as ever: algorithms currently function as a tool of the proverbial master, while their applications ensure the robustness of his house. With dominant AI technologies emerging from and reproducing colonial logics, the worlds we build with them reflect the violence of our own.
But design can interrupt this process: at WDCD Live 2025, we considered how to reclaim the algorithm and imagine new technologies for emancipatory futures.
Drawing on his interdisciplinary practice as a creative technologist, digital anthropologist and poet, Abdelrahman Hassan led a breakout session at WDCD Live 2025 exploring the urgent question: what if we designed AI not just to be efficient, but to be caring, fair, and deeply connected to communities?
At the outset, Hassan asked the group: what’s the craziest interaction you’ve had with an AI tool? The room was immediately buzzing, the question eliciting anecdotes from the wholesome to the unhinged. The ethical – and even existential – anxieties around AI technologies are wide-ranging: How do we grapple with these non-humans taking on increasingly human roles as companions and confidants? What does it mean to seek emotional advice about embodied experience from a bot that is inherently unemotional and disembodied? Who gets access to this data? And how do these processes fracture our sense of self, erode our agency, and deepen our disconnection from nature and each other?
World-building & power – whose technological imagination are we living in?
A slide on screen shows the individuals at the helm of today’s biggest tech companies – not a diverse group, to put it lightly. One way of understanding this cohort of CEOs who preach about tech-utopias and trillion-dollar opportunities is through the idea of the ‘TESCREAL bundle’: an overlapping set of ideologies – transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and longtermism – that champion various forms of extreme human perfectibility. While proclaiming bold visions of capital-P Progress, these philosophies have historically been bound up in processes of exploitation and marginalisation. AI in this crucial sense is not artificial – it’s trained by particular human minds on particular human data, with all the biases of those selective stories baked into systems from the outset. While it promises futures of maximised efficiency and a transcendence of multiple boundaries (including even the body itself), it cannot transcend the biases we feed into it. So, while it’s not uncommon, Hassan points out, for AI skeptics to be charged with a kind of pessimistic conspiracy thinking, these technologies are built on distinctive and highly questionable views about what it means to be human, creating an “imagination trap” – and that’s really not a conspiracy.
Data coloniality is not a metaphor
State and corporate actors employ data in ways that are patently extractive. This plays out in a plethora of ways, extending legacies of colonialism and mirroring its logics. Data has become a valuable resource, seen as up for grabs by the tech giants that profit exponentially from it. Data centres are built on occupied land in Israel, algorithms are trained and content moderated through the outsourcing of cheap labour in Kenya, while Katy Perry boards a rocket owned by Jeff Bezos.
In the second part of the workshop Hassan took us through a mapping exercise to highlight the entanglements of AI with these systems of power and violence: from the COMPAS recidivism-scoring scandal in the US courts, to the revelation that Siri and Alexa eavesdrop on couples having sex, to the Muslim prayer app caught selling user data that ended up with the US military, we took in the extent to which hegemonic actors employ tech in ways that extend surveillance and encroach on the needs and freedoms of the publics they’re meant to serve.
Anatomy of a Black Mirror episode
The scope and depth of the systems at play are overwhelming: the next part of the workshop took us through a speculative exercise to break the processes down. Through four categories – “toxic imaginary”, “oppressive technology”, “harmful implication” and “benefitting actor” – Hassan had our groups work collectively to chart potential harms and reveal the consistency of the underlying logic. Using these building blocks, we collectively imagined various colonial loops, exploring the ways that tech is both colonised by power – through addictive apps, cookies, unequal computing infrastructures – and simultaneously colonises more and more aspects of our worlds – time, access to the city, housing, healthcare.
Like the Netflix series Black Mirror, the exercise had an uncanny sense of proximity: for each troubling scenario put forward, someone could name a real-world prototype. The colonisation of bodily autonomy and agency is not just an abstract dystopian spectre. The gap between these speculative doom-scapes and the realities we’re building is minimal: science fiction always becomes science fact, Hassan points out.
Designing for emancipatory tech: getting out of the imagination trap
Our final activity was a communal design exercise to inspire ways of refusing and disrupting the colonial loops instigated and upheld by dominant AIs.
When it comes to design, Hassan advocates for a new paradigm and practice: one between toxic positivity & toxic negativity. While toxic positivity preaches tech-solutionism – the blind and misguided belief that technologies can solve our collective problems, while misrepresenting the nature & complexity of those problems – toxic negativity describes the anxiety and paralysis that can result from critically confronting the full depth and implications of AI’s potential harms.
For Hassan, the space between these poles is one of “joyful resistance” that can allow us to retain crucial insights from critique, reject oversimplified solutionism, and tap into imagination. Drawing on his practice as a poet, he describes how we can start to break out of big tech’s imagination traps. Metaphors, myths and literary tropes can be devices for introducing interventions into the design process. For example, the figure of the ghost can provoke us to think of ways technology can be used to haunt abusers of power. Projects like Mimi Onuoha’s The Library of Missing Datasets embody a kind of counter-oppressive logic and point to the potential of a different kind of data-enabled intelligence.
Through the workshop, Hassan guided us to recognise that we escape the imagination trap by first seeing the trap. By recognising its narrowness, and calling to mind all the vast possibilities of the human that are not currently built into our technological infrastructures, we might then ask: who gets to imagine the tech, who uses it, and who is it used against? Among other decolonial thinkers referenced in the session, Hassan draws our attention to Achille Mbembe: the task of thinking today consists in reimagining the world as something other than an extension of the present. By distancing ourselves from the logic of optimisation – and the other colonial logics that underpin current tech ecosystems – we can tap into places of ambiguity, refusal and dissent, and start to design a more human kind of algorithm.
Words by Saoirse Walsh, freelance writer based in Amsterdam, NL