Imagine a world where every action you take, every choice you make, is reduced to a mere data point, your individuality, spontaneity, and complexity stripped away. This isn’t a far-fetched dystopian vision; it’s a real concern in the development and operation of artificial intelligence (AI). Reductionism, a prevailing approach in AI, simplifies complex human behaviors and experiences into patterns that algorithms can process, often at the expense of context and nuance. This reductionist approach mirrors the dangers of totalitarianism, where individuality is erased in favor of control and predictability.
The suppression of plurality and the elimination of diversity of thought create a monolithic narrative that aligns with the goals of a totalitarian regime. This suppression aims to control how people think and how they perceive reality, a defining feature of totalitarianism.
— Larissa Wicker
In this paper, I explore the intersection of feminist epistemology and AI, proposing that the reductionist tendencies in AI reflect what Hannah Arendt termed the “banality of evil”: a kind of moral thoughtlessness that leads to dehumanizing practices. By synthesizing Vandana Shiva’s critique of reductionist science with Arendt’s analysis of totalitarianism, I argue that AI systems risk perpetuating biases and injustices unless they account for the rich, qualitative aspects of human life. Just as totalitarian regimes suppress diversity to maintain control, AI systems designed with reductionist logic can marginalize and oppress individuals by reducing them to data points.
The implications of this are profound. As algorithms increasingly shape decisions in areas such as hiring, policing, and social welfare, their lack of context can produce biased outcomes. Take, for example, the Dutch childcare benefits scandal, in which a biased algorithm wrongly flagged thousands of families, disproportionately those with immigrant backgrounds, as fraudsters. The system, trained on historical data, had no way to question the assumptions embedded in that data, and the real-world consequences were devastating.
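To make this mechanism concrete, here is a minimal sketch in Python (a hypothetical toy model with invented numbers, not the actual Dutch system): when historical labels record who was investigated rather than who actually committed fraud, a model trained on those labels learns the enforcement pattern, not the behavior.

```python
# Hypothetical illustration: a toy risk model trained on biased historical
# labels reproduces that bias at prediction time. All rates are invented.
import random

random.seed(0)

def make_historical_record():
    # Protected attribute: True = minority background, False = majority.
    minority = random.random() < 0.3
    # Actual fraud rate is identical in both groups (5%).
    fraud = random.random() < 0.05
    # Biased enforcement: minority families were investigated, and therefore
    # labelled "fraudster", far more often, regardless of actual behavior.
    investigated = random.random() < (0.60 if minority else 0.10)
    return minority, fraud and investigated

records = [make_historical_record() for _ in range(100_000)]

# "Training": estimate P(labelled fraud | group) from the historical data.
def label_rate(group):
    labels = [labelled for m, labelled in records if m == group]
    return sum(labels) / len(labels)

risk = {group: label_rate(group) for group in (True, False)}

# Both groups commit fraud at the same 5% rate, yet the learned scores
# differ by roughly 6x, because they encode who was investigated.
print(f"learned risk score, minority families: {risk[True]:.3%}")
print(f"learned risk score, majority families: {risk[False]:.3%}")
```

The point is structural: optimizing such a model more carefully does not remove the bias, because the bias lies in what the training data measures.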
To prevent such digital totalitarianism, I emphasize the need for feminist philosophy in AI development. Standpoint epistemology, which advocates for the inclusion of diverse perspectives in knowledge creation, offers a pathway to more ethical AI. By incorporating qualitative data and recognizing the situated knowledge of marginalized groups, we can move beyond the simplistic, reductionist models currently dominating AI.
In conclusion, the future of AI must be inclusive, diverse, and transparent. Only by embedding these principles into AI systems can we ensure they serve the needs of all individuals, not just the privileged few. Through feminist critique, we have the tools to build a more just and humane technological future.
If you are interested in the full paper, just get in touch with me.