The text is one of four speeches by Bartosz Frąckowiak introducing the subsequent discussion blocks at the Signal Forum in Prague in 2024.

Let us begin with a provocative hypothesis: while AI appears to be a breakthrough in modern technology and promises to stimulate transformations in many domains, it paradoxically reinforces existing power structures rather than challenging them.

AI is primarily developed and deployed with the support of large corporations, governments, and military institutions, which often use it to consolidate control over societies, economies, and even political systems. Inequalities are not disappearing, while the massive energy and water consumption of huge data centers keeps increasing. AI is also sometimes used to calculate and predict targets in war, resulting in more, not fewer, civilian casualties.

These corporate and state entities impose new systems of control and surveillance to further influence or manipulate decision-making, affective reactions, cognitive processes, and physiological states. The result is not necessarily a democratization or distribution of power.

We know from many studies, books, and artworks that deploying AI systems to automate decision-making can perpetuate the inequalities that already exist within our societies. Because these systems are trained on vast amounts of data, they reflect the biases, worldviews, stereotypes, and prejudices, whether racial, gendered, or socioeconomic, embedded in those datasets: datasets created under particular social, cultural, ideological, and political conditions.

Infrastructures for the Common Good

In this context, it is crucial to reflect on what kind of infrastructures or innovation systems could help us design technologies that support our striving for the common good. How could art or design be involved in imagining, conceiving, creating, and sustaining not only such technologies, but also such infrastructures or systems?

The work of Michaela Magas offers some interesting insights here. As the Founder of MTF Labs and Chair of the Industry Commons Foundation, Magas leads groundbreaking technology experiments that test the impact of AI, deep learning, and brain-computer interfaces. Her work explores how humans can shape frontier technologies to enhance agency and accountability. The systems we design — whether through technology or mental models — fundamentally alter our collective experience. Her focus on Ecosystem Living, Industry Commons, and JUST Data explores how technology can empower humans while safeguarding well-being, equality, and transparency.

At the same time, we may ask: what is the role of art in this process? Can art help in designing fairer, more sustainable, and more equitable technologies?

There are at least three modalities to consider: art that critically analyzes AI and its social, cultural, and political conditions and consequences; art that is involved in consciously and intentionally redefining and designing AI models; and art generated or co-created by AI.

Mean Images

To better understand the risks and challenges we face, it is worth engaging with Hito Steyerl’s concept of “mean images.” This term refers to statistical images generated by artificial intelligence models — images that represent averaged versions of visual material collected in databases. They are based on past patterns, sometimes blended, merged, mixed, or expanded. But they never introduce innovations of a paradigmatic nature or new aesthetic conventions.

While mean images can be useful for tasks like facial recognition or predictive analytics, they do not invent new styles, nor do they solve any visual problem.

Isn't it the case that machines can convincingly simulate creativity, yet over time we grow fatigued by the predictability and repetitiveness of the images, or irritated by the too-perfect style emulated by Large Language Models? And above all: don't these images quietly normalize visual, social, cultural, and ethical realities? Not intentionally, as a human being would, but as code in disguise, operating at the edge of our perception or slightly beyond it.

Fitting everything to the curve of the statistical average, AI models remove anomalies and non-obvious perspectives — all the most interesting aspects of visual representation.

A statistical approach to reality could flatten knowledge and cultural and social diversity, and suppress any form of progressive thought and imagination. We run the risk that, while nourishing a belief in the progressive power of artificial intelligence, we have in fact created a radically conservative technology: one based on old taxonomies, human frailty, and assumptions present not only in training databases but also in the categories assigned to objects during human-supervised labeling (the invisible mechanical Turk that is the hidden condition of what appears to be automation), and on predictions extrapolated from past patterns.

One could say: a mythological machine that reproduces the same thing over and over again, reconfiguring, mixing, and blending past patterns.

Gazing Machines

The work of the artist Quayola engages deeply with these very tensions. Quayola's practice explores the relationship between the natural and the artificial, the figurative and the abstract. His project Gazing Machines examines the complex relationship between humans and machines, exploring new aesthetics and algorithmic gestures that reimagine historical forms such as landscape painting and classical sculpture.

In the context of Quayola’s works, we can try to challenge Steyerl’s theory and ask new questions. Do machines really produce mean images? Mean sculptures? Mean digital landscape paintings?

Quayola's practice inspires reflection on the role of human beings in imagining and conducting a structured exploration of possible futures, so as to shape them intentionally and consciously. The present moment was once an unimagined future; yet someone imagined it and influenced it. If we do not take up the challenge, the future may turn out to be the product of thoughts, affects, and values far different from those we consider desirable.

Art can help us take an active role in this process, providing tools and methods for prototyping, testing, hacking, transforming, reworking, and designing more desirable or preferable futures.

Challenging Technological Myths

This question leads us to the work of Irini Papadimitriou, a curator who has been exploring how art challenges technological myths and proposes alternatives to the dominant narratives of AI. Her exhibitions, including AI: Who’s Looking After Me? and You and AI: Through the Algorithmic Lens, focus on how art provides critical material for understanding the hidden structures behind technological systems.

Her work explores how technology, particularly AI, is often mythologized as an independent force when in fact it is shaped by human fallibilities, biases, and inequalities. Papadimitriou shares examples of how artists reframe definitions of posthumanism and offer alternative visions of futurity, countering Western technological colonialism and reductionism.

Art is an area of experimentation and of externalized imagination. Some artists focus on prototyping weird, non-canonical, anomalous technologies, while others direct their attention to regenerative or sustainable technologies, entangled with nature or ecosystems. Some of them speculate about and test new relationships between humans and machines, outside or beyond the Valley — understood literally as Silicon Valley and metaphorically as the rules established by the IT industry.

Beyond the Valley

The concept of Machinery Missionaries invites us to embark on a speculative journey beyond the Valley. And what is beyond the Valley? What kind of technological systems could we discover in these new territories? What kind of new rules could we imagine there?

Silicon Valley corporations like to speak about the necessity of aligning technologies with human values. But are we sure we share the same understanding of those values? Are values from the Valley the same as values from beyond it?

Values vary culturally and politically, influenced by their place in systems of power as well as economic interests. Do we want machines to represent Western values, Yoruba values, Palestinian values, or universal human rights values? Who is going to define them?

Toward a New Universalism?

There is one more provocation worth considering. Following Susan Buck-Morss, we might ask about the possibility of inventing and co-creating some kind of “new universalism.” Although this notion is not traditionally applied in the context of technology, it offers an interesting perspective for discussing the development of planetary technological systems and their infrastructures.

One of Buck-Morss's key arguments, articulated in Hegel, Haiti, and Universal History, is that the Haitian Revolution is crucial to understanding a more inclusive form of universalism because it embodies a struggle for universal freedom that transcends the limitations of European Enlightenment ideals. Buck-Morss suggests that true universalism must incorporate the perspectives and experiences of historically marginalized people, rather than perpetuating exclusionary narratives that conform to colonial and imperial power structures.

Could we imagine a new inclusive universalism – non-Western and anti-colonial, in line with Buck-Morss’s postulate – in relation to technologies and, specifically, thinking machines? How could we work on it, and what values could it be based on?

Could our journey beyond the Valley into new territories help us in this endeavor? The question remains open, but the necessity of asking it grows more urgent with each passing day.