Lookback on Zanzibar: How AI is Atomizing Society
Foreword
Hi. This is my first article, so it might read a bit like a paper—I’m more used to that format. If you have any questions or thoughts, be sure to leave them in the comments below. This is a discussion post/article, after all.
The article aims to discuss how AI (not only large language models, but also advances in data analysis) affects human relationships as a whole. Public discussions around artificial intelligence are often dominated by a few common fears: loss of autonomy, mass surveillance, and the possibility of centralized technological control. However, the fear of a grandiose, apocalyptic AI obscures the more immediate effects of AI on everyday human interaction: atomisation and dependence, two phenomena tied to how algorithms are used in social media.
This systemic perspective reminds me of John Brunner's Stand on Zanzibar, where society strains not under a single authoritarian power or a rogue technology, but under the sum of rational, optimised decisions made across many institutions. That framing is how I chose to examine the ethical risks of modern algorithmic systems.
Algorithmic Optimisation in Social Media Systems
Personalisation becomes problematic when it evolves into atomisation. In an atomised digital environment, individuals increasingly encounter information that aligns with their pre‑existing preferences, while exposure to shared narratives or common frames of reference diminishes. Over time, the probability that two users are engaging with the same information, in the same context, decreases significantly.
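To make that last claim concrete, here is a deliberately crude toy model; it is entirely my own sketch, not taken from the paper cited below. Two users with disjoint interests draw feeds from the same catalogue, and we measure how much their feeds overlap as the personalisation strength grows.

```python
import random

def personalised_feed(preferences, catalogue, strength, k=20):
    """Sample k items, up-weighting items that match the user's preferred
    topics. strength=0 gives a uniform feed; higher values give stronger
    personalisation."""
    weights = [1 + strength * (item["topic"] in preferences) for item in catalogue]
    return {item["id"] for item in random.choices(catalogue, weights=weights, k=k)}

def overlap(feed_a, feed_b):
    """Jaccard similarity: the share of items the two feeds have in common."""
    return len(feed_a & feed_b) / len(feed_a | feed_b)

random.seed(42)
topics = ["politics", "sports", "tech", "art", "science"]
catalogue = [{"id": i, "topic": random.choice(topics)} for i in range(500)]
user_a, user_b = {"politics", "tech"}, {"art", "sports"}  # disjoint interests

for strength in (0, 2, 10, 50):
    runs = [overlap(personalised_feed(user_a, catalogue, strength),
                    personalised_feed(user_b, catalogue, strength))
            for _ in range(200)]
    print(f"strength={strength:>2}: mean feed overlap = {sum(runs)/len(runs):.3f}")
```

Even in a model this simple, the overlap collapses quickly: past a modest personalisation strength, the two users are, statistically speaking, reading different internets.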
The paper “Echo Chambers and Algorithmic Bias” puts the effects of social‑media personalisation into clear terms:
Social media algorithms personalise content feeds, presenting users with information that reinforces their existing beliefs. This creates echo chambers, where users are isolated from diverse viewpoints.
— Salsa Della Guitara Putri, Eko Priyo Purnomo, Tiara Khairunissa
This fragmentation does not require ideological manipulation or deliberate polarisation. It arises naturally from systems that prioritise relevance and engagement over commonality. The result is not more confrontation or arguments; the result is people who are no longer discussing the same subject at all.
In such an environment, social cohesion weakens because the conditions necessary for collective understanding no longer reliably exist. Public discourse becomes a collection of parallel conversations, each internally coherent yet increasingly disconnected from the others.
Ethical Risks Beyond Control
Loss of Collective Agency
One of the most significant ethical risks of algorithmic atomisation is the slow erosion of collective agency. When individuals experience social issues through personalised informational streams, systemic problems become personal concerns. Political, economic, and even social challenges become matters of individual perception rather than a shared reality that the public faces.
The distortion runs in both directions, too: many people come to assume that their personal concerns are issues the entire public faces, further misaligning individual and collective perceptions.
Collective action depends on shared awareness—a population recognising not only that a problem exists, but that it exists for others as well. Algorithmic personalisation undermines this prerequisite by fragmenting attention and experience. The result is a society that struggles to coordinate responses to large‑scale issues, even when technical capacity and resources are available.
Algorithmic Invisibility and Exclusion
A further ethical concern lies in the treatment of those who do not fit well within algorithmic categories. Social media and AI systems function by detecting patterns in data; users who generate limited engagement, atypical behaviour, or low‑value signals are less likely to be prioritised, amplified, or even recognised.
This produces a form of exclusion that is neither intentional nor easily observable. Individuals and communities may find themselves algorithmically invisible. Unlike traditional forms of marginalisation, this invisibility provokes little resistance or accountability, precisely because it has no clear source and no identifiable human decision‑maker.
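A tiny feedback loop shows how this can happen without anyone deciding it. The numbers and the top-N cutoff below are invented purely for illustration.

```python
# Hypothetical feedback loop: a ranker surfaces the top-N creators by
# observed engagement, and only surfaced creators accumulate new signal.
engagement = {f"creator_{i}": signal
              for i, signal in enumerate([120, 95, 80, 12, 7, 3])}
TOP_N = 3

for _ in range(5):  # five ranking rounds
    surfaced = sorted(engagement, key=engagement.get, reverse=True)[:TOP_N]
    for creator in surfaced:
        engagement[creator] += 10  # exposure generates further engagement
    # Everyone else gains nothing: no exposure, no new signal.

print(dict(sorted(engagement.items(), key=lambda kv: -kv[1])))
# creator_3 through creator_5 end exactly where they started, without
# any person or policy having decided to exclude them.
```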
From an ethical standpoint, this raises questions about fairness, representation, and responsibility in systems where harm emerges from omission rather than from any deliberate act.
Normalisation of Systemic Harm
Perhaps the most challenging ethical issue is the diffusion of responsibility. Algorithmic systems are rarely controlled by a single actor; they emerge from interactions between corporate incentives, technical constraints, regulatory environments, and user behaviour. Each component may operate rationally and ethically within its own domain, yet the system as a whole produces harmful outcomes.
This mirrors a broader challenge in AI ethics: harms that arise without malicious actors are often the hardest to address. When no individual decision appears unethical in isolation, systemic consequences are easily dismissed as unintended side effects rather than ethical failures.
Social Media as a Case Study in Atomisation
Social media platforms provide a clear illustration of how algorithmic optimisation can undermine shared social space:
- News feeds prioritise emotionally resonant content.
- Recommendation systems reinforce identity‑based engagement.
- Ranking algorithms amplify content that maximises interaction, regardless of its contribution to a common public discourse.
These mechanisms collectively fragment the information ecosystem, making it increasingly difficult for societies to maintain a shared sense of reality and, consequently, to act collectively on the challenges they face.
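As a caricature of the ranking mechanism in that list, consider a scoring function whose only input is predicted interaction. This is not any platform's actual formula; it is just the shape of the objective described above, with the telling detail that nothing in it rewards common ground.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float     # output of some engagement model
    emotional_intensity: float  # e.g. an arousal score in [0, 1]
    shared_context: float       # how much common ground it builds, in [0, 1]

def engagement_score(post: Post) -> float:
    # Interaction is all that counts; shared_context contributes nothing.
    return post.predicted_clicks * (1 + post.emotional_intensity)

posts = [
    Post("Measured policy explainer", predicted_clicks=0.30,
         emotional_intensity=0.1, shared_context=0.9),
    Post("Outrage-bait hot take", predicted_clicks=0.50,
         emotional_intensity=0.9, shared_context=0.1),
]
for p in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(p):.2f}  {p.text}")
```

The hot take wins not because anyone prefers outrage, but because outrage is what the objective can see.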
Algorithmic Atomisation and the Future of AI
The Problem of Algorithmic Atomisation
Social media platforms use algorithmic curation to maximise user engagement. By continuously selecting what is visible, relevant, and salient, these systems shape attention and, consequently, users’ perception of reality. The ethical issue is not persuasion per se, but selection: what is shown, what is omitted, and what is rendered invisible.
As engagement‑driven systems scale, outrage, reinforcement, and emotional intensity become statistically favoured, while nuance, shared context, and slow consensus‑building are deprioritised. The resulting environment rewards fragmentation without requiring any explicit intention to divide.
Implications for Future AI Adoption
The proliferation of artificial‑intelligence systems beyond social media—into education, employment, healthcare, and public services—can increase atomisation. AI‑powered personalised learning platforms, work‑allocation tools that adapt to individual needs, and algorithmic decision‑making offer efficiency and maximised individual outcomes; however, they also risk reducing the shared experiences that bind societies.
If left unchecked, AI systems may erode social cohesion, the glue that sustains trust, collaborative effort, and communal responsibility. Ethical evaluation of AI should therefore include systemic impacts on social cohesion, not just concerns such as accuracy, discrimination, or transparency. Many harms will manifest slowly, through the gradual erosion of the shared frameworks on which effective societal interaction depends.
Ethical Considerations and Design Implications
Addressing algorithmic atomisation does not require rejecting personalisation or AI‑driven systems outright. Instead, it calls for broader ethical metrics and design principles, such as:
- Impact‑based evaluation – assess systems on their effect on shared context, not only on individual outcomes (a sketch of one possible metric follows this list).
- Hybrid information spaces – design mechanisms that preserve common informational environments alongside personalisation.
- Transparency – make optimisation goals and trade‑offs visible to users and stakeholders.
- Cohesion as a design concern – treat social cohesion as a legitimate design objective rather than an externality.
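As one sketch of what impact‑based evaluation could mean in practice, here is a hypothetical "shared context" metric: the mean pairwise overlap between users' feeds. This is my own illustration, not an established industry measure.

```python
from itertools import combinations

def shared_context(feeds: dict[str, set[str]]) -> float:
    """Mean pairwise Jaccard overlap across users' feeds.
    1.0 means everyone sees the same items; 0.0 means full atomisation."""
    pairs = list(combinations(feeds.values(), 2))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

feeds = {
    "alice": {"p1", "p2", "p3"},
    "bob":   {"p2", "p3", "p4"},
    "cara":  {"p7", "p8", "p9"},
}
print(f"shared context: {shared_context(feeds):.2f}")
```

A platform that tracked a number like this alongside engagement could at least notice when a ranking change trades shared context away for interaction.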
Ethical AI design must recognise that some values—shared understanding, collective agency, and social cohesion—are difficult to quantify yet essential to preserve.
Conclusion
Technological systems rarely fail because they are outright malicious; more often, they cause harm through excessive optimisation. As R. Wang et al. (2023) note:
“Smart technologies facilitate precise and focused advertising and marketing efforts, potentially impacting user behavior and decision‑making processes.”
The ethical challenge for present and future AI is twofold:
- Prevent manipulation of individuals.
- Address the invisible fragmentation of social reality caused by the disruption of the relationship between physical and digital life.
The greatest risk is not a world where machines rule, but a world where individuals become ever more alone together—optimised for personal gain while collective social structures continue to deteriorate.
