AI-Generated Text and the Detection Arms Race
Source: Schneier on Security – “The AI‑Generated Text Arms Race”
AI‑Generated Submissions Flooding Institutions
In 2023, the science‑fiction literary magazine Clarkesworld stopped accepting new submissions because a large proportion were generated by artificial intelligence. Editors discovered that many writers had copied the magazine’s detailed story guidelines into an AI and submitted the resulting text. Clarkesworld is not alone—other fiction magazines have reported a similar surge of AI‑generated submissions.
This is just one illustration of a broader trend. These institutions historically relied on the difficulty of writing and cognition to limit the volume of submissions. Generative AI overwhelms them because the humans on the receiving end cannot keep up.
Where the Flood Is Happening
- Newspapers – inundated by AI‑generated letters to the editor.
- Academic journals – swamped with AI‑written papers.
- Legislatures – flooded with AI‑generated constituent comments.
- Courts worldwide – deluged with AI‑generated filings, especially from self‑represented litigants.
- AI conferences – saturated with AI‑generated research papers.
- Social media – awash with AI‑generated posts.
- Other domains – music, open‑source software, education, investigative journalism, and hiring are seeing the same flood.
Institutional Responses: Shutdowns vs. Countermeasures
Like Clarkesworld’s initial response, some institutions have shut down their submissions processes. Others have met the onslaught of AI inputs with defensive responses, often involving a counteracting use of AI:
- Academic peer reviewers increasingly use AI to evaluate papers that may have been generated by AI.
- Social‑media platforms turn to AI moderators.
- Court systems employ AI to triage and process litigation volumes supercharged by AI.
- Employers use AI tools to review candidate applications.
- Educators use AI not just to grade papers and administer exams, but also as a feedback tool for students.
These are all arms races: rapid, adversarial iteration to apply a common technology to opposing purposes. Many of these arms races have clearly deleterious effects.
Negative Consequences
- Legal system: courts clogged with frivolous, AI‑manufactured cases.
- Academia: publications and citations accrue to those who fraudulently submit AI‑written letters and papers rather than to those whose ideas have the most impact.
- Society: fraudulent behavior enabled by AI threatens the institutions on which we rely.
Upsides of AI
Even amid these AI arms races, there are hidden benefits. The hope is that institutions can adapt in ways that make them stronger.
Science
AI is poised to strengthen scientific work, though it also introduces the risk of AI‑generated errors (e.g., nonsensical phrasing slipping into papers).
- A scientist who uses AI to assist in writing an academic paper can benefit if the tool is used carefully and with full disclosure.
- AI is becoming a primary research tool: literature review, coding, and data analysis.
- For many non‑native English speakers, AI offers affordable writing assistance that previously required costly human editors.
Fiction
- Fraudulently submitted AI‑generated works harm human authors (by increasing competition) and readers (who feel deceived).
- Some outlets may welcome AI‑assisted submissions if authors provide appropriate disclosure and follow clear guidelines, using AI to evaluate originality, fit, and quality.
- Outlets that wish to exclusively publish human‑written works will need to restrict submissions to trusted authors; transparent policies let readers choose the format they prefer.
Employment
- It is acceptable for a job seeker to use AI to polish a résumé or write stronger cover letters—wealthy and privileged individuals have long had access to human assistance.
- The line is crossed when AI is used to misrepresent identity or experience or to cheat during job interviews.
Democracy
- A healthy democracy requires that citizens be able to express opinions to representatives and through the media.
- Historically, the rich could hire writers to turn ideas into persuasive prose; AI democratizes that assistance.
- Risks: AI mistakes and bias can be harmful, and citizens may rely on AI for statements about history, law, or policy that they cannot independently verify.
Fraud Booster
What we don’t want is for lobbyists to use AI in astroturf campaigns, writing multiple letters and passing them off as individual opinions. This is an older problem that AI is making worse.
Power Dynamics
- Positive application: AI reduces the effort for a citizen to share lived experience with a legislator, which equalizes power and enhances participatory democracy.
- Negative application: the same technology enables corporate interests to misrepresent the public at scale, which concentrates power and threatens democracy.
Bottom Line
In general, we believe writing and cognitive assistance, long available only to the rich and powerful, should be available to everyone. The problem arises when AI makes fraud easier. Any responsible deployment must balance the democratizing potential of AI with safeguards against its misuse.
Balancing Harms with Benefits
The response needs to balance embracing the newfound democratization of access with preventing fraud.
- There is no way to turn this technology off. Highly capable AIs are widely available and can run on a laptop.
- Ethical guidelines and clear professional boundaries can help—for those acting in good faith.
- However, we will never be able to stop academic writers, job seekers, or citizens from using these tools, whether as legitimate assistance or to commit fraud. This means more comments, more letters, more applications, and more submissions.
The problem is that the recipients of this AI‑fueled deluge cannot cope with the increased volume. What can help is developing assistive AI tools that benefit institutions and society while also limiting fraud. That may mean embracing AI assistance even in adversarial systems, accepting that defensive AI will never achieve absolute supremacy.
The Science‑Fiction Community’s Experience
The science‑fiction community has been wrestling with AI since 2023. Clarkesworld eventually reopened submissions, claiming it has an adequate way of separating human‑ and AI‑written stories. No one knows how long, or how well, that will continue to work.
The Ongoing Arms Race
There is no simple way to tell whether the potential benefits of AI will outweigh the harms, now or in the future. But as a society, we can influence the balance between the harms it wreaks and the opportunities it presents as we navigate the changing technological landscape.
This essay was written with Nathan E. Sanders and originally appeared in The Conversation.