
The Slop Report — April 29, 2026

Today’s digest: Google goes all-in with the Pentagon, AI floods political channels, peer review is compromised, and the courts start forcing accountability.


TechCrunch

Google Grants Pentagon Unrestricted AI Access
After Anthropic declined a DoD contract over concerns about autonomous weapons and mass surveillance applications, Google signed it. Over 950 Google employees publicly protested the deal — it went through regardless. The contract gives the Pentagon access to Google’s full AI stack with no documented restrictions on use cases.


Axios

OpenAI & Anthropic Brief Congress on AI Cyber Threats
Anthropic is withholding its “Mythos Preview” model from general release entirely because it can identify and exploit critical security vulnerabilities with too much reliability. OpenAI took a tiered-access approach with GPT-5.4-Cyber, releasing it to vetted security researchers only. Both companies briefed the Senate Select Committee on Intelligence this week.


Phys.org

‘Slopaganda’ Floods Canadian Politics — 40M Views on AI-Generated Separatism Content
Researchers identified 20 AI-powered channels that generated over 40 million views pushing fabricated content about Alberta separatism. A parallel Iranian operation was caught using AI-generated military footage to misrepresent conflict zones. Academics are now calling this category of influence operation “slopaganda” — cheap, scalable, and increasingly hard to detect.


Fortune

200+ Organizations Demand YouTube Ban AI Slop from Kids Platform
A coalition of over 200 child advocacy, education, and media literacy organizations filed a formal demand with YouTube to remove AI-generated content from YouTube Kids. The groups cite a flood of algorithmically optimized but meaningless videos targeting children’s recommendation feeds — designed to maximize watch time with zero editorial intent.


HowAIWorks.ai / ICLR 2026

21% of ICLR 2026 Peer Reviews Are AI-Generated
Analysis of ICLR 2026 submissions found that roughly one in five peer reviews were written primarily by AI. GPTZero’s independent audit also identified more than 50 factual hallucinations in papers that were accepted and published. The integrity of AI research peer review is now openly questioned within the community.


OpenReview / ICLR 2026

“The Reasoning Trap”: Smarter AI Agents Hallucinate More
A paper accepted at ICLR 2026 demonstrates that reinforcement learning improvements to reasoning models increase tool-use hallucination rates proportionally to performance gains. As models get better at complex reasoning, they also get more confidently wrong when calling external tools — a significant problem for enterprise deployments that rely on agent reliability.


TechCrunch

GPT-5.5 Released — OpenAI’s Second Frontier Model in Two Months
OpenAI released GPT-5.5 just weeks after GPT-5, offering end-to-end multimodal capabilities gated to paid tiers. The release cadence is now faster than any meaningful external safety review cycle. Critics note that even internal evaluations are struggling to keep pace with the speed of deployment.


Reuters Institute

AI Slop Is Quietly Conquering the Web
A Reuters Institute study found hundreds of ad-revenue domains built entirely with automated site builders and AI text generators — zero human editorial input, designed entirely to rank in search and serve programmatic ads. Google’s spam enforcement is failing to keep pace with the volume and sophistication of the operations.


NotebookCheck

AI Slop Is Winning the Front Page — Authentic Technical Content Is Getting Buried
Hardware benchmark communities report that authentic technical content is increasingly hard to find as AI-generated SEO slop dominates search results. Forum moderators describe whack-a-mole enforcement against sites that clone real reviews, swap out model names, and publish instantly at scale.


Norton Rose Fulbright / Supreme Court

OpenAI Ordered to Produce 78M Output Logs; Supreme Court Cements AI Copyright Limits
Courts compelled OpenAI to hand over 78 million generated output logs in ongoing copyright litigation. Separately, the U.S. Supreme Court declined to hear Thaler v. Vidal on appeal, cementing that works created solely by AI with no human authorship are not eligible for copyright protection.

This post is licensed under CC BY 4.0 by the author.