The AI landscape in 2025 is a dystopian fever dream, a chaotic gold rush.
The transformer architecture, a statistical trick for predicting text, has been inflated into a godlike entity, worshipped with fanatical zeal while ignoring the wreckage it leaves behind. This obsession with scale is collective madness. Models are trained on datasets so colossal - trillions of tokens scraped from the internet's cesspool, books, and corporate sludge - that even their creators can't untangle the mess.
Every company, every startup, every wannabe AI guru is unleashing armies of scrapers to plunder the web, hammering servers and destabilizing the digital ecosystem. High-profile failures - like Samsung banning ChatGPT after code leaks, Google's Bard hallucinating, Zillow's AI pricing flop costing millions, and IBM Watson Health's erroneous cancer recommendations - underscore the chaos [Lakera, 2024](https://www.lakera.ai/blog/risks-of-ai). We're not building progress; we're orchestrating a digital apocalypse.
## Apple's *Illusion of Thinking*: A Flamethrower to AI's Lies
Apple's June 2025 paper, *The Illusion of Thinking*, is a flamethrower torching the AI industry's lies. Authors Parshin Shojaee, Iman Mirzadeh, and their team devised ingenious puzzle environments to test LRMs' so-called reasoning, demanding real problem-solving, not regurgitated answers. The results are a flaming middle finger to every AI evangelist.
LRMs breeze through simple tasks but implode spectacularly on complex ones.
These models are erratic, nailing one puzzle only to choke on a near-identical one, guessing instead of reasoning, their outputs as reliable as a drunk gambler's dice roll. Humiliatingly, basic LLMs often outshine LRMs on simple tasks, with the “reasoning” baggage slowing them down or causing errors. Why are we worshipping a downgrade?
The “thinking processes” LRMs boast are a marketing stunt, revealed by Apple as a chaotic mess of incoherent leaps, dead ends, and half-baked ideas - not thought, but algorithmic vomit. LRMs fail to use explicit algorithms, even when essential, faking it with statistical sleight-of-hand that collapses under scrutiny. This brittleness isn't theoretical: IBM Watson Health's cancer AI made erroneous treatment recommendations, risking malpractice, and Google's Bard hallucinated inaccurate information [Lakera, 2024](https://www.lakera.ai/blog/risks-of-ai).
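To see the scale of the failure, consider Tower of Hanoi, one of the puzzle families Apple reportedly tested. The complete, exact algorithm fits in a dozen lines - here is a minimal Python sketch of the kind of explicit procedure the paper says LRMs fail to apply even when it matters most:

```python
# Tower of Hanoi: the full algorithm, recursively. A system that truly
# "reasons" should follow this mechanically; per Apple's findings, LRMs
# break down as the disk count (and thus the move count) grows.
def hanoi(n: int, source: str, target: str, spare: str, moves: list) -> None:
    """Append the optimal move sequence for n disks onto `moves`."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)
    moves.append((source, target))  # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)

moves = []
hanoi(10, "A", "C", "B", moves)
print(len(moves))  # 1023 moves (2**10 - 1): trivial for code, a wall for LRMs
```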
A January 2025 McKinsey report notes that 50% of employees worry about AI inaccuracy, 51% fear cybersecurity risks, and many cite data leaks, aligning with Apple's findings of unreliable outputs [McKinsey, 2025](https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work). This isn't a warning - it's a guillotine.
## Amodei's Confession: We're Flying Blind
If Apple's paper lit the fuse, Dario Amodei's essay is the explosion.
We're not tweaking a buggy app; we're wielding tech that could reshape civilization, navigating it with a Magic 8-Ball. Amodei's dream of an “MRI on AI” is a desperate cry, not a roadmap. He admits we can't explain why a model picks one word or makes an error. This isn't like not knowing a car's engine - you can still drive. It's like not knowing why a nuclear reactor keeps melting down, yet firing it up.
Anthropic's red-teaming experiments, breaking models to study flaws, are a Band-Aid on a severed artery. We're light-years from cracking the black box. A January 2025 McKinsey report calls LLMs “black boxes” lacking transparency, eroding trust in critical tasks [McKinsey, 2025](https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work). A March 2025 IBM article stresses that without traceability, risks like data leakage escalate [IBM, 2025](https://www.ibm.com/think/insights/ai-agents-2025-expectations-vs-reality).
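To see how little inspection buys us, look at what full access to a model actually yields. A minimal sketch, assuming the Hugging Face `transformers` library with `gpt2` as a stand-in for any causal LM: you can read the next-token distribution off exactly, yet nothing in those numbers explains *why* the model ranked them that way - which is Amodei's entire complaint.

```python
# Full access to weights and outputs yields the *what*, never the *why*.
# Assumes `pip install torch transformers`; "gpt2" is a stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The reactor keeps melting down because", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # raw scores for the next token
probs = torch.softmax(logits, dim=-1)

values, indices = probs.topk(5)
for p, idx in zip(values.tolist(), indices.tolist()):
    print(f"{tokenizer.decode([idx])!r}: {p:.3f}")  # probabilities, no reasons
```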
## Web Scraping's Reign of Terror
The AI industry's data addiction is a digital plague, and web scraping is its weapon. An X post glorifying Scrapy, a Python framework with over 55,000 GitHub stars, exposes the truth: the industry is waging war on the internet [Scrapy Post, 2025](https://x.com/birgenbilge_mk/status/1930558228590428457?s=46). Scrapy's “event-driven architecture” and “asynchronous engine” hammer servers with hundreds of simultaneous requests, ripping data at breakneck speed. Its CSS/XPath selectors and JSONL exports make it a darling for LLM pipelines.
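For the uninitiated, the pattern is brutally simple. Here is a minimal sketch of a Scrapy spider - the spider name and target site are hypothetical, but the settings shown are real Scrapy knobs - demonstrating how the same configuration that keeps a crawler polite is one flag flip from a battering ram:

```python
# A minimal Scrapy spider sketch. CorpusSpider and example.com are
# placeholders; CONCURRENT_REQUESTS, DOWNLOAD_DELAY, and ROBOTSTXT_OBEY
# are real Scrapy settings that decide how hard a target gets hit.
import scrapy

class CorpusSpider(scrapy.Spider):
    name = "corpus"
    start_urls = ["https://example.com/"]
    custom_settings = {
        "CONCURRENT_REQUESTS": 16,  # Scrapy's default; pipelines crank this far higher
        "DOWNLOAD_DELAY": 0,        # 0 = no pause between requests
        "ROBOTSTXT_OBEY": True,     # set False and robots.txt is ignored
    }

    def parse(self, response):
        # CSS selectors strip page text into records ready for JSONL export.
        for text in response.css("p::text").getall():
            yield {"url": response.url, "text": text}
```

Run it with `scrapy runspider corpus_spider.py -o corpus.jl` and the JSON Lines output drops straight into a training pipeline - which is exactly the point.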
Amodei's right: we can't explain a damn thing these models do.
Benchmarks are a con, rigged because models train on test problems, faking genius. Apple's puzzles exposed the truth: in the real world, they're clueless. Deploying based on fake scores is fraud. LLMs, built on stolen data, spew bias, lies, and hate. Scaling that is weaponizing chaos - an AI newsroom churning propaganda, a hiring tool blacklisting groups, a legal bot fabricating evidence. This is how civilizations rot.
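The contamination mechanism is mundane. A hedged sketch of the crudest possible check - shared 13-grams, the heuristic the GPT-3 paper used for decontamination - assuming you hold both corpora as plain text; real audits are harder, and vendors rarely publish even this:

```python
# Crude benchmark-contamination check: any 13-gram shared between the
# training text and a benchmark item suggests the answer was memorized,
# not reasoned out. (n=13 follows the GPT-3 paper's overlap heuristic.)
def ngrams(text: str, n: int = 13) -> set:
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def looks_contaminated(train_text: str, benchmark_item: str, n: int = 13) -> bool:
    return bool(ngrams(train_text, n) & ngrams(benchmark_item, n))
```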
LRMs and scraping guzzle resources. Apple proved extra “thinking” is hot air; Scrapy's server attacks burn more. Training one model or running scraping pipelines emits CO2 like a coal plant, strangling the planet for garbage tech. A January 2025 McKinsey report notes 15% of employees worry about AI's environmental impact [McKinsey, 2025](https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work).
The AI hype train is a cult, brainwashing us into worshipping statistical parrots as gods, screaming “reasoning!” and “intelligence!” when it's all lies, driving us off a cliff. Real-world nightmares abound: an LRM in a self-driving car misreads an intersection, causing a pileup killing dozens; an AI teacher misgrades exams, ruining futures; a power grid AI miscalculates, triggering blackouts for millions. These are headlines waiting to happen.
Outsourcing decisions to AI strips human agency, turning us into drones who can't question or innovate - not augmenting humanity but lobotomizing it. Production use breeds dependence. When LLMs fail, systems collapse - hospitals halt, markets freeze, supply chains implode, leaving us one glitch from anarchy. LLMs craft lies that fool experts. In production, they could sway elections, manipulate juries, or radicalize masses - we're not ready for AI-powered propaganda.
Governments are asleep, with no framework to govern AI or scraping's risks.
Autonomous AI in critical systems, powered by flawed LRMs, is a death sentence - Apple's research shows failures go unchecked without human oversight, amplifying harm exponentially. Scraping's data theft, glorified by the X post, steals from creators, undermining the web's creative ecosystem. Deploying LLMs built on this is endorsing piracy at scale. Scraping's server attacks are killing the open web, forcing websites behind paywalls or offline, shrinking the internet's diversity. LLMs are complicit in this murder.
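And none of this destruction is technically necessary, which is what makes it damning. A minimal sketch using only Python's standard library (the target URL and user agent are placeholders): honoring robots.txt costs a few lines, so every scraper that ignores it does so by choice.

```python
# Checking robots.txt with the standard library - the bare minimum of
# crawler etiquette. The target URL and user agent are placeholders.
from urllib import robotparser

rp = robotparser.RobotFileParser("https://example.com/robots.txt")
rp.read()
if rp.can_fetch("MyCrawler/1.0", "https://example.com/some/page"):
    print("allowed - crawl, politely")
else:
    print("disallowed - a polite crawler stops here")
```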
Scraped data fuels LLMs that churn soulless text, drowning human creativity and turning culture into algorithmic sludge, disconnecting us from authenticity. A 2020 Harvard Gazette report notes that AI's lack of oversight risks societal harm, with regulators ill-equipped to keep pace [Harvard Gazette, 2020](https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/).
## Toxic Incentives: Profit Over Existence
This insanity is driven by perverse incentives. Venture capitalists demand unicorn returns, so companies rush half-baked models and scraping pipelines to market. OpenAI's profit-chasing pivot, as Amodei criticized, is the blueprint for this rot. Safety, ethics, and infrastructure are roadkill under “move fast and break things.”
These steps aren't optional - they're the only way to save ourselves.
The AI industry's peddling a fairy tale, and we're the suckers buying it. LLMs and LRMs aren't saviors - they're ticking bombs wrapped in buzzwords, built on a dying internet's ashes. Apple's *The Illusion of Thinking* and Amodei's confession are klaxons blaring in our faces. Scrapy's server-killing rampage, glorified on X, is the final straw - we're not just risking failure; we're murdering the digital world that sustains us.
From high-profile deployment failures - Samsung, Google, Zillow, IBM - to the ethical quagmire of web scraping, from AI's environmental toll to its persistent opacity, the evidence is overwhelming. IBM warns of escalating risks like data leakage [IBM, 2025](https://www.ibm.com/think/insights/ai-agents-2025-expectations-vs-reality). Lakera documents privacy violations from scraping, amplifying harm [Lakera, 2024](https://www.lakera.ai/blog/risks-of-ai). This isn't a mistake - it's a betrayal of humanity's trust. Deploying LLMs and LRMs, fueled by scraping's destruction, isn't just dumb - it's a crime against our survival. Lock them in the lab, crack the code, and stop the internet's slaughter, or brace for the apocalypse. The clock's ticking, and we're out of excuses.
## Sources
- Shojaee, Parshin, et al. “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity.” Apple Machine Learning Research, June 2025, https://machinelearning.apple.com/research/illusion-of-thinking.
- Amodei, Dario. “Essay on AI Interpretability.” Personal website, 2025, quoted in Futurism, https://futurism.com/anthropic-ceo-admits-ai-ignorance.
- Anonymous. “The web scraping tool Scrapy.” X post, 2025, https://x.com/birgenbilge_mk/status/1930558228590428457?s=46.
- Lakera. “AI Risks: Exploring the Critical Challenges of Artificial Intelligence.” 2024, https://www.lakera.ai/blog/risks-of-ai.
- McKinsey & Company. “AI in the workplace: A report for 2025.” January 2025, https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work.
- IBM. “AI Agents in 2025: Expectations vs. Reality.” March 2025, https://www.ibm.com/think/insights/ai-agents-2025-expectations-vs-reality.
- Simplilearn. “Top 15 Challenges of Artificial Intelligence in 2025.” May 2025, https://www.simplilearn.com/challenges-of-artificial-intelligence-article.
- EPIC. “Scraping for Me, Not for Thee: Large Language Models, Web Data, and Privacy-Problematic Paradigms.” February 2025, https://epic.org/scraping-for-me-not-for-thee-large-language-models-web-data-and-privacy-problematic-paradigms/.
- Weidinger, Laura, et al. “Ethical and social risks of harm from Language Models.” arXiv, 2021, https://arxiv.org/abs/2112.04359.
- Harvard Gazette. “Ethical concerns mount as AI takes bigger decision-making role.” 2020, https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/.