## The Path Forward: Research, Not Recklessness
This is a five-alarm fire. Deploying LLMs and LRMs, fueled by scraping’s destruction, is suicidal. They must stay in labs until we crack the black box and stop killing the internet.
Ideas:
- Ban these models from critical systems - healthcare, finance, defense, governance - allowing only tightly overseen non-critical uses like content generation.
- Pour resources into interpretability, chasing Amodei’s “MRI” vision until we trace every decision (a sketch of today’s crude tooling follows this list).
- Freeze model size, compute, and scraping until we understand what we’ve got - bigger is riskier, not better.
- Force companies to pay for scraped data or face lawsuits, protecting the web’s creative ecosystem.
These steps aren’t optional - they’re the only way to save ourselves from the abyss.
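To make the interpretability point concrete, here is a minimal sketch of the kind of probe that passes for “looking inside” a model today: dump raw attention weights and eyeball which prompt tokens a prediction leans on. It assumes the Hugging Face `transformers` and `torch` packages and uses GPT-2 purely as an illustrative stand-in, not one of the frontier models discussed above; nothing here approaches the “MRI”-grade tracing Amodei is calling for, which is exactly the point.
```python
# Crude attention-inspection probe (illustrative only; assumes `transformers`
# and `torch` are installed and a GPT-2 checkpoint can be downloaded/cached).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The patient should be given"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Ask the model to return per-layer attention maps alongside the logits.
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions is a tuple with one tensor per layer,
# each shaped (batch, heads, seq_len, seq_len).
last_layer = outputs.attentions[-1][0]   # (heads, seq_len, seq_len)
head_avg = last_layer.mean(dim=0)        # average over attention heads
final_row = head_avg[-1]                 # attention paid by the last token

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
ranked = sorted(zip(tokens, final_row.tolist()), key=lambda t: t[1], reverse=True)

print("Tokens the next-word prediction leans on most:")
for tok, weight in ranked[:5]:
    print(f"  {tok!r}: {weight:.3f}")
```
Attention weights are a notoriously weak proxy for why a model actually decided anything - which is why the research-first demand above isn’t hand-waving.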
## My Final Takeaway
The AI industry’s peddling a fairy tale, and we’re the suckers buying it. LLMs and LRMs aren’t saviors - they’re ticking bombs wrapped in buzzwords, built on a dying internet’s ashes. Apple’s *The Illusion of Thinking* and Amodei’s confession are klaxons blaring in our faces. Scrapy’s server-killing rampage, glorified on X, is the final straw - we’re not just risking failure; we’re murdering the digital world that sustains us.