From e0d0177bc08a9a814fbfe394d2d6935d0f66387d Mon Sep 17 00:00:00 2001
From: Raven Scott
Date: Sat, 7 Jun 2025 20:19:57 -0400
Subject: [PATCH] update

---
 ...aping Are Killing the Internet and Must Stay in the Lab.md | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/markdown/LLMs, LRMs, and Scraping Are Killing the Internet and Must Stay in the Lab.md b/markdown/LLMs, LRMs, and Scraping Are Killing the Internet and Must Stay in the Lab.md
index 1279e9e..cb2be48 100644
--- a/markdown/LLMs, LRMs, and Scraping Are Killing the Internet and Must Stay in the Lab.md
+++ b/markdown/LLMs, LRMs, and Scraping Are Killing the Internet and Must Stay in the Lab.md
@@ -93,8 +93,6 @@ The X post’s Scrapy worship shows developers are complicit, treating server ab
 
 The industry’s “we’ll make AI safe” mantra is bullshit. Apple’s research shows LRMs are inherently unreliable, and Amodei admits we can’t define the problem. Safety measures like alignment are guesswork - Anthropic’s red-teaming caught flaws, but scaling that is a fantasy. Scraping’s ethical rot makes it worse: models built on stolen data are tainted from birth. “Safe AI” is a marketing ploy. Deploying now is boarding a plane with a “probably not crashing” guarantee.
 
-A 2022 WIRED article cites DeepMind’s admission that no lab knows how to make AI less toxic, with risks like an AI ethics model endorsing genocide or Alexa encouraging dangerous behavior [WIRED, 2022](https://www.wired.com/story/dark-risk-large-language-models/). A 2024 *ScienceDirect* article on LLMs in healthcare warns that without human oversight, these models risk spreading misinformation at unprecedented scale [ScienceDirect, 2024](https://www.sciencedirect.com/science/article/pii/S2589750023026597).
-
 ## The Path Forward: Research, Not Recklessness
 
 This is a five-alarm fire. Deploying LLMs and LRMs, fueled by scraping’s destruction, is suicidal. They must stay in labs until we crack the black box and stop killing the internet. Here’s the plan:
@@ -132,6 +130,4 @@ Deploying LLMs and LRMs, fueled by scraping’s destruction, isn’t just dumb -
 - EPIC. “Scraping for Me, Not for Thee: Large Language Models, Web Data, and Privacy-Problematic Paradigms.” February 2025, https://epic.org/scraping-for-me-not-for-thee-large-language-models-web-data-and-privacy-problematic-paradigms/.
 - arXiv. “Ethical and social risks of harm from Language Models.” 2021, https://arxiv.org/abs/2112.04359.
 - Harvard Gazette. “Ethical concerns mount as AI takes bigger decision-making role.” 2020, https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/.
-- TechTarget. “Generative AI Ethics: 11 Biggest Concerns and Risks.” March 2025, https://www.techtarget.com/searchenterpriseai/feature/Generative-AI-Ethics-11-Biggest-Concerns-and-Risks.
 - WIRED. “The Dark Risk of Large Language Models.” 2022, https://www.wired.com/story/dark-risk-large-language-models/.
-- ScienceDirect. “Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine.” 2024, https://www.sciencedirect.com/science/article/pii/S2589750023026597.
\ No newline at end of file