Add article about AI Deployment in production
This commit is contained in:
parent
110127174b
commit
a785252783
@ -0,0 +1,143 @@
|
|||||||
|
<!-- lead -->
|
||||||
|
The AI Industry’s Apocalyptic Folly
|
||||||
|
|
||||||
|
# Intro
|
||||||
|
The AI industry isn’t just teetering on the edge of disaster - it’s sprinting toward a cliff, blindfolded, with a lit dynamite stick in each hand. Large language models (LLMs) and their overhyped progeny, large reasoning models (LRMs), are being rammed into production environments with a recklessness that makes a drunken pilot look like a paragon of caution. This isn’t progress; it’s a death pact signed in Silicon Valley’s blood.
|
||||||
|
|
||||||
|
Worse, the industry’s insatiable data hunger, fueled by tools like Scrapy, is waging a silent war on the internet, pummeling servers into oblivion and strangling the digital ecosystem that holds our world together. Apple’s June 2025 paper, *The Illusion of Thinking*, obliterates the myth that these models are ready for the real world, exposing them as brittle, unreliable frauds that crumble under scrutiny. Anthropic CEO Dario Amodei’s gut-wrenching confession - “nobody knows how AI works” - is a scream into the void, admitting we’re piloting a spaceship with no clue how the controls function.
|
||||||
|
|
||||||
|
A post on X glorifying Scrapy’s web-scraping prowess lays bare the industry’s complicity in this digital carnage, celebrating server-killing tools as if they’re heroes. Deploying LLMs and LRMs, built on this unethical, destructive foundation, is a betrayal of humanity’s trust. This blog post is a raging indictment of the AI industry’s suicidal crusade, a desperate plea to lock this madness in research labs before we torch everything, backed by damning evidence from recent research and industry failures.
|
||||||
|
|
||||||
|
## A Cult of Digital Vandals
|
||||||
|
|
||||||
|
The AI landscape in 2025 is a dystopian fever dream, a chaotic gold rush where tech giants - OpenAI, Anthropic, Google, Meta AI, xAI - pillage the digital world to build their empires. LLMs like GPT-4, Claude 3, and Grok 3 are already infesting healthcare, finance, education, and defense, while LRMs, hyped as reasoning savants that “think” before answering, are peddled as the next messiah. Let’s rip off the rose-tinted glasses: these aren’t tools; they’re black-box bombs built on a foundation of stolen, scraped data that’s choking the internet to death.
|
||||||
|
|
||||||
|
The transformer architecture, a statistical trick for predicting text, has been inflated into a godlike entity, worshipped with fanatical zeal while ignoring the wreckage it leaves behind. This obsession with scale is collective madness. Models are trained on datasets so colossal - trillions of tokens scraped from the internet’s cesspool, books, and corporate sludge - that even their creators can’t untangle the mess.
|
||||||
|
|
||||||
|
Every company, every startup, every wannabe AI guru is unleashing armies of scrapers to plunder the web, hammering servers and destabilizing the digital ecosystem. A March 2025 McKinsey survey reveals that over 80% of organizations deploying generative AI see no tangible enterprise-level impact, suggesting this rush is driven by hype, not results [McKinsey, 2025](https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai). High-profile failures - like Samsung banning ChatGPT after code leaks, Google’s Bard hallucinating, Zillow’s AI pricing flop costing millions, and IBM Watson Health’s erroneous cancer recommendations - underscore the chaos [Lakera, 2024](https://www.lakera.ai/blog/risks-of-ai). We’re not building progress; we’re orchestrating a digital apocalypse.
|
||||||
|
|
||||||
|
## Apple’s *Illusion of Thinking*: A Flamethrower to AI’s Lies
|
||||||
|
|
||||||
|
Apple’s June 2025 paper, *The Illusion of Thinking*, is a flamethrower torching the AI industry’s lies. Authors Parshin Shojaee, Iman Mirzadeh, and their team devised ingenious puzzle environments to test LRMs’ so-called reasoning, demanding real problem-solving, not regurgitated answers. The results are a flaming middle finger to every AI evangelist.
|
||||||
|
|
||||||
|
LRMs breeze through simple tasks but implode spectacularly on complex ones, spewing nonsense or flat-out wrong answers. Deploy that in a hospital or courtroom, and it’s a massacre waiting to happen. Pumping more compute or tokens doesn’t fix the problem - it’s a cruel mirage. Apple found LRMs’ “reasoning effort” peaks early, then nosedives, even with resources to spare, like a rocket exploding at 10,000 feet no matter how much fuel you pump.
|
||||||
|
|
||||||
|
These models are erratic, nailing one puzzle only to choke on a near-identical one, guessing instead of reasoning, their outputs as reliable as a drunk gambler’s dice roll. Humiliatingly, basic LLMs often outshine LRMs on simple tasks, with the “reasoning” baggage slowing them down or causing errors. Why are we worshipping a downgrade?
|
||||||
|
|
||||||
|
The “thinking processes” LRMs boast are a marketing stunt, revealed by Apple as a chaotic mess of incoherent leaps, dead ends, and half-baked ideas - not thought, but algorithmic vomit. LRMs fail to use explicit algorithms, even when essential, faking it with statistical sleight-of-hand that collapses under scrutiny. This brittleness isn’t theoretical: IBM Watson Health’s cancer AI made erroneous treatment recommendations, risking malpractice, and Google’s Bard hallucinated inaccurate information [Lakera, 2024](https://www.lakera.ai/blog/risks-of-ai).
|
||||||
|
|
||||||
|
A January 2025 McKinsey report notes that 50% of employees worry about AI inaccuracy, 51% fear cybersecurity risks, and many cite data leaks, aligning with Apple’s findings of unreliable outputs [McKinsey, 2025](https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work). A 2023 study in *Frontiers* highlights AI’s inability to ethically solve problems or explain results, further questioning its readiness for critical applications [Frontiers, 2023](https://www.frontiersin.org/articles/10.3389/frai.2023.1148154/full). This isn’t a warning - it’s a guillotine.
|
||||||
|
|
||||||
|
## Amodei’s Confession: We’re Flying Blind
|
||||||
|
|
||||||
|
If Apple’s paper lit the fuse, Dario Amodei’s essay is the explosion that levels the city. The Anthropic CEO, a supposed champion of AI safety, dropped a truth bomb: “Nobody really knows how AI works.” Let that sink in. The guy steering one of the world’s top labs is confessing we’re flying blind. This isn’t a minor glitch - it’s an existential failure.
|
||||||
|
|
||||||
|
We’re not tweaking a buggy app; we’re wielding tech that could reshape civilization, navigating it with a Magic 8-Ball. Amodei’s dream of an “MRI on AI” is a desperate cry, not a roadmap. He admits we can’t explain why a model picks one word or makes an error. This isn’t like not knowing a car’s engine - you can still drive. It’s like not knowing why a nuclear reactor keeps melting down, yet firing it up.
|
||||||
|
|
||||||
|
Anthropic’s red-teaming experiments, breaking models to study flaws, are a Band-Aid on a severed artery. We’re light-years from cracking the black box. A January 2025 McKinsey report calls LLMs “black boxes” lacking transparency, eroding trust in critical tasks [McKinsey, 2025](https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work). A March 2025 IBM article stresses that without traceability, risks like data leakage escalate [IBM, 2025](https://www.ibm.com/think/insights/ai-agents-2025-expectations-vs-reality).
|
||||||
|
|
||||||
|
A 2024 *ScienceDirect* survey on LLM safety identifies explainability challenges as a core barrier, noting that opaque models are prone to misinformation and inference privacy breaches [ScienceDirect, 2024](https://www.sciencedirect.com/science/article/pii/S2666659024000130). If the industry’s moral compass is this lost, deploying AI in production is a crime against humanity.
|
||||||
|
|
||||||
|
## Web Scraping’s Reign of Terror
|
||||||
|
|
||||||
|
The AI industry’s data addiction is a digital plague, and web scraping is its weapon. An X post glorifying Scrapy, a Python framework with over 55,000 GitHub stars, exposes the truth: the industry is waging war on the internet [Scrapy Post, 2025](https://x.com/birgenbilge_mk/status/1930558228590428457?s=46). Scrapy’s “event-driven architecture” and “asynchronous engine” hammer servers with hundreds of simultaneous requests, ripping data at breakneck speed. Its CSS/XPath selectors and JSONL exports make it a darling for LLM pipelines.
|
||||||
|
|
||||||
|
But Scrapy’s non-sane defaults and aggressive concurrency are a death sentence for servers. Its defaults prioritize speed over ethics, overwhelming servers with relentless requests. Small websites, blogs, and forums - run by individuals or small businesses - crash or rack up crippling bandwidth costs. The X post brags about handling “tens of thousands of pages,” but each page is a sledgehammer to someone’s infrastructure.
|
||||||
|
|
||||||
|
The internet thrives on open access, but scraping is strangling it. Websites implement bot protections, paywalls, or IP bans, locking out legitimate users. The X post admits to “bot protections” and “IP bans” as challenges, but Scrapy’s workarounds escalate this arms race, turning the web into a walled garden. A June 2025 *Nature* article reports that AI-driven scraping is overwhelming academic websites, with sites like DiscoverLife receiving millions of daily hits, slowing them to unusability [Nature, 2025](https://www.nature.com/articles/d41586-025-01743-9).
|
||||||
|
|
||||||
|
Scrapers plunder content without consent, stealing intellectual property, leaving creators - writers, artists, publishers - with no compensation. The X post’s “clean datasets” fantasy ignores the dirty truth: this data is pilfered. A February 2025 EPIC report calls this the “great scrape,” noting that companies like OpenAI allegedly scraped New York Times articles, copyrighted books, and YouTube videos without permission, violating privacy and IP rights [EPIC, 2025](https://epic.org/scraping-for-me-not-for-thee-large-language-models-web-data-and-privacy-problematic-paradigms/).
|
||||||
|
|
||||||
|
Scraping collects sensitive personal information without consent, raising privacy concerns. A 2024 OECD report highlights how scraping violates privacy laws and the OECD AI Principles, risking identity fraud and cyberattacks [OECD, 2024](https://oecd.ai/en/data-scraping-challenge). A May 2025 Simplilearn article notes that scraping exacerbates AI’s privacy violations, advocating for GDPR and HIPAA compliance [Simplilearn, 2025](https://www.simplilearn.com/challenges-of-artificial-intelligence-article).
|
||||||
|
|
||||||
|
Millions of scrapers clog networks, slowing access, while data centers strain, driving up energy costs and carbon emissions. Websites lose faith, shutting down or going offline, shrinking the internet’s diversity. Scrapy’s defenders claim it’s “essential” for LLMs, but that’s a lie. This data hunger is a choice. By glorifying server-killing tools, we’re murdering the internet’s soul. Deploying LLMs built on this stolen foundation isn’t reckless - it’s immoral.
|
||||||
|
|
||||||
|
## Technical Abyss: A House of Horrors
|
||||||
|
|
||||||
|
The rot goes beyond bad data. LLMs and LRMs are built on the transformer architecture, a statistical beast predicting the next word by crunching probabilities. It’s not intelligence - it’s autocomplete on steroids. LRMs add “thinking steps,” but Apple proved those are a chaotic mess, not reasoning.
|
||||||
|
|
||||||
|
Scraped datasets are a toxic stew of biases, errors, and garbage. Models inherit these flaws, amplifying hate, misinformation, and stereotypes. The X post’s “clean datasets” claim is a fantasy. Models memorize training data, regurgitating answers, acing benchmarks but choking on novel problems - not smarts, but cheating. Scaled models do unpredictable, creepy things - generating lies or toxic content - and we don’t know why, like breeding a mutant that bites.
|
||||||
|
|
||||||
|
Transformers’ attention mechanisms “hallucinate” nonexistent connections, sparking errors that could mean lawsuits or worse in production. Models overfit to scraped data quirks, brittle in real-world scenarios - a context shift, and they’re lost. LRMs burn obscene resources for negligible gains. Apple showed their “reasoning” doesn’t scale, yet we torch energy grids to keep the farce alive.
|
||||||
|
|
||||||
|
A 2024 *ScienceDirect* article lists LLM vulnerabilities: prompt injection manipulates outputs, training data poisoning introduces biases, PII breaches occur during training, insecure output handling generates harmful content, and denial-of-service attacks disrupt availability [ScienceDirect, 2024](https://www.sciencedirect.com/science/article/pii/S2666659024000130). A 2021 arXiv paper outlines six risk areas for LLMs, including discrimination, toxicity, misinformation, and environmental harms, all amplified by flawed data [arXiv, 2021](https://arxiv.org/abs/2112.04359). This isn’t a system - it’s a house of horrors, and deploying it on stolen, server-killing data is lunacy.
|
||||||
|
|
||||||
|
## Why Deployment Is a Betrayal
|
||||||
|
|
||||||
|
Deploying LLMs and LRMs, fueled by unethical scraping, is a middle finger to reason, ethics, and survival. Apple’s research proves these models collapse on complex tasks. In production, that’s a body count - an LRM misdiagnosing cancer kills a patient, a trading algorithm misreading a signal wipes out trillions, a military drone misjudging a target levels a village. These are certainties.
|
||||||
|
|
||||||
|
Amodei’s right: we can’t explain a damn thing these models do. If an AI denies your loan or flags you as a criminal, nobody can trace the logic. It’s a black box with a “trust me” sticker, not engineering but tyranny. LRMs are as reliable as a drunk tightrope walker, acing one task and botching the next. An AI air traffic controller brilliant Monday but brain-dead Tuesday kills thousands.
|
||||||
|
|
||||||
|
Benchmarks are a con, rigged because models train on test problems, faking genius. Apple’s puzzles exposed the truth: in the real world, they’re clueless. Deploying based on fake scores is fraud. LLMs, built on stolen data, spew bias, lies, and hate. Scaling that is weaponizing chaos - an AI newsroom churning propaganda, a hiring tool blacklisting groups, a legal bot fabricating evidence. This is how civilizations rot.
|
||||||
|
|
||||||
|
LRMs and scraping guzzle resources. Apple proved extra “thinking” is hot air; Scrapy’s server attacks burn more. Training one model or running scraping pipelines emits CO2 like a coal plant, strangling the planet for garbage tech. A January 2025 McKinsey report notes 15% of employees worry about AI’s environmental impact [McKinsey, 2025](https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work).
|
||||||
|
|
||||||
|
The AI hype train is a cult, brainwashing us into worshipping statistical parrots as gods, screaming “reasoning!” and “intelligence!” when it’s all lies, driving us off a cliff. Real-world nightmares abound: an LRM in a self-driving car misreads an intersection, causing a pileup killing dozens; an AI teacher misgrades exams, ruining futures; a power grid AI miscalculates, triggering blackouts for millions. These are headlines waiting to happen.
|
||||||
|
|
||||||
|
Outsourcing decisions to AI strips human agency, turning us into drones who can’t question or innovate - not augmenting humanity but lobotomizing it. Production use breeds dependence. When LLMs fail, systems collapse - hospitals halt, markets freeze, supply chains implode, leaving us one glitch from anarchy. LLMs craft lies that fool experts. In production, they could sway elections, manipulate juries, or radicalize masses - we’re not ready for AI-powered propaganda.
|
||||||
|
|
||||||
|
Governments are asleep, with no framework to govern AI or scraping’s risks, no accountability, like letting a toddler play with a nuclear reactor. AI and scraping concentrate power in a few tech titans who afford the compute and data, yet their creations control us, widening the elite-everyone gap. When AI fails, trust in tech and institutions will crater. A high-profile disaster could spark a backlash, halting progress for decades. We’re gambling the future on a losing bet.
|
||||||
|
|
||||||
|
Autonomous AI in critical systems, powered by flawed LRMs, is a death sentence - Apple’s research shows failures go unchecked without human oversight, amplifying harm exponentially. Scraping’s data theft, glorified by the X post, steals from creators, undermining the web’s creative ecosystem. Deploying LLMs built on this is endorsing piracy at scale. Scraping’s server attacks are killing the open web, forcing websites behind paywalls or offline, shrinking the internet’s diversity. LLMs are complicit in this murder.
|
||||||
|
|
||||||
|
Scraped data fuels LLMs that churn soulless text, drowning human creativity and turning culture into algorithmic sludge, disconnecting us from authenticity. A 2024 MIT Press article on ChatGPT warns of its potential for malicious misuse, privacy violations, and bias propagation, exacerbated by unregulated data practices [MIT Press, 2024](https://direct.mit.edu/dint/article/6/1/150/119771/The-Limitations-and-Ethical-Considerations-of). A 2020 Harvard Gazette report notes that AI’s lack of oversight risks societal harm, with regulators ill-equipped to keep pace [Harvard Gazette, 2020](https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/).
|
||||||
|
|
||||||
|
|
||||||
|
## Toxic Incentives: Profit Over Existence
|
||||||
|
|
||||||
|
This insanity is driven by perverse incentives. Venture capitalists demand unicorn returns, so companies rush half-baked models and scraping pipelines to market. OpenAI’s profit-chasing pivot, as Amodei criticized, is the blueprint for this rot. Safety, ethics, and infrastructure are roadkill under “move fast and break things.”
|
||||||
|
|
||||||
|
The X post’s Scrapy worship shows developers are complicit, treating server abuse as a feature, not a bug. The academic-industrial complex is guilty too, with researchers trading integrity for paychecks, churning out hype papers instead of hard questions. Apple’s study is a rare exception - most research is a glorified ad. Whistleblowers like Amodei are drowned out by “AI will save us!” propaganda. It’s a machine built to self-destruct, and we’re all strapped in.
|
||||||
|
|
||||||
|
## The Myth of “Safe” AI
|
||||||
|
|
||||||
|
The industry’s “we’ll make AI safe” mantra is bullshit. Apple’s research shows LRMs are inherently unreliable, and Amodei admits we can’t define the problem. Safety measures like alignment are guesswork - Anthropic’s red-teaming caught flaws, but scaling that is a fantasy. Scraping’s ethical rot makes it worse: models built on stolen data are tainted from birth. “Safe AI” is a marketing ploy. Deploying now is boarding a plane with a “probably not crashing” guarantee.
|
||||||
|
|
||||||
|
A 2022 WIRED article cites DeepMind’s admission that no lab knows how to make AI less toxic, with risks like an AI ethics model endorsing genocide or Alexa encouraging dangerous behavior [WIRED, 2022](https://www.wired.com/story/dark-risk-large-language-models/). A 2024 *ScienceDirect* article on LLMs in healthcare warns that without human oversight, these models risk spreading misinformation at unprecedented scale [ScienceDirect, 2024](https://www.sciencedirect.com/science/article/pii/S2589750023026597).
|
||||||
|
|
||||||
|
## The Path Forward: Research, Not Recklessness
|
||||||
|
|
||||||
|
This is a five-alarm fire. Deploying LLMs and LRMs, fueled by scraping’s destruction, is suicidal. They must stay in labs until we crack the black box and stop killing the internet. Here’s the plan:
|
||||||
|
|
||||||
|
- Ban these models from critical systems - healthcare, finance, defense, governance - allowing only tightly overseen non-critical uses like content generation.
|
||||||
|
- Pour resources into interpretability, chasing Amodei’s “MRI” vision until we trace every decision.
|
||||||
|
- Curb aggressive scraping tools like Scrapy, enforcing sane defaults - rate limits, consent protocols - with penalties for server-hammering or data theft.
|
||||||
|
- Adopt Apple’s puzzle-based testing, using novel, complex problems, not rigged benchmarks.
|
||||||
|
- Demand transparency - no “proprietary” excuses; open-source model architectures, training data, failure logs. Scraping pipelines must disclose sources and impacts.
|
||||||
|
- Regulate AI and scraping like nuclear weapons - global standards, audits, severe penalties for reckless deployment or server abuse.
|
||||||
|
- Build tools that augment, not replace, human decisions. AI is a calculator, not a dictator.
|
||||||
|
- Educate the public to demystify AI and scraping’s limits, teaching these are statistical toys built on stolen data, not gods, to curb blind trust.
|
||||||
|
- Freeze model size, compute, and scraping until we understand what we’ve got - bigger is riskier, not better.
|
||||||
|
- Force companies to pay for scraped data or face lawsuits, protecting the web’s creative ecosystem.
|
||||||
|
|
||||||
|
These steps aren’t optional - they’re the only way to save ourselves from the abyss.
|
||||||
|
|
||||||
|
## Conclusion
|
||||||
|
|
||||||
|
The AI industry’s peddling a fairy tale, and we’re the suckers buying it. LLMs and LRMs aren’t saviors - they’re ticking bombs wrapped in buzzwords, built on a dying internet’s ashes. Apple’s *The Illusion of Thinking* and Amodei’s confession are klaxons blaring in our faces. Scrapy’s server-killing rampage, glorified on X, is the final straw - we’re not just risking failure; we’re murdering the digital world that sustains us.
|
||||||
|
|
||||||
|
From high-profile deployment failures - Samsung, Google, Zillow, IBM - to the ethical quagmire of web scraping, from AI’s environmental toll to its persistent opacity, the evidence is overwhelming. Over 80% of organizations see no tangible AI impact, yet the rush continues [McKinsey, 2025](https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai). IBM warns of escalating risks like data leakage [IBM, 2025](https://www.ibm.com/think/insights/ai-agents-2025-expectations-vs-reality). Lakera documents privacy violations from scraping, amplifying harm [Lakera, 2024](https://www.lakera.ai/blog/risks-of-ai). This isn’t a mistake - it’s a betrayal of humanity’s trust.
|
||||||
|
|
||||||
|
Deploying LLMs and LRMs, fueled by scraping’s destruction, isn’t just dumb - it’s a crime against our survival. Lock them in the lab, crack the code, and stop the internet’s slaughter, or brace for the apocalypse. The clock’s ticking, and we’re out of excuses.
|
||||||
|
|
||||||
|
## Sources
|
||||||
|
|
||||||
|
- Shojaee, Parshin, et al. “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity.” Apple Machine Learning Research, June 2025, https://machinelearning.apple.com/research/illusion-of-thinking.
|
||||||
|
- Amodei, Dario. “Essay on AI Interpretability.” Personal website, 2025, quoted in Futurism, https://futurism.com/anthropic-ceo-admits-ai-ignorance.
|
||||||
|
- Anonymous. “The web scraping tool Scrapy.” X post, 2025, https://x.com/birgenbilge_mk/status/1930558228590428457?s=46
|
||||||
|
- McKinsey & Company. “The state of AI: How organizations are rewiring to capture value.” March 2025, https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai.
|
||||||
|
- Lakera. “AI Risks: Exploring the Critical Challenges of Artificial Intelligence.” 2024, https://www.lakera.ai/blog/risks-of-ai.
|
||||||
|
- McKinsey & Company. “AI in the workplace: A report for 2025.” January 2025, https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work.
|
||||||
|
- IBM. “AI Agents in 2025: Expectations vs. Reality.” March 2025, https://www.ibm.com/think/insights/ai-agents-2025-expectations-vs-reality.
|
||||||
|
- Simplilearn. “Top 15 Challenges of Artificial Intelligence in 2025.” May 2025, https://www.simplilearn.com/challenges-of-artificial-intelligence-article.
|
||||||
|
- Nature. “Web-scraping AI bots cause disruption for scientific databases and journals.” June 2025, https://www.nature.com/articles/d41586-025-01743-9.
|
||||||
|
- EPIC. “Scraping for Me, Not for Thee: Large Language Models, Web Data, and Privacy-Problematic Paradigms.” February 2025, https://epic.org/scraping-for-me-not-for-thee-large-language-models-web-data-and-privacy-problematic-paradigms/.
|
||||||
|
- OECD. “The AI data scraping challenge: How can we proceed responsibly?” 2024, https://oecd.ai/en/data-scraping-challenge.
|
||||||
|
- ScienceDirect. “A survey on large language model (LLM) security and privacy: The Good, The Bad, and The Ugly.” 2024, https://www.sciencedirect.com/science/article/pii/S2666659024000130.
|
||||||
|
- arXiv. “Ethical and social risks of harm from Language Models.” 2021, https://arxiv.org/abs/2112.04359.
|
||||||
|
- Frontiers. “Specific challenges posed by artificial intelligence in research ethics.” 2023, https://www.frontiersin.org/articles/10.3389/frai.2023.1148154/full.
|
||||||
|
- MIT Press. “The Limitations and Ethical Considerations of ChatGPT.” 2024, https://direct.mit.edu/dint/article/6/1/150/119771/The-Limitations-and-Ethical-Considerations-of.
|
||||||
|
- Harvard Gazette. “Ethical concerns mount as AI takes bigger decision-making role.” 2020, https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/.
|
||||||
|
- TechTarget. “Generative AI Ethics: 11 Biggest Concerns and Risks.” March 2025, https://www.techtarget.com/searchenterpriseai/feature/Generative-AI-Ethics-11-Biggest-Concerns-and-Risks.
|
||||||
|
- WIRED. “The Dark Risk of Large Language Models.” 2022, https://www.wired.com/story/dark-risk-large-language-models/.
|
||||||
|
- ScienceDirect. “Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine.” 2024, https://www.sciencedirect.com/science/article/pii/S2589750023026597.
|
Loading…
x
Reference in New Issue
Block a user