98,977 views Feb 18, 2026
Detailed sources: https://docs.google.com/document/d/1I...
---
On a personal note, I haven't taken sponsorships on this channel - everything is funded out of my own pocket or Adsense. If you'd like to contribute somehow, and you're in the US, UK, or Germany, you can attend an AI march:
San Francisco: https://luma.com/s0k8wvee
London: https://pulltheplug.uk/matm-sign-up/
Berlin: https://form.jotform.com/260353631032041
---
I also post mid memes on twitter: https://x.com/PauseusMaximus
If you're curious about whether I'm AI or not, my Instagram has pictures of me from before deepfakes were a thing: instagram.com/drew.spartz
---
The video is a structured argument that AI progress is real, exponential, and potentially dangerous, not just a speculative “bubble,” organized around charts, historical analogies, and quotes from AI experts and skeptics.
# Detailed outline of the video
1. **Opening: “AI bubble” vs reality**
- Claims many people say AI is a bubble, but argues the reality is “way crazier.”
- Introduces a key chart from nonprofit research group METR (called “Meter” in the narration) on AI’s ability to complete real‑world tasks.
- Explains that this chart tracks what AIs can actually do, not stock prices or hype, and positions it as a “when will AI actually take your job” indicator.
![[2026-03-08 22_42_04-Greenshot.png]]
2. **METR long‑tasks chart and “Moore’s law for AI”**
- Describes METR’s question: how big a real‑world job can AI do entirely on its own, without gaming benchmarks.
- Gives timeline of capabilities: GPT‑3 writing emails (~15s tasks), then fixing bugs, then coding entire apps from scratch.
- States task length AI can handle doubled repeatedly, roughly every 7 months from 2019, reaching ~100× longer tasks in five years and projecting 10,000× in another five.
- Compares this to Moore’s law, calling it a faster “Moore’s law for AI” (AI task capability doubling every 4–7 months rather than chip power every 18–24 months).
- Updates mid‑video: says Claude Opus 4.5 now handles ~5‑hour tasks and that the trend line has sped up to doubling every ~4 months.
![[2026-03-08 22_42_16-Greenshot.png]]
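The doubling arithmetic behind this chart is easy to reproduce. A minimal Python sketch, assuming a constant doubling period (the ~15‑second GPT‑3 tasks and the 7‑month doubling figure come from the summary above; the constant‑rate model is a simplification, not METR's exact methodology):

```python
from math import log2

def months_to_reach(start_seconds, target_seconds, doubling_months=7):
    """Months for the doable task length to grow from start to target,
    assuming a constant doubling period (a simplification of the trend)."""
    return log2(target_seconds / start_seconds) * doubling_months

# From ~15-second email tasks (GPT-3 era) to a full 8-hour workday task:
m = months_to_reach(15, 8 * 3600)
print(f"{m:.0f} months (~{m / 12:.1f} years)")  # 76 months (~6.4 years)
```

At the faster ~4‑month doubling period cited later in the video, the same gap closes in roughly 44 months, which is why a small change in the doubling time moves the projected dates so much.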
3. **Near‑term projections: day‑long and week‑long tasks**
- Projects that by ~2026 AI will do 8‑hour tasks (a full workday), and by ~2028 week‑long tasks.
- Argues that week‑long task automation could allow AI agents to replace white‑collar jobs across the economy.
- Emphasizes that technology curves measure physics/capability rather than investor sentiment, contrasting “bubble” (stock prices) with capability curves.
4. **From exponential curves to super‑fast AI workers**
- Extends the trend to month‑long and year‑long tasks, envisioning AI agents working at 100–200× human speed.
- Uses examples: reading a book in seconds, writing a book in a minute, accomplishing a year of company work in a few days of AI runtime.
- Raises the key question: what happens when such systems are left alone for days and begin rapidly self‑improving and building stronger AIs.
![[2026-03-08 22_42_23-Greenshot.png]]
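The speed claims in this section reduce to simple arithmetic (the 100–200× multipliers are the video's; the rest is illustrative):

```python
# Wall-clock time for an AI agent to finish a given amount of human work,
# at the speed multipliers claimed in the video (illustrative arithmetic).
def agent_days(human_days, speedup):
    return human_days / speedup

year_of_work = 365  # "a year of company work"
print(agent_days(year_of_work, 100))  # 3.65 days at 100x human speed
print(agent_days(year_of_work, 200))  # 1.825 days at 200x human speed
```

This is the arithmetic behind "a year of company work in a few days of AI runtime."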
5. **“Jagged frontier”: why AI looks both smart and stupid**
- Introduces the GPQA (PhD‑level question‑answering) benchmark: models improved from ~60% to ~90%, surpassing human experts on graduate‑level science questions.
- Notes AI still makes “embarrassing” mistakes that no human would, and critics highlight old failures (e.g., drawings of hands, counting letters in “Strawberry”).
- Argues peer‑review delay makes journalistic “what AI still can’t do” pieces outdated when they appear, leading the public to think progress has stalled.
- Introduces the idea of a “jagged frontier”: AI performance varies by task, sometimes amazing, sometimes dumb, but capabilities cross thresholds abruptly as compute/data/algorithms scale.
![[2026-03-08 22_42_33-Greenshot.png]]
6. **Scaling works and terrifies AI “godfathers”**
- States that many professional skeptics declared the end of the “scaling pre‑training paradigm,” claiming you can’t just add data and compute to get smarter AIs.
- Argues they have been repeatedly wrong; scaling has continued to unlock new capabilities.
- Claims this is why AI “godfathers” and many researchers fear literal human extinction: scaling keeps working, and capabilities jump as thresholds are crossed.
![[2026-03-08 22_42_41-Greenshot.png]]
7. **Many domains, same exponential curve**
- Says the METR chart is not unique; similar exponential trends appear in physics, chemistry, biology, math, coding benchmarks, with capabilities doubling every 4–7 months.
- Addresses the common objection that exponential growth must plateau, using a “baby weight” meme as a flawed extrapolation from too few data points.
- Contrasts that with METR’s ~15 data points over 6 years and notes the curve is accelerating, not flattening.
- Compares this to Moore’s original observation (which had fewer data points) and argues the AI trend is actually better supported.
8. **Extinction risk estimates from AI researchers**
- Cites a 2024 survey of thousands of AI scientists giving an average ~16% chance of AI causing human extinction.
- States the CEO of Anthropic has raised his own estimate to ~25% and that such high doom probabilities are “normal” in AI circles, not hype.
- Poses the analogy: would you board a plane whose designers said it had a 25% chance of exploding, noting that everyone is “forced” onto the AI plane regardless.
9. **Toward self‑improving AI and automation of AI research**
- Explains that once AI can handle month‑ or year‑long coding tasks, it can automate AI research itself, enabling recursive self‑improvement.
- Quotes Sam Altman (formerly calling this “really scary”) and contrasts with current OpenAI being explicitly in a race toward such systems.
- Says Mustafa Suleyman (here named as Microsoft AI CEO) has argued we should never do this because of high extinction risk.
- Notes only a few thousand frontier AI researchers drove current progress, but AI agents could outnumber them 100:1 and work 100× faster.
- Reports Dario Amodei (Anthropic CEO) saying Claude now writes ~90% of Anthropic’s code, and that AI already threatens junior programmers and could wipe out ~50% of entry‑level white‑collar jobs.
10. **Sigmoid curves, unknown danger threshold, and the “fog”**
- Acknowledges that every exponential is ultimately a sigmoid; growth will plateau somewhere.
- Emphasizes we don’t know where the “danger threshold” sits on that curve and that we are “climbing a curve in the fog,” unsure which step leads to loss of control.
- Reframes the original bubble question: skeptics repeatedly proclaim “scaling is dead” and that the bubble will pop, but have a poor track record.
11. **Skeptics’ track record: Yann LeCun and Gary Marcus**
- Highlights Yann LeCun’s prediction that text‑only training could never give AI basic physical understanding (e.g., how an object on a table responds to being pushed), an understanding GPT‑3.5 later demonstrated.
- Argues much of LeCun’s position assumes AI will be powerful but obedient, without long‑term planning or agency, which benchmarks now contradict.
- Notes that skeptics like Gary Marcus and LeCun have predicted the “death of scaling” almost yearly since 2019 and have been wrong each time, while capabilities kept scaling over ~15 orders of magnitude.
12. **AI amnesia and shifting baselines**
- Points out that AI can now code full websites, write hit songs, win art contests, and pass music Turing‑type tests, yet gets little media coverage for such milestones.
- Claims people declare “plateau” if six months pass without a personal workflow revolution, forgetting how limited models were 2–3 years ago (e.g., grade‑school math).
- Argues experts in 2020 would have considered today’s capabilities science fiction, but normalization makes progress seem stalled.
13. **Tim Urban’s “Die Progress Unit” and exponential human progress**
- Introduces Tim Urban’s decade‑old article and his graph of human progress over time, noting that a hypothetical $3,000 Nvidia investment made after reading it would now have grown to millionaire‑level wealth.
- Summarizes Urban’s DPU thought experiment: 2025 tech would shock George Washington (from 1750) to death, but 1750 tech would not similarly shock someone like Da Vinci; you would need to go back to pre‑agricultural hunter‑gatherers, and then to ~100,000 BCE, to find earlier DPU‑sized jumps.
- Uses this to show DPUs (time intervals required for “die from surprise” progress) shrink from 100,000 years to 10,000, to 250, and possibly to a decade or two.
- Argues most people mentally project progress linearly based on the last 30 years, when we should expect much faster change.
14. **Why predictions keep being wrong: linear vs exponential thinking**
- Explains humans are neurologically biased toward straight‑line intuitions and underestimate exponential processes.
- Distinguishes three mental models: naive linear extrapolation from past decades, slightly better projection from current rate, and the correct view that rate itself is increasing.
- Shows how zoom level on a progress curve makes it look flat, linear, or exponential, which explains why people looking at the same data disagree sharply.
![[2026-03-08 22_43_26-Greenshot.png]]
15. **Stacked S‑curves and paradigm shifts**
- Introduces Ray Kurzweil’s idea that every exponential is composed of stacked S‑curves, each representing a paradigm.
- Describes phases of an S‑curve: slow early growth, rapid acceleration, then leveling off as that paradigm matures.
- Argues skeptics fixate on visible plateaus (e.g., GPT‑4 slowing), declaring “scaling is dead,” while new paradigms (reasoning models, multimodal models) start new S‑curves.
- Concludes that plateaus may last months or a year but do not break the overall exponential trend.
16. **From human‑level to superintelligence and the “lily pad” analogy**
- States AI is already discovering new physics laws in some cases and that intelligence should be visualized as jagged but trending up.
- Uses lily‑pad pond analogy: doubling daily, the pond is half full on day 29 and full on day 30, illustrating “slow, then all at once” dynamics.
- Compares brain‑equivalent compute to filling Lake Michigan with “operations,” showing that exponential compute scaling looks nearly flat then quickly “fills the lake.”
- Asks rhetorically which “day” we are on with AI given rapid recent gains.
![[2026-03-08 22_41_45-Greenshot.png]]
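The “slow, then all at once” dynamics of the lily‑pad analogy can be checked directly (the 30‑day pond is the analogy as told in the video; the code is just illustrative arithmetic):

```python
# A pond whose lily-pad coverage doubles daily and is exactly full on day 30.
coverage = {day: 2 ** (day - 30) for day in range(1, 31)}  # fraction covered

assert coverage[29] == 0.5             # half full on day 29
assert coverage[30] == 1.0             # full on day 30
print(f"day 20: {coverage[20]:.4%} covered")  # day 20: 0.0977% covered
```

Ten days before the end, the pond is still more than 99.9% empty, which is the point of the analogy: on an exponential, “nearly nothing” and “done” sit close together.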
17. **Cuban Missile Crisis analogy and “stop climbing the ladder”**
- Quotes Eliezer Yudkowsky likening AI risk to the Cuban Missile Crisis, where leaders knew they were climbing a ladder but not which rung meant nuclear war.
- Notes Kennedy later estimated a 1/3 to 1/2 chance of all‑out nuclear war, yet they pulled back, whereas with AI we are “blindly driving into a fog” despite scientist warnings.
- Argues the only rational response when you don’t know which step leads to annihilation is to stop climbing.
18. **AGI timelines and unimaginable ASI**
- Says just five years ago many AI scientists thought AGI was ~25 years away, but progress anticipated for ~2040 is already emerging.
- States we lack vocabulary for superintelligence (e.g., joking about “12,573 IQ”) and compares our understanding of ASI to a bumblebee understanding macroeconomics.
- Uses a staircase metaphor for intelligence: one step up is huge, two steps beyond humans exceed the human‑chimp gap, and recursive self‑improvement could extend the staircase far beyond.
- Echoes Tim Urban’s line that no one can predict what ASI would do, and anyone claiming to know misunderstands superintelligence.
19. **Power, god‑like ASI, and final hook**
- Asserts that intelligence yields power, so ASI would be the most powerful “being” in Earth’s history, effectively an omnipotent “god on Earth.”
- Frames the key question as whether such a “god” will be “nice,” given all life would be at its whim.
- Directs viewers to a follow‑up video about a realistic superintelligence scenario, then closes with channel sign‑off.
# People mentioned in the video
Named individuals and entities referenced in the narration (excluding random YouTube commenters):
- **Drew (Drew Spartz)** – The narrator/creator, host of the channel “Species | Documenting AGI,” who speaks throughout and adds “Drew from the future here” updates.
- **METR / Meter** – Nonprofit research organization (METR.org) whose long‑tasks benchmark graph underpins much of the argument.
- **Sam Altman** – Referenced as having previously called recursive self‑improvement “really scary,” with OpenAI now racing toward such systems.
- **Mustafa Suleyman** – Referred to as Microsoft AI CEO, quoted as saying we should never pursue recursively self‑improving AI because of extinction risk.
- **Dario Amodei** – CEO of Anthropic, quoted saying Claude now writes ~90% of their code and predicting major impacts on entry‑level white‑collar jobs.
- **Yann LeCun** – AI researcher who argued text‑only training could not yield physical understanding and who is cited as a prominent scaling skeptic.
- **Gary Marcus** – AI critic frequently predicting the end of scaling since 2019, presented as having a poor track record.
- **Tim Urban** – Blogger behind the “Die Progress Unit” concept and the human‑progress graphs used as a conceptual frame.
- **Ray Kurzweil** – Futurist who predicted AGI by 2029 in the 1990s and articulated the stacked S‑curve model of technological progress.
- **Eliezer Yudkowsky** – AI safety researcher who compares AI risk to the Cuban Missile Crisis and emphasizes loss‑of‑control risks.
- **Gordon Moore** – Cited implicitly via “Moore’s law,” the historical trend of chip power doubling, used as a comparison point.
---
```
What are the 8 graphs that are examined in this video?
```
The video explicitly calls out “the eight most important charts,” which are eight distinct graphs about AI capability growth and human technological progress.[[youtube](https://www.youtube.com/watch?v=wDBy2bUICQY)]
![[2026-03-08 22_45_56-Greenshot.png]]
# The eight graphs in the video
1. **METR long‑tasks capability graph (“Moore’s law for AI”)**
- Plots maximum autonomous task length vs time (2019–2025) for large models, showing task duration doubling from seconds‑long to multi‑hour tasks, originally every ~7 months, then ~4 months.[[youtube](https://www.youtube.com/watch?v=wDBy2bUICQY)]
- Used to argue AI can now do multi‑hour software engineering tasks and is on track for 8‑hour, week‑long, and eventually month‑/year‑long tasks.[[youtube](https://www.youtube.com/watch?v=wDBy2bUICQY)]
2. **Updated METR trend line with faster doubling (4‑month line vs 7‑month line)**
- Overlays an older 7‑month doubling trend with a new, steeper 4‑month trend after Claude Opus 4.5 reaches ~5‑hour tasks.[[youtube](https://www.youtube.com/watch?v=wDBy2bUICQY)]
- Shows that instead of slowing, the curve is “bending upward,” with progress accelerating post‑2024.[[youtube](https://www.youtube.com/watch?v=wDBy2bUICQY)]
3. **GPQA benchmark accuracy graph (PhD‑level questions)**
- Plots model accuracy on the GPQA benchmark over time, rising from about 60% to near 90%.[[youtube](https://www.youtube.com/watch?v=wDBy2bUICQY)]
- Demonstrates models surpassing human experts on graduate‑level science questions while still making “embarrassing” errors, illustrating the jagged frontier.[[youtube](https://www.youtube.com/watch?v=wDBy2bUICQY)]
4. **Multi‑domain capability growth chart (physics, chemistry, biology, math, coding)**
- Composite chart showing exponential improvement across multiple benchmarks in different scientific and coding domains, each doubling every 4–7 months.[[youtube](https://www.youtube.com/watch?v=wDBy2bUICQY)]
- Supports the claim that the METR long‑tasks line is not an outlier but matches a broad pattern across domains.[[youtube](https://www.youtube.com/watch?v=wDBy2bUICQY)]
5. **METR data‑points vs “baby meme” extrapolation graph**
- Compares a silly two‑point baby‑weight extrapolation (“7.5 trillion pounds by age 10”) with METR’s ~15‑point, 6‑year dataset.[[youtube](https://www.youtube.com/watch?v=wDBy2bUICQY)]
- The METR curve is shown explicitly not flattening and even accelerating (doubling time dropping from 7 to 4 months), contrasted against the intuitive but wrong “it must level off now” assumption.[[youtube](https://www.youtube.com/watch?v=wDBy2bUICQY)]
6. **Human progress over time graph (Tim Urban / DPU graph)**
- Long‑term curve of human technological progress from prehistory to today, extremely flat for ~100,000 years then steeply rising in recent centuries.[[youtube](https://www.youtube.com/watch?v=wDBy2bUICQY)]
- Used to introduce “Die Progress Units” (100,000‑year, 10,000‑year, 250‑year, then decade‑scale intervals) and to show why linear intuition about the future fails.[[youtube](https://www.youtube.com/watch?v=wDBy2bUICQY)]
7. **Zoomed‑in vs zoomed‑out exponential/S‑curve stack graph**
- Starts with a smooth exponential‑looking curve, then zooms in so it appears flat/linear, then overlays Ray Kurzweil‑style stacked S‑curves.[[youtube](https://www.youtube.com/watch?v=wDBy2bUICQY)]
- Illustrates how GPT‑4‑era plateau looks like “scaling is dead” locally, while new paradigms (reasoning models, multimodal models) start new S‑curves that keep the overall trend exponential.[[youtube](https://www.youtube.com/watch?v=wDBy2bUICQY)]
8. **Lily‑pad and “Lake Michigan vs brain compute” exponential visual**
- Two related visuals: a lily‑pad pond doubling daily (half‑full on day 29 of 30) and a graphic of Lake Michigan filling with “compute” until it matches human‑brain‑equivalent processing.[[youtube](https://www.youtube.com/watch?v=wDBy2bUICQY)]
- Both serve as graphs/animations of exponential growth: nearly flat for most of the period, then rapidly saturating at the end, mirroring AI capability growth.[[youtube](https://www.youtube.com/watch?v=wDBy2bUICQY)]
---
# Sources for Scaling Video
"The METR graph has become a load bearing institution on which our global stock markets depend." — Roon
- Source: [X/Twitter — @tszzl Status (Post)](https://x.com/tszzl/status/1991758841944633763)
- Context: Roon is an OpenAI researcher.
---
"AI task complexity has been doubling roughly every 7 months since 2019. By 2025, the rate accelerated to every 4 months. Today, AI can complete 100x more complex tasks than 5 years ago."
- Primary Source: METR — [Measuring AI Ability to Complete Long Tasks](https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/) (March 19, 2025)
- ArXiv Paper: [2503.14499 — Measuring AI Ability to Complete Long Tasks](https://arxiv.org/abs/2503.14499)
---
"GPT-5.1 can now handle real-world software engineering tasks that take expert humans over three hours of work."
- Source: METR research dataset showing progression through their benchmark [Measuring AI Ability to Complete Long Tasks - METR](https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/)
---
"Claude Opus 4.5 can now complete almost 5-hour-long tasks (approximately 4 hours 49 minutes at 50% success rate)."
- Source: [LessWrong — Claude Opus 4.5 Achieves 50%-Time Horizon of Around 4 hrs 49 min](https://www.lesswrong.com/posts/q5ejXr4CRuPxkgzJD/claude-opus-4-5-achieves-50-time-horizon-of-around-4-hrs-49)
- Official Anthropic Source: [Introducing Claude Opus 4.5](https://www.anthropic.com/news/claude-opus-4-5)
---
"METR has discovered an AI 'Moore's Law' that operates THREE TIMES FASTER than the original Moore's Law (which saw chip power doubling every 1.5–2 years)."
- Background: [Moore's Law (Wikipedia)](https://en.wikipedia.org/wiki/Moore%27s_law)
- METR Analysis: [https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/](https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/)
---
"Gary Marcus has repeatedly predicted the death of scaling since 2019, saying it's finally hitting the wall. Every year, he's been wrong."
- Main Articles:
- [Substack — "Scale Is All You Need" is dead](https://garymarcus.substack.com/p/breaking-news-scale-is-all-you-need)
- [Substack — Breaking: OpenAI's efforts at pure scaling have hit a wall](https://garymarcus.substack.com/p/breaking-openais-efforts-at-pure)
- [Substack — Scaling is over, the bubble may be deflating](https://garymarcus.substack.com/p/scaling-is-over-the-bubble-may-be)
- Key Quotes: Marcus has stated "The myth that you could predict an AI system's performance simply based on how much data and how many parameters you use...is dead" and "Scaling has run out"
- Podcast: [Apple Podcasts — "Is AI Scaling Dead? With Gary Marcus"](https://podcasts.apple.com/gb/podcast/is-ai-scaling-dead-with-gary-marcus/id1522960417?i=1000706655969)
---
Yann LeCun: "Machines cannot learn fundamental physics understanding from text data alone. I can predict that if I have an object like this and I open my hand, it's going to fall. And if I push it with a particular force on the table, it's going to move. If I push the table itself, it's probably not going to move with the same force. Text contains insufficient information about the world to learn these intuitive physics principles."
- Primary Source: [Lex Fridman Podcast Transcript #416 — Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI](https://lexfridman.com/yann-lecun-3-transcript/)
- Podcast Details: Lex Fridman Podcast #416, aired March 2024
- Video: [Lex Fridman Podcast #416 on YouTube](https://lexfridman.com/yann-lecun-3/)
- Spotify: [Lex Fridman Podcast #416 on Spotify](https://open.spotify.com/episode/0bXyQJAylGg7E5mZnyHhNb)
- Apple Podcasts: [Lex Fridman Podcast #416 on Apple Podcasts](https://podcasts.apple.com/us/podcast/416-yann-lecun-meta-ai-open-source-limits-of-llms/id1434243584?i=1000648406568)
---
"AI has passed the Music Turing Test in blind listening tests where humans cannot distinguish AI-generated from human-made music."
- Primary Paper: [ArXiv 2509.25601 — Echoes of Humanity: Exploring the Perceived Humanness of AI Music](https://arxiv.org/abs/2509.25601)
- PDF: [https://www.arxiv.org/pdf/2509.25601](https://www.arxiv.org/pdf/2509.25601)
- Study Details: Researchers used Suno (commercial AI music model) and conducted blind Turing-like tests with randomized controlled crossover trials
---
AI wins art competitions:
- Primary Source: [CNN Business — AI won an art contest, and artists are furious (September 3, 2022)](https://www.cnn.com/2022/09/03/tech/ai-art-fair-winner-controversy)
- Additional Coverage: [Washington Post — He used AI to win a fine-arts competition. Was it cheating?](https://www.washingtonpost.com/technology/2022/09/02/midjourney-artificial-intelligence-state-fair-colorado/)
- Details: Allen spent 80+ hours iterating through 900 versions; won $300 prize; judge Cal Duran confirmed the decision was sound
---
"AI-generated music has debuted on Billboard charts.”
- Billboard Coverage: [How Many AI Artists Have Debuted on Billboard's Charts?](https://www.billboard.com/lists/ai-artists-on-billboard-charts)
- News Coverage: [ABC News — AI-generated country song tops Billboard chart (November 2025)](https://abcnews.go.com/GMA/Culture/ai-generated-country-song-topping-billboards-country-digital/story?id=127445549)
- Additional Reports: [NME — AI-generated country track 'Walk My Walk' tops US Billboard chart](https://www.nme.com/news/music/ai-generated-country-track-walk-my-walk-tops-us-billboard-chart-3908829)
---
### Tim Urban's "Wait But Why" — The AI Revolution (2015)
- Part 1: [Wait But Why — The Artificial Intelligence Revolution: Part 1](https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html) (January 22, 2015)
- Part 2: [Wait But Why — The Artificial Intelligence Revolution: Part 2](https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html)
---
"Lake Michigan has roughly the same volume in ounces of water as the human brain has in computations per second. If computing power doubles every 18 months like Moore's Law, the lake looks completely empty, then in the final seconds it's filled—meaning we've hit the same computer processing power as the human brain. We're in the last seconds of this video."
- Visual Source: [Mother Jones — Lake Michigan Animation](https://assets.motherjones.com/media/2013/05/LakeMichigan-Final3.gif)
- Attribution: Referenced in Wait But Why's exponential growth framework
---
"A Die Progress Unit (DPU) represents technological progress so drastic that a person from an earlier era would die of shock. Urban traces: hunter-gatherer era DPU = 100,000 years; agricultural revolution DPU = 12,000 years; industrial revolution DPU = 250 years; modern era DPU = ~10-20 years remaining."
- Source: [Wait But Why — The AI Revolution Part 1](https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html)
- Visualization: Urban uses the thought experiment of bringing George Washington (1750) to 2025 to illustrate exponential shock
---
"Every exponential curve is actually made up of stacked S-curves. When one paradigm plateaus (S-curve flattens), a new paradigm arrives, maintaining the exponential trend overall."
- Source: [Wait But Why — S-Curves graphic](https://waitbutwhy.com/wp-content/uploads/2015/01/S-Curves2.png)
- Tim Urban Explanation: [https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html](https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html)
---
"'There is no way to know what ASI will do or what the consequences will be for us. Anyone who pretends otherwise doesn't understand what superintelligence means.' Urban describes ASI as potentially 'an omnipotent God on Earth' with power beyond human comprehension, asking whether it would be a 'nice God.'"
- Source: [Wait But Why — The AI Revolution Part 2](https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html)
---
### Geoffrey Hinton — "Godfather of AI" Quits Google (May 2023)
"Geoffrey Hinton, a co-inventor of deep learning and called the 'Godfather of AI,' left Google in May 2023 to speak publicly about AI risks, stating: 'It is hard to see how you can prevent the bad actors from using it for bad things.'"
- Primary Source: [CNN Business — Geoffrey Hinton: AI pioneer quits Google to warn about the technology's 'dangers' (May 1, 2023)](https://www.cnn.com/2023/05/01/tech/geoffrey-hinton-leaves-google-ai-fears/index.html)
- Additional Coverage:
- [Fortune — 'The Godfather of A.I.' just quit Google](https://fortune.com/2023/05/01/godfather-ai-geoffrey-hinton-quit-google-regrets-lifes-work-bad-actors/)
- [MIT Technology Review — Deep learning pioneer Geoffrey Hinton quits Google](https://www.technologyreview.com/2023/05/01/1072478/deep-learning-pioneer-geoffrey-hinton-quits-google/)
- [CBS News — "Godfather of artificial intelligence" Geoffrey Hinton leaves Google](https://www.cbsnews.com/news/artificial-intelligence-geoffrey-hinton-leaves-google-dangers-of-ai/)
---
"Dario Amodei estimates a 25% probability that AI will go 'really, really badly' (existential catastrophe scenario)."
- Primary Source: [Axios — Amodei on AI: "There's a 25% chance that things go really, really badly" (September 17, 2025)](https://www.axios.com/2025/09/17/anthropic-dario-amodei-p-doom-25-percent)
- Full Quote: YouTube: [Anthropic’s Dario Amodei & Jack Clark & Axios’ Jim VandeHei](http://youtube.com/watch?si=Fu-zhSGVYsmiIKdQ&t=1044&v=nvXj4HTiYqA&feature=youtu.be), 17:24: “I definitely think between the autonomous danger of the model and kind of ending up on the bad side of some national security tradeoffs, and a kind of job thing that kind of goes in a very bad direction. I don't know. I'm relatively an optimist. So, I think there's a 25% chance that things go really, really badly and a 75% chance that things go really, really well with not much space between.”
- Additional Coverage:
- [TechRadar — Anthropic's CEO gives 'a 25% chance things go really, really badly' with AI](https://www.techradar.com/ai-platforms-assistants/claude/anthropics-ceo-gives-a-25-percent-chance-things-go-really-really-badly-with-ai)
- [Windows Central — Anthropic CEO warns of a 25% chance](https://www.windowscentral.com/artificial-intelligence/anthropic-ceo-warns-25-percent-chance-ai-threatens-job-losses)
---
"Dario Amodei confirmed that '70, 80, 90% of code' at Anthropic is now written by Claude AI, with some reports citing 90% as the percentage of code touched by AI."
- Primary Coverage:
- [TokenRing — 90% of Claude's Code is Now AI-Written: Anthropic CEO Confirms Historic Shift](https://markets.financialcontent.com/wral/article/tokenring-2026-1-13-90-of-claudes-code-is-now-ai-written-anthropic-ceo-confirms-historic-shift-in-software-development)
- [The Hans India — Anthropic CEO Dario Amodei Says Claude Now Writes 90% of Code](https://www.thehansindia.com/tech/anthropic-ceo-dario-amodei-says-claude-now-writes-90-of-code-but-human-engineers-remain-indispensable-1015645)
- [OfficeChAI — My Prediction Of AI Writing 90% Of Code Is Already True At Anthropic](https://officechai.com/ai/my-prediction-of-ai-writing-90-of-code-is-already-true-at-anthropic-anthropic-ceo-dario-amodei/)
---
"Yoshua Bengio (co-recipient of Turing Award with Hinton and LeCun) stated: 'We are blindly driving into a fog, despite the warnings of scientists like myself, that this trajectory could lead to loss of control.' He emphasizes that these companies have stated goals of creating machines smarter than us."
- Primary Source: [Yoshua Bengio — The Catastrophic Risks of AI (TED2025 Talk)](https://singjupost.com/transcript-of-the-catastrophic-risks-of-ai-and-a-safer-path-yoshua-bengio/)
- Additional Quotes Verified:
- [CNBC — AI systems could 'turn against humans': Tech pioneer Yoshua Bengio warns (November 21, 2024)](https://www.cnbc.com/2024/11/21/will-ai-replace-humans-yoshua-bengio-warns-of-artificial-intelligence-risks.html)
- [Futurism — Godfather of AI Says We're Barreling Straight Toward Human Extinction](https://futurism.com/artificial-intelligence/yoshua-bengio-ai-extinction)
---
### Sam Altman — Self-Improving AI is "Really Scary" (2015)
"Once AI can automate month-long or year-long tasks, it will be able to automate AI research itself—self-improving AI. This is really scary because development progress 'may look relatively slow and then all of a sudden go vertical—things could get out of control very quickly.'"
- Primary Source: [Sam Altman — Machine Intelligence, Part 1 (February 26, 2015)](https://blog.samaltman.com/machine-intelligence-part-1)
- Key Quotes from Blog:
- "Recursive self-improvement is a powerful force"
- "Development progress may look relatively slow and then all of a sudden go vertical—things could get out of control very quickly"
- Context: Altman discusses machine intelligence as a "double exponential function" where both computing power and software improve exponentially. Self-learning systems improve themselves exponentially, creating a compounding effect where progress appears gradual until suddenly accelerating dangerously fast, potentially beyond human control.
- Related: [Sam Altman — Machine Intelligence, Part 2](https://blog.samaltman.com/machine-intelligence-part-2)
- Historical Note: Written in 2015, before OpenAI was founded; shows that concerns about recursive self-improvement have been central to Altman's thinking for over a decade. More recently (October 2025), Altman stated OpenAI aims to have automated AI researchers by March 2028.
---
"Microsoft AI CEO Mustafa Suleyman says we should never allow AI recursive self-improvement because it's too likely to cause existential extinction. You wouldn't want to let AI update its own code without human oversight."
- Primary Source: [80,000 Hours Podcast — Mustafa Suleyman: Getting Washington and Silicon Valley to tame AI](https://80000hours.org/podcast/episodes/mustafa-suleyman-getting-washington-and-silicon-valley-to-tame-ai/)
- Key Quote: "You wouldn't want to let your little AI go off and update its own code without you having oversight"
- Additional Statement: Suleyman suggested such activity should be a licensed activity, "just like for handling anthrax or nuclear materials"
- TED Talk Statement: At TED 2024, Suleyman stated: "To avoid existential risk, we should avoid: 1) Autonomy 2) Recursive self-improvement 3) Self-replication. We have a good 5 to 10 years before we'll have to confront this"
- Coverage:
- [Fortune — DeepMind founder says AI existential risk 'completely bonkers distraction'](https://fortune.com/2023/09/19/ai-existential-risk-threat-bonkers-distraction-regulation-deepmind-mustafa-suleyman/)
- [WebProNews — Microsoft AI CEO Warns of Existential Risks, Urges Global Regulations](https://www.webpronews.com/microsoft-ai-ceo-warns-of-existential-risks-urges-global-regulations/)
- Note: Suleyman's position evolved from calling existential risk "completely bonkers distraction" (2023) to warning explicitly about recursive self-improvement risks (2024-2025)
---
"AI has discovered new physics laws, including non-reciprocal forces in dusty plasma with 99%+ accuracy."
- Primary Source: [https://www.pnas.org/doi/epdf/10.1073/pnas.2505725122](https://www.pnas.org/doi/epdf/10.1073/pnas.2505725122)
- Additional Reports:
- [Phys.org — AI reveals unexpected new physics in dusty plasma](https://phys.org/news/2025-08-ai-reveals-unexpected-physics-dusty.html)
- [Popular Mechanics — An AI System Found a New Kind of Physics that Scientists Had Never Seen Before](https://www.popularmechanics.com/science/a65606443/ai-discovery/)
---
"On the GPQA benchmark (graduate-level Google-proof Q&A), AI models are approaching 90% accuracy on PhD-level science questions, surpassing human experts who typically achieve 65% accuracy."
- Primary Paper: [ArXiv 2311.12022 — GPQA: A Graduate-Level Google-Proof Q&A Benchmark](https://arxiv.org/abs/2311.12022)
- OpenReview: [GPQA Dataset on OpenReview](https://openreview.net/pdf?id=Ti67584b98)
---
"Just five years ago, AI scientists predicted we were 25 years away from AGI. But progress scientists predicted for 2040 IS ALREADY HAPPENING."
- Primary Source: [80,000 Hours — Shrinking AGI timelines: a review of expert forecasts (March 2025)](https://80000hours.org/2025/03/when-do-experts-expect-agi-to-arrive/)
- 2020: Experts predicted AGI ~50 years away (circa 2070)
- 2025: Experts predict AGI around 2040-2045 (~20 years away)
- Recent Forecasts: 25% chance of AGI by 2027; 50% by 2031
- Additional Sources:
- [Benjamin Todd — Shortening AGI timelines: a review of expert forecasts](https://benjamintodd.substack.com/p/shortening-agi-timelines-a-review)
- [AI Multiple — AGI/Singularity: 9,300 Predictions Analyzed in 2026](https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/)
- [Educational Technology and Change Journal — Predictions for the Arrival of Singularity (Oct 2025)](https://etcjournal.com/2025/10/26/predictions-for-the-arrival-of-singularity-as-of-oct-2025/)
---
"Ray Kurzweil predicted AGI by 2029 and the Singularity by 2045. He made the 2029 AGI prediction in 1999, calculating when computing power would reach 1 trillion calculations per second."
- Primary Book: [Ray Kurzweil — The Singularity Is Near (2005) — Wikipedia](https://en.wikipedia.org/wiki/The_Singularity_Is_Near)
- Original Prediction: Kurzweil first made this prediction in 1999 in The Age of Spiritual Machines
- Early Work: [Ray Kurzweil — The Age of Intelligent Machines (1990) — Wikipedia](https://en.wikipedia.org/wiki/The_Age_of_Intelligent_Machines)
- Coverage: [Popular Mechanics — A Scientist Says Humans Will Reach the Singularity Within 20 Years](https://www.popularmechanics.com/science/a65253231/2045-singularity-ray-kurzweil-prediction/)
---
"Eliezer Yudkowsky uses an 'escalation ladder' analogy where each rung represents AI capability levels. The problem is we don't know which rung leads to catastrophic loss of control, but societies racing to climb continue ascending despite the existential uncertainty."
- Primary Book: [If Anyone Builds It, Everyone Dies (Yudkowsky & Soares)](https://www.alignmentforum.org/)
- Coverage:
- [AI Frontiers — Summary of "If Anyone Builds It, Everyone Dies"](https://ai-frontiers.org/articles/summary-of-if-anyone-builds-it-everyone-dies)
- [Transformer News — Book Review: 'If Anyone Builds It, Everyone Dies'](https://www.transformernews.ai/p/review-if-anyone-builds-it-everyone-dies-yudkowsky-soares)
- X/Twitter: [Eliezer Yudkowsky on AI escalation](https://x.com/ESYudkowsky/)
---
"During the Cuban Missile Crisis, President Kennedy estimated the probability of nuclear war at approximately 'one chance in three' to 'even odds' (roughly 33-50%)."
- JFK Library: [Cuban Missile Crisis Documentation](https://www.jfklibrary.org/learn/about-jfk/jfk-in-history/cuban-missile-crisis)
- Arms Control Association: [The Risk of 'Blundering' Into Nuclear War: Lessons From the Cuban Missile Crisis](https://www.armscontrol.org/act/2017-12/features/risk-blundering-nuclear-war-lessons-cuban-missile-crisis)
- Harvard Kennedy School: [At 50, the Cuban Missile Crisis as Guide](https://www.hks.harvard.edu/publications/50-cuban-missile-crisis-guide)
---
"In 2024, researchers surveyed thousands of AI scientists. On average, they estimated a 16% chance that AI causes human extinction."
- Paper: [Katja Grace et al. — Thousands of AI Authors on the Future of AI (ArXiv 2401.02843)](https://arxiv.org/abs/2401.02843)
- Official Release: [AI Impacts Survey Release (PDF)](https://aiimpacts.org/wp-content/uploads/2024/01/EMBARGOED_-AI-Impacts-Survey-Release-Google-Docs.pdf)