12,399 views · Mar 9, 2026

Dylan Patel breaks down the current chaos inside the world's top AI companies. Dylan is the founder and CEO of SemiAnalysis, one of the best analyst firms covering everything AI and semiconductors.

Chapters:

- 1:13 – Dylan's predictions
- 7:47 – Anthropic vs DoW
- 15:08 – War Claude
- 22:00 – How happiness in society works
- 31:31 – Knowledge work is cooked
- 38:22 – Is SaaS dead?
- 45:18 – New Media landscape
- 48:16 – White collar bloodbath
- 52:38 – Open Source is Losing
- 1:04:45 – Chinese AI Distillation Attacks
- 1:09:52 – Closed Source VS Open Source
- 1:19:43 – Microsoft CEO is coping
- 1:26:55 – Who wins the ASI race?
Matthew Berman interviews analyst Dylan Patel about AI labs, geopolitics, jobs, and infrastructure; Patel argues we are in a brutal transition in which closed‑source frontier labs, massive data centers, and agentic tools like Claude Code reshape work, politics, and global power.

---

# Detailed outline of the video

## 0:00–7:47 – Cold open, catch‑up, and prediction scorecard

- Brief cold‑open clips: Patel on Claude Code, executives watching podcasts, AI and nuclear missiles, Sam Altman's opportunism, UBI, and the "buffet of dopamine."
- Host Matthew Berman welcomes Dylan Patel for "round two," notes the new office, and recalls their last conversation ~8 months earlier, which he says felt like "3 years in AI."
- Berman reads back Patel's prior predictions to score them:
  - GPT‑4.5 will be too slow and too expensive.
  - Scale AI is "cooked."
  - The junior dev market is "nuked."
  - OpenAI is his pick to reach superintelligence first.

## 7:47–22:00 – Anthropic vs Department of War ("DoW") and "War Claude"

- Berman asks about the U.S. government blacklisting Anthropic as a "supply chain risk" and OpenAI signing a deal with the Department of War the same day (then revising it).
- Patel outlines two narratives:
  - Anthropic is being punished for not "bending the knee" to President Trump and not donating politically.
  - Anthropic is deeply dogmatic on safety and autonomy, to the point of being self‑sabotaging (e.g., Dario's hypothetical about nuclear missiles and AI‑driven surveillance/autonomy).
- He contrasts Anthropic's policy team (heavily ex‑Biden) with OpenAI's and Microsoft's bipartisan approach, comparing "slithering snake" pragmatism with moral dogmatism.
- He notes Anthropic was first with classified‑environment access, but OpenAI "swooped in" after Anthropic "lost the plot," and that Anthropic's internal culture makes it hard to compromise without a staff revolt.
- Patel sketches a likely future in which Anthropic eventually cooperates on mass surveillance and autonomous weapons but frames it internally as being "forced by the government," preserving its alignment culture.
- Discussion of Claude being the only model deployed in classified networks, likely an older Claude Sonnet 3.5/3.6 running on‑prem, and why ripping it out within six months is technically feasible: swap in newer OpenAI or other weights.

## 22:00–38:22 – Happiness, social unrest, and "knowledge work is cooked"

- Patel explains why he thinks AI accelerates social unrest:
  - A 50‑year trend of capital taking share from labor.
  - Social media amplifying perceived inequality and unrealistic lifestyles.
  - People in the 90th percentile believing they're "middle class" while those at the 50th feel poor.
- He argues objective living standards (housing, medicine, food) have improved, but perception has worsened, and AI concentrates growth in a narrow slice: electricians, construction, semiconductors, and capital allocators.
- He forecasts visible job losses: robo‑taxis (Waymo, etc.) eliminating millions of driving jobs, and white‑collar productivity allowing tiny teams to build enterprises.
- Patel shares SemiAnalysis anecdotes: massive Claude Code token bills, non‑coders such as operations and data‑center analysts using Claude Code to build tools, and his own existential joke that his business will be gone in 2–3 years unless it outruns AI.
- He frames AI adoption as a Jevons paradox: higher efficiency yields more total usage and more hiring at leading firms, but those firms are cannibalizing competitors rather than offsetting overall job losses.
- He personally remains optimistic and says he's hiring aggressively, but believes many non‑enterprising workers will be "cooked" in the transition.

## 38:22–52:38 – Is SaaS dead, new media, and the "white‑collar bloodbath"

- Berman raises Satya Nadella's comment that the "application layer collapses into agents," and asks what remains besides agents and storage/CRUD backends.
- Patel thinks it's easier to name losers than winners in software, but views scalable data/compute platforms like Databricks and Snowflake as relatively well positioned, because vibe‑coded agent workflows still need robust data/compute substrates.
- Inside SemiAnalysis, the "head of data" role has morphed into making others' vibe‑coded systems scalable and production‑ready, underscoring the shift from spec‑writing plus implementation teams to domain experts working directly with agents.
- They discuss Berman's experience: he hired a researcher, then Claude/OpenClaw automated most research and outline creation, yet the researcher is busier than ever building tools and sites: evidence of role mutation, not immediate elimination.
- Patel highlights his own hiring spree (10 hires this year, ~20 open roles) while acknowledging the revenue is pulled from slower competitors who will eventually lay off workers.
- The conversation shifts to media: cheap content generation splinters attention, destroys the American monoculture, and erodes the economics of traditional TV and financial news.
- Patel contrasts CNBC's tiny average viewership (~100k) with lean creator teams producing more engaging content for a fraction of the cost, arguing that new media is "eating" traditional media jobs.

## 52:38–1:09:52 – Open source is losing, local compute, and distillation

- They examine the Anthropic blog post accusing several Chinese labs (e.g., DeepSeek, MiniMax, Kimi) of distilling Claude; Patel notes unusually high Chinese‑language traffic to Anthropic/OpenAI via proxies in Korea and Japan.
- He believes some Chinese models are clearly distilled from U.S. models, possibly via intermediaries (code tools like Cursor/Replit/Lovable) that mask origins, and that even modest volumes of distilled data can significantly improve smaller models.
- On DeepSeek V4, Patel says:
  - It was trained on Nvidia Blackwell GPUs, likely in Southeast Asia, not on Chinese domestic chips.
  - Weights can easily be moved over encrypted links; physical suitcase transfers are unnecessary for model weights.
  - He expects V4 to be strong but not as shockingly disruptive as the earlier R1 release.
- He argues closed‑source usage is pulling away: most production workloads lean on frontier closed models, while open‑source models and local setups are mostly for hobbyists and tinkerers.
- Berman pushes the "local renaissance" narrative (M‑series Macs, DGX Spark, RTX 5090, running small models that match last year's SOTA) and notes he offloads specific workloads locally for latency and privacy.
- Patel responds that:
  - For serious throughput, daisy‑chaining consumer or small "AI PC" boxes is slow and cost‑inefficient versus renting data‑center GPUs.
  - Global shortages of advanced wafers and DRAM mean capital will favor hardware that delivers maximum tokens per bit and per mm², i.e., cloud data‑center GPUs, not consumer boards.
  - Nvidia and DRAM makers are already reallocating supply from gaming, phones, and PCs toward data centers; Xiaomi and others are reportedly cutting phone output because AI is consuming their chip supply.
- They agree that for most users and workloads, cloud inference will dominate and local inference will remain niche, except for specialized/robotics use and personal tinkering.

## 1:09:52–1:26:55 – Politics, regulation, and "Microsoft CEO is coping"

- Patel predicts AI will dominate U.S. politics in the coming election, with Democrats likely to pivot to an explicitly anti‑AI, pro‑worker stance, because public sentiment is already majority‑negative on AI and worsening as job losses and resource crowding become visible.
- He describes:
  - Hyperscalers like Google planning to reinvest all cash flow into compute, data centers, energy, and AI (e.g., forecasting Google may show near‑zero free cash flow in some years).
  - Massive capex crowding out other sectors: electricians, construction, cooling, and energy pulled into data centers, raising prices elsewhere and fueling resentment.
- Patel anticipates bipartisan anti‑AI factions (e.g., AOC‑style left populists and some right populists) and fears regulation that could slow U.S. labs and let China catch up.
- On Microsoft, he notes:
  - Microsoft cut back planned AI capex versus earlier trajectories, forcing OpenAI to seek capacity from Oracle, SoftBank, CoreWeave, Amazon, and now Google.
  - Copilot adoption has been weak relative to Claude Code, Cursor, and other tools; internal GitHub reliability issues and slow infrastructure decisions hamper them.
  - Microsoft is losing share in both compute and AI software monetization compared with Amazon and Google, and Nadella's recent warning that a single algorithmic breakthrough could "break the math" of capex is, in Patel's view, "major cope."
- Patel argues that:
  - Scaling laws for pre‑training and RL remain intact.
  - Each new model generation lowers cost by roughly hundreds‑to‑thousands‑fold for a given capability and/or dramatically raises capability at similar cost (citing Google's recent Gemini‑Flash‑style releases as examples).
  - Anthropic, OpenAI, and Google report no sign of a scaling plateau, so he sees the capex as rational, not foolish.

## 1:26:55–1:29:32 – Who wins the ASI race?

- Berman re‑asks the original question from their prior conversation: who reaches artificial superintelligence (ASI) first?
- Patel notes:
  - Last time he said OpenAI, expecting a rough patch followed by a comeback; indeed, OpenAI struggled against Gemini and Anthropic for a while and regained user growth with recent models, but Anthropic's revenue growth remains extraordinary.
  - If you extrapolate revenue, he expects Anthropic to overtake OpenAI by around April on that metric.
  - The consensus among "people paying attention," he says, is that Anthropic is ahead culturally and technically on alignment and recursive improvement.
- Precisely because consensus now favors Anthropic, Patel contrarianly picks **OpenAI** again as his ASI winner, half‑joking that he'll be "cooked" online for saying it.
- Berman closes with thanks and a YouTube‑style prompt pointing viewers to the algorithm's recommended next video.

---

# Biographies of the speakers

## Dylan Patel

- Dylan Patel is the founder and CEO of **SemiAnalysis**, an independent semiconductor and AI infrastructure research and consulting firm.
- SemiAnalysis produces deep technical and economic analysis of chips, data centers, hyperscalers, and AI labs, and Patel personally advises hedge funds, corporates, and infrastructure providers on compute, memory, and AI trends.
- In the conversation, he describes his company as fast‑moving, young (average age ~30), and heavily reliant on Claude Code and other AI tools for data‑center modeling, token economics, and new lines of business such as energy modeling.

## Matthew Berman

- Matthew Berman is a creator and interviewer focused on AI, tools, and the future of work, hosting long‑form conversations on his YouTube channel "Matthew Berman."
- He runs the **Forward Future** media/education brand, publishes an AI newsletter, and curates AI tools via a directory at tools.forwardfuture.ai.
- Berman uses Claude/OpenClaw deeply in his own workflow (e.g., as a first‑class "employee" with its own email and Drive handling outbound and inbound sales triage) and produces guides such as "The Subtle Art of Not Being Replaced" and "Humanity's Last Prompt Engineering Guide."

---

# Predictions and outcomes (so far)

**Key AI and macro predictions discussed in the interview and their current status as of March 2026.**

|#|Prediction (who/when)|Content of prediction or claim|Evidence/outcome so far in video|Status (as framed in video)|
|---|---|---|---|---|
|1|Patel, prior episode (8 months before)|GPT‑4.5 would be too slow and too expensive|Berman says Patel "absolutely nailed" it; Patel says 4.5 lacked data, was infrastructurally complicated, and is now unavailable to users or via API despite being a good model.|Correct so far (per both)|
|2|Patel, prior episode|Scale AI is "cooked"|Patel says Scale AI has had departures and lost ground in RL‑environment tooling, though its core labeling business is "doing fine"; Meta's acquisition was mainly about Alex Wang leading Meta's superintelligence effort.|Partially correct (strategic miss, not a dead company)|
|3|Patel, prior episode|Junior dev market "nuked"|Patel observes fresh grads have a harder time finding jobs; huge AI‑coding productivity (e.g., Anthropic's reported ~$19B code revenue; SemiAnalysis' massive Claude Code spend) lets non‑devs do coding work.|Broadly validated (market tighter, automation rising)|
|4|Patel, prior episode|OpenAI will be first to superintelligence (ASI)|He reiterates that was his earlier call; he now notes Anthropic's explosive revenue and cultural edge, yet still picks OpenAI again, partly as a contrarian stance.|Unresolved; re‑affirmed in this video|
|5|Patel (current)|Anthropic revenue will surpass OpenAI's by about April 2026 if current trajectories hold|He cites revenue additions of ~$4B in January and ~$5B in February for Anthropic and projects a crossover by April.|Pending; projection within the next 1–2 months|
|6|Patel (current)|AI‑driven job losses (drivers and white‑collar) will escalate soon|He cites imminent large‑scale robo‑taxi (Waymo) deployments and increasing white‑collar automation via Claude Code and agents, especially for non‑enterprising workers.|In progress; framed as a near‑term inevitability|
|7|Patel (current)|The Democratic Party will become strongly anti‑AI and win the next election as the "anti‑AI party"|He notes >50% of Americans already hold negative views of AI and expects Democrats to seize anti‑AI rhetoric as jobs, prices, and stock volatility get blamed on AI capex.|Unresolved (future U.S. election)|
|8|Patel (current)|UBI (universal basic income) becomes necessary/acceptable to prevent social breakdown|He says he has shifted from staunch capitalist to seeing UBI as "perfectly fine" given the coming dislocation and surplus concentration.|Normative prediction; policy outcome unknown|
|9|Patel (current)|Closed‑source frontier labs will continue outpacing open‑source models|He argues cloud usage share and compute advantages (multi‑GW for OpenAI/Anthropic) ensure closed models stay ahead; says DeepSeek V4 is likely strong but not an R1‑level shock, and the gap will widen again.|So far, he claims labs are still ahead and accelerating|
|10|Patel (current)|Local/consumer AI hardware will remain niche vs data centers|He points to wafer and DRAM shortages, rising memory prices, and Nvidia reallocating supply to data centers; asserts tokens‑per‑bit economics favor hyperscale GPUs, not DGX Spark or consumer gear.|Trend currently favors his view (per his account)|
|11|Patel (current)|Hyperscaler capex is rational; scaling laws will not soon break|He cites ongoing pre‑training and RL scaling at Anthropic, OpenAI, and Google and says cost per capability still falls by orders of magnitude; dismisses Nadella's "one algorithm could break the math" line as "cope."|Supported by current lab behavior as described, but long‑term unresolved|
|12|Patel (current)|Chinese models will continue distilling from U.S. labs but remain compute‑constrained|He claims clear signs of distillation via intermediaries and says the Chinese compute gap vs the U.S. frontier will widen again as U.S. labs ramp GW‑scale clusters.|Distillation alleged; overall trajectory framed as U.S. advantage widening|

---
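Prediction #5 rests on simple run‑rate extrapolation. The sketch below illustrates that arithmetic only; every starting figure and OpenAI's monthly addition are hypothetical assumptions, since the interview cites only Anthropic's ~$4B and ~$5B monthly additions and a projected April crossover.

```python
# Toy revenue run-rate crossover model (illustrative assumptions only).
# The interview gives Anthropic's monthly run-rate additions (~$4B, ~$5B);
# starting run-rates and OpenAI's additions below are invented for the example.

def crossover_month(a0, o0, a_adds, o_add):
    """Return the 1-indexed month in which A's run-rate first meets or
    exceeds O's, given each side's monthly additions, or None if it never
    happens within the projected window."""
    a, o = a0, o0
    for month, a_add in enumerate(a_adds, start=1):
        a += a_add       # A's run-rate grows by that month's addition
        o += o_add       # O grows by a flat monthly addition
        if a >= o:
            return month
    return None

# Assumed ($B annualized): Anthropic starts at 14, OpenAI at 26;
# Anthropic's additions accelerate 4, 5, 6, 7; OpenAI adds a flat 2.
# With January as month 1, a crossover in month 4 corresponds to April.
month = crossover_month(14, 26, [4, 5, 6, 7], 2)
print(month)  # → 4
```

The point of the sketch is that a crossover projection is highly sensitive to the assumed baselines and to whether the leader's additions stay flat; the video presents only the trajectory claim, not these inputs.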