Feature · Industry · Vibe Builder

Five AI supply chain leaders discuss challenges at Milken Conference

By Harsh Desai

TL;DR

Five AI supply chain leaders discussed chip shortages, orbital data centers, and AI architecture flaws at the Milken Institute Global Conference in Beverly Hills. TechCrunch covered their insights.

What changed

Five leaders spanning the AI supply chain gathered at the Milken Institute Global Conference to dissect mounting issues: chip shortages delaying model training, orbital data centers as a compute alternative, and questions over foundational AI designs. The session exposed vulnerabilities across hardware, infrastructure, and algorithms.

Why it matters

Developers face extended timelines for AI projects due to Nvidia H100 GPU shortages, with only 2 million units shipped in 2025 against 5 million demanded. Everyday users will notice slower rollouts of AI features such as chatbots as firms ration compute. Vibe builders experimenting with generative tools must adapt their creative workflows to these constraints.

What to watch for

Track AMD MI300X chips as an alternative to Nvidia for faster availability in training runs. Compare orbital data center proposals from Lonestar Data Holdings against AWS ground facilities for latency metrics. Verify shifts by checking MLPerf inference benchmarks on Hugging Face for architecture changes.

Who this matters for

  • Vibe Builders: Pivot your creative workflows to prioritize lightweight models that function despite compute rationing.
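To gauge whether a "lightweight" model actually fits your hardware before committing to a workflow, a common back-of-the-envelope calculation is parameter count × bytes per parameter, plus headroom for activations and the KV cache. The function name and the 1.2× overhead factor below are illustrative assumptions, not figures from the panel; treat this as a rough sizing sketch, not a guarantee.

```python
def estimate_vram_gb(params_billions, bytes_per_param=2.0, overhead=1.2):
    """Rough VRAM (in GB) needed to run a model's weights for inference.

    bytes_per_param: 2.0 for fp16/bf16, 1.0 for int8, 0.5 for 4-bit quantization.
    overhead: illustrative fudge factor for activations and KV cache.
    """
    return params_billions * bytes_per_param * overhead

# Compare a 7B-parameter model in fp16 vs. 4-bit quantization:
print(f"7B fp16:  ~{estimate_vram_gb(7, 2.0):.1f} GB")   # too big for most consumer GPUs
print(f"7B 4-bit: ~{estimate_vram_gb(7, 0.5):.1f} GB")   # fits an 8 GB card
```

The takeaway for rationed-compute workflows: quantization shrinks the footprint roughly fourfold, which is often the difference between needing a scarce data-center GPU and running locally.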

Harsh's take

The AI supply chain is hitting a wall of physical reality. Silicon scarcity and energy constraints are no longer theoretical problems for future roadmaps. They are immediate bottlenecks forcing companies to ration compute, which kills the pace of innovation for anyone relying on massive, centralized models.

The industry is currently overextended on hardware promises that manufacturers cannot fulfill. This reality check forces a shift toward efficiency over brute force. Projects built on the assumption of infinite, cheap compute will fail as costs rise and availability drops.

Smart operators are already moving toward smaller, optimized models that run locally or on less constrained hardware. Expect a brutal consolidation phase where projects lacking a clear path to compute sustainability vanish from the market. The era of reckless model scaling is ending.


Source: techcrunch.com

