
Welcome to this Cloud Wars Agent and Copilot Minute. In these discussions, I’ll be analyzing opportunities, impact, and outcomes possible with AI; this episode focuses on the importance of combining fast compute with scalable data infrastructure to support AI applications.
Highlights
00:25 — OpenAI partnered with Cerebras to bring 750 megawatts of ultra-low-latency AI compute into its platform. Cerebras makes wafer-scale AI chips designed to eliminate data bottlenecks during AI inference. OpenAI has also scaled PostgreSQL to support 800 million users, which is crucial for handling millions of queries per second and managing user data at global scale. These investments are foundational to optimizing for real-time interactions and ensuring AI systems are not starved for data.
01:39 — The Cerebras partnership improves how fast AI answers questions, while the PostgreSQL developments enhance how AI systems store, fetch, and manage data. These investments are complementary, demonstrating the importance of both fast compute and scalable data infrastructure.
03:40 — Microsoft’s Azure Database for PostgreSQL supports extremely high-throughput workloads, showcasing the importance of managed data services, while OpenAI’s PostgreSQL scaling story (running on Azure) emphasizes the need for architecting data and AI together. Winning with AI isn’t just about bigger models; it’s about resilient infrastructure.
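To make the data-infrastructure side concrete: a common pattern for scaling PostgreSQL to high read volumes is routing read-only queries to replicas while sending writes to the primary. The sketch below is purely illustrative — the `QueryRouter` class and DSNs are hypothetical and do not represent OpenAI's or Azure's actual architecture:

```python
# Illustrative read/write query router for a replicated PostgreSQL setup.
# All class names and connection strings here are hypothetical examples.
import itertools


class QueryRouter:
    """Send writes to the primary; round-robin reads across replicas."""

    def __init__(self, primary_dsn, replica_dsns):
        self.primary_dsn = primary_dsn
        self._replicas = itertools.cycle(replica_dsns)

    def route(self, sql: str) -> str:
        # Read-only statements can be served by a replica; everything
        # else (INSERT/UPDATE/DELETE/DDL) must hit the primary. Note:
        # a WITH query containing data-modifying CTEs would also need
        # the primary — a production router must parse more carefully.
        first_word = sql.lstrip().split(None, 1)[0].upper()
        if first_word == "SELECT":
            return next(self._replicas)
        return self.primary_dsn


router = QueryRouter(
    "postgresql://primary.example/appdb",
    ["postgresql://replica1.example/appdb",
     "postgresql://replica2.example/appdb"],
)
print(router.route("SELECT * FROM users WHERE id = 1"))  # a replica DSN
print(router.route("UPDATE users SET name = 'x'"))       # the primary DSN
```

Combined with connection pooling and partitioning, this read/write split is one of the levers that lets a single logical PostgreSQL deployment absorb millions of queries per second.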
More AI Insights:
- AI Data Shortage Requires Context Engineering, Data Mesh Architecture
- Google’s ‘Nested Learning’ Signals Smarter AI Agents
- Why MCP Is Becoming the Universal Interface to Enterprise Data
- How To Choose the Right Microsoft Tools for Your AI Agent Orchestration Use Cases

AI Agent & Copilot Summit is an AI-first event to define opportunities, impact, and outcomes with Microsoft Copilot and agents. Building on its 2025 success, the 2026 event takes place March 17-19 in San Diego. Get more details.