Original article date: Feb 08, 2026

Why India Should Skip the Expensive AI Arms Race and Build Smarter Instead

February 12, 2026
5 min read

Zoho founder Sridhar Vembu is making waves with his bold stance on India's artificial intelligence strategy: skip the costly Large Language Model (LLM) competition and focus on smaller, more efficient AI approaches instead.

Speaking ahead of India's upcoming AI Impact Summit, Vembu warned against competing head-on with Big Tech's massive LLMs, which cost $50-100 billion to develop and consume enormous amounts of energy. His advice? "Sometimes, staying a little bit behind is a good idea."

Key Strategic Insights

Focus on Brain Power Over Energy: Instead of emulating energy-hungry global models, Vembu advocates leveraging India's intellectual capital for research and development on smaller, more efficient AI systems. "We have to apply our brain power, rather than energy which is scarce," he emphasized.

Capital-Efficient Innovation: With GPUs in short supply and electricity costs rising globally, Vembu suggests India should avoid the capital-intensive LLM race and pivot to distinct, smaller approaches that align with the country's resource realities.

Bottom-Up Development: This strategy aligns with India's recent Economic Survey, which noted that limited access to cutting-edge compute infrastructure makes pursuing foundational models as a centerpiece "challenging."

The Bigger Picture

India is positioning itself as a major player in global AI governance, preparing to host the largest AI summit to date with over 35,000 registrations and participation from 100+ countries. Industry titans including NVIDIA's Jensen Huang, Anthropic's Dario Amodei, and Google DeepMind's Demis Hassabis are expected to attend.

This strategic shift could give India a competitive edge by developing practical, resource-efficient AI solutions while others pour billions into increasingly expensive large models.

🔗 Read the full article on Live Mint