### Overview

Kluisz, now rebranded as Nava, is a deeptech startup that has quickly made waves in the competitive artificial intelligence infrastructure sector. Founded in 2025 by industry veterans Abhinav Sinha, Vamshidhar Reddy, and Abhijeet Singh, the company began as an AI-native private cloud platform designed to simplify infrastructure management for enterprises. As demand for AI computing power soared, the startup pivoted into a full-stack "neocloud" provider that integrates hardware and software to optimize high-performance computing. This transformation from a software-first cloud service to a vertically integrated AI infrastructure provider offers a window into how modern tech companies adapt to the explosive growth of generative AI and GPU-centric workloads. The following sections explore the key facts behind this journey and what they signify for the future of AI infrastructure.

### 1. The Strategic Rebranding to Nava

In April 2026, the company officially announced its transition from Kluisz to Nava, a name derived from the Sanskrit word for "new" or "fresh." The rebrand was far more than a cosmetic update; it signaled a fundamental shift in the company's business model and vision. While Kluisz focused primarily on private cloud software for enterprise clients, Nava is building a full-stack, GPU-as-a-Service infrastructure platform. This pivot aligns the company with the urgent global need for specialized hardware to support the rapidly growing AI and machine-learning ecosystem.

### 2. The Rise of AI-Native Cloud Platforms

Kluisz emerged from the realization that legacy cloud infrastructure was ill-equipped to handle the unique demands of modern artificial intelligence. Traditional cloud providers are often constrained by virtualization overhead and a lack of purpose-built hardware for distributed training. By designing an AI-native platform from the ground up, the company aimed to provide high-performance environments in which compute, storage, and networking are optimized specifically for AI model training and inference. This specialization lets enterprises focus on application development rather than on the complexities of maintaining infrastructure in a constantly evolving tech landscape.

### 3. Founders and Their Industry Expertise

The strength of a startup is often measured by its leadership, and the trio behind Kluisz brings an exceptional pedigree to the table. CEO Abhinav Sinha served as global COO and CPO at OYO; Vamshidhar Reddy is a former McKinsey partner and AMD professional; and Abhijeet Singh was previously VP of Cloud Infrastructure at Jio and AT&T. This combination of executive leadership, deep engineering knowledge, and operational scaling experience has been instrumental in securing investor confidence and navigating the company's rapid growth since 2025.

### 4. Significant Funding and Investor Backing

The company's growth has been supported by substantial capital from prominent global investors. In July 2025, it secured a $9.6 million seed round led by RTP Global, noted at the time as one of the largest seed rounds for an AI startup. That was followed in April 2026 by a $22 million Series A led by Greenoaks Capital, with follow-on participation from RTP Global and Unicorn India Ventures. Such strong financial backing underscores the market's belief in the company's ability to bridge the critical compute gap in the Asia-Pacific region.

### 5. Focus on GPU-as-a-Service Infrastructure

A core component of the company's current strategy is GPU-as-a-Service (GPUaaS). Because training large-scale generative AI models requires massive amounts of processing power, GPUs have become one of the most sought-after commodities in the tech world. By providing on-demand GPU instances, the company lets developers scale training and inference workloads without the heavy overhead of owning expensive hardware. This model gives startups and large enterprises alike the agility to stay competitive in an environment where access to high-performance silicon is the primary bottleneck for innovation.

### 6. Expanding Footprint Across Asia-Pacific

The rebrand to Nava also marks a regional expansion, with the company establishing its new headquarters in Singapore. The choice is strategic, placing the company close to major Asia-Pacific markets and a deeper pool of international talent. By building AI data centers and deploying GPU clusters across India, Singapore, and Southeast Asia, the company aims to address the specific infrastructure needs of emerging AI ecosystems. This regional focus matters as demand for low-latency compute facilities near urban hubs continues to grow across APAC.

### 7. Commitment to Vertical Integration

A hallmark of the "neocloud" model is vertical integration. Rather than renting space or reselling existing cloud resources, the company is building a stack that spans data center design, hardware procurement, software orchestration, and inference layers. This level of control allows tighter integration between hardware and software, yielding higher performance and greater cost efficiency for users. By controlling the full infrastructure chain, the company can optimize for specific enterprise goals such as compliance, performance metrics, and cost targets.

### 8. Targeted Enterprise Use Cases

Although the company started with a software-first approach, its infrastructure platform is built to support a wide range of enterprise use cases. From banking and finance to healthcare and large-scale manufacturing, enterprises increasingly seek secure, private infrastructure on which to run their AI models. The platform offers zero-trust security, deep observability, and automated orchestration, making it suitable for highly regulated sectors that demand rigorous data privacy and reliability. This versatility keeps the company relevant across the diverse industries now undergoing AI-led digital transformation.

### 9. The Competitive Landscape of AI Neoclouds

The company operates in a fiercely competitive market alongside notable players such as CoreWeave, Lambda Labs, and various regional data-center providers. It differentiates itself through the "neocloud" concept: an autonomous, full-stack environment that adapts to workload and performance requirements. By positioning itself as a foundational platform rather than a simple compute provider, it aims to capture long-term value in the AI infrastructure value chain. This positioning is essential as firms move away from general-purpose cloud services toward specialized, high-performance AI infrastructure.

### 10. Future Prospects and Scaling Goals

Looking ahead, the company has set ambitious targets, including plans to scale to over 1 gigawatt of compute capacity across India and Southeast Asia within five years. The effort includes heavy investment in talent, particularly in GPU engineering, data center architecture, and go-to-market operations. As global demand for AI compute continues to outpace supply, the company's ability to execute on these deployments will be the ultimate test of its vision. If successful, it is poised to become one of the top three AI infrastructure players in the region, fundamentally shaping how Asian enterprises harness AI.

### Conclusion

The evolution from Kluisz to Nava represents a milestone in the rapidly maturing AI infrastructure market. By shifting from a specialized software provider to a vertically integrated, full-stack neocloud platform, the company has positioned itself as a serious contender in the effort to bridge the global AI compute gap. With strong leadership, significant capital, and a clear vision for the Asia-Pacific region, it is tackling the most pressing challenges of the AI era: scalability, security, and performance. Looking ahead, it is clear that the companies controlling the underlying infrastructure will hold the keys to the next generation of artificial intelligence. Will this full-stack approach prove to be the definitive architecture for the enterprises of tomorrow?