High-Level Overview
SF Tensor is a startup building infrastructure that lets AI labs and researchers concentrate on model development rather than on managing complex compute resources. Its two core products are a Kernel Optimizer, which automatically transforms AI model kernels into their mathematically fastest forms by simulating hardware behavior and often outperforms hand-tuned implementations, and an Elastic Cloud platform, which orchestrates training jobs across the cheapest available GPUs from multiple cloud providers. The infrastructure is hardware-agnostic, supporting NVIDIA GPUs, AMD GPUs, Google TPUs, and other accelerators, with the aim of eliminating vendor lock-in and reducing costs. By automating infrastructure management, SF Tensor lets AI researchers focus purely on experimentation and innovation, addressing a major bottleneck in AI development[1][2][3].
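SF Tensor has not published its scheduler internals, but the core Elastic Cloud idea, filling a GPU request from the cheapest offers across providers, can be sketched in a few lines. Everything below is a hypothetical illustration: `GpuOffer`, `cheapest_allocation`, and the price data are invented for this sketch and are not SF Tensor's actual API.

```python
from dataclasses import dataclass

@dataclass
class GpuOffer:
    """One GPU offer from a cloud provider (all fields hypothetical)."""
    provider: str
    gpu_type: str
    hourly_usd: float  # spot price per GPU-hour
    available: int     # GPUs currently available

def cheapest_allocation(offers, gpu_type, count):
    """Greedily fill a request for `count` GPUs of `gpu_type` from the
    cheapest matching offers first, spanning providers if needed."""
    usable = sorted(
        (o for o in offers if o.gpu_type == gpu_type and o.available > 0),
        key=lambda o: o.hourly_usd,
    )
    plan, remaining = [], count
    for offer in usable:
        take = min(offer.available, remaining)
        plan.append((offer.provider, take, round(take * offer.hourly_usd, 2)))
        remaining -= take
        if remaining == 0:
            return plan
    raise RuntimeError(f"only {count - remaining} of {count} GPUs available")

# Made-up spot prices for the same GPU type across three providers:
offers = [
    GpuOffer("provider-a", "H100", 2.49, 64),
    GpuOffer("provider-b", "H100", 1.99, 32),
    GpuOffer("provider-c", "H100", 2.10, 128),
]
print(cheapest_allocation(offers, "H100", 100))
# [('provider-b', 32, 63.68), ('provider-c', 68, 142.8)]
```

A production scheduler would also weigh interconnect locality, spot reclaim risk, and data egress costs, but the greedy price sort captures the basic cost lever the company describes.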
For an investment firm, SF Tensor’s mission aligns with accelerating AI research by democratizing access to high-performance compute. The investment thesis centers on backing technology that reduces operational complexity and cost in AI workflows, in sectors spanning AI infrastructure, cloud computing, and developer tools. Its impact on the startup ecosystem is significant: by lowering infrastructure barriers, it lets AI startups and research labs iterate and innovate faster without large infrastructure teams[1][2].
For a portfolio company, SF Tensor offers a developer-first, hardware-agnostic AI compute platform serving AI researchers, startups, and labs, solving the problems of infrastructure complexity, high cost, and vendor lock-in in AI training. Its growth momentum is evident from a 2025 Y Combinator seed round and early traction scaling training runs to thousands of GPUs, demonstrating strong demand for the solution[1][2][3].
---
Origin Story
SF Tensor was founded in 2025 in San Francisco by three brothers, Ben, Tom, and Luk Koska, who have extensive experience working together on AI research, including training their own foundational world models. The company grew out of their firsthand frustration with spending 60% of their time managing infrastructure rather than conducting research. They identified a widespread problem: smaller AI research teams lack dedicated infrastructure expertise, while larger teams waste valuable research talent on DevOps. This infrastructure tax was holding back AI progress, and it inspired them to build a "set it and forget it" infrastructure layer that automates kernel optimization and cloud orchestration. Early pivotal moments included successfully scaling training runs to thousands of GPUs and securing seed funding from Y Combinator[1][2][3].
---
Core Differentiators
- Unique Investment Model (for the firm): a focus on AI infrastructure startups that reduce operational friction in AI research.
- Product Differentiators: SF Tensor’s Kernel Optimizer outperforms hand-tuned kernels by simulating memory hierarchies and hardware topology, making high-performance compute accessible without deep hardware expertise (a toy sketch of this search-against-a-simulated-machine idea follows this list).
- Developer Experience: The platform is "set it and forget it": users connect their code repository, select a GPU count and budget, and SF Tensor handles the rest, including automatic job migration on spot instance failures.
- Speed, Pricing, Ease of Use: Elastic Cloud finds the cheapest GPUs across providers and orchestrates distributed training from 1 to 10,000 GPUs without code changes, reducing costs and complexity.
- Community Ecosystem: By supporting multiple hardware platforms (NVIDIA, AMD, Google TPUs), SF Tensor avoids vendor lock-in, unlocking new compute supply and fostering a more open AI infrastructure ecosystem[1][2][3].
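How the Kernel Optimizer "simulates hardware" is not publicly documented; below is a minimal sketch of the general technique, ranking candidate kernel schedules against a simulated machine model rather than benchmarking each on real hardware. All constants and names (`CACHE_BYTES`, `dram_traffic_words`, `best_tile`) are invented for illustration and model a blocked matrix multiply, not SF Tensor's actual optimizer.

```python
import math

# Toy machine model; every number here is illustrative, not a real GPU.
CACHE_BYTES = 256 * 1024  # fast on-chip memory assumed available per core
WORD_BYTES = 4            # fp32
DRAM_GBPS = 900.0         # modeled slow-memory bandwidth

def dram_traffic_words(n, tile):
    """Modeled slow-memory traffic (in words) for an n x n x n matmul
    blocked into tile x tile tiles: each A and B tile is loaded once per
    tile-level iteration, and C is streamed in and out once."""
    blocks = math.ceil(n / tile)
    return blocks**3 * 2 * tile * tile + 2 * n * n

def fits_in_cache(tile):
    # One tile each of A, B, and C must be resident simultaneously.
    return 3 * tile * tile * WORD_BYTES <= CACHE_BYTES

def best_tile(n, candidates=(8, 16, 32, 64, 128, 256)):
    """Pick the candidate tile size that minimizes modeled memory traffic
    among those that fit the modeled cache."""
    feasible = [t for t in candidates if fits_in_cache(t)]
    return min(feasible, key=lambda t: dram_traffic_words(n, t))

n = 4096
t = best_tile(n)  # -> 128 under this toy model
secs = dram_traffic_words(n, t) * WORD_BYTES / (DRAM_GBPS * 1e9)
print(f"tile={t}, modeled DRAM time ~ {secs * 1e3:.1f} ms")
```

Real autotuners (and presumably SF Tensor's) search far richer schedule spaces and also model interconnect topology, but the principle is the same: a cost model stands in for the hardware so the search can explore many more candidates than on-device benchmarking would allow.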
---
Role in the Broader Tech Landscape
SF Tensor rides the broader trend of democratizing AI compute infrastructure amid explosive growth in AI model scale and complexity. The timing matters: AI research increasingly demands massive, cost-efficient compute, yet current infrastructure is fragmented, expensive, and often locked to specific vendors such as NVIDIA. Market forces, including rising GPU prices, competition among cloud providers, and the proliferation of diverse AI hardware, create a strong tailwind for SF Tensor’s hardware-agnostic, multi-cloud orchestration approach. By commoditizing compute and abstracting away infrastructure complexity, SF Tensor accelerates AI innovation, enabling startups and labs to compete without large infrastructure teams. Its influence extends to reshaping AI infrastructure economics and fostering a more accessible AI research ecosystem[1][2][4][5].
---
Quick Take & Future Outlook
Looking ahead, SF Tensor is poised to expand its platform capabilities, potentially adding support for emerging AI hardware and deeper automation. Trends shaping its journey include the continued scaling of AI models, the diversification of AI hardware, and rising demand for cost-effective, flexible compute. As AI research becomes more distributed and democratized, SF Tensor’s infrastructure layer could become a foundational enabler, lowering barriers to innovation globally. The company may evolve from a niche infrastructure provider into a critical backbone for AI labs and startups, driving faster AI breakthroughs by eliminating infrastructure friction. This aligns with the founding vision: making high-performance AI compute accessible, portable, and affordable so researchers can focus solely on advancing AI[1][2][3].