High-Level Overview
DriveNets is a fast-growing software company specializing in cloud-native networking that disaggregates traditional networks, enabling service providers, cloud hyperscalers, and AI infrastructure operators to build scalable, high-performance networks at lower cost.[1][2][4] Its flagship platform, Network Cloud, runs on standard white-box hardware and carries over 30% of US Internet traffic for customers such as AT&T and Comcast; Network Cloud-AI, launched in 2023, offers a high-performance Ethernet alternative to InfiniBand for AI workloads.[1][5] DriveNets serves communications service providers (CSPs), hyperscalers, NeoClouds, and enterprises, addressing network scale, complexity, and profitability by replacing hardware-centric designs with software-driven elasticity and efficiency.[2][3][6]
The company has demonstrated strong growth momentum: it has raised over $200 million in funding at a valuation exceeding $1 billion, expanded to nearly 400 employees (more than half in R&D), and earned recognition such as the 2021 Leading Light Award for Best New Optical Networking/IP Product.[2][8]
Origin Story
DriveNets was founded in 2015 by Ido Susan and Hillel Kobrinsky, two successful telco entrepreneurs who identified pain points in network capacity, complexity, and profitability as exploding traffic demands outpaced revenue growth.[2][3] Their idea came from observing hyperscale cloud architectures (simple, scalable, and cost-effective) and applying them to telco-grade networking through disaggregation: cloud-native software running on commodity white boxes instead of proprietary hardware.[1][2][4] Early traction came from reducing networks to just two building blocks for any service, port, or scale, moving routing from hardware into software, and gaining influence in standards bodies such as the Open Compute Project and the Telecom Infra Project.[3][8] Pivotal moments included launching the Open Grid Alliance with VMware and Wind River, securing major CSP deployments, and scaling rapidly after funding rounds.[1][8]
Core Differentiators
DriveNets stands out through its disaggregated, cloud-native architecture that decouples software from hardware, enabling massive scale and cost savings. Key strengths include:
- Network Cloud for CSPs and service providers: Runs telco-scale performance on shared white-box infrastructure, simplifying operations, boosting utilization, and growing profitability by detaching network growth from costs via a SaaS-like model.[1][2][3]
- Network Cloud-AI for AI infrastructures: Delivers lossless Fabric Scheduled Ethernet with a distributed design, outperforming standard Ethernet and InfiniBand in throughput, latency, and job completion time (JCT) for GPU clusters, without vendor lock-in or specialized skills.[5][6]
- Automation and orchestration: DriveNets Orchestrator (DNOR) provides advanced deployment, scaling, visibility, and insights, blending compute and networking resources for edge and cloud applications.[7][8]
- Open ecosystem and hardware agnosticism: Uses standard Ethernet, Broadcom cell-based protocols, and open alliances, allowing mix-and-match of GPUs, NICs, and DPUs while ensuring predictable, lossless performance.[6][8]
These features enable starting small and scaling horizontally to massive fabrics, accelerating AI cluster setup and reducing CapEx/OpEx.[6]
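The start-small-and-scale-horizontally claim can be illustrated with back-of-the-envelope math for a two-tier fabric built from identical white boxes. The sketch below is a generic non-blocking leaf/fabric calculation under illustrative assumptions (32-port boxes, half the ports facing the fabric); it does not reflect DriveNets' actual product parameters.

```python
# Back-of-the-envelope sizing for a two-tier (leaf + fabric) cluster built
# from identical white boxes. All port counts are illustrative assumptions,
# not DriveNets product specs.

def fabric_capacity(leaf_boxes: int,
                    ports_per_leaf: int = 32,
                    uplink_ratio: float = 0.5) -> dict:
    """Compute usable endpoint ports for a non-blocking two-tier fabric.

    Each leaf dedicates `uplink_ratio` of its ports to the fabric tier;
    the rest face endpoints (e.g. GPUs or routers). Non-blocking operation
    requires uplink capacity >= endpoint capacity, i.e. uplink_ratio >= 0.5.
    """
    uplinks = int(ports_per_leaf * uplink_ratio)
    endpoint_ports = ports_per_leaf - uplinks
    return {
        "leaf_boxes": leaf_boxes,
        "endpoint_ports": leaf_boxes * endpoint_ports,
        "fabric_links": leaf_boxes * uplinks,
    }

# Scaling is horizontal: add boxes rather than forklift-upgrade a chassis.
small = fabric_capacity(leaf_boxes=4)    # 64 endpoint ports
large = fabric_capacity(leaf_boxes=128)  # 2048 endpoint ports
print(small)
print(large)
```

The point of the arithmetic is that capacity grows linearly with box count, which is what lets a shared white-box fabric start at a few dozen ports and grow to a massive cluster without replacing the original hardware.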
Role in the Broader Tech Landscape
DriveNets rides two converging trends: the AI networking boom and network disaggregation. Exploding AI workloads demand ultra-scalable, low-latency fabrics beyond traditional chassis switches, much as NVIDIA's acquisition of Mellanox underscored networking's criticality.[6] The timing is favorable: hyperscaler GPU clusters are growing, CSPs are chasing cloud economics, and Ethernet is evolving into an open InfiniBand rival for AI back-end networks in enterprises, NeoClouds, and hyperscalers across finance, pharma, automotive, and energy.[1][5] Market forces such as traffic surges (e.g., the 30% of US Internet traffic carried over DriveNets) and profitability pressures favor its software-centric model, while its open standards work, alliances, and deployments push white-box adoption and simplify AI data centers.[1][2][8]
Quick Take & Future Outlook
DriveNets is well positioned in AI back-end networking with Network Cloud-AI's Ethernet edge, expanding from CSP wins to hyperscaler AI clusters as GPU cluster sizes climb toward the millions and Ethernet standards mature.[5][6] Trends such as distributed AI fabrics, edge computing, and open disaggregation will propel growth, potentially capturing more hyperscaler share amid InfiniBand constraints. Its influence may grow by standardizing scheduled Ethernet globally and blending networking with compute for next-generation clouds, reinforcing its trajectory from radical disruptor to essential enabler of profitable, high-scale networks.[1][9]