MemVerge provides an open-source memory layer designed for AI agents. The platform enables AI-powered applications to efficiently learn, store, and recall data, optimizing the GPU-centric computing that underpins the generative AI era. Its technology improves performance for demanding AI training, inference, and batch workloads.
Co-founder and CEO Charles Fan established MemVerge after recognizing the potential of Big Memory computing. Drawing on a background that includes executive roles such as CTO at Cheetah Mobile, Fan saw the need for an all-memory storage architecture and co-founded the company to address data bottlenecks in data-intensive applications.
MemVerge targets enterprises with demanding AI workloads spanning training, inference, and interactive scenarios. The company envisions a future in which all applications operate directly in memory. Its stated mission is to "open the door to Big Memory," delivering faster, more efficient, and more scalable AI operations by transforming how applications interact with data.
MemVerge has raised $43.5M across 2 funding rounds.
MemVerge is a technology company specializing in Memory-Converged Infrastructure (MCI) and Big Memory Computing software, designed to eliminate bottlenecks between memory and storage for data-intensive workloads like AI, machine learning, and big data analytics.[1][2][5] Its flagship products, including Memory Machine software and MemMachine for AI agents, virtualize memory across DRAM and persistent memory (e.g., Intel Optane, CXL-enabled pools), enabling seamless scaling, sub-microsecond latency, and plug-and-play compatibility with existing applications without code changes.[3][5][7] MemVerge serves enterprises such as LinkedIn, Tencent Cloud, and JD.com, solving the "memory wall" problem by pooling abundant, persistent memory for real-time processing of massive datasets, reducing costs, and accelerating AI/ML training and inference by up to 40x in latency-sensitive scenarios.[1][2][6] Backed by investors like Lightspeed Venture Partners and Jerusalem Venture Partners, the company has shown growth through launches like MCI in 2019 and AI-focused MemMachine, targeting the booming generative AI market.[1][3]
Founded in 2017 in San Jose, California, MemVerge emerged from the recognition of the longstanding "memory wall" bottleneck, where memory capacity and bandwidth limit application performance amid exploding data volumes.[2][3] Co-founder and CEO Charles Fan, along with CTO Yong Tian and Chairman Shuki Bruck, drew on their experience building all-flash and hyperconverged infrastructure solutions to pioneer MCI upon the advent of new persistent memory hardware such as Intel Optane.[1][3][4] The idea crystallized in 2017 as hardware innovations demanded a new software stack; MemVerge developed proprietary Distributed Memory Objects (DMO) and Memory Machine software to harness them, an approach validated by the 2019 CXL specification announcement.[1][2] Early traction followed its launch out of stealth in April 2019, with adoption by global leaders in AI and data science and investments from Gaorong Capital, LDV Partners, and others fueling expansion into cloud and enterprise data centers.[1][3]
MemVerge rides the generative AI and Large Language Model megatrend, where data-intensive workloads demand massive memory bandwidth and capacity amid GPU/HBM shortages.[2][9] Its timing aligns perfectly with CXL's rise as a standard for composable memory, addressing the memory wall that bottlenecks AI training/inference and real-time analytics in the on-demand economy.[1][2] Market forces like exploding machine-generated data (IoT, big data) and persistent memory forecasts (248% CAGR 2019-2023 per IDC) favor MemVerge, enabling enterprises to process vast datasets at memory speeds without crashes or flash wear.[5][6] By providing a software layer for hardware-agnostic Big Memory, it influences the ecosystem, powering innovators like Tencent and accelerating cloud-native AI infrastructure shifts toward disaggregated, efficient resources.[1][3]
MemVerge is poised to capture a multi-billion-dollar Big Memory market as CXL adoption surges and AI agents demand persistent, personalized memory layers.[2] Next steps likely include deeper enterprise AI integrations, expanded CXL partnerships, and hybrid cloud scaling for MemMachine, capitalizing on GPU orchestration needs.[7][9] Trends like AI workflow complexity and cost pressures will amplify its role, evolving MemVerge from infrastructure enabler to core AI memory platform—unlocking the full potential of abundant, persistent memory first promised in its 2019 launch.[1]
MemVerge's investors include Intel Capital, Cisco Investments, Gaorong Capital, Glory Ventures, Jerusalem Venture Partners, LDV Partners, Lightspeed Venture Partners, NetApp, Northern Light Venture Capital, SK Hynix, and Bin Yue.
Most recently, MemVerge raised a $19.0M Series B in May 2020.
| Date | Round | Lead Investors | Other Investors |
|---|---|---|---|
| May 1, 2020 | $19.0M Series B | Intel Capital | Cisco Investments, Gaorong Capital, Glory Ventures, Jerusalem Venture Partners, LDV Partners, Lightspeed Venture Partners, NetApp, Northern Light Venture Capital, SK Hynix |
| Apr 2, 2019 | $24.5M Series A | — | Bin Yue, Jerusalem Venture Partners, LDV Partners, Lightspeed Venture Partners, Northern Light Venture Capital |