Howso is a Raleigh-based AI technology company building *trustworthy, explainable, enterprise-grade AI*, including the open-source Howso Engine and synthetic-data tools, that aims to replace black-box models with auditable, deterministic reasoning for finance, healthcare, government, and other regulated industries.[4][1]
High-Level Overview
- Concise summary: Howso (formerly Diveplane) develops an explainable AI reasoning layer and related products (Howso Engine, Howso Synthesizer) that deliver auditable, causal and context‑aware predictions and insights directly from enterprise data, with an emphasis on transparency, uncertainty quantification and regulatory suitability.[4][1]
- Product (portfolio-company view) — an *AI reasoning layer* (Howso Engine) plus an enterprise synthetic-data product (Howso Synthesizer) that together enable explainable predictions, feature-contribution analysis, and auditable decision trails for enterprises.[4][1]
- Who it serves — large enterprises and institutions across financial services, healthcare, government and research/academia that need transparent, accountable AI for high‑stakes decisions.[1][4][6]
- Problem solved — replaces opaque “black box” neural models with a deterministic, information-theoretic, probability-based reasoning engine that provides clear attributions, causal insights, and probability density functions for uncertainty, so organizations can *understand why* a prediction was made and act responsibly.[4][6]
- Growth momentum — rebranded from Diveplane to Howso in 2023 and released an open‑source Engine that broadened access; selected by Microsoft for its Pegasus Responsible AI go‑to‑market program and made available via Azure Marketplace, signaling enterprise channel acceleration and partnerships.[1][3][4]
Origin Story
- Founding and founders — Howso was founded in 2017 as Diveplane by Dr. Mike Capps, Dr. Chris Hazard, and Mike Resnick; Capps previously served as President of Epic Games, Hazard is an ML researcher focused on trust and reputation, and Resnick is a long‑time technical collaborator with Hazard.[1][3]
- How the idea emerged — founders were motivated by concerns about society’s reliance on opaque black‑box AI for critical decisions and aimed to prioritize social impact and explainability over raw model power, turning Hazard’s research into an enterprise product.[1][3]
- Early traction and pivotal moments — the company amassed more than sixty patent assets, deployed technology across academic, nonprofit, financial, healthcare and government users, and in September 2023 rebranded to Howso while open‑sourcing its Engine; in 2024 it joined Microsoft’s Pegasus Responsible AI program as a select startup and made its platform available through Azure Marketplace.[1][4]
Core Differentiators
- Explainability and auditability — the Engine produces deterministic, fully attributable outputs with clear audit trails back to influential input data and feature‑contribution analysis rather than opaque neural activation patterns.[4][6]
- Causal and probabilistic reasoning — focuses on causal insights and provides probability density functions around predictions so users can see uncertainties and boundary cases, not just point estimates.[6][4]
- Open‑source access to core engine — release of an open‑source Howso Engine (2023) to broaden adoption among researchers and enterprises who need inspectable models.[1][3]
- Enterprise positioning and partnerships — targeted at regulated industries, with partnerships and channel programs (e.g., the Microsoft Pegasus Responsible AI program and Azure Marketplace availability) that shorten sales cycles and support enterprise deployments.[1][4]
- Synthetic data capability — Howso Synthesizer enables training and testing of models on private, auditable synthetic datasets to preserve privacy while maintaining fidelity for downstream tasks.[2][1]
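To make the explainability and uncertainty claims above concrete, here is a toy sketch of an instance-based predictor in plain NumPy. It is illustrative only and does not use or mimic Howso's actual engine, algorithm, or API: the function name, the inverse-distance weighting, and the weighted-spread uncertainty measure are all assumptions chosen to show the general pattern of a prediction that carries its own audit trail (which training cases drove the answer, and by how much) and a spread around the point estimate rather than a bare number.

```python
import numpy as np

def knn_predict_explained(X, y, query, k=3):
    """Toy instance-based prediction (illustrative, not Howso's method):
    returns a point estimate, a simple uncertainty spread, and
    per-neighbor attributions that serve as a miniature audit trail."""
    dists = np.linalg.norm(X - query, axis=1)      # distance to each training case
    idx = np.argsort(dists)[:k]                    # k nearest training cases
    w = 1.0 / (dists[idx] + 1e-9)                  # inverse-distance weights
    w = w / w.sum()                                # normalize to sum to 1
    pred = float(np.dot(w, y[idx]))                # weighted point estimate
    # Weighted standard deviation of neighbor targets: a crude stand-in
    # for the richer uncertainty (probability density) the text describes.
    spread = float(np.sqrt(np.dot(w, (y[idx] - pred) ** 2)))
    # Attribution: which training rows influenced the answer, and how much.
    attributions = [(int(i), float(wi)) for i, wi in zip(idx, w)]
    return pred, spread, attributions

# Tiny worked example on a 1-D dataset
X = np.array([[0.0], [1.0], [2.0], [10.0]])
y = np.array([0.0, 1.0, 2.0, 10.0])
pred, spread, attrib = knn_predict_explained(X, y, np.array([1.1]), k=2)
print(pred, spread, attrib)  # ≈ 1.1, ≈ 0.3, neighbors 1 and 2 with their weights
```

The point of the sketch is the return shape, not the algorithm: every prediction comes back with the specific training cases that produced it, which is the kind of traceability the Engine's feature-contribution analysis and audit trails provide at enterprise scale.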
Role in the Broader Tech Landscape
- Trend alignment — rides the growing demand for Explainable AI (XAI), Responsible AI, and causal‑inference methods as regulators, enterprises and customers push back against opaque LLMs and neural black boxes in high‑stakes settings.[4][6]
- Timing — regulatory scrutiny (e.g., AI governance, privacy laws) plus enterprise adoption of generative models has increased demand for technologies that can *explain* and *audit* model outputs, making Howso’s positioning timely.[1][4]
- Market forces in its favor — increased procurement by financial institutions, healthcare systems and governments for compliant, auditable AI; partner channels (cloud marketplaces, co‑sell programs) speed enterprise adoption.[6][1]
- Influence — by open‑sourcing an explainable Engine and promoting auditable synthetic data, Howso pushes the ecosystem toward more transparent model design patterns and provides tools researchers and practitioners can build upon.[1][3]
Quick Take & Future Outlook
- Near term — expect continued enterprise commercialization via Azure Marketplace and Microsoft co‑selling, deeper vertical use cases in banking, insurance, and healthcare (loan default prediction, fraud detection, inventory/retention analysis are already showcased), and incremental product maturity around agentic/LLM explainability and synthetic data workflows.[1][6][4]
- Medium term — success will depend on scaling enterprise deployments, expanding partner certifications and integrations (data platforms, MLOps), and demonstrating measurable risk‑reduction and regulatory compliance benefits versus mainstream neural approaches. Howso’s open‑source Engine can accelerate community adoption but will require strong developer documentation and benchmarks versus neural baselines.[1][3][4]
- Strategic risks and opportunities — opportunity lies in becoming the de‑facto explainable reasoning layer for regulated AI; risks include competition from large cloud vendors and open‑source LLM/XAI projects and the challenge of proving superior ROI vs. incumbents on complex tasks.[4][1]
Quick take: Howso has carved a clear niche—enterprise‑grade, auditable causal AI—and its open‑source Engine plus strategic partnerships position it to influence how organizations adopt responsible AI for high‑stakes decisions; the company’s next phase is proving that explainability and determinism can scale with the performance and integration enterprises demand.[1][4]