Gen Z Founders Reject Elon Musk's Millions to Build Brain-Inspired AI That Outperforms OpenAI and Anthropic Models
Two young founders who met in high school in Michigan declined a multimillion-dollar offer from Elon Musk after building an experimental AI model that caught the attention of researchers across leading U.S. universities. Their early work on a compact large language model named OpenChat set the stage for what would become a new brain-inspired AI architecture capable of outperforming systems from OpenAI, Anthropic, and DeepSeek on reasoning benchmarks.
Two Gen Z founders built a brain-inspired AI that beat OpenAI and Anthropic on key reasoning tests after turning down a multimillion-dollar offer from Elon Musk.
Fortune Brainstorm AI, Singapore
William Chen and Guan Wang created OpenChat using a limited set of high-quality conversations instead of the massive internet-scale datasets used by most large language models. They paired this data strategy with reinforcement learning, enabling the model to refine its behavior through rewards and penalties. Their work stood out at a time when few teams were applying reinforcement learning to LLMs, and OpenChat quickly spread across academic circles after being open-sourced.
Researchers at Berkeley and Stanford downloaded the model, expanded on the codebase, and cited it in early examples showing how small, well-trained models could match or exceed the performance of larger systems. Their project eventually reached Musk, who contacted them through xAI with a multimillion-dollar recruiting offer. Chen said they turned it down because they believed LLMs had structural limitations that required a fundamentally new approach.
The decision led to the development of Sapient Intelligence and its core architecture, the Hierarchical Reasoning Model (HRM). The system was created inside Tsinghua University’s Brain Cognition and Brain-Inspired Intelligence Lab, where both founders studied after Chen chose to enroll at the Beijing-based institution alongside Wang. Their university years included challenging coursework, occasional failed classes, and growing support from professors who encouraged their work on novel AI reasoning structures.
Chen and Wang spent extensive time experimenting in robotics labs during their high school years in Michigan, bonding over shared long-term goals for solving computational problems and exploring the concept of AGI. Their aligned interests continued at Tsinghua, where the pair became known for pursuing a machine-intelligence architecture that diverged from mainstream transformer-based systems.
A breakthrough arrived at 3 a.m. during testing of their 27-million-parameter HRM prototype. The model surpassed systems from OpenAI, Anthropic, and DeepSeek on several reasoning benchmarks. It solved Sudoku-Extreme puzzles, navigated 30×30 mazes, and posted strong results on the ARC-AGI benchmark without relying on chain-of-thought prompting or large-scale computational resources.
HRM uses a two-part recurrent framework inspired by how the human brain blends deliberate analytical thinking with fast, reflexive responses. The system processes problems by planning and applying internal logic rather than relying solely on statistical pattern prediction. Chen said this approach reduces hallucinations and enables competitive performance in time-series forecasting tasks involving weather modeling, quantitative trading, and medical monitoring.
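The two-timescale idea described above can be illustrated with a minimal sketch: a slow "planner" state that updates once per cycle, and a fast "worker" state that iterates several times per cycle under the planner's guidance. This is an illustrative toy, not Sapient's actual HRM code; all names, weights, and dimensions here are assumptions.

```python
import numpy as np

# Illustrative two-timescale recurrent loop in the spirit of HRM.
# (Hypothetical sketch; variable names and sizes are not from Sapient.)
rng = np.random.default_rng(0)
D = 8                                     # hidden size (illustrative)
W_h = rng.standard_normal((D, D)) * 0.1   # slow-module recurrent weights
W_l = rng.standard_normal((D, D)) * 0.1   # fast-module recurrent weights
W_x = rng.standard_normal((D, D)) * 0.1   # input projection

def hrm_step(x, n_cycles=3, t_fast=4):
    """Run n_cycles slow ("deliberate") updates, each containing
    t_fast fast ("reflexive") updates conditioned on the slow state."""
    z_h = np.zeros(D)   # slow, plan-like state
    z_l = np.zeros(D)   # fast, reactive state
    for _ in range(n_cycles):
        for _ in range(t_fast):
            # Fast module refines its state given the input and current plan.
            z_l = np.tanh(W_l @ z_l + W_x @ x + z_h)
        # Slow module updates only after the fast module has settled.
        z_h = np.tanh(W_h @ z_h + z_l)
    return z_h

out = hrm_step(rng.standard_normal(D))
print(out.shape)  # (8,)
```

The nesting is the point: planning happens on a coarser clock than execution, rather than every computation running at a single step rate as in a standard transformer layer.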
Sapient Intelligence is now scaling HRM into a general-purpose reasoning engine based on the belief that AGI progress requires more efficient architecture instead of increasingly large transformer models. The founders argue that frontier LLMs face structural constraints in reasoning depth, planning, and multi-step problem analysis that cannot be resolved through additional layers or parameter counts.
The company is preparing to open a U.S. office, raise additional funding, and deploy the second version of its model. The team is also evaluating a potential rebrand as they work on continuous learning methods designed to help systems update safely without full retraining cycles. Chen said he expects AGI progress to accelerate over the next decade and believes advanced systems will eventually exceed human cognitive performance.