There’s been great interest in what Mira Murati’s Thinking Machines Lab is building with its $2 billion in seed funding[1] and the all-star team of former OpenAI researchers who have joined the lab. In a blog post[2] published on Wednesday, Murati’s research lab gave the world its first look into one of its projects: creating AI models with reproducible responses.

The research blog post, titled “Defeating Nondeterminism in LLM Inference,” tries to unpack what introduces randomness into AI model responses. For example, ask ChatGPT the same question a few times over, and you’re likely to get a range of different answers. This has largely been accepted in the AI community as a fact — today’s AI models are considered to be non-deterministic systems — but Thinking Machines Lab sees this as a solvable problem.

The post, authored by Thinking Machines Lab researcher Horace He, argues that the root cause of AI models’ randomness is the way GPU kernels — the small programs that run inside Nvidia’s computer chips — are stitched together during inference processing (everything that happens after you press enter in ChatGPT). He suggests that by carefully controlling this layer of orchestration, it’s possible to make AI models more deterministic.
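The intuition behind the kernel argument can be shown with a basic numerical fact: floating-point addition is not associative, so reducing the same values in a different order — as differently stitched-together kernels can do — changes the low-order bits of the result. The following is an illustrative sketch only, not code from the post; the `tree_sum` helper is a hypothetical stand-in for how a parallel reduction might group its additions:

```python
# Illustrative sketch (not from the blog post): floating-point addition is
# not associative, so the order in which values are reduced changes the
# last bits of the result.
import random

# Grouping the same three numbers differently gives different answers.
left = (0.1 + 0.2) + 0.3   # 0.6000000000000001
right = 0.1 + (0.2 + 0.3)  # 0.6
print(left == right)       # False

# A pairwise "tree" reduction (roughly how a parallel kernel might sum)
# versus a plain sequential sum over the same values can disagree too.
def tree_sum(xs):
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return tree_sum(xs[:mid]) + tree_sum(xs[mid:])

random.seed(0)
values = [random.uniform(-1.0, 1.0) for _ in range(10_000)]
sequential = sum(values)
tree_total = tree_sum(values)
print(sequential - tree_total)  # tiny, typically nonzero, difference
```

On a GPU, which grouping actually runs can depend on batch size and kernel choice — which is why, under this framing, pinning down the orchestration layer is what makes outputs reproducible.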

Beyond creating more reliable responses for enterprises and scientists, He notes that getting AI models to generate reproducible responses could also improve reinforcement learning (RL) training. RL is the process of rewarding AI models for correct answers, but if the answers are all slightly different, then the data gets a bit noisy. Creating more consistent AI model responses could make the whole RL process “smoother,” according to He. Thinking Machines Lab has told investors that it plans to use RL to customize AI models for businesses[5], The Information previously reported.

Murati, OpenAI’s former chief technology officer, said in July that Thinking Machines Lab’s first product will be unveiled in the coming months[6], and that it will be “useful for researchers and startups developing custom models.” It’s still unclear what that product is, or whether it will use techniques from this research to generate more reproducible responses.

Thinking Machines Lab has also said that it plans to frequently publish blog posts[7], code, and other information about its research in an effort to “benefit the public, but also improve our own research culture.” This post, the first in the company’s new blog series called “Connectionism,” seems to be part of that effort. OpenAI also made a commitment to open research when it was founded, but the company has become more closed off as it’s grown larger. We’ll see if Murati’s research lab stays true to that commitment.

The research blog offers a rare glimpse inside one of Silicon Valley’s most secretive AI startups. While it doesn’t exactly reveal where the technology is going, it indicates that Thinking Machines Lab is tackling some of the biggest questions on the frontier of AI research. The real test is whether Thinking Machines Lab can solve these problems and build products around its research to justify its $12 billion valuation.

References

  1. ^ $2 billion in seed funding (techcrunch.com)
  2. ^ blog post (thinkingmachines.ai)
  5. ^ customize AI models for businesses (www.theinformation.com)
  6. ^ unveiled in the coming months (x.com)
  7. ^ frequently publish blog posts (thinkingmachines.ai)
