Artificial intelligence assistants are beginning to look less like fixed tools and more like adjustable instruments, and OpenAI’s latest set of experiments with ChatGPT illustrates the shift. In recent days the company has started testing features that hand more control to the user, ranging from a dial that changes how much effort the model invests in an answer, to a study mode that creates flashcard-style quizzes, to a deeper integration of its Codex system across development environments.

A Dial for Reasoning Depth

The “effort picker,” as it is being called in early tests[1], is the most unusual. Instead of relying on the system to decide how hard it should think, users can now choose from a set of levels that adjust the depth of the reasoning process. A lighter setting produces quick replies that skim the surface. Higher levels push the model through longer reasoning chains, slowing down the response but delivering more structured analysis.

There are four stages in the current version, each tied to an internal budget that controls how much “juice,” as the engineers describe it[2], gets allocated before the answer is finalized. At the bottom is a mode designed for casual queries, the sort of questions where speed matters more than precision. Above that sit the standard and extended modes, useful for homework problems or workplace research where more careful steps help. At the very top, reserved for the company’s most expensive subscription, sits the maximum effort tier, which allows the model to spend far more cycles on each response. That restriction reflects cost: deeper reasoning requires more computation, which in turn means higher prices to cover it.
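
The picker itself is a ChatGPT interface feature, but OpenAI’s developer API already exposes a comparable knob through a reasoning-effort setting on its reasoning-capable models. The snippet below is a minimal sketch of that API-side analogue rather than the new picker itself; the model name, the effort values, and the example question are assumptions chosen for illustration.

```python
# Illustrative sketch: the API-side analogue of an "effort" dial.
# Assumes the OpenAI Python SDK and a reasoning-capable model that
# accepts a reasoning-effort setting; names and values may differ
# from what the ChatGPT picker ultimately exposes.
from openai import OpenAI

client = OpenAI()

def ask(question: str, effort: str = "medium") -> str:
    """Send the same question with a chosen reasoning budget."""
    response = client.chat.completions.create(
        model="o3-mini",               # assumed reasoning-capable model
        reasoning_effort=effort,       # "low", "medium", or "high"
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# A quick, surface-level reply versus a slower, deeper one.
print(ask("Summarize the trade-offs of microservices.", effort="low"))
print(ask("Summarize the trade-offs of microservices.", effort="high"))
```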

This kind of dial has existed in other corners of computing for decades. In the early years of expert systems, researchers often balanced inference depth against processing time. The idea was that longer reasoning chains could uncover better answers, but only if the operator was willing to wait. OpenAI’s move is essentially a modern translation of the same idea, packaged for a general audience.

Flashcards for Study Mode

A smaller but still interesting addition appears in the form of a study mode. When prompted with a topic[3], the model generates a set of digital flashcards, presents questions one by one, and tracks the user’s answers through a scorecard. Unlike static test banks, the content can evolve with the conversation, producing follow-up questions or repeating material that the learner got wrong. Education research has long found that this kind of retrieval practice strengthens memory more effectively than rereading material, so the approach is grounded in existing evidence. Early tests, though, suggest the rollout is patchy. In some regions, including Pakistan, the system has not produced quizzes for certain subjects such as blogging or search engine optimization, hinting that coverage is still incomplete.
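
The mechanics described above, presenting one card at a time, keeping a scorecard, and repeating whatever the learner missed, amount to a simple loop. The sketch below is a generic illustration of that retrieval-practice loop, not OpenAI’s implementation; the cards, the scoring rule, and the naive answer check are invented for the example.

```python
# A generic sketch of the flashcard loop described above: ask each
# question once, record the result on a scorecard, and re-queue any
# card the learner missed.
from collections import deque

cards = deque([
    ("What does 'retrieval practice' mean?", "recalling material from memory"),
    ("Which memory effect does quizzing exploit?", "the testing effect"),
])

scorecard = {"correct": 0, "missed": 0}

while cards:
    question, answer = cards.popleft()
    reply = input(f"{question}\n> ").strip().lower()
    if answer in reply:                    # naive substring check, for illustration only
        scorecard["correct"] += 1
    else:
        scorecard["missed"] += 1
        print(f"Expected: {answer}")
        cards.append((question, answer))   # repeat material the learner got wrong

print(f"Score: {scorecard['correct']} correct, {scorecard['missed']} missed attempts")
```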

Codex Gains Broader Reach

Meanwhile, developers are seeing changes in Codex[4], the company’s programming assistant. The tool can now be used more smoothly across environments, with sessions linked between the browser, the terminal, and integrated development environments. A new extension for Visual Studio Code and its forks, including Cursor, helps bridge local and cloud work. The command-line tool has been updated as well, with new commands and stability fixes. The improvements bring Codex closer to what competing systems are attempting, such as Anthropic’s Claude Code, which is also experimenting with similar links between web and terminal workflows.

A Shift Toward Adjustable AI

Taken together, the updates reveal a trend. OpenAI is gradually shifting away from a model that spits out a single kind of response toward a service that lets people decide what kind of reasoning, format, or integration they want. That could matter as much for casual users who only want fast answers as it does for students drilling for exams or engineers juggling code between a laptop and the cloud. What unites all of these developments is the idea that AI should not be a sealed black box but an adjustable partner, with knobs that people can turn depending on the task at hand.

Notes: This post was edited/created using GenAI tools.

References

  1. ^ early tests (x.com)
  2. ^ as the engineers describe it (x.com)
  3. ^ When prompted with a topic (www.reddit.com)
  4. ^ changes in Codex (help.openai.com)
