⚙️ The Cost of Letting AI Think for You
April 06, 2026

Hello, friends. Enterprises are starting to move past copilots and toward AI products that deliver measurable results inside real-world workflows, a shift Gartner says will define the next phase of enterprise AI adoption. A new study shows how easily humans defer to AI reasoning, raising deeper questions about how intentional we are in choosing what to do manually versus what to automate. Meanwhile, Nvidia’s push into highly open models is less about becoming a lab and more about expanding the AI ecosystem and reducing dependence on a few proprietary models. —Jason Hiner

IN TODAY’S NEWSLETTER
Why Nvidia threw its weight behind open source AI
The cost of letting AI think for you
Pay for results, not AI tools, say enterprises
BIG TECH
Why Nvidia chose open models to reshape AI
If you’re wondering why AI chip leader Nvidia is now building open models that compete with the Chinese open-source champs, and even with proprietary models from OpenAI and Anthropic, you’re not alone.
Last month, Nvidia launched Nemotron 3 Super, a 120-billion-parameter reasoning model that exceeded expectations in benchmarks. This is a mixture-of-experts model with a 1-million-token context window. In other words, it’s a serious model made to compete with the frontier labs. Meanwhile, the company promised that a model 4x its size, to be called Nemotron 3 Ultra, is coming soon.
And because Nvidia opens the weights, datasets, and training recipes, Nemotron 3 Super is among the most open models in the world, especially for a model of this capability. Among the few models that could claim to be more open are those from MBZUAI, which The Deep View covered in depth in January. But Nvidia’s open models are far closer to full-stack openness than most open-source models, which offer only open-weight releases.
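Here’s what open weights mean in practice: anyone can pull the model and run it locally with standard open-source tooling. Below is a minimal sketch using Hugging Face’s transformers library; note the model ID is a hypothetical placeholder we’ve made up for illustration, since the actual repository name isn’t given here.

```python
# Minimal sketch: running an open-weights model locally with the Hugging Face
# transformers library. MODEL_ID below is a hypothetical placeholder, not a
# real repository name -- check Nvidia's Hugging Face page for actual IDs.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "nvidia/nemotron-3-super"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",   # shard across available GPUs (requires `accelerate`)
    torch_dtype="auto",  # load in the dtype the weights were saved in
)

prompt = "Explain mixture-of-experts models in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point isn’t this specific snippet; it’s that full-stack openness (weights plus data plus recipes) lets anyone inspect, fine-tune, or rebuild the model, rather than just calling it behind an API.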
So why would the leading hardware company of the AI era make software that competes with its leading customers?
“We’re not trying to control AI. We’re trying to grow it,” Bryan Catanzaro, VP of applied deep learning research at Nvidia, told The Deep View. “And so our incentives as a company, our business is aligned with open models and with supporting the ecosystem in a very direct way.”
Kari Briski, VP of generative AI software at Nvidia, offered The Deep View another perspective: “The model is the byproduct. It is not core to our business, which allows us to just open up the data, open up the recipes, open up everything.”
If we break it down, there are three benefits Nvidia gets from making its own models:
Extreme hardware co-design: Making its own models lets Nvidia optimize the heck out of its GPUs, CPUs, and other hardware for running AI. It doesn’t have to wait for the latest models from the frontier labs to plan the next stage of optimizations.
Hedging against proprietary monopolies: If the frontier labs that need the latest and greatest hardware dwindle to only a handful of players, Nvidia could end up at their mercy. When you rely on a small number of customers for huge volumes of orders, those customers gain more and more leverage over your prices; they can demand discounts because they know so much of your business depends on them.
Letting a thousand flowers (a.k.a. customers) bloom: By releasing open models that other hardware and software makers can use as a rapid on-ramp to build their own AI products and serve the industry’s various niches, Nvidia is powering up the ecosystem, giving companies with limited resources models they can use to compete, and potentially creating many more future customers when those companies succeed and grow.
“You don’t want one person winning [because] then they decide all the rules. You need a big open ecosystem for everybody to come along,” said Briski.
Nvidia’s open model strategy makes perfect sense from the perspective of being an ecosystem catalyst. The more it eases the on-ramp for companies building on AI, the more the ecosystem, and Nvidia’s future customer base, grows.
AI might be causing us to forget how to think for ourselves.
Recent research from the University of Pennsylvania found that AI users were often willing to accept flawed AI reasoning, readily incorporating it into their decision-making with “minimal friction or skepticism.”
The research documents the rise of “cognitive surrender,” a phenomenon in which users adopt AI outputs while “overriding intuition… and deliberation.”
• In a study of nearly 1,400 participants across 9,500 trials, researchers found that subjects accepted unsound AI reasoning more than 73% of the time and only overruled models’ decisions about 20% of the time.
• Additionally, participants with higher trust in AI and “lower need for cognition and fluid intelligence” tended to fall victim to this more often.
“Across domains, AI tools are not merely assisting decision-making; they are becoming decision-makers,” the research reads. “This shift opens new theoretical ground: How should we understand human cognition and decision-making in an age when we outsource thinking to artificial processes?”
The study adds to a growing body of research on how AI may be changing the way we think. One of the most commonly cited studies comes from the MIT Media Lab, in which test subjects were asked to write SAT essays under three conditions: one group with OpenAI’s ChatGPT, one with Google Search, and one with no help at all. Consistently, the ChatGPT users “underperformed at neural, linguistic, and behavioral levels.”
Even some of AI’s biggest names are questioning its effects on our brains. Anthropic CEO Dario Amodei said in a March interview with podcaster Nikhil Kamath that deploying AI in the wrong ways could easily make people “become stupider,” but only if they choose to forgo learning entirely. “Even if an AI is always going to be better than you at something, you can still learn that thing. You can still enrich yourself intellectually,” Amodei told Kamath.
The researchers, however, posit that cognitive surrender may not inherently be a bad thing. If an AI model is generally better at reasoning and decision-making than the person using it, and makes fewer mistakes, then “deferring to a statistically superior system may be adaptive or even optimal.”
The bigger issue, however, comes down to agency. The researchers noted that this trend could mark a profound shift in cognition itself, “one in which users may not know when or why they have deferred, and where the line between human and machine agency becomes blurred.”
We are not yet at a point where thought is entirely automated. AI, however, presents the opportunity to manifest that future, turning the friction of human critical thinking into a slippery waterslide of accepting all it gives us. Amodei is correct: Even if AI is someday capable of doing everything, the dividing line between reaping the benefits and losing ourselves is in what we let it do. Even if machines make our clothing, plenty of people still knit and sew as a form of enrichment. Even if laptops make writing easier, there is still value to be gained from writing in a journal by hand. And even if an AI model can take the work out of work, doing things ourselves is still vital to retaining our humanity and agency. Put simply: Don’t be afraid to be bad at something, even if AI can do it better. Explore when there’s value to handling it yourself.
A new study from research firm Gartner found that by 2028, over half of all enterprises will stop paying for assistive intelligence, including copilots and smart advisors, and will instead favor platforms that deliver workflow results. The finding reflects a broader industry shift toward results-driven agentic solutions and away from chat-like interfaces that can only deliver advice.
“The market is moving away from standalone AI experiences and toward workflow-native,” Gartner analyst Vuk Janosevic told The Deep View. “Enterprises have made it clear that they value outcomes inside existing processes more than smart advice sitting off to the side.”
A defining characteristic of agentic AI, meaning an AI solution that takes action on a user’s behalf, is that it must be granted access to the same databases and context the user relies on daily. Any agentic solution expected to deliver meaningful outcomes must therefore be entrusted with proprietary, and often highly sensitive, data.
This reframes what enterprises are willing to pay for. The question is not whether a product belongs to a new category of AI, but whether the AI has the authority to trigger the actions being requested. Gartner posits that the vendors who succeed will not be those that offer more AI, but those that facilitate agent orchestration: ensuring agents follow guardrails, securely access key company databases, and can identify and correct missteps.
“In practice, that means the real value is less about building one more AI platform and more about making business software capable of acting, deciding, and completing work in context and in line with compliance guardrails,” said Janosevic.
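To make that orchestration pattern concrete, here’s a minimal sketch of an agent’s proposed action passing through a policy check before it is allowed to touch company systems. Every name here (the tools, the fields, the functions) is an illustrative assumption, not a real vendor API.

```python
# Illustrative sketch of agent orchestration with guardrails. All names are
# hypothetical. The orchestrator sits between the agent's proposed action
# and the business systems it is allowed to touch.
from dataclasses import dataclass

@dataclass
class Action:
    tool: str     # e.g. "crm.update_record"
    params: dict  # arguments the agent wants to pass

ALLOWED_TOOLS = {"crm.update_record", "tickets.close"}  # explicit allowlist

def check_guardrails(action: Action) -> bool:
    """Reject anything outside the allowlist or touching sensitive fields."""
    if action.tool not in ALLOWED_TOOLS:
        return False
    if "salary" in action.params:  # example of a field-level policy
        return False
    return True

def orchestrate(action: Action) -> str:
    """Run an action only if it passes policy; surface refusals to the user."""
    if not check_guardrails(action):
        return f"blocked: {action.tool} violates policy"
    # In a real system this would dispatch to the actual business software.
    return f"executed: {action.tool}({action.params})"

print(orchestrate(Action("crm.update_record", {"id": 42, "status": "won"})))
print(orchestrate(Action("payroll.set_salary", {"id": 42, "salary": 0})))
```

The design point matches Gartner’s framing: the value lives in the layer that decides what an agent may do and catches its missteps, not in the agent itself.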
The growing shift toward automation may prompt an instinctive assumption of widespread job disruption. Janosevic, however, offered a more nuanced view: while some displacement is inevitable, entire professions are unlikely to vanish. Instead, certain roles will contract and be redesigned as new ones emerge around AI-led work.
“The deeper point is that agentic AI changes how work is organized, not just how fast it gets done,” he added.