We're Not Working With AI - We're Managing It

Author

Ayman Elhalwagy

One of the most interesting studies I've read recently is MIT's "Your Brain on ChatGPT".

The results are telling: "Brain connectivity systematically scaled down with the amount of external support: the Brain-only group exhibited the strongest, widest-ranging networks, Search Engine group showed intermediate engagement, and LLM assistance elicited the weakest overall coupling."

Here's my take: this is clear empirical evidence that we are not really "working with AI" - we are managing it. We tell it what we want, and it does the work, much as a manager directs a team.

That's not a bad thing. The upside is obvious - bandwidth and productivity. You don't expend the same level of mental effort to get things done.

But it's a mistake to fool ourselves into thinking this is the same as using a tool. If it were truly collaborative, we'd expect to see the kind of sustained neural connectivity patterns associated with active learning and memory formation. The study found the opposite - weakened connectivity across key brain networks, particularly reduced theta and alpha activity in frontal-temporal regions crucial for memory encoding. And the immediate impact is clear: 83% of LLM users couldn't accurately quote their own essays minutes later, compared to only 11% in other groups.

If short-term memory encoding is this compromised, it raises questions about long-term retention. When the brain isn't actively building those neural pathways during the initial learning phase - what the reduced theta and alpha connectivity patterns suggest is happening - it's reasonable to hypothesize that long-term memory formation through synaptic plasticity would be similarly affected.

What makes this even more interesting is something the study didn't control for: how people engage with the LLM. Some just say "I want X" and let the model figure out the "how." Others specify both what and how, and iterate. The latter forces the prefrontal cortex to stay engaged.

The study actually found evidence for this: when participants who had practiced writing without AI (Brain-only group) later used LLMs in Session 4, they showed higher neural connectivity than regular LLM users. Even within the LLM group, participants who limited their use to specific tasks like grammar checking or translation maintained stronger ownership of their work compared to those who outsourced content generation entirely.

That, I think, is the real frontier. Not whether LLMs make us smarter or dumber, but whether we discipline ourselves to engage them in a way that keeps us thinking, rather than just delegating.

The study calls this phenomenon "cognitive debt" - we defer mental effort in the short term but pay long-term costs in diminished critical thinking and creativity. Which raises perhaps the most important question: can humanity afford to accumulate this debt at scale? Or do we need to be intentional about developing our cognitive capabilities in parallel with AI's advancement - not to compete with it, but to remain capable of directing it meaningfully?

And it raises a bigger question: is the current reward function used to train LLMs the right one for all cases? Yes, it boosts benchmark scores and makes the models more effective problem-solvers. But in spaces like EdTech - where the real value comes from engagement and critical thinking - we may need more exotic reward functions. Ones designed not only to optimize answers, but to shape the interaction itself so that the prefrontal cortex is activated rather than bypassed.
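To make the idea concrete, here is a minimal sketch of what such a composite reward might look like. Everything here is a hypothetical illustration - the function name, the signals, and the weights are assumptions of mine, not taken from the study or from any real RLHF pipeline. The point is only that a reward can blend answer quality with proxies for whether the user kept thinking.

```python
def composite_reward(answer_quality: float,
                     user_elaboration: float,
                     socratic_turns: int,
                     engagement_weight: float = 0.5) -> float:
    """Hypothetical reward blending task success with engagement signals.

    answer_quality: 0-1 score from a standard preference model (assumed given).
    user_elaboration: 0-1 proxy for how much the user refined or iterated.
    socratic_turns: count of turns where the model prompted the user to reason.
    engagement_weight: how much the reward values engagement over the answer.
    """
    # Cap the engagement signal at 1.0 so many Socratic turns can't dominate.
    engagement = min(1.0, 0.7 * user_elaboration + 0.1 * socratic_turns)
    # Convex blend: weight 0 recovers today's answer-only objective.
    return (1 - engagement_weight) * answer_quality + engagement_weight * engagement
```

With `engagement_weight=0` this collapses to the familiar answer-only objective; raising it trades a little benchmark performance for interactions that keep the user's prefrontal cortex in the loop - exactly the dial an EdTech product might want to expose.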

Link to the study: https://www.brainonllm.com/