Big Tech Leaders Ask: Do AI Tutors Build Real Skills?

Experts from OpenAI, Google, and Khan Academy debate whether AI tutors truly help students learn. Discover the risks of cognitive offloading and how to use AI safely.

Wednesday, February 11, 2026

Top researchers from OpenAI, Google DeepMind, and Khan Academy are joining forces to debate the future of artificial intelligence in the classroom. The central issue is no longer just about access or excitement, but whether AI tutors are helping students learn or allowing them to skip the hard work of thinking.

What Happened

According to Edtech Insiders, more than 600 educators and researchers have registered for a summit to unpack the effectiveness of AI tutoring. The event features heavy hitters like James from OpenAI, who leads research on how AI affects cognitive outcomes, and Irina from Google DeepMind, who focuses on reasoning and accessibility.

The discussion comes at a pivotal moment. As Jen Lapaz writes, the industry is moving past the initial hype phase where "everyone is experimenting." The focus is now on evidence. The goal is to ensure AI tutors are designed to deepen understanding rather than "quietly normalize outsourcing thinking."

The Bigger Picture

AI tutoring is already operating at a massive scale. According to Brookings, recent meta-reviews show that intelligent tutoring systems can match the success of human tutors. In fact, a study published in Scientific Reports found that specific AI tutoring models even outperformed traditional in-class active learning strategies, with students learning more in less time.

However, the technology is not perfect. While OpenAI's latest models reportedly outperform humans on benchmarks of critical thinking and scientific reasoning, they still lack "adaptive reasoning." In practice, this means a model can solve complex logic puzzles yet struggle to adjust its approach the way a human teacher does when a student is stuck.

This leads to the risk of "cognitive offloading." Research from the University of Toronto suggests that when students use tools to bypass "core cognitive work," they lose long-term knowledge. There is a significant negative correlation between how often a student uses AI shortcuts and their ability to think critically on their own.

To combat this, platforms like Khan Academy are implementing "Socratic" guardrails. As we previously reported, Khan Academy has integrated its content directly with ChatGPT to improve accuracy. Their standalone tool, Khanmigo, is designed to ask probing questions rather than giving answers. This forces the student to do the mental heavy lifting, prioritizing the learning process over the final product.

What This Means for Families

For parents, this shift means that not all AI tools are created equal. An AI that simply provides answers acts as a "crutch," leading to what researchers call metacognitive errors: a student believes they understand the material because the AI produced the right answer, but they cannot replicate the result on their own.

However, when used correctly, these tools can be powerful. Khanmigo and similar platforms allow for "teaching at the right level," giving students personalized support that a single teacher in a crowded classroom cannot always provide. The key difference lies in whether the tool acts as a tutor that guides or a machine that solves.

What You Can Do

  • Check the methodology: Before buying a subscription, test the AI. Does it give the answer immediately, or does it ask your child to explain their next step?
  • Monitor the history: Tools like Khanmigo record Q&A history, allowing you to see if your child is engaging in conversation or just demanding answers.
  • Encourage "study mode": Explain to your child that using AI to skip the thinking step weakens their ability to retain what they learn. Encourage them to turn to AI only after they have attempted the problem themselves.