Think Academy presented its "Think Kids" platform at the 2026 ASU+GSV Summit, branding the tool as an early childhood solution for the AI era. As artificial intelligence enters preschool classrooms, educators and parents are measuring the commercial momentum of these platforms against conflicting research on child development.
What Happened
The ASU+GSV Summit is a venue for education technology investors, hosting 457 sessions on the future of learning. Platforms showcased at the event are not vetted for educational outcomes.
The summit identifies early-stage startups through the GSV Cup, a process the EdTech Innovation Hub describes as a search for companies with the potential to scale globally. Judges evaluate startups on business models, founding teams, and product-market fit; peer-reviewed pedagogical efficacy is not a requirement for inclusion. Consequently, companies like Think Academy gain visibility for their market viability rather than for proven, longitudinal success in student learning.
The Bigger Picture
Specific research on Think Kids is absent, but independent studies offer a mixed view of AI in early childhood. Some classroom data is positive. A study in Humanities and Social Sciences Communications found that an AI robot used for task-based learning led to higher persistence and problem-solving among five- and six-year-olds. Research in AI, Brain and Child observed that kindergarteners used the conversational AI tool Doubao without it replacing their human teachers.
Despite these findings, developmental experts caution against broad adoption. Human brain architecture depends on "serve-and-return" interactions, the physical and emotional exchanges between a child and a caregiver. Heavy screen use crowds out these exchanges, and because AI models are endlessly patient and rewarding, children may come to prefer them over the human interaction their development requires.
There is also the risk of cognitive outsourcing. While older students use AI to build thinking skills—as we previously reported—young children are still forming foundational reasoning. Relying on generative AI platforms during these years may hinder cognitive development by performing the act of thinking for the child.
What This Means for Families
Current AI tools offer task-based assistance but lack long-term data on developmental safety. Researchers in the Early Childhood Education Journal argue that AI is trustworthy only if developers adopt strict ethical frameworks that prioritize children's rights. Until such frameworks are standard, parents and educators must vet these tools themselves. A platform's popularity at an industry summit signals venture capital backing, not a healthy developmental profile.
What You Can Do
- Prioritize human interaction. Ensure digital engagement is balanced with physical, face-to-face "serve-and-return" communication.
- Treat AI as a supplement. Limit the use of AI platforms to specific, task-based activities rather than open-ended companions or primary instructional tools.
- Look past the hype. When evaluating new preschool platforms, seek out peer-reviewed pedagogical studies rather than industry awards or investor backing.