As artificial intelligence reshapes classrooms, the company responsible for monitoring nearly half of America's K-12 students is setting new standards for digital safety. GoGuardian, a platform used by 25 million students, recently outlined a stricter approach to managing AI in schools, calling for "granular visibility" and stronger control mechanisms to keep pace with rapid technological changes.
What Happened
Vishal Gupta, Global Chief Technology and Product Officer at GoGuardian, stated in an interview with EdTech Digest that schools must demand more transparency from their technology providers. As districts race to adopt AI tools, Gupta argued that improved security is non-negotiable. He identified four critical requirements for responsible innovation: deep visibility into which tools are being used, strict compliance with privacy laws, robust control mechanisms, and measurable educational outcomes.
According to Gupta, older approaches that simply block websites are no longer sufficient. Schools now require "granular visibility," broken down by class and grade, to distinguish sanctioned learning tools from unauthorized apps. To support this, GoGuardian is rolling out features such as multi-factor authentication (MFA) and pursuing security certifications such as ISO 27001.
The Bigger Picture
This push for tighter control comes as schools face an escalating technological arms race. Students are increasingly finding ways to bypass traditional web filters; according to Linewize, they frequently use web-based proxy sites to hide their activity from school administrators. In response, companies are deploying AI-driven tools that detect and block these evasion tactics in real time.
The regulatory landscape is also shifting. While federal laws like FERPA and COPPA provide a baseline for privacy, experts warn they may not be enough for the AI era. SchoolAI notes that legacy privacy laws often fail to address how generative AI models collect and train on student data. Consequently, states are beginning to implement their own stricter guidelines. AI for Education reports that some states now require human oversight for AI grading and prohibit unauthorized data training.
To address these concerns, the industry is moving toward standardized governance. The new ISO/IEC 42001 certification specifically targets AI management, requiring organizations to perform impact assessments and monitor for bias, a level of scrutiny that goes far beyond standard IT security.
What This Means for Families
For parents, this shift signals that school-issued devices are becoming more sophisticated monitoring tools, aimed not just at safety but also at data analysis. The move toward "measurable outcomes" means software providers are under pressure to prove their tools actually help students learn, rather than just keeping them busy.
However, it also highlights a tension between safety and privacy. As filtering tools like Deledao and GoGuardian use AI to scan text and images in real time, the digital classroom is becoming a highly monitored environment. Parents can expect stricter enforcement of digital policies and potentially less flexibility in how students use school devices for non-academic purposes.
What You Can Do
- Check the policy: Ask your school board if they have specific policies regarding AI data training. Does the software they use learn from your child's work?
- Monitor the monitor: If your school uses GoGuardian, ask if they provide a parent app or portal where you can view reports on your child's online activity.
- Discuss boundaries: Talk to your children about why web filters exist. Explain that "getting around" the filter exposes them to security risks, not just forbidden content.