EdTech company Otus has formed a new advisory board of national education leaders to guide its strategy on artificial intelligence, data privacy, and student assessment. As schools face increasing pressure to adopt AI tools while protecting student data, this move signals a shift toward more rigorous oversight in educational technology.
What Happened
Otus, a platform used by K-12 districts for assessment and data insights, announced the creation of the Otus Advisory Board. The board aims to help the company navigate the complex landscape of modern education, specifically focusing on "emerging trends in education, data, and technology."
According to Otus President Chris Hull, the group will provide feedback on critical issues like AI in assessment, equity in personalized learning, and data governance. The inaugural members include distinguished leaders such as former AVID Center CEO Dr. Sandy Husk and Dr. Edward Lee Vargas, a former State Superintendent of the Year for both California and Washington.
This is the company’s second major move toward expert oversight this year. In January 2025, Otus formed a dedicated AI Advisory Board to oversee tools like Otus Insights.
The Bigger Picture
The formation of this board comes at a time when the risks of AI in education are becoming clearer. A recent report cited by NPR warns that for many schools, the risks of AI currently outweigh the benefits. These risks include potential bias in automated grading and the danger of "de-skilling" students, where relying on AI short-circuits the learning process.
Equity is a major concern. Research highlighted by The 74 indicates that free AI tools—often the only ones accessible to low-income districts—are significantly less reliable than paid versions. This creates a two-tiered system where some students receive accurate feedback while others are exposed to misinformation.
Data privacy is also undergoing a massive shift. Schools are moving away from simple login tools toward "zero-trust" architectures. According to SchoolDay, the goal is now "data minimization," ensuring vendors only see the specific slice of data they need to function, rather than accessing a student's full profile.
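To make "data minimization" concrete, here is a minimal, purely illustrative sketch (not drawn from Otus, SchoolDay, or any specific district system) of how a district's integration layer might pass each vendor only the fields it has been approved to receive. Every field name and vendor allowlist below is a hypothetical example.

```python
# Illustrative sketch of data minimization: a district integration layer
# shares with each vendor only the fields that vendor is approved to see.
# All field names and vendor allowlists are hypothetical examples.

FULL_STUDENT_RECORD = {
    "student_id": "12345",
    "grade_level": 7,
    "reading_scores": [82, 88, 91],
    "home_address": "...",       # sensitive: most vendors never need this
    "iep_status": True,          # sensitive: shared only when required
    "lunch_program": "reduced",  # sensitive
}

# Each approved vendor gets an explicit allowlist, reviewed by the district.
VENDOR_ALLOWLISTS = {
    "reading_app": {"student_id", "grade_level", "reading_scores"},
    "attendance_tool": {"student_id", "grade_level"},
}

def minimize(record: dict, vendor: str) -> dict:
    """Return only the fields this vendor is approved to receive."""
    allowed = VENDOR_ALLOWLISTS.get(vendor, set())
    return {field: value for field, value in record.items() if field in allowed}

if __name__ == "__main__":
    # The reading app receives scores, but never address, IEP, or lunch status.
    print(minimize(FULL_STUDENT_RECORD, "reading_app"))
```

The point of the sketch is the allowlist: the vendor never sees the full profile, only the narrow slice it needs to function.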
Furthermore, while "personalized learning" is a popular buzzword, its success depends on implementation. The Harvard Gazette reports that personalized support works best when it is relationship-based, connecting students to human supports and coordinators rather than relying solely on software algorithms.
What This Means for Families
For parents, the involvement of experienced superintendents and educators in product development is a positive sign. It suggests that companies are recognizing that technology alone cannot solve educational challenges.
However, the research on AI bias and data privacy highlights the need for vigilance. If your school uses platforms that claim to "personalize" learning, it is important to know if that personalization is driven by a supportive human element or an unmonitored algorithm. Additionally, as districts adopt more tools, ensuring those tools follow strict data governance protocols is essential to keeping your child's information safe.
What You Can Do
- Ask about AI tools: Find out whether the AI tools your child uses are free, public versions or paid, vetted enterprise versions, which offer higher accuracy and stronger privacy protections.
- Check data policies: Ask your school board if they use "privacy checkpoints" or data minimization strategies when approving new apps.
- Value human connection: Support programs that use technology to connect students with teachers and mentors, rather than replacing them.