University AI Tool Sparks Privacy Fears Over Student Data

A professor's unauthorized use of OpenAI for student data highlights privacy risks. Learn how schools handle 'local' vs. cloud AI and what parents can do.

Tuesday, February 3, 2026

A professor at a Dutch university is facing scrutiny after encouraging colleagues to upload sensitive student portfolios to OpenAI, despite strict privacy rules prohibiting the practice. The incident highlights a growing challenge for schools: ensuring that "innovative" AI tools do not secretly compromise student privacy.

What Happened

A professor at Fontys University of Applied Sciences developed a software tool to analyze student portfolios, which often contain deeply personal reflections and progress reports. He initially promoted it on LinkedIn as a "local AI tool," implying the data would stay on the user's computer. However, he later admitted the tool actually sent data to servers operated by OpenAI, the US-based company behind ChatGPT.

According to Omroep Brabant, the professor denied uploading actual student data, saying he had only used his own portfolio for testing. Yet the university confirmed that sending such data to OpenAI is a clear violation of the General Data Protection Regulation (GDPR). Johan Jeuring, head of the National AI Education Lab, stated that using OpenAI for personal portfolios is "simply not allowed" because users cannot fully trust how the company handles that information.

The Bigger Picture

This incident illustrates a major technical divide in schools: the difference between "rented" AI and "local" AI. When schools use tools like ChatGPT, they are effectively "renting" intelligence, meaning the data leaves the school's control. Privacy researchers argue that to genuinely protect student data, institutions should move toward self-hosted models, where information never leaves the school's own servers.
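In practice, the divide can come down to a single URL: the address the data is sent to. The sketch below is purely illustrative (Python with the `requests` library; the "local" case assumes an Ollama server running on the same machine, and both model names are placeholders), but it shows where the bytes actually go in each setup:

```python
import requests

# Placeholder text only; never paste real student data into a test like this.
PORTFOLIO_TEXT = "Example reflection written for testing purposes."

def summarize_cloud(text: str, api_key: str) -> str:
    """'Rented' AI: the text leaves the network the moment this request is sent."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": "gpt-4o-mini",  # illustrative model name
            "messages": [{"role": "user", "content": f"Summarize: {text}"}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def summarize_local(text: str) -> str:
    """'Local' AI: the same request goes to a model on this machine (here, Ollama)."""
    resp = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        json={"model": "llama3", "prompt": f"Summarize: {text}", "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]
```

In the local version, the portfolio text never crosses the campus firewall; in the cloud version, it is sitting on OpenAI's infrastructure before the response even comes back.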

Furthermore, relying on a teacher's word that data wasn't uploaded is risky. While the university said it could not verify the claim, new research into traffic analysis suggests that experts can identify what topics are being sent to AI models just by analyzing network patterns. This means schools may soon have the technical means to audit these "leaks" rather than relying on trust.
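A much simpler first step than the topic-level analysis that research describes is merely flagging which AI services campus devices contact at all. The sketch below assumes a plain-text DNS query log with one looked-up hostname per line (a format many resolvers can export); the log path and hostname list are illustrative, not a ready-made audit tool:

```python
# Hostnames of major AI APIs to watch for; extend as needed.
AI_API_HOSTS = {
    "api.openai.com",
    "chatgpt.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def flag_ai_traffic(dns_log_path: str) -> list[str]:
    """Return every logged DNS lookup that matches a known AI API host."""
    hits = []
    with open(dns_log_path) as log:
        for line in log:
            host = line.strip().lower()
            if any(host == h or host.endswith("." + h) for h in AI_API_HOSTS):
                hits.append(host)
    return hits

if __name__ == "__main__":
    for host in flag_ai_traffic("dns_queries.log"):  # assumed log file name
        print(f"AI service contacted: {host}")
```

Topic-level traffic analysis goes further, inferring what kind of content is being sent from patterns in the encrypted traffic itself, but even a basic check like this can reveal whether a "local" tool is quietly phoning home.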

What This Means for Families

For families, the distinction between "deleted" and "gone" is critical. While OpenAI offers privacy controls, standard chats are not deleted instantly. According to OpenAI's retention policy, even after a user deletes a chat, the data can remain on OpenAI's systems for up to 30 days.

Additionally, digital portfolios are not just homework; they are inherently personal documents that often include health-related information or emotional reflections. When these are fed into commercial AI systems, they may be used to train future models unless specific opt-out settings are enabled.

What You Can Do

  • Ask about "Local" vs. "Cloud": Find out whether your school's AI tools process data on the student's device (local) or send it to a company's servers (cloud).
  • Check Data Training Settings: If your child uses ChatGPT for school, ensure the "improve the model for everyone" setting is turned off to prevent their work from becoming training data.
  • Request Privacy Audits: Encourage school administrators to look into network traffic analysis tools to verify that sensitive data isn't silently leaving the campus.