OpenAI has updated ChatGPT to function as a collaborative teammate that remembers user preferences across sessions. New tools let the artificial intelligence adopt specific personalities and retain personal details, shifting the product from a session-based tool to a persistent digital assistant.
What Happened
According to OpenAI, users can customize ChatGPT through three features: Custom Instructions, Memory, and Skills. Custom Instructions let users define the assistant's role, tone, and formatting rules. The Memory feature stores details users share explicitly, along with information it gathers implicitly from recent conversations, to tailor future replies. Skills let users turn repetitive tasks into structured, automated workflows. Together, these tools help the AI adapt to user guidance, so users do not need to repeat context in every session.
The Bigger Picture
As AI tools shift from ephemeral chat sessions to long-term identity profiles, researchers warn of privacy and cognitive risks for students.
The rapid adoption of persistent AI memory creates data privacy challenges. According to MIT Technology Review, AI systems often collapse information from very different domains (academic research, health questions, interpersonal conflicts) into a single repository. This context bleed means a student's private disclosures in one area can influence how the system treats them in others. Research from Contrary Research indicates that these systems are moving toward an identity model that is difficult for users to audit or erase, complicating compliance with student privacy laws such as the Family Educational Rights and Privacy Act (FERPA). We previously reported on related vulnerabilities, including how hackers are tricking AI assistants to steal student data.
Beyond privacy, the personality aspect of modern AI raises developmental concerns. A study published in Frontiers found that students develop attachment to, and trust in, an AI based on its responsiveness. A separate study in the International Journal of STEM Education describes this as an "illusion of dialogue." Treating the AI as a collaborative partner can encourage cognitive offloading, in which students let the machine complete tasks while retaining a false sense of independent capability.
Institutional oversight lags behind student usage. According to AI Agent Corps, 92 percent of higher education students use generative AI. Data from Xcelacore shows that only 13 percent of institutions measure the technology's return on investment, and many administrators remain unaware of how AI is used on their campuses.
What This Means for Families
Long-term memory and customized personalities make ChatGPT an efficient tool, but they blur the line between software and confidant. When an AI remembers a student's struggles, preferences, and style, it creates an environment that can reduce academic isolation. This engagement can mask the fact that the student is interacting with a corporate database.
Parents and educators face a dual challenge: protecting student data from commercial AI models while ensuring that convenience does not bypass the productive struggle of learning. If an AI automatically tailors its answers to a student's preferences and remembers their previous mistakes, it may short-circuit the development of critical thinking and problem-solving skills. As we previously reported, actively engineering prompts is a vital skill that automated memory features might render unnecessary.
What You Can Do
- Audit stored memories: Check ChatGPT's Personalization and Memory menus with your student to review and delete unnecessary personal details.
- Set clear boundaries: Discuss what types of information are appropriate to share with AI. Academic queries should remain separate from highly personal or health-related topics.
- Encourage manual prompting: Have your student start fresh conversations for new assignments rather than relying on the AI's stored context, so they practice structuring their own inquiries.