Artificial intelligence is reshaping workplaces — and increasingly finding its way into the hands of many teens and children.
From helping with homework to chatting with AI “friends,” tools such as ChatGPT have free versions online that are easy for young users to access. These AI chatbots, built on large language models (LLMs), generate human-like responses that have sparked concern among parents, educators and researchers.
A 2024 survey by Pew Research Center found that 26% of U.S. teens aged 13 to 17 say they have used ChatGPT for their schoolwork — double the rate from a year earlier. Awareness of the chatbot rose to 79% in 2024 from 67% in 2023.
Regulators have taken notice. In September, the Federal Trade Commission ordered seven companies, including OpenAI, Alphabet and Meta, to explain how their AI chatbots may affect children and teenagers.
In response to mounting scrutiny, OpenAI announced the same month that it would launch a dedicated ChatGPT experience with parental controls for users under 18 and develop tools to better predict a user’s age. The system would automatically direct minors to “a ChatGPT experience with age-appropriate policies,” the company said.
Risks of children using AI chatbots
However, some experts worry that early exposure to AI — especially as today’s youngest generations grow up with the technology — may negatively impact how children and teens think and learn.
A 2025 preliminary study from researchers at MIT’s Media Lab examined the cognitive cost of using an LLM to write essays. Fifty-four participants aged 18 to 39 were asked to write an essay and were assigned to one of three groups: one could use an AI chatbot, another could use a search engine and a third relied solely on their own knowledge.
The paper, which is still undergoing peer review, found that brain connectivity “systematically scaled down with the amount of external support.”
“The brain‑only group exhibited the strongest, widest‑ranging networks, the search engine group showed intermediate engagement, and LLM assistance elicited the weakest overall [neural] coupling,” according to the study.
Ultimately, the study suggests that relying on AI chatbots could leave people feeling less ownership over their work and create “cognitive debt,” a pattern of deferring mental effort in the short term that may erode creativity or make users more vulnerable to manipulation in the long run.
“The convenience of having this tool today will have a cost at a later date, and most likely it will be accumulated,” said research scientist Nataliya Kosmyna, who led the MIT Media Lab study. The findings also suggested that relying on LLMs might lead to “significant issues with critical thinking,” she added.
Children, in particular, could be at risk for some of the negative cognitive and developmental effects of using AI chatbots too soon. To help mitigate those risks, researchers say it is important for anyone, particularly young people, to build skills and knowledge before relying on AI tools to complete tasks.
“Develop the skill for yourself [first], even if you are not becoming an expert in it,” said Kosmyna.
Doing so makes it easier to catch inconsistencies and AI hallucinations, a phenomenon in which inaccurate or fabricated information is presented as fact, she added, which will also help “support critical thinking development.”
“For younger children … I would imagine that it is very important to limit the use of generative AI, because they just really need more opportunities to think critically and independently,” said Pilyoung Kim, a professor at the University of Denver and child psychology expert.
There are also privacy risks that children may not be aware of, and it is important that they use these tools responsibly and safely, Kosmyna explained. “We do need to teach overall, not just AI literacy, but [also] computer literacy,” she said. “You need really clear tech hygiene.”
Children also have a higher tendency to anthropomorphize, or to attribute human characteristics or behavior to non-human entities, said Kim.
“Now we have these machines that talk just like a human,” said Kim, which can put children in vulnerable situations. “Simple praise [from] these social robots can really change their behavior,” she added.
Protecting kids in the AI era
Today, the AI-native generation is growing up with access to these tools, and experts are asking themselves: “What happens with extended use?”
“It’s too early [to know]. No one is doing studies on three-year-olds, of course, but it’s something very important to keep in mind that we do need to understand what happens to the brains of those who … are using these tools very young,” said Kosmyna.
“We see cases of AI psychosis. We see cases of, you know, unaliving. We see some deep depressions … and it’s very concerning and sad, and ultimately dangerous,” she added.
Kosmyna and Kim said regulators and technology companies share the responsibility to protect society and young people by having the right guardrails in place.
For parents, Kim’s advice is simple: keep an open line of communication with your kids and monitor the AI tools they use, including what they type into the LLMs.