When childhood becomes data

Personalised learning brings opportunity and risk in equal measure

Artificial intelligence is not simply entering the classroom. It is beginning to redesign the architecture of human development itself. That is the unsettling undercurrent running through AI and Ethics of Smart Education, a major new report from the FII Institute and Columbia Climate School’s Center for Sustainable Development, which examines how governments from France to China, Qatar to the United States are racing to integrate AI into schools while rewriting the social contract between children, educators, states and technology companies.

At first glance, the report appears to explore familiar territory: adaptive learning, AI tutors, personalised education and administrative efficiency. Yet beneath the policy language sits a far more provocative argument. The future of education may no longer be decided primarily by teachers, ministries or even parents. Increasingly, it may be shaped by whoever controls the data, infrastructure and governance models underpinning artificial intelligence itself. And that changes everything.

For decades, education systems were designed around human limitations. A teacher could only manage so many students. Assessments were periodic snapshots. Behavioural observations faded with time, and memory itself imposed natural limits on surveillance. AI erases those limits.

Today’s “smart education” systems can continuously monitor engagement, track emotional responses, analyse learning behaviour, predict performance trajectories and generate persistent digital profiles stretching across years of development. The report describes this as “the datafication of childhood”, perhaps its most important and haunting phrase. In practical terms, it means childhood risks becoming a permanent dataset.

Learning in the age of algorithms

A student’s weaknesses, behavioural struggles, learning speed, emotional patterns, disciplinary records and cognitive habits could theoretically follow them long after graduation. The report openly warns of future scenarios in which such information may become accessible to employers, insurers, financial institutions or other third parties. The ethical implications are enormous because education is unlike any other AI environment. These are not shopping preferences or entertainment algorithms. These are children. Human beings in formation.

The report repeatedly stresses that education concentrates multiple “high risk” AI applications simultaneously, including grading, behavioural profiling, predictive analytics, placement systems and generative tutoring tools. A flawed recommendation engine suggesting the wrong film is trivial. A flawed educational AI system influencing a child’s life trajectory is something else entirely.

“Smart education is not primarily a technical challenge, but a governance challenge.”

– AI and Ethics of Smart Education, FII Institute

What makes the report especially compelling is that it refuses to frame this debate as simply “pro AI” versus “anti AI”. Instead, it argues that outcomes depend almost entirely on governance architecture. France and the wider European Union approach AI through a rights based precautionary model rooted in GDPR and the EU AI Act. Educational AI systems are increasingly classified as “high risk”, triggering stricter requirements around transparency, human oversight and child protection. Yet even France faces contradictions. While promoting AI innovation and sovereign European models, it is simultaneously attempting to reduce screen addiction and restrict smartphone use in schools.

China, meanwhile, views smart education as national infrastructure tied directly to competitiveness and long term technological sovereignty. Qatar frames Arabic language AI capability not merely as localisation but as an issue of fairness and inclusion. Japan’s Society 5.0 vision pushes perhaps the most philosophically balanced model, positioning AI readiness alongside wellbeing, literacy and human centric design.

Then there is the United States, where the report paints a fragmented landscape of rapid experimentation driven by EdTech markets, inconsistent regulation and uneven safeguards. Facial recognition systems, deepfake abuse and surveillance technologies are already generating ethical crises in schools. Brazil offers perhaps the starkest warning. In one case highlighted in the report, facial recognition systems deployed in schools evolved into tools capable of emotion analysis and behavioural monitoring, effectively transforming classrooms into what the authors describe as laboratories for emotional surveillance.

Today’s students may become the first generation raised inside algorithmic education

Beneath all of this sits another anxiety the report handles brilliantly: cognitive outsourcing. If students increasingly rely on generative AI to write, reason, summarise and think for them, what happens to the development of critical thought itself? What becomes of originality in a generation raised alongside systems designed to eliminate friction, uncertainty and intellectual struggle?

The report does not descend into dystopian melodrama. In fact, it remains remarkably measured throughout. AI could absolutely expand educational access, reduce administrative burdens and personalise learning in transformative ways. Yet the document’s central message is impossible to ignore. Smart education is not fundamentally a technology story. It is a governance story.

The real contest is not over whether AI belongs in schools. That battle is already over. The real contest is over who shapes the minds, freedoms and futures of the first generation raised inside algorithmic childhood.

