How to Raise Critical Thinkers in an AI-Driven World
Tech may be a great equalizer in many ways. But that doesn’t mean you can skip learning how to use it well.
A few weeks ago I received a promo email from OpenAI. In a nutshell, it said: you don’t need training to use ChatGPT - anyone can use it, anywhere, anytime, regardless of their understanding of AI or LLMs.
This is, in theory, true. Thanks to freemium models, commercial LLMs have essentially zero barrier to entry from a product perspective. As long as you can type or dictate by voice, you’re in. Congratulations. But this says nothing about how well or effectively you can use an AI tool. For that, you’re back at square one: how well you can think critically about your task, its objective, and its scope.
An AI company inviting you to use their LLM by saying you need “no training” to use it is like a rocket ship manufacturer saying “Here are the keys to the rocket. Anything else you need?” It’s simply not the case that all we need is “access” to a powerful machine. We also need foundational literacy and training on how, when, and where to activate the machine; and, most importantly, we need critical thinking and metacognitive skills to decide the purpose, reason, and proportionality of our use.
Many of us have grown up with a common narrative of technology as a great equalizer. How often have you heard a slogan along the lines of “the internet is democratizing access,” or encountered the general view that access to a technological device (usually a smart one) somehow automatically means better fundamental rights?
Yet this is a dangerous conception to graft onto our AI products, particularly when they’re being sold to children (including indirectly, via schools and education programs). When tech companies - including many edtech companies that sell to schools and educators - try to convince you that their AI solution will automatically improve quality of life without students doing any foundational training, you should run for the hills (or, at least, be very skeptical).
Indeed, every person I’ve met who cares about children’s welfare can, more or less, agree on one simple thing: children must have education, training, and some foundational literacy about AI tools before interacting with generative AI, whether for learning or play. Where children and AI are involved, plug-and-play solutions are simply not feasible.
As it turns out, the same rule goes for adults.
What Defines the Best AI Users
A recent chat with our academic advisor Dr. Vivienne Ming, an AI ethicist and theoretical neuroscientist, reinforced this truth: some humans are simply better than others at using AI to their advantage, and this largely comes down to higher levels of critical thinking, social-emotional intelligence, and metacognitive functioning (i.e., awareness of how you learn and the ability to train yourself to learn better with new tools and in new contexts).
In her latest research project on how humans collaborate with AI, Dr. Ming is finding preliminary evidence of precisely this:
[What we appear to be seeing is that] there is no one story for humans and machines collaborating. The bottom [level of the pyramid for collaboration] is easy, and this accounts for probably the majority of people. They largely just do what the AI says [and] add no additional value. They do about as well as the AIs [would on their own].
But, she adds, when you pair an AI with a person who has a certain level of metacognition and social-emotional intelligence (i.e., awareness of both themselves and the AI), something unusual happens:
Above a certain [human competency] threshold, the humans stop just taking the AI's word for it, and suddenly this whole new behavior emerges. Where it's really dynamic. The humans [say things], the AI adds insights, the humans take those insights, update their [work], the AI pulls back to the known and the center [and] the humans push out to the edges.
This is human-machine collaboration at its (simplistically described) best. And it cannot be achieved with an approach that tells you that you need “no training” to use AI. In fact, the secret ingredient - so to speak - is nearly the opposite of zero friction: it is the willingness to create friction, to push back against an AI, that leads learners to the pot of gold:
High [AI and] human capital hybrid teams don't just do as well as [a high functioning AI], they seem to do better. Even though these humans know nothing, they have no domain expertise about [the topic]. They're just smart, emotionally intelligent, they have perspective-taking skills, including [about] the AIs. They have real sensitivity when it might be wrong and when it might be right.
So, as you choose AI literacy solutions for your students this semester, remember: before you invest in an AI platform, invest first in students’ foundational critical thinking, communication skills, and safety training.
More Resources
For free AI literacy resources for K-12 students, we love the open-source teaching guides from MIT’s Day of AI. Or, if you’d like to discuss a multilingual option (with no teacher prep) for your students, check out KiguLab or contact us at info@kigumigroup.com.
References
Kigumi conversations with Dr. Vivienne Ming, December 2025