The Right and Wrong Way to Study with AI

Have a look around any college library, uni rec room or coffee shop where someone’s revising and you’ll see the same thing: a laptop open, a textbook closed and ChatGPT doing most of the talking. AI has become the default study partner for a huge number of learners, and that shift has happened faster than almost any other change in education in living memory.

If you’re working through a personal trainer diploma right now, topping up with a CPD course or studying for a specialist qualification, this shift affects you directly. The information you’re trying to learn (anatomy, physiology, programme design, nutrition science) is the same information you’ll be relying on every day once you’re working with paying clients.

Whether AI is helping you actually learn that material, or whether it’s quietly letting it slip straight through your head, is the difference between qualifying with genuine expertise and qualifying with a certificate you can’t back up. And the latest research has a lot to say about which one you’re likely to end up with.

What the Latest Research Is Actually Showing

The most talked-about study of the last year on this topic came out of the MIT Media Lab. Researchers used EEG to measure the brain activity of students writing essays under three conditions: with no tools at all, with a search engine, and with a large language model like ChatGPT. The LLM group produced their essays faster and with less effort, but the EEG data told a more uncomfortable story. Their neural connectivity dropped, their memory of what they’d just written was weaker and their sense of ownership over the finished essay was noticeably lower than in the other two groups (Kosmyna et al., 2025). The researchers called the effect cognitive debt, and the analogy is apt: you feel like you’ve done the work, but you’ve borrowed the thinking from somewhere else and the bill comes later.

A randomised controlled trial published in Social Sciences & Humanities Open in 2025 found something equally noteworthy. Undergraduate students were split into two groups, one using ChatGPT as a study aid and one using only traditional non-AI study methods, then given a surprise retention test 45 days later. The traditional group scored 68.5 per cent. The ChatGPT group scored 57.5 per cent (Barcaui, 2025). Eleven percentage points of knowledge had quietly disappeared, even though both groups had covered the same material and felt equally well prepared. The ChatGPT group had treated AI as a cognitive crutch, and the moment the crutch was taken away their recall fell apart.

If that were the whole story, this article would be a straightforward warning to close the chatbot tab for good. But it isn’t. Because in 2025 and 2026 a separate body of research has shown that AI can be one of the most effective tutors ever built, when it’s used in a particular way.

A randomised controlled trial published in Scientific Reports in 2025 by a Harvard team compared a custom AI tutor to in-class active learning in an undergraduate physics course. The AI-tutored group didn’t just keep up with their classmates; they learned significantly more in less time and reported feeling more engaged and more motivated than the in-class group (Kestin et al., 2025). Around the same time, Google’s LearnLM team ran a UK schools trial of a Socratic AI tutor and found that students working with it were 5.5 percentage points more likely to solve novel problems on follow-up topics than those working with human tutors alone (Google DeepMind, 2025).

The pattern from across all of this research is consistent. When AI is used to hand out finished answers, learning suffers. When AI is used to ask questions, generate practice problems, give feedback on a learner’s reasoning and force them through the productive struggle of working things out, learning improves. The tool is the same. The technique is everything.

Answer Machine vs Tutor

A good way to think about this is to picture two different study sessions. In the first, the learner has a question. They type it into ChatGPT. ChatGPT gives them a polished answer. They read it, nod, paste it into their notes and move on. The whole thing takes ninety seconds. They feel like they’ve made progress.

In the second, the learner has the same question. They type it in, but this time they ask the AI not to give them the answer. They ask it to walk them through the problem step by step, asking questions along the way and only confirming when the learner has reasoned their way to the right answer themselves. The whole thing takes fifteen minutes. They feel like they’ve worked hard. And a month later, they can still explain the concept from memory.

That second mode is what cognitive scientists have been calling Socratic tutoring for the best part of two and a half thousand years, and it’s what the best human tutors have always done. The really useful finding from the recent AI research is that a well-designed AI tutor can do it too, at any time of day or night and with a level of patience no human can quite match. The catch is that you have to actually use the tool that way, and most learners don’t. The default behaviour of every popular chatbot is to be helpful, and helpful usually means giving you the answer.

This is the single biggest decision you make every time you sit down to study with AI. Are you using it as an answer machine or as a tutor? Because the difference between those two modes is the difference between just getting information and actually learning.

Six Techniques That Build Real Knowledge

Once you’ve decided you want to use AI as a tutor rather than an answer machine, the practical question is how. These six techniques are drawn from the cognitive science literature on what actually produces durable learning and all of them translate naturally into a chat interface.

  1. Get it to quiz you, not answer you. Retrieval practice (the act of pulling information out of your own head rather than putting it back in) is one of the best-evidenced study techniques in cognitive science. Open a fresh chat, tell the AI what topic you’ve just studied, and ask it to generate ten exam-style questions on that topic, one at a time, marking your answers as you go. Attempt each one from memory before checking anything. This is the closest thing to a free personal tutor that has ever existed.
  2. Make it Socratic on purpose. The default behaviour of every LLM is to give you the answer. You can override this with a single instruction at the start of your session: “You are my tutor. When I ask you something, do not give me the answer. Ask me a question that helps me work it out myself. Only confirm when I’ve reasoned my way there.” This one prompt changes the entire character of the conversation and it’s the closest you can get in a chatbot to the kind of Socratic dialogue the research has shown produces the best knowledge transfer.
  3. Use the Feynman technique. Pick a concept you’ve just studied and explain it back to the AI in your own words, as if you were teaching it to a beginner. Then ask the AI to find any gaps, vague bits or factual errors in your explanation. You’ll discover very quickly which parts of the topic you actually understand and which parts you’ve just been nodding along to.
  4. Ask for worked examples, then fade them. When you’re tackling a new procedure (anatomy of a movement, a programming calculation, a biomechanics problem) ask the AI to walk you through one fully worked example. Then ask it to give you a similar one with the last step missing for you to fill in. Then one with the last two steps missing. This is called example fading, and it’s one of the best-evidenced techniques for moving from understanding to independent application.
  5. Get it to generate plausible wrong answers. Multiple choice questions are only as useful as the wrong answers they offer. Ask the AI to write you a quiz where every wrong answer is a common misconception about the topic. Working out why each wrong answer is wrong is often more useful than identifying the right one.
  6. Build in spaced check-ins. Spaced practice (revisiting material at increasing intervals over time) is the other heavyweight finding from cognitive science. At the end of each study session, ask the AI to summarise what you covered so you can test yourself on it again next week. Better still, keep a simple list and come back to those topics on a deliberate schedule; you can even ask the AI to build that schedule for you. The point is to fight the forgetting curve, not to hope it doesn’t apply to you.
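If you want to see what that kind of schedule actually looks like, here’s a minimal sketch in Python. The interval values are an illustrative assumption (there’s no single “correct” set); what matters is that the gaps expand over time:

```python
from datetime import date, timedelta

# Illustrative spaced-repetition intervals, in days after the first
# study session. The exact numbers matter less than the expanding gaps.
INTERVALS = [1, 3, 7, 14, 30]

def review_schedule(topic, studied_on):
    """Return (topic, review date) pairs at expanding intervals."""
    return [(topic, studied_on + timedelta(days=d)) for d in INTERVALS]

for topic, due in review_schedule("Shoulder anatomy", date(2026, 1, 5)):
    print(f"{due.isoformat()}  revisit: {topic}")
```

A pen-and-paper version of the same thing works just as well: study a topic today, then pencil it into your diary for tomorrow, later this week, next week and next month.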

The Pitfalls Worth Knowing About

Even with the right techniques, there are a few things to watch out for when you’re studying with AI. None of them are deal-breakers, but knowing about them is what separates a confident learner from a complacent user.

False confidence is the first one. Because AI gives you answers in fluent, well-organised prose, it’s very easy to mistake that fluency for accuracy. Reading a clear explanation is not the same as being able to produce one. The fix is retrieval practice. If you can’t put the laptop away and explain it back from memory, you don’t know it yet.

Hallucinations are the second. Even the best models will occasionally invent a study, misattribute a quote or get a piece of physiology slightly wrong. In a technical field this is a huge risk. If the AI tells you the rotator cuff has five muscles, or that creatine is synthesised in the liver from four amino acids, you need to be the kind of learner who checks. Cross-reference anything important against a textbook, a peer-reviewed source or your tutor.

Cognitive offloading is the third. This is the one the MIT study was getting at. If you let the AI do the thinking for you often enough, the parts of your brain that normally do that thinking get less practice. The fix is to make sure the AI is asking you questions more often than it’s answering them. If your last five messages have all been the AI explaining things, switch the dynamic.

And the fourth pitfall is skipping the productive struggle. There’s a particular kind of effort, the not-quite-getting-it feeling that comes just before something clicks, that is what cognitive science calls a desirable difficulty. It’s uncomfortable and it’s also where most of the learning actually happens. If you reach for the AI the moment you feel that discomfort, you’re skipping the bit that builds the knowledge. Sit with the difficulty for a few minutes first. Try a couple of approaches. Then ask for a hint, not an answer.

Building It Into a Weekly Study Rhythm

None of this needs to be complicated. A practical weekly rhythm for any learner using AI well looks something like this. Three or four short study sessions a week, twenty to forty minutes each. Each session starts with retrieval (the AI quizzes you on last session’s material and you answer from memory) before it moves on to anything new. New material is worked through in tutor mode, with the AI asking questions rather than giving answers, and the session ends with you explaining the day’s concept back in your own words. Once a week, set aside a longer session to revisit topics from two and three weeks ago, because that’s when the forgetting curve really starts to dip.
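To see why that two-to-three-week check-in matters, it helps to look at the shape of the forgetting curve itself. Here’s a minimal sketch of the classic exponential model (a simplification; the stability value is an illustrative assumption, not a measured one):

```python
import math

def retention(days, stability=5.0):
    """Simplified Ebbinghaus forgetting curve: the fraction of material
    still recallable `days` after studying it, given a memory
    'stability' measured in days. Each successful review effectively
    increases stability, which is what spaced check-ins exploit."""
    return math.exp(-days / stability)

for d in (1, 7, 14, 21):
    print(f"day {d:2d}: about {retention(d):.0%} recalled")
```

Under this model, unreviewed recall falls to roughly a quarter within a week, which is exactly why the longer weekly session goes back over older topics rather than pressing on with new ones.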

Used like this, AI is one of the best learning tools that has ever existed. It’s patient, it’s available, it never gets tired and it can adapt to whatever your specific weak point happens to be. Used the other way, as an answer machine you ask questions to and copy-paste from, it will leave you feeling productive but remembering very little. The technology is the same. The result is what counts.

If you’re someone in study mode right now, this is worth taking seriously. The learners who treat AI as a tutor are going to come out the other side of their qualifications with deeper knowledge, better critical thinking and more confidence. The ones who treat it as an answer machine are going to come out with a certificate, a steep forgetting curve and not much else. You get to choose which one you’d rather be.

References

  • Barcaui, A. (2025). ChatGPT as a cognitive crutch: Evidence from a randomised controlled trial on knowledge retention. Social Sciences & Humanities Open, 12, 102287.
  • Google DeepMind (2025). AI tutoring can safely and effectively support students: an exploratory RCT in UK classrooms.
  • Kestin, G., Miller, K., Klales, A., Milbourne, T. and Ponti, G. (2025). AI tutoring outperforms in-class active learning: an RCT introducing a novel research-based design in an authentic educational setting. Scientific Reports, 15, 17458.
  • Kosmyna, N., Hauptmann, E., Yuan, Y.T., Situ, J., Liao, X.-H., Beresnitzky, A.V., Braunstein, I. and Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. MIT Media Lab, arXiv:2506.08872.

Ready to Put Smarter Study to Work?

If this article has got you rethinking how you study, our personal trainer diplomas are designed to support exactly the kind of active, structured learning the research is pointing to. You’ll get expert tutor support, practical assessments and a Student Desktop full of tools designed to help you build genuine expertise, not just memorise enough to pass an exam.

Whether you choose In-Person, Distance Study or our Live-Virtual classroom, you’ll cover the same evidence-based syllabus and gain the same CIMSPA-recognised qualification at the end of it. The only difference is how you fit it around your life.

Gym Instructor & Personal Trainer Practitioner Diploma™ – Distance Study, In-Person & Live-Virtual

Course Info

Get Started

View Dates