More than just answers: how AI is changing the way we learn

Jasper Naberman

AI in education: you’ve probably heard about it. Maybe you’ve even used ChatGPT or similar tools yourself. While these technologies can be impressive and helpful, not all feedback has been positive. Why does AI sometimes work very well, yet at other times fall short?

At StudyGo, our mission is to use smart technology to make education more personal and effective for every secondary school student. AI can be a powerful tool, but only when used the right way. In this blog, we explain how we are using AI to help students learn more deeply and effectively.

Why does AI produce mixed results in education?

Many students use AI tools like ChatGPT mainly to get quick answers or summaries. Convenient, of course, but not always effective. Research shows that students who use AI mainly as an “answer machine” tend to learn less deeply. This happens because they engage less actively with the material and are more likely to believe they understand something without truly grasping it. [1]

It’s like quickly Googling your homework answers: perhaps useful for a test, but not ideal for truly mastering the material.

StudyGo’s approach: using AI for active learning

At StudyGo, we do things differently. Our new AI tutor helps students engage actively with the material. Instead of giving straightforward answers, it asks targeted questions that encourage students to think for themselves. This is known as the Socratic approach.

The tutor asks thoughtful questions, lets students find their own answers, and guides them step by step through the material.
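To make the idea concrete, here is a minimal sketch of how a Socratic style like this can be enforced at the prompt level of a chat-based AI tutor. This is an illustrative example, not StudyGo's actual implementation; the instruction text and the `build_messages` helper are assumptions, and the messages follow the common system/user/assistant chat format used by most LLM APIs.

```python
# Hypothetical sketch: steering a chat model toward Socratic tutoring.
# Instead of instructing the model to answer, the system prompt tells it
# to respond with one guiding question per turn.

SOCRATIC_SYSTEM_PROMPT = (
    "You are a tutor. Never give the final answer directly. "
    "Ask one short guiding question that helps the student take "
    "the next step, then wait for their reply."
)

def build_messages(history, student_message):
    """Assemble the message list for one Socratic tutoring turn.

    `history` is a list of prior {"role": ..., "content": ...} dicts;
    the system prompt is always prepended so the Socratic constraint
    applies to every turn of the conversation.
    """
    return (
        [{"role": "system", "content": SOCRATIC_SYSTEM_PROMPT}]
        + list(history)
        + [{"role": "user", "content": student_message}]
    )

# Example turn: the assembled messages would then be sent to any
# chat-completion API of your choice.
messages = build_messages(
    history=[],
    student_message="What is 3/4 + 1/8?",
)
```

The key design choice is that the constraint lives in the system prompt rather than in each user message, so the tutor keeps asking guiding questions across the whole session instead of drifting back into answer-giving.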

Research shows that this Socratic method, even when applied in AI environments, helps students think more critically and retain information better. [2] [3] [4]

AI tutor: improved understanding but more ‘test’ chats

The AI tutor on StudyGo is always visible on the side of the screen. This encourages students to interact with it regularly, without losing focus on their assignments.

Recent analyses showed that students interacted with the AI tutor much more frequently than before. Those who actively engaged with the tutor showed stronger understanding of the material and returned to the platform more often. This suggests that students see the AI tutor as a valuable part of their learning process.

Of course, there’s still room to improve. Sometimes the open-ended structure of the tutor led to unfocused conversations that weren’t always productive. This insight helps us refine the tutor further to ensure that every interaction is meaningful.

We also saw an increase in ‘test’ chats: students simply trying the tutor out. While some of these exploratory chats were a bit frustrating for students, the majority reported a positive learning experience.

This tells us the Socratic method alone is not enough. It’s a great starting point, but there’s still plenty of room for improvement:

  • By making learning sessions more goal-oriented [5]
  • By giving the AI tutor knowledge about which learning content is relevant for a student [6]
  • By reflecting on what has been learned [7]
  • And much more!

What about our human tutors?

You might be wondering: “Does this AI tutor replace human tutors?” Good news: absolutely not! Our research shows that students continue to use human tutors just as often. The AI tutor is an addition, not a replacement.

Students regularly use the personal support of our experienced and carefully screened tutors, who are available live via chat every day between 15:00 and 22:00 to help explain things when needed.

Our human tutors remain essential, while the AI tutor offers extra support when students want to work independently or outside human tutor hours.

We’re just getting started!

We’re constantly conducting research to explore how AI can support education even better. Our ambition is clear: to make learning smarter and more personal, for every student.

Try our AI tutor now or follow our blog for more insights and updates.

References

[1] Bastani, H., Bastani, O., Sungu, A., Ge, H., Kabakcı, Ö., & Mariman, R. (2025). Generative AI without guardrails can harm learning: Evidence from high school mathematics. Proceedings of the National Academy of Sciences, 122(26), e2422633122. (link)

[2] Fakour, H., & Imani, M. (2025). Socratic wisdom in the age of AI: A comparative study of ChatGPT and human tutors in enhancing critical thinking skills. Frontiers in Education, 10, 1528603. (link)

[3] Favero, L., Pérez-Ortiz, J. A., Käser, T., & Oliver, N. (2024). Enhancing critical thinking in education by means of a Socratic chatbot. In ECAI’24: International Workshop on AI in Education and Educational Research (AIEER), October 19-20, 2024, Santiago de Compostela, Spain. (link)

[4] Pitorini, D. E., Suciati, & Harlita. (2024). Students’ critical thinking skills using an e-module based on problem-based learning combined with Socratic dialogue. Journal of Learning for Development, 11(1), 52-65. (link)

[5] Wang, F., Zhou, X., Li, K., Cheung, A. C. K., & Tian, M. (2025). The effects of artificial intelligence-based interactive scaffolding on secondary students’ speaking performance, goal setting, self-evaluation, and motivation in informal digital learning of English. Interactive Learning Environments. Advance online publication. (link)

[6] Levonian, Z., Henkel, O., Li, C., & Postle, M.-E. (2025). Designing safe and relevant generative chats for math learning in intelligent tutoring systems. Journal of Educational Data Mining, 17(1), 66–97. (link)

[7] Kumar, H., Xiao, R., Lawson, B., Musabirov, I., Shi, J., Wang, X., Luo, H., Williams, J. J., Rafferty, A., Stamper, J., & Liut, M. (2024). Supporting self‑reflection at scale with large language models: Insights from randomized field experiments in classrooms. arXiv preprint. (link)