How can you integrate AI with learner interaction?
AI in L&D isn't about replacing course creators—it's about giving learners personalised practice environments. This article explores using Custom GPTs to create interactive, context-specific scenarios where learners can practice new skills and receive real-time feedback, all while staying aligned with your course content.
What am I currently seeing with AI in learning & development?
As with many other industries, I see learning & development trying to find where AI can fit within its workflow. How can it adopt AI to improve its capacity as a business function? I am seeing many answers to this, and many strong opinions. Is AI going to take our jobs?
Should we be using AI to write our courses? I see a lot of different approaches during my time networking and browsing LinkedIn. One I am seeing a lot is tools where you can upload any process documents and an end goal, and the tool will spit out a fully built learning solution. Another is AI integrated within an LMS (Learning Management System): instead of assigning the learner a bank of courses, let them simply describe the problem they are having with a process or career position, and let the AI suggest learnings that would help solve it. This is a very interesting idea and definitely worth exploring, but not the subject of this article.
What does it mean for a learner to interact with a course?
I would like to explore what it means for a learner to be invested in a piece of learning. What does it mean to interact with a course? What does it mean to be engaged by the learning? Let's start with the interaction side. To interact with a piece of learning brings the experience from passive to active. The learner is doing something, rather than sitting back, reading and clicking 'Next Slide'.
In traditional eLearning this usually takes the form of accordions, click-throughs, scenarios, 'gamification' features built into the course, and so on. These certainly add a level of interaction to the learning, no question about it. However, if they are done purely for the sake of having interaction, the learner experiences them as friction in the way of getting through the course as quickly as possible. This is where engagement comes in.
What does it mean for a learner to be engaged with a course?
For a learner to be engaged with an eLearning means the course occupies their attention. They need to be involved in what they are doing. They need to want to complete it. In my opinion, this is one of the most important parts of the learning process: communicating to the learner *why* they should want to complete this particular eLearning. What is the benefit to them? How will this make their job easier? What is the benefit to the business? You need to demonstrate how the course is highly relevant to what they do on a day-to-day basis. If these questions are answered properly early on in the training, you will earn the learner's attention far better than any interaction will.
What problems do I see with interaction and engagement?
In my experience, interaction and engagement are often used interchangeably. The question I am asked most by stakeholders or LMS administrators is "Can we make sure this course is interactive?" I see this as the wrong question to ask. A more impactful question I always steer them towards is "Does this eLearning solve a specific business problem?" And if I have done my job correctly, the answer is always "It certainly should. Let's deploy it and test."
Now, back to the question of this article: how can you integrate AI with learner interaction?
How can AI be utilised to enhance this?
How I have been utilising AI in my eLearning courses is by creating instances where the learner can try out what they are learning in a personalised environment that gives feedback in real time. They can interact with the course content itself and have the course reply to them. They can ask the course questions if they don't understand something, and it will answer them straight away. If programmed correctly, the AI tools (I have mainly been experimenting with custom GPTs from ChatGPT) can help the learner fully understand, practice, and implement what they are learning.
What are the benefits to the learner?
Utilising this properly, the learner gets a much more active experience. The course is their course, rather than just one that has been made for many other learners. Let’s look at an example. Let’s say I was creating a course on conflict resolution skills, and for better or worse, the format had to be an eLearning. This format can be great for teaching the learners the skills and techniques needed to be successful in conflict resolution, but it can’t give them the practice in realistic scenarios that the real world can.
That is where a well-crafted custom GPT can help. It can be programmed to provide random but highly relevant scenarios that the learner may face in their role, and ask them how they would go about resolving them. It can then provide feedback on the learner's response, in relation to the content they have learned in the course.
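For readers who want to go beyond the no-code custom GPT builder, the same pattern can be sketched against a chat-style API by assembling a system prompt from the course material and audience context. This is a minimal illustration only: the course summary, audience description, and function name below are placeholder assumptions, not content from any real course.

```python
# Sketch: assemble the chat messages for one scenario-practice turn.
# The course summary and audience text are placeholders, not real
# course content.

def build_practice_messages(course_summary: str, audience: str) -> list:
    """Build a message list asking the model to present one random,
    role-relevant conflict scenario and then await the learner."""
    system_prompt = (
        "You are a practice partner for a conflict resolution course.\n"
        f"Target audience: {audience}\n"
        f"Course content:\n{course_summary}\n"
        "Present one realistic conflict scenario the learner might face "
        "in their role, then wait for their response. When they reply, "
        "give feedback based only on the course content above."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Give me a scenario to practice."},
    ]

messages = build_practice_messages(
    course_summary="1. Stay calm. 2. Acknowledge the concern. 3. Agree next steps.",
    audience="Customer support agents at a software company",
)
# These messages would then be sent to a chat completion endpoint;
# no API call is made in this sketch.
```

Conceptually, this mirrors what the custom GPT builder does for you with its Instructions and Knowledge fields: context, content, and behaviour are all set up front, and the learner only ever sees the conversation.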
Can this be done at a low cost?
A question stakeholders ask me whenever I mention I am trying a new tool, technique, or strategy is “What will the cost be to get this working for us?” And this is a valid question in any circumstance. In terms of the CustomGPTs I have been creating, the only cost involved is a ChatGPT Plus or Enterprise plan.
So, I pay for the Plus plan independently and have been utilising this professional account in my client work. I have, so far, run into no access or data issues, as long as I ensure that I select ‘No conversation data in your GPT to improve our models’ and that all company- or individual-specific information is redacted in every circumstance. A pretty low cost for such a powerful learning tool, in my opinion.
Another consideration for cost is the time commitment in creating the CustomGPTs. Due to their highly targeted nature, you would need to create one per use case. Each one doesn’t take long to build, but even though the monetary cost is low, the administration and creation time can add up quickly, so the return on investment will need to be determined on a case-by-case basis.
How can you ensure the AI doesn't lead the learners away from the learning?
A risk that is apparent with the current state of AI is its tendency to hallucinate when giving answers. This is clearly something we want to avoid within our learning tools. And how can you ensure that any answers or feedback the tool gives the learner are relevant to what they have just learned? It would be unfair on the learner to receive feedback based on things that are not present in the learning. How I have combated this is by creating a raw copy of the course content in plain text (this is the point at which I remove any sensitive company or personal information). This document is provided to the CustomGPT with instructions that all responses directly reference the content within it. The specific instruction I use is below:
“Evaluate my response using only the techniques and ideas included in the course content. Never introduce, assume, or recommend strategies that are not explicitly covered. If I mention something outside the course, you may flag it as outside the scope but still provide feedback on how it was used.”
I have found this prompt to work well so far: it enforces strict adherence to the course content while allowing for the fact that learners may try something different. The CustomGPT can respond to that, while flagging it as not being within the course.
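If you were wiring this up through an API rather than the CustomGPT builder's Instructions field, the same grounding could be sketched by combining the plain-text course export with the instruction quoted above into a single system prompt. The course text and function name here are illustrative placeholders.

```python
# Sketch: bind all feedback to a plain-text copy of the course.
# COURSE_TEXT is a placeholder; in practice it is the redacted
# plain-text export of the course described above.

GROUNDING_INSTRUCTION = (
    "Evaluate my response using only the techniques and ideas included "
    "in the course content. Never introduce, assume, or recommend "
    "strategies that are not explicitly covered. If I mention something "
    "outside the course, you may flag it as outside the scope but still "
    "provide feedback on how it was used."
)

def grounded_system_prompt(course_text: str) -> str:
    """Prepend the course document to the grounding instruction so the
    model's feedback stays within the course content."""
    return (
        f"Course content:\n{course_text}\n\n"
        f"Instructions:\n{GROUNDING_INSTRUCTION}"
    )

prompt = grounded_system_prompt(
    "Technique 1: active listening. Technique 2: de-escalation language."
)
```

In the no-code builder, the course document goes into Knowledge and the instruction into Instructions; this sketch simply shows the two pieces side by side in one prompt.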
Are these AI interactions general or relevant to the learner?
As we discussed earlier, relevance is key, and these AI tools should be no different. At minimum (if you are utilising a CustomGPT) it should be relevant to the course that the learner has just taken. We looked at how this can be done above. When getting a learner to practice something they may face in their roles, I believe making their training on it as relevant and contextual as possible is key.
Let’s look at a conflict resolution training I built, for example. The goal was to equip learners with the skills and techniques to deal with conflict situations calmly, professionally, and in a way that left all parties happy to move forwards. The AI tool supporting this training certainly could have been effective with general conflict scenarios, but there are two levels of relevancy above this. The first improvement is to explain to the CustomGPT who the target audience is, what the business does, the types of customer, and the problems and challenges usually faced in these scenarios. All of this together vastly improves the relevance and prepares the learner for the ‘real thing’.
The next level of improvement comes from getting real transcripts of calls where there were conflict situations. Once any sensitive information is redacted and the transcripts are provided to the CustomGPT, the learner is practicing on examples that have actually happened to others in the same role, and that they are likely to face in the future. Incredibly relevant. So, when creating tools like these for yourself, consider whether there is any real data, scenarios, or information you can use to inform the relevance of your tools.
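As a minimal sketch of the redaction step, a couple of regular expressions can strip the most obvious identifiers from a transcript before it is uploaded. The patterns and sample text below are illustrative assumptions; names, account details, and company-specific information still need a manual review pass.

```python
import re

# Sketch: strip obvious identifiers (emails, phone numbers) from a
# call transcript before providing it to the CustomGPT.
# This does NOT catch names or company-specific details; a human
# review pass is still required.

def redact_transcript(text: str) -> str:
    """Replace email addresses and phone numbers with placeholders."""
    text = re.sub(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+", "[EMAIL]", text)
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)
    return text

sample = "Call me on 0161 496 0000 or email jo@example.com."
cleaned = redact_transcript(sample)
# → "Call me on [PHONE] or email [EMAIL]."
```

Running the transcripts through a pass like this first keeps the time cost of manual redaction down, while the human review catches what the patterns miss.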
Can the results of these be measured?
This is the question I am currently exploring: whether we can directly measure the impact of the specific AI tools we are utilising, as distinct from the learning solution as a whole. I have created a three-level measurement & evaluation framework which I have tested and implemented to great success, but it was built to measure the impact of a learning implementation as a whole, rather than specific parts of it.
