And what a chatbot LMS and Open Ecosystem integration spells for the future of AI in learning and education.
As cautious as we must always be when drawing parallels between robots and human brains, or between machine learning and real learning, there is no denying that the AI-heightened times we are living in are challenging the nature of learning in many respects:
- On its goals, or where I would ultimately draw my satisfaction as an educator;
- On its methods, or whether we are properly reinforcing positive processes in education: discovery, willingness to experiment, curiosity, interest in how learning itself works (metacognition);
- On its understanding of the student as a social, psychological and cognitive entity.
It is on that last item, though grounded in all three, where getting acquainted with the many ways an engineer conceives a chatbot —and to some extent, any application— has put into doubt large swaths of my experience.
And for that, I am grateful.
Getting your hands dirty: Intents, Utterances, Slots, Prompt, Fulfillment
Figuring out how to build a chatbot today is a matter of figuring out to which platform you are willing to pledge allegiance. I am referring to the structure and internal logic. I have been looking mainly into Google’s Dialogflow and Amazon Lex. These and many others —Microsoft Bot, and a swarm of small open source alternatives— can usually be ported without hassle to existing or custom messaging channels: Facebook Messenger, WhatsApp, Telegram or bespoke mediums.
They share several of the same key elements, but along the way I have found philosophical differences that require independent study. Given its market share and its ease of integration with systems hosted on the same platform, including the LMS, the explanations that follow lean toward the Amazon Lex paradigm.
Nevertheless, for now, the following are the essential elements present in any modern chatbot platform. You can give them some thought, and even start writing them down, before facing the interface. As we find time and again in learning technologies, these platforms are for the most part not designed to help you envision new, innovative experiences; they assume you’ve come up with one already.
So for now, you’re better off jotting these down and taking them outside.
Intent
Most chatbot development tutorials begin by focusing on the Intent. It is actually one of the few words all platforms share, perhaps because it is a general term in usability design for understanding what your learner wants to get out of the technology. Definitions, however, can often feel frustrating and vague. Fearing I will not fare much better, I interpret an intent as:
A programmable representation of whatever your learner wants to do.
Two basic things to highlight here:
- Whatever your learner wants to do must be something the chatbot has been taught to recognize in advance. In other words, the learner’s free will is limited to what the chatbot knows how to please.
- Generally speaking, chatbots cannot “push” or be the ones to kick-off a conversation. Interactions must always be initiated by the human.
From a design perspective, an intent could be seen as the counterpart of an affordance. While an affordance is what a tool enables your learner to do, an intent refers to what they expect the tool will do for them.
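As a thinking aid, an intent can be sketched as a small data structure. This is a platform-neutral sketch with hypothetical names and fields, not any vendor’s actual API:

```python
# A minimal sketch of an intent as "a programmable representation of
# whatever your learner wants to do". Names and fields are illustrative.
from dataclasses import dataclass, field

@dataclass
class Intent:
    name: str                                        # what the learner wants to do
    utterances: list = field(default_factory=list)   # phrases that activate it
    slots: list = field(default_factory=list)        # data needed to fulfill it

# The chatbot can only please intents it already knows about:
book_tutoring = Intent(
    name="BookTutoringSession",
    utterances=["I need help with algebra", "book a tutor"],
    slots=["subject", "date", "time"],
)
```

Writing intents down in this shape, even on paper, forces you to name the goal, the phrases that express it, and the data it requires before touching any platform.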
Utterance
Utterances are commonly mistaken for Intents, given their proximity. But whereas an Intent is allowed to be a general description of your learner’s goal within a system, the utterance is the literal expression from the user to the machine. In a chatbot, it’s a textual message. In a voice assistant, a sentence said out loud.
Generally speaking, after your chatbot receives an utterance from your learner, it can “activate” the intent. Usually an intent has several different utterances that activate it. It can be very taxing to think of the many ways in which lots of different users might ask for the same thing. This is where the machine learning, and specifically the Natural Language Processing (NLP, or NLU for “Understanding”) algorithms of a given platform, comes in.
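The activation step can be illustrated with a toy matcher that scores intents by word overlap with their sample utterances. A real platform’s NLU generalizes far beyond this, but the input/output shape is the same; intent names and phrases here are made up:

```python
# A toy utterance matcher, standing in for a platform's NLU unit.
def match_intent(utterance, intents):
    """Return the name of the intent whose sample utterances best overlap."""
    words = set(utterance.lower().split())
    best, best_score = None, 0
    for name, samples in intents.items():
        for sample in samples:
            score = len(words & set(sample.lower().split()))
            if score > best_score:
                best, best_score = name, score
    return best

intents = {
    "BookTutoringSession": ["book a tutoring session", "I need a tutor"],
    "CheckGrades": ["show my grades", "what did I score"],
}
match_intent("can I book a tutor for tomorrow", intents)  # → "BookTutoringSession"
```

The point of the sketch is the division of labor: you supply a handful of sample utterances per intent, and the matching algorithm stretches them to cover phrasings you never wrote down.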
Slots (Amazon Lex) or Entities (Google Dialogflow)
These refer to the data that is generated or stored during the conversation. Think of the information your chatbot needs in order to accurately understand and eventually “Fulfill” your learner’s desires, if they fall within scope.
Most platforms group these entities by origin or association:
- System entities: Basic data such as date and time, or settings defined previously.
- Learner entities: It could range from basic facts like age and gender, to highly specific attributes of their learning preferences. Here algorithms could play a role by figuring out many of these, which aren’t always obvious.
- Chatbot entities: They parallel the learner’s entities, and depend on how much character you intend to give your bot. (Which you should. A lot. More on that later.)
The slots can be required, in which case the conversation will not move to the next stage until your chatbot gets all the information it needs from your learner. Or they can be optional, in which case you can set a default value and maybe a series of conditions in which they become required.
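The required/optional distinction can be sketched in a few lines. Slot names and the default value are hypothetical:

```python
# Required slots block progress until filled; optional ones fall back to a default.
def missing_required(slots, filled):
    """List the required slots the learner has not provided yet."""
    return [s["name"] for s in slots
            if s["required"] and s["name"] not in filled]

def resolve(slots, filled):
    """Final slot values: learner-provided where available, else defaults."""
    return {s["name"]: filled.get(s["name"], s.get("default"))
            for s in slots}

slots = [
    {"name": "subject", "required": True},
    {"name": "date",    "required": True},
    {"name": "format",  "required": False, "default": "video call"},
]
missing_required(slots, {"subject": "algebra"})  # → ["date"]
```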
Prompt
The dialog with your learner is where everything comes together for your chatbot. You build an internal logic, which the chatbot follows to understand what your learner needs. This internal logic is your understanding of what their general need looks like. But to please your learner, you need to be able to ground that logic.
In practice, the chatbot will request information from your learner through prompts. Usually (but not necessarily) framed as questions, their goal is to fill out all the slots required to complete the task.
Generally speaking, there is a trade-off here:
- Make very specific requests so your learner can provide straight answers. This makes the chatbot’s job of completing the slots easier, but it can be a dull experience for your learner, especially if it’s something that will happen regularly.
- Ask broader questions, or just hint at what the chatbot needs, and provide several ways to request information. This could make the experience more entertaining or whimsical, but it increases the complexity of the logic and potentially the computing power required. And if you are too vague, you could end up confusing and frustrating them.
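The “very specific requests” end of that trade-off can be sketched as a loop that asks one direct question per unfilled slot. Prompts and slot names are illustrative; a broader, more conversational strategy would replace this with logic that hints at, or bundles, several slots at once:

```python
# One direct prompt per unfilled required slot: easy for the chatbot,
# potentially dull for the learner.
PROMPTS = {
    "subject": "Which subject do you need help with?",
    "date": "What day works for you?",
}

def next_prompt(required_slots, filled):
    """Return the next question to ask, or None when every slot is filled."""
    for slot in required_slots:
        if slot not in filled:
            return PROMPTS[slot]
    return None

next_prompt(["subject", "date"], {"subject": "algebra"})
# → "What day works for you?"
```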
Fulfillment
Fulfillment is the ultimate goal of the chatbot. A fulfillment is both the satisfaction of a learner’s intent, and the measures the chatbot takes to make sure this is the case. Some fulfillments are achieved with complete certainty, but as the chatbot evolves, it will have to make decisions on the status of a fulfillment based on statistical computations.
Here we see everything come together. The learner initiates an intent with the chatbot with the expectation that it will be fulfilled. The chatbot’s goal is to identify the intent (or intents) from your learner’s utterances, figure out the entities required to fulfill it, engage with the learner in an amenable way that allows it to gather them all, and finally deliver.
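The whole loop, highly simplified, can be put in one turn handler. Everything here is a toy stand-in: the keyword check stands in for NLU, and the intent, slots and wording are invented for illustration:

```python
# One conversational turn: identify the intent, gather slots, then fulfill.
REQUIRED = {"BookTutoringSession": ["subject", "date"]}

def handle_turn(utterance, session):
    # 1. Identify the intent (toy keyword match standing in for NLU).
    if session.get("intent") is None:
        session["intent"] = ("BookTutoringSession"
                             if "tutor" in utterance.lower() else None)
        if session["intent"] is None:
            return "Sorry, I don't know how to help with that yet."
    # 2. If a slot was pending, treat this utterance as its value.
    pending = session.pop("pending", None)
    if pending:
        session["slots"][pending] = utterance
    # 3. Prompt for the next missing slot, or fulfill.
    for slot in REQUIRED[session["intent"]]:
        if slot not in session["slots"]:
            session["pending"] = slot
            return f"What {slot} would you like?"
    return f"Done! Booked {session['slots']['subject']} on {session['slots']['date']}."

session = {"slots": {}}
handle_turn("I need a tutor", session)  # → "What subject would you like?"
handle_turn("algebra", session)         # → "What date would you like?"
handle_turn("Friday", session)          # → "Done! Booked algebra on Friday."
```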
Context or Session (advanced, optional)
When the development of your chatbot expands across several intents, it is very likely that similar or identical prompts and slots will get mixed up. Chatbot platforms have methods to stay within the context of an intent, be it for a given period of time or until certain conditions are met.
In the meantime, you can just make sure the parts of each intent are unique, both for the chatbot and your learner.
An ‘agile’ chatbot creation approach in practice
While the description above looks like a linear, A-to-B path, it might be good to challenge that notion, both in terms of the chatbot creation process and the interaction itself.
- You can come up with intents first, but it does not have to be that way. A user-centered designer, for example, would defend starting with utterances, prompts or even slots, and then tying them to intents and the other components. And that would be okay. Utterances, in fact, could lead to the identification of new intents requested by your learner.
- The dialog that takes place in the process of intent fulfillment need not be linear either. If you are up to the challenge, it could entice a user towards more intents. In learning, this is one of the most promising methods I’ve seen to model and research curiosity. This, of course, has consequences for your prompts, and will require you to consider contexts.
- Finally, a hard truth: No matter the platform you choose, creating a chatbot for learning amounts to having lots of ongoing conversations with learners. In practice this means you must continuously be thinking of ways to enhance and expand the Intents, while refining the fulfillments and their success metrics. If you are putting effort into a personality, the prompts must also seek to be more natural. You should also work under the principle that, no matter how much effort you’ve put into the sophistication of your chatbot, it will never be a replacement for human-mediated interaction.
A word on the advantages of a good compelling storytelling framework for your chatbot
I would be remiss if I did not take advantage of the basic building blocks of a chatbot to introduce you to a simple but functional overview of storytelling and learning. After all, for an education content creator —and for many reasons a marketing content creator as well— chatbots are a fascinating challenge in interactive storytelling.
Gone are the “Choose Your Own Adventure” days. In the age of personalization as a promise yet to be fully satisfied, a true digital interaction designer cannot possibly be content with a click-through game that “branches out” after the learner clicks on either A, B or C.
We might have been primed by decades of pop culture, but the reality is that a growing majority of humans is ready to establish direct, real, deep and ongoing interactions with artificial beings. Life is full of small hassles. It just doesn’t feel right that at the other end of the line there is a human completing menial work for us. And for those in the space of learning technologies, few things deserve our concern more than bringing the rest of the world into the opportunities and wealth that Industry 4.0 has to offer, using the most powerful mechanism to our avail: Storytelling.
Resources to take things further
An AI Assistant for Moodle, soon in a window corner near you?
Is the chatbot gaining a second wind for the Moodle LMS? After separate and short-lived attempts by Farhan Karmali and Catalyst IT’s Matt Porritt, finally someone at Moodle HQ is on the case. Lead Data Scientist David Monllao has added an issue on the Moodle Tracker: MDL-65767, “An AI Assistant for Moodle.” While many details are up for discussion (part of the purpose of the Tracker issue), it will most likely incorporate an open source NLU unit in Moodle, and will be connected to the Learning Analytics initiatives in which Monllao is involved. You can take a look at the tracker, or see detailed development progress at his GitHub, where you can also contribute.
Moodle forums dwellers weigh in
They have been for a while, actually.
A discussion started in July 2018 considers several aspects of an LMS chatbot, starting with the always healthy questions “What for?” and “Why, if there is already a better solution in place?” But skepticism is by no means the only valuable perspective or resource to be found in the conversation. It is well worth a look, and worth keeping going, as it will no doubt influence upcoming developments.
IntelliBoard’s Learning Intelligence Search Automation readies for 2.0
The top choice among analytics dashboards and Moodle Premium Integrators is about to debut LISA 2.0. Available only to “Level 5” premium IntelliBoard subscribers, it will expand on the quick generation of reports and charts from the dialog box by suggesting reports users might be interested in, further incorporating support and self-guidance, and offering more detailed customization options. LISA and LISA 2.0 are available for Moodle, Instructure Canvas and Blackboard Learn.