AI For Your Open LMS In 2019: How To Frame An Artificial Intelligence Problem

AI Competency Roadmap

Artificial Intelligence (or AI) continues to capture the imagination of educators everywhere, and now Human Resources professionals seem similarly hyped. There is still a lot of doubt about AI's disruptive potential in the workplace. But that has not stopped AI-related skills from reaching the top of 2019's most in-demand skills.


As we continue to make sense of the phenomenon, the challenge becomes clear: to make a hard science problem manageable without diminishing its complexity. At first, the most basic forms of AI will help us with menial things. But even at this stage, a healthy dose of deep thinking helps things move along.

Follow our series: Competency Roadmap to AI

Meanwhile, everywhere else in the universe

The rate of development in AI is fast. Almost news cycle fast. After we began discussing a basic AI framework for a learning workflow, a new record showed up: GPT-2 became the most powerful artificial neural network (ANN) system for language generation. The icing on the cake was the Open Source nature of the code.

How To Train Open Source Artificial Neural Network GPT-2 To Do Your Language Homework For You

Except that the code was not revealed in full. And that the backers of OpenAI, the organization leading the development, are all big tech.

Trying to incorporate every new development into a cohesive framework is challenging. Even more so when the goal is to make it simple and consistent. It is tempting to ignore what is happening elsewhere and make a good enough introduction. But it will end up being out of date when it’s time to get your hands dirty.

How do we sort out this problem? In our case, by focusing on the fundamentals of a system.

Your Practical Guide To 2019 AI In Teaching And Learning [Updated]

We also benefit from a key fact: AI is a classical thought experiment, or at least it resembles one. It deals with age-old issues, like actions and language and how they come together. It asks whether we can describe situations in perfect detail for someone else, or for a machine.

(In the early twentieth century we learned this to be impossible or, more accurately, “undecidable.”)

It’s about whether we can use our limited language and skills to empower another being to deal with reality. One we do not completely understand ourselves.

In a way, the problem of “AI onboarding” is the same problem of education. A problem dating as far back as ancient Greece, and most likely much further back than that.

The Frame Problem (Or why a little Philosophy goes a long way)

Before the following short, hopefully welcome lesson on logic-based AI, we should note that a relevant news roundup comes at the end of this article.

Making a guide both deep and applicable, that tries to stay on top of current AI events, takes a lot of iteration. But not of the kind you are most familiar with.

From a Project Management perspective, agile approaches help you refine a route. In this scenario, you have a given starting and end point.

There are countless technologies, predating agile approaches by decades, that tackle the optimization issue. Most of them belong to the field of Operations Research, a field responsible for a lot of our innovation and increasing efficiencies.

If education were an optimization problem, as a teacher you would know all the factors involved in the learning process and have quantities for each variable. To create perfect learning interventions, all you would have to do is input those values into a calculator with perfect knowledge of how the variables affect one another, and it would output the best educational experience possible.
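To make that “calculator” concrete, here is a minimal sketch. Everything in it is hypothetical: the variables (study hours, feedback sessions), the formula relating them, and the candidate values are invented purely for illustration.

```python
# Toy sketch: if teaching were a fully specified optimization problem,
# finding the "perfect" intervention would be a mechanical search.
# The variables and the outcome formula below are entirely made up.

def predicted_outcome(study_hours, feedback_sessions):
    # Pretend we have perfect knowledge of how the variables affect
    # one another: this made-up model peaks at 6 hours and 3 sessions.
    return 100 - (study_hours - 6) ** 2 - (feedback_sessions - 3) ** 2

# Enumerate every candidate intervention and pick the best one.
candidates = [(h, f) for h in range(0, 11) for f in range(0, 6)]
best = max(candidates, key=lambda c: predicted_outcome(*c))
print(best, predicted_outcome(*best))
```

With complete information there is nothing intelligent happening here: it is exhaustive arithmetic, which is exactly why the next section's distinction matters.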

But Optimization is not AI

At least in theory, an AI could optimize a problem further, if that is mathematically possible. In general, though, the answer an AI gives to a problem whose variables we know is the same one a more basic machine would give.

More sophisticated machines shine when the problem is more sophisticated. In this case, it means less certainty. We can have, as before, a starting point and an end goal. But we may need to move forward ignoring key information:

  • The initial or current state of our variables.
  • The stages involved in the process.
  • The “transformation” functions associated with each stage, which tell us how the variables are affected.

How do we, or rather, our machines, deal with missing information? In short, they do not. They perform a relentless and dreary (for humans) process of trial and error, which only stops when they reach an outcome falling within the boundaries of what we find acceptable. This process is better known as Machine Learning (ML) and is responsible for most of the excitement around algorithms today.

If education were a ML problem, as a teacher you would not have to know everything about what happens during the learning process, as long as you can tell that an educational goal has been achieved, or that whatever process is in place has achieved its best possible outcome within the time and other resources allotted.
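A minimal sketch of that trial-and-error loop, with invented names and numbers: the `score` function stands in for the unknown learning process, which the loop never inspects; it only observes outcomes and stops once one lands within an acceptable boundary.

```python
import random

# Sketch of the relentless trial-and-error process described above.
# The loop never looks inside score(); it only observes outcomes.
# All names, ranges, and thresholds are illustrative.

random.seed(0)  # make the run repeatable

def score(param):
    # Hidden relationship, unknown to the learner: outcomes peak near 7.
    return -abs(param - 7)

ACCEPTABLE = -0.5  # boundary of what we find acceptable
best_param, best_score = None, float("-inf")

for trial in range(10_000):
    guess = random.uniform(0, 10)   # trial...
    outcome = score(guess)          # ...and error
    if outcome > best_score:
        best_param, best_score = guess, outcome
    if best_score >= ACCEPTABLE:    # stop inside the acceptable boundary
        break

print(f"accepted param {best_param:.2f} after {trial + 1} trials")
```

Real ML algorithms replace blind guessing with guided updates (gradients, tree splits, and so on), but the stopping logic, iterate until the outcome is acceptable or the resources run out, is the same.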

But Machine Learning is not AI

ML-type algorithms have been with us for a long time. They are now part of most of the technologies we interact with on a daily basis. Due to their probabilistic and iterative nature, they are not always accurate. They can improve over time, but becoming 100% exact is never guaranteed: it depends on the algorithm's design and the kind of feedback it receives.

The next natural leap is without a doubt the most challenging of all. From an information point of view, it is not just about unknown quantities, but about unknown links and functional forms. From a project management point of view, it is not just about unknown steps, but about an unknown process altogether. From a philosophical point of view, it may or may not be about finding a solution to the undecidability problem. But since there is no formal way to overcome this with our current understanding of math and logic, the future development of AI relies on two vectors:

  1. We will lead the development of AI by working on expanding its “Frame,” scope or field of action.
  2. The development of some new kind of logic, with no built-in paradoxes, that allows an AI to expand its scope endlessly and autonomously. (This is mostly the stuff of science fiction.)

Conclusion: What is AI?

At this point, we can finally take a more decent stab at defining Artificial Intelligence:

An automated system capable of performing tasks under incomplete information, based on a given problem framework. The system is capable of making guesses about initial and current states of the framework’s variables. It is not reasonable to expect the system to expand beyond its given framework.

In a way, feeling overwhelmed at the magnitude of the problem is a promising start. It will help you understand why AI is not as “around the corner” as many make it out to be. At the very least, it will help you realize how much AI-branded tech is really not AI, but rather a still-welcome iteration of ML or next-generation algorithms.

For now, let’s conclude by updating our “Frame” with the issues we have brought to light today:

  • AI-ssumptions. A real AI should not just help you achieve productivity gains; it should lead to a better understanding. Unfortunately, the fast development and adoption of ML algorithms is compromising their accountability, which, besides its obvious ethical and legal consequences, also compromises the production of general-purpose “machine insight.”
  • AI-ngredients. The technologies used when building next-gen algorithms are in desperate need of a standard spec sheet. This is particularly poignant when we consider the Frame Problem. How is the AI tool allowing us to expand the scope of a problem?
  • mAIndset. When teaching about AI, we have to be critical and clear about the trade-off. “Easy” problems are dull. “Hard” problems are exciting, but impossible without the easy ones first. For those looking to make a prosperous and fulfilling career out of AI development, there is only one roadmap: a very, very long (decades long) string of “easy” problems solved in sequence.

Open AI news roundup

  • OpenAI has released Neural MMO, a “reinforcement learning” (not AI) environment that enables “massive multi-agent” creation. Its main focus is online multiplayer games, which in the past have been used for educational purposes.
  • DeepMind, one of the leading research hubs in machine learning (not AI), is launching TF-Replicator, a Distributed Machine Learning library to help scale TensorFlow models (also not AI) on cloud containers. The library is expected to speed up the time needed to train models that require vast datasets.

This Moodle Practice related post is made possible by: eThink Education, a Certified Moodle Partner that provides a fully-managed Moodle experience including implementation, integration, cloud-hosting, and management services. To learn more about eThink, click here.