Google Maps: The Ideal Performance Support

I’m a typical guy in that I don’t ask for directions, but it’s not a macho thing. I just know that I won’t understand them. I’m a visual person. I need to see the big picture, the context. I need a map. But that’s just me.

I wrote this post about the advantage of maps over directions as a metaphor for learning, but who am I to tell people how to access information? As I prepare for the Performance Support Symposium in Austin next week, I am thinking about the maxim of Performance Support: “Get people what they need and get out of their way.” I keep wondering if there is a way to let people access step-by-step directions AND see the bigger picture. Bob Mosher calls this the flipped pyramid. In a formal classroom, there is an inverted pyramid: the grand concepts are on top and you drill down until, at the very end, you reach the instructions for the task at hand. Performance Support reverses this picture. You start with what is needed to get the task done, and then you let the user choose to drill down to the deeper concepts.

I’ve been trying to think of an example of how this would work and it hit me: Maps. Specifically online maps. Google Maps and its competitors are the ideal example of what Performance Support should be:

  • It lets you switch back and forth from maps to directions and from individual steps back to the map.
  • You can access it at the moment of need. It can be on your desktop at home when planning a trip or on your mobile device when you are lost.
  • You can dive deep into the detail or zoom out to get a broader perspective.
  • You can link to other resources like the menu of a restaurant.
  • You can contribute by uploading photos and commenting on sites.
  • You can embed interactive maps into other applications (see the sketch after this list).
  • Everyone understands how to use it (is this because of its ubiquity or its straightforwardness?).
  • It uses the affordances of the mobile device (most obviously the GPS). You can even use the device’s compass so you can see which way you are facing.
  • It has a “Show me” function in the form of Street View.
  • It warns you about challenges by showing traffic patterns.
  • It gives you options for completing the task with optional routes and optional modes of transportation.
  • It tells you how long the task will take.
  • Most importantly it gives you information that you can act on immediately.
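Take the embedding point above. Here is a minimal sketch, assuming the Google Maps JavaScript API script is already loaded on the page; the element id and the Austin coordinates are placeholders I chose, not anything prescribed:

```ts
// A sketch only: assumes the Maps JavaScript API <script> tag is on the page.
declare const google: any; // supplied at runtime by the Maps JavaScript API

// Drop an interactive map into any page element.
const map = new google.maps.Map(document.getElementById("map"), {
  center: { lat: 30.2672, lng: -97.7431 }, // Austin, TX (placeholder)
  zoom: 12,
});
```

A few lines of configuration and another application has a full interactive map, with all of the drill-down behavior listed above.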

These features of online maps could be added to any Performance Support solution to make it more robust, and they are a good way to demonstrate the power of Performance Support.

I’m looking forward to seeing where this goes…as long as I don’t have to ask for directions.

What to Do About MicroLearning?

Ah, we finally have a new buzzword. I got the standard email yesterday: “We’ve got to do something about <insert buzzword here>.” The buzzword of the day is “MicroLearning.” We’ve been talking about “chunking” content for years without getting much traction, but dressed up in a new, more grown-up word, it gets taken more seriously. That’s cool. It’s still a good concept. People don’t have time for epic courses. By breaking content down into smaller “micro” parts, it is easier to consume in a hurry and can be targeted to the right people, the right task, and the right delivery channel.

There’s a problem, though, and it was always lurking behind the chunking conversation: our current process for delivering learning content, the LMS via SCORM, is too heavy-handed for the scale we will be working at. Imagine launching a course taking longer than actually completing the course. Imagine loading dozens of SCORM-based microlearnings into an LMS being more cumbersome than it is worth. How do we track these things in a reasonable manner?

Here are some options:

Self-Completions

In the LMS you can load the URL for the content and let the user click complete when they are done. This is the simplest idea, and I always defer to the simplest, but it may not meet your stakeholders’ standards for data integrity.

xAPI

The Experience API (a.k.a. Tin Can, xAPI) has the advantage of sending data to a database whenever the learner takes an action, rather than forcing the learner to launch the content from the LMS the way SCORM does. This would simplify the process, but you would have to build a process for inserting xAPI calls into your content and figure out how to get the data back into the LMS.
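To make that concrete, here is a minimal sketch of one xAPI call fired from inside a microlearning. The LRS endpoint, credentials, and activity IDs are placeholders I invented, not a real system:

```ts
// A sketch of a single xAPI actor-verb-object statement posted to a
// Learning Record Store. Endpoint, credentials, and IDs are placeholders.
async function reportCompleted(): Promise<void> {
  const statement = {
    actor: { mbox: "mailto:learner@example.com", name: "A. Learner" },
    verb: {
      id: "http://adlnet.gov/expapi/verbs/completed",
      display: { "en-US": "completed" },
    },
    object: { id: "https://example.com/microlearning/widget-basics" },
  };

  await fetch("https://lrs.example.com/xapi/statements", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Experience-API-Version": "1.0.3",
      Authorization: "Basic " + btoa("lrs-key:lrs-secret"), // your LRS credentials
    },
    body: JSON.stringify(statement),
  });
}
```

Note that nothing here requires an LMS launch: the content can live anywhere and still report what happened.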

Track the Assessment

Load only the final assessment for a group of microlearnings into the LMS. In this way, you track only the successful completion of the quiz as evidence of the learning achieved through the microlearnings. The microlearnings themselves then become supporting material that the learner can launch at will. This is probably the ideal solution, but I do have one more trick up my sleeve.

Gamification

I bet you didn’t see that one coming. Think about a video game with rooms and levels. If you run through the rooms as fast as you can, you won’t beat the level. You need to carry something from each room, a key of sorts, into the last room to win. How can we apply this to microlearning? Imagine that at the end of each microlearning you are given a key (a badge, a code) that you enter in the right place in the last module. Collecting all the keys gives you a passing score, and that is sent back to the LMS. This brings us closer to the idea of experiential learning.
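Here is a sketch of what the last module’s bookkeeping might look like. The key values and the scoring rule are invented for illustration:

```ts
// Illustrative only: each microlearning hands out one key; the final module
// checks what the learner collected and computes a score for the LMS.
const REQUIRED_KEYS = new Set(["alpha-7", "bravo-3", "charlie-9"]); // hypothetical codes

function keyScore(entered: string[]): number {
  const valid = new Set(entered.filter((key) => REQUIRED_KEYS.has(key)));
  return valid.size / REQUIRED_KEYS.size; // 1.0 means every room was visited
}

// The final module reports keyScore(...) as the passing score, e.g. via
// SCORM or an xAPI statement like the one sketched above.
```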

What are your plans for MicroLearning?

Check out my friend Tom Spiglanin’s post on this topic.

Deconstructing the Learning Management System

The Learning Management System is Undead. Everyone wants to kill it but no one does.

We are stuck in this no-win situation because it makes sense to have a system that tracks learning data and manages logistics; however, the cognitive load and resource drain created by its complex workflows lead us all to question whether it is worth it. We will remain in this limbo until we resolve the contradictions. The time has come to face this conundrum, but in order to do so we need to understand what the core problem is. The workflow structure of the LMS is layered with artifacts from past learning practices, much like rock strata contain the fossils of creatures from eons ago. Unearthing these structures and examining their flawed assumptions can be a start toward a more useful learning architecture.

Let’s go back to the early days of the corporate LMS, when it was really a classroom management system. Unlike K-12, where classes occur daily, and academia, where classes run on a semester basis, the scheduling of classroom-based training in the workplace is highly variable. For this reason, LMS design had to be hierarchical to accommodate all possible scenarios. The basic structure is this:
  • Course: the container of the content delivered to learners
  • Class: an instance of delivering the course
  • Session: an actual time and place where learners are presented with the content
Why is it necessary to track down to this level of granularity? Because LMS vendors sell functionality around these objects: reporting on course completions, scheduling people and resources for classes, attendance at sessions, and so on. Without the hierarchy, the data structure behind this functionality would be hard to manage. The problem is that this complexity makes things easier for the LMS programmers, while for L&D professionals and learners it just adds to the strain of using the system.

An important component of making this hierarchy work is the concept of the registration. On a practical level, registrations are used for printing rosters, capturing attendance, and managing wait-lists, class sizes, food orders, and penalties for no-shows. On a data level, a registration, which records an intention to participate in learning, creates the first record in the database on which all subsequent transactions are based. For L&D and learners, though, it adds another layer of workflow that creates more confusion.
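Sketched as data types, the whole edifice looks something like this (the names and fields are mine, for illustration; no vendor’s schema is this tidy):

```ts
// Illustrative types only: the hierarchy the classroom-era LMS is built on.
interface Course {        // the container of the content delivered to learners
  id: string;
  title: string;
}

interface TrainingClass { // an instance of delivering the course
  id: string;
  courseId: string;
}

interface Session {       // an actual time and place
  id: string;
  classId: string;
  startsAt: Date;
  location: string;
}

interface Registration {  // the intention to participate: the first record
  learnerId: string;      // that everything else hangs off of
  classId: string;
  status: "registered" | "waitlisted" | "attended" | "no-show";
}
```

Every transaction the LMS cares about has to thread its way through these four objects, which is exactly where the strain comes from.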

Now let’s move on to eLearning. Even though eLearning is nothing like classroom learning as far as workflow goes, no one wants to have a separate classroom system and eLearning system, so the structure needs to be contorted to accommodate both processes. This leads to some pretty obvious contradictions that we all live with. For starters, eLearning is a transactional activity, not a hierarchical one, yet to fit in an LMS we need to create separate eLearning courses and eLearning activities (the equivalent of classes). Also, registrations are not necessary for eLearning, but they are so integral to the LMS data strategy that they can’t be removed. This creates an extra step that learners do not understand.

Once we accept the flawed premise that eLearning must follow classroom training workflows, we have to solve the problems that this model creates. This is like the problem people have when they try to fit unwieldy metaphors to complex real-life situations. Here’s an example: in classroom training, instructors are the arbiters of learning. If they feel that a student has learned, they give them a completion for the course. The completion becomes the coin of the realm for learning data. With eLearning, there is no instructor to verify your completion. If the LMS simply links to the material, there is no way to prove that the learner “completed” the course. The proxy for the instructor is the end-of-course quiz, and it is designed to prove a negative. Just as attendance at a class is not a guarantee of learning but not attending is a reliable indicator of not learning, so too the ability to answer questions about the content is not a guarantee of learning, but the inability to answer those questions is a reliable indicator of not learning. Building a whole system to prove a negative seems a bit weak.

To create this dynamic in a foolproof (read: human-proof) way, protocols like AICC and SCORM have to be followed to control the flow of data between the learner and the database. This tight control leads to much of the difficulty in using and supporting these courses. If you are going to report reliably on this data, you need to follow logical rules that take all possible scenarios into account. For instance, it might seem intuitive that you can make changes to a course any time you want, but if you change the learning content itself, aren’t you invalidating the completions that have already occurred? This kind of logic may be valid from a data integrity standpoint, but it kills usability. The need for this complexity is created by the Frankenstein monster we assemble when we try to build one hierarchical system that solves for all situations.
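For a feel of that tight control, here is roughly the conversation a SCORM 1.2 course holds with the LMS (a sketch; real courses also have to walk up through parent windows to locate the API object first):

```ts
// A sketch of the SCORM 1.2 handshake between a course and the LMS.
// In reality the course must first hunt for the API object in a parent window.
declare const API: {
  LMSInitialize(arg: string): string;
  LMSSetValue(element: string, value: string): string;
  LMSCommit(arg: string): string;
  LMSFinish(arg: string): string;
};

API.LMSInitialize("");                               // open the tracking session
API.LMSSetValue("cmi.core.lesson_status", "passed"); // the proxy-instructor verdict
API.LMSSetValue("cmi.core.score.raw", "85");         // the end-of-course quiz score
API.LMSCommit("");                                   // flush the data to the LMS
API.LMSFinish("");                                   // close the session
```

Miss a step in that sequence, or let the window close at the wrong moment, and the completion is lost. A great deal of eLearning support effort goes into exactly this.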

Now add Social Media and Mobile, where control is neither practical nor desired, and you have a crisis. The structure of the LMS was designed to control participation data, but we don’t learn that way. We learn by interacting with people and ideas in countless ways. The Internet, which was designed to circumvent control, is allowing us to learn without restrictions, and the LMS is not equipped to handle that. This is why we have to address these issues now. The latest successor to SCORM, xAPI (the API formerly known as Tin Can), begins to address some of these problems: it tracks transactions rather than hierarchies, and it doesn’t require that you discover learning through the LMS, so it can capture data more freely. But this is just the tip of the iceberg.

This seemingly intractable problem was created when vendors attempted to give us everything we wanted in one package. The answer lies in the L&D industry doing some soul searching about what it is that the learner really needs. I could go on forever about the nuances of LMS workflow, but the point is this: we need to make the LMS more usable for learners and for organizations, and to do that we need to continue this exploration.

 

The Flipped LMS

Hooboy, do people like to complain about their LMS (Learning Management System). As evidenced by this conversation on #chat2lrn that I was a part of, L&D folk would like to wish away this beast of a software program. The thing is, though, no one ever really gets rid of their LMS. Why is that?

I believe that the root cause of this contradiction lies in two conflicting drivers: 1) it is important to the business to track training, and 2) the way we track training now gets in the way of learning. Many people, including myself, have argued the first point, so I’ll focus here on the second. In the current process of using an LMS, in order to track training you must control access to content. But we live in the Internet age now, and we all know that content wants to be free. If we cannot resolve this disconnect, there is no hope for improving our relationship with our learning architecture.

The Flipped Classroom is a concept that breathed new life into the education world’s version of this disconnect; perhaps the idea of a Flipped LMS can bring us to a solution. In the Flipped Classroom, instruction and experimentation (homework) were separated and switched: the instruction was provided by video at home, and the teacher supported the experimentation in the classroom. Imagine if we separated the content from the assessment. Put the assessment in the LMS, where it can track whether someone learned, and let the content exist outside the LMS, where it is always accessible to anyone. The assessment can have a landing page (most assessment tools can do this) that provides context for the information being assessed: why it’s important, how it is assessed, and where to learn what you need to improve your score. Here would be the link to the content. There could be three assessments per program, all orchestrated by the LMS: a pre-test, a post-assessment, and a follow-up assessment to reinforce the learning.
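As a sketch of the shape this takes (all URLs and IDs below are placeholders I made up), one flipped program might be described like this:

```ts
// Illustrative shape of one "flipped" program: the content lives on the open
// web, and only the three assessments are LMS-tracked. IDs are placeholders.
interface FlippedProgram {
  landingPageUrl: string; // context: why it matters and how it is assessed
  contentUrl: string;     // freely accessible content, outside the LMS
  assessments: {
    preTest: string;      // LMS item IDs: the only tracked objects
    postAssessment: string;
    followUp: string;     // spaced later, to reinforce the learning
  };
}

const widgetProgram: FlippedProgram = {
  landingPageUrl: "https://learn.example.com/widgets/start",
  contentUrl: "https://learn.example.com/widgets/guide",
  assessments: {
    preTest: "WID-101-PRE",
    postAssessment: "WID-101-POST",
    followUp: "WID-101-FOLLOWUP",
  },
};
```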

This way you are using the LMS for what it does best. By allowing multiple attempts and multiple sources of learning, you are letting the learner be more flexible and you are tracking improvement over time with less complexity.

But how do we know who accessed the content? This is the beauty of the idea. By splitting up access and assessment, you also split up what you have to measure. For assessment you must track individuals, and so you need the LMS; but to track the reach of your content, you only need counts of users and visits, not individuals. This can be done by any web analytics tool, like Google Analytics.
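For example, assuming the standard gtag.js snippet is already on the content pages (the page title below is a placeholder), counting reach requires nothing more than the stock page-view call:

```ts
// A sketch, assuming the standard gtag.js snippet is loaded on the page.
// This counts visits; it does not identify individual learners.
declare function gtag(...args: unknown[]): void; // provided by gtag.js

gtag("event", "page_view", {
  page_title: "Widget Basics", // placeholder content page
  page_location: window.location.href,
});
```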

Hopefully the clarity produced by this split in efforts will help L&D folk move on to more important conversations than how much they hate their LMS. That is, until they get the next bill from the LMS vendor.

It Should be Called a Participation Management System

At the center of the Learning Technology conundrum is the unwieldy Learning Management System. The joke is that you cannot manage learning. Learning takes place in the learner’s mind despite the best efforts of learning professionals to control it.
 
So what is it that gets managed? Participation. Completions, the coin of the realm for an LMS, are really just measurements of who has participated in Learning Experiences. Registrations are the method for controlling who will participate in an experience. Even quizzes really just measure whether you participated to the end: you can pass a quiz even if you haven’t learned, but you probably cannot pass if you haven’t participated. The LMS’s watchdog, SCORM, is designed to control participation in reviewing web content (oops, I mean eLearning). Without the control, you would just need a hyperlink and some web analytics.
 
Why all the focus on participation? Because it is a proxy for learning, which can’t be measured. Showing up, as they say, is the better part of success, and while it is true that you are unlikely to learn if you don’t participate, there is no guarantee that you will learn just because you showed up. Being handcuffed to the content by a SCORM connection makes no difference.
 
So what is the alternative?
 
 
If you cannot measure learning, what can you measure? Memorization?
We outsource our memorization to the Internet.
 
Why do we have to measure in the first place?
Because we have to account for the resources that we have been given to create Learning Experiences.
 
And why were we given those resources?
Because our stakeholders believe that learning will solve problems.
 
And how does learning solve problems?
Because learning changes people’s behavior.
 
So how do we measure that change?
 
Measuring change involves setting a baseline and then measuring the difference after an intervention. What can we measure about someone’s behavior before and after learning? There are many assessment tools that measure someone’s bias toward action. Are these tools granular enough to capture the change? Is it worthwhile to stop people from working and learning in order to make these measurements?
 
Or would it be better to simply ask people to reflect out loud, to narrate their experience and write about what changed? Perhaps that would be the best return on investment for everyone.

 

Learning Isn’t Pixie Dust

Recently someone showed me a PowerPoint presentation that had to be sent to senior stakeholders immediately (of course), and it was clear that it was not going to wow them. I said to my team, “It’s OK, I’ll sprinkle some pixie dust on it.” This was in reference to my ability to make PowerPoint NOT look like PowerPoint. We are trying to spread ideas, and we have such short windows of opportunity to get in front of people. We need to engage them. We need to EARN their attention. We need pixie dust: interesting approaches to graphic representation (read: eye candy).
 
But Pixie Dust isn’t learning. It may be the price of entry, the hook. It may be what you need to attract learners but it isn’t learning. Learning is in the structure of ideas and frameworks of understanding. Learning is in the internalizing of stories.
 
Often we paste the term “Learning” onto a tool and try to make it seem like this somehow makes the tool special. Learning Management System, Learning Authoring Tools, Virtual Learning. When you really look at these tools, they aren’t that special, and no amount of Learning “Pixie Dust” is going to set them apart from other tools. We use other tools to track usage, set up events, share screens, and design pages. Calling them “Learning” tools seems to benefit only the software companies that make them.
 
This gives us a blind spot. When we think we must use learning tools for learning, we close off so many other solutions. We are blinded by our own Pixie Dust.
 
 

Training Is Not Normal

The way we train people is not normal. The idea of a single person presenting information to a room full of people is a process created by the Victorians to gain economies of scale over the traditional, more natural master-apprentice format. The real way people learn is to ask, try, fail, analyze, adjust, solve, and retry. When knowledge was controlled, it made sense to distribute it efficiently, but technology has changed all that: with the persistent connectedness of everything, learning is something anyone can do as naturally as breathing.
 
Meanwhile, we have used technology to parse out the classroom model, but we haven’t really changed it. There are still instructional people who have knowledge, they distribute it in containers called courses, and they track attendance, showing up, launching, on a roster (the Learning Management System).
 
But if real learning has gone on without us, what are we doing? What value are we creating?
 
There needs to be something in the middle. Unstructured learning is great for finding out how to fix a pipe, but not for understanding how the system of pipes works and what it does. There is still a place for learning folk to provide value by creating schema, frameworks, and context for information. These are tools that a learner can use as a multiplier to create more robust learning.
 
First we need to start dismantling the old model. Learners expect to learn in chunks, as soon as they need them. We need to break learning content down into easily consumable chunks and show the learner how the chunks relate. We need to let the learner drive how the content is delivered.
 
This is a unique opportunity to change what learning can be.