Learning is Not an Event

Learning is not an event. It is a continuum. We are constantly reconstructing our reality in small ways to accommodate new information. We may want to learn how to do a task at the moment of need, but we cannot absorb that information unless we have laid the groundwork to provide context. Over time we build layers of mental frameworks on which to hang our learning. If we are providing information to be accessible at the moment of need, it must be organized in a way that matches those frameworks, or people will not be able to find it quickly or understand it when they see it.

In order to get to the point where this absorption is possible, time must be taken to build up a general understanding of the concepts involved. An individual must disassemble their previous understanding and then reassemble it in a new way that can accommodate the new information. In this way, learning is the same thing as creativity, which disassembles and reassembles existing forms to create new ones. The process of Disassemble And Reassemble Your Models Of Understanding (DARYMOU) cannot be done for the learner. We must create an environment that enables them to do it for themselves. Where did we all learn about gravity, motion, cause and effect, and social dynamics? The playground. We need to create a playground for learning in the workplace. This involves providing diverse resources, opportunities to fail, collaborative activities, and time for reflection.

The disassembly process is done with humor, absurdity, memes, thought experiments, and the like. The reassembly portion requires that people can organize ideas into a structure. This can be done with a tool called a Knowledge Graph or Ontology: a visualization of the network of relationships between entities in a domain of knowledge. This tool is extremely adaptable, so it can be extended over time to accommodate new ideas. Its visual logic makes it easy to navigate, so it can later be used to organize information for the moment of need.
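To make the structure concrete, here is a minimal sketch of a knowledge graph reduced to subject–relation–object triples. This is only an illustration: the domain entities and relations are invented, and a real ontology tool would layer typing, visualization, and validation on top of something like this.

```typescript
// A knowledge graph in its simplest form: a list of triples.
// All entities and relations below are hypothetical examples.
type Triple = { subject: string; relation: string; object: string };

const graph: Triple[] = [
  { subject: "Expense Report", relation: "is approved by", object: "Cost Center Manager" },
  { subject: "Expense Report", relation: "requires", object: "Receipt" },
  { subject: "Cost Center Manager", relation: "is a kind of", object: "Approver" },
];

// New ideas extend the graph without restructuring what is already there:
graph.push({ subject: "Receipt", relation: "is stored in", object: "Finance System" });

// At the moment of need, the structure supports simple neighborhood lookups:
function related(entity: string): Triple[] {
  return graph.filter((t) => t.subject === entity || t.object === entity);
}

console.log(related("Expense Report"));
```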

Mindful Learning Analytics

Over the course of my years helping companies improve their business processes around using Learning Technology, I have become good friends with many Learning Management System Administrators. They are good, hard-working people who care deeply about what they are doing. When they confide in me their many frustrations with the job, what stands out is the constant struggle with reporting on learning data for their stakeholders.

LMS Administrators are constantly pressured to provide reports on data that either doesn’t exist or is organized in a way that makes it impossible to extract the requested information programmatically. This leads to a lot of manual work to “massage” the data to produce the results. In the end no one is happy, because there are always errors and incomplete information, and because the time spent is taken away from the other tasks the Administrator is responsible for.

What I try to do in these situations is help both the administrator and the L&D professional who made the request understand the nuances of the requirements and the limitations of the systems we are using to support them. In short, I encourage them to be mindful about what they want and the reality of what is available.

Let’s start with why we collect learning data in the first place. A friend of mine says that she refuses to collect data that no one will act on. This is key. Data collection and reporting take resources. There is no point in spending those resources if there is no desired outcome.

Why did the senior leader who requested that we create the learning opportunity want it in the first place? She wanted something to change. She believed that if people learned to do something differently, there would be a change in the organization that would make things better for stockholders, employees, customers, or the community.

That message of what needs to be better must reach the intended audience. I use “reach” as a minimum requirement so that we get away from debating whether people learned. If the message doesn’t reach the audience, they won’t learn, but measuring reach is more realistic than trying to get into their heads to see if they really understood the material. If the audience was reached and the change still didn’t happen, then you can explore further.

To measure reach you need the following data points:

  • Learning Opportunities: These are the courses, classes, sessions, assessments, and activities that people can access that deliver the message. Learning Opportunities can have the same names, so there needs to be a unique identifier like a course code to keep track. There also needs to be an owner or primary contact: someone to take responsibility for determining what a successful access of the Learning Opportunity entails. Is opening the page good enough? Did the person open the latest version? Are there equivalent activities? This information isn’t always available, and many times the owners of Learning Opportunities have moved on to other roles or other companies.
  • People: These are the records of individuals who accessed the learning opportunities. People need unique identifiers as well, usually an employee ID, network login, or email address. I’ve never worked with a company that had perfect records of their people. There were always outdated records, duplicates, and people without identifiers. The Administrator cannot control the upstream data from HR and IT that supports the learning systems. They need to set reasonable expectations with the stakeholders as to what percentage of people records will be inaccurate or incomplete.
  • Audience: This is a list of which people we intend to reach with the message. Here’s where the limitations of the people data really come into play. The stakeholder needs to determine what property of people signifies that they are in the audience: Geography? Role? Organization? These properties need to be in the data, but they are usually not in the learning systems. They need to be imported from other databases and merged with the learning data. Without consistent people identifiers, there will be missed matches. If the stakeholder expects reports with 0% errors, there will have to be a lot of manual work, and there is still no guarantee of accuracy. You have to balance the effort against the usability of the data.
  • Access: Which people accessed the learning opportunities? This may seem simple, but it is not, for these reasons: access is captured either by instructors, in which case significant delays can happen before records are entered into the system, or by eLearning modules, which are notorious for neglecting to mark someone complete when a module is done but the connection was lost at the last minute. The other problem happens when the stakeholder asks for something called an “Exception Report.” This is a list of everyone in the audience and whether they did or did NOT complete the learning. Seems simple, but there is no record in a database of someone NOT doing something. It has to be inferred (see the sketch after this list), and given the limitations of the supporting data (Who is in the audience? What does it mean to access the learning opportunity?), the resulting report can be confusing and frustrating for everyone.
  • Accountability: Ultimately the person who is accountable for whether someone has accessed a Learning Opportunity is their direct supervisor. Reporting structures in an organization are hierarchical, and combining that data with learning data requires a recursive program that finds people through all the levels so that the report can “roll up” to the top leadership (also sketched below). This process leaves out a lot of people, especially in matrixed organizations and places where HR data is not updated in a timely manner. If a manager is reprimanded for poor numbers, it may be based on error-prone data. It would be better to report aggregate numbers showing completion counts and total people counts in each organization. Managers could then investigate whether their poor numbers are valid.
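To illustrate both of those points, here is a minimal sketch of how an exception report and a hierarchical roll-up can be computed. All identifiers and the org structure are hypothetical, and a real implementation would read from the LMS and HR databases rather than from in-memory sets.

```typescript
// Non-completion is never stored anywhere; it must be inferred as
// audience minus completers. Accuracy depends entirely on both inputs.
const audience = new Set(["emp001", "emp002", "emp003", "emp004"]); // from HR data
const completions = new Set(["emp002", "emp004"]);                  // from LMS access records

const notCompleted = [...audience].filter((id) => !completions.has(id));
// => ["emp001", "emp003"] — only as reliable as the audience and access data

// Rolling completion counts up a reporting hierarchy requires recursion,
// since each manager's numbers must include everyone below them.
interface OrgNode {
  manager: string;
  reports: string[];   // direct reports' identifiers
  children: OrgNode[]; // sub-organizations
}

function rollUp(node: OrgNode): { done: number; total: number } {
  let done = node.reports.filter((id) => completions.has(id)).length;
  let total = node.reports.length;
  for (const child of node.children) {
    const sub = rollUp(child); // aggregate each sub-organization
    done += sub.done;
    total += sub.total;
  }
  return { done, total }; // aggregate counts per organization, not per-person blame
}
```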

“Reach” is a relatively easy metric to obtain, but it doesn’t tell us what the business impact is. Therefore, even though it is important and useful as a minimum requirement, it is not worth attempting a 100% accounting of the data. It would be better to put Learning Administrators to work finding interesting new ways to help people access the right learning and discover the actual impact it makes.

The 5 Levels of Curation

There is a lot of learning content out there.

Do we really always need to create more new content, or can we just help people get to what already exists that can be valuable to them? This is, of course, called Curation, and it is gaining interest as content proliferates while, at the same time, resources for creating new content dwindle.

But curation is not the same as creating content. When we treat it like content, we add to the problem. Curation is a practice. It is a behavior that we develop like a muscle. Like any practice, there are levels of commitment, a continuum of involvement from none to the ideal. Here is my model of the Curation spectrum:

The Five Levels of Curation:

1. No curation at all: If you aren’t contributing, you are missing out on the give and take of value in your community.

2. Only links: Links by themselves don’t add much value, but they are better than nothing.

3. Links and context: Telling a story about why this resource is valuable gives the content resonance.

4. Links and relevance to work: Providing an explanation of how the resources support the work that we do makes the value proposition clear.

5. Building a community around curation and committing to ongoing maintenance: Keeping content fresh on a regular basis assures the engagement of your audience.

This model helps you explain the value proposition of curation to stakeholders. When they come to you to create a page of hundreds of links as learning content that they will forget about as soon as they sign off on it, you can show them that the effort is not warranted by the value produced. You can encourage them to share less but provide more context, relevance, and community.

Marketing Your Learning Ecosystem

If you build it they may come…but probably not. Don’t get caught in the trap of thinking that since you made something that is so beneficial to your target audience, that they should just naturally gravitate to your solution. I’m sorry but I think you’ve forgotten that we are all being barraged with a typhoon of interruptions, screaming for our attention. You can’t expect your potential users to notice your system above the noise, even if it is the most amazing thing that was ever created.

But why should we be marketers? We are learning professionals. Marketing is not our job or background. But is that true? Aren’t marketers and learning professionals both selling ideas? Don’t we both focus on key messages, target audiences and desired outcomes?

If you have built a system to help people access your Learning Ecosystem, you probably invested resources, and the people who provided those resources want to know that the system is going to be used. An unused system has no value. So, long before the project nears completion, you need to start putting together a plan to market your Learning Ecosystem. You will have to find a way to break through that wall of noise.

The first thing you have to do is get out of the ivory tower of the L&D department. You need to talk to actual learners, not learning professionals or their stakeholders. How do they find out about the systems they use today? Where are they getting their daily work done? What kinds of problems are they trying to solve? What works for them outside of work?

Next, you need to go to where the learners are. Don’t stick with your usual communication channels if they don’t work. If people use social media, then you should use their social media tool. If people find out about systems through departmental meetings, then you should get yourself invited to those meetings. When you do show up, stick around. Don’t make your pitch and leave. Make sure you become part of the community; otherwise people will ignore you. Create value for that community, not just for yourself and your initiative. Speak about your work in terms of their work.

Finally, bring fun. Work can be dreary. We are being tugged at by things that we are told are SO serious. It’s tiresome. Learning folk can be very irreverent and funny. Let that part of you out. People will remember it.

Voice of the Learner

We’ve been hearing about the Echo Chamber a lot lately. This is the effect where you become surrounded by people who think like you to the exclusion of new ideas. This happens in L&D a lot. We talk to each other about what works and what doesn’t work in learning. We talk to our stakeholders but the conversation is focused on the same ideas around delivering learning. This is problematic for creating learning content, but it is disastrous for implementing learning technology. It can lead to spending a lot of money on things that the learners don’t want.

IT and business improvement methodologies like ITIL and Six Sigma put a lot of focus on getting “the Voice of the Customer”: finding out what the end user of the system, process, product, or service is going to do in the real world. This way, the requirements of the project line up with the way users get their work done. What if we did this with learning content and learning technology? When was the last time you talked to a real-life learner? What would it be like to talk to someone who has no knowledge of our methods or jargon? That project you’ve been working on for months, which seems to be the most important project in the history of the company, might draw blank stares from someone who has been focused on their own work. Really listening to these people might just break us out of our Echo Chamber.

When my team does VOC work, we break it down into three tiers:

  • We collect general quantitative information from surveys of large groups
  • We get more specific insights and trends from small focus groups
  • We get targeted information from individual observations

Start by figuring out what questions you want answered. There are two types: questions to get information that you don’t currently have and questions designed to challenge your assumptions about what you already know. Here are some examples:

  • How are people solving the problem today?
  • What are people not getting out of their current solution?
  • Why do people need what we think they need?
  • How will our solution fit into the way people do their work?

Next, find people to participate. Make sure they are not in L&D, not stakeholders of L&D projects, and not in HR. It helps if they come from the remotest of offices and a variety of business functions. Find a strong community where people are engaged. This could be a charitable giving group or a cultural or health-based club. Ask the leader for access to their member list. It’s best if the leader reaches out to their community on your behalf. Remember to always explain the importance of getting people’s feedback and, most importantly, THANK them profusely at every opportunity for taking the time to help.

Now it’s time to implement the components of your VOC project:

  • Survey: Keep it short. People don’t have time for multipage surveys. Ask only what you need to get your core questions answered. Ask quantitative questions (multiple choice and Likert scales) so that you can create graphs that show trends. At the end of the survey, ask if they would be willing to participate in more user research (this will help you generate the list for the forums).
  • Forums: Here’s where you can get details on the answers to the survey. Make use of the group dynamics to get a discussion going. However, the discussion can be dominated by one or two dynamic personalities. Make sure others get a chance to talk by directing questions to them by name. It helps to show them something existing, or at least a mockup of the proposed solution, so that the participants have something to comment on. Make sure you schedule sessions around applicable time zones.
  • Direct Observations: After the survey and user forums, you might have very specific questions about what you heard. Or you may still have an assumption that you think might be off base from reality. By watching someone actually use a system or content, you can see directly what the effects of your design decisions will be. Schedule one-on-one meetings with individuals, either in person or using video conferences with screen sharing. Ask them to use the existing system or the mockup and have them narrate what they are doing. Ask them what they think each element means and what they expect to happen when they use it. This is where you will get some big surprises. Users always see at least one thing quite differently than was intended.

Finally, you can aggregate the results and report out to your stakeholders. Don’t skip this step. You won’t remember all the conversations later. Keep good notes and summarize them. What you learned can be helpful to other projects in the future.

Reach and Access

Salespeople have it easy. They are measured on something very concrete: sales. Marketers, on the other hand, are measured on reach: how many people did they reach with their message? There is no way to know whether their message resulted in sales, but if they don’t reach people, those people don’t buy. Learning professionals are in the same boat. It’s hard to measure whether people learned, but we can measure who was reached with the learning content. People may or may not learn, but they won’t get the content if they can’t be reached.

This means that built into every learning experience must be a mechanism to capture who had the experience. The issue is that how people access the experience is tangled up with how it is tracked. There is a danger in creating a tracking process that is more complex than the learning itself. This is unacceptable in the age of Google. Learners expect minimum friction in accessing learning.

This struggle follows the continuum of formality in delivering training. In a formal classroom environment, responsibility for determining completion is given to the instructor, who sends that information to the LMS Admin for tracking. In a formal eLearning course, the course developer determines the criteria for completion and uses the SCORM protocol to make sure that completion is passed accurately to the LMS (a sketch of what that looks like follows). Once you start looking at informal learning or microlearning, where tracking activities might hinder access to the learning, it gets more complicated.
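For the formal eLearning case, here is a minimal sketch of how a module might report completion through the SCORM 1.2 runtime API. It is a simplified illustration, not production code: real courseware does more thorough API discovery and error handling, and committing early is one way to reduce the lost-connection problem mentioned earlier.

```typescript
// Simplified SCORM 1.2 sketch: the LMS exposes an API object on an ancestor
// window of the course frame; the module finds it and reports status.
function findAPI(win: Window): any {
  // Walk up the frame hierarchy looking for the LMS-provided API object.
  while (!(win as any).API && win.parent && win.parent !== win) {
    win = win.parent;
  }
  return (win as any).API ?? null;
}

const api = findAPI(window);
if (api) {
  api.LMSInitialize("");
  // ... the learner works through the module ...
  api.LMSSetValue("cmi.core.lesson_status", "completed");
  api.LMSCommit(""); // persist immediately, so a dropped connection at the
                     // very end doesn't leave the learner unrecorded
  api.LMSFinish("");
}
```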

One solution is to separate usage from learner success. You can use web analytics to measure the usage of the content and then track a post-learning assessment to see if people learned. Or you can make the tracking more passive (not requiring action by the learner) by embedding it into the content. The xAPI protocol, with its lightweight JavaScript programming, can be good for this.
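As an illustration, here is a minimal sketch of the kind of statement an embedded script might send to a Learning Record Store. The endpoint, credentials, learner, and activity are all placeholders; a real deployment would use its own LRS address and authentication.

```typescript
// Hypothetical LRS endpoint and credentials — placeholders, not a real system.
const LRS_ENDPOINT = "https://lrs.example.com/xapi/statements";
const AUTH = "Basic " + btoa("lrs-user:lrs-password");

// One xAPI statement: actor – verb – object.
const statement = {
  actor: { name: "Pat Learner", mbox: "mailto:pat@example.com" },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/experienced",
    display: { "en-US": "experienced" },
  },
  object: {
    id: "https://example.com/guides/expense-reports", // hypothetical activity
    definition: { name: { "en-US": "Expense Report Quick Guide" } },
  },
};

// Fired from inside the content itself; the learner does nothing extra.
fetch(LRS_ENDPOINT, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-Experience-API-Version": "1.0.3",
    Authorization: AUTH,
  },
  body: JSON.stringify(statement),
});
```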

With social media learning you can extract data about people’s interactions around the learning experience. You can even reintroduce the human into the equation. For instance, certain MOOCs (Massive Open Online Courses) use software like Prosolo to give peers the opportunity to mark each other “complete” for certain tasks.

Counting participation or “butts-in-seats” is only a proxy for learning. It is like following footsteps on a beach and trying to figure out what the experience was. This is frustrating to most learning professionals because we want to know what our real impact was. Did people learn? But is that a realistic measure to hold ourselves to? We can’t get inside people’s heads. When we use an organization’s resources to create learning experiences, we have to be held accountable for something measurable. We have to show that, at the very least, we reached people with our content. Maybe if we can simplify that process, we can move on to the next questions: Are people able to change how they process new information and challenges? Are they able to do things differently after experiencing learning?

The End of Re-orgs

When trying to create an ecosystem for learning, the bottleneck that keeps coming up is the role of the manager. If the manager doesn’t think that learning will solve their problem of how to deliver on commitments, then the learning will not happen. The learner relies on their manager to be committed to their development. Most managers agree with this in theory, but there is always a conflict of interest between developing people and delivering immediate results. The problem is exacerbated by re-orgs. Changes in managers mean that a person’s development has to be constantly rebooted. Re-orgs are necessary because the business needs to change, but the uncertainty disrupts performance.

This blog usually doesn’t go into organizational design, but I came up with a radical idea that might solve both problems: managers’ trouble reconciling people development with execution, and the disruption re-orgs cause to development and performance.

No more re-orgs.

The only way you can stop re-orgs is to stop having organizations. Imagine a 100% flat organization. No one belongs to any formal organization. No group has a formal manager. How would work get done? Ad hoc teams would be formed and staffed as needed to solve specific problems, then disbanded on completion of the deliverables. There would be a Team Lead who would ONLY be responsible for deliverables. How would performance be managed and development be guided? Mentors would be assigned to each employee. Mentorship is often informal, but this would be a formal role that is part of the mentor’s job description. The employee is answerable to the Team Lead for delivering results and responsible to the Mentor for performance and self-development.


This separates people management from results management and it removes the disruption and lack of clarity around reorganizing formal hierarchies. This would free learning from the tyranny of deadlines and it would make learning a priority of the mentor.

The Radical Eradication of Bad eLearning

I have a radical idea:

There should be no bad eLearning in your LMS.

I know, crazy right? Here’s another one:

People don’t really need eLearning.

See, there’s this thing called a Document (or an article) that everyone knows how to use and that is easy to distribute. It can be used to provide information on anything. The only reason you would ever want eLearning is that sometimes the information in documents is hard to process. eLearning can make it easier. However, if the eLearning is bad and makes it harder to process the information, then it makes things worse. eLearning takes a lot more resources to create, distribute, and maintain than documents, so if it makes things worse, why on earth would you keep it?

If your LMS contains bad eLearning, you will kill the credibility of all the learning that you offer. People have so many options for learning that they have limited patience for something that makes it harder to learn.

I propose getting rid of all bad eLearning in your LMS right now. I know it’s heresy, but really, can you afford to have bad eLearning in your LMS? How do you know what is bad and what is good? It’s easier than you think when you look at it from the learner’s point of view. I propose the following rubric to test your eLearning against. If your content doesn’t meet every one of these criteria, then I say you should chuck it.

  • eLearning should solve a problem.
  • eLearning should state why the topic is important to the learner (not the learning organization) within the first 3 pages.
  • eLearning should have more pages that provide valuable information than pages that don’t.
  • eLearning should not go longer than 30 minutes without some type of break.
  • eLearning should not try to cram too much information on a page but instead should link out to documents.
  • eLearning should not have navigational elements that make it harder to access information.
  • eLearning should not have outdated or inaccurate information.

Wow, this is just so subversive! Accuracy, brevity, clarity. This is crazy talk. What will our stakeholders say? Well, they will ask why we created bad eLearning for them in the first place. That may be a sensitive issue, but if you don’t address it, then no one will look at any of the content, good or bad.

Maybe you have 5,000 courses in your LMS and you are thinking that reviewing them all is going to be too much work. You better get started. Managing thousands of courses, most of which probably don’t deliver value, is an exercise in futility. All the more reason to be ruthless in purging them. At the very least, hide the bad programs from search. Your stakeholders can still send out links, but your credibility won’t be damaged. Start with your highest-visibility programs. If those are bad, you need to fix them.

If you want to talk with me more about this, you may have to find me in L&D Siberia where I will have been banished for my radical ideas.

Google Maps: The Ideal Performance Support

I’m a typical guy in that I don’t ask for directions, but it’s not a macho thing. I just know that I won’t understand them. I’m a visual person. I need to see the big picture, the context. I need a map. But that’s just me.

I wrote this post about the advantage of maps over directions as a metaphor for learning, but who am I to tell people how to access information? As I prepare for the Performance Support Symposium in Austin next week, I am thinking about the maxim of Performance Support: “Get people what they need and get out of their way.” I keep wondering if there is a way to let people access step-by-step directions AND see the bigger picture. Bob Mosher calls this the flipped pyramid. In a formal classroom, there is an inverted pyramid: the grand concepts are on top, and you drill down until you get to the instructions for the task at hand at the end. In Performance Support you reverse this picture. You start with what is needed to get the task done, and then you let the user choose to drill down to the deeper concepts.

I’ve been trying to think of an example of how this would work and it hit me: Maps. Specifically online maps. Google Maps and its competitors are the ideal example of what Performance Support should be:

  • It lets you switch back and forth from maps to directions and from individual steps back to the map.
  • You can access it at the moment of need. It can be on your desktop at home when planning a trip or on your mobile device when you are lost.
  • You can dive deep into the detail or zoom out to get a broader perspective.
  • You can link to other resources like the menu of a restaurant.
  • You can contribute by uploading photos and commenting on sites.
  • You can embed interactive maps into other applications.
  • Everyone understands how to use it (Is this because of its ubiquity or its straightforwardness?).
  • It uses the affordances of the mobile device (most obviously the GPS). You can even use the accelerometer to set the compass so you can see which way you are facing.
  • It has a “Show me” function in the form of Street View.
  • It warns you about challenges by showing traffic patterns.
  • It gives you options for completing the task with optional routes and optional modes of transportation.
  • It tells you how long the task will take.
  • Most importantly it gives you information that you can act on immediately.

These features of online maps could be added to any Performance Support solution to make it more robust. It is a good way to demonstrate the power of Performance Support.

I’m looking forward to seeing where this goes…as long as I don’t have to ask for directions.

Shifting Perspective: Recap of Learning Solutions #LSCon and Learning Ecosystems #Ecocon

How do you capture the “vibe” of two co-located conferences with over 100 sessions? The Learning Solutions and Learning and Performance Ecosystems conferences, held in late March by the eLearning Guild in Orlando, Florida, each had their own vibe, but the overarching phrase I would use to describe them is “Shifting Perspectives.” We have all heard countless times that the learning industry, like so many others, is in the crosshairs of major upheavals fueled by technology and driven by intense economic forces. These two conferences went far in showing concrete examples of thinking and methodologies that are equipped to handle this level of change. The key to making a difference is in shifting our perspective, and these sessions made a strong case for doing just that.

Bob Mosher and Conrad Gottfredson, the gurus of Performance Support, held a Morning Buzz session on Wednesday. These sessions are usually supposed to be informal chats over coffee about topics of interest, but Bob and Conrad led a full-scale invasion of their topic, piling on a wealth of information and insight. Their key message was that much of what people need to know is needed at the moment when the work is being done. Learning groups need to shift their perspective from pulling people out of the work to learn through training toward bringing learning into the workflow as Performance Support. “When we enter the classroom, we leave context behind. We then have to work hard to recreate the context,” Bob explained. “With Performance Support, the context is already there in the work.”

The keynote speaker Tom Wujec used the recent history of technological disruption to show the necessity for changing perspective in order to keep up. He challenged the crowd by saying “As educators we have an obligation to help people understand how to use technology.”

The audience at Learning Solutions is always lively, fun, and a bit irreverent, but over on the Learning Ecosystems side, things were more serious. Here were the people who have been tasked in their companies with creating this amorphous thing called an Ecosystem. When Marc Rosenberg and Steve Foreman, both outspoken proponents of the Ecosystems concept, gave their presentations, the attendees were hanging on every word. Marc explained that we all already have Ecosystems. The question is whether they are robust enough to serve our needs. Again we were being encouraged to shift our perspective: from focusing on what we need to deliver as learning professionals to focusing on what associates need to know to do their jobs. This expands learning beyond just training and across a spectrum of resources: Talent Management, Knowledge Management, Social Media collaboration, access to experts, and Performance Support, as well as standard training.

The person who, for me, gave the best hands-on example of this kind of shift in perspective was my friend JD Dillon, who recounted his approach over the past five years transforming corporate learning at Kaplan. Instead of focusing on the content of courses, JD focused on the knowledge that people needed access to. If it wasn’t written down and available for everyone, then it wasn’t going to be provided as learning. To that end he built a Wiki of the entire body of knowledge of how work gets done at Kaplan. More importantly, he built and maintained it by creating a culture of collaboration: as the work process evolves, the people doing the work continuously contribute to the Wiki. The next shift happens in moving needs analysis directly to the learners themselves. Every morning everyone plays games on the gaming-assessment engine provided by Axonify. When they struggle, they are sent to the exact place in the Wiki where the information exists. When they win, they get points that can be traded for swag or used to bid on things like a 25-minute meeting with the CEO. This twist in focus means that the daily life of an employee is tied in with learning and contributing to knowledge. It frees the L&D department to create targeted learning that covers deeper, more impactful topics.

The world around us is shifting rapidly, and shifting our perspectives is how we will adapt and better serve our constituents. Conferences like these are good places to be reminded and encouraged in this direction.