Salespeople have it easy. They are measured on something very concrete: sales. Marketers, on the other hand, are measured on reach: how many people they reached with their message. There is no way to know whether the message resulted in sales, but if they don't reach people, those people don't buy. Learning professionals are in the same boat. It's hard to measure whether people learned, but it is possible to measure who was reached with the learning content. Those people may or may not learn, but they won't get the content if they can't be reached.
This means that every learning experience must have a mechanism built in to capture who had the experience. The issue is that how people access the experience is tangled up in how it is tracked. There is a danger in creating a tracking process that is more complex than the learning itself. That is unacceptable in the age of Google: learners expect minimal friction in accessing learning.
This struggle follows the continuum of formality in delivering training. In a formal classroom environment, responsibility for determining completion is given to the instructor, who sends that information to the LMS administrator for tracking. In a formal eLearning course, the course developer determines the criteria for completion and uses the SCORM protocol to make sure that completion is passed accurately to the LMS. Once you start looking at informal learning or microlearning, where tracking activities might hinder access to the learning, it gets more complicated.
With social media learning you can extract data about people's interactions around the learning experience. You can even reintroduce the human into the equation. For instance, certain MOOCs (Massive Open Online Courses) use software like Prosolo to give peers the opportunity to mark each other "complete" for certain tasks.
Counting participation or "butts-in-seats" is only a proxy for learning. It is like following footsteps on a beach and trying to figure out what the experience was. This is frustrating to most learning professionals because we want to know what our real impact was. Did people learn? But is that a realistic measure to hold ourselves to? We can't get inside people's heads. When we use an organization's resources to create learning experiences, we have to be held accountable for something measurable. We have to show that, at the very least, we reached people with our content. Maybe if we can simplify that process, we can move on to the next questions: Are people able to change how they process new information and challenges? Are they able to do things differently after experiencing learning?
Hooboy, do people like to complain about their LMS (Learning Management System). As evidenced by a conversation on #chat2lrn that I was a part of, L&D folks would like to wish away this beast of a software program. The thing is, though, that no one ever really gets rid of their LMS. Why is that?
I believe that the root cause of this contradiction lies in two conflicting drivers: 1) It is important to the business to track training. 2) The way we track training now gets in the way of learning. Many people, including myself, have argued the first point, so I'll focus here on the second. In the current process of using an LMS, in order to track training you must control access to content. But we live in the Internet age now, and we all know that content wants to be free. If we cannot resolve this disconnect, there is no hope for improving our relationship with our learning architecture.
The Flipped Classroom is a concept that resolved a similar disconnect in the education world. Perhaps the idea of a Flipped LMS can bring us to a solution. In the Flipped Classroom, instruction and experimentation (homework) are separated and switched: instruction is delivered by video at home, and the teacher supports experimentation in the classroom. Imagine if we separated the content from the assessment. Put the assessment in the LMS, where it can track whether someone learned, and let the content live outside the LMS, where it is always accessible to anyone. The assessment can have a landing page (most assessment tools can do this) that provides context for the information being assessed: why it's important, how it is assessed, and where to learn what you need to improve your score. Here would be the link to the content. There could be three assessments per program, all orchestrated by the LMS: a pre-test, a post-assessment, and a follow-up assessment to reinforce the learning.
This way you are using the LMS for what it does best. By allowing multiple attempts and multiple sources of learning, you are letting the learner be more flexible and you are tracking improvement over time with less complexity.
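The pre-test / post-assessment / follow-up pattern can be sketched in a few lines. This is a minimal illustration with hypothetical scores; the multiple-attempt structure and field names are my own, not a feature of any particular LMS:

```python
# Hypothetical assessment results for one learner.
# Each assessment allows multiple attempts; keep the best score.
attempts = {
    "pre": [40],
    "post": [70, 85],       # two attempts were allowed
    "follow_up": [78],
}

best = {stage: max(scores) for stage, scores in attempts.items()}

# Improvement over time is measured against the pre-test baseline.
gain = best["post"] - best["pre"]            # immediate learning gain
retained = best["follow_up"] - best["pre"]   # gain still held at follow-up

print(f"gain={gain}, retained={retained}")
```

Because every number is a score delta against the learner's own baseline, the LMS only has to store attempts; the "improvement over time" report falls out of simple subtraction.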
But how do we know who accessed the content? This is the beauty of the idea. By splitting up access and assessment, you also split up what you have to measure. For assessment you must track individuals, and so you need the LMS; but to track the reach of your content, you only need counts of users and visits, not individuals. This can be done with any web analytics tool, such as Google Analytics.
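The split in what gets measured can be made concrete. In this sketch the access log and assessment records are hypothetical stand-ins (the page paths, IDs, and field names are illustrative, not any analytics or LMS API): reach needs only aggregate counts, while assessment tracks named individuals.

```python
from collections import Counter

# Hypothetical content access log: reach only needs aggregate counts.
visits = [
    {"page": "/safety-module", "visitor_id": "a1"},
    {"page": "/safety-module", "visitor_id": "b2"},
    {"page": "/safety-module", "visitor_id": "a1"},  # repeat visit
]

page_views = Counter(v["page"] for v in visits)
unique_visitors = len({v["visitor_id"] for v in visits})

# Hypothetical LMS assessment records: here individuals must be tracked.
assessments = [
    {"learner": "Pat", "assessment": "safety-post-test", "score": 85},
    {"learner": "Sam", "assessment": "safety-post-test", "score": 92},
]
passers = [a["learner"] for a in assessments if a["score"] >= 80]

print(page_views["/safety-module"], unique_visitors, passers)
```

Note that no name ever appears on the reach side; that is exactly why a generic analytics tool can handle it while the LMS handles only the assessments.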
Hopefully the clarity produced by this split in efforts will help L&D folk move on to more important conversations than how much they hate their LMS. That is, until they get the next bill from the LMS vendor.
The role of learning professionals is NOT to make people learn. That's an unrealistic task. Learners are responsible for learning. Learning happens inside people's heads, and so it is impossible to measure directly. So then what do learning professionals do? Why do companies give them money?
The goal of a corporate learning function is to increase individuals' capability to produce value. A company whose employees produce more value is more valuable itself. To accomplish this goal, learning professionals need to create learning experiences and get the right people to participate in them. If the experiences are designed correctly, the participants should be able to do their work faster, better, smarter, or safer. The first part is handled by a Learning Management System. An LMS, when set up correctly, facilitates participation in learning experiences for target audiences. You can't grow your capability if you don't participate. However, just because you participate doesn't mean you grow your capability.
Most learning organizations stop here. Participation is easy to measure and report on, and stakeholders understand the results. The problem is that learning programs and Learning Management Systems are a big investment for delivering only half of the desired results. The learning industry has been scratching its collective head for years looking for a way to measure learning, but that is the wrong measure. We need to measure growth in capability.
Measuring growth is done by setting a baseline and measuring changes over time. The complication is that people are always learning and always adjusting their capability, so trying to isolate the effect of a single learning experience is hard. Another way to look at it is that learning experiences are transactions happening on a regular basis. Capability could be tested at timed intervals, and increases could be attributed to the transactions that occurred between the intervals. This kind of measurement is not easy for an LMS but could be done with the xAPI.
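The interval idea can be sketched as follows. The records below follow the xAPI statement shape (actor / verb / object / timestamp), but everything else here is an assumption for illustration: the verb, object IDs, checkpoint scores, and attribution logic are hypothetical, and this is not a real xAPI client or Learning Record Store query.

```python
from datetime import date

# Learning "transactions" in a simplified xAPI actor/verb/object shape.
statements = [
    {"actor": "mailto:pat@example.com", "verb": "experienced",
     "object": "course:negotiation-micro-1", "timestamp": date(2024, 2, 3)},
    {"actor": "mailto:pat@example.com", "verb": "experienced",
     "object": "video:negotiation-tips", "timestamp": date(2024, 3, 12)},
]

# Capability tested at timed intervals (hypothetical scores).
checkpoints = [
    {"date": date(2024, 1, 1), "score": 55},
    {"date": date(2024, 4, 1), "score": 72},
]

start, end = checkpoints[0], checkpoints[1]
growth = end["score"] - start["score"]

# Attribute the growth to the transactions between the two checkpoints.
attributed = [s["object"] for s in statements
              if start["date"] < s["timestamp"] <= end["date"]]

print(growth, attributed)
```

The design choice is that no single experience is credited with the gain; the whole set of transactions in the window shares it, which matches the idea that people are always learning from multiple sources.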
Using this model, Learning Experiences could be offered as smaller components of a larger practice. The key is in creating assessments that also provide value.
Tim Cook, the CEO of Apple, wakes up in the middle of the night from a dream. He has had a premonition that somewhere in his company of 80,000 employees there are 100 people who will develop and market Apple's next blockbuster product: iPet. Right now, though, these people don't know enough about pets to make that happen. So Mr. Cook calls his Learning and Development team and tells them to deploy animal behavior training to everyone in the company at a cost of $10 million.
If Cook were the head of a division, he would count the $10 million as an expense and subtract that number, along with other expenses like salaries, from the projected revenues to show his boss that there would be profits. But Tim Cook doesn't have a boss like you and I do. His bosses are the stockholders. They have bought Apple stock for a lot of money, and they want the value of their stock to keep going up. The value of the stock is a share of the market value of the company. It is Cook's job to make sure that value keeps increasing.
When Jobs and Wozniak were starting Apple Computer in their garage in the seventies, 80% of the value of companies depended on tangible assets like factories. Only 20% was created by people. So if you built another factory, you could increase the value of the company. Now the numbers are flipped: 80% of the value of companies is created by people. But people aren't factories; how do you account for the value that they produce?
Not coincidentally, Apple's book value of $120 billion is about 20% of its market value of $570 billion. Book value is calculated by accountants, and it lists employees as expenses and liabilities. Nowhere in the book value of Apple is there any mention of the legacy of Steve Jobs, but you can be sure that it is factored into the market value.
The 100 people in Tim Cook's dream are already contributing to the market value of the company. If Cook wants to increase that value, he needs to increase the value that those 100 people are capable of producing. Since he doesn't know which employees will produce that increased value, he needs to increase the capability of all of his employees. He will invest $10 million in a learning program that has the potential of raising the company's valuation 10%, or $57 billion. That is the return on investment for learning.
Not every CEO wakes up in the middle of the night with visions of new products, but most good leaders instinctively know that they need to invest in their people as producers of value even though that calculation is not figured into their accounting. The reason companies set aside budgets for learning is to increase the value of the company.
I facilitated a Morning Buzz session at DevLearn in October about Learning Technology Strategy. The conversation was very robust. It seems that the topic I picked had touched a nerve. But I noticed that a friend of mine was unusually quiet. When I asked him about it later he explained that he felt it was the wrong conversation. I understood what he meant. It is the ongoing concern about putting too much energy into strategizing technology for learning instead of talking about the learning first. What do people need to learn? How do they need to learn it? That is what the conversation should be about in an ideal world.
In our world, though, the conversation about technology has already taken up everyone's energy, and without a strategy it has been going around in circles. "We want eLearning to save on travel costs." "We want rapid development tools to meet aggressive deadlines and tight budgets." "We want an LMS with the most features so we can automate delivery and track compliance and make all our stakeholders happy." We jump from one tech conversation to another without any sense of desired outcomes and priorities. This endless conversation is already taking time and resources away from talking about learning content. It's easier to get lost down the rabbit hole of tech than to think about what learners really need.
This is why I wanted to start a dialogue about developing a Learning Technology Strategy. I want to reduce the noise and commotion. Having a strategy means that you can start talking about outcomes before you talk about tools. For instance, many people in the group had spent a lot of time finding the right LMS, but almost none of them were happy with their decision. This is probably because they focused on comparing the feature set available to the feature set they wanted, instead of looking at what they wanted to get out of an LMS. Say an organization decides it wants an LMS to hold people accountable for delivering learning, with a secondary benefit of the savings from automating class roster management. Focusing on delivering that requirement will make the LMS selection process faster and more effective.
The same is happening with rapid development tools, virtual classroom tools, and content management tools. All of this technology is becoming commoditized anyway. It's hard to find a truly bad product at this point, just as it's hard to find a perfect one. The real value is in deciding how technology is used. Instead of thinking about how to use the feature sets of all of the tools in your toolkit, think about what these tools make possible for the learner to find and consume.
If we get this conversation going in the right direction, maybe we can get back to the right conversation.
Memo to the provisional government on Coruscant, Committee for Post-Rebellion Leadership Development, from Princess Leia.
Dear committee members,
I am writing to you to give an overview of the Jedi Talent Strategies for recruitment, performance management, learning and development and succession planning and key areas of weakness that led to the dissolution of the program.
Recruitment: The use of arcane credentials like the blood count of midi-chlorians does not take into account personal characteristics, both positive and negative, as evidenced by Anakin Skywalker. It can also lead to nepotism, as in the case of Luke Skywalker, which discourages diversity.
I have just completed my first Massive Open Online Course (MOOC), Learning and Knowledge Analytics, taught by George Siemens of Athabasca University. Participating in a course of this format is both challenging and rewarding. I did not participate to the level that I had wanted to (2-3 hours per week vs. the recommended 4-5), but I made sure that I stuck it out to the end. I had two goals for the course that I would like to discuss here.