The 5 Levels of Curation

There is a lot of learning content out there.

Do we really always need to create new content, or can we simply help people get to what already exists and could be valuable to them? This, of course, is called curation, and it is gaining interest as content proliferates while, at the same time, the resources for creating new content dwindle.

But curation is not the same as creating content. When we treat it like content, we add to the problem. Curation is a practice, a behavior that we develop like a muscle. Like any practice, it has levels of commitment, a continuum of involvement from none at all to the ideal. Here is my model of the curation spectrum:

The Five Levels of Curation:

1. No curation at all:
If you aren’t contributing, you are missing out on the give and take
of value in your community.

2. Only links:
Links by themselves don’t add much value but they are
better than nothing.

3. Links and context:
Telling a story about why this resource is valuable
gives the content resonance.

4. Links and relevance to work:
Providing an explanation on how the resources support the work
that we do makes the value proposition clear.

5. Building a community around curation
and committing to ongoing maintenance:
Keeping content fresh on a regular basis ensures the
engagement of your audience.

This model helps you to explain to stakeholders the value proposition of curation. When they come to you to create a page of hundreds of links as learning content that they will forget about as soon as they sign off on it, you can show them that the effort is not warranted by the value produced. You can encourage them to share less but provide more context, relevance and community.


Marketing Your Learning Ecosystem

If you build it they may come…but probably not. Don’t get caught in the trap of thinking that since you made something that is so beneficial to your target audience, that they should just naturally gravitate to your solution. I’m sorry but I think you’ve forgotten that we are all being barraged with a typhoon of interruptions, screaming for our attention. You can’t expect your potential users to notice your system above the noise, even if it is the most amazing thing that was ever created.

But why should we be marketers? We are learning professionals. Marketing is not our job or background. But is that true? Aren’t marketers and learning professionals both selling ideas? Don’t we both focus on key messages, target audiences and desired outcomes?

If you have built a system to help people access your Learning Ecosystem, you probably invested resources and the people who provided you with those resources want to know that the system is going to be used. An unused system has no value. So long before the project is nearing completion, you need to start putting together a plan to market your Learning Ecosystem. You will have to find a way to break through that wall of noise.

The first thing you have to do is get out of the ivory tower of the L&D department. You need to talk to actual learners, not learning professionals or their stakeholders. How do they find out about the systems they use today? Where are they getting their daily work done? What kinds of problems are they trying to solve? What works for them outside of work?

Next, you need to go to where the learners are. Don’t stick with your usual communication channels if they don’t work. If people use social media, then you should use their social media tool. If people find out about systems through departmental meetings, then you should get yourself invited to those meetings. When you do show up, stick around. Don’t make your pitch and leave. Make sure you become part of the community; otherwise people will ignore you. Create value for that community, not just for yourself and your initiative. Speak about your work in terms of their work.

Finally, bring fun. Work can be dreary. We are being tugged at by things that we are told are SO serious. It’s tiresome. Learning folk can be very irreverent and funny. Let that part of you out. People will remember it.

Voice of the Learner

We’ve been hearing about the Echo Chamber a lot lately. This is the effect where you become surrounded by people who think like you to the exclusion of new ideas. This happens in L&D a lot. We talk to each other about what works and what doesn’t work in learning. We talk to our stakeholders but the conversation is focused on the same ideas around delivering learning. This is problematic for creating learning content, but it is disastrous for implementing learning technology. It can lead to spending a lot of money on things that the learners don’t want.

IT and business improvement methodologies like ITIL and Six Sigma put a lot of focus on getting “the Voice of the Customer”: finding out what the end user of the system, process, product, or service is going to do in the real world. This way, the requirements of the project line up with the way users get their work done. What if we did this with learning content and learning technology? When was the last time you talked to a real-life learner? What would it be like to talk to someone who has no knowledge of our methods or jargon? That project you’ve been working on for months, the one that seems to be the most important project in the history of the company, might draw blank stares from someone who has been focused on their own work. Really listening to these people might just break us out of our Echo Chamber.

When my team does VOC work, we break it down into three tiers:

  • We collect general quantitative information from surveys of large groups
  • We get more specific insights and trends from small focus groups
  • We get targeted information from individual observations

Start by figuring out what questions you want answered. There are two types: questions to get information that you don’t currently have and questions designed to challenge your assumptions about what you already know. Here are some examples:

  • How are people solving the problem today?
  • What are people not getting out of their current solution?
  • Why do people need what we think they need?
  • How will our solution fit into the way people do their work?

Next, find people to participate. Make sure they are not in L&D, not stakeholders of L&D projects, and not in HR. It helps if they come from the remotest of offices and a variety of business functions. Find a strong community where people are engaged. This could be a charitable giving group or a cultural or health-based club. Ask the leader for access to their member list. It’s best if the leader reaches out to their community on your behalf. Remember to always explain the importance of getting people’s feedback and, most importantly, THANK them profusely and at every opportunity for taking the time to help.

Now it’s time to implement the components of your VOC project:

  • Survey: Keep it short. People don’t have time for multipage surveys. Ask only what you need to get your core questions answered. Ask quantitative questions (multiple choice and Likert scales) so that you can create graphs that show trends. At the end of the survey, ask if they would be willing to participate in more user research (this will help you generate the list for the forums).
  • Forums: Here’s where you can get details on the answers to the survey. Make use of the group dynamics to get a discussion going. However, the discussion can be dominated by one or two dynamic personalities, so make sure others get a chance to talk by directing questions to them by name. It helps to show participants something existing, or at least a mockup of the proposed solution, so that they have something to comment on. Make sure you schedule sessions around applicable time zones.
  • Direct Observations: After the survey and user forums, you might have very specific questions about what you heard, or you may still have an assumption that you think might be off base from reality. By watching someone actually use a system or content, you can see directly what the effects of your design decisions will be. Schedule one-on-one meetings with individuals, either in person or using video conferences with screen sharing. Ask them to use the existing system or the mockup and have them narrate what they are doing. Ask them what they think each element means and what they expect to happen when they use it. This is where you will get some big surprises. Users always see at least one thing quite differently than was intended.

Finally, aggregate the results and report out to your stakeholders. Don’t skip this step. You won’t remember all the conversations later. Keep good notes and summarize them. What you learned can be helpful to other projects in the future.
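
To make the survey trends easy to chart, the quantitative answers can be rolled up into per-question averages. Here is a minimal TypeScript sketch of that kind of aggregation; the data shape and question names are illustrative assumptions, not a prescribed format.

type LikertResponse = Record<string, number>; // e.g. { q1: 4, q2: 2 } on a 1-to-5 scale

function averageByQuestion(responses: LikertResponse[]): Record<string, number> {
  // Accumulate a running sum and count for each question across all responses.
  const totals: Record<string, { sum: number; count: number }> = {};
  for (const response of responses) {
    for (const [question, score] of Object.entries(response)) {
      const entry = totals[question] ?? { sum: 0, count: 0 };
      entry.sum += score;
      entry.count += 1;
      totals[question] = entry;
    }
  }
  // Convert the running totals into mean scores per question for charting.
  const averages: Record<string, number> = {};
  for (const [question, { sum, count }] of Object.entries(totals)) {
    averages[question] = sum / count;
  }
  return averages;
}

// Example: averageByQuestion([{ q1: 4, q2: 2 }, { q1: 5, q2: 3 }]) returns { q1: 4.5, q2: 2.5 }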

Reach and Access

Sales people have it easy. They are measured on something very concrete: sales. Marketers, on the other hand, are measured on reach: how many people did they reach with their message? There is no way to know whether their message resulted in sales, but if they don’t reach people, those people don’t buy. Learning professionals are in the same boat. It’s hard to measure whether people learned, so they focus on who they reached with their learning content. Learners may or may not learn, but they won’t get the content if they can’t be reached.

This means that built into every learning experience must be a mechanism to capture who had the experience. The issue is that how people access the experience is tangled up in how it is tracked. There is a danger in creating a tracking process that is more complex than the learning itself. This is unacceptable in the age of Google. Learners expect minimum friction in accessing learning.

This struggle follows the continuum of formality in delivering training. In a formal classroom environment, responsibility for determining completion is given to the instructor and they send that information to the LMS Admin for tracking. In a formal eLearning course, the course developer determines the criteria for completion and uses the SCORM protocol to make sure that completion is passed accurately to the LMS. Once you start looking at informal learning or microlearning where tracking activities might hinder the access to the learning, it gets more complicated.
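
As a rough illustration of the formal end of that continuum, here is a minimal TypeScript sketch of how an eLearning course might pass completion back to the LMS through the SCORM 1.2 runtime API. Real courses search parent and opener windows for the API object; this sketch simply assumes the LMS exposed it as window.API.

function reportCompletion(): void {
  // Runtime API object provided by the hosting LMS (assumed location).
  const api = (window as any).API;
  if (!api) return;
  api.LMSInitialize("");
  // lesson_status is how SCORM 1.2 communicates completion to the LMS.
  api.LMSSetValue("cmi.core.lesson_status", "completed");
  api.LMSCommit("");
  api.LMSFinish("");
}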

One solution is to separate usage from learner success. You can use web analytics to measure the usage of the content and then track a post-learning assessment to see if people learned. Or you can make the tracking more passive (not requiring action by the learner) by embedding it into the content. The xAPI protocol, with its lightweight JavaScript integration, can be good for this.
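
To show what that passive tracking could look like, here is a minimal TypeScript sketch that sends an xAPI “experienced” statement to a Learning Record Store when a learner opens a piece of content. The LRS URL, credentials, and learner details are placeholder assumptions, not a real configuration.

interface XapiStatement {
  actor: { objectType: "Agent"; mbox: string; name: string };
  verb: { id: string; display: Record<string, string> };
  object: { objectType: "Activity"; id: string; definition: { name: Record<string, string> } };
}

async function trackExperience(learnerEmail: string, learnerName: string, contentUrl: string, contentTitle: string): Promise<void> {
  const statement: XapiStatement = {
    actor: { objectType: "Agent", mbox: `mailto:${learnerEmail}`, name: learnerName },
    verb: { id: "http://adlnet.gov/expapi/verbs/experienced", display: { "en-US": "experienced" } },
    object: { objectType: "Activity", id: contentUrl, definition: { name: { "en-US": contentTitle } } },
  };

  // POST the statement to the LRS (endpoint and credentials are assumptions).
  await fetch("https://lrs.example.com/xapi/statements", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Experience-API-Version": "1.0.3",
      Authorization: "Basic " + btoa("lrs-key:lrs-secret"),
    },
    body: JSON.stringify(statement),
  });
}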

With social media learning you can extract data about people’s interactions around the learning experience. You can even reintroduce the human into the equation. For instance, in certain MOOCs (Massive Open Online Courses) they use software like Prosolo to give peers the opportunity to “complete” each other for certain tasks.

Counting participation or “butts-in-seats” is only a proxy for learning. It is like following footprints on a beach and trying to figure out what the experience was. This is frustrating to most learning professionals because we want to know what our real impact was. Did people learn? But is that a realistic measure to hold ourselves to? We can’t get inside people’s heads. When we use an organization’s resources to create learning experiences, we have to be held accountable for something measurable. We have to show that, at the very least, we reached people with our content. Maybe if we can simplify that process, we can move on to the next questions: Are people able to change how they process new information and challenges? Are they able to do things differently after experiencing learning?

The End of Re-orgs

When trying to create an ecosystem for learning, the bottleneck that keeps coming up is the role of the manager. If the manager doesn’t think that learning will solve their problem of how to deliver on commitments, then the learning will not happen. The learner relies on their manager to be committed to their development. Most managers agree with this in theory, but there is always a conflict of interest between developing people and delivering immediate results. The problem is exacerbated by re-orgs. Changes in managers mean that a person’s development has to be constantly rebooted. Re-orgs are necessary because the business needs to change, but the uncertainty disrupts performance.

This blog doesn’t usually go into organizational design, but I came up with a radical idea that might solve both problems: that managers have trouble reconciling people development with execution, and that re-orgs disrupt both people development and performance.

No more re-orgs.

The only way you can stop re-orgs is to stop having organizations. Imagine a 100% flat organization. No one belongs to any formal organization. No group has a formal manager. How would work get done? Ad hoc teams would be formed and staffed as needed to solve specific problems and then disbanded on completion of the deliverables. There would be a Team Lead who would ONLY be responsible for deliverables. How would performance be managed and development guided? A Mentor would be assigned to each employee. Mentorship is often informal, but this would be a formal role written into the mentor’s job description. The employee is answerable to the Team Lead for delivering results and to the Mentor for performance and self-development.


This separates people management from results management and it removes the disruption and lack of clarity around reorganizing formal hierarchies. This would free learning from the tyranny of deadlines and it would make learning a priority of the mentor.

The Radical Eradication of Bad eLearning

I have a radical idea:

There should be no bad eLearning in your LMS.

I know, crazy right? Here’s another one:

People don’t really need eLearning.

See, there’s this thing called a document (or an article) that everyone knows how to use and that is easy to distribute. It can be used to provide information on anything. The only reason you would ever want eLearning is that sometimes the information in documents is hard to process, and eLearning can make it easier. However, if the eLearning is bad and makes it harder to process the information, then it makes things worse. eLearning takes a lot more resources to create, distribute, and maintain than documents, so if it makes things worse, why on earth would you keep it?

If your LMS contains bad eLearning, you will kill the credibility of all the learning that you offer. People have so many options for learning that they have limited patience for something that makes it harder to learn.

I propose getting rid of all bad eLearning in your LMS right now. I know it’s heresy but really can you afford to have bad eLearning in your LMS? How do you know what is bad and what is good? It’s easier than you think when you look at it from the learner’s point of view. I propose the following rubric to test your eLearning against. If your content doesn’t meet every one of these criteria then I say you should chuck it.

eLearning should solve a problem.

eLearning should state why the topic is important to the learner (not the learning organization) within the first 3 pages.

eLearning should have more pages that provide valuable information than pages that don’t.

eLearning should not go longer than 30 minutes without some type of break.

eLearning should not try to cram too much information on a page but instead should link out to documents.

eLearning should not have navigational elements that make it harder to access information.

eLearning should not have outdated or inaccurate information.

Wow, this is just so subversive! Accuracy, brevity, clarity. This is crazy talk. What will our stakeholders say? Well, they will ask why we created bad eLearning for them in the first place. That may be a sensitive issue, but if you don’t address it, then no one will look at any of the content, good or bad.

Maybe you have 5,000 courses in your LMS and you are thinking that reviewing them all is going to be too much work. You had better get started. Managing thousands of courses, most of which probably don’t deliver value, is an exercise in futility. All the more reason to be ruthless in purging them. At the very least, hide the bad programs from search. Your stakeholders can still send out links, but your credibility won’t be damaged. Start with your highest-visibility programs. If those are bad, you need to fix them.

If you want to talk with me more about this, you may have to find me in L&D Siberia where I will have been banished for my radical ideas.

Google Maps: The Ideal Performance Support

I’m a typical guy in that I don’t ask for directions, but it’s not a macho thing. I just know that I won’t understand them. I’m a visual person. I need to see the big picture, the context. I need a map. But that’s just me.

I wrote this post about the advantage of maps over directions as a metaphor for learning, but who am I to tell people how to access information? As I prepare for the Performance Support Symposium in Austin next week, I am thinking about the maxim of Performance Support: “Get people what they need and get out of their way.” I keep wondering if there is a way to let people access step-by-step directions AND see the bigger picture. Bob Mosher calls this the flipped pyramid. In a formal classroom, there is an inverted pyramid: the grand concepts are on top and you drill down until, at the very end, you get to the instructions for the task at hand. In Performance Support, you reverse this picture. You start with what is needed to get the task done, and then you let the user choose to drill down to the deeper concepts.

I’ve been trying to think of an example of how this would work and it hit me: Maps. Specifically online maps. Google Maps and its competitors are the ideal example of what Performance Support should be:

  • It lets you switch back and forth from maps to directions and from individual steps back to the map.
  • You can access it at the moment of need. It can be on your desktop at home when planning a trip or on your mobile device when you are lost.
  • You can dive deep into the detail or zoom out to get a broader perspective.
  • You can link to other resources like the menu of a restaurant.
  • You can contribute by uploading photos and commenting on sites.
  • You can embed interactive maps into other applications (see the sketch after this list).
  • Everyone understands how to use it (Is this because of its ubiquity or its straightforwardness?).
  • It uses the affordances of the mobile device (most obviously the GPS). You can even use the accelerometer to set the compass so you can see which way you are facing.
  • It has a “Show me” function in the form of Street View.
  • It warns you about challenges by showing traffic patterns.
  • It gives you options for completing the task with optional routes and optional modes of transportation.
  • It tells you how long the task will take.
  • Most importantly it gives you information that you can act on immediately.
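
To give a feel for the embedding point above, here is a minimal TypeScript sketch using the Google Maps JavaScript API. The API key in the script URL, the element ID, and the venue coordinates are illustrative assumptions rather than a working configuration.

// Assumes the page has loaded https://maps.googleapis.com/maps/api/js?key=YOUR_KEY
// and contains an element like <div id="map"></div>.
declare const google: any; // global object provided by the Maps JavaScript API

function showVenueMap(): void {
  const austin = { lat: 30.2672, lng: -97.7431 };
  // Render an interactive map centered on the (hypothetical) venue.
  const map = new google.maps.Map(document.getElementById("map"), {
    center: austin,
    zoom: 14,
  });
  // Drop a marker so users can orient themselves immediately.
  new google.maps.Marker({ position: austin, map, title: "Symposium venue" });
}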

These features of online maps could be added to any Performance Support solution to make it more robust. It is a good way to demonstrate the power of Performance Support.

I’m looking forward to seeing where this goes…as long as I don’t have to ask for directions.