Reach and Access

Salespeople have it easy. They are measured on something very concrete: sales. Marketers, on the other hand, are measured on reach: how many people did they reach with their message? There is no way to know whether the message resulted in sales, but if they don’t reach people, those people don’t buy. Learning professionals are in the same boat. It’s hard to measure whether people learned, but we can measure who was reached with the learning content. Those people may or may not learn, but they certainly won’t learn from content that never reaches them.

This means that every learning experience must have a mechanism built in to capture who had the experience. The issue is that how people access the experience gets tangled up in how it is tracked. There is a real danger of creating a tracking process that is more complex than the learning itself, and that is unacceptable in the age of Google. Learners expect minimal friction in accessing learning.

This struggle follows the continuum of formality in delivering training. In a formal classroom environment, responsibility for determining completion rests with the instructor, who sends that information to the LMS admin for tracking. In a formal eLearning course, the course developer determines the criteria for completion and uses the SCORM protocol to make sure that completion is reported accurately to the LMS (a minimal sketch of what that looks like follows below). Once you start looking at informal learning or microlearning, where tracking activities might hinder access to the learning itself, it gets more complicated.
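To make the contrast concrete, here is a minimal sketch of how an eLearning course typically reports completion through the SCORM 1.2 runtime API. The API discovery is simplified and the score value is purely illustrative; a real course would follow the full SCORM API-finding algorithm and the developer's actual completion criteria.

```javascript
// Minimal sketch: reporting completion via the SCORM 1.2 runtime API.
// Assumes the LMS exposes an "API" object on a parent or opener window,
// as the SCORM spec requires. The walk below is a simplified illustration.
function findAPI(win) {
  // Climb the frame hierarchy looking for the LMS-provided API object.
  while (win && !win.API && win.parent && win.parent !== win) {
    win = win.parent;
  }
  return win ? win.API : null;
}

var api = findAPI(window) || (window.opener ? findAPI(window.opener) : null);

if (api) {
  api.LMSInitialize("");                                  // start the session
  api.LMSSetValue("cmi.core.lesson_status", "completed"); // developer-defined completion
  api.LMSSetValue("cmi.core.score.raw", "90");            // optional score (example value)
  api.LMSCommit("");                                      // persist the data to the LMS
  api.LMSFinish("");                                      // close the session
}
```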

One solution is to separate usage from learner success. You can use web analytics to measure usage of the content and then run a post-learning assessment to see whether people learned. Or you can make the tracking more passive (not requiring action by the learner) by embedding it in the content. The xAPI protocol, with its lightweight JavaScript programming, can be good for this, as in the sketch below.
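As an illustration, this is roughly what an embedded, passive xAPI call might look like. The LRS endpoint, credentials, actor, and activity ID are all placeholders; a real implementation would pull these from configuration or a launch mechanism rather than hard-coding them.

```javascript
// Minimal sketch: sending an xAPI "experienced" statement from JavaScript
// embedded in a piece of microlearning content. All identifiers below are
// placeholders, not a real LRS or learner.
var statement = {
  actor: { mbox: "mailto:learner@example.com", name: "Example Learner" },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/experienced",
    display: { "en-US": "experienced" }
  },
  object: {
    id: "https://example.com/microlearning/safety-tip-3",
    definition: { name: { "en-US": "Safety Tip #3" } }
  }
};

fetch("https://lrs.example.com/xapi/statements", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-Experience-API-Version": "1.0.3",
    "Authorization": "Basic " + btoa("lrs_user:lrs_password") // placeholder credentials
  },
  body: JSON.stringify(statement)
});
```

Because the statement is sent automatically when the content loads or is used, the learner never has to mark anything complete, which keeps the access friction low while still capturing reach.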

With social learning you can extract data about people’s interactions around the learning experience. You can even reintroduce the human into the equation. For instance, some MOOCs (Massive Open Online Courses) use software like Prosolo to give peers the opportunity to mark each other “complete” for certain tasks.

Counting participation, or “butts-in-seats,” is only a proxy for learning. It is like following footsteps on a beach and trying to figure out what the experience was. This is frustrating to most learning professionals because we want to know what our real impact was. Did people learn? But is that a realistic measure to hold ourselves to? We can’t get inside people’s heads. When we use an organization’s resources to create learning experiences, we have to be held accountable for something measurable. We have to show that, at the very least, we reached people with our content. If we can simplify that process, perhaps we can move on to the next questions: Are people able to change how they process new information and challenges? Are they able to do things differently after experiencing learning?
