The world of enterprise L&D has yet another reason to be buzzing about microlearning. As if the potential to transform behaviors and organizations wasn’t intriguing enough, a new study out of Germany indicates that short content drives over 20% more information retention. The excitement never stops, does it?
The reason (and the irony) behind microlearning’s superior information retention is that, as a method, microlearning is basically unconcerned with information transfer. The kind of learning you did in school, where you read things in a book and then applied them on a test, is not what microlearning is designed to achieve. But in the process of doing its day job—helping learners build successful behaviors—microlearning moonlights as a pretty good transmitter of information, too. Here’s the story.
In December 2015, researchers at Germany’s Dresden University of Technology released a study that examined whether online learners performed better on multiple-choice assessment questions (“learning questions”) that were frequent and fine-grained or infrequent and blocked.
In other words: the format of most e-learning is a piece of content followed by an assessment question. The team’s goal was to see if learners answered those follow-up questions better when they watched many small pieces of content and answered many corresponding questions, or when they watched big chunks of content and saw few assessment sections.
To do this, the researchers divided a 16-chapter online text into three granularities. The “fine-grained” group of learners answered a learning question after reading every single chapter; the “medium-grained” group answered four questions after every four chapters; and “blocked” learners got eight questions after each half of the digital text. Afterward, all participants took the same multiple-choice test covering the whole lesson. (The topic was patent law, if you were wondering.)
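The three conditions differ only in how the same 16 questions are grouped. A minimal sketch of the schedules (the function and variable names here are illustrative, not from the paper):

```python
# Sketch of the three study conditions: 16 chapters, 16 learning
# questions in total, grouped into different-sized blocks.
CHAPTERS = 16

def schedule(chapters_per_block: int) -> dict[int, int]:
    """Map each chapter after which a question block appears
    to the number of questions asked at that checkpoint."""
    return {
        chapter: chapters_per_block
        for chapter in range(chapters_per_block, CHAPTERS + 1, chapters_per_block)
    }

fine_grained   = schedule(1)   # 1 question after every chapter
medium_grained = schedule(4)   # 4 questions after every 4 chapters
blocked        = schedule(8)   # 8 questions after each half

for name, plan in [("fine", fine_grained), ("medium", medium_grained),
                   ("blocked", blocked)]:
    print(f"{name:7s} checkpoints={len(plan):2d} questions={sum(plan.values())}")
```

Every group answers the same 16 questions; only the spacing of the checkpoints changes, which is exactly the variable the study isolates.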
The results were clear: smaller slices of content were better. They helped participants not only retain the information but do so more efficiently as well.
- The fine-grained group took 28% less time to answer their assessment questions than the blocked group, yet did 20% better
- Fine-grained performed 8% better on the comprehensive test than blocked
- Learners in the blocked group had to re-read more than three times as many sections as the fine-grained group did
Overall, across the three scoring measures in the study, fine-grained performed 22.2% better than the blocked group and 8.4% better than the medium-grained group. The fine-grained group, with their “micro” content and frequent assessment questions, fared better than both competing groups in every category. From this study, it would appear that bite-sized content is, indeed, better.
Why is bite-size better?
The researchers mentioned two dynamics potentially in effect. First, the larger amount of material and questions given to the blocked group might have “put greater demands on learners,” resulting in them having to do more work to “access necessary information from their memory.” In other words, stockpiling information slowed down the process of retrieving it. Sounds familiar.
Second, the blocked group could have suffered from having less feedback than the medium and fine groups: “Longer study phases without learning questions may lead to uncertainty about whether they have understood all relevant content or not.”
The importance of being interactive
The problem with giving people a ton of content at once is that there’s very little interaction between the information and the learner. It’s not just that long content overflows working memory and leads to insecurity within the learning environment—though both are true. More broadly, it’s that learners passively drowning in information aren’t able to do the active work it takes to make the information their own.
For example, knowing what you’ll be expected to do with new information is crucial to retaining it. Consider that, in this experiment, even after the blocked group read eight sections of text, they still had no idea what the questions would look like, what kind of information they would cover, or how difficult they would be. These are important concerns when you’re taking a test! You wouldn’t want to take the SATs if you had never seen an SAT question, would you?
The fine-grained group, meanwhile, had ample opportunity to interact with the learning experience. They consumed a little bit of information and immediately applied it to a learning question. By the time they clicked through to their second helping of (highly digestible) information, they already knew what they would be expected to do with it. That’s the power of interactive learning: it sets expectations.
Long-form training is notoriously poor at setting expectations because it divorces a heap of information from its application context. Think of that dreaded question from classroom training: “Will this be on the test?” It’s about as encouraging as hearing your navigator ask “Are we there yet?” Learners who don’t know what they’re learning, or why, retain less than learners who feel in control of the experience. Setting expectations is crucial to information retention, and it requires the active reflection that passive learning experiences don’t allow.
The implications for microlearning
This experiment simulated genuine microlearning in one important way: it used small learning moments to create interactivity.
In the paper’s opening, the researchers offered a model of learning that had three components: forethought (planning out your learning), performance (doing the learning), and reflection (thinking about how you did and how it will help your future learning). These three phases combined to form a sort of learning “cycle.”
Microlearning’s approach aligns with this model. It gives people digestible, relevant, interactive learning resources they can apply right away. Learners process and reflect constantly. So a microlearning moment, in this model, presents a complete learning cycle of cognitive work and reflection. And there are many microlearning moments over the course of learning something. Multiplying the number of learning cycles multiplies learning outcomes.
The best part of all of this is that the micro approach doesn’t just help learners hold onto content. Microlearning isn’t trying to reinvent the textbook. It’s a method for changing learner behaviors through small, participatory learning resources. Had the experiment been about behavioral learning—let’s say, learning to play the piano—the fine-grained group would probably have outperformed the others by an even wider margin.
The upshot, it seems, is that the same mechanisms that transform behaviors also drive information retention. And that’s something to get excited about.
Source: Kapp et al. “Distributing Vs. Blocking Learning Questions In A Web-Based Learning Environment.” Journal of Educational Computing Research 51.4 (2015): 397-416.