If you don’t already have BrightSlide downloaded then run, don’t walk: Download BrightSlide.
Check out a BrightCarbon masterclass!
Bonus resolution! Did you know that some of my incredibly talented colleagues run free masterclasses every week? It’s 30 minutes of expert knowledge delivered live, with Q&As from you, the audience. They cover topics from graphs and charts, to creating accessible eLearning, to how to run sales kick-offs, to how to get the most out of BrightSlide. You can sign up for free on our Events page.
That’s all folks! Happy new year from all of us at BrightCarbon!
Calling all eLearning aficionados! We’re back with a brand-spanking new blog post all about eLearning assessments. When we work with subject matter experts to develop multiple-choice questions, we see the same mistakes again and again, and they have a big impact on the validity and reliability of assessments. But don’t despair, we’re going to show you how to fix them!
Before we jump right in, here are a few definitions of important terms you’ll see in this post:
Validity = An assessment is valid if learners who know the content get high scores, but learners who don’t know the content get low scores.
Reliability = An assessment is reliable if learners with similar ability levels get similar scores. These scores can be high or low, as long as they’re consistent.
To give you an example, if all learners who know the content well get similarly low scores, then the eLearning assessment is reliable but not valid. If all learners who know the content well get similarly high scores, then the assessment is both reliable and valid.
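If you like to think in code, here’s a minimal sketch of the same distinction. The function names, the pass mark, and the “similar scores” spread are our own illustrative assumptions, not an assessment standard:

```python
# Illustrative sketch (names and thresholds are assumptions, not a standard):
# validity    -> learners who know the content pass; those who don't, fail
# reliability -> learners of similar ability get similar scores

def is_valid(results, pass_mark=70):
    """results: list of (knows_content: bool, score: int) pairs."""
    return all((score >= pass_mark) == knows for knows, score in results)

def is_reliable(results, max_spread=10):
    """Scores within each ability group stay within a small spread."""
    for knows in (True, False):
        scores = [s for k, s in results if k == knows]
        if scores and max(scores) - min(scores) > max_spread:
            return False
    return True

# Reliable but NOT valid: knowledgeable learners all get similar *low* scores.
results = [(True, 40), (True, 42), (False, 38), (False, 35)]
print(is_valid(results), is_reliable(results))  # False True
```

In this sketch, everyone’s scores are tightly clustered (reliable), but the learners who know the content still fall below the pass mark (not valid).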