Just like the operational data generated by most businesses, event data is used here for additional purposes: it supports predictive analytics for scheduling toy fabrication. The Fabrication Team can’t wait until the preliminary Naughty-Nice list is generated in November to know for whom they’re making presents. Instead, once a month the event data is gathered from the regional repositories and a confidence score is calculated to identify the children most likely to be categorized as Nice. The higher the likelihood, the earlier in the year fabrication starts. Many Nice determinations are made at the very last minute; many of us remember the urgency of improving our behavior once the Christmas decorations start appearing around town. It gets extra-busy in December, but that’s normal. It is also possible to flip from Nice to Naughty, necessitating the removal of a previously prepared gift. Fortunately, this is extremely rare, but it does happen.
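The monthly scoring-and-scheduling idea can be sketched in a few lines. This is a minimal, hypothetical illustration: the names (`Child`, `nice_confidence`, `fabrication_start_month`), the smoothing approach, and the month thresholds are all assumptions for the sake of example, not a description of any actual system.

```python
from dataclasses import dataclass

@dataclass
class Child:
    name: str
    nice_events: int   # count of Nice-leaning events so far this year
    total_events: int  # all scored events so far this year

def nice_confidence(child: Child, prior: float = 0.5, weight: int = 10) -> float:
    """Smoothed likelihood that the child ends the year categorized Nice.

    Laplace-style smoothing keeps a child with very few recorded events
    close to the prior instead of swinging to 0.0 or 1.0.
    """
    return (child.nice_events + prior * weight) / (child.total_events + weight)

def fabrication_start_month(confidence: float) -> int:
    """Higher confidence means an earlier start (month 1 = January)."""
    if confidence >= 0.9:
        return 1
    if confidence >= 0.7:
        return 5
    if confidence >= 0.5:
        return 9
    return 12  # the last-minute Nice determinations

kid = Child("Alex", nice_events=42, total_events=50)
month = fabrication_start_month(nice_confidence(kid))
```

A real scheduler would also fold in historical behavior from prior years, which is exactly why the retention question discussed next matters.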
These predictive analyses require as input much more than the single binary result for each individual. They also require historical data from previous years, and the more history, the better. It’s a familiar conversation: one side wants all of the data kept forever, while the other is trying to manage cost, maintenance effort, and risk. It’s also a familiar compromise: detailed data stored for a couple of years, then summarized and “skinnied” further back in history.
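That compromise (full detail for recent years, summaries further back) can be sketched as a simple retention pass. Everything here is an illustrative assumption: the event schema, the two-year detail window, and the per-child-per-year summary shape are invented for the example.

```python
from collections import defaultdict

def apply_retention(events, current_year, detail_years=2):
    """Keep full detail for recent years; collapse older years to summaries.

    events: list of dicts like {"child": str, "year": int, "nice": bool}
    Returns (detailed_events, summaries) where summaries maps
    (child, year) -> {"events": count, "nice": count}.
    """
    cutoff = current_year - detail_years
    detailed = [e for e in events if e["year"] > cutoff]

    summaries = defaultdict(lambda: {"events": 0, "nice": 0})
    for e in events:
        if e["year"] <= cutoff:
            key = (e["child"], e["year"])
            summaries[key]["events"] += 1
            summaries[key]["nice"] += 1 if e["nice"] else 0
    return detailed, dict(summaries)

events = [
    {"child": "Alex", "year": 2023, "nice": True},
    {"child": "Alex", "year": 2023, "nice": False},
    {"child": "Alex", "year": 2025, "nice": True},
]
detailed, summaries = apply_retention(events, current_year=2025)
```

The summaries lose per-event detail but preserve the yearly counts the predictive models actually consume, which is the whole point of the compromise.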
Since the final result is just a binary choice, it might be tempting for the operational systems not to worry about data quality. That temptation should be resisted: data quality is a top priority. Nobody ever says, “If the data is good enough for Santa’s List, it’s good enough for you.” Nobody wants their bad data to be the cause of an error or miscategorization in any process anywhere. That would be Naughty.