In lieu of a monthly article, I am sharing a teaser for what I’m working on for next month’s piece.
For the past few election cycles, we've been bombarded with outrageous claims from all sides about election misconduct. From the shock of 2016, when Trump won after the polling industry and the media failed to gauge public sentiment correctly, to 2020, when Trump outperformed most polls and the race turned out closer than predicted, this topic has become a flashpoint for controversy.
With accusations of fraud, manipulation, and interference, a closely contested race only adds more fuel to partisan fights. So, why bother weighing in?
Over the past four years, as I've watched how COVID-19 data was intentionally obfuscated, misconstrued, misinterpreted, manipulated, and sometimes suppressed, I've developed a different perspective on how the media presents data. I've had some success in cutting through the noise, offering objective and transparent data in a way that connects and informs. Wading into such a polarizing topic now feels like less of a challenge than it did during the COVID era, so why not?
Background on My Approach:
One factor that pollster Robert Cahaly understood in 2016 was social desirability bias: there is often a large gap between what people say publicly, when they perceive the "consensus" is against them, and what they actually believe. Traditional polling relies on self-reported survey responses, so it is prone to this bias by its very nature. Nate Silver has developed a creative way to adjust for these biases by building models that weight and average polls across the country to present a more accurate picture. Still, the quality of your results is only as good as the data you're working with: predictors and indicators are only as reliable as what a person chooses to share or wants you to hear. This issue is similar to what I've observed in low-quality medical papers built on surveys and other data that simply can't be trusted.
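To make the aggregation idea concrete, here is a minimal sketch of a weighted poll average in Python. It is purely illustrative and not Silver's actual methodology; the sample-size weighting, the recency half-life, and the example numbers are my own assumptions.

```python
from dataclasses import dataclass

@dataclass
class Poll:
    candidate_a: float   # reported support for candidate A (percent)
    candidate_b: float   # reported support for candidate B (percent)
    sample_size: int     # number of respondents
    days_old: int        # how long ago the poll was fielded

def weighted_average(polls, half_life_days=14):
    """Combine polls into one margin estimate, weighting by sample size and recency.

    Illustrative only: real aggregation models also adjust for pollster
    quality, house effects, and demographic weighting.
    """
    total_weight = 0.0
    margin_sum = 0.0
    for p in polls:
        # Larger samples count more; older polls decay exponentially.
        recency = 0.5 ** (p.days_old / half_life_days)
        weight = p.sample_size * recency
        margin_sum += (p.candidate_a - p.candidate_b) * weight
        total_weight += weight
    return margin_sum / total_weight  # average margin, A minus B

# Example with three hypothetical polls
polls = [
    Poll(48.0, 46.0, sample_size=1200, days_old=3),
    Poll(47.0, 47.0, sample_size=800, days_old=10),
    Poll(49.0, 45.0, sample_size=500, days_old=21),
]
print(f"Weighted margin (A - B): {weighted_average(polls):+.2f} points")
```

Even a simple scheme like this illustrates the core point: the averaging can be as clever as you like, but it can only ever be as honest as the survey responses feeding it.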
But what if there were a way to forecast an election not based on polls or surveys, but on data that reflects what people actually do, rather than what they say?
While I recognize this is risky territory, I'm hopeful that the model and data I've found will offer a fresh approach.
Stay tuned for later this month, as I put my model to the test and let the data guide us.