AI Pollsters

Polling has gotten a bad name.

It used to seem that we were moving toward a system in which pollsters would know, with certainty, the results of an election before election night. Pundits seemed to just be waiting to come on the air the moment they were allowed to do so, just after polls closed, and let us know who’d won.

Not anymore.

The first recent punditry failure I remember was when the networks called Florida for Al Gore in 2000. I remember it looked strange to me as I watched TV that night—nowhere near enough of the vote was in to justify calling anything. And sure enough, as the evening went on, the race got closer and closer, and then … suddenly no one was calling anything anymore.

That evening could be seen as an aberration, as it was an extremely close (and perhaps never actually resolved) election. And certainly the US elections in 2008 and 2012 went according to polling—but those were pretty easy calls.

More recently, it seems that, despite living in the age of “big data,” when more is (supposedly) known about us than ever before, polling has become shakier and shakier. The certainty has all but disappeared, and while there seem to be more polls than ever, they are all over the map, predicting wildly differing outcomes. I’m thinking of the British elections in 2015, when the Conservatives won a majority in what was supposed to be a hung parliament; Brexit; the US elections in 2016; and Theresa May’s unfortunate decision to call another general election in 2017, which cost her the Conservative majority and nearly her premiership.

The thing is, polls can only take in people’s actual answers—not only about their preferences, but also about race, ethnicity, even gender. And any of these can be faked over the phone (whether deliberately or not). In the 2016 US presidential election, the so-called “afraid to tell you I’m voting for Trump” voters were a prime example.

So for all the data in the world (Cambridge Analytica or Google or whoever), you’re only as good as the humans who give you that data.

Personality AI could potentially give polls a way to factor in the “human element” by being more human itself. It could estimate, from personality types and from run-throughs of people’s actual answers, what the true, underlying vote might be—and how many people might not vote at all, even though they say they will. It could possibly even estimate how to get those non-voters excited enough to vote.

Of course, as with anything related to AI (or science, or emotions), there’s the potential for misuse (or ethically foggy use), but if used strictly to interpret polling data, personality AI would be providing a service to those of us who want to know how close our elections actually are.
