AI Can Love Countdown, Too (and why I love it)

I love the game show Countdown. I’ve watched it every time I’ve been in England since the now long-past Carol Vorderman days. I’ve even had near-spiritual experiences watching it in my days at uni (I blogged a little about it back then). Well, maybe not transcendentally spiritual, but I did feel like I was on the horizon of great things, and Countdown was a symbol of a time I loved as I studied games and AI.

Yet, somehow, for all my love of the game, for the unscrambling of letters and numbers, I am very, very average at it. How can that be? After all, I’ve been a professional proofreader for over twenty-five years—so long that I’ve proved even to my skeptical self that I am no impostor—and thus knowing and spelling words is not a problem. And I have a master’s in computer games tech and I write code nearly every day for AI systems, and while one could argue that that is only obliquely math (or maths, in England), I am in fact quite good at it. So I should be applying to be an octochamp right now, surely!


Few things make me more anxious than the idea that someone will somehow sign me up for Countdown and I’ll suddenly be a contestant on the show; my nightmare is that I’d get zero points, probably spell something using American English, become fixated on a letters game that spells a naughty word (the other day “shithead” appeared, and it became the only word I could see in that bunch of letters), and generally embarrass myself in front of people I respect.

There are two problems here: one is the obvious anxiety, a massive fear I have of appearing to be smart but really turning out to be rather full of hot air (and clichés), which stretches into impostor syndrome and other issues that would be difficult to go into here; the other is just in the way the mind works. For instance, although my living is made correcting mistakes in spelling (among other things), I can’t “correct” a random group of nine letters into properly lengthy words. I can get 5-, 6-, 7-, or occasionally 8-letter words, but they often seem to fall one letter short of what the contestants find or what Susie Dent finds in Dictionary Corner. When the future octochamps find their nine-letter words in thirty seconds, I know that my mind just doesn’t work that way or that quickly; I’d be left in the studio long after the cast and crew had gone home, suddenly crying out in the darkness, “Oh! I’ve got an eight-letter word!”

To be honest, I am better at the math(s), but there is still a point at which the twists and turns on the trail to the answer leave me in the thickets (much as this sentence has been lost in the alliteration). I can see the basics–multiply, divide, subtract, and add in a slow, deliberate way–but not the strange and wonderful combos that sound like they belong in a video game: “Multitraction uppercut!” as we subtract 4 from 75 to then multiply by 8 to get 568, or “Number Crunch Kick!” by going up to 5000 and then back down to 203 in five easy steps, thank you, Rachel, I’ll take my trophy now, please …

There are certainly strategies to improve on all this; just watching for a few weeks, I’ve managed to average slightly more in the letters games and see a bit more in the way the maths can work. But it looks like some people can just somehow … see the letter combos, feel the way the numbers can come together. I particularly see this in conundrums, where the best of the best get the words in a second or two. I very rarely figure out the conundrum at all. It just doesn’t come to me.

Is this really a thing, or am I clever at making excuses? According to a study by Peter Sargius and a team at the University of Calgary (as described in Gizmodo [] and in full at Cortex []) on the related game of Scrabble, players who are good at rearranging letters use a different part of the brain from those who are not as good at it. Whereas neophyte players tend to use the parts of the brain dealing with language, the experts use areas dealing “with visual processing and working memory.” So while it is important to know the words are words in the first place, which, say, a proofreader such as myself would be expected to know, a Countdown octochamp is able to visualize the various combinations of letters, perhaps hold them better in working memory, and reach a lengthy solution.

This jibes with my own impression of how good Countdown players just seem to “see” the words, while I sit around feeling like I’m manually moving letters around one at a time. I tend to do better when I can “see” them in blocks (like knowing there’s a “tion” or an “ing” in the letters), but even then I have trouble remembering (in working memory) which letters are left, and I still feel like I’m manipulating the remaining letters individually.

Although the maths element requires a different kind of manipulation, my guess is that it would still require good working memory and very possibly visual processing to sort the various calculations, especially to see exotic (to me in the moment) combinations of operations. However, I also imagine that it would use different parts of the brain from the letters games, which would explain why players who are good at one might not be quite as good at the other.

The Calgary researchers hope that one can “train” one’s brain to be better at these tasks (and therefore be able to use these different parts of the brain in other situations, or be able to recover from disease or injury); all I can say is that I’ve gotten a little better at Countdown these last few weeks, but I’m not sure how to know whether I’m using the right parts of my brain for this task or whether I can switch over to them. I just need a spare fMRI machine, I guess.

So Can I Train My AI to Play Countdown?

The simple answer is “Sure!” But it would merely use the speed of a computer and brute force to sort through all the possible combinations of letters and compare them to the dictionary. The more human-like solution would be to somehow give your AI the equivalent of a visual processing center and have it do what the best players do (and what I have trouble doing).

But, for this game, why would you do that? Unlike Scrabble, which builds a game space filled with different letters with different values and where often the best move is not the one using the most letters, Countdown straightforwardly asks for words created from the most letters each turn, and those turns do not affect any other turns. No letters are worth more than any others. So why not, if your computer is fast enough, just go through every 9-letter word in the dictionary and compare it to the letters to be used that turn (and then every 8-letter word, etc. if necessary)?
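That brute-force check is simple enough to sketch. Here is a minimal Python version (the tiny word list and the letter draw are toy stand-ins of my own, not a real Countdown dictionary):

```python
from collections import Counter

def best_words(letters, dictionary):
    """Return the longest dictionary words buildable from the given letters."""
    pool = Counter(letters.lower())
    # A word is playable if it needs no letter more often than the pool provides;
    # Counter subtraction drops non-positive counts, so an empty result means "fits."
    playable = [w for w in dictionary
                if len(w) <= len(letters) and not Counter(w) - pool]
    best = max((len(w) for w in playable), default=0)
    return sorted(w for w in playable if len(w) == best)

# Tiny stand-in dictionary for illustration.
words = ["rose", "tails", "aliens", "entails", "salient"]
print(best_words("tisaneslx", words))  # → ['entails', 'salient']
```

A real implementation would load a full word list once at startup (and perhaps index words by their sorted letters), but even a naive scan of a quarter-million-word dictionary against a nine-letter pool is trivial work for a modern CPU.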

I guess in this case my desire to have an AI be more human would trump (or at least modify) my desire to have it be a brute and power through games winning every time. (This would also make for more interesting games.) But instead of just hamstringing the algorithm, I’d rather give it a “personality.” This then would affect the way the AI accesses its word-finding algorithm–maybe it’s the kind of player who gets more nervous as time goes on, and this nervousness and anxiety affects its ability to find words. Maybe it’s the kind of player who becomes fascinated by certain combinations of letters (swear words or not) and either tries those or has to take a lower-scoring safe-for-TV word. One could study how personality and emotions affect players’ abilities in games such as those in Countdown and utilize this in creating a whole crowd of AIs, each interesting to play against in a different way.

This might not help me in my quest to become an expert Countdown player. But it certainly would be fun.

Posted in Artificial Intelligence | Tagged , , | Comments Off on AI Can Love Countdown, Too (and why I love it)

Why Release a Beta as a Demo?

As I release my Manager: A Sports Management Mystery demo/beta, I want to explain why this isn’t a polished final demo: why am I widely releasing a beta version, which almost certainly will have bugs (perhaps even fatal ones)? A few reasons …

First: being a single-person team, I feel that there’s so much value in having other eyes look at the game that I’m willing to take on any sort of reputational risk in order to make the final release better and more robust. I have very specific areas I feel pretty competent in (I’m no 10x engineer, so I won’t say I’m an expert at anything): I’ve spent years developing my AI engines (both for personalities and for human-like memories), and I’ve been involved in various aspects of writing/editing for decades, so I think I’ve done a decent job with those (although I also know that dialogue is probably the weakest aspect of my writing, and this game is chock-full of interactive conversations, so …). On the other hand, UX and design aren’t my strongest skills, and art is about as far from a “skill” as one can get. And putting everything together into a game is, well, a first for me (not counting SteamSaga, which never quite got this far). So I figure any constructive comments/advice I get can only be helpful as I finish the back half of the game.

(As an aside, I often feel I’m very good at coming up with the base of something, but by myself I’m not the best at fleshing something out. So outside comments are again a very good thing …)

Second, the original intent of this entire project was to create a simple (!) demo of my personality engine so I could create videos about game building using it, and show that numerous NPCs could use the engine simultaneously (64 in Manager). These aspects of the project are ready for documenting (whether that be blogging or videos [although I know how exciting my YouTube vids are lol] or some other format), and I’m excited about releasing these.

I’m both excited and nervous about releasing any sort of game or story or anything, to be honest. I look back on my old short stories and nonfiction writing and think many of those were really good, but I fear that I’m out of practice now (having been on the editing/proofreading side of things for years now), and my only game “release” was the SteamSaga demo at PAX some years ago, where one bug resulted in a player somehow flying up into the sky and being unable to come down, just before the project itself crashed into the ground (all the wrong highs and all the wrong lows). So I am filled with trepidation … but, one cannot grow without getting some kind of feedback, cannot ever tell one’s stories to the world without actually letting the world see them. And so …

(As always, comments on this blog should be tweeted to me @QuantumTigerAI, posted in the QTG Forums, or emailed to one of QTG’s contact emails. The actual comments section has been overrun by spam bots, and I don’t have time to weed through them every day.)

Posted in Artificial Intelligence, Game Development, SMMG, Uncategorized | Tagged , , , | Comments Off on Why Release a Beta as a Demo?

How AI Don’t Speak Like a Robot

I often go on about how my AI characters can’t just speak to me in their own words, and if they only could, it would really demonstrate the capabilities of my Extreme AI personality engine (and possibly let my AI tell me just what it thought of all this). While that is an admirable end goal, however, spending a little time working on even simplistically adaptable text is more productive than just whining about full natural language processing, so I’m going to write a little about my explorations in trying to get speech to “work” in my SMMG (that’s “Sports Management Mystery Game” to those not yet in the know).

By “work” I mean a kind of semi-procedurally generated text that is both created for reports (say, when your AI head coach tells you about the team’s practice that week or when the AI press writes up how your team did in actual games) and is influenced by the reporter’s personality. The text should not read like it’s being said by a robot, and it has certain beats it has to hit to advance the plot of the game/give you (the Player) necessary information for playing. (I’ll also explore how best to save these kinds of conversations and any emotional ties to them in the characters’ memory using RealMemory, but that’ll be in a later article, I think.)

I’m sure I’m approaching this from a fairly simplistic, naïve POV, as procedurally generated text isn’t my expertise; if you’re looking for that, definitely visit Emily Short’s blog and other similar sites for a deeper insight. (I have, but I know I’ve only scratched the surface; plus, I’ve only researched far enough to be able to implement the text in my specific game situation. Part of trying not to get lost in a sea of interesting information and never managing to complete my game, lol.)

Let’s take the example of Practice. In the SMMG, each week you can have your team practice (or not, but that leads to out-of-shape players, resentment, and losses). Each of your players can focus on a specific skill (one of six, such as “catching” or “running”) or have a more general practice that spans the lot. The results of this practice can range from “great” (definitely going to help in a real match) to “terrible” (practice has managed to make the player worse). As manager/owner, you find out about these results from the head coach. (You can also get an idea of which kinds of things a player is good at/enjoys doing from the results.)

Originally, I had the coach tell you results using a simple iteration for each player that evaluated each area in which a player practiced and gave the result, such as (for a general practice covering all kinds of skills):

Did great at running. Did well at catching. Did terribly at kicking. Did average at run defense. [etc]

This gets the information across, but that’s about it. It doesn’t sound like a person is telling you this, and it certainly doesn’t sound like a person with any sort of personality is saying any of it. I guess it’s a kind of procedural text generation, but … well, not really. Actually, to me it reads like a computer printout on an old dot-matrix printer. I can almost hear the sound … rat-a-tat whir, rat-a-tat whir …

Next I decided that if my text were to sound more human, it should be processed more like a human processes info, rather than just being spit out linearly. My second iteration thus tried a little preprocessing, in a limited way, of the information that the coach was about to give you. She would actually “think” about a player’s overall practice (in code, that meant creating lists that kept track of which skills were done to what effect before saying anything about them, such as a simple list of strings called “practiceGreat” including every skill practiced at that level, and other lists for each of the other levels). Now we have a choice: The coach’s report can either be robotic as before, or (by adding a simple grammar parser as well) can say something like “DotsPlayer1 did really well in running, catching, and kicking, but was terrible at pass defense and run defense. Otherwise, was very average.”
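That “think first, speak second” step might look something like this rough Python rendering (the game itself isn’t written in Python, and the names and exact phrasings here are mine, standing in for lists like “practiceGreat”):

```python
def coach_report(name, results):
    """Bucket skills by practice result first ("think"), then speak."""
    buckets = {"great": [], "well": [], "average": [], "terrible": []}
    for skill, level in results.items():
        buckets[level].append(skill)

    def join(items):
        # "a" / "a and b" / "a, b, and c"
        if len(items) <= 2:
            return " and ".join(items)
        return ", ".join(items[:-1]) + ", and " + items[-1]

    parts = []
    good = buckets["great"] + buckets["well"]
    if good:
        parts.append("did really well in " + join(good))
    if buckets["terrible"]:
        parts.append("was terrible at " + join(buckets["terrible"]))
    if parts:
        sentence = name + " " + ", but ".join(parts) + "."
    else:
        sentence = name + " practiced."
    if buckets["average"]:
        sentence += " Otherwise, was very average."
    return sentence

print(coach_report("DotsPlayer1", {
    "running": "great", "catching": "well", "kicking": "well",
    "pass defense": "terrible", "run defense": "terrible",
    "catch defense": "average"}))
# → DotsPlayer1 did really well in running, catching, and kicking,
#   but was terrible at pass defense and run defense. Otherwise, was very average.
```

The grammar “parser” here is nothing more than a list-joiner plus a couple of connective templates, but that’s already enough to escape the dot-matrix rhythm.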

This is much better. It reads like real sentences, especially the sort of thing someone would write in a report. It does become kind of repetitive when written out for several players in a row, though. And it also still doesn’t take into account personality: every AI coach on every team would say the same thing the same way, which (since in this case you don’t know what any other coach said to his or her respective manager) may not be a big deal, but doesn’t make the text generator very adaptable. Plus, it doesn’t take into account change in your coach’s personality or attitude toward you over time.

In terms of just making the sentences a little more variable and realistic, one can add some simple code (just checking how many times she’s talked about a general practice already, and adapting the opening) to create sentences like these:

Practiced everything. Did really well at kicking, and made many good choices. Otherwise very average.

Also practiced everything. Did really well at pass defense and kicking, and had a good week defending against the run. Otherwise average.

Had a go at everything. Did well at running and catching, but was terrible at kicking. Otherwise average.

This kind of thing just requires conditional statements that take into account how many players have already been described and maybe adds a random element as we get further down the list. For one report, this works great, actually. I could even “add” personality quirks to the way it’s said, all hard-coded. But how lovely to be able to make it a bit more automated, to give the coach the ability to take this information and speak for herself in a limited way without my having to hard-code every block and set of conditions and personality variance possible throughout the game.
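Those conditionals are about as simple as they sound; a hedged sketch (the phrase pool is illustrative, not the game’s actual wording):

```python
import random

def general_practice_opener(reports_so_far, rng=random):
    """Vary the opening of each 'general practice' line based on how many
    the coach has already given, with a random pick further down the list."""
    if reports_so_far == 0:
        return "Practiced everything."
    if reports_so_far == 1:
        return "Also practiced everything."
    # Further down the list, sample from a small pool to keep it fresh.
    return rng.choice(["Had a go at everything.",
                       "Another all-around week.",
                       "Worked across the board again."])
```

Passing in the random source (`rng`) also makes the output reproducible in tests, which matters once text generation gets more tangled than this.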

Using ExAI, one can check the coach’s current feelings and attitudes toward the Player (you), and her current overall personality, very easily. The difficulty is in knowing which personality facets will make her say which kinds of things.

Luckily, in reading Emily Short’s blog I came across links that led me to a series of articles on the Personage project (a project of Francois Mairesse and Marilyn A. Walker; see refs at end). Between this and works referenced in my own master’s project (Costa and McCrae, 1995; John et al, 2010; and Saucier and Ostendorf, 1999*), I was able to create a more research-based connection between a character’s language and her Big Five personality traits (I say “more” because I don’t think you can with certainty predict what someone will say, or what sentence structure they will use, etc., but you can make generalizations that help when determining what an AI character will say in a limited situation such as this one). Averaging our coach’s underlying facets to give us her Big Five, we get:

  • Openness 43
  • Conscientiousness 71
  • Extraversion 69
  • Agreeableness 53
  • Neuroticism 33

There are many very complex ways to look at this (see the 59-page Personage paper), but in a really general way for our purposes here, we can say the following:

  • Mid-range Openness shouldn’t have as much of an effect as other, more extreme scores.
  • High Conscientiousness should lead to a high ratio of positive to negative emotion words, good information, and getting straight to the point. It would also indicate fewer swear words, more hedges, and longer words.
  • The high Extraversion score would indicate less-formal sentences, few tentative phrases, few hedges/softeners, few errs and ums, more near-swear words, verbal exaggeration, shorter words, and a less-rich vocabulary.
  • Mid-range Agreeableness shouldn’t have as much of an effect as other scores.
  • Low Neuroticism should lead to calmness, few conjunctions, few pronouns, many articles, less exaggeration, and again shorter words plus a less-rich vocabulary.

Some of these work against one another, and one could create a complex series of weights to figure out what wins out/how they affect one another, which I believe the Personage project does (I can’t actually get the project files themselves to work, unfortunately). For my simple solution, my overall take on this is that the coach would get straight to the point in her reports, would tend to vary her word choice but avoid overly negative phrasing, and wouldn’t be enamored of long sentences or flowery language. She generally wouldn’t hesitate or use verbal placeholders like uh, um, and er. Interestingly, this is basically what I’ve already written for her, although maybe the sentences can get a bit long with the grammar parser (so maybe my writer’s ear for this sort of thing is working 🙂 ). She’s not one for varying her vocab a whole bunch, so maybe the repetition in the phrasing is workable—although I still don’t want her to sound un-human.
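As a first pass, one could collapse those bullet points into a handful of style “knobs.” In this sketch the thresholds, knob names, and the priority order when traits pull in opposite directions are all my own assumptions, not anything taken from ExAI or Personage:

```python
def style_from_big5(traits):
    """Collapse 0-100 Big Five scores into coarse speech-style knobs."""
    style = {"verbosity": "mid", "hedges": "some", "fillers": True,
             "vocab_richness": "mid", "formality": "mid"}
    if traits["conscientiousness"] > 60:
        style["verbosity"] = "low"   # straight to the point
        style["hedges"] = "more"
    if traits["extraversion"] > 60:
        style["formality"] = "low"
        style["hedges"] = "few"      # pulls against conscientiousness above
        style["vocab_richness"] = "low"
    if traits["neuroticism"] < 40:
        style["fillers"] = False     # calm: no uhs, ums, and ers
        style["vocab_richness"] = "low"
    return style

coach = {"openness": 43, "conscientiousness": 71, "extraversion": 69,
         "agreeableness": 53, "neuroticism": 33}
print(style_from_big5(coach))
# → {'verbosity': 'low', 'hedges': 'few', 'fillers': False,
#    'vocab_richness': 'low', 'formality': 'low'}
```

Note that later if-blocks simply win ties here (extraversion’s “few hedges” overrides conscientiousness’s “more hedges”), which is a crude stand-in for the weighted trade-offs a system like Personage computes properly.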

However, in the interest of there being a little variation in her speech, especially over the course of many weeks, I could change the probabilities of using one of a limited number of phrases or words, still keeping to the general rules above. And, of course, if I were to be using this for a character with high verbal variability, I’d need to provide more opportunities for varied speech, or varied phrasing, types of words, etc. And if I were to try to use the same set of code for several characters, I’d need even more complexity. But that would start to be an engine unto itself, and that’s a whole other ball game.

Also, when using this type of variability even for a single character but over a long period (such that her personality could change over time, as in ExAI or real life), you’d need to have her speech adapt to these changes; e.g., if she became more extraverted, those qualities associated with extraversion would become more pronounced (or would be likely to become more pronounced).
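One lightweight way to get that adaptation is to derive the speech odds from the coach’s current trait values every time she speaks, rather than baking them in at authoring time. A sketch (the trait names mirror the Big Five scores above, but the formulas are invented purely for illustration):

```python
def speech_params(traits):
    """Derive per-report speech probabilities from the *current* traits,
    so gradual personality change shows up in speech automatically."""
    e = traits["extraversion"] / 100.0
    n = traits["neuroticism"] / 100.0
    return {
        # More extraverted -> more verbal exaggeration.
        "exaggeration_chance": 0.1 + 0.5 * e,
        # More neurotic -> more ums, ers, and hedging.
        "hesitation_chance": 0.4 * n,
    }
```

Because nothing is cached, a coach whose extraversion drifts upward over a season would (probabilistically) exaggerate more each week without any extra bookkeeping.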

Yikes! I seem to have gone a bit overboard here; I wasn’t trying to write a paper of my own! I’ll save the discussion of press reports for another time.

‘Til later!

(As always, comments on this blog should either be tweeted to me @QuantumTigerAI or emailed to one of QTG’s contact emails. The actual comments section has been overrun by spam bots, as has my Forum, and I don’t have time to weed through them every day.)

*Full refs are as follows:

Costa, PT, Jr and McCrae, RR 1995, ‘Domains and Facets: Hierarchical Personality Assessment Using the Revised NEO Personality Inventory’, Journal of Personality Assessment, vol. 64, no. 1, pp. 21-50.

John, OP et al 2010, ‘Paradigm Shift to the Integrative Big Five Trait Taxonomy: History, Measurement, and Conceptual Issues’, in John et al (eds), Handbook of Personality: Theory and Research, 3rd edn, Guilford, New York. (US edition.)

Mairesse, F and Walker, MA [n.d.], ‘Can Conversational Agents Express Big Five Personality Traits through Language?: Evaluating a Psychologically-Informed Language Generator’, available at

Saucier, G & Ostendorf, F 1999, ‘Hierarchical Subcomponents of the Big Five Personality Factors: A Cross-Language Replication’, Journal of Personality and Social Psychology, vol. 76, no. 4, pp. 613-627.

Short, E [various dates], Emily Short’s Interactive Storytelling, blog available at

Posted in Artificial Intelligence, Game Development, SMMG, Uncategorized | Tagged , , , , , | Comments Off on How AI Don’t Speak Like a Robot

What Am AI Building? An Update on My Sports Management-Mystery Game

When I started creating my still-nameless sports management game, it was with the idea that it would be a good demo for my AI personality engine, Extreme AI. In fact, that was really the whole point of it (well, almost; I also wanted to actually finish an entire game of my own, no matter how simple). But, somehow, the characters in it started to make it more complex, more complicated, and by including these very different managerial personalities within the game, the game itself has morphed from just a sports management game into a mystery to be solved as well. A Sports Management Mystery Game, if you will (SMMG, because, you know, acronyms).

It’s funny, because this is kinda the opposite of my first game project, SteamSaga, which was built to take advantage of ExAI but was from the outset supposed to be a complex tour de force putting my company on “the map” (I think someone sold me the wrong map, lol). It was never completed and got bogged down in missed milestones and changed directions (really necessary changes, but still), and the experience reminded me of all those writers who think they’re going to rush out and hit it big and make a living from their bestselling books without ever having to grind it all out, because they’ve got something special, see, and, well, yeah, fifteen years later it’s a hobby and they eventually self-publish (gasp!) a self-designed book of short stories with a terrible cover image of some guacamole superimposed on a barrel as a way to at least get their work out there somewhere. Yup. (To have done that with writing and then turned around and done it again with game development … well, I’m certainly ambitious, anyway.)

Now older and at least occasionally more relaxed, all I really want out of my SMMG is to a) complete it; b) use it to demonstrate what ExAI and RealMemory can do for/in a game, both to myself and to others; and c) have a few people play it and enjoy it. Plus I’d like to grow as a developer and storyteller. But that’s the case with any project, isn’t it? At heart?

Part of this growth is realizing that there are limitations to creating a project like this entirely on one’s own, especially given I’m trying to create realistic characters with differing gender identities and personalities and backgrounds, and so I’m hoping for more input about the beta version than might normally be the case—ideas about character dialogue, artwork, UI, etc, etc—in exchange for giving credit to whoever ends up contributing. (As I don’t expect the game to make much money, I can’t offer a monetary reward, unless someone wants a 1% cut of whatever diddly profits the game itself makes.) I guess I’m hoping for some sort of crowd-building experience, or at least some kind of crowd-contributing experience. Even by talking to one other person about the game, I’ve received some awesome storytelling advice. Hopefully more people = more awesome ideas, right? (I realize I’m being an optimist here, and that sometimes more != better … I mean, I’ve been a magazine editor; I know what can come slithering its way through the slush pile … but the occasional gems! And I’m trying to embrace some hope, rather than what has become the usual darkness.)

Another part of this growth is not creating an absurd timetable for getting it done, so at the risk of sounding so laid-back I make Jerry Garcia look agitated, I’m not going to even give a release date for the beta, let alone the full game. Soon for the beta? Soon-ish? It depends on what my day-job throws at me. But when it’s up there on the website for all to see and download and comment upon, I’ll let you know.

And thanks for listening! Even though I didn’t actually describe what I’m building, not in any detail. Next time! “Soon …”

(As always, comments on this blog should either be tweeted to me @QuantumTigerAI or emailed to one of QTG’s contact emails. The actual comments section has been overrun by spam bots, as has my Forum, and I don’t have time to weed through them every day.)

Posted in Artificial Intelligence, Game Development, SMMG | Tagged , , , , | Comments Off on What Am AI Building? An Update on My Sports Management-Mystery Game

Agile AI All by Myself

Just came across this old post in my master’s thesis project blog. The blog itself was part of an adaptation of Agile development to the needs of a single developer. As I remember, scrums were a little lonely … but then, I’ve always been way too good at talking to myself! Anyway, pasted below just for the fun of it.

Agile Development?
I’ll be using a modified form of the Agile development method for this project. I say modified because the development team is a team of one, and therefore there will be no pair programming (unless I talk to myself, which I do frequently) and scrums will be equally singular. There are many Agile processes that are useful regardless of limited team size, however.

The Scrum, for instance, will be modified to include sitting down each morning and looking back at what was done the previous day, looking forward to that day’s work, and taking a moment to rehash problems and decide what to do about them (e.g., research further for help, ask the project supervisor for direction, etc.). This will be explicitly done from here on through the blog (thus the blog will move from weekly to ‘whenever a workday occurs’).

I’ll be working in iterations determined generally by the Gantt chart, but modified as necessary by day-to-day needs. Each weekly Gantt objective (time frame or timebox) will be reviewed at the beginning of the week and divided into daily activities and goals, to be modified as necessary as each day dawns. For instance, this week’s discovery that doing a bit more research into the Big Five facet descriptions would be helpful led to a modification of daily goals (it didn’t alter the week’s goal, however).

The Gantt chart will, in effect, show/be the project backlog, along with modifications made through the scrum and written about on the blog. In this case, as the only ‘team’, I will sign up to do everything. Each week (or so) will be a sprint (depends what needs to be done that week), and the backlog (Gantt chart) will be adjusted/reprioritised after each sprint. Each sprint may not, in my case, be a working piece of software (given that I need to do each phase of development myself); the sprints will be sections–e.g., creating a working database was this week’s sprint, and even though that isn’t a complete piece of software, it was a definable piece of the backlog.

The Gantt chart will be modified as necessary as work moves along, in keeping with Agile’s emphasis on responding to change over slavishly working to a plan. The overall Gantt chart is about to be modified in just this fashion to cover some items not originally included (primarily allowing time for writing up the project) and to create iterations of the working software so that it isn’t aimed at being complete only in time for testing; there will be iterations of working software as often as possible.

Priority will be given to working software and the business value derived therefrom; in this case, business value pertains to the dissertation itself; it might be retermed ‘academic value’.

I will discuss other methods that could have been used and why I chose Agile over them in the project paper itself.

BTW—duh! I kept thinking iterative development was separate from Agile, but Agile incorporates iterations as well. Not sure what I was thinking of–something between waterfall and Agile, based on my memory of Games Dev module notes or slides.

Posted in Artificial Intelligence | Tagged | Comments Off on Agile AI All by Myself

AI make the fake / AI spot the fake

I was reading the other day about AI being used to create fake images and videos that were so real people wouldn’t be able to tell they were fakes. For instance, it can create fake people (this article from The Verge), or even fake events (the University of Washington project described in this Engadget article and another Verge article). And creating fake reviews on Yelp (only a step away from making up news stories as well) seems pretty easy for AI (as in this Scientific American article).

But what about using AI to spot fakes—fake photos, videos, fake news?

Many of these same articles point to the fact that AI can be used to spot as well as produce fakes. However, the machine-learning algorithms used by Yelp to detect fake reviews had a hard time figuring out which AI-generated reviews were fake (as did humans). Fox News (of all things) has an article that’s a little more hopeful (“How AI fights the war against fake news”), in which various means of detecting false news are described, such as semantic processing (finding typically used keywords for fake or sensationalist stories) and rating sites as more or less trustworthy. (Amusingly, this Fox story is interspersed with headlines for the usual “AI-as-Michael-Myers” stories, with such phrases as “save humanity” and “killer robots.”) But, at best, it seems we are in for a fake news arms race in which good and evil AI try to outsmart one another.

I have a feeling that combinations of data-driven, deep-learning AI and elements of “more human” AI (such as personality and emotion engines) will end up being the best tools in this fight—but probably for both sides. Deep learning can take massive amounts of data and find patterns in it, and thus can train for finding fake news by comparing it to other fake news (to oversimplify). The personality and emotion engines can tap into the human element (for instance, how fake news makes people react emotionally, or how it makes people with certain personalities react differently than others). And both kinds of AI can work tirelessly at their task.

Which is something that humans, for all their intuition, cannot do.

Posted in Artificial Intelligence | Comments Off on AI make the fake / AI spot the fake

AI Pollsters

Polling has gotten a bad name.

It used to seem that we were moving toward a system in which pollsters would know, with certainty, the results of an election before election night. Pundits seemed to be waiting to come on the air the moment they were allowed, just after the polls closed, to let us know who’d won.

Not anymore.

The first recent punditry that I remember getting it wrong was when the networks were calling Florida for Al Gore in 2000. I remember it looked strange to me as I watched TV that night—nowhere near enough of the vote was in to actually be calling anything. And sure enough, as we went through the evening, suddenly it got closer and closer, and then … suddenly no one was calling anything anymore.

That evening could be seen as an aberration, as it was an extremely close (and perhaps never actually resolved) election. And certainly the US elections in 2008 and 2012 went according to polling—but those were pretty easy calls.

More recently, it seems that, despite living in the time of “big data,” where more is known (supposedly) about us than ever before, polling has become shakier and shakier. The certainty has all but disappeared, and while there seem to be more polls than ever, they’re all over the map, predicting wildly different outcomes. I’m thinking of the British general election in 2015, when the Conservatives won a majority in what was supposed to be a hung parliament; Brexit; the US elections in 2016; and Theresa May’s unfortunate decision to call another general election in 2017, which cost her the Conservative majority and nearly her premiership.

The thing is, polls only take in people’s actual answers—not only about their preferences, but also their race, ethnicity, even gender. And any of these can be misreported over the phone (on purpose or not). In the US presidential election, the so-called “afraid to tell you I’m voting for Trump” voters were a prime example.

So for all the data in the world (Cambridge Analytica or Google or whoever), you’re only as good as the humans who give you that data.

Personality AI could potentially give polls a way to factor in the “human element” by being more human itself. It could estimate, from personality types and from simulated run-throughs of people’s actual answers, what the true, underlying vote might be—and how many people might not vote at all, even though they say they will. It could possibly even suggest how to get those non-voters excited enough to vote.
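One very stripped-down version of that idea is reweighting raw answers by how likely each (hypothetical) personality type is to actually show up and vote. The types and turnout probabilities below are invented numbers purely for illustration—a real model would have to learn them, which is the hard part.

```python
# Toy sketch: estimate the "true" vote by weighting each stated answer
# by an assumed probability that that personality type actually votes.
# The personality types and probabilities are invented for illustration.

from collections import Counter

def weighted_support(responses, turnout_prob):
    """responses: list of (stated_choice, personality_type).
    turnout_prob: dict mapping personality type -> P(actually votes).
    Returns each choice's share of the expected actual vote."""
    totals = Counter()
    for choice, ptype in responses:
        totals[choice] += turnout_prob[ptype]
    total = sum(totals.values())
    return {choice: weight / total for choice, weight in totals.items()}
```

So two “enthusiast” voters for A can outweigh two “reluctant” voters for B, even though the raw poll reads 50–50—which is roughly the gap the 2015 and 2016 polls fell into.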

Of course, as with anything related to AI (or science, or emotions), there’s the potential for misuse (or ethically foggy use), but if used strictly to interpret polling data, personality AI would be providing a service to those of us who want to know how close our elections actually are.

Posted in Artificial Intelligence | Comments Off on AI Pollsters

AI am Everywhere … Even Where AI’m Not

Up until recently, if asked what I was working on and I answered, “artificial intelligence,” I could pretty well predict the direction of the conversation from there: Is it like the replicants in Blade Runner (or Skynet in Terminator)? Or chess- or Go-playing AI like IBM or DeepMind make? Or, occasionally, a person familiar with video games might wonder if I’m working on pathfinding algorithms or flocking behaviors for enemy characters. But now there’s a different trend; now, if I say I’m working on AI, it’s as if I’m a chef who’s declared his menu is “food.”

AI has become huge in the last few years, not so much in a cyberpunk kind of way but instead as an integral part of, well, stuff. Lots of stuff. Phones have AI. Houses do. Cars do. Stuffed animals do. Nearly every industry does, from manufacturing to finance to marketing. Basically, anything that can use data to determine its course of action is said to have, at least on a rudimentary level, AI.

In some ways this is because AI has really made leaps and bounds in the last ten years or so, going from a sort of dead end (where’re the human-like robots we were promised in the 1960s?) to interpreting language, determining the best way to build an object (or at least helping), helping various industries interpret data to become more efficient and effective, and, of course, beating human players at Go. But it seems AI has also benefited from an incredible marketing makeover—the movers and shakers in the field are often savvy in all the most modern methods of advertising, using social networks to create buzz, streaming video to pique interest, and even using “old-fashioned” but still effective routes like TV and print ads and getting themselves mentioned on the news. This image makeover/explosion has made AI the next “cloud,” and the term has become almost more buzzword than substance in some cases. Marketers for products that have very little to do with any kind of advanced AI (and certainly nothing to do with Roy Batty) are touting toasters, thermostats, and even clothing (well, maybe not yet, but it’s coming) as having the latest cool “AI” to help them do their jobs. In some cases they’re almost right; a thermostat could be said to have a very limited form of, errr, intelligence when it comes to knowing when to turn on the heat. Maybe. But more and more, I’m seeing “AI” bandied about just to give things cachet. And it’s becoming very annoying.

However, one interesting (good? bad?) thing that will likely come of all this is that the public’s perception of AI will change from one of science fiction/horror (it’ll eat us all!) to one of acceptance of AI in our daily lives (even though, ironically, at present it’s more bluster than substance). Maybe this is why folks like Elon Musk are shouting the most outrageous things they can think of to warn us that the sky (Skynet?) is falling, just to get our attention. But in some ways I see the overall blunting of histrionic reactions as a good thing. It’s not that I want us to just accept our new robot overlords as the latest “in” thing, but I would like to be able to have a conversation about AI that isn’t overlaid with emotion and knee-jerk reaction.

I’m not sure, though, that this is where we’re going. It isn’t that I’m going to be able to have a different, better conversation about AI; it’s very possible that in the future, when I say I’m working on AI, the reaction will be one of boredom, of “Who cares?” And that’s maybe the worst reaction of all.

Posted in Artificial Intelligence | Comments Off on AI am Everywhere … Even Where AI’m Not

Raising Your AI to Be Ethical and Unbiased (You Know, Like Humans Should Be)

I tend to think of AI as a fresh start, as a way to explore the best parts of ourselves and to create algorithms that help advance humanity in positive ways. I realize that view is fraught with problems, however, ranging from questions of meaning—Who defines the “best” parts of ourselves? Who defines “positive ways”?—to questions of actual intent or impacts. It would be foolish to assume that all scientists are trying to create “good” AI and not “evil,” or that even the best intentions don’t have a chance of turning into Skynet.

Another challenge, especially in personality AI (and associated “make it human” AI specialties), is whether one can make the AI too human. Recent research, such as Zhao et al., indicates that letting AI learning algorithms go to school by sifting through words and images on the Internet only teaches the AI to be as biased as humans—e.g., to associate women with cooking and shopping. Other research has found racial bias as well (see Thorpe for examples). This only perpetuates gender and racial stereotypes and bias—not something we would think of as among humans’ best qualities.

So what’s the solution? To raise our AI the way we would hope to raise our children: as unbiased and accepting of the full diversity of human beings, whether gender, race, sexuality, culture, etc., etc. How “raise our AI” works in the world of programming as opposed to the world of diapers is an open question; however, I think the similarities outweigh the differences. For instance, you would hope that your children would be exposed to many different kinds of people in their everyday lives, and that this would help them become less biased and more accepting of others. AI learning could occur the same way—the AI could be trained on diverse data that better represent human diversity. Or, perhaps, the best of human diversity. Ethically, that might be seen as manipulating data so that we get a desired result, which is true; in this case, however, we’re trying to educate our AI in a forward-thinking, what-we-wish-humans-themselves-to-be kind of way.
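In practice, one small piece of “training on data that better represents human diversity” is simply rebalancing the training set so under-represented groups appear as often as the largest one. Here’s a minimal sketch of that idea; the function name and grouping scheme are mine, not from any particular library.

```python
# Toy sketch of rebalancing training data: oversample under-represented
# groups so every group appears as often as the largest one.

import random

def rebalance(examples, group_of, seed=0):
    """examples: list of training items; group_of: item -> group label.
    Returns a new list where every group appears equally often."""
    random.seed(seed)  # deterministic for reproducibility
    by_group = {}
    for ex in examples:
        by_group.setdefault(group_of(ex), []).append(ex)
    target = max(len(items) for items in by_group.values())
    balanced = []
    for items in by_group.values():
        balanced.extend(items)
        # Duplicate random members of small groups to reach the target.
        balanced.extend(random.choices(items, k=target - len(items)))
    return balanced
```

Which is, of course, exactly the “manipulating data to get a desired result” I mentioned—the difference is that here the desired result is fairness.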

Which, on reflection, looks almost like a do-gooder desire to play god. But would you rather we developed AI that was prejudiced? That represented the worst of our biases?

More articles on this:
Devlin, “AI programs exhibit racial and gender biases, research reveals”
Kleinman, “Artificial intelligence: How to avoid racist algorithms”

Posted in Artificial Intelligence | Comments Off on Raising Your AI to Be Ethical and Unbiased (You Know, Like Humans Should Be)

AI Have No Comment

Due to an absolute overload of spam (15,000+ spam comments), I won’t be allowing any further comments directly on this blog, but I’d still love to hear from you! If you have something non-spammy to say about anything I’ve written, please go to my Forums.

Note that if you’ve tried to leave me a message through this blog, it’s been lost in all the other spam and won’t be seen. Sorry!


Posted in Uncategorized | Comments Off on AI Have No Comment