Agile AI All by Myself

Just came across this old post in my master’s thesis project blog. The blog itself was part of an adaptation of Agile development to the needs of a single developer. As I remember, scrums were a little lonely … but then, I’ve always been way too good at talking to myself! Anyway, pasted below just for the fun of it.

Agile Development?
I’ll be using a modified form of the Agile development method for this project. I say modified because the development team is a team of one, and therefore there will be no pair programming (unless I talk to myself, which I do frequently) and scrums will be equally singular. Many Agile practices remain useful regardless of team size, however.

The Scrum, for instance, will be modified to include sitting down each morning and looking back at what was done the previous day, looking forward to that day’s work, and taking a moment to rehash problems and decide what to do about them (e.g., research further for help, ask the project supervisor for direction, etc.). This will be done explicitly from here on through the blog (thus the blog will move from weekly to ‘whenever a workday occurs’).

I’ll be working in iterations determined generally by the Gantt chart, but modified as necessary by day-to-day needs. Each weekly Gantt objective (time frame or timebox) will be reviewed at the beginning of the week and divided into daily activities and goals, to be modified as necessary as each day dawns. For instance, this week’s discovery that doing a bit more research into the Big Five facet descriptions would be helpful led to a modification of daily goals (it didn’t alter the week’s goal, however).

The Gantt chart will, in effect, serve as the project backlog, along with modifications made through the scrum and written about on the blog. In this case, as the only ‘team’, I will sign up to do everything. Each week (or so) will be a sprint (depending on what needs to be done that week), and the backlog (Gantt chart) will be adjusted and reprioritised after each sprint. Each sprint may not, in my case, produce a working piece of software (given that I need to do each phase of development myself); the sprints will instead deliver sections of the project. For example, creating a working database was this week’s sprint, and even though that isn’t a complete piece of software, it was a definable piece of the backlog.

The Gantt chart will be modified as necessary as work moves along, in keeping with Agile’s emphasis on responding to change over slavishly working to a plan. The overall Gantt chart is about to be modified in just this fashion, to cover some items not originally included (primarily allowing time for writing up the project) and to create iterations of the working software so that it isn’t aimed at being complete only in time for testing; there will be iterations of working software as often as possible.

Priority will be given to working software and the business value derived therefrom; in this case, business value pertains to the dissertation itself, so it might better be termed ‘academic value’.
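Just to make the backlog-to-sprint idea above a bit more concrete, here’s a tiny, purely illustrative Python sketch; it’s nothing from the actual project, and the objectives and daily goals are made up:

```python
# A made-up sketch of the Gantt-chart-as-backlog idea: each week's Gantt
# objective becomes a sprint, which is then split into daily goals that
# can be reshuffled as each day dawns.

backlog = [
    {"week": 1, "objective": "Create working database", "done": True},
    {"week": 2, "objective": "Research Big Five facet descriptions", "done": False},
    {"week": 3, "objective": "Prototype the personality engine core", "done": False},
]

def plan_sprint(backlog, week):
    """Pull this week's Gantt objective and break it into daily goals."""
    objective = next(item for item in backlog if item["week"] == week)
    # The daily goals below are placeholders; in practice they'd come out
    # of each morning's one-person scrum and be adjusted day to day.
    return {
        "objective": objective["objective"],
        "daily_goals": ["Mon: research", "Tue: design", "Wed-Fri: build and test"],
    }

print(plan_sprint(backlog, 2))
```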

I will discuss other methods that could have been used and why I chose Agile over them in the project paper itself.

BTW—duh! I kept thinking iterative development was separate from Agile, but Agile incorporates iterations as well. Not sure what I was thinking of–something between waterfall and Agile, based on my memory of Games Dev module notes or slides.

Posted in Artificial Intelligence | Tagged | Comments Off on Agile AI All by Myself

AI make the fake / AI spot the fake

I was reading the other day about AI being used to create fake images and videos so realistic that people wouldn’t be able to tell they were fakes. For instance, it can create fake people (this article from The Verge), or even fake events (the University of Washington project described in this Engadget article and another Verge article). And creating fake reviews on Yelp (only a step away from making up news stories as well) seems pretty easy for AI (as in this Scientific American article).

But what about using AI to spot fakes—fake photos, videos, fake news?

Many of these same articles point to the fact that AI can be used to spot as well as produce fakes. However, the machine-learning algorithms used by Yelp to detect fake reviews had a hard time figuring out which AI-generated reviews were fake (as did humans). Fox News (of all things) has an article that’s a little more hopeful (“How AI fights the war against fake news”), in which various means of detecting false news are described, such as semantic processing (finding typically used keywords for fake or sensationalist stories) and rating sites as more or less trustworthy. (Amusingly, this Fox story is interspersed with headlines for the usual “AI-as-Michael-Myers” stories, with such phrases as “save humanity” and “killer robots.”) But, at best, it seems we are in for a fake news arms race in which good and evil AI try to outsmart one another.

I have a feeling that combinations of data-driven, deep-learning AI and elements of “more human” AI (such as personality and emotion engines) will end up being the best tools in this fight—but probably for both sides. Deep learning can take massive amounts of data and find patterns in it, and thus can be trained to spot fake news by comparing it to known fakes (to oversimplify). The personality and emotion engines can tap into the human element (for instance, how fake news makes people react emotionally, or how it makes people with certain personalities react differently than others). And both kinds of AI can work tirelessly at their task.

Which is something that humans, for all their intuition, cannot do.
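To make that combination a little more concrete, here’s a toy sketch (in Python, using scikit-learn) of one way word-pattern features and reader-reaction features could feed a single classifier; the articles, reaction numbers, and labels are all invented for illustration, and this isn’t anyone’s production fake-news detector:

```python
# Toy sketch: combine word-pattern features (what the article says) with
# reader-reaction features (how people respond to it) in one classifier.
# All articles, labels, and reaction numbers below are invented.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

articles = [
    "SHOCKING: celebrity secret the media won't tell you",
    "City council approves budget for road repairs",
]
labels = [1, 0]  # 1 = fake/sensationalist, 0 = legitimate (toy labels)

# Invented reader-reaction signals: share rate and average 'anger' score.
reactions = np.array([
    [0.92, 0.80],
    [0.10, 0.05],
])

text_features = TfidfVectorizer().fit_transform(articles)
features = hstack([text_features, csr_matrix(reactions)])

model = LogisticRegression().fit(features, labels)
print(model.predict(features))
```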

Posted in Artificial Intelligence | Tagged , , , | Comments Off on AI make the fake / AI spot the fake

AI Pollsters

Polling has gotten a bad name.

It used to seem that we were moving toward a system in which pollsters would know, with certainty, the results of an election before election night. Pundits seemed to be waiting to come on the air the moment the polls closed, the moment they were allowed to, and tell us who’d won.

Not anymore.

The first recent bit of punditry that I remember getting it wrong was when the networks called Florida for Al Gore in 2000. I remember it looked strange to me as I watched TV that night—nowhere near enough of the vote was in to justify calling anything. And sure enough, as the evening went on, the race got closer and closer, and then … suddenly no one was calling anything anymore.

That evening could be seen as an aberration, as it was an extremely close (and perhaps never actually resolved) election. And certainly the US elections in 2008 and 2012 went according to polling—but those were pretty easy calls.

More recently, it seems that, despite living in the time of “big data,” when more is known (supposedly) about us than ever before, polling has become more and more shaky. The certainty has all but disappeared, and while there seem to be more polls than ever, they are all over the map, predicting wildly differing outcomes. I’m thinking of the British elections in 2015, when the Conservatives won a majority in what was supposed to be a hung parliament; Brexit; the US elections in 2016; and Theresa May’s unfortunate decision to call another general election in 2017, losing the Conservative majority and nearly her premiership.

The thing is, polls only capture people’s actual answers—not only about their preferences, but also about race, ethnicity, even gender. And any of these can be misreported over the phone (either on purpose or not). In the 2016 US presidential election, the so-called “afraid to tell you I’m voting for Trump” voters were a prime example.

So for all the data in the world (Cambridge Analytica or Google or whoever), you’re only as good as the humans who give you that data.

Personality AI could potentially give polls a way to factor in the “human element” by being more human itself. It could estimate, from personality types and from run-throughs of people’s actual answers, what the true, underlying vote might be—and how many people might not vote at all, even though they say they will. It could possibly even estimate how to get those non-voters excited enough to vote.
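As a purely illustrative sketch (not a real polling methodology), the adjustment might look like weighting each respondent’s stated choice by estimates of how likely they are to vote and how candid their answer is, with those estimates supplied by a personality model; every number below is invented:

```python
# Hypothetical sketch: adjust raw poll answers using per-respondent
# estimates that a personality model might supply. All numbers invented.

respondents = [
    # (stated_choice, p_actually_votes, p_answer_is_candid)
    ("A", 0.9, 0.95),
    ("B", 0.6, 0.70),
    ("A", 0.3, 0.90),
    ("B", 0.8, 0.60),
]

def adjusted_share(respondents, candidate):
    """Weight each stated preference by turnout and candour estimates."""
    weights = [pv * pc for _, pv, pc in respondents]
    support = [pv * pc for choice, pv, pc in respondents if choice == candidate]
    return sum(support) / sum(weights)

for candidate in ("A", "B"):
    print(candidate, round(adjusted_share(respondents, candidate), 2))
```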

Of course, as with anything related to AI (or science, or emotions), there’s the potential for misuse (or ethically foggy use), but if used strictly to interpret polling data, personality AI would be providing a service to those of us who want to know how close our elections actually are.

Posted in Artificial Intelligence | Tagged , , , | Comments Off on AI Pollsters

AI am Everywhere … Even Where AI’m Not

Up until recently, if asked what I was working on and I answered, “artificial intelligence,” I could pretty well predict the direction of the conversation from there: Is it like the replicants in Blade Runner (or Skynet in Terminator)? Or chess- or Go-playing AI like IBM and DeepMind build? Or, occasionally, a person familiar with video games might wonder if I’m working on pathfinding algorithms or flocking behaviors for enemy characters. But now there’s a different trend; now, if I say I’m working on AI, it’s as if I’m a chef who’s declared his menu is “food.”

AI has become huge in the last few years, not so much in a cyberpunk kind of way but instead as an integral part of, well, stuff. Lots of stuff. Phones have AI. Houses do. Cars do. Stuffed animals do. Nearly every industry does, from manufacturing to finance to marketing. Basically, anything that can use data to determine its course of action is said to have, at least on a rudimentary level, AI.

In some ways this is because AI has really made leaps and bounds in the last ten years or so, going from a sort of dead end (where’re the human-like robots we were promised in the 1960s?) to interpreting language, determining the best way to build an object (or at least helping), helping interpret data for various industries to make them more efficient and effective, and, of course, beating human players at Go.

But it seems AI has also benefited from an incredible marketing makeover—the movers and shakers in the field are often savvy in all the most modern methods of advertising, using social networks to create buzz, streaming video to pique interest, and even using “old-fashioned” but still effective routes like TV and print ads and getting themselves mentioned on the news. This image makeover/explosion has made AI the next “cloud,” and the term has become almost more buzzword than substance in some cases. Marketers for products that have very little to do with any kind of advanced AI (and certainly nothing to do with Roy Batty) are touting toasters, thermostats, and even clothing (well, maybe not yet, but it’s coming) as having the latest cool “AI” to help them do their jobs. In some cases they’re almost right; a thermostat could be said to have a very limited form of, errr, intelligence when it comes to knowing when to turn on the heat. Maybe. But more and more, I’m seeing “AI” bandied about just to give things cachet. And it’s becoming very annoying.

However, one interesting (good? bad?) thing that will likely come of all this is that the public’s perception of AI will change from one of science fiction/horror (it’ll eat us all!) to one of acceptance of AI in our daily lives (even though, ironically, at present it’s more bluster than substance). Maybe this is why folks like Elon Musk are shouting the most outrageous things they can think of to warn us that the sky (Skynet?) is falling, just to get our attention. But in some ways I see the overall blunting of histrionic reactions as a good thing. It’s not that I want us to just accept our new robot overlords as the latest “in” thing, but I would like to be able to have a conversation about AI that isn’t overlaid with emotion and knee-jerk reaction.

I’m not sure, though, that this is where we’re going. It isn’t that I’m going to be able to have a different, better conversation about AI; it’s very possible that in the future, when I say I’m working on AI, the reaction will be one of boredom, of “Who cares?” And that’s maybe the worst reaction of all.

Posted in Artificial Intelligence | Tagged , | Comments Off on AI am Everywhere … Even Where AI’m Not

Raising Your AI to Be Ethical and Unbiased (You Know, Like Humans Should Be)

I tend to think of AI as a fresh start, as a way to explore the best parts of ourselves and to create algorithms that help advance humanity in positive ways. I realize that view is fraught with problems, however, ranging from questions of meaning—Who defines the “best” parts of ourselves? Who defines “positive ways”?—to questions of actual intent or impacts. It would be foolish to assume that all scientists are trying to create “good” AI and not “evil,” or that even the best intentions don’t have a chance of turning into Skynet.

Another challenge, especially in personality AI (and associated “make it human” AI specialties), is whether one can make the AI too human. Recent research, such as in Zhao, et al., indicates that allowing AI learning algorithms to go to school by sifting through words and images on the Internet only teaches the AI to be as biased as humans; e.g., to associate females with cooking and shopping. Other research has found racial bias as well (see Thorpe for examples). This only perpetuates gender and racial stereotypes and bias—not something we would think of as humans’ best qualities.

So what’s the solution? To raise our AI the way we would hope to raise our children: unbiased and accepting of the full diversity of human beings, whether of gender, race, sexuality, or culture. How “raising our AI” works in the world of programming as opposed to the world of diapers is an open question; however, I think the similarities outweigh the differences. For instance, you would hope that your children would be exposed to many different kinds of people in their everyday lives, and that this would help them become less biased and more accepting of others. AI learning could work the same way—the systems could be trained on diverse data that better represent human diversity. Or, perhaps, the best of human diversity. Ethically, that might be seen as manipulating data so that we get a desired result, which is true; in this case, however, we’re trying to educate our AI in a forward-thinking, what-we-wish-humans-themselves-to-be kind of way.

Which, looking at it, almost looks like a kind of do-gooder desire to play god. However, would you rather we developed AI that was prejudiced? That represented the worst of our biases?
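For what it’s worth, one very simple, hypothetical version of the ‘train on diverse data’ idea is just to rebalance what the learner sees so no one group dominates; here’s a toy sketch with invented data, not tied to any particular framework or study:

```python
# Hypothetical sketch: resample training examples so each group is
# equally represented before the AI 'goes to school' on them.
import random
from collections import defaultdict

examples = [
    {"text": "image of person cooking", "group": "women"},
    {"text": "image of person cooking", "group": "men"},
    {"text": "image of person coding", "group": "men"},
    {"text": "image of person coding", "group": "men"},
    {"text": "image of person shopping", "group": "women"},
]

def rebalance(examples, seed=0):
    """Resample (with replacement) so every group contributes equally."""
    random.seed(seed)
    by_group = defaultdict(list)
    for ex in examples:
        by_group[ex["group"]].append(ex)
    target = max(len(v) for v in by_group.values())
    balanced = []
    for group_examples in by_group.values():
        balanced.extend(random.choices(group_examples, k=target))
    return balanced

print(len(rebalance(examples)))  # 6: three examples drawn from each group
```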

More articles on this:
Devlin, “AI programs exhibit racial and gender biases, research reveals”
Kleinman, “Artificial intelligence: How to avoid racist algorithms”

Posted in Artificial Intelligence | Tagged , | Comments Off on Raising Your AI to Be Ethical and Unbiased (You Know, Like Humans Should Be)

AI Have No Comment

Due to an absolute overload of spam (15,000+ spam comments), I won’t be allowing any further comments directly on this blog–but I’d still love to hear from you! If you have something non-spammy to say about anything I’ve written, please go to my Forums.

Note that if you’ve tried to leave me a message through this blog, it’s been lost in all the other spam and won’t be seen. Sorry!

Jeff

Posted in Uncategorized | Comments Off on AI Have No Comment

Why Am AI Extremely Happy?

Because my ExtremeAI personality engine has been accepted by the Unity Asset Store!

I’ve decided that, once again, the engine should be available to other developers for use in games besides SteamSaga (it was available for sale from the QTG website ages ago, but I took it down when we started developing our RPG). Why? For one thing, I’d really like to see what everyone can do with it, and what kinds of interesting NPCs are created using ExtremeAI. For another, SteamSaga seems to be on the longest road ever to final development, and I don’t want to hold up potential uses of the AI system any longer.

You can find it at http://u3d.as/bCj. It would seem to be the only one of its kind up there; most of the other AI is for pathfinding or changes in behavior (not behavior as in personality, but behavior as in state changes, such as patrolling/waiting/attacking/following).

Now to actually set up the QTG website to allow direct purchases again 🙂

Posted in Artificial Intelligence | Tagged , , | Comments Off on Why Am AI Extremely Happy?

What’s This? An Update?

(This is a long update, so in summary … there’s a (short, rough) demo available! And the game is gonna take a veeerrry long time to complete without additional funding, like Kickstarter or something. And we want your opinion on the matter! But please, read on …)

As you may have noticed, we’ve dropped off the edge of the Earth for the last couple months or so. What is our tale of woe? Gather round SteamSaga’s low-poly fire, my friends.

Back in 2013, we were making the rounds at conventions, some of us [i.e., me] far too confident in our ability to get an RPG (short, yes, but pretty darn complex, with a brand new AI system and 3D art) done by our original release date of 31 October—a mere six months after properly getting started. The schedule looked lovely and practical, all laid out in Microsoft Project, and I was seduced by an incredibly good start to our venture. Characters were leaping off the page and into the computer, art assets went from moodboard to sketch to 3D animated creatures in barely more than the blink of an eye, AI systems all go … all looked good … and then …

Well, then reality hit. Going to the conventions gave us some valuable feedback, along with a very heartening flood of goodwill and interest (thank you!), and we wanted to adjust a few things. (We also met with a Sony rep, which gave us more good feedback … and more adjustments we wanted to make.) For example, the story I’d written never really grabbed anybody but me (that being the story on the backs of the postcards we were handing out); I think I had a case of 1980s fantasy mixed with a shot of old-style D&D, along with hefty amounts of Final Fantasy—none of it bad, but not exactly new, either. Then I played some very nifty indie games (including Digital: A Love Story by Christine Love, and Blendo Games’ Thirty Flights of Loving), and Epiphany! I was released from the Old Ways. (Which is funny, since my AI system is all shiny and new … must’ve been using all my creativity for that …) So we asked for some extra help on the writing front, and a new, cool, mysterious, fun (funner?) storyline was created. Which, of course, took time.

We also discovered that the game’s already cool art style could be enhanced by all kinds of fun tweaks and twists, including particle systems, various kinds of shading, different animations, etc. Possibly too much ‘etc.’, actually, as all this also took time. Lots of time.

The conversation system had to be invented fresh as well (that’s where my OTHER creative brain cells went!), in order to show off the AI system as best we could without trying to invent a way to use natural language processing in an RPG. And it’s very cool … but it also took, you guessed it, time.

And 31 October came and went, and then Christmas, and then January … all with errant promises from us about when we’d have the game out, or even just a demo … until I felt kinda broken-hearted from it all.

And while I already seem to be playing the weepy weepy violins for myself, there was another reason it all has gone into “go slow” mode—money. For while we had the proper budget for a game coming out at the end of October, or even stretching on to the end of last year, the money began to run out after that. And so, to catch ourselves in a Catch-22, we had to get paying jobs to supplement our incomes (to BE our incomes), and then work on SteamSaga when and where we could, after work … and thus there were even fewer hours to put into the game, which meant it would go even more slowly, which … well, etc.

What does all this mean to you?
Right. So we’re still coming out with SteamSaga, and we have a fifteen-minute short-short demo RIGHT NOW (yea!) that you can download from www.quantumtigergames.com/steamsaga/SteamSagaDemoSetup.zip. It ain’t all polished up, the music isn’t necessarily what we’re using, and it’s the tiniest of story arcs, but it shows the basic elements of the game. You can play as any of the four main characters, Fighter, Healer, Thief, or Bard—in fact, you get a different experience playing as each of them! The personality changes are ratcheted up to eleven, so the tiniest actions you take have outsized effects on how the characters react to you; you can see this at the end of the demo, or possibly even in battle, if you’ve pissed someone off enough that they won’t fight for you. Or if they’ve lost all respect for your leadership abilities. And of course you can replay it to see how your choices change the way people react to you. (So really it’s more than just a fifteen-minute demo!)

At our present pace, the rest of the demo (about an hour of gameplay overall, just playing as one character) will be done by the end of the year. And then, if nothing else changes, it’ll be about a year before the whole game is complete. Yup, only two-years-plus late. Ack.

Can We KickStart It? Should We?
One option to get this moving faster is to put the project up on Kickstarter. I estimate that for every $5000 raised, we could knock a couple of months off that release date. (To a point, obviously; $25K isn’t going to get the game done yesterday.) If we were to ask for $5K, that would move the release up to maybe summer of next year; $10K would move it up to spring-ish of that year. Actually, I wouldn’t want to promise any sooner than that. But what do you guys think? Try Kickstarter? Don’t? Try something else? Please write us at info@quantumtigergames.com or visit our forums at www.quantumtigergames.com/forums/viewtopic.php?f=7&t=11 and let us know. We REALLY, REALLY value your input, so please give us any thoughts on the subject.

Thanks so much, and we’ll chat again soon!
Jeff

Posted in SteamSaga | Tagged | 1 Comment

Why AI Use the Five-Factor Model

I’m always saying, like some sort of carnival barker trying to get people to listen, that the AI used in SteamSaga is based on real-world personality theory, that our NPCs act as human as they do because of this adherence to actual psychology, and that we’re using the Five-Factor Model of human personality. I think all of two people who’ve heard this actually knew what the heck I was going on about, and so here’s a little primer as to why I chose it and what it is.

Why the Five?
As one can imagine, there have been many, many theories of personality over many, many years. It seems we’ve always wanted to explain why we act the way we do; as far back as Hippocrates there were ideas about extraversion and neuroticism. So I had a cornucopia of theory from which to choose to create my AI system; what made me choose the Five-Factor Model?

First and most importantly, it seemed about as up-to-date and accepted as is possible for a personality theory. The model was developed (although not originated) in the 1980s and 90s by several sets of researchers, including JM Digman (1989) and Lewis Goldberg (see especially 1993), and extensive research and testing have been done on it since then, most notably by PT Costa, Jr and RR McCrae.

Second, the theory gave me the sort of fine-tuning I felt would be necessary for really giving an NPC depth of feeling, ideas, and so forth. Older two- and three-factor theories only tested along a few axes and were too broad. And actually, only five factors would have seemed too broad as well—but the Five-Factor Model divides the five factors (Openness (to Experience), Conscientiousness, Extraversion, Agreeableness and Neuroticism (which neatly become OCEAN when acronymed)) into a total of 30 personality facets, and these gave me the fine-grained control I sought for AI development.
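This isn’t ExtremeAI’s actual internals, but here’s a minimal sketch of how the five factors and their facets might be laid out in code; only the Openness facets are filled in, and the 0-to-1 scores are purely illustrative:

```python
# Minimal sketch of the Five-Factor layout: five factors, six facets each,
# scored here on a 0.0-1.0 scale purely for illustration. Not the engine's
# actual data structures.

FACTORS = ["Openness", "Conscientiousness", "Extraversion", "Agreeableness", "Neuroticism"]

# Facets for one factor (the other four factors each have six facets too).
OPENNESS_FACETS = ["Fantasy", "Aesthetics", "Feelings", "Actions", "Ideas", "Values"]

# One NPC's facet scores within Openness.
npc_openness = {
    "Fantasy": 0.8,      # receptive to imagination and creativity
    "Aesthetics": 0.6,
    "Feelings": 0.7,
    "Actions": 0.4,
    "Ideas": 0.9,
    "Values": 0.5,
}

def factor_score(facet_scores):
    """One simple choice: summarise a factor as the mean of its facet scores."""
    return sum(facet_scores.values()) / len(facet_scores)

print(round(factor_score(npc_openness), 2))  # 0.65
```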

Note that, even though thirty sounds like a lot, I’m often operating with combinations of these facets to make up more complex feelings/attitudes, and so can test potentially hundreds of millions of combinations—which I doubt I’d ever need to do, really. Can you think of hundreds of millions of human attitudes? In fact, right now I test around 40 attitudes.
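Again just as a sketch, an ‘attitude’ built from a combination of facets might look something like this; the attitude name, the facets chosen, and the weights are invented, not the ones the engine actually uses:

```python
# Hypothetical sketch: a more complex attitude expressed as a weighted
# combination of facet scores. The weights and the attitude are invented.

facets = {"Fantasy": 0.8, "Ideas": 0.9, "Anxiety": 0.2, "Trust": 0.7}

# 'Curiosity' as mostly Ideas plus some Fantasy, dampened by Anxiety.
CURIOSITY_WEIGHTS = {"Ideas": 0.5, "Fantasy": 0.3, "Anxiety": -0.2}

def attitude(facets, weights):
    """Combine facet scores into a single attitude value."""
    return sum(facets.get(name, 0.0) * w for name, w in weights.items())

print(round(attitude(facets, CURIOSITY_WEIGHTS), 2))  # 0.65
```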

Note as well that I needed these to change in a realistic fashion over time (and yes, there’s some debate as to how much personality can change over a lifetime, so ‘realistic’ is a vague measurement), for I wanted the NPCs to react to changes in their environment with changes in attitude—if no longer living with the oppressive fear of that dragon on Stone Mountain, one might feel a weight has lifted, feel openness and opportunity, feel perhaps a bit less pessimistic about life. Again, having a fine-grained way in which to do this seemed imperative, for trying to make tiny changes in broad strokes would be difficult to make appear realistic, and I wanted the NPCs to be able to grow and change without outside manipulation or sleight-of-hand (e.g., ‘knowing’ as programmers that the dragon has been killed, and therefore ‘making’ the NPC say certain things, would be faking it; the NPC finding out about the dragon and making her own decisions about how to react is what I’m striving for). I guess the idea is to ultimately make the NPCs actors in their own lives, rather than puppets.*
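A minimal sketch of the dragon example: the NPC learning about an event nudges the relevant facets by small amounts, rather than a scripted line being forced on her. The event name, the facets touched, and the step sizes are all made up for illustration:

```python
# Hypothetical sketch: small, event-driven changes to facet scores so an
# NPC's outlook can drift over time. All names and magnitudes are invented.

npc = {"Anxiety": 0.8, "Actions": 0.3, "Depression": 0.6}

# What learning about an event nudges, and by how much.
EVENT_EFFECTS = {
    "dragon_on_stone_mountain_slain": {"Anxiety": -0.2, "Actions": +0.1, "Depression": -0.1},
}

def learn_about(npc, event, effects=EVENT_EFFECTS):
    """Apply an event's nudges, clamped to the 0.0-1.0 facet scale."""
    for facet, delta in effects.get(event, {}).items():
        npc[facet] = min(1.0, max(0.0, npc[facet] + delta))
    return npc

print(learn_about(npc, "dragon_on_stone_mountain_slain"))
```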

Getting all this behind-the-scenes stuff to show up in a world of tree-delineated conversations is difficult, I’ve discovered; imagine trying to describe some complex emotion in a foreign language in which you know only three sentences. So output is still a bit of a challenge, until I get to something approaching natural language or trees complex enough to give the NPC enough range. But this has little to do with the personality model chosen. It’s more a gripe. So much to say, so limited the means to do so.

So What Is the Theory?
Right, I’ve started this already, but basically (very basically) the Five-Factor Model uses the OCEAN factors as general areas of personality. Openness is willingness to seek out new experiences; Conscientiousness is about organisation, control, and goal-directed behaviour; Extraversion is outwardly directed social energy; Agreeableness includes things like compassion, trust, and modesty; and Neuroticism includes anxiety, depression, hostility, impulsiveness, and the like. There are six facets in each factor; for instance, within Openness are Fantasy, Aesthetics, Feelings, Actions, Ideas, and Values, each of which represents a finer-grained kind of openness. High marks in Fantasy would mean one is receptive to imagination and creativity stemming from that; low marks would give one a very solid grounding in the real world. (For a complete list, see various sources online.)

Speaking of the real world, there’s a test for all this developed by Costa and McCrae, the NEO PI-R. If one had the inclination, one could be scored on all this and have one’s personality evaluated. I haven’t, except on a sample test. (And no, I’m not telling.)

That’s the total nutshell version. I’m not sure how much further to go here, as I don’t want to go all wordy and academic, but there are many, many scholarly sources, both online and in libraries, if you’re intrigued by all this. Or feel free to ask me directly 🙂

*Of course, given that in a game the NPCs are there for storytelling reasons, and thus have to say and do certain things to make the plot advance, there will always be a certain amount of puppeteering. But I do want them to have the potential to live their own lives.

Posted in Artificial Intelligence, SteamSaga | Tagged , , , , | Comments Off on Why AI Use the Five-Factor Model

AI Work on SteamSaga

So I go on and on about personality AI, but what does this do for our RPG, SteamSaga? Make it more fun? Make the NPCs have more fun? Sow the seeds for Skynet? *mad scientist laughter, fading*

The main way you’ll see the AI at work is through interacting with the NPCs, which happens in two ways: through talking to them and through actions (such as fighting them, or even fighting alongside them). Our dialogue will still use choices (I haven’t developed natural language processing in the last few weeks!), but most conversations will offer more choices than you see in other games (well, if you have choices at all in those games); you’ll see a “group” of possible choices, such as “Answer honestly …”, “Actions, not words …” and so forth, as in the screenshot below.

[Screenshot: AI_Speech_groups (the grouped dialogue choices)]

Clicking on one of these options gives you another set, the actual words you’ll say (or action you’ll take). For instance, if you’d chosen “Answer honestly …”, you’d see something like the following:

[Screenshot: AI_Speech_actual_response (the specific lines available after choosing a group)]

Depending on your prior interactions with this NPC, or on how you interacted with this NPC’s friends (who may have told her all about what you’ve said to them, and how they feel about it), you may not even get this set of options; the conversation may go very differently. Or you may get this set of options, but how the NPC interprets your speech or actions may send the conversation down a quite different path. And the NPC will remember this conversation, and how she felt about it and you, the next time you interact. (And of course she may report these feelings to her friends.)
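To give a flavour of what ‘she’ll remember, and may tell her friends’ could mean under the hood, here’s a hypothetical sketch; it isn’t SteamSaga’s actual code, and the NPC names and numbers are invented:

```python
# Hypothetical sketch of conversation memory plus gossip: an interaction
# shifts an NPC's opinion of the player, and a diluted version of that
# shift is passed along to her friends. Names and numbers are invented.

npcs = {
    "Mira":  {"opinion_of_player": 0.5, "friends": ["Tomas"]},
    "Tomas": {"opinion_of_player": 0.5, "friends": ["Mira"]},
}

def interact(npcs, npc_name, impression, gossip_factor=0.3):
    """Record an interaction (-1.0 to 1.0) and spread a weaker echo to friends."""
    npc = npcs[npc_name]
    npc["opinion_of_player"] += impression
    for friend in npc["friends"]:
        npcs[friend]["opinion_of_player"] += impression * gossip_factor

# Player answers Mira dishonestly; Tomas hears about it later.
interact(npcs, "Mira", impression=-0.4)
print(round(npcs["Mira"]["opinion_of_player"], 2),
      round(npcs["Tomas"]["opinion_of_player"], 2))
```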

In addition to your conversations with others, you have conversations with yourself! You’ll have to play the game to find out why you have this “inner voice”, but it will give you occasional insights and suggestions as to what it thinks you should say or do in a given situation. Whether you follow this advice or not is up to you, but it may eventually lead to consequences (rewards or penalties) for you and your party. The “inner voice” will also remember how you’ve reacted to it throughout the game, just as NPCs do. You can see examples of this voice in the upper-right of the two previous screenshots.

Finally (well, not necessarily finally, but finally for this discussion), in battle you will control the actions of your group—attack, cast spells, fall back—and the NPCs with you will not only notice this, but give you feedback. For instance, the Healer is not particularly thrilled with being sent to the front lines, and she will let you know this. Eventually, if you continue to do things she doesn’t like, or if you’ve treated her badly in other situations, you may ask to be healed and get a response you don’t like … *

[Screenshot: AI_Speech_battle (the Healer reacting in battle)]

This sort of rebellion won’t be easy to create, and of course you might be so nice to everyone that you get every effort and nicety out of your group, but remember that your actions have consequences—and that making one person happy might make someone else not so pleased. After all, these are individuals, and they react differently. There’s nothing to stop them from being jealous of your lavish praise of someone else, or from disagreeing with your tactics and coming to think of you not so much as a leader but as a quivering mass of contradictions and cowardice, or somesuch.
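And one last hypothetical sketch, this time of the battle-feedback idea: whether the Healer actually complies with an order could depend on her current attitude toward you. The threshold, the attitude value, and her lines are all made up here:

```python
# Hypothetical sketch: an order in battle is only followed if the NPC's
# respect for the player is high enough; otherwise you get attitude instead.
# The threshold, values, and dialogue are invented for illustration.

healer = {"name": "Healer", "respect_for_player": 0.25}

def request_heal(npc, threshold=0.3):
    """Comply or refuse based on how the NPC currently feels about you."""
    if npc["respect_for_player"] >= threshold:
        return f"{npc['name']}: 'Hold still, this will sting a little.'"
    return f"{npc['name']}: 'Heal yourself. You sent me to the front lines.'"

print(request_heal(healer))
```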

In these ways (and others!) SteamSaga will be different from other RPGs, with more depth and replayability. And it has nothing at all to do with Skynet.

* Note in the screenshot she’s been hit twice in quick succession, and hence her repeating the phrase about being singed. Going to fix that …

Posted in Artificial Intelligence, SteamSaga | Tagged , , , , | Comments Off on AI Work on SteamSaga