How AI Don’t Speak Like a Robot

I often go on about how my AI characters can’t just speak to me in their own words, and how, if they only could, it would really demonstrate the capabilities of my Extreme AI personality engine (and possibly let my AI tell me just what it thought of all this). While that’s an admirable end goal, spending a little time working on even simplistically adaptable text is more productive than just whining about the lack of full natural language processing, so I’m going to write a little about my explorations in trying to get speech to “work” in my SMMG (that’s “Sports Management Mystery Game” to those not yet in the know).

By “work” I mean a kind of semi-procedurally generated text that is both created for reports (say, when your AI head coach tells you about the team’s practice that week or when the AI press writes up how your team did in actual games) and influenced by the reporter’s personality. The text shouldn’t read like it’s being said by a robot, and it has certain beats to hit in order to advance the plot of the game and give you (the Player) the information you need to play. (I’ll also explore how best to save these kinds of conversations, and any emotional ties to them, in the characters’ memory using RealMemory, but that’ll be in a later article, I think.)

I’m sure I’m approaching this from a fairly simplistic, naïve POV, as procedurally generated text isn’t my expertise; if you’re looking for that, definitely visit Emily Short’s blog and other similar sites for a deeper insight. (I have, but I know I’ve only scratched the surface; plus, I’ve only researched far enough to be able to implement the text in my specific game situation. Part of trying not to get lost in a sea of interesting information and never managing to complete my game, lol.)

Let’s take the example of Practice. In the SMMG, each week you can have your team practice (or not, but that leads to out-of-shape players, resentment, and losses). Each of your players can focus on a specific skill (one of six, such as “catching” or “running”) or have a more general practice that spans the lot. The results of this practice can range from “great” (definitely going to help in a real match) to “terrible” (practice has managed to make the player worse). As manager/owner, you find out about these results from the head coach. (You can also get an idea of which kinds of things a player is good at/enjoys doing from the results.)

Originally, I had the coach tell you results using a simple iteration for each player that evaluated each area in which a player practiced and gave the result, such as (for a general practice covering all kinds of skills):

DotsPlayer1:
Did great at running. Did well at catching. Did terribly at kicking. Did average at run defense. [etc]
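
For the curious, the logic behind that first version was about as blunt as it reads. Here’s a minimal sketch (in Python rather than the game’s actual code, with invented names like skill_results standing in for the real data structures):

```python
# First pass: one canned sentence per skill, in practice order. Reads like a printout.
RESULT_PHRASES = {
    "great": "Did great at {}.",
    "good": "Did well at {}.",
    "average": "Did average at {}.",
    "terrible": "Did terribly at {}.",
}

def robotic_report(player_name, skill_results):
    """skill_results: dict of skill -> level, e.g. {'running': 'great', 'kicking': 'terrible'}."""
    sentences = [RESULT_PHRASES[level].format(skill) for skill, level in skill_results.items()]
    return f"{player_name}:\n" + " ".join(sentences)

print(robotic_report("DotsPlayer1", {
    "running": "great", "catching": "good",
    "kicking": "terrible", "run defense": "average",
}))
```

Run it and you get exactly the sort of printout above.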

This gets the information across, but that’s about it. It doesn’t sound like a person is telling you this, and it certainly doesn’t sound like a person with any sort of personality is saying any of it. I guess it’s a kind of procedural text generation, but … well, not really. Actually, to me it reads like a computer printout on an old dot-matrix printer. I can almost hear the sound … rat-a-tat whir, rat-a-tat whir …

Next I decided that if my text were to sound more human, it should be processed more the way a human processes information, rather than just being spit out linearly. My second iteration therefore did a little preprocessing, in a limited way, of the information the coach was about to give you. She would actually “think” about a player’s overall practice before saying anything: in code, that meant building lists that tracked which skills were done to what effect (a simple list of strings called “practiceGreat” holding every skill practiced at that level, plus similar lists for each of the other levels). Now we have a choice: the coach’s report can either be robotic as before, or (by adding a simple grammar parser as well) it can say something like “DotsPlayer1 did really well in running, catching, and kicking, but was terrible at pass defense and run defense. Otherwise, was very average.”
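
In sketch form (Python again, not the game’s actual code; practiceGreat becomes one entry in a dictionary of lists here, and the helper names are just for illustration), the “think first, then speak” version looks something like this:

```python
from collections import defaultdict

def think_about_practice(skill_results):
    """Group skills by how well they went, before saying anything about them."""
    by_level = defaultdict(list)   # e.g. by_level["great"] -> ["running", "catching"]
    for skill, level in skill_results.items():
        by_level[level].append(skill)
    return by_level

def join_list(items):
    """['running', 'catching', 'kicking'] -> 'running, catching, and kicking'."""
    if len(items) == 1:
        return items[0]
    if len(items) == 2:
        return f"{items[0]} and {items[1]}"
    return ", ".join(items[:-1]) + f", and {items[-1]}"

def coach_report(player_name, skill_results):
    by_level = think_about_practice(skill_results)
    parts = []
    if by_level["great"]:
        parts.append(f"did really well in {join_list(by_level['great'])}")
    if by_level["terrible"]:
        parts.append(f"was terrible at {join_list(by_level['terrible'])}")
    if parts:
        sentence = f"{player_name} " + ", but ".join(parts) + "."
    else:
        sentence = f"{player_name} had a thoroughly average week."
    if by_level["average"]:
        sentence += " Otherwise, was very average."
    return sentence

print(coach_report("DotsPlayer1", {
    "running": "great", "catching": "great", "kicking": "great",
    "pass defense": "terrible", "run defense": "terrible", "blocking": "average",
}))
```

Running that on DotsPlayer1’s week prints the sentence quoted above.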

This is much better. It reads like real sentences, especially the sort of thing someone would write in a report. It does become kind of repetitive when written out for several players in a row, though. And it also still doesn’t take into account personality: every AI coach on every team would say the same thing the same way, which (since in this case you don’t know what any other coach said to his or her respective manager) may not be a big deal, but doesn’t make the text generator very adaptable. Plus, it doesn’t take into account change in your coach’s personality or attitude toward you over time.

In terms of just making the sentences a little more variable and realistic, one can add some simple code (just checking how many times she’s talked about a general practice already, and adapting the opening) to create sentences like these:

DotsPlayer1:
Practiced everything. Did really well at kicking, and made many good choices. Otherwise very average.

DotsPlayer2:
Also practiced everything. Did really well at pass defense and kicking, and had a good week defending against the run. Otherwise average.

DotsPlayer3:
Had a go at everything. Did well at running and catching, but was terrible at kicking. Otherwise average.

This kind of thing just requires conditional statements that take into account how many players have already been described, plus maybe a random element as we get further down the list. For one report, this works great, actually. I could even “add” personality quirks to the way things are said, all hard-coded. But how lovely it would be to make it a bit more automated, to give the coach the ability to take this information and speak for herself in a limited way, without my having to hard-code every block, set of conditions, and personality variance possible throughout the game.
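
A rough sketch of that conditional-plus-random idea (the openings and cutoffs here are placeholders I’ve made up for illustration, not what’s actually in the game):

```python
import random

# Openings loosen up as the coach works her way down the roster.
GENERAL_PRACTICE_OPENINGS = [
    ["Practiced everything."],                                    # first player described
    ["Also practiced everything."],                               # second player
    ["Had a go at everything.", "Same again.", "Did the lot."],   # later players: random pick
]

def opening_for(players_described_so_far):
    bucket = GENERAL_PRACTICE_OPENINGS[
        min(players_described_so_far, len(GENERAL_PRACTICE_OPENINGS) - 1)
    ]
    return random.choice(bucket)

for i, name in enumerate(["DotsPlayer1", "DotsPlayer2", "DotsPlayer3"]):
    print(f"{name}:\n{opening_for(i)} ...rest of the report...\n")
```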

Using ExAI, one can check the coach’s current feelings and attitudes toward the Player (you), and her current overall personality, very easily. The difficulty is in knowing which personality facets will make her say which kinds of things.

Luckily, in reading Emily Short’s blog I came across links that led me to a series of articles on the Personage project (a project of Francois Mairesse and Marilyn A. Walker; see refs at end). Between this and works referenced in my own Master’s project (Costa and McCrae, 1995; John et al., 2010; and Saucier and Ostendorf, 1999*), I was able to create a more research-based connection between a character’s language and her Big Five personality traits (I say “more” because I don’t think you can predict with certainty what someone will say, or what sentence structure they will use, etc., but you can make generalizations that help when determining what an AI character will say in a limited situation such as this one). Averaging our coach’s underlying facets to give us her Big Five, we get:

  • Openness 43
  • Conscientiousness 71
  • Extraversion 69
  • Agreeableness 53
  • Neuroticism 33
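
(That averaging step is nothing fancy. A quick sketch, assuming the facets map onto the usual six-facets-per-trait NEO layout; the individual facet numbers below are invented purely to land on the coach’s Extraversion score of 69.)

```python
# Hypothetical facet scores for the coach's Extraversion, averaged into a single trait score.
extraversion_facets = {
    "warmth": 72, "gregariousness": 64, "assertiveness": 75,
    "activity": 70, "excitement_seeking": 61, "positive_emotions": 72,
}
extraversion = sum(extraversion_facets.values()) / len(extraversion_facets)
print(round(extraversion))  # 69
```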

There are many very complex ways to look at this (see the 59-page Personage paper), but in a really general way for our purposes here, we can say the following:

  • Mid-range Openness shouldn’t have as much of an effect as other, more extreme scores.
  • High Conscientiousness should lead to a high ratio of positive to negative emotion words, good information, and getting straight to the point. It would also indicate fewer swear words, more hedges, and longer words.
  • The high Extraversion score would indicate less-formal sentences, few tentative phrases, few hedges/softeners, few errs and ums, more near-swear words, verbal exaggeration, shorter words, and a less-rich vocabulary.
  • Mid-range Agreeableness shouldn’t have as much of an effect as other scores.
  • Low Neuroticism should lead to calmness, few conjunctions, few pronouns, many articles, less exaggeration, and again shorter words plus a less-rich vocabulary.

Some of these work against one another, and one could create a complex series of weights to figure out what wins out/how they affect one another, which I believe the Personage project does (I can’t actually get the project files themselves to work, unfortunately). For my simple solution, my overall take on this is that the coach would get straight to the point in her reports, would tend to vary her word choice but avoid overly negative phrasing, and wouldn’t be enamored of long sentences or flowery language. She generally wouldn’t hesitate or use verbal placeholders like uh, um, and er. Interestingly, this is basically what I’ve already written for her, although maybe the sentences can get a bit long with the grammar parser (so maybe my writer’s ear for this sort of thing is working 🙂 ). She’s not one for varying her vocab a whole bunch, so maybe the repetition in the phrasing is workable—although I still don’t want her to sound un-human.
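
To make that “overall take” slightly more mechanical than just my writer’s ear, one simple (and very much not Personage-grade) option is to boil the Big Five scores down into a few style switches that the report generator checks before choosing its phrasing. A hedged sketch, with the thresholds and switch names invented for illustration:

```python
def style_from_big_five(o, c, e, a, n):
    """Map 0-100 Big Five scores to rough speech-style settings.
    The cutoffs are guesses loosely inspired by the Personage findings, not gospel."""
    return {
        # High Conscientiousness: get to the point, lean positive.
        "get_to_the_point": c > 60,
        "positive_spin": c > 60,
        # C pushes hedges up, E pushes them down; crude rule here: E wins if both are high.
        "hedge_chance": 0.2 if (c > 60 and e <= 60) else 0.05,
        # High Extraversion: informal, exaggerates, no ums and errs.
        "formal": e < 40,
        "exaggerate": e > 60,
        "filler_chance": 0.0 if e > 60 else 0.1,
        # High E or low N both point toward shorter words and plainer vocabulary.
        "rich_vocab": e < 60 and n > 40,
    }

coach_style = style_from_big_five(o=43, c=71, e=69, a=53, n=33)
print(coach_style)
# -> gets to the point, stays positive, barely hedges, informal, exaggerates a bit, plain vocab
```

The generator can then consult these switches when deciding, say, whether to open with a pleasantry or jump straight to DotsPlayer1’s kicking.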

However, in the interest of a little variation in her speech, especially over the course of many weeks, I could vary the probabilities of using one of a limited number of phrases or words, still keeping to the general rules above. And, of course, if I were using this for a character with high verbal variability, I’d need to provide more opportunities for varied speech: varied phrasing, types of words, etc. And if I were to try to use the same set of code for several characters, I’d need even more complexity. But that would start to be an engine unto itself, and that’s a whole other ball game.

Also, when using this type of variability even for a single character but over a long period (such that her personality could change over time, as in ExAI or real life), you’d need to have her speech adapt to these changes; e.g., if she became more extraverted, those qualities associated with extraversion would become more pronounced (or would be likely to become more pronounced).
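
Here’s roughly what I have in mind, as a sketch (the phrase list and weights are placeholders, and the personality value is a stand-in for whatever I end up pulling from ExAI each week):

```python
import random

# Ways of saying "did well", from flat to exuberant.
WELL_PHRASES = ["did well at", "did really well at", "was brilliant at", "absolutely nailed"]

def pick_well_phrase(extraversion):
    """extraversion is 0-100 and re-read each week, so a drifting personality shows up in speech."""
    exuberance = extraversion / 100.0
    # More extraverted -> more weight on the exaggerated options.
    weights = [1.0 - exuberance, 1.0, exuberance, exuberance * 0.5]
    return random.choices(WELL_PHRASES, weights=weights, k=1)[0]

print(pick_well_phrase(extraversion=69))   # the coach as she is now
print(pick_well_phrase(extraversion=85))   # the same coach after a hypothetical extraverted streak
```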

Yikes! I seem to have gone a bit overboard here; I wasn’t trying to write a paper of my own! I’ll save the discussion of press reports for another time.

‘Til later!

(As always, comments on this blog should either be tweeted to me @QuantumTigerAI or emailed to one of QTG’s contact emails. The actual comments section has been overrun by spam bots, as has my Forum, and I don’t have time to weed through them every day.)

*Full refs are as follows:

Costa, PT, Jr and McCrae, RR 1995, ‘Domains and Facets: Hierarchical Personality Assessment Using the Revised NEO Personality Inventory’, Journal of Personality Assessment, vol. 64, no. 1, pp. 21-50.

John, OP et al. 2010, ‘Paradigm Shift to the Integrative Big Five Trait Taxonomy: History, Measurement, and Conceptual Issues’, in OP John et al. (eds), Handbook of Personality: Theory and Research, 3rd edn, Guilford, New York.

Mairesse, F and Walker, MA [n.d.], ‘Can Conversational Agents Express Big Five Personality Traits through Language?: Evaluating a Psychologically-Informed Language Generator’, available at https://pdfs.semanticscholar.org/d6a0/e683ea321dcfcd52c9be78180079ccaeb424.pdf.

Saucier, G and Ostendorf, F 1999, ‘Hierarchical Subcomponents of the Big Five Personality Factors: A Cross-Language Replication’, Journal of Personality and Social Psychology, vol. 76, no. 4, pp. 613-627.

Short, E [various dates], Emily Short’s Interactive Storytelling, blog available at https://emshort.blog.


What Am AI Building? An Update on My Sports Management-Mystery Game

When I started creating my still-nameless sports management game, it was with the idea that it would be a good demo for my AI personality engine, Extreme AI. In fact, that was really the whole point of it (well, almost; I also wanted to actually finish an entire game of my own, no matter how simple). But somehow the characters in it started to make it more complex and more complicated, and with these very different managerial personalities in the mix, the game itself has morphed from just a sports management game into a mystery to be solved as well. A Sports Management Mystery Game, if you will (SMMG, because, you know, acronyms).

It’s funny, because this is kinda the opposite of my first game project, SteamSaga, which was built to take advantage of ExAI but was from the outset supposed to be a complex tour de force putting my company on “the map” (I think someone sold me the wrong map, lol). It was never completed and got bogged down in missed milestones and changed directions (really necessary changes, but still), and the experience reminded me of all those writers who think they’re going to rush out and hit it big and make a living from their bestselling books without ever having to grind it all out, because they’ve got something special, see, and, well, yeah, fifteen years later it’s a hobby and they eventually self-publish (gasp!) a self-designed book of short stories with a terrible cover image of some guacamole superimposed on a barrel as a way to at least get their work out there somewhere. Yup. (To have done that with writing and then turned around and done it again with game development … well, I’m certainly ambitious, anyway.)

Now older and at least occasionally more relaxed, all I really want out of my SMMG is to a) complete it; b) use it to demonstrate what ExAI and RealMemory can do for/in a game, both to myself and to others; and c) have a few people play it and enjoy it. Plus I’d like to grow as a developer and storyteller. But that’s the case with any project, isn’t it? At heart?

Part of this growth is realizing that there are limitations to creating a project like this entirely on one’s own, especially given that I’m trying to create realistic characters with differing gender identities, personalities, and backgrounds, so I’m hoping for more input on the beta version than might normally be the case—ideas about character dialogue, artwork, UI, etc, etc—in exchange for giving credit to whoever ends up contributing. (As I don’t expect the game to make much money, I can’t offer a monetary reward, unless someone wants a 1% cut of whatever diddly profits the game itself makes.) I guess I’m hoping for some sort of crowd-building experience, or at least some kind of crowd-contributing experience. Even from talking to one other person about the game, I’ve received some awesome storytelling advice. Hopefully more people = more awesome ideas, right? (I realize I’m being an optimist here, and that sometimes more != better … I mean, I’ve been a magazine editor; I know what can come slithering its way through the slush pile … but the occasional gems! And I’m trying to embrace some hope, rather than what has become the usual darkness.)

Another part of this growth is not creating an absurd timetable for getting it done, so at the risk of sounding so laid-back I make Jerry Garcia look agitated, I’m not going to even give a release date for the beta, let alone the full game. Soon for the beta? Soon-ish? It depends on what my day-job throws at me. But when it’s up there on the website for all to see and download and comment upon, I’ll let you know.

And thanks for listening! Even though I didn’t actually describe what I’m building, not in any detail. Next time! “Soon …”

(As always, comments on this blog should either be tweeted to me @QuantumTigerAI or emailed to one of QTG’s contact emails. The actual comments section has been overrun by spam bots, as has my Forum, and I don’t have time to weed through them every day.)


Agile AI All by Myself

Just came across this old post in my master’s thesis project blog. The blog itself was part of an adaptation of Agile development to the needs of a single developer. As I remember, scrums were a little lonely … but then, I’ve always been way too good at talking to myself! Anyway, pasted below just for the fun of it.

Agile Development?
I’ll be using a modified form of the Agile development method for this project. I say modified because the development team is a team of one, and therefore there will be no pair programming (unless I talk to myself, which I do frequently) and scrums will be equally singular. There are many Agile processes that are useful regardless of limited team size, however.

The Scrum, for instance, will be modified to include sitting down each morning and looking back at what was done the previous day, looking forward to that day’s work, and taking a moment to rehash problems and decide what to do about them (e.g., research further for help, ask the project supervisor for direction, etc.). This will be done explicitly from here on through the blog (thus the blog will move from weekly to ‘whenever a workday occurs’).

I’ll be working in iterations determined generally by the Gantt chart, but modified as necessary by day-to-day needs. Each weekly Gantt objective (time frame or timebox) will be reviewed at the beginning of the week and divided into daily activities and goals, to be modified as necessary as each day dawns. For instance, this week’s discovery that doing a bit more research into the Big Five facet descriptions would be helpful led to a modification of daily goals (it didn’t alter the week’s goal, however).

The Gantt chart will, in effect, show/be the project backlog, along with modifications made through the scrum and written about on the blog. In this case, as the only ‘team’, I will sign up to do everything. Each week (or so) will be a sprint (depending on what needs to be done that week), and the backlog (Gantt chart) will be adjusted/reprioritised after each sprint. Each sprint may not, in my case, produce a working piece of software (given that I need to do each phase of development myself); the sprints will instead cover sections. For example, creating a working database was this week’s sprint, and even though that isn’t a complete piece of software, it was a definable piece of the backlog.

The Gantt chart will be modified as necessary as work moves along, in keeping with Agile’s emphasis on responding to change over slavishly working to a plan. The overall Gantt chart is about to be modified in just this fashion to cover some items not originally included (primarily allowing time for writing up the project) and to create iterations of the working software so that it isn’t aimed at being complete only in time for testing; there will be iterations of working software as often as possible.

Priority will be given to working software and the business value derived therefrom; in this case, business value pertains to the dissertation itself, so it might be better termed ‘academic value’.

I will discuss other methods that could have been used and why I chose Agile over them in the project paper itself.

BTW—duh! I kept thinking iterative development was separate from Agile, but Agile incorporates iterations as well. Not sure what I was thinking of; something between waterfall and Agile, based on my memory of Games Dev module notes or slides.


AI make the fake / AI spot the fake

I was reading the other day about AI being used to create fake images and videos so real that people wouldn’t be able to tell they were fakes. For instance, AI can create fake people (this article from The Verge), or even fake events (the University of Washington project described in this Engadget article and another Verge article). And creating fake reviews on Yelp (only a step away from making up news stories as well) seems pretty easy for AI (as in this Scientific American article).

But what about using AI to spot fakes—fake photos, videos, fake news?

Many of these same articles point to the fact that AI can be used to spot as well as produce fakes. However, the machine-learning algorithms used by Yelp to detect fake reviews had a hard time figuring out which AI-generated reviews were fake (as did humans). Fox News (of all things) has an article that’s a little more hopeful (“How AI fights the war against fake news”), in which various means of detecting false news are described, such as semantic processing (finding typically used keywords for fake or sensationalist stories) and rating sites as more or less trustworthy. (Amusingly, this Fox story is interspersed with headlines for the usual “AI-as-Michael-Myers” stories, with such phrases as “save humanity” and “killer robots.”) But, at best, it seems we are in for a fake news arms race in which good and evil AI try to outsmart one another.

I have a feeling that combinations of data-driven, deep-learning AI and elements of “more human” AI (such as personality and emotion engines) will end up being the best tools in this fight—but probably for both sides. Deep learning can take massive amounts of data and find patterns in it, and thus can train for finding fake news by comparing it to other fake news (to oversimplify). The personality and emotion engines can tap into the human element (for instance, how fake news makes people react emotionally, or how it makes people with certain personalities react differently than others). And both kinds of AI can work tirelessly at their task.

Which is something that humans, for all their intuition, cannot do.


AI Pollsters

Polling has gotten a bad name.

It used to seem that we were moving toward a system in which pollsters would know, with certainty, the results of an election before election night. Pundits seemed to just be waiting to come on the air the moment they were allowed to do so, just after polls closed, and let us know who’d won.

Not anymore.

The first recent bit of punditry that I remember getting it wrong was the networks calling Florida for Al Gore in 2000. I remember it looked strange to me as I watched TV that night—nowhere near enough of the vote was in to actually be calling anything. And sure enough, as the evening went on, the race got closer and closer, and then … suddenly no one was calling anything anymore.

That evening could be seen as an aberration, as it was an extremely close (and perhaps never actually resolved) election. And certainly the US elections in 2008 and 2012 went according to polling—but those were pretty easy calls.

More recently, it seems that, despite living in the time of “big data,” when more is known (supposedly) about us than ever before, polling has become more and more shaky. The certainty has all but disappeared, and while there seem to be more polls than ever, they’re all over the map, predicting wildly differing outcomes. I’m thinking of the British election in 2015, when the Conservatives won a majority in what was supposed to be a hung parliament; Brexit; the US elections in 2016; and Theresa May’s unfortunate decision to call another general election in 2017, losing the Conservative majority and nearly her PM-ship.

The thing is, polls only take in people’s actual answers—about not only their preferences, but also race, ethnicity, even gender. And any of these can be faked over the phone (either on purpose or not). In the US presidential election, the so-called “afraid to tell you I’m voting for Trump” voters were a prime example.

So for all the data in the world (Cambridge Analytica or Google or whoever), you’re only as good as the humans who give you that data.

Personality AI could potentially give polls a way to factor in the “human element” by being more human itself. It could estimate, from personality types and from run-throughs of people’s actual answers, what the true, underlying vote might be—and how many people might not vote at all, even though they say they will. It could possibly even estimate how to get those non-voters excited enough to vote.

Of course, as with anything related to AI (or science, or emotions), there’s the potential for misuse (or ethically foggy use), but if used strictly to interpret polling data, personality AI would be providing a service to those of us who want to know how close our elections actually are.


AI am Everywhere … Even Where AI’m Not

Up until recently, if asked what I was working on and I answered, “artificial intelligence,” I could pretty well predict the direction of the conversation from there: Is it like the replicants in Blade Runner (or Skynet in Terminator)? Or chess- or Go-playing AI like IBM or DeepMind make? Or, occasionally, a person familiar with video games might wonder if I’m working on pathfinding algorithms or flocking behaviors for enemy characters. But now there’s a different trend; now, if I say I’m working on AI, it’s as if I’m a chef who’s declared his menu is “food.”

AI has become huge in the last few years, not so much in a cyberpunk kind of way but instead as an integral part of, well, stuff. Lots of stuff. Phones have AI. Houses do. Cars do. Stuffed animals do. Nearly every industry does, from manufacturing to finance to marketing. Basically, anything that can use data to determine its course of action is said to have, at least on a rudimentary level, AI.

In some ways this is because AI has really made leaps and bounds in the last ten years or so, going from a sort of dead end (where’re the human-like robots we were promised in the 1960s?) to interpreting language, determining the best way to build an object (or at least helping), helping various industries interpret their data to become more efficient and effective, and, of course, beating human players at Go. But it seems AI has also benefited from an incredible marketing makeover—the movers and shakers in the field are often savvy in all the most modern methods of advertising, using social networks to create buzz, streaming video to pique interest, and even using “old-fashioned” but still effective routes like TV and print ads and getting themselves mentioned on the news. This image makeover/explosion has made AI the next “cloud,” and the term has become almost more buzzword than substance in some cases. Marketers for products that have very little to do with any kind of advanced AI (and certainly nothing to do with Roy Batty) are touting toasters, thermostats, and even clothing (well, maybe not yet, but it’s coming) as having the latest cool “AI” to help them do their jobs. In some cases they’re almost right; a thermostat could be said to have a very limited form of, errr, intelligence when it comes to knowing when to turn on the heat. Maybe. But more and more, I’m seeing “AI” bandied about just to give things cachet. And it’s becoming very annoying.

However, one interesting (good? bad?) thing that will likely come of all this is that the public’s perception of AI will change from one of science fiction/horror (it’ll eat us all!) to one of acceptance of AI in our daily lives (even though, ironically, at present it’s more bluster than substance). Maybe this is why folks like Elon Musk are shouting the most outrageous things they can think of to warn us that the sky (Skynet?) is falling, just to get our attention. But in some ways I see the overall blunting of histrionic reactions as a good thing. It’s not that I want us to just accept our new robot overlords as the latest “in” thing, but I would like to be able to have a conversation about AI that isn’t overlaid with emotion and knee-jerk reaction.

I’m not sure, though, that this is where we’re going. It isn’t that I’m going to be able to have a different, better conversation about AI; it’s very possible that in the future, when I say I’m working on AI, the reaction will be one of boredom, of “Who cares?” And that’s maybe the worst reaction of all.


Raising Your AI to Be Ethical and Unbiased (You Know, Like Humans Should Be)

I tend to think of AI as a fresh start, as a way to explore the best parts of ourselves and to create algorithms that help advance humanity in positive ways. I realize that view is fraught with problems, however, ranging from questions of meaning—Who defines the “best” parts of ourselves? Who defines “positive ways”?—to questions of actual intent or impacts. It would be foolish to assume that all scientists are trying to create “good” AI and not “evil,” or that even the best intentions don’t have a chance of turning into Skynet.

Another challenge, especially in personality AI (and associated “make it human” AI specialties), is whether one can make the AI too human. Recent research, such as that by Zhao et al., indicates that allowing AI learning algorithms to go to school by sifting through words and images on the Internet only teaches the AI to be as biased as humans are; e.g., to associate females with cooking and shopping. Other research has found racial bias as well (see Thorpe for examples). This only perpetuates gender and racial stereotypes and biases—not something we would count among humans’ best qualities.

So what’s the solution? To raise our AI the way we would hope to raise our children: as unbiased and accepting of the full diversity of human beings, whether in gender, race, sexuality, culture, etc., etc. How “raising our AI” works in the world of programming as opposed to the world of diapers is an open question; however, I think the similarities outweigh the differences. For instance, you would hope that your children would be exposed to many different kinds of people in their everyday lives, and that this would help them become less biased and more accepting of others. AI learning could occur the same way: the AI could be trained on diverse data that better represent human diversity. Or, perhaps, the best of human diversity. Ethically, that might be seen as manipulating data so that we get a desired result, which is true; in this case, however, we’re trying to educate our AI in a forward-thinking, what-we-wish-humans-themselves-to-be kind of way.

Which, looking at it, almost looks like a kind of do-gooder desire to play god. However, would you rather we developed AI that was prejudiced? That represented the worst of our biases?

More articles on this:
Devlin, “AI programs exhibit racial and gender biases, research reveals”
Kleinman, “Artificial intelligence: How to avoid racist algorithms”


AI Have No Comment

Due to an absolute overload of spam (15,000+ spam comments), I won’t be allowing any further comments directly on this blog–but I’d still love to hear from you! If you have something non-spammy to say about anything I’ve written, please go to my Forums.

Note that if you’ve tried to leave me a message through this blog, it’s been lost in all the other spam and won’t be seen. Sorry!

Jeff


Why Am AI Extremely Happy?

Because my ExtremeAI personality engine has been accepted by the Unity Asset Store!

I’ve decided that, once again, the engine should be available to other developers for use in games besides SteamSaga (it was available for sale from the QTG website ages ago, but I took it down when we started developing our RPG). Why? For one thing, I’d really like to see what everyone can do with it, and what kinds of interesting NPCs are created using ExtremeAI. For another, SteamSaga seems to be on the longest road ever to final development, and I don’t want to hold up potential uses of the AI system any longer.

You can find it at http://u3d.as/bCj. It would seem to be the only one of its kind up there; most of the other AI is for pathfinding or changes in behavior (not behavior as in personality, but behavior as in state changes, such as patrolling/waiting/attacking/following).

Now to actually set up the QTG website to allow direct purchases again 🙂


What’s This? An Update?

(This is a long update, so in summary … there’s a (short, rough) demo available! And the game is gonna take a veeerrry long time to complete without additional funding, like Kickstarter or something. And we want your opinion on the matter! But please, read on …)

As you may have noticed, we’ve dropped off the edge of the Earth for the last couple months or so. What is our tale of woe? Gather round SteamSaga’s low-poly fire, my friends.

Back in 2013, we were making the rounds at conventions, some of us [i.e., me] far too confident in our ability to get an RPG (short, yes, but pretty darn complex, with a brand new AI system and 3D art) done by our original release date of 31 October—a mere six months after properly getting started. The schedule looked lovely and practical, all laid out in Microsoft Project, and I was seduced by an incredibly good start to our venture. Characters were leaping off the page and into the computer, art assets went from moodboard to sketch to 3D animated creatures in barely more than the blink of an eye, AI systems all go … all looked good … and then …

Well, then reality hit. Going to the conventions gave us some valuable feedback, along with a very heartening flood of goodwill and interest (thank you!), and we wanted to adjust a few things. (We also met with a Sony rep, and that also gave us good feedback … and more adjustments we wanted to make.) For example, the story I’d written never really grabbed anybody but me (that being the story on the backs of the postcards we were handing out); I think I had a case of 1980’s fantasy mixed with a shot of old-style D&D, along with hefty amounts of Final Fantasy—none of it bad, but not exactly new, either. Then I played some very nifty indie games (including Digital: A Love Story by Christine Love, and Blendo Games’ Thirty Flights of Loving), and Epiphany! I was released from the Old Ways. (Which is funny, since my AI system is all shiny and new … must’ve been using all my creativity for that …) So we asked for some extra help on the writing front, and a new, cool, mysterious, fun (funner?) storyline was created. Which, of course, took time.

We also discovered that the game’s already cool art style could be enhanced by all kinds of fun tweaks and twists, including particle systems, various kinds of shading, different animations, etc. Possibly too much ‘etc.’, actually, as all this also took time. Lots of time.

The conversation system had to be invented fresh as well (that’s where my OTHER creative brain cells went!), in order to show off the AI system as best we could without trying to invent a way to use natural language processing in an RPG. And it’s very cool … but it also took, you guessed it, time.

And 31 October came and went, and then Christmas, and then January … all with errant promises from us about when we’d have the game out, or even just a demo … until I felt kinda broken-hearted from it all.

And while I already seem to be playing the weepy weepy violins for myself, there was another reason it all has gone into “go slow” mode—money. For while we had the proper budget for a game coming out at the end of October, or even stretching on to the end of last year, the money began to run out after that. And so, to catch ourselves in a Catch-22, we had to get paying jobs to supplement our incomes (to BE our incomes), and then work on SteamSaga when and where we could, after work … and thus there were even fewer hours to put into the game, which meant it would go even more slowly, which … well, etc.

What does all this mean to you?
Right. So we’re still coming out with SteamSaga, and we have a fifteen-minute short-short demo RIGHT NOW (yea!) that you can download from www.quantumtigergames.com/steamsaga/SteamSagaDemoSetup.zip. It ain’t all polished up, the music isn’t necessarily what we’re using, and it’s the tiniest of story arcs, but it shows the basic elements of the game. You can play as any of the four main characters, Fighter, Healer, Thief, or Bard—in fact, you get a different experience playing as each of them! The personality changes are ratcheted up to eleven, so the tiniest actions you take have outsized effects on how the characters react to you; you can see this at the end of the demo, or possibly even in battle, if you’ve pissed someone off enough that they won’t fight for you. Or if they’ve lost all respect for your leadership abilities. And of course you can replay it to see how your choices change the way people react to you. (So really it’s more than just a fifteen-minute demo!)

At our present pace, the rest of the demo (about an hour of gameplay overall, just playing as one character) will be done by the end of the year. And then, if nothing else changes, it’ll be about a year before the whole game is complete. Yup, only two-years-plus late. Ack.

Can We KickStart It? Should We?
One option to get this moving faster is to put the project up on Kickstarter. I estimate that for every $5000 raised, we could knock a couple of months off that release date. (To a point, obviously; $25K isn’t going to get the game done yesterday.) If we were to ask for $5K, that would move the release up to maybe summer of next year; $10K would move it up to spring-ish of next year. Actually, I wouldn’t want to promise any sooner than that. But what do you guys think? Try Kickstarter? Don’t? Try something else? Please write us at info@quantumtigergames.com or visit our forums at www.quantumtigergames.com/forums/viewtopic.php?f=7&t=11 and let us know. We REALLY, REALLY value your input, so please give us any thoughts on the subject.

Thanks so much, and we’ll chat again soon!
Jeff
