Saturday, May 25, 2013

Game AI, Analysis Paralysis and Buridan's Ass


This post stems from reading this article over on io9 about the philosophical paradox of Buridan's Ass as applied to modern computing and a conversation I had on FB after sharing the article with my thoughts thereupon.  The article does a good job of summarizing the paradox, but I'll re-summarize here in case you can't be arsed to follow the link.  Essentially, a 14th century philosopher named Buridan proposed that a deterministic agent would get 'stuck' when presented with two equally attractive choices of the highest priority.  The example that was used to mock him was that by his logic if a starving donkey were placed equidistant between two piles of hay it would starve to death because it would be unable to decide between the two piles given that they were equally attractive.

This is, clearly, bunk.  At least as applied to animals.  Right?  Well, sort of.  There's a related concept called analysis paralysis (it's almost as much fun to say) which basically says that given enough inputs it's possible to freeze up and be unable to reach a decision.  I don't think animals suffer from it much, being a little more pragmatic about stuff, but humans do.  A common example is in sports, where the split-second decision making the game requires breaks down because a player spends too long weighing options.  Or, in a more relatable example: every time I use Netflix.  Seriously.  Every time the wife and I go to watch a movie we spend about half an hour delicately stepping around the fact that I have a lot of trouble watching anything I perceive to have an insufficient quantity of explosions, spaceships, facepunching or Luc Besson (yes, I'm aware these criteria imply The Fifth Element is my perfect movie and no, I'm not averse to that conclusion) and the fact that she would prefer to watch movies in which the plot has more differentiating characteristics than "Jason Statham has hair in this one."  There are too many inputs and too many criteria, so decision making takes a long damned time.  However, when I'm just putting a movie on for background noise it takes about thirty seconds.  Mongolian Death Worm?  That can't be bad!  (Spoiler alert: yes, yes, it can.  So bad.  Not good-bad.  Bad-bad.)

So where does that leave us?  Well, it means that agents with sufficiently complicated or overwhelming inputs can get stuck and be unable to reach a decision.  This would probably be a bad thing for game AI, right?  So let's talk about that.


Before we go further, I should point out that I am not an AI programmer, and my experience with AI is by osmosis of working with and around it and the AI scripting I've done myself.  So it's possible I'm about to wander off into Really Wrong Land.  I mean, I doubt it.  But it's possible.  I guess.

Alright, decision paralysis.  That's what I'm calling it for the rest of this post, bee-tee-dubs.   So, any kind of paralysis in AI is something devoutly to be avoided, right?  I certainly think so.  Game characters getting stuck is a pretty egregious bug.

So how could we reach a situation in which an AI agent is caught by Buridan's Ass, or analysis paralysis?  Well, the conclusion I reach is that you kind of can't, not if you're writing your code properly.  Why?  Because Buridan's Ass assumes that two choices can be of exactly equal priority and leave the agent with no way to pick between them, and that situation doesn't tend to survive contact with actual code, just because of how code gets written.

Here's some lovely pseudocode to demonstrate a decision-making tree that is capable of being caught by Buridan's Ass:
if (food1 > food2) then eat(food1)
else if (food2 > food1) then eat(food2)
So you can see why the decision making would fail in the case of food1 and food2 being exactly equal: there's no case to handle that. And that's kind of bad programming.

Here's some example code you're more likely to see written:
if (food1 >= food2) then eat(food1)
else eat(food2)
This is more like what someone writing proper code would write.  It handles the case of both being equal by automatically preferring food1.  A more complex version might be to handle the two > cases separately and then add a random decision between the two if they're equal.
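
If you wanted that more complex version, it might look something like this (a rough Python sketch rather than anything out of a real engine, with food desirability assumed to just be a number you can compare):
import random
def pick_food(food1, food2):
    # Handle the two strictly-greater cases separately...
    if food1 > food2:
        return food1
    if food2 > food1:
        return food2
    # ...and break an exact tie with a coin flip, so the agent never stalls.
    return random.choice([food1, food2])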

Of course, this is super-basic decision making.  What if we have something a bit more complex?  Let's say the desirability of the food is based purely on distance and the agent should move towards the closest and interact with that.  This is a pretty likely example of how that might be written (again, not because it explicitly prevents this problem, but because it's the logical way):
desiredFood = foodList[0]
for each of (foodItem in foodList) do
    if (distanceTo(foodItem) < distanceTo(desiredFood))
      then desiredFood = foodItem
eat(desiredFood)
In this case, we can never get stuck, simply because we are not evaluating all inputs at once, and even if all inputs are identically desirable the first input considered (or the last, depending on how you write your comparison) is the choice.  The idea of hysteresis covers this.  It can roughly be summed up as "inertia", but it's a term I was only recently introduced to, and the Wiki article and/or a friendly person with a Comp.Sci. degree would probably be more help if you want details.
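
To make that "inertia" idea a bit more concrete, here's a rough sketch of what hysteresis might look like in this context, with names and a switch margin I've made up for illustration (Python again, not real engine code): the agent only abandons its current target if a new one is meaningfully closer than the one it's already chasing:
SWITCH_MARGIN = 2.0  # made-up number: how much closer a new item must be to steal the agent's attention
def choose_target(current_target, food_list, distance_to):
    # Start from whatever we were already pursuing, if anything.
    best = current_target if current_target is not None else food_list[0]
    for item in food_list:
        # Hysteresis: a new item has to beat the current choice by a margin,
        # so near-ties don't make the agent flip-flop every frame.
        if distance_to(item) + SWITCH_MARGIN < distance_to(best):
            best = item
    return best
The margin is the "inertia": once the agent has committed to one pile of hay, an almost-identical pile can't yank it back the other way.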

So at this point it seems that while you could encounter a situation like Buridan's Ass, you probably won't and if you do, it's probably because you wrote your code wrong.  And the reality is that except in cases like the first one, where there simply isn't a safety case, you can't introduce the problem of the AI getting stuck between two choices.

The reason for that is that AI decision making, at some level (maybe far down, but there), is binary.  It all eventually comes down to acting on a single input.  To continue our example, let's look at the hypothetical guts of the eat(food) function.
function eat(object food) {
   destroy(food)
   playEffect(chewingSpray)
   playAnimation(chewing)
   hungerLevel = 0
}
Notice anything about that function?  It only accepts a single object.  You literally can't give it multiple inputs for it to (heh heh) choke on.  If you tried this your compiler would yell at you, things would break in the game, and you'd generally feel a tiny bit dumb and hope none of your coworkers noticed because you know that if you caught them making that mistake you'd probably laugh or at least feel a bit smug and superior.

That's how people behave, right?

Now, that's how Buridan's Ass might crop up in a basic decide-between-two-things case, and how you can (and probably will) prevent it.  There are other ways that a similar problem can crop up.  Your agent might get into a state where they're either receiving no input at all, or they're only looking for a very specific input that they'll never receive.  The other one is infinite state looping, which occurs when the agent switches between two sets of priorities and the signal to switch back is received immediately.

Let's use our hungry donkey example for that one.  Only now he's hungry and thirsty.

So, our donkey sees a bowl of water and a bushel of hay at equal distance and his hunger and thirst levels are maxed out.  So what happens?  Well, thanks to your properly structured decision maker, the donkey decides to go for the water first.  However, inside the "I'm going to water" state, you put an override clause that says, "I'm REALLY HUNGRY" and sends the donkey to food, and vice versa inside the "I'm going to food" state, thinking that the donkey shouldn't be heading for one resource while the other need is completely maxed out.  Now the two states will cycle infinitely because every time one starts, it hits the override and swaps.  Broken donkey, but the basic idea is sound.  The best way would be to not let that safety case fire while the need you're currently servicing is still at maximum demand-y-ness, or until some other condition is met.  So the donkey might decide to go for water and not go for the food until its thirst is completely (or mostly, or whatever) slaked.
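
Here's a minimal sketch of that guarded override (Python, with made-up thresholds; in a real game this logic would live inside whatever state machine your engine uses):
MAX_NEED = 100   # made-up scale: 100 means the need is screaming at you
SATISFIED = 20   # below this, the need counts as "slaked"
def pick_state(current_state, hunger, thirst):
    # The emergency override only fires once the need we're currently
    # servicing has dropped below the "slaked" threshold. While the donkey
    # is still desperately thirsty, being desperately hungry doesn't
    # interrupt the trip to the water bowl.
    if current_state == "going to water":
        if hunger >= MAX_NEED and thirst <= SATISFIED:
            return "going to food"
        return current_state
    if current_state == "going to food":
        if thirst >= MAX_NEED and hunger <= SATISFIED:
            return "going to water"
        return current_state
    # No state yet: break the tie the boring, reliable way and prefer water.
    return "going to water" if thirst >= hunger else "going to food"
With both needs pegged at the maximum, pick_state("going to water", 100, 100) just keeps returning "going to water", so the donkey actually reaches the bowl instead of pinballing between the two states forever.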

So, in summary?  Buridan's Ass isn't really that big a deal for game AI as long as you're Doing It Right.  I honestly don't know that I've ever run into it in the real world, simply because the fact that your AI code eventually boils down to a function that can only accept a single input means you tend to instinctively build prioritization systems into your code.


Thanks to Jade, Paul and Gil for engaging in the original discussion and formulating examples, etc.
