Advanced programming is for dummies

There was an overwhelming sense of success in getting a computer to play the game against me (and against itself); I've never really worked on AI, particularly beyond reactive states. But waiting more than a minute each time the computer needed to make a move was not acceptable, or remotely fun - especially when it would sometimes die because it would not take risks.

The risk-taking problem was easy to solve - if you treat flipped cards as jokers, the computer appears to take 'guesses'. This creates almost the opposite problem: on finding five face-down cards in a row, the computer assumes it has a Royal Flush and always takes the shot. But it puts the AI in a state where it can play against itself indefinitely.
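
As a rough sketch of the idea - the Card type and method here are hypothetical stand-ins, not the game's actual classes - the evaluator simply treats any face-down card as a wildcard that fills whatever gap remains:

// Hypothetical sketch: a face-down card counts as a joker, so the
// optimistic evaluator assumes it is whatever completes the hand.
// With five unknowns every check passes - hence the Royal Flush.
public class Card {
    public int rank;      // 2..14, where 14 is the ace
    public int suit;      // 0..3
    public bool faceDown; // true = hidden from the AI
}

public static bool CouldBeRoyalFlush(Card[] hand) {
    int suit = -1;
    bool[] seen = new bool[15]; // indexed by rank
    foreach (Card card in hand) {
        if (card.faceDown) continue; // joker: assume it fills a gap
        if (card.rank < 10 || seen[card.rank]) return false; // wrong rank or duplicate
        if (suit != -1 && card.suit != suit) return false;   // mixed suits
        seen[card.rank] = true;
        suit = card.suit;
    }
    return true; // any missing ranks are assumed covered by jokers
}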

To get any use out of it, however, the speed issue needed to be fixed. The goal was to keep the game running at the target framerate consistently, so that on-screen effects were not blocked by the CPU spikes. To measure this, I placed an animated poker chip on the screen, so I could see visually how things looked as well as reading the metric.
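
Something as simple as this would do for the indicator - a hypothetical MonoBehaviour, not the project's actual script:

// Minimal sketch of the visual indicator: a chip that rotates at a
// fixed rate. Because the rotation is scaled by deltaTime, a long
// frame produces a visible jump in the spin - stutter you can see.
using UnityEngine;

public class SpinningChip : MonoBehaviour {
    public float degreesPerSecond = 180f;

    void Update() {
        transform.Rotate(0f, 0f, degreesPerSecond * Time.deltaTime);
    }
}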

Unity 5 brought the profiling tools to the public. This was a great chance to use them, and another example of using a known project to test new tools. The basics are pretty simple: play the game and watch the performance graph; when it spikes, you can break down your function calls and see where you're wasting time.
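
Beyond the graph, the profiler also lets you wrap suspect code in named samples so it shows up as its own row in the breakdown - a rough example, with placeholder method names (note that the Profiler class lived directly in UnityEngine in early Unity 5 and moved to UnityEngine.Profiling in later versions):

// Rough example: mark the suspect call as a named sample so the
// profiler window attributes its cost to a row you can recognise.
using UnityEngine;
using UnityEngine.Profiling;

public class AiTurn : MonoBehaviour {
    void RunAi() {
        Profiler.BeginSample("AI.EvaluateWalks");
        EvaluateWalks(); // placeholder for the expensive call under test
        Profiler.EndSample();
    }

    void EvaluateWalks() { /* hypothetical hot path */ }
}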

Caching helped a lot. I was calling lots of functions repeatedly - sometimes 4 or 5 times per walker, for 11 hands and 17,000 possible walks. None of these functions looked slow, but running them millions of times was costing me a huge amount. The second big loss of speed was the use of Collections: lists and dictionaries are convenient to work with, but they come with a lot of baggage. I took each function and rewrote it to work with pure, fixed-size arrays, which gained me a huge speed increase.
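
The shape of both fixes, roughly - the names are illustrative, not the real code:

// Illustrative sketch of the two wins: compute a value once per turn
// instead of on every call, and keep it in a preallocated fixed-size
// array rather than rebuilding a List each time.
private int[] cachedRanks = new int[5];
private bool ranksDirty = true; // set back to true whenever the hand changes

private int[] GetRanks(Card[] hand) {
    if (ranksDirty) {
        for (int i = 0; i < hand.Length; i++)
            cachedRanks[i] = hand[i].rank; // one pass, no allocations
        ranksDirty = false;
    }
    return cachedRanks; // every later call this turn is a cheap lookup
}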

Finally, a rework of the logic - it dawned on me that the original design for the AI was flawed: there can only be one possible hand for each walk, and each walk is a hand (even if it's just a high card). So rather than regenerating the walks every turn, I generated the finite set of walks once and tested the current hand of each. This got the processing time down to a fraction of a second - perfectly acceptable to play against as a human; I was spending more time animating cards than running the AI. But it still caused a glitch.
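
Roughly, the reworked flow looks like this - Walk, GenerateAllWalks, and GetHand are stand-ins for the real types:

// Sketch of the rework: build the finite set of walks exactly once,
// then each turn is a single scoring pass over that set instead of
// regenerating ~17,000 walks from scratch.
private Walk[] walks;

void Start() {
    walks = GenerateAllWalks(); // expensive, but runs only once
}

Walk ChooseMove() {
    Walk best = walks[0];
    int bestScore = best.GetHand();
    for (int i = 1; i < walks.Length; i++) {
        int score = walks[i].GetHand(); // one hand per walk, by design
        if (score > bestScore) {
            best = walks[i];
            bestScore = score;
        }
    }
    return best;
}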

Rather than go back down the complicated route of threading, I built a quick hack -

private float maxRun = 1.0f / 100f;
private int position = 0; // persists between frames so work resumes where it stopped

void Update() {
    float frameStart = UnityEngine.Time.realtimeSinceStartup;
    float realTime = frameStart;

    // Burn through the walk list until this frame's time budget is
    // spent, then stop and pick up from the same position next frame.
    while (position < walks.Length && realTime - frameStart < maxRun) {
        walks[position].GetHand();
        position++;
        realTime = UnityEngine.Time.realtimeSinceStartup;
    }
}

Basically: every frame, grab the start time and start processing the list of walks; once the elapsed time passes 1/100th of a second, stop and come back to that point next frame. The result is spikes in CPU usage, but no noticeable frame-rate drop - at 60fps, a 10ms budget still leaves a few milliseconds of the ~16.7ms frame for everything else. My spinning poker chip spun smoothly through the whole event. Again, a simple script had done away with the problem with far less drama than more advanced methods.

The AI is mechanically sound, but it needs some tweaking to make it fair and to give it some variance.