Mox is still not quite stable. Even after my work of the previous weeks, the AI (it’s always the AI…) can still run into one of these disasters:
- Stack overflow: This usually means the AI is stuck in an infinite recursion. For example, an ability that costs no mana and doesn’t require tapping could be played again and again forever. That’s not the actual situation here, but it illustrates the point.
- Infinite loop: Similar to infinite recursion, except the AI is stuck repeating the same loop: for example, trying to play a card, realizing it’s not possible (not enough mana), then trying to play the same card again, over and over. That would be a bug, of course.
- Data inconsistencies: Yes, it’s back. The data consistency problem across AI tries that took me so long to fix this spring still happens (or at least the same symptoms anyway), although much less often.
The nasty thing about card game AI is that there’s a lot of randomness involved (a deck can produce many possible starting positions). That makes these bugs a pain to reproduce through normal play, and even harder to debug. To help myself, I started to write tests that use arenas (Another Radical Education Network for AI). Seriously, arenas are just tests where I pit two AIs against each other with given decks. The nice thing is that if I need to reproduce a bug that happened during a particular run, I can re-run the same game by feeding another arena the same parameters (including the random seed). Tadam! Now I can debug those nasty issues. Another nice advantage of arenas is that I can run many, many different games and find problems faster (about 1–2 seconds per game right now). Testing manually would take much longer!
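The replay trick boils down to routing every random decision through one seeded RNG. Here’s a minimal sketch of that idea in Python — the `Arena` class and its methods are illustrative, not Mox’s actual API:

```python
import random

# A minimal sketch of the arena idea: all randomness flows through a
# single seeded RNG, so replaying with the same seed reproduces the
# exact same game. Names here are hypothetical, not Mox's real code.
class Arena:
    def __init__(self, deck_a, deck_b, seed):
        self.decks = [list(deck_a), list(deck_b)]
        self.seed = seed  # logged on failure so the run can be replayed
        self.rng = random.Random(seed)  # the ONLY source of randomness

    def run(self, turns=10):
        # Shuffle both decks with the shared RNG, then log a toy "game":
        # each turn, each side draws (pops) a card. The trace stands in
        # for a real game log you could diff between two runs.
        for deck in self.decks:
            self.rng.shuffle(deck)
        trace = []
        for _ in range(turns):
            for player, deck in enumerate(self.decks):
                if deck:
                    trace.append((player, deck.pop()))
        return trace

deck = [f"card{i}" for i in range(20)]
first = Arena(deck, deck, seed=42).run()
replay = Arena(deck, deck, seed=42).run()
assert first == replay  # same seed, same game, every time
```

The key design point is that nothing in the game calls the global `random` functions directly; if any code bypassed the arena’s RNG, replays would diverge and the seed would be useless for debugging.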
By the way, I tried the optimization technique that I discussed in my last post, where the AI doesn’t try equivalent cards (four Mountains are all equivalent). It worked majestically: a single AI choice that used to take about 30 seconds now takes 1 second! Schwing!
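The pruning itself can be sketched in a few lines: before the AI branches on each playable card, collapse functionally identical cards into one candidate. The equivalence key below (the card name) is an assumption for illustration; the real criterion may be richer:

```python
from collections import namedtuple

Card = namedtuple("Card", ["name", "cost"])

# Sketch of the "don't try equivalent cards" pruning: keep only the
# first card of each equivalence class, so four Mountains yield a
# single branch in the AI's search instead of four. Using the name as
# the equivalence key is an assumption made for this example.
def prune_equivalent(cards, key=lambda card: card.name):
    seen = set()
    unique = []
    for card in cards:
        k = key(card)
        if k not in seen:
            seen.add(k)
            unique.append(card)
    return unique

hand = [Card("Mountain", 0)] * 4 + [Card("Shock", 1)]
candidates = prune_equivalent(hand)
assert len(candidates) == 2  # 2 branches to explore instead of 5
```

The speedup compounds: each equivalence class cuts the branching factor at every level of the search, which is why collapsing duplicates can turn a 30-second choice into a 1-second one.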
So, now I’ve only got to fix those disasters!