I was discussing this with Sarenji over MSN just now. We were basically looking at how the bot could learn what to expect from the Pokemon it's facing. I think we actually found a good rough implementation (an abstract one for now), based on what you mentioned in your post (mainly the bot's previous experience of what a particular Pokemon did to it in previous games/turns). I'm glad that all of us - you, Sarenji and I - went for the learning approach to the AI bot.
There is much more to leverage than the bot's own experience. Assuming there are 500 battles per day on the main competing server and 20 turns per battle (I don't know if those figures are accurate), with two actions per turn (one from each player) you're looking at 20,000 actions to analyze every day, which is not negligible.
A battle's state could be each pokemon's stats (or estimations thereof), types, boosts, status, weather, as well as all the flags like Mean Look and Spikes and counts like the perish count. You can then view each possible action as modifying the state in a certain way. This allows you to do without the concepts of pokemon, move, item or ability. For example, switching to Tyranitar, instead of being an action number, would change the stats and types, reset the boosts, whip up a sandstorm, etc.; if you determine that going to a state where your type is Rock gives you an edge, then obviously switching to Tyranitar will help you. If new pokemon and/or moves are added to the mix, the AI will adapt more easily. Seen from that angle, you want to know what a good state is and you want to manipulate the current state to make it better. Of course, it also helps to know how your opponent is going to manipulate the state.
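To make the idea concrete, here's a minimal sketch of what "actions are just state transformations" could look like. Everything here is invented for illustration (the field names, the stat numbers, the function name) - a real implementation would generate these transformations from game data rather than hand-write them:

```python
from dataclasses import dataclass, field, replace

@dataclass(frozen=True)
class BattleState:
    # a tiny subset of the state described above
    types: tuple = ("normal",)
    stats: dict = field(default_factory=dict)
    boosts: dict = field(default_factory=dict)
    weather: str = "clear"

def switch_to_tyranitar(state: BattleState) -> BattleState:
    """Switching isn't an opaque action number: it changes stats and
    types, resets the boosts, and (via Sand Stream) whips up a
    sandstorm."""
    return replace(
        state,
        types=("rock", "dark"),
        stats={"atk": 134, "def": 110, "spe": 61},  # Tyranitar base stats
        boosts={},                # boosts are lost on switching
        weather="sandstorm",
    )

s = switch_to_tyranitar(BattleState())
print(s.weather, s.types)  # sandstorm ('rock', 'dark')
```

The payoff is exactly the one described above: an evaluator that learns "states where my type is Rock are good right now" will automatically value the switch, with no special-case knowledge of Tyranitar, and new pokemon or moves just become new transformations.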
I'm realising that Pokemon does have a big advantage over chess: in Pokemon, you only have at most 9 different choices to make each turn (use one of your 4 moves, or switch to one of your 5 other Pokemon), whereas in chess this number can be much larger.
On the first turn of a battle, using a uniform prior, the opponent could use any move his pokemon can possibly learn, or he/she could switch to any of 493 possible pokemon, each of which could be personalized in zillions of ways. That's why it's important to only consider likely possibilities, and who knows how many of those there are (not that many fundamentally different ones).
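One simple way to go from "anything is possible" to "only likely possibilities" is to reweight the uniform prior by observed usage frequencies and discard the tail. This is just a toy sketch - the function name, the threshold, and the usage numbers are all made up:

```python
def likely_actions(possible, usage, threshold=0.05):
    """Reweight a uniform prior over `possible` actions by observed
    usage frequency, and drop anything below `threshold`."""
    total = sum(usage.get(a, 0.0) for a in possible)
    if total == 0:
        # no data at all: fall back to the uniform prior
        return {a: 1.0 / len(possible) for a in possible}
    posterior = {a: usage.get(a, 0.0) / total for a in possible}
    return {a: p for a, p in posterior.items() if p >= threshold}

# invented usage statistics for the example
usage = {"Earthquake": 0.60, "Stone Edge": 0.30, "Fissure": 0.01}
print(likely_actions(["Earthquake", "Stone Edge", "Fissure", "Sand Tomb"], usage))
```

Here Fissure and Sand Tomb get pruned, so the search only has to consider the two moves that actually show up in practice. The 20,000 observed actions per day mentioned earlier are exactly the kind of data such a `usage` table could be built from.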
A problem I'm thinking of is - should the bot be expected to also modify its team, or will it just use predefined teams?
Eh, I don't know. It'd be cool if it could build its own teams.
I still don't know how to make the AI bot know when to use Swords Dance, or when to use Baton Pass, or things like that, but I guess I don't know everything. Since you seem to know how you would go about solving this particular problem, I will make another u-turn - if you and Sarenji want some help (mainly algorithmic, since I might not know the programming language that will be used to implement the bot), I'm interested.
If the bot knows that having attack boosts increases its odds of winning (which is evidently true under certain simple conditions that could be learned), it will try to do that as much as it will try to damage the opponent. If it judges that the odds of winning are superior if pokemon Y gets the boosts, it might baton pass to it.
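As a sketch of that decision, suppose the bot has learned (from whatever data) how much each attack boost raises a pokemon's odds of winning. Then "Baton Pass or not" reduces to comparing estimated win odds with the boosts on the current pokemon versus on pokemon Y. All the numbers below are placeholders, not learned values:

```python
def win_odds(base_odds, atk_boosts, boost_value):
    """Hypothetical learned relationship: each boost stage adds
    `boost_value` to the win odds, capped at 1.0."""
    return min(1.0, base_odds + atk_boosts * boost_value)

boosts = 2
keep = win_odds(0.50, boosts, 0.05)    # current pokemon keeps the boosts
passed = win_odds(0.45, boosts, 0.12)  # pokemon Y makes better use of them
action = "Baton Pass" if passed > keep else "stay in"
print(action)  # Baton Pass
```

The point is that Baton Pass needs no special-case rule: it falls out of the same "which reachable state has the best win odds?" comparison as every other action.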
Nate said:
Another reason pokemon bots cannot approach chess bots is the fact that pokemon does contain luck. In chess, a computer can predict with exact accuracy all possible outcomes of all possible turns a given period ahead of time, allowing it to consider all potential moves occurring five turns in the future. This can't really be replicated with pokemon, because a critical hit, etc. will obviously fuck shit up.
The effect of luck is that perfect play cannot result in a 100% win rate: it will be lower in proportion to the amount of luck (a 50% win rate in the case of pure luck). It doesn't matter in the long run, though: the more games are played, the more skill will stand out (statistically). That's annoying for impatient human players, but it's the best that can be done :) You can take luck into account during play by simulating the next rounds over and over again, so you get a precise idea of your winning expectancy. You can then choose the action that is the best on average (or an action that's a bit below average but has a lower risk (variance)).
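The simulate-over-and-over idea can be sketched like this: roll each candidate action forward many times under the game's randomness, then score it by its average outcome, optionally penalising variance to get the safer play. The per-action win probabilities here are stand-ins for what a real battle simulator would produce:

```python
import random
import statistics

rng = random.Random(0)  # seeded so the sketch is reproducible

def simulate(win_prob, n=10_000):
    """Each trial is one simulated playout: 1 if we win, 0 if we lose."""
    return [1 if rng.random() < win_prob else 0 for _ in range(n)]

# invented win probabilities standing in for a real simulator
candidates = {"Stone Edge": 0.62, "Earthquake": 0.58}
scores = {}
for name, p in candidates.items():
    outcomes = simulate(p)
    mean = statistics.mean(outcomes)
    var = statistics.variance(outcomes)
    scores[name] = mean - 0.1 * var  # risk-adjusted: trade a little mean for less risk
best = max(scores, key=scores.get)
print(best)
```

With enough playouts the sample mean converges on the true win expectancy, and the `mean - 0.1 * var` score is one way to express "an action a bit below average but with lower risk" as a single number to maximise.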
So approaching it in the manner that chess bots are coded (heavy recursion) probably isn't the answer. Instead, you would probably want to use rule-based checks, or maybe even a "learning" bot using some sort of genetic algorithm. Or, you could use a rule-based system to create a list of possibly good moves, and a genetic algorithm to settle on an aggression factor.
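That hybrid could look something like the toy below: rules produce a shortlist of plausible moves, a single "aggression" parameter decides how the shortlist is ranked, and a genetic algorithm evolves that parameter against a fitness function. Everything here - the filter rule, the ranking formula, the fitness - is invented for illustration:

```python
import random

rng = random.Random(42)

def rule_shortlist(moves):
    # rule-based filter: keep moves that deal damage or set something up
    return [m for m in moves if m["power"] > 0 or m["setup"]]

def pick(shortlist, aggression):
    # high aggression favours raw power, low aggression favours setup moves
    return max(shortlist,
               key=lambda m: aggression * m["power"] + (1 - aggression) * 100 * m["setup"])

def evolve_aggression(fitness, generations=40, pop_size=10):
    """Tiny genetic algorithm: keep the fittest half each generation
    and refill the population with mutated copies of it."""
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        best_half = pop[: pop_size // 2]
        pop = best_half + [min(1.0, max(0.0, a + rng.gauss(0, 0.1)))
                           for a in best_half]
    return max(pop, key=fitness)

# toy fitness: pretend an aggression of 0.7 wins the most games
best = evolve_aggression(lambda a: -(a - 0.7) ** 2)
```

In practice the fitness would be the win rate from actual (or simulated) battles, which is where this gets expensive - and it's also where the rule-based half shows the limitation raised below: the rules themselves don't evolve.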
Rule-based AI is limited, can't follow the metagame, and doesn't scale at all. I don't think it's a good idea.