I don't think an elitist strategy would fare well either. The way I see it, one would run the GA as follows: create N initial teams (I'd go on the order of 100 or 1000). Evaluate how well they perform. Kill off the worst M teams and replace them with random mutations and/or crossbreeds of the remaining best teams, which you also mutate. Maybe make a couple of completely random teams or revive forgotten teams to inject variety. Repeat. N and M are hyper-parameters of the system that must be tuned for best results. Mutation and crossbreeding must also be properly defined, and they have to keep teams valid. For example: move EVs from one stat to another, change the nature, or replace one pokemon in the team with a new, random one. For crossbreeds, pick 6 pokemon out of the parents' 12. I'm worried about the time it would take before it yields interesting results, though.

> Ah, the elitist GA strategy, right? IIRC, this GA strategy has problems with local maxima/minima, which is why people prefer to add noise into the equation, even if it means losing the current best team in the short term. Which goes into why I'm saying inaccuracy in the evaluation formula can actually work to the GA's advantage.
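As a rough illustration of that loop, here is a sketch in Python. The team encoding is a toy (6 distinct numbers standing in for pokemon picks), and `evaluate` is a placeholder; a real fitness function would simulate battles.

```python
import random

TEAM_SIZE = 6
POOL = list(range(100))  # pretend there are 100 legal pokemon

def random_team():
    return random.sample(POOL, TEAM_SIZE)

def evaluate(team):
    # Placeholder fitness; a real evaluator would play matches.
    return sum(team)

def mutate(team):
    # Replace one pokemon with a new random one, keeping the team
    # valid (no duplicates).
    new = team[:]
    new[random.randrange(TEAM_SIZE)] = random.choice(
        [p for p in POOL if p not in new])
    return new

def crossbreed(a, b):
    # Pick 6 distinct pokemon out of the parents' (up to) 12.
    return random.sample(list(set(a) | set(b)), TEAM_SIZE)

def run_ga(n=100, m=50, generations=20):
    pop = [random_team() for _ in range(n)]
    for _ in range(generations):
        pop.sort(key=evaluate, reverse=True)
        survivors = pop[:n - m]            # kill off the worst M teams
        children = []
        while len(children) < m - 1:
            p1, p2 = random.sample(survivors, 2)
            children.append(mutate(crossbreed(p1, p2)))
        children.append(random_team())     # inject variety
        pop = survivors + children
    return max(pop, key=evaluate)
```

Because the survivors of each generation are kept intact, the best team found so far is never lost; the randomness lives in the mutation and crossbreeding steps.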

There are several techniques to avoid falling into local maxima. You listed one. GAs are another. There is also a process called simulated annealing, which basically goes this way: first you make a random team and set the temperature T to 1 or higher. Then you make a new team by changing the team you have a bit. If it's better, you switch. If not, you compute a quantity that looks like e^((new - original)/T). That gives you a number between 0 and 1 (because the original was better). You draw a random number and if it's smaller, you make the switch anyway. Then you reduce T.

> To "knock it out" of this local maximum, you have to be non-elitist. You must be willing to sacrifice the current known maximum and start the search over again elsewhere. Because of this, we have probabilistic methods of finding children in GAs. But let's say we are just using a simple hill-climber :-p

> Anyway, so we run the hill-climber, reach a local maximum, and how do we try to get out of this local maximum? The easiest way that I've found is to simply introduce "noise" into the evaluator. That is, just add a random number to the evaluation function. Make it imperfect. So let's say this perfect evaluator is a function that converts "pokemon teams" to "integers". All you would do to add noise is simply add a random integer to the evaluation function.
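That noisy-evaluator idea is a one-liner; here is a minimal sketch, where `evaluate` stands in for the hypothetical perfect team evaluator:

```python
import random

def noisy_evaluate(team, evaluate, noise=5):
    # Wrap a deterministic evaluator with additive random noise so a
    # hill-climber can sometimes rank a "worse" team above a better one
    # and get knocked out of a local maximum.
    return evaluate(team) + random.randint(-noise, noise)
```

The `noise` parameter controls how far the ranking can be scrambled; too little and you stay stuck, too much and the search degenerates into a random walk.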

The smaller T is, the smaller the transition probability will be. What this means is that when T = 0, we don't move at all. Conversely, the higher T is, the more likely we are to move to a new state even if it is much worse than the previous one. Therefore, at first, we look everywhere, hopefully finding a good zone. As we do more iterations and T decreases, we become less and less willing to move. It is like shaking the surface violently at the beginning to reach the most promising zone, then shaking more and more gently to find the deepest hole and hopefully settle into it. The process is guaranteed to converge in the limit of infinite time, but we don't have to go that far to get good results.
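Putting the acceptance rule and the cooling schedule together, the whole procedure fits in a few lines. In this sketch, `neighbor` and `score` are stand-ins for "tweak the team a bit" and the team evaluator; the geometric cooling schedule is one common choice among many.

```python
import math
import random

def anneal(initial, neighbor, score, t0=1.0, cooling=0.995, steps=2000):
    # Simulated annealing for a maximization problem.
    current, current_score = initial, score(initial)
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        cand_score = score(candidate)
        if cand_score >= current_score:
            current, current_score = candidate, cand_score
        elif random.random() < math.exp((cand_score - current_score) / t):
            # Worse candidate: accept with probability e^((new - old)/T),
            # which shrinks toward 0 as T cools.
            current, current_score = candidate, cand_score
        t *= cooling  # lower the temperature each iteration
    return current, current_score
```

With a high initial T almost everything is accepted (global exploration); as T approaches 0 the loop degenerates into a plain hill-climber (local refinement).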

The noise you are talking about is usually part of those algorithms, not of the evaluation function. And even if they are all hill climbers in a way, each algorithm climbs differently. If you use a GA with breeding, the teams you'll obtain at each generation from the previous ones won't resemble the teams you would obtain using other algorithms. That's important, because it means different algorithms hill-climb in different spaces: for a GA, the child of two parents is a neighbor of both of them, but not for simulated annealing. Another example: if your function is f = cos(x) - x^2 and your hill-climber only explores values at multiples of 2*pi, you're hill-climbing in a subspace where f is unimodal, so there are no local maxima other than the global one. That is a contrived example that doesn't happen in practice, but my point is that adding noise is important (well, it's good in practice), but the way you hill-climb is too.

> Now let's take a step back to reality. Where does this put us? It simply states that if we are going to use a hill-climbing procedure (which MOST of these search methods really are; a GA is really just a massively parallel hill-climber), then a perfect evaluation function is not what we want. We want something close to the perfect evaluation function with a good bit of noise so that we can get knocked out of local maxima and find the true global maximum.
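The cos(x) - x^2 example is easy to check numerically: restricted to x = 2*pi*k, f becomes 1 - (2*pi*k)^2, which is unimodal in the integer k, so a greedy climber over k always reaches the global maximum at k = 0. A minimal sketch:

```python
import math

def f(x):
    # The example objective: global maximum at x = 0, where f(0) = 1.
    return math.cos(x) - x ** 2

def climb_multiples_of_2pi(k):
    # Greedy hill-climber restricted to x = 2*pi*k for integer k.
    # In this subspace f(2*pi*k) = 1 - (2*pi*k)**2 is unimodal,
    # so the climber cannot get stuck in a non-global maximum.
    while True:
        best = max([k - 1, k, k + 1], key=lambda j: f(2 * math.pi * j))
        if best == k:
            return k
        k = best
```

The same greedy rule applied to f over all reals would stall in any of the infinitely many ripples of cos(x); the choice of neighborhood is doing all the work here.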

Ideally, teams should all play the same number of matches. That won't necessarily happen if you compare them as you sort. I think you should first make them play a fixed number of matches against other teams; a team's value would be its number of wins, and you sort using that.

> 2. Now, I create a set of random (legal) teams. To evaluate the teams, I sort them using insertion sort or something. The "compare" function is going to be an actual simulation of the game: AI vs AI using the two different teams. The winner is judged as "better". And of course, after insertion sort is run (or maybe shell-sort, or maybe even just a tourney)... the general thing happens. Cross teams, mutate teams, etc. etc.
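The fixed-match scheme can be sketched as follows. `simulate_battle` is a hypothetical stand-in for an AI-vs-AI game that returns the winning team; every team gets exactly `n_matches` evaluation games, unlike the sort-as-you-compare approach.

```python
import random

def rank_teams(teams, n_matches, simulate_battle):
    # Each team plays the same fixed number of matches against randomly
    # chosen opponents; its value is its win count in those matches.
    wins = [0] * len(teams)
    for i, team in enumerate(teams):
        for _ in range(n_matches):
            j = random.choice([k for k in range(len(teams)) if k != i])
            if simulate_battle(team, teams[j]) == team:
                wins[i] += 1
    # Return team indices, best first.
    return sorted(range(len(teams)), key=lambda i: wins[i], reverse=True)
```

Using win counts rather than pairwise comparisons also smooths out non-transitivity (team A beating B and B beating C does not guarantee A beats C).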

That's an interesting approach.

> 4. Write a better AI, and write a simple hill climber based on random mutations. No crossover or anything, just change 1 to 4 aspects of the team, compare old vs new. Keep the better one. Iterate for thousands of generations till the team cannot be improved. The goal at this point is to find the local maximum. Hopefully, the GA found a solution close to the global maximum, and a specialized hill-climber will find the local maximum faster.
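The mutate-and-keep-the-better loop from step 4 is only a few lines. In this sketch `mutate` and `evaluate` are placeholders (a real `mutate` would change moves, EVs, items, and so on), and "cannot be improved" is approximated by a patience counter:

```python
import random

def hill_climb(team, mutate, evaluate, patience=200):
    # Accept only strict improvements; stop after `patience` consecutive
    # failed attempts, i.e. a (probable) local maximum.
    best, best_score = team, evaluate(team)
    fails = 0
    while fails < patience:
        cand = best
        for _ in range(random.randint(1, 4)):  # change 1 to 4 aspects
            cand = mutate(cand)
        s = evaluate(cand)
        if s > best_score:
            best, best_score = cand, s
            fails = 0
        else:
            fails += 1
    return best
```

Since nothing worse is ever accepted, this climber converges quickly but can only polish whatever basin the GA left it in, which is exactly the division of labor proposed above.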

Please, do elaborate on your business model :)

> 5. Profit!

In theory, you're right that the min/max expansion would see those things, but only if you search enough plies. I doubt looking only 4 or 5 turns ahead would correctly capture that complexity. And even if it did, a more clever evaluation of the situation might let us get similar results with fewer plies and thus fewer computations.

> Certainly. However, those issues can be simplified down to damage dealt later on in the min/max expansion. I'm probably beginning to stretch the heuristic here too far, but please correct me if you see things otherwise.

> Heal Bell/Aromatherapy can be seen as preventing damage from the opponent. This would be caught in the "min" part of the min/max tree: minimizing the % damage the opponent would do to you in a given future turn.

> Spikes, inflicting status effects, and so forth can be seen as maximizing the total damage dealt by us, or minimizing the total damage dealt by the opponent. As a stupid example, if we look ahead 5 turns and predict that the enemy will switch 4 times under our given plan, EVEN if Spikes are on the field, then that can be seen as increasing our % damage 5 turns later. If the opponent has a Rapid Spinner, he'd want to MINIMIZE the % damage dealt to him by switching in his Rapid Spinner first, and then switching 3 times. Or at least, that would be his plan.

You might be right about this. I'm still a bit skeptical, because pokemon offers more ways to set an opponent up. If you play checkers and you're not taking pieces, what are you doing? Counting pieces in checkers tells you more than counting damage in pokemon.

> Again, this is a heuristic. Just as chess computers heavily use a bad heuristic to play chess well, I'm sure we can use a bad heuristic to make Pokemon AIs play pokemon well. As stated before, many chess and checkers AIs simply count the number of pieces left on the board and check for the possibility of a forced checkmate. Similarly, all a Pokemon AI would have to do is count the total % damage dealt to the opponent and check for the possibility of a perfect sweep.
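For reference, the damage-counting search being debated here is an ordinary depth-limited min/max. In this toy sketch, `moves`, `apply_move`, and `damage_score` are hypothetical stand-ins for the real game interface; the leaf evaluation is just the damage-style score, in the spirit of piece-counting evaluators in chess and checkers engines.

```python
def minimax(state, depth, maximizing, moves, apply_move, damage_score):
    # Depth-limited min/max: the "max" player picks the move that
    # maximizes the heuristic, the "min" player the one that minimizes it.
    if depth == 0:
        return damage_score(state)
    children = (minimax(apply_move(state, m), depth - 1, not maximizing,
                        moves, apply_move, damage_score)
                for m in moves(state))
    return max(children) if maximizing else min(children)
```

Whether this works for pokemon then hinges entirely on the point argued above: how much of Spikes, status, and setup actually shows up as `damage_score` differences within the search horizon.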