Whereami01
2016-08-29T10:07:37Z
We have seen that AlphaGo has beaten the world champion at Go, so why not do the same thing for a Yu-Gi-Oh AI? We could follow SethBling's instructions on NeuroEvolution of Augmenting Topologies (NEAT).
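
For anyone curious, here is a minimal sketch of what that training loop could look like using the neat-python library. Everything on the YGOPro side is a hypothetical stub (encode_state, play_duel, the config file): YGOPro exposes no such hooks, so this only shows the shape of the NEAT loop, not a working bot.

```python
import random

import neat  # pip install neat-python

def encode_state(duel):
    """Hypothetical stub: flatten a duel state (life points, field,
    hand sizes, ...) into a fixed-length vector. A real encoding
    would have to match num_inputs = 20 in the NEAT config file."""
    return [random.random() for _ in range(20)]

def play_duel(net):
    """Hypothetical stub: play one duel against a scripted opponent
    and return a fitness score (e.g. win/loss plus LP margin)."""
    outputs = net.activate(encode_state(None))  # one score per action
    return max(outputs)  # placeholder fitness

def eval_genomes(genomes, config):
    for genome_id, genome in genomes:
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        # Average several duels so one lucky game doesn't dominate.
        genome.fitness = sum(play_duel(net) for _ in range(5)) / 5

# 'neat_config.txt' is an assumed standard neat-python config file.
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                     neat.DefaultSpeciesSet, neat.DefaultStagnation,
                     'neat_config.txt')
population = neat.Population(config)
winner = population.run(eval_genomes, 50)  # evolve for 50 generations
```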
Snarky
2016-09-10T12:49:59Z
Maybe, but that stuff is way too complicated for me. If someone can make that work for the YGOPro AI, be my guest 🙂
Xaddgx
2016-09-19T00:09:51Z
To me, this seems impossible to implement for a game like Yu-Gi-Oh. In Go, the A.I. only has to worry about one kind of object: the enemy stones, which all behave in exactly the same way. In SethBling's video, the A.I. adapts to the level because it is a controlled environment. After a large number of attempts, it can reliably tell itself, "The terrain is here. The Goomba is here, so I must jump here to get further."

Yu-Gi-Oh is nowhere near as straightforward as Go, and nowhere near as controlled an environment as a Mario level. It's not straightforward because life points have become more of a payment method for powerful cards than a sign that you're actually winning against your opponent. And it is not a controlled environment because there are tons of different decks. Even if there were controlled practice sessions where the A.I. only fought one particular deck, that raises the question of how it would detect skewed results when a player makes an error.

Example: Let's say that instead of measuring "I am getting closer to victory" via life points, we measured it by gaining field advantage. Suppose the A.I. Pendulum Summons multiple Majespecters, and a human Performapal player chains Torrential Tribute, forgetting that Majespecters can't be destroyed by card effects, so the human wipes out only their own five monsters. The A.I. suddenly has field control, and will undoubtedly read that as,

"I just Pendulum Summoned as many monsters as possible, while the opponent had a facedown card in their spell/trap zone, and that got rid of all the opponent's monsters!"

Thus, the A.I. would conclude that it is safe to overextend with a Pendulum Summon while the opponent has unknown S/T cards, and would then get hit by Solemn Warning or Solemn Strike. You could argue that this would balance out in favor of not overextending, because it would still get punished at other times, but the pattern learned from that human error would always remain in the calculations.
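
To make that concrete, here is a toy sketch (all names and numbers invented) of a naive field-advantage reward. Fed the Torrential Tribute misplay above, it produces a huge positive signal that is indistinguishable from a genuinely safe play:

```python
def field_advantage(state):
    """Naive metric: my monster count minus the opponent's."""
    return state["my_monsters"] - state["opp_monsters"]

def reward(before, after):
    """Score a turn by how much field advantage it gained."""
    return field_advantage(after) - field_advantage(before)

# The A.I. Pendulum Summons 3 Majespecters into a set Torrential
# Tribute; the human misplays and wipes only their own 5 monsters.
before = {"my_monsters": 0, "opp_monsters": 5}
after  = {"my_monsters": 3, "opp_monsters": 0}

print(reward(before, after))  # 8: the overextension gets a huge
# reward, with nothing to tell the learner that the payoff came from
# a human error rather than from the Pendulum Summon being safe.
```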

After watching those videos, I saw a YouTube video of a Super Smash Bros. Melee A.I. that "learned" which attacks are most effective in a Captain Falcon vs. Captain Falcon matchup. It had some great maneuvers, but the problem was that it was facing an amateur human player. When the human got hit by attacks that top-level players would have a much better chance of predicting, the A.I. developed a habit of using them more often. This is a simpler case of the same problem I described above. I don't really know what else to type, but hopefully I've made my point.

Disclaimer: I'm not nearly as intelligent as the people who made AlphaGo. Don't take my word as gospel, but instead as an internet opinion.
donpas
2016-09-19T12:52:58Z
I think it would be possible, but it would require YGOPro to make all player replays public for this idea, and for those replays to be easily readable.

With that information we could build an AI that learns combinations that achieve victory.

We would add some more factors to the model, and then it would be a matter of testing it.

It is complex and difficult, but not impossible.
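
A minimal sketch of that pipeline, assuming the replays have already been decoded into (state, action, won) records, which is exactly the hard "easily readable" part, since YGOPro's replay files are not plain text. The state keys and actions below are invented for illustration:

```python
from collections import defaultdict

def load_decoded_replays(path):
    """Hypothetical stub: yield (state_key, action, won) tuples from
    a directory of already-decoded replays. Real code would need a
    parser for YGOPro's replay format here."""
    yield ("pendulum_scales_set", "pendulum_summon", True)   # dummy data
    yield ("pendulum_scales_set", "pass_turn", False)        # dummy data

# Count how often each action led to a win from each observed state:
# a crude imitation model of "combinations that achieve victory".
plays = defaultdict(lambda: defaultdict(int))
wins = defaultdict(lambda: defaultdict(int))
for state, action, won in load_decoded_replays("replays/"):
    plays[state][action] += 1
    wins[state][action] += int(won)

def best_action(state):
    """Pick the action with the best empirical win rate in this state."""
    return max(plays[state],
               key=lambda a: wins[state][a] / plays[state][a])

print(best_action("pendulum_scales_set"))  # -> "pendulum_summon"
```

Note that a model built this way inherits exactly the human-error skew Xaddgx described above: misplays in the replay pool get baked into the win rates.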