A few days ago, the chess website Chess.com temporarily banned US grandmaster Hans Niemann for playing moves online that the site suspected were suggested to him by a computer program. It had reportedly previously banned his mentor, Maxim Dlugy.
And at the Sinquefield Cup earlier this month, world champion Magnus Carlsen resigned without comment after playing a poor game against the 19-year-old Niemann. He later said it was because he believed Niemann had continued to cheat recently.
Another participant, Russian grandmaster Ian Nepomniachtchi, called Niemann’s performance “more than impressive”. While Niemann admitted to occasionally cheating in previous online games, he strongly denied ever cheating in a live, over-the-board tournament.
But how does Chess.com, the world’s largest chess site, decide that a player has probably cheated? It can’t show the world the code it uses; otherwise, would-be cheaters would know exactly how to avoid detection.
The website states:
Although legal and practical considerations prevent Chess.com from revealing the full set of data, metrics and tracking used to evaluate games in our fair play tool, we can say that at the heart of Chess.com’s system is a statistical model that evaluates the likelihood of a human player matching an engine’s top choices, and surpassing the confirmed clean play of some of the greatest chess players in history.
Fortunately, research can shed some light on the approach the website may be using.
Humans versus AI
When the artificial intelligence company DeepMind developed the AlphaGo program, which could play the strategy game Go, it was taught to predict the moves a human would make from any given position.
Predicting human moves is a supervised learning problem, the bread and butter of machine learning. Given many examples of human game positions (the data) and the move a human made from each of those positions (the labels), machine learning algorithms can be trained to predict the labels for new data points. DeepMind therefore trained its AI to estimate the probability that a human would make a given move from a given position.
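To make the idea concrete, here is the simplest possible supervised learner for this task: it just counts how often each move was played from each position in a made-up toy dataset. (This is an illustrative sketch only; systems like AlphaGo encode the full board state and train deep neural networks instead of counting.)

```python
from collections import Counter, defaultdict

# Hypothetical toy dataset: (position, move) pairs observed in human games.
# Positions and moves are plain strings here for simplicity.
human_games = [
    ("start", "e4"), ("start", "e4"), ("start", "d4"), ("start", "c4"),
    ("sicilian", "Nf3"), ("sicilian", "Nf3"), ("sicilian", "Nc3"),
]

def train_move_model(examples):
    """Estimate P(move | position) by counting: the simplest supervised learner."""
    counts = defaultdict(Counter)
    for position, move in examples:
        counts[position][move] += 1
    model = {}
    for position, move_counts in counts.items():
        total = sum(move_counts.values())
        model[position] = {m: c / total for m, c in move_counts.items()}
    return model

model = train_move_model(human_games)
print(model["start"]["e4"])  # 0.5: half the humans in the toy data played e4
```

A real model generalizes to positions it has never seen, which a lookup table cannot, but the output is the same kind of object: a probability distribution over moves for each position.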
AlphaGo beat its human rival Lee Sedol in 2016. One of the AI’s famous moves in that match was “Move 37”. As lead researcher David Silver noted in the documentary AlphaGo, “AlphaGo said there was a 1-in-10,000 probability that Move 37 would be played by a human player.”
So according to this machine learning model of human Go players, if you saw someone play Move 37, that would be evidence they didn’t come up with the idea themselves. But of course, it wouldn’t be proof: any human could have played that move.
To become very confident that someone is cheating at a game, you need to examine many moves. For example, researchers have studied how many of a player’s moves can be analyzed collectively to detect anomalies.
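One simple way to aggregate evidence across a whole game (a hypothetical sketch, not Chess.com’s actual method) is a one-sided binomial test: if a strong human matches the engine’s top choice roughly 55% of the time (an assumed figure for illustration), how surprising is a near-perfect match rate?

```python
from math import comb

def match_rate_pvalue(n_moves, n_engine_matches, p_human=0.55):
    """Probability that a human with match rate p_human would match the
    engine's top choice at least n_engine_matches times in n_moves,
    under a binomial model of independent moves."""
    return sum(comb(n_moves, k) * p_human**k * (1 - p_human)**(n_moves - k)
               for k in range(n_engine_matches, n_moves + 1))

# Matching the engine 38 times in 40 moves would be astronomically unlikely
# for a player this model describes:
print(match_rate_pvalue(40, 38) < 1e-6)  # True
```

Real detectors are far more sophisticated (moves are not independent, and match rates vary with position difficulty), but the principle is the same: single moves prove nothing, while dozens of them together can be overwhelming evidence.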
Chess.com openly uses machine learning to predict what moves a human might make in a given position. It has distinct models of famous chess players, and you can actually play against them. Presumably, similar models are used to detect cheating.
A good move
A recent study suggested that in addition to predicting the likelihood of a human making a certain move, it’s also important to consider the quality of that move. This aligns with Chess.com’s statement that it assesses whether the moves “surpass…the confirmed clean play” of the greats.
But how do you measure which moves are better than others? In theory, a chess position is either “winning” (you can guarantee a win), “losing” (your opponent can), or “drawn” (neither can), and a good move would be any move that doesn’t worsen your standing. But realistically, although computers are much better than humans at calculating and choosing future moves, for many positions even they can’t say for sure whether a position is winning, losing, or drawn. And they certainly could never prove it: a proof would usually require too many calculations, examining every leaf of an exponential game tree.
So what people and computers do is use “heuristics” (gut instincts) to assess the “value” of different positions, estimating which player they think will win. This can also be framed as a machine learning problem, where the dataset consists of many board positions and the labels record who won, which trains the algorithm to predict who will win from a given position.
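In its simplest possible form, such a value model is just an observed win rate per position feature (a toy sketch with invented data; real evaluators use rich board encodings and neural networks rather than a single material count):

```python
from collections import defaultdict

# Hypothetical training data: (material balance for the side to move,
# 1 if that side eventually won, else 0).
outcomes = [(+3, 1), (+3, 1), (+3, 0), (0, 1), (0, 0), (-3, 0), (-3, 1), (-3, 0)]

def train_value_model(examples):
    """Estimate P(win | feature) as the observed win rate: the simplest
    supervised approach to position evaluation."""
    wins, totals = defaultdict(int), defaultdict(int)
    for feature, won in examples:
        wins[feature] += won
        totals[feature] += 1
    return {f: wins[f] / totals[f] for f in totals}

value = train_value_model(outcomes)
print(value[3])  # ~0.67: being three points of material up usually won here
```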
Typically, the machine learning models used for this purpose look ahead at likely next moves, consider which positions are reachable by both players, and then use their “gut instinct” about those future positions to inform their assessment of the current position.
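That lookahead is the classic minimax idea: assume each side steers toward the future positions its heuristic likes best, and back those values up the tree. A minimal sketch with a made-up two-ply game tree:

```python
def minimax(node, depth, maximizing, value_fn, children_fn):
    """Back up heuristic leaf values through a game tree: the maximizing
    player picks the child with the highest value, the opponent the lowest."""
    children = children_fn(node)
    if depth == 0 or not children:
        return value_fn(node)
    child_values = [minimax(c, depth - 1, not maximizing, value_fn, children_fn)
                    for c in children]
    return max(child_values) if maximizing else min(child_values)

# Toy tree: after our move (a or b), the opponent picks the reply worst for us.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf_value = {"a1": 0.9, "a2": 0.2, "b1": 0.6, "b2": 0.5}

best = minimax("root", 2, True,
               lambda n: leaf_value.get(n, 0.0),
               lambda n: tree.get(n, []))
print(best)  # 0.5: line b guarantees at least 0.5, line a risks only 0.2
```

Modern engines replace this exhaustive sweep with smarter search (pruning, or the Monte Carlo tree search AlphaGo used), but the backed-up-heuristic structure is the same.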
Why the Player Matters
But who wins from a given position depends on the quality of the players. So the model’s evaluation of a particular position will depend on who played the games that went into the training dataset. When chess commentators talk about the “objective value” of different positions, they mean who is likely to win from a given position when both sides are played by the best chess AI available. But this measure of value is not always the most useful one when considering a position that fallible human players will have to play out. So it’s not clear exactly what Chess.com (or we) should consider a “good move”.
If I cheated in chess and played a few moves suggested by a chess engine, it might not even help me win. Those moves could set up a brilliant attack that would never occur to me, so I’d squander it unless I told the chess engine to play the rest of the game for me. (Lichess.org tells me I’ve played 3,049 games of Blitz at the time of writing, and my not-so-great Elo rating of 1632 means you can expect me to miss good tactics left and right.)
Detecting cheating is difficult. If you’re playing online and wondering whether your opponent is cheating, you won’t be able to tell for sure, because you haven’t seen millions of human games played in drastically different styles. This is a problem where machine learning models trained on huge amounts of data have a big advantage. Ultimately, they may prove critical to the continued integrity of chess.
This article was originally published on The Conversation by Michael K. Cohen at the University of Oxford. Read the original article here.