Leela Chess Zero (abbreviated as LCZero, lc0) is a free, open-source, deep neural network–based chess engine and volunteer computing project. Development has been spearheaded by programmer Gary Linscott, who is also a developer for the Stockfish chess engine. Leela Chess Zero was adapted from the Leela Zero Go engine, which in turn was based on Google's AlphaGo Zero project. One of the purposes of Leela Chess Zero was to verify the methods of the AlphaZero paper as applied to the game of chess.

Like Leela Zero and AlphaGo Zero, Leela Chess Zero starts with no intrinsic chess-specific knowledge other than the basic rules of the game. It then learns how to play chess by reinforcement learning from repeated self-play, using a distributed computing network coordinated at the Leela Chess Zero website. As of December 2022, Leela Chess Zero has played over 1.5 billion games against itself, playing around 1 million games every day, and is capable of play at a level comparable with Stockfish, the leading conventional chess program.


History

The Leela Chess Zero project was first announced on TalkChess.com on January 9, 2018, presenting it as an open-source, self-learning engine with the goal of becoming a strong chess engine. Within the first few months of training, Leela Chess Zero had already reached Grandmaster level, surpassing the strength of early releases of Rybka, Stockfish, and Komodo, despite evaluating orders of magnitude fewer positions, thanks to the deep neural network in its evaluation function and its use of Monte Carlo tree search. In December 2018, the AlphaZero team published a new paper in the journal ''Science'' revealing previously undisclosed details of the architecture and training parameters used for AlphaZero. These changes were soon incorporated into Leela Chess Zero and increased both its strength and training efficiency.

The work on Leela Chess Zero has informed the similar AobaZero project for shogi. The engine has been rewritten and carefully iterated upon since its inception, and it now runs on multiple backends, allowing it to make effective use of different types of hardware, both CPU and GPU. The engine supports the Fischer Random Chess (Chess960) variant, and as of May 2020 a network was being trained to test its viability for that variant.


Program and use

The method used by its designers to make Leela Chess Zero self-learn and play chess above human level is reinforcement learning. This machine-learning technique, mirrored from AlphaZero, is used by the Leela Chess Zero training executable (called the "binary") to maximize reward through self-play. As an open-source distributed computing project, Leela Chess Zero relies on volunteer users running the engine to play hundreds of millions of games, which are fed to the reinforcement learning algorithm.

To contribute to the advancement of the Leela Chess Zero engine, volunteers download the latest non-release-candidate (non-rc) version of the engine as well as the client. The client is needed to connect to the current Leela Chess Zero server, where all of the information from the self-play games is stored, in order to obtain the latest network, generate self-play games, and upload the training data back to the server.

To play against the Leela Chess Zero engine on a local machine, two components are needed: the engine binary and a network. (The engine binary is distinct from the client, in that the client is used as a training platform for the engine.) The network contains Leela Chess Zero's evaluation function, which is needed to evaluate positions. Older networks can also be downloaded and used by placing them in the folder with the Lc0 binary.


Self-play Elo

Self-play Elo is used to gauge relative network strength, to look for anomalies and general changes in network strength, and as a diagnostic tool when Lc0 undergoes significant changes. Through test match games played with minimal temperature-based variation, Lc0 engine clients test the most recent version of a network against other recent versions from the same run, and the results are sent to the training server to produce an overall Elo assessment. Standard Elo formulae are used to calculate the relative Elo strength between the two players (an illustrative calculation appears at the end of this section). More recent self-play Elo calculations use match results against multiple network versions to arrive at a more accurate Elo value.

The self-play approach has several consequences for gauging Lc0's Elo rating:
*Initial cumulative Elo inflation differs drastically between training runs because of aperiodic gains from self-improvement and adversarial play.
*Measuring Elo relative to previous networks fails to measure general strength, since networks are trained to anticipate and beat the predictions made by prior Lc0 networks rather than opponents outside the training domain. This is a type of overfitting, seen most drastically when testing smaller networks.
*There is no direct one-to-one correlation between self-play Elo and strength against alpha-beta engines, and no known correlation with strength against humans.
*The input training data has a significant effect on how a network performs, Elo-wise, against the next iteration.
*Cumulative self-play Elo has no universal conversion to conventional human Elo, because of the inflation introduced by adversarial play and the measure's dependence on time control. This holds true even when the engine runs under a standard set of initial conditions.

Comparing cumulative self-play Elo inflation across runs illustrates how poorly pure cumulative self-play Elo generalizes as a measure of strength. The Fischer Random Chess run Test 71.4 (whose networks are named 714xxx) reached nearly 4000 cumulative self-play Elo only 76 networks into its run (net 714076), while net 63000 of the T60 (6xxxx) run has a cumulative self-play Elo of around 2900. Yet pitting 714076 against net 63000 shows that 63000 clearly beats 714076 in head-to-head matches at most, if not all, "fair" time controls: the nominally 2900-rated network beats the nominally 4000-rated one. This alone supports the claim that cumulative self-play Elo is neither an objective measure of strength nor a measure that allows Lc0 network strength to be compared linearly with human strength.

Setting up the engine to search a single node, with --minibatch-size=1 and go nodes 1 for each played move, produces deterministic play. Self-play Elo under such settings will always yield the same result between two copies of the same network from the same start position: always a win, always a loss, or always a draw. Self-play Elo is therefore not reliable for determining strength in these deterministic circumstances.
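The standard Elo formulae mentioned above relate a match score to a rating difference. The short sketch below is not taken from the Lc0 codebase, and the match figures are invented; it only shows how a self-play result between two network versions maps to an implied Elo gap.

    # Illustrative sketch of the standard Elo relationship; the match figures are invented.
    import math

    def expected_score(rating_diff: float) -> float:
        """Expected score for a player rated rating_diff points above the opponent."""
        return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

    def elo_difference(score_fraction: float) -> float:
        """Rating difference implied by a match score fraction (inverse of the above)."""
        return 400.0 * math.log10(score_fraction / (1.0 - score_fraction))

    # Hypothetical self-play match: the newer network scores 60 wins, 25 losses,
    # 115 draws against the previous network from the same run.
    wins, losses, draws = 60, 25, 115
    score = (wins + 0.5 * draws) / (wins + losses + draws)   # 0.5875
    print(f"implied Elo difference: {elo_difference(score):+.1f}")  # about +61

    # Round-trip check: the implied difference reproduces the observed score.
    assert abs(expected_score(elo_difference(score)) - score) < 1e-9

Chaining such pairwise differences across successive networks is, roughly, how a cumulative self-play Elo figure arises, which is why the total can inflate without implying a corresponding gain against opponents outside the training domain.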


Spinoffs

In season 15 of the Top Chess Engine Championship (TCEC), the engine AllieStein competed alongside Leela. AllieStein is a combination of two different spinoffs from Leela: Allie, which uses the same evaluation network as Leela but has a unique search algorithm for exploring different lines of play, and Stein, an evaluation network trained with supervised learning on existing game data featuring other engines (as opposed to the reinforcement learning from self-play that Leela uses). While neither of these projects would be admitted to TCEC separately because of their similarity to Leela, the combination of Allie's search algorithm with the Stein network, called AllieStein, is unique enough to warrant it competing alongside mainstream Lc0. (The TCEC rules require that a neural network-based engine have at least two unique components out of three essential features: the code that evaluates a network, the network itself, and the search algorithm. While AllieStein uses the same code as Lc0 to evaluate its network, the other two components are original, so AllieStein is considered a distinct engine.)

In early 2021, the LCZero blog announced Ceres, a new chess engine that uses LCZero networks. It implements Monte Carlo tree search as well as many novel algorithmic improvements. Initial Elo testing showed that Ceres is stronger than Lc0 with the same network.


Competition results

In April 2018, Leela Chess Zero became the first engine using a deep neural network to enter the Top Chess Engine Championship (TCEC), during season 12 in the lowest division, division 4. Leela did not perform well: in 28 games, it won one, drew two, and lost the remainder; its sole victory came from a position in which its opponent, Scorpio 2.82, crashed after three moves. However, it improved quickly. In July 2018, Leela placed seventh out of eight competitors at the 2018 World Computer Chess Championship. In August 2018, it won division 4 of TCEC season 13 with a record of 14 wins, 12 draws, and 2 losses. In division 3, Leela scored 16/28 points, finishing third behind Ethereal, which scored 22.5/28 points, and Arasan on tiebreak.

By September 2018, Leela had become competitive with the strongest engines in the world. In the 2018 Chess.com Computer Chess Championship (CCCC), Leela placed fifth out of 24 entrants. The top eight engines advanced to round 2, where Leela placed fourth. Leela then won the 30-game match against Komodo to secure third place in the tournament. Leela also participated in the TCEC Cup, an event in which engines from different TCEC divisions can play matches against one another. Leela defeated the higher-division engines Laser, Ethereal, and Fire before being eliminated by Stockfish in the semi-finals. In October and November 2018, Leela took part in the Chess.com Computer Chess Championship Blitz Battle, finishing third behind Stockfish and Komodo.

In December 2018, Leela participated in season 14 of the Top Chess Engine Championship. Leela dominated divisions 3, 2, and 1, easily finishing first in all of them. In the premier division, Stockfish dominated while Houdini, Komodo, and Leela competed for second place. It came down to a final-round game in which Leela needed to hold Stockfish to a draw with black in order to finish second ahead of Komodo. It managed this and therefore met Stockfish in the superfinal. While many expected Stockfish to win comfortably, Leela exceeded all expectations and scored several impressive wins, eventually losing the superfinal by the narrowest of margins with a 49.5–50.5 final score.

In February 2019, Leela scored its first major tournament win when it defeated Houdini in the final of the second TCEC Cup, without losing a game the entire tournament. In April 2019, Leela won the Chess.com Computer Chess Championship 7: Blitz Bonanza, becoming the first neural-network project to take the title. In season 15 of the Top Chess Engine Championship (May 2019), Leela defended its TCEC Cup title, this time defeating Stockfish in the final 5.5–4.5 (+2 =7 -1) after Stockfish blundered a seven-man tablebase draw. Leela also won the superfinal for the first time, scoring 53.5–46.5 (+14 -7 =79) against Stockfish, including winning as both white and black in the same predetermined opening in games 61 and 62.

Season 16 of TCEC saw Leela finish third in the premier division, missing qualification for the superfinal behind Stockfish and the new deep neural network engine AllieStein. Leela was the only engine not to suffer a loss in the premier division, and it defeated Stockfish in one of the six games they played. However, Leela managed only nine wins, while AllieStein and Stockfish each scored 14. This inability to defeat weaker engines left Leela in third place, half a point behind AllieStein and a point behind Stockfish. In the fourth TCEC Cup, Leela was seeded first as the defending champion, which placed it in the opposite half of the bracket from AllieStein and Stockfish. Leela qualified for the final, where it faced Stockfish; after seven draws, Stockfish won the eighth game to take the match.

In season 17 of TCEC, held in January–April 2020, Leela regained the championship by defeating Stockfish 52.5–47.5, scoring a remarkable six wins in the final ten games, including winning as both white and black in the same predetermined opening in games 95 and 96. It qualified for the superfinal again in season 18, but this time was defeated by Stockfish 53.5–46.5. In the TCEC Cup 6 final, Leela lost to AllieStein, finishing second.

Season 19 of TCEC saw Leela qualify for the superfinal once more. This time it faced a new Stockfish version with support for NNUE, a shallow neural network–based evaluation function used primarily for the leaf nodes of the search tree. Stockfish defeated Leela convincingly with a final score of 54.5–45.5 (+18 -9 =73). Since then, Leela has qualified for the superfinal three more times, losing to Stockfish on each occasion: +14 -8 =78 in season 20, +19 -7 =74 in season 21, and +27 -10 =63 in season 23.


Notable games


Leela vs Stockfish, CCCC bonus games, 1–0
Leela beats the world champion Stockfish engine despite a one-pawn handicap.

In a Trompowsky Attack, Leela completely outplayed Stockfish with the black pieces: Leela's evaluation went from 0.1 to −1.2 in a single move, while Stockfish's evaluation did not go negative until 15 moves later.


External links

*Official website: https://lczero.org
*Leela Chess Zero on GitHub
*Neural network training client
*Engine
*Neural nets
*Chessprogramming wiki on Leela Chess Zero