This section contains unsorted documents imported from our GitHub wiki.
1024 bs 256 nc (cccc 2 params) vs default so far.
Score of lc0 v18 1024 vs lc0 v18: 33 - 45 - 89 [0.464]
Elo difference: -25.01 +/- 36.08
Trying medium vs large batchsize. 1024 batchsize and 256 node cccc vs TCEC 512/32.
This is intended to be a beginner-friendly guide on how to train and bench (against the CCRL Baseline Net) lczero neural networks. This is probably the place to look if you’re stuck with no GPU or an AMD one (as I am), and if you don’t use Linux. Even if you don’t fall into either of these categories, I still hope you’ll be able to take away something from this.
Run go infinite from the start position, abort after depth 26, and report the NPS output.
I put some sample ones from memory. Please add your own bench scores here, sorted by NPS, if you can. If you don’t know your engine type: gpu means opencl and cpu means openblas.
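When collecting scores, the NPS figure comes from the engine's UCI `info` lines. A small hypothetical helper for extracting the last reported value (the sample lines below are made up for illustration):

```python
import re

def parse_nps(uci_lines):
    """Return the last 'nps' value reported in a sequence of UCI info lines.

    UCI engines report speed as 'info ... nps <number> ...'; the last
    report before 'bestmove' is the figure usually quoted."""
    nps = None
    for line in uci_lines:
        m = re.search(r"\bnps (\d+)", line)
        if m:
            nps = int(m.group(1))
    return nps

sample = [
    "info depth 25 seldepth 40 nodes 1200000 nps 41000 pv e2e4",
    "info depth 26 seldepth 42 nodes 1500000 nps 42500 pv e2e4",
    "bestmove e2e4",
]
print(parse_nps(sample))  # -> 42500
```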
The networks below are our strongest available. In general, the largest network compatible with your hardware is recommended. To download, right click the corresponding link and select “Save link as…”
Important: If you just want to run lc0 on your Android device then the steps described here are no longer needed. Now there’s an easier way to run lc0 without having to build it your own. Check it out:
The Chess.com Computer Chess Championship (CCCC) is continuously running computer chess events and is maintained by Chess.com at https://www.chess.com/computer-chess-championship.
Executable | Network | Placement | Result |
---|---|---|---|
v0.30.0-dev dag-bord (5e84a3) | 784038 | 2/2 | 192/480 |
```
# ENGINE    : RATING ERROR CFS(%)   W   D   L GAMES DRAWS(%)
1 Stockfish :     71    22  100.0 134 308  38   480     64.2
2 Lc0       :      0  ----   ---   38 308 134   480     64.2
```
Executable | Network | Placement | Result |
---|---|---|---|
v0.27.0-rc1 | J94-100 | 2/2 | 219.5/500 |
```
# ENGINE    : RATING ERROR CFS(%)   W   D   L GAMES DRAWS(%)
1 Stockfish :     43    22  100.0 122 317  61   500     63.4
2 Lc0       :      0  ----   ---   61 317 122   500     63.4
```
CLOP tuning is a tool for figuring out which tunable constants and parameter values gain lc0 the most Elo or otherwise improve performance.
CLOP Tuner:
https://github.com/kiudee/bayes-skopt
Master (and thus div4) is result of a CLOP tune at 4000 games. Div3 updated the prune factor and time scale based on results at 8000 games as CLOP continued to run during div4.
Use the resources below to contribute to the project through free Google credits
Use the link below if you want to contribute with your own hardware
./cutechess-cli -rounds 100 -tournament gauntlet -concurrency 2 -pgnout results.pgn \
-engine name=lc_gen13_fpu cmd=lczero_fpu arg="--threads=1" arg="--weights=$WDR/cd1a1e" arg="--playouts=800" arg="--noponder" arg="--noise" tc=inf \
-engine name=lc_gen13 cmd=lczero arg="--threads=1" arg="--weights=$WDR/cd1a1e" arg="--playouts=800" arg="--noponder" arg="--noise" tc=inf \
-each proto=uci
(I think with some upcoming patches we can do this in GUIs together with -l lzcdebug.log.)
So, the major assumption here is that the Tensorboard data is a reliable sole source for the information it provides. That’s quite reasonable (if it weren’t, we might as well give up the project), but it’s also in contrast to the various Elo measurements we have, none of which is a single reliable source of playing strength; instead they all need to be considered collectively.
Many people test the strength of Lc0 nets but the main exchange of results is now the test-results channel of the Lc0 chat on discord. Most web pages are no longer or only occasionally updated. Status as of 2019-10-14:
The client will run games and arrange all communication with the learning server. Thanks for contributing!
This guide will allow you to have Leela Chess Zero clients running in the cloud in 10 minutes or less. These clients will run self-play training games and help make Leela stronger. This guide is aimed at everyone and assumes no technical understanding.
Below is a list of guides and instructions for various tasks.
There are several known issues with Leela Chess Zero play and training runs. Some of them have already been solved and are listed for “historical documentation”, while others are still being investigated.
Below is a brief summary of our investigation into the issues that started showing up in the Elo graph from ID253. We have been performing a number of tests that require changing parameters, generating new self-play games, and training on those. This process requires many self-play games, so your help generating them is still invaluable! We have some promising leads, but still require more time and testing. Thank you for your patience.
32 bit processors and operating systems are supported by lc0. However, CUDNN is not available for such platforms, so only the blas and opencl backends can be built.
There are no official builds distributed, but the latest version available can be found using the following links:
Flag | UCI option | Description |
---|---|---|
--help, -h | | Show help and exit. |
--weights, -w | WeightsFile | Path from which to load network weights. Setting it to <autodiscover> makes it search in ./ and ./weights/ subdirectories for the latest (by file date) file which looks like weights. Default value: <autodiscover> |
--backend, -b | Backend | Neural network computational backend to use. Default value: cudnn. Allowed values: cudnn, cudnn-fp16, opencl, blas, check, random, roundrobin, multiplexing, demux |
--backend-opts, -o | BackendOptions | Parameters of neural network backend. Exact parameters differ per backend. |
--threads, -t | Threads | Number of (CPU) worker threads to use. Default value: 2. Minimum value: 1. Maximum value: 128 |
--nncache | NNCacheSize | Number of positions to store in a memory cache. A large cache can speed up searching, but takes memory. Default value: 200000. Minimum value: 0. Maximum value: 999999999 |
Flag | UCI option | Description |
---|---|---|
--minibatch-size | MinibatchSize | How many positions the engine tries to batch together for parallel NN computation. Larger batches may reduce strength a bit, especially with a small number of playouts. Default value: 256. Minimum value: 1. Maximum value: 1024 |
--max-prefetch | MaxPrefetch | When the engine cannot gather a large enough batch for immediate use, try to prefetch up to X positions which are likely to be useful soon, and put them into cache. Default value: 32. Minimum value: 0. Maximum value: 1024 |
--cpuct | CPuct | cpuct_init constant from the “UCT search” algorithm. Higher values promote more exploration/wider search, lower values promote more confidence/deeper search. Default value: 3.00. Minimum value: 0.00. Maximum value: 100.00 |
--cpuct-base | CPuctBase | cpuct_base constant from the “UCT search” algorithm. A lower value means higher growth of Cpuct as the number of node visits grows. Default value: 19652.00. Minimum value: 1.00. Maximum value: 1000000000.00 |
--cpuct-factor | CPuctFactor | Multiplier for the cpuct growth formula. Default value: 2.00. Minimum value: 0.00. Maximum value: 1000.00 |
--temperature | Temperature | Tau value from the softmax formula for the first move. If equal to 0, the engine picks the best move to make. Larger values increase randomness when making the move. Default value: 0.00. Minimum value: 0.00. Maximum value: 100.00 |
--tempdecay-moves | TempDecayMoves | Reduce temperature for every move from the game start to this number of moves, decreasing linearly from the initial temperature to 0. A value of 0 disables tempdecay. Default value: 0. Minimum value: 0. Maximum value: 100 |
--temp-cutoff-move | TempCutoffMove | Move number starting from which the endgame temperature is used rather than the initial temperature. Setting it to 0 disables the cutoff. Default value: 0. Minimum value: 0. Maximum value: 1000 |
--temp-endgame | TempEndgame | Temperature used during the endgame (starting from the cutoff move). Endgame temperature doesn’t decay. Default value: 0.00. Minimum value: 0.00. Maximum value: 100.00 |
--temp-value-cutoff | TempValueCutoff | When a move is selected using temperature, bad moves (with a win probability more than X below that of the best move) are not considered at all. Default value: 100.00. Minimum value: 0.00. Maximum value: 100.00 |
--temp-visit-offset | TempVisitOffset | Reduces visits by this value when picking a move with a temperature. When the number of visits for a particular move is less than the offset, that move is not picked at all. Default value: 0.00. Minimum value: -1.00. Maximum value: 1000.00 |
--noise, -n | DirichletNoise | Add Dirichlet noise to the root node prior probabilities. This allows the engine to discover new ideas during training by exploring moves which are known to be bad. Not normally used during play. Default value: false |
--verbose-move-stats | VerboseMoveStats | Display the Q, V, N, U and P values of every move candidate after each move. Default value: false |
--smart-pruning-factor | SmartPruningFactor | Do not spend time on moves which cannot become the bestmove given the remaining time to search. When no other move can overtake the current best, the search stops, saving time. Values greater than 1 stop less promising moves from being considered even earlier. Values less than 1 cause hopeless moves to still get some attention. When set to 0, smart pruning is deactivated. Default value: 1.33. Minimum value: 0.00. Maximum value: 10.00 |
--fpu-strategy | FpuStrategy | How the eval of an unvisited node is determined. “reduction” subtracts the --fpu-reduction value from the parent eval. “absolute” sets the eval of unvisited nodes to the value specified in --fpu-value. Default value: reduction. Allowed values: reduction, absolute |
--fpu-reduction | FpuReduction | “First Play Urgency” reduction (used when the FPU strategy is “reduction”). Normally when a move has no visits, its eval is assumed to be equal to the parent’s eval. With a non-zero FPU reduction, the eval of an unvisited move is decreased by that value, discouraging visits of unvisited moves and saving those visits for (hopefully) more promising moves. Default value: 1.20. Minimum value: -100.00. Maximum value: 100.00 |
--fpu-value | FpuValue | “First Play Urgency” value. When the FPU strategy is “absolute”, the value of an unvisited node is assumed to be equal to this value, and does not depend on the parent eval. Default value: -1.00. Minimum value: -1.00. Maximum value: 1.00 |
--cache-history-length | CacheHistoryLength | Length of history, in half-moves, to include in the cache key. When this value is less than the history that the NN uses to eval a position, it’s possible that the search will use the eval of the same position with a different history, taken from the cache. Default value: 0. Minimum value: 0. Maximum value: 7 |
--policy-softmax-temp | PolicyTemperature | Policy softmax temperature. Higher values make the priors of move candidates closer to each other, widening the search. Default value: 2.20. Minimum value: 0.10. Maximum value: 10.00 |
--max-collision-events | MaxCollisionEvents | Allowed node collision events, per batch. Default value: 32. Minimum value: 1. Maximum value: 1024 |
--max-collision-visits | MaxCollisionVisits | Total allowed node collision visits, per batch. Default value: 9999. Minimum value: 1. Maximum value: 1000000 |
--out-of-order-eval | OutOfOrderEval | During the gathering of a batch for the NN to eval, if a position happens to be in the cache or is terminal, evaluate it right away without sending the batch to the NN. When off, this may only happen with the very first node of a batch; when on, this can happen with any node. Default value: true |
--syzygy-fast-play | SyzygyFastPlay | With DTZ tablebase files, only allow the network to pick from winning moves that have the shortest DTZ, to play faster (but not necessarily optimally). Default value: true |
--multipv | MultiPV | Number of game play lines (principal variations) to show in the UCI info output. Default value: 1. Minimum value: 1. Maximum value: 500 |
--score-type | ScoreType | What to display as the score: either centipawns (the UCI default), win percentage, or Q (the actual internal score) multiplied by 100. Default value: centipawn. Allowed values: centipawn, win_percentage, Q |
--history-fill | HistoryFill | The neural network uses the 7 previous board positions in addition to the current one. During the first moves of a game such historical positions don’t exist, but they can be synthesized. This parameter defines when to synthesize them (always, never, or only at a non-standard fen position). Default value: fen_only. Allowed values: no, fen_only, always |
--kldgain-average-interval | KLDGainAverageInterval | Used to decide how frequently to evaluate the average KLDGainPerNode against the MinimumKLDGainPerNode, if specified. Default value: 100. Minimum value: 1. Maximum value: 10000000 |
--minimum-kldgain-per-node | MinimumKLDGainPerNode | If greater than 0, the search will abort unless the last KLDGainAverageInterval nodes have an average gain per node of at least this much. Default value: 0.00. Minimum value: 0.00. Maximum value: 1.00 |
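The three cpuct flags above combine into a single exploration coefficient. A minimal sketch of how they interact, assuming the AlphaZero-style growth formula cpuct_init + cpuct_factor * log((N + cpuct_base) / cpuct_base); this is illustrative only, not a quote of Lc0's source:

```python
import math

def cpuct_coefficient(parent_visits,
                      cpuct_init=3.0,      # --cpuct
                      cpuct_base=19652.0,  # --cpuct-base
                      cpuct_factor=2.0):   # --cpuct-factor
    # The coefficient starts at cpuct_init and grows logarithmically
    # as the parent node accumulates visits, widening the search
    # during long thinks.
    return cpuct_init + cpuct_factor * math.log(
        (parent_visits + cpuct_base) / cpuct_base)

print(cpuct_coefficient(0))       # 3.0 before any visits
print(cpuct_coefficient(100000))  # larger after many visits
```

Raising --cpuct-base slows this growth, matching the description above that a lower value means higher growth of Cpuct.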
Flag | UCI option | Description |
---|---|---|
--slowmover | Slowmover | The budgeted time for a move is multiplied by this value, causing the engine to spend more time (if the value is greater than 1) or less time (if the value is less than 1). Default value: 1.00. Minimum value: 0.00. Maximum value: 100.00 |
--move-overhead | MoveOverheadMs | Amount of time, in milliseconds, that the engine subtracts from its total available time (to compensate for a slow connection, interprocess communication, etc). Default value: 200. Minimum value: 0. Maximum value: 100000000 |
--time-midpoint-move | TimeMidpointMove | The move by which the time budgeting algorithm guesses half of all games to be completed. Half of the time allocated for the first move is allocated at approximately this move. Default value: 51.50. Minimum value: 1.00. Maximum value: 100.00 |
--time-steepness | TimeSteepness | “Steepness” of the function the time budgeting algorithm uses to consider when games are completed. Lower values leave more time for the endgame; higher values use more time for each move before the midpoint. Default value: 7.00. Minimum value: 1.00. Maximum value: 100.00 |
--syzygy-paths, -s | SyzygyPath | List of Syzygy tablebase directories, entries separated by the system separator (";" for Windows, ":" for Linux). |
--ponder | Ponder | This option is ignored. It is here to please chess GUIs. Default value: true |
--immediate-time-use | ImmediateTimeUse | Fraction of the time saved by smart pruning which is added to the budget of the next move rather than to the entire game. When 1, all saved time is added to the next move’s budget; when 0, saved time is distributed among all future moves. Default value: 1.00. Minimum value: 0.00. Maximum value: 1.00 |
--ramlimit-mb | RamLimitMb | Maximum memory usage for the engine, in megabytes. The estimation is very rough and can be off by a lot. For example, multiple visits to a terminal node are counted several times, and the estimation assumes that all positions have 30 possible moves. When set to 0, no RAM limit is enforced. Default value: 0. Minimum value: 0. Maximum value: 100000000 |
--config, -c | ConfigFile | Path to a configuration file. The format of the file is one command line parameter per line, e.g. --weights=/path/to/weights. Default value: lc0.config |
--logfile, -l | LogFile | Write the log to that file. Use the special value <stderr> to output the log to the console. |
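Following the ConfigFile format described above (one command line parameter per line), a minimal lc0.config might look like this; the weights path is a placeholder:

```
--weights=/path/to/weights.pb.gz
--threads=2
--nncache=200000
--logfile=<stderr>
```

Any flag from the tables above can appear here, one per line.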
Flag | UCI option | Description |
---|---|---|
--share-trees | ShareTrees | When on, the game tree is shared between the two players; when off, each side has a separate tree. Default value: true |
--games | Games | Number of games to play. Default value: -1. Minimum value: -1. Maximum value: 999999 |
--parallelism | Parallelism | Number of games to play in parallel. Default value: 8. Minimum value: 1. Maximum value: 256 |
--playouts | Playouts | Number of playouts per move to search. Default value: -1. Minimum value: -1. Maximum value: 999999999 |
--visits | Visits | Number of visits per move to search. Default value: -1. Minimum value: -1. Maximum value: 999999999 |
--movetime | MoveTime | Time per move, in milliseconds. Default value: -1. Minimum value: -1. Maximum value: 999999999 |
--training | Training | Enables writing training data. The training data is stored in a temporary subdirectory that the engine creates. Default value: false |
--verbose-thinking | VerboseThinking | Show verbose thinking messages. Default value: false |
--resign-playthrough | ResignPlaythrough | The percentage of games which ignore resign. Default value: 0.00. Minimum value: 0.00. Maximum value: 100.00 |
--reuse-tree | ReuseTree | Reuse the search tree between moves. Default value: false |
--resign-wdlstyle | ResignWDLStyle | If set, the resign percentage applies to any output state being above 100% minus the percentage, instead of the winrate being below it. Default value: false |
--resign-percentage | ResignPercentage | Resign when the win percentage drops below the specified value. Default value: 0.00. Minimum value: 0.00. Maximum value: 100.00 |
--resign-earliest-move | ResignEarliestMove | Earliest move at which resigning is allowed. Default value: 0. Minimum value: 0. Maximum value: 1000 |
--interactive | | Run in interactive mode with a UCI-like interface. Default value: false |
Flag | UCI option | Description |
---|---|---|
--nodes | | Number of nodes to run as a benchmark. Default value: -1. Minimum value: -1. Maximum value: 999999999 |
--movetime | | Benchmark time allocation, in milliseconds. Default value: 10000. Minimum value: -1. Maximum value: 999999999 |
--fen | | Benchmark initial position FEN. Default value: rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1 |
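The benchmark flags above are passed to lc0's benchmark mode. A hypothetical invocation (the binary and weights paths are placeholders) might look like:

```
./lc0 benchmark --weights=/path/to/weights.pb.gz --movetime=10000
```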
Install docker for your box
Create temporary folder
mkdir ~/temp && cd ~/temp
Create a file named Dockerfile
and paste the following
FROM ubuntu:latest
RUN apt-get update
RUN apt install g++ git libboost-all-dev libopenblas-dev opencl-headers ocl-icd-libopencl1 ocl-icd-opencl-dev zlib1g-dev cmake wget supervisor curl -y
RUN apt install -y clinfo && clinfo
RUN mkdir /lczero && \
cd ~ && \
git clone https://github.com/glinscott/leela-chess.git &&\
cd leela-chess &&\
git submodule update --init --recursive &&\
mkdir build && cd build &&\
#for gpu build
#cmake .. && \
#for cpu build
cmake -DFEATURE_USE_CPU_ONLY=1 .. &&\
make -j$(nproc) && \
cp lczero /lczero && \
cd /lczero && \
curl -s -L https://github.com/glinscott/leela-chess/releases/latest | egrep -o '/glinscott/leela-chess/releases/download/v.*/client_linux' | head -n 1 | wget --base=http://github.com/ -i - && \
chmod +x client_linux
COPY supervisord.conf /etc/supervisor/conf.d/
RUN service supervisor start
If you want to build for GPU, comment out the cmake -DFEATURE_USE_CPU_ONLY=1 .. line and uncomment cmake ..
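Once the Dockerfile and the supervisord.conf it copies are in the temporary folder, the image can be built and started along these lines (the image tag is a placeholder):

```
sudo docker build -t lczero-client .
sudo docker run -d lczero-client
```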
Welcome to the Leela Chess Zero wiki!
Lc0 is a UCI-compliant chess engine designed to play chess via neural networks.
Use the resources below to contribute to the project through free Google credits
Leela Chess is known to miss relatively shallow tactics, in contrast to its otherwise high level of play. Below is a rough draft explaining how the search works and why it misses shallow tactics.
The networks below are our strongest available, and the first listed (BT4-spsa-1740) is what is currently being sent to engine competitions like the TCEC and CCC. In general, the largest network compatible with your hardware is recommended. To download, right click the corresponding link and select “Save link as…”
The self-play games your client creates are used by the central server to improve the neural net. This process is called training (many people call the process of running the client to produce self-play games training, but in machine learning these games are only the input data for the actual training process).
2018-06-10, ID 396: Switch main training pipeline to use scs-ben’s bootstrapped net to try to shortcut fixing rule50 issues. Elo dropped for a while because of this transition, but tests show the rule50 issues are mostly gone.
Google Colaboratory (Colab) is a free tool for machine learning research. It is a Python notebook running in a virtual machine with an NVIDIA Tesla K80, T4, V100 or A100 GPU (graphics processors developed by the NVIDIA Corporation).
After some Lc0 clone projects abused Colab, running chess training on Colab’s free tier is a disallowed activity and is throttled/banned.
Do not run the Lc0 training on Colab (at least in the free runtime).
Open the directory where Lc0 is located.
Hold Shift key on your keyboard and right-click on the empty space of the window, so that directory popup menu appears.
Follow these simple steps and you’ll be running lc0 on your Android device. No root needed. Just the right engine, a weights file and a supported Chess App.
Since version 0.24, Leela Chess Zero has official support for Android. Get the APK from here:
https://github.com/LeelaChessZero/lc0/releases/latest
After installing the APK you will need a chess app that supports the Open Exchange protocol, like the following:
If you have nvidia-docker2 installed, it’s possible to run lczero under a docker container while still leveraging all the speed advantages of CUDA and cuDNN. See https://github.com/NVIDIA/nvidia-docker for instructions on setting up nvidia-docker2.
Lichess has developed a BOT API that allows anybody to hook up a chess engine to Lichess. Leela Chess Zero will make use of this API via the official bridge lichess-bot.
Update: While the MKL version may be useful for analysis, it will be quite slow when generating training games.
If you have a newer Intel CPU and no dedicated GPU, you can boost your nps by using Intel’s Math Kernel Library (MKL). Please note that you should keep track of your CPU temperature to avoid overheating, especially if you have an older Haswell CPU. Windows Task Manager and HWInfo64 are great tools for tracking your resource usage. This guide assumes that you have already downloaded the most recent version (http://lczero.org/play/download/) and extracted it.
Use this script to run tournaments between different versions of Leela.
https://cdn.discordapp.com/attachments/430695662108278784/436501331046563861/TestLeelaNets_v7.ipynb
For TCEC Season 13, division 3, LC0 performed much worse than expected. This page summarizes the issues:
Quote from mps19, 2018-08-16 at 5:17 PM: lc0_tcec shows a 6 Elo regression from lc0 v0.16 against AB engines instead of the improvement found in self play.
A nice pictorial overview
Leela searches a tree of moves and game states. Each game state is a node in the tree, with an estimated value for that position, and a prioritized list of moves to consider (called the policy for that position). Traditional chess engines have a very-finely-crafted-by-humans value and policy generation system; unlike traditional engines, Leela uses its neural network trained without human knowledge for both value and policy generation. Then, Leela expands the tree to get a better understanding of the root node, the current position.
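The selection step described above can be sketched as follows. This is a simplified PUCT-style illustration (value estimate Q, policy prior P, visit counts N), not Lc0's actual implementation:

```python
import math

def select_child(children, cpuct=3.0):
    """Pick the child maximizing Q + U, where U favors moves with a high
    policy prior and few visits. `children` is a list of dicts with
    'q' (value estimate), 'p' (policy prior) and 'n' (visit count)."""
    total_visits = sum(c["n"] for c in children)

    def score(c):
        # Exploration bonus shrinks as a move accumulates visits.
        u = cpuct * c["p"] * math.sqrt(total_visits) / (1 + c["n"])
        return c["q"] + u

    return max(children, key=score)

# A toy position: a well-visited decent move vs. an unexplored alternative
# with a higher prior. Early on, the prior draws visits to the second move.
children = [
    {"q": 0.30, "p": 0.40, "n": 100},
    {"q": 0.00, "p": 0.55, "n": 0},
]
best = select_child(children)
print(best["p"])  # the unexplored high-prior move is picked first
```

As visits accumulate on a move, its exploration bonus shrinks and the search concentrates on the moves whose value estimates hold up.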
So you’d like to do some engine testing with Lc0 (testing different nets, checking if your improvement increases elo, etc.). Here’s how:
Install cutechess-cli. Repository here. cutechess-cli is a tool that allows you to play games and tournaments between multiple engines. It is highly recommended to build from source in order to enable some openings-specific flags that are not present in the prebuilt binary releases (namely policy=round, which allows multiple engines to play the same openings in one round). For Windows users, it is possible to grab the compiled binaries (both GUI and command line executables) posted at the Leela Discord (do not download from the Releases section on github). The GUI binary is compiled with the book openings option policy=round by default. For Linux users, it is possible to compile with:
cd cutechess/projects
qmake -after "SUBDIRS = lib cli"
make
Install ordo. Repository here. ordo is a tool for computing Elo ratings.
Outside the official training nets on https://training.lczero.org/networks/, some people are downloading the game data and training their own nets. See also: http://data.lczero.org/files/networks-contrib/
The training data format generated by lc0 started at version 3 and has been progressively updated to the current version 6. Earlier formats were generated by lczero.
Version 3 training data format:
Run # | Reference | Summary | Currently Active | Net Numbers | Best nets |
---|---|---|---|---|---|
NA | Old Main | Original 192x15 “main” run | No | 1 to 601 | ID595 |
test10 | [[Lc0 Transition]] | Original 256x20 test run | No | 10'000 to 11'262 | 11250 11248 |
test20 | Training run reset | Many changes, see blog. | No | 20'001 to 22'201 | 22018 |
test30 | TB rescoring | Experiment with network initialization strategy, trying to solve spike issues. Experiment with Tablebase rescoring | No | 30'001 to 33'005 | 32930 |
Training Run | 1st LR drop | Elo | 2nd LR drop | Elo | 3rd LR drop | Elo | Best Net | Elo | Current best |
---|---|---|---|---|---|---|---|---|---|
Old Main | ID 595 | 3148 | |||||||
Test 10 | ID 10077 | ID 10320 | ID 11013 | ID 11248 | 3282 | * | |||
Test 20 | ID 20247 | 2318 | ID 20493 | ID 21281 | ID 22018 | 3118 | |||
Test 30 | ID 30854 |
ID for test 20 to be checked
go nodes 1000000 from startpos with network id42482.
go nodes 1000000 from (middlegame) r2br3/ppq2pkp/2b2np1/4p3/6N1/2PB1Q1P/PP1B1PP1/R3R1K1 w - - 1 2 with network id42482.
go nodes 1000000 from (late middlegame) br5r/2bpkp2/B6p/2p4q/1PN1np2/P3P3/1BQ3PP/R4RK1 w - - 0 26 with network id42482.
go nodes 1000000 from (endgame) 8/8/1p1n2k1/3P2p1/3P1b2/1P1K1B2/8/4B3 w - - 2 55 with network id42482.
Note: Ignore MaterialKey for now; that was crem’s idea for Lc2.
For Ubuntu 17.10 and 18.04 see Troubleshooting section
Should now work on 16.04 and 17.10, tested on fresh 16.04 docker and 17.10 docker
In the docker image, remove sudo from the start of all commands that have it.
Note: These instructions apply to the lczero.exe client, not lc0.exe. See [[lc0-transition]]
sudo apt install docker.io
sudo docker pull rcostheta/leelachesszero-mkl
sudo docker exec -it $(sudo docker run -it -d -p 52062:22 rcostheta/leelachesszero-mkl) /file.sh <username> <password> <number of processes>
Note that the docker image is fairly large (911 MB)
Lc0 (Leela Chess Zero) is a chess engine that uses a trained neural network to play chess. It was inspired by Google’s AlphaZero project, and Lc0’s current networks have since surpassed AlphaZero’s strength.
Lc0 is often referred to as a chess engine; however, Lc0 is more like a chess engine shell (often called “the binary”) than a complete chess engine. Lc0 needs a neural network (also called a “weights file”) in order to play, just as a car requires a driver or a mech requires a pilot.
“Zero” means that no human knowledge has been added, with the exception of the rules of the game (piece movement and victory conditions). Even very simple strategic concepts (e.g. that it is good to have more pieces rather than fewer) were not taught to Leela Chess Zero.
The Lc0 XLA backend uses the OpenXLA compiler to produce code that executes a neural network. The backend takes ONNX as input and converts it to the HLO format that XLA understands.