

The Leela Chess Zero neural network is largely based on DeepMind's AlphaGo Zero[1] and AlphaZero[2] architecture. There are, however, some changes.

Network topology

The core of the network is a residual tower with Squeeze-and-Excitation[3] (SE) layers.
The number of residual BLOCKS and the number of FILTERS (channels) per block differ between networks. Typical BLOCKS×FILTERS values are 10×128, 20×256, and 24×320.
SE layers have SE_CHANNELS channels (typically 32 or so).
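
As an illustration of how an SE layer fits into a residual block, here is a minimal sketch in plain C++. It follows Lc0's SE variant, where the excitation path produces both a per-channel sigmoid gate and an additive bias before the skip connection is added; the function name, weight layouts, and buffer layout are assumptions for the example, not the actual backend code.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Illustrative sketch of SE gating on one residual block's output.
// Names and weight layouts are assumptions, not the authoritative code.
void ApplySE(std::vector<float>& x,               // FILTERS x 8 x 8, row-major
             const std::vector<float>& residual,  // skip-connection input
             int filters, int se_channels,
             const std::vector<float>& w1, const std::vector<float>& b1,
             const std::vector<float>& w2, const std::vector<float>& b2) {
  const int kSquares = 64;
  // Squeeze: global average pooling over the board, one value per channel.
  std::vector<float> pooled(filters);
  for (int c = 0; c < filters; ++c) {
    float sum = 0.0f;
    for (int s = 0; s < kSquares; ++s) sum += x[c * kSquares + s];
    pooled[c] = sum / kSquares;
  }
  // Excitation, first FC layer: FILTERS -> SE_CHANNELS, then ReLU.
  std::vector<float> hidden(se_channels);
  for (int h = 0; h < se_channels; ++h) {
    float v = b1[h];
    for (int c = 0; c < filters; ++c) v += w1[h * filters + c] * pooled[c];
    hidden[h] = std::max(0.0f, v);
  }
  // Second FC layer: SE_CHANNELS -> 2*FILTERS (per-channel scale and bias).
  for (int c = 0; c < filters; ++c) {
    float scale = b2[c];
    float bias = b2[filters + c];
    for (int h = 0; h < se_channels; ++h) {
      scale += w2[c * se_channels + h] * hidden[h];
      bias += w2[(filters + c) * se_channels + h] * hidden[h];
    }
    scale = 1.0f / (1.0f + std::exp(-scale));  // sigmoid gate
    for (int s = 0; s < kSquares; ++s) {
      const int i = c * kSquares + s;
      x[i] = scale * x[i] + bias + residual[i];  // gate, bias, skip connection
    }
  }
}
```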

Implement your own Lc0 backend in four simple steps:

  1. Implement the Network and NetworkComputation interfaces.
  2. Write a factory function to create your backend.
  3. Register your factory function using the REGISTER_NETWORK macro.
  4. Link your implementation with Lc0.

Some details:
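
Below is a hypothetical skeleton covering steps 1–3. The class and macro names follow src/neural/network.h and src/neural/factory.h in the Lc0 source tree, but the exact method signatures change between Lc0 versions (for example, which output heads a computation must expose), so treat this as a sketch rather than copy-paste-ready code.

```cpp
#include <memory>
#include <optional>
#include <utility>
#include <vector>

#include "neural/factory.h"  // REGISTER_NETWORK
#include "neural/network.h"  // Network, NetworkComputation

namespace lczero {
namespace {

// Step 1a: the per-batch computation object.
class MyComputation : public NetworkComputation {
 public:
  void AddInput(InputPlanes&& input) override {
    planes_.emplace_back(std::move(input));
  }
  void ComputeBlocking() override {
    // Run inference for all queued positions here.
  }
  int GetBatchSize() const override { return planes_.size(); }
  float GetQVal(int sample) const override { return 0.0f; }  // value head
  float GetDVal(int sample) const override { return 0.0f; }  // draw prob.
  float GetMVal(int sample) const override { return 0.0f; }  // moves left
  float GetPVal(int sample, int move_id) const override {    // policy head
    return 0.0f;
  }

 private:
  std::vector<InputPlanes> planes_;
};

// Step 1b: the network object that creates computations.
class MyNetwork : public Network {
 public:
  explicit MyNetwork(const WeightsFile& weights) { /* Load weights here. */ }
  std::unique_ptr<NetworkComputation> NewComputation() override {
    return std::make_unique<MyComputation>();
  }
  const NetworkCapabilities& GetCapabilities() const override {
    return capabilities_;
  }

 private:
  NetworkCapabilities capabilities_{};
};

// Step 2: the factory function.
std::unique_ptr<Network> MakeMyNetwork(const std::optional<WeightsFile>& w,
                                       const OptionsDict& options) {
  (void)options;  // Backend-specific options would be read here.
  return std::make_unique<MyNetwork>(*w);
}

// Step 3: register the factory under a backend name and priority.
REGISTER_NETWORK("mybackend", MakeMyNetwork, /*priority=*/0)

}  // namespace
}  // namespace lczero
```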

Various stats from Lc0's participation at the WCCC in 2022.

Logs location

The logs can be found here.

Per-game stats

Legend

The images can be opened in a new tab to zoom in.

This page contains a summary of the changes in PR 1195.

It’s basically the same idea as before, just (hopefully) more carefully done.

Idea

The basic idea is that:

  • By the end of the game, all time should be used.
  • At the moment each move of a game is made, the search tree should have the same total number of nodes (including reused nodes).

Note: this may not be the best approach; aiming for the bestmove to have the same number of visits (rather than the tree total) might actually work better.
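
As a toy illustration of the second point (not the code from PR 1195): if each move reuses on average a fraction r of the previous tree, then keeping every tree at the same total N over the M expected remaining moves costs roughly M * N * (1 - r) new nodes, which should equal the remaining time multiplied by the nodes-per-second rate. All names and the reuse model below are hypothetical.

```cpp
#include <algorithm>
#include <iostream>

// Illustrative sketch of the equal-tree-size idea; not the PR 1195 code.
double TimeForThisMove(double remaining_time_s,  // clock time left
                       double nps,               // nodes per second estimate
                       int moves_left,           // expected remaining moves
                       double reuse_fraction,    // avg. fraction of tree reused
                       double reused_nodes) {    // nodes carried over now
  // Each future move grows the tree back to the same total N, paying only
  // for the non-reused part: moves_left * N * (1 - reuse_fraction) new
  // nodes must fit into the remaining time at the given nps.
  const double target_total =
      remaining_time_s * nps / (moves_left * (1.0 - reuse_fraction));
  // Spend only what is needed to top the reused subtree up to the target.
  const double new_nodes = std::max(0.0, target_total - reused_nodes);
  return new_nodes / nps;
}

int main() {
  // E.g.: 60 s left, 40 expected moves, 100k nps, 30% of the tree reused.
  std::cout << TimeForThisMove(60.0, 100000.0, 40, 0.3, 20000.0) << " s\n";
}
```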

The NN weights file is in Google Protocol Buffers[1] format.

The schema definition is located in the lczero-common repository.

NN format description

The network format description is contained in the weights.format().network_format() submessage.
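
A hypothetical sketch of inspecting that submessage from C++, assuming classes generated by protoc from lczero-common's proto/net.proto (the pblczero namespace matches the Lc0 source, though details may differ between versions) and a weights file already decompressed from its usual gzipped form:

```cpp
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

#include "proto/net.pb.h"  // generated from lczero-common/proto/net.proto

int main(int argc, char** argv) {
  if (argc != 2) return 1;
  // NOTE: distributed weights files are gzip-compressed (.pb.gz);
  // decompress first, e.g. with gunzip.
  std::ifstream file(argv[1], std::ios::binary);
  std::stringstream buffer;
  buffer << file.rdbuf();

  pblczero::Net weights;  // assumed message/namespace names; see net.proto
  if (!weights.ParseFromString(buffer.str())) {
    std::cerr << "Not a valid weights file.\n";
    return 1;
  }
  // The submessage described above. Prints numeric enum values; see
  // net.proto for their meaning.
  const auto& nf = weights.format().network_format();
  std::cout << "network: " << nf.network() << "\n"
            << "policy:  " << nf.policy() << "\n"
            << "value:   " << nf.value() << "\n";
}
```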