Update default main net to nn-c721dfca8cd3.nnue #5254

Closed

Conversation

linrock (Contributor)

@linrock commented May 17, 2024

Created by first retraining the spsa-tuned main net `nn-ae6a388e4a1a.nnue` with:
- using v6-dd data without bestmove captures removed
- addition of T80 mar2024 data
- increasing loss by 20% when Q is too high (see the sketch after this list)
- torch.compile changes for marginal training speed gains
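
Loosely, "increasing loss by 20% when Q is too high" can be read as up-weighting the per-position loss whenever the model's win-probability estimate overshoots the training target. A minimal PyTorch sketch of that idea; the names, shapes, and the exact overshoot condition are assumptions for illustration, not the actual nnue-pytorch loss code:

```python
import torch

def reweighted_loss(base_loss: torch.Tensor, q: torch.Tensor, p: torch.Tensor,
                    factor: float = 1.2) -> torch.Tensor:
    # Scale the per-position loss by `factor` wherever the model's win
    # probability q overshoots the training target p; leave it unchanged otherwise.
    scale = torch.where(q > p,
                        torch.full_like(base_loss, factor),
                        torch.ones_like(base_loss))
    return (base_loss * scale).mean()
```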

The weights of epoch 899 were then SPSA-tuned following the methods described in:
official-stockfish#5149

This net was reached at 92k out of 120k steps of this SPSA tuning run at the 70+0.7 time control with 7 threads:
https://tests.stockfishchess.org/tests/view/66413b7df9f4e8fc783c9bbb
Thanks to @Viren6 for suggesting usage of:
- c value 4 for the weights
- c value 128 for the biases

Scripts for automating the application of fishtest SPSA params and exporting the tuned .nnue are at:
https://github.com/linrock/nnue-tools/tree/master/spsa
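
For reference, fishtest SPSA runs take parameters as comma-separated lines of the form `name,start,min,max,c_end,r_end`, the same format the export snippet later on this page prints. Two purely illustrative lines using the c values above; the names, start values, and ranges here are placeholders, not the actual tuned parameters:

```
ftW[0][0],10,-127,127,4,0.0020
ftB[0],100,-8192,8192,128,0.0020
```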

Before spsa tuning, epoch 899 was nn-f85738aefa84.nnue
https://tests.stockfishchess.org/tests/view/663e5c893a2f9702074bc167

After initially training with max-epoch 800, training was resumed with max-epoch 1000.

```
experiment-name: 3072--S11--more-data-v6-dd-t80-mar2024--see-ge0-20p-more-loss-high-q-sk28-l8
nnue-pytorch-branch: linrock/nnue-pytorch/3072-r21-skip-more-wdl-see-ge0-20p-more-loss-high-q-torch-compile-more

start-from-engine-test-net: False
start-from-model: /data/config/apr2024-3072/nn-ae6a388e4a1a.nnue

early-fen-skipping: 28
training-dataset:
  /data/S11-mar2024/:
    - leela96.v2.min.binpack

    - test60-2021-11-12-novdec-12tb7p.v6-dd.min.binpack
    - test78-2022-01-to-05-jantomay-16tb7p.v6-dd.min.binpack

    - test80-2022-06-jun-16tb7p.v6-dd.min.binpack

    - test80-2022-08-aug-16tb7p.v6-dd.min.binpack
    - test80-2022-09-sep-16tb7p.v6-dd.min.binpack

    - test80-2023-01-jan-16tb7p.v6-sk20.min.binpack
    - test80-2023-02-feb-16tb7p.v6-sk20.min.binpack
    - test80-2023-03-mar-2tb7p.v6-sk16.min.binpack
    - test80-2023-04-apr-2tb7p.v6-sk16.min.binpack
    - test80-2023-05-may-2tb7p.v6.min.binpack

    # official-stockfish#4782
    - test80-2023-06-jun-2tb7p.binpack
    - test80-2023-07-jul-2tb7p.binpack

    # official-stockfish#4972
    - test80-2023-08-aug-2tb7p.v6.min.binpack
    - test80-2023-09-sep-2tb7p.binpack
    - test80-2023-10-oct-2tb7p.binpack

    # S9 new data: official-stockfish#5056
    - test80-2023-11-nov-2tb7p.binpack
    - test80-2023-12-dec-2tb7p.binpack

    # S10 new data: official-stockfish#5149
    - test80-2024-01-jan-2tb7p.binpack
    - test80-2024-02-feb-2tb7p.binpack

    # S11 new data
    - test80-2024-03-mar-2tb7p.binpack

  /data/filt-v6-dd/:
    - test77-dec2021-16tb7p-filter-v6-dd.binpack
    - test78-juntosep2022-16tb7p-filter-v6-dd.binpack
    - test79-apr2022-16tb7p-filter-v6-dd.binpack
    - test79-may2022-16tb7p-filter-v6-dd.binpack
    - test80-jul2022-16tb7p-filter-v6-dd.binpack
    - test80-oct2022-16tb7p-filter-v6-dd.binpack
    - test80-nov2022-16tb7p-filter-v6-dd.binpack

num-epochs: 1000

lr: 4.375e-4
gamma: 0.995
start-lambda: 0.8
end-lambda: 0.7
```
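
A small sketch of how these schedule settings are typically interpreted: an exponential per-epoch learning-rate decay with factor gamma, and a linear interpolation of lambda (the eval-vs-game-result blend) from start-lambda to end-lambda across training. This mirrors common nnue-pytorch usage but is a simplified illustration, not a quote of the trainer code:

```python
lr0, gamma = 4.375e-4, 0.995
start_lambda, end_lambda = 0.8, 0.7
num_epochs = 1000

def lr_at(epoch: int) -> float:
    # exponential decay: roughly 4.8e-6 by epoch 899, 2.9e-6 by epoch 1000
    return lr0 * gamma ** epoch

def lambda_at(epoch: int) -> float:
    # blend of search eval vs. game result, sliding from 0.8 toward 0.7
    t = epoch / (num_epochs - 1)
    return start_lambda + t * (end_lambda - start_lambda)
```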

Training data can be found at:
https://robotmoon.com/nnue-training-data/

Local elo at 25k nodes per move:
nn-epoch899.nnue : 4.6 +/- 1.4

Passed STC:
https://tests.stockfishchess.org/tests/view/6645454893ce6da3e93b31ae
LLR: 2.95 (-2.94,2.94) <0.00,2.00>
Total: 95232 W: 24598 L: 24194 D: 46440
Ptnml(0-2): 294, 11215, 24180, 11647, 280

Passed LTC:
https://tests.stockfishchess.org/tests/view/6645522d93ce6da3e93b31df
LLR: 2.95 (-2.94,2.94) <0.50,2.50>
Total: 320544 W: 81432 L: 80524 D: 158588
Ptnml(0-2): 164, 35659, 87696, 36611, 142
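
As a rough sanity check, a simple logistic Elo point estimate can be derived from the pentanomial counts above. This ignores the pair-correlation and variance handling fishtest actually uses for SPRT, so it is only indicative:

```python
import math

def elo_from_ptnml(counts):
    # counts = game pairs scoring 0, 0.5, 1, 1.5, 2 points for the new engine
    pairs = sum(counts)
    score = sum(k * c for k, c in enumerate(counts)) / (4 * pairs)  # per-game score
    return -400 * math.log10(1 / score - 1)

print(elo_from_ptnml([294, 11215, 24180, 11647, 280]))   # STC: ~1.5 Elo
print(elo_from_ptnml([164, 35659, 87696, 36611, 142]))   # LTC: ~1.0 Elo
```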

bench 1995552
@vondele added the 🚀 gainer (Gains elo), functional-change, and to be merged (Will be merged shortly) labels May 18, 2024
@vondele closed this in 1b7dea3 May 18, 2024
linrock added a commit to linrock/Stockfish that referenced this pull request May 30, 2024
Created by further tuning the spsa-tuned main net `nn-c721dfca8cd3.nnue`
with the same methods described in official-stockfish#5254

This net was reached at 61k / 120k spsa games at 70+0.7 th 7:
https://tests.stockfishchess.org/tests/view/665639d0a86388d5e27dd259

Passed STC:
https://tests.stockfishchess.org/tests/view/6657d44e6b0e318cefa8d771
LLR: 2.95 (-2.94,2.94) <0.00,2.00>
Total: 114688 W: 29775 L: 29344 D: 55569
Ptnml(0-2): 274, 13633, 29149, 13964, 324

Passed LTC:
https://tests.stockfishchess.org/tests/view/6657e1e46b0e318cefa8d7a6
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 88152 W: 22412 L: 21988 D: 43752
Ptnml(0-2): 56, 9560, 24409, 10006, 45

Bench: 1288612
vondele pushed a commit to vondele/Stockfish that referenced this pull request May 30, 2024
Created by further tuning the spsa-tuned main net `nn-c721dfca8cd3.nnue`
with the same methods described in official-stockfish#5254

This net was reached at 61k / 120k spsa games at 70+0.7 th 7:
https://tests.stockfishchess.org/tests/view/665639d0a86388d5e27dd259

Passed STC:
https://tests.stockfishchess.org/tests/view/6657d44e6b0e318cefa8d771
LLR: 2.95 (-2.94,2.94) <0.00,2.00>
Total: 114688 W: 29775 L: 29344 D: 55569
Ptnml(0-2): 274, 13633, 29149, 13964, 324

Passed LTC:
https://tests.stockfishchess.org/tests/view/6657e1e46b0e318cefa8d7a6
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 88152 W: 22412 L: 21988 D: 43752
Ptnml(0-2): 56, 9560, 24409, 10006, 45

closes official-stockfish#5308

Bench: 1434678
linrock added a commit to linrock/Stockfish that referenced this pull request Jul 6, 2024
Created by setting output weights (256) and biases (8) of the previous main net
nn-ddcfb9224cdb.nnue to values found around 12k / 120k spsa games at 120+1.2

This used modified fishtest dev workers to construct .nnue files from
spsa params, then load them with EvalFile when running tests:
https://github.com/linrock/fishtest/tree/spsa-file-modified-nnue/worker

Inspired by researching loading spsa params from files:
official-stockfish/fishtest#1926

Scripts for modifying nnue files and preparing params:
https://github.com/linrock/nnue-pytorch/tree/no-gpu-modify-nnue

spsa params:
  weights: [-127, 127], c_end = 6
  biases: [-8192, 8192], c_end = 64

Example of reading output weights and biases from the previous main net using
nnue-pytorch and printing spsa params in a format compatible with fishtest:

```
import features
from serialize import NNUEReader

feature_set = features.get_feature_set_from_name("HalfKAv2_hm")
with open("nn-ddcfb9224cdb.nnue", "rb") as f:
    model = NNUEReader(f, feature_set).model

c_end_weights = 6
c_end_biases = 64

for i in range(8):
    for j in range(32):
        value = round(int(model.layer_stacks.output.weight[i, j] * 600 * 16) / 127)
        print(f"oW[{i}][{j}],{value},-127,127,{c_end_weights},0.0020")

for i in range(8):
    value = int(model.layer_stacks.output.bias[i] * 600 * 16)
    print(f"oB[{i}],{value},-8192,8192,{c_end_biases},0.0020")
```

For more info on spsa tuning params in nets:
official-stockfish#5149
official-stockfish#5254

Passed STC:
https://tests.stockfishchess.org/tests/view/66894d64e59d990b103f8a37
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 32000 W: 8443 L: 8137 D: 15420
Ptnml(0-2): 80, 3627, 8309, 3875, 109

Passed LTC:
https://tests.stockfishchess.org/tests/view/6689668ce59d990b103f8b8b
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 172176 W: 43822 L: 43225 D: 85129
Ptnml(0-2): 97, 18821, 47633, 19462, 75

bench 993416
vondele pushed a commit to vondele/Stockfish that referenced this pull request Jul 9, 2024
Created by setting output weights (256) and biases (8) of the previous main net
nn-ddcfb9224cdb.nnue to values found around 12k / 120k spsa games at 120+1.2

This used modified fishtest dev workers to construct .nnue files from
spsa params, then load them with EvalFile when running tests:
https://github.com/linrock/fishtest/tree/spsa-file-modified-nnue/worker

Inspired by researching loading spsa params from files:
official-stockfish/fishtest#1926

Scripts for modifying nnue files and preparing params:
https://github.com/linrock/nnue-pytorch/tree/no-gpu-modify-nnue

spsa params:
  weights: [-127, 127], c_end = 6
  biases: [-8192, 8192], c_end = 64

Example of reading output weights and biases from the previous main net using
nnue-pytorch and printing spsa params in a format compatible with fishtest:

```
import features
from serialize import NNUEReader

feature_set = features.get_feature_set_from_name("HalfKAv2_hm")
with open("nn-ddcfb9224cdb.nnue", "rb") as f:
    model = NNUEReader(f, feature_set).model

c_end_weights = 6
c_end_biases = 64

for i in range(8):
    for j in range(32):
        value = round(int(model.layer_stacks.output.weight[i, j] * 600 * 16) / 127)
        print(f"oW[{i}][{j}],{value},-127,127,{c_end_weights},0.0020")

for i in range(8):
    value = int(model.layer_stacks.output.bias[i] * 600 * 16)
    print(f"oB[{i}],{value},-8192,8192,{c_end_biases},0.0020")
```

For more info on spsa tuning params in nets:
official-stockfish#5149
official-stockfish#5254

Passed STC:
https://tests.stockfishchess.org/tests/view/66894d64e59d990b103f8a37
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 32000 W: 8443 L: 8137 D: 15420
Ptnml(0-2): 80, 3627, 8309, 3875, 109

Passed LTC:
https://tests.stockfishchess.org/tests/view/6689668ce59d990b103f8b8b
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 172176 W: 43822 L: 43225 D: 85129
Ptnml(0-2): 97, 18821, 47633, 19462, 75

closes official-stockfish#5459

bench 1120091
yl25946 pushed a commit to yl25946/Stockfish that referenced this pull request Jul 9, 2024