
etcdserver: significantly reduces start-up time #11779

Merged
merged 3 commits into etcd-io:master from tangcong:optimize-etcd-startup-time on Apr 17, 2020

Conversation

@tangcong (Contributor) commented Apr 11, 2020

If there are too many keys (> 1 million), etcd is slow to start. After my investigation, 90% of the time is spent on mvcc restore (rebuilding the index tree), 9% is spent opening the backend db, and because the index tree has a global lock (idx.Insert), there is no room for optimization there. However, the index restore is called twice, which is unreasonable. We can benefit from PR #11699 and cut the time in half.
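For intuition, here is a minimal, self-contained timing sketch (my addition, not part of the PR) using go.etcd.io/bbolt directly. It separates the backend-open cost from a restore-like full scan of etcd's "key" bucket; the db path is an assumption, and the scan only counts revisions where a real restore would also unmarshal each value and insert it into the treeIndex under the global lock:

	package main

	import (
		"fmt"
		"log"
		"time"

		bolt "go.etcd.io/bbolt"
	)

	func main() {
		// Open the backend read-only; the path is an assumption (etcd's default layout).
		start := time.Now()
		db, err := bolt.Open("member/snap/db", 0600, &bolt.Options{ReadOnly: true})
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()
		fmt.Println("open backend:", time.Since(start))

		// Restore-like pass: iterate every revision in the mvcc "key" bucket.
		// A real restore unmarshals each value and calls idx.Insert, which is
		// serialized by a global lock, so the cost grows linearly with key
		// count, and doing this pass twice roughly doubles startup time.
		start = time.Now()
		n := 0
		err = db.View(func(tx *bolt.Tx) error {
			b := tx.Bucket([]byte("key"))
			if b == nil {
				return fmt.Errorf("key bucket not found")
			}
			return b.ForEach(func(k, v []byte) error {
				n++
				return nil
			})
		})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("restore-like scan of %d revisions: %v\n", n, time.Since(start))
	}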

@codecov-io commented Apr 11, 2020

Codecov Report

Merging #11779 into master will decrease coverage by 0.46%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master   #11779      +/-   ##
==========================================
- Coverage   66.65%   66.18%   -0.47%     
==========================================
  Files         403      403              
  Lines       36881    36879       -2     
==========================================
- Hits        24582    24408     -174     
- Misses      10811    10969     +158     
- Partials     1488     1502      +14     
Impacted Files Coverage Δ
etcdserver/backend.go 54.34% <100.00%> (-1.91%) ⬇️
auth/store.go 52.38% <0.00%> (-25.99%) ⬇️
auth/jwt.go 56.17% <0.00%> (-21.35%) ⬇️
etcdserver/api/v3rpc/lease.go 77.21% <0.00%> (-5.07%) ⬇️
clientv3/leasing/cache.go 87.22% <0.00%> (-4.45%) ⬇️
clientv3/leasing/util.go 95.00% <0.00%> (-3.34%) ⬇️
clientv3/leasing/txn.go 88.09% <0.00%> (-3.18%) ⬇️
etcdserver/api/v3election/election.go 61.11% <0.00%> (-2.78%) ⬇️
clientv3/watch.go 89.56% <0.00%> (-2.72%) ⬇️
pkg/adt/interval_tree.go 84.71% <0.00%> (-2.51%) ⬇️
... and 17 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 59f5fb2...73c8d71.

@tangcong (Contributor, Author) commented:
With 40 million keys of test data, this PR reduces the startup time from 5 minutes to 2.5 minutes. @jingyih PTAL, thanks.

@jingyih (Contributor) left a comment:

LGTM

@jingyih (Contributor) commented Apr 16, 2020

cc @gyuho

@gyuho self-assigned this Apr 16, 2020
CHANGELOG-3.5.md: review thread (outdated, resolved)
@gyuho (Contributor) commented Apr 16, 2020

This feels more like a bug... Let's backport this.

As always, awesome work @tangcong!

Thanks.

@gyuho (Contributor) commented Apr 16, 2020

> With 40 million keys of test data, this PR reduces the startup time from 5 minutes to 2.5 minutes. @jingyih PTAL, thanks.

Can we also include this in CHANGELOG? :)

@tangcong (Contributor, Author) commented:
> With 40 million keys of test data, this PR reduces the startup time from 5 minutes to 2.5 minutes. @jingyih PTAL, thanks.
>
> Can we also include this in CHANGELOG? :)

Updated.

@tangcong (Contributor, Author) commented:
> This feels more like a bug... Let's backport this.
>
> As always, awesome work @tangcong!
>
> Thanks.

etcd 3.4/3.3 do not include #11699; it is a refactoring change. How do we backport it?

-	kv := mvcc.New(cfg.Logger, oldbe, &lease.FakeLessor{}, ci, mvcc.StoreConfig{CompactionBatchLimit: cfg.CompactionBatchLimit})
-	defer kv.Close()
-	if snapshot.Metadata.Index <= kv.ConsistentIndex() {
+	if snapshot.Metadata.Index <= ci.ConsistentIndex() {
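For context, the ci in this hunk is the consistent-index accessor introduced by #11699 (the etcdserver/cindex package), which is what lets this code read the applied index without constructing an mvcc store. Roughly, the interface looks like the sketch below; the method set is paraphrased rather than copied from upstream, and BatchTx stands in for etcd's backend.BatchTx:

	package cindex

	// BatchTx is a placeholder for etcd's backend.BatchTx.
	type BatchTx interface{}

	// ConsistentIndexer exposes the last applied raft index persisted in the
	// backend's meta bucket, so callers such as recoverSnapshotBackend can
	// compare a snapshot's index against it without rebuilding the mvcc store.
	type ConsistentIndexer interface {
		// ConsistentIndex reads the persisted applied index.
		ConsistentIndex() uint64
		// SetConsistentIndex updates the in-memory value after applying entries.
		SetConsistentIndex(v uint64)
		// UnsafeSave writes the value within an already-open batch transaction.
		UnsafeSave(tx BatchTx)
	}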
@tangcong (Contributor, Author):

We have to make sure the backend has the consistentIndexKeyName key; otherwise, boltdb will throw an exception. recoverSnapshotBackend is only called when a WAL exists, so I think it is safe. What do you think? @gyuho
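To make the concern concrete, here is a self-contained sketch (my addition) of a guarded read using go.etcd.io/bbolt. etcd persists the consistent index as a big-endian uint64 under the consistent_index key in the meta bucket; the nil-bucket and length checks are the point, since an unguarded Get on a missing bucket is exactly where a fresh backend would make a naive reader panic:

	package main

	import (
		"encoding/binary"
		"fmt"
		"log"

		bolt "go.etcd.io/bbolt"
	)

	// readConsistentIndex returns 0 when the meta bucket or the
	// consistent_index key is missing, instead of panicking the way an
	// unguarded Get on a nil bucket would on a brand-new backend.
	func readConsistentIndex(db *bolt.DB) (uint64, error) {
		var ci uint64
		err := db.View(func(tx *bolt.Tx) error {
			b := tx.Bucket([]byte("meta"))
			if b == nil {
				return nil // fresh backend: meta bucket not created yet
			}
			if v := b.Get([]byte("consistent_index")); len(v) == 8 {
				ci = binary.BigEndian.Uint64(v)
			}
			return nil
		})
		return ci, err
	}

	func main() {
		// The path is an assumption (etcd's default data layout).
		db, err := bolt.Open("member/snap/db", 0600, &bolt.Options{ReadOnly: true})
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()
		ci, err := readConsistentIndex(db)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("consistent_index:", ci)
	}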

Contributor:

Hmm, this is called before we call mvcc.NewStore...

Contributor:

Can we do something like

	if beExist {
		kvindex := srv.kv.ConsistentIndex()

@tangcong (Contributor, Author):

Is there any case where the WAL file exists but the db does not?

@tangcong (Contributor, Author):

> Can we do something like
>
> 	if beExist {
> 		kvindex := srv.kv.ConsistentIndex()

Done, it will make recoverSnapshotBackend more robust. Thanks.
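Paraphrasing what the added guard presumably looks like inside recoverSnapshotBackend (a sketch, not the exact merged code): default the index to 0 when the backend file did not already exist, so a fresh db is never asked for a key it does not have:

	// beExist reports whether the backend db file was already on disk.
	consistentIndex := uint64(0)
	if beExist {
		consistentIndex = ci.ConsistentIndex()
	}
	if snapshot.Metadata.Index <= consistentIndex {
		// The existing backend already covers the snapshot; keep it.
		return oldbe, nil
	}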

Contributor:

> Is there any case where the WAL file exists but the db does not?

I don't think so, unless the user sets up a separate WAL dir and deletes the data dir intentionally.

@gyuho (Contributor) commented Apr 16, 2020

@tangcong Ok, then let's just keep it in master.

@gyuho merged commit b368be5 into etcd-io:master on Apr 17, 2020
@tangcong deleted the optimize-etcd-startup-time branch on February 26, 2021
@ahrtr mentioned this pull request on Jul 19, 2022
@ahrtr (Member) commented Jul 19, 2022

@tangcong can you please backport this PR to 3.4? FYI: #14232

@tangcong (Contributor, Author) commented:
It depends on PR #11699; the changes are relatively large, and I recommend not backporting this PR. @ahrtr

@ahrtr (Member) commented Jul 20, 2022

> It depends on PR #11699; the changes are relatively large, and I recommend not backporting this PR. @ahrtr

Agreed. We should only backport bug fixes, while #11699 is an improvement/refactoring.
