
client: Speed up row access by index #174

Merged
merged 1 commit into from
Jul 1, 2021

Conversation

dave-tucker
Collaborator

@dave-tucker dave-tucker commented Jun 21, 2021

This adds a new API to the cache - RowByModel - which will look up a
row by the UUID of the model, or by matching the model against the
table indexes. This is much faster as it doesn't need to iterate
across the entire cache.

The Get API has been updated to use this new function.

Fixes: #150

Signed-off-by: Dave Tucker dave@dtucker.co.uk
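
For readers following along, here is a minimal sketch of the idea behind an index-based lookup. It is illustrative only, not the actual libovsdb implementation: the bridge and rowCache types, the byName map, and the assumption that "name" is a schema-indexed column are all hypothetical. The point is that a secondary map from index value to row UUID turns a Get by an indexed column into a couple of map reads instead of a scan over every cached row.

package main

import "fmt"

type bridge struct {
    UUID string
    Name string // assume "name" is declared as an indexed column in the schema
}

type rowCache struct {
    rows   map[string]bridge // primary store, keyed by row UUID
    byName map[string]string // index: name -> UUID
}

// rowByModel first tries the UUID, then the "name" index; it never scans rows.
func (c *rowCache) rowByModel(m bridge) (bridge, bool) {
    if m.UUID != "" {
        r, ok := c.rows[m.UUID]
        return r, ok
    }
    if uuid, ok := c.byName[m.Name]; ok {
        r, ok := c.rows[uuid]
        return r, ok
    }
    return bridge{}, false
}

func main() {
    c := &rowCache{
        rows:   map[string]bridge{"uuid-1": {UUID: "uuid-1", Name: "br-int"}},
        byName: map[string]string{"br-int": "uuid-1"},
    }
    if r, ok := c.rowByModel(bridge{Name: "br-int"}); ok {
        fmt.Println("found row", r.UUID)
    }
}

RowByModel in this PR applies the same principle: it checks the model's UUID first and then the indexes declared in the table's schema.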

@coveralls

coveralls commented Jun 21, 2021

Pull Request Test Coverage Report for Build 990617578

  • 20 of 25 (80.0%) changed or added relevant lines in 2 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage increased (+0.1%) to 76.823%

Changes missing coverage:
  File            Covered Lines   Changed/Added Lines   %
  cache/cache.go  15              20                    75.0%

Totals (Coverage Status):
  Change from base Build 990592167: 0.1%
  Covered Lines: 3202
  Relevant Lines: 4168

💛 - Coveralls

Review thread on cache/cache.go (outdated):

func (r *RowCache) rowByIndex(m model.Model) model.Model {
    mpr, _ := mapper.NewInfo(&r.schema, m)
    for index := range r.indexes {
Collaborator

In order to really verify that the data contained in the model is valid and non-default, consider using mapperInfo.getValidIndexes() (which, btw, includes the "_uuid").

Collaborator Author

That would require exporting that function. The loop in getValidIndexes looks complicated/expensive to run every time we Get from the cache - especially as we already know the indexes for the table are valid, because we got them from the schema directly. I've pushed an update that is ~functionally equivalent - RowByModel can cope with the UUID or other indexes, but the implementation is a bit different.
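
To make that design choice concrete - matching against the indexes declared for the table in the schema, rather than computing valid indexes from the model on every Get - here is a small illustrative sketch. The indexKey helper and the two-column index are hypothetical and not part of the libovsdb API; the point is only that a schema index (a list of column names) can be turned into a single composite map key.

package main

import (
    "fmt"
    "strings"
)

// indexKey builds a composite cache key from a schema-declared index
// (a list of column names) and a row's column values.
func indexKey(columns []string, row map[string]string) string {
    parts := make([]string, 0, len(columns))
    for _, c := range columns {
        parts = append(parts, row[c])
    }
    return strings.Join(parts, "|")
}

func main() {
    index := []string{"name", "datapath_type"} // a hypothetical two-column index
    row := map[string]string{"name": "br-int", "datapath_type": "system"}
    fmt.Println(indexKey(index, row)) // prints "br-int|system"
}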

This adds a new API to the cache - RowByModel - which will look up a
row by the UUID of the model, or by matching the model against the
table indexes. This is much faster as it doesn't need to iterate
across the entire cache.

The Get API has been updated to use this new function.

Fixes: ovn-org#150

Signed-off-by: Dave Tucker <dave@dtucker.co.uk>
@dave-tucker dave-tucker merged commit 7322501 into ovn-org:main Jul 1, 2021
dcbw added a commit to dcbw/libovsdb that referenced this pull request Dec 16, 2021
RWLocks are not supposed to be used recursively; if something
else Locks the row cache in between, we deadlock.

In this example, goroutine 59 RLocks the RowCache, and then
goroutine 174 attempts to Lock the cache but blocks on
goroutine 59's RLock. Since there is a writer waiting to acquire
the lock, any further RLocks are effectively Locks, and thus when
goroutine 59 attempts to acquire the second RLock in Row()
it deadlocks against goroutine 174.

goroutine 59 [semacquire, 12 minutes]:
sync.runtime_SemacquireMutex(0xc000bf2e2c, 0xc0025a3800, 0x0)
	/usr/lib/golang/src/runtime/sema.go:71 +0x47
sync.(*RWMutex).RLock(...)
	/usr/lib/golang/src/sync/rwmutex.go:63
github.com/ovn-org/libovsdb/cache.(*RowCache).Row(0xc000bf2d80, 0xc00342de60, 0x24, 0x0, 0x0)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/github.com/ovn-org/libovsdb/cache/cache.go:101 +0x139
github.com/ovn-org/libovsdb/cache.(*RowCache).RowByModel(0xc000bf2d80, 0x1a8e8e0, 0xc0046c0000, 0x0, 0x0)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/github.com/ovn-org/libovsdb/cache/cache.go:130 +0x417
github.com/ovn-org/libovsdb/client.api.Get(0xc000bf3680, 0x0, 0x0, 0xc000183f98, 0x21135a0, 0xc003394600, 0x1a8e8e0, 0xc0046c0000, 0xc00416e300, 0x0)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/github.com/ovn-org/libovsdb/client/api.go:221 +0xd0
github.com/ovn-org/libovsdb/client.(*ovsdbClient).Get(0xc0001c48f0, 0x21135a0, 0xc003394600, 0x1a8e8e0, 0xc0046c0000, 0x0, 0x0)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/github.com/ovn-org/libovsdb/client/client.go:1189 +0x139
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/ovn.ovnNBLSPDel(0x2138198, 0xc00002d640, 0xc0038731b8, 0x17, 0xc002501b60, 0x29, 0x0, 0x0)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/ovn/pods.go:649 +0x265
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/ovn.(*Controller).deleteLogicalPort(0xc00000e1e0, 0xc002f40400)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/ovn/pods.go:145 +0x46a

goroutine 174 [semacquire, 12 minutes]:
sync.runtime_SemacquireMutex(0xc000bf2e28, 0x4f1600, 0x0)
	/usr/lib/golang/src/runtime/sema.go:71 +0x47
sync.(*RWMutex).Lock(0xc000bf2e20)
	/usr/lib/golang/src/sync/rwmutex.go:116 +0x85
github.com/ovn-org/libovsdb/cache.(*RowCache).Update(0xc000bf2d80, 0xc002280570, 0x24, 0x1a8e8e0, 0xc0046c0600, 0xc003319200, 0x0, 0x0)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/github.com/ovn-org/libovsdb/cache/cache.go:176 +0x6f
github.com/ovn-org/libovsdb/cache.(*TableCache).Populate2(0xc000bf3680, 0xc003766210, 0x0, 0x0)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/github.com/ovn-org/libovsdb/cache/cache.go:659 +0x7d6
github.com/ovn-org/libovsdb/cache.(*TableCache).Update2(0xc000bf3680, 0x1c71a40, 0xc003331980, 0xc003766210, 0x2e, 0x0)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/github.com/ovn-org/libovsdb/cache/cache.go:536 +0x6b
github.com/ovn-org/libovsdb/client.(*ovsdbClient).update3(0xc0001c48f0, 0xc00451a660, 0x3, 0x4, 0xc003e32738, 0x7f225dc865b8, 0x41b69b)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/github.com/ovn-org/libovsdb/client/client.go:637 +0x55a
github.com/ovn-org/libovsdb/client.(*ovsdbClient).createRPC2Client.func4(0xc000289500, 0xc00451a660, 0x3, 0x4, 0xc003e32738, 0x0, 0x0)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/github.com/ovn-org/libovsdb/client/client.go:395 +0x52

Signed-off-by: Dan Williams <dcbw@redhat.com>
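
The locking pattern described in that commit message can be reproduced outside libovsdb in a few lines of Go. The following is a minimal sketch of the general sync.RWMutex pitfall, not libovsdb code: one goroutine holds a read lock, a second goroutine queues a write lock, and the first goroutine then attempts a second, recursive read lock.

package main

import (
    "sync"
    "time"
)

func main() {
    var mu sync.RWMutex

    mu.RLock() // reader holds the lock (like Row() holding the RowCache RLock)

    go func() {
        mu.Lock() // writer (like RowCache.Update) queues behind the reader
        defer mu.Unlock()
    }()

    time.Sleep(100 * time.Millisecond) // give the writer time to start waiting

    // Recursive read lock: because a writer is queued, this RLock blocks,
    // the writer keeps waiting for the first RLock to be released, and
    // neither goroutine can make progress.
    mu.RLock()
    mu.RUnlock()
    mu.RUnlock()
}

Run as-is, both goroutines end up blocked on lock acquisition and the Go runtime aborts with "fatal error: all goroutines are asleep - deadlock!", which is the same shape of hang shown in the stack traces above.
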
Development

Successfully merging this pull request may close these issues.

Optimise index checking in API.Get()