
Track Repository Gen. in BlobStoreRepository #48944

Merged

Conversation

original-brownbear
Member

@original-brownbear commented Nov 11, 2019

This is intended as a stop-gap solution/improvement to #38941 that prevents repo modifications without an intervening master failover from causing inconsistent (outdated due to an inconsistent listing of index-N blobs) RepositoryData to be written.

Tracking the latest repository generation will move to the cluster state in a separate pull request. This is intended as a low-risk change to be backported as far as possible and is motivated by the recently increased chance of #38941 causing trouble via SLM (see #47520).

Closes #47834
Closes #49048

@elasticmachine
Collaborator

Pinging @elastic/es-distributed (:Distributed/Snapshot/Restore)

@original-brownbear
Member Author

Jenkins run elasticsearch-ci/packaging-sample-matrix (seems to hang on uploading build result)

@original-brownbear
Member Author

I adjusted this PR to gracefully/automatically handle concurrent repository modifications as discussed earlier today. See c540d39 (in particular, the revert of the test changes I had initially added here to make the change work with tests that clear out repos; those changes are now unnecessary).

This also automatically resolves #47834, since gracefully retrying after an external delete of the index-N blob is functionally equivalent to handling a concurrent repository modification.
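To illustrate the shape of that retry (a rough, self-contained sketch; the names, the exception type and the bounded-retry loop are made up for illustration and are not the code in this PR):

import java.util.function.LongFunction;
import java.util.function.LongSupplier;

// Sketch: instead of failing a delete/cleanup outright when the repository generation
// changes underneath us, re-read the current generation and retry a bounded number of times.
final class RetryOnConcurrentModification {

    static <T> T run(LongSupplier loadCurrentGeneration, LongFunction<T> operation, int maxRetries) {
        IllegalStateException lastFailure = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            final long expectedGen = loadCurrentGeneration.getAsLong(); // e.g. re-resolve the current index-N
            try {
                return operation.apply(expectedGen); // e.g. run the delete against this generation
            } catch (IllegalStateException e) {      // stand-in for "generation changed concurrently"
                lastFailure = e;                     // reload the generation and try again
            }
        }
        throw lastFailure;
    }
}

The actual change wires this kind of reload-and-retry into the delete/cleanup paths in BlobStoreRepository; the sketch only shows the general shape.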

@original-brownbear
Member Author

As discussed with Yannick on another channel, adding a test for eventually consistent listing here as well. Will re-request reviews once that's in.

// It's always a possibility to not see the latest index-N in the listing here on an eventually consistent blob store, just
// debug log it. Any blobs leaked as a result of an inconsistent listing here will be cleaned up in a subsequent cleanup or
// snapshot delete run anyway.
logger.debug("Determined repository's generation from its contents to [" + generation + "] but " +
Member Author


This may be a little controversial:

By tracking the latest generation in the field, we can now identify out-of-sync listings that we would previously have missed and that would only have failed in a subsequent step where the repo generation is compared. With this change, if we fail to list the latest index-N, we can still complete a delete or cleanup just fine (assuming the value in latestKnownRepoGen is correct).

I think it's a better user experience to not do a perfect cleanup in this edge case but to proceed with the delete/cleanup as if nothing happened. On an eventually consistent repo, the fact that we list the correct index-N does not guarantee that we didn't miss any other root blobs in the listing anyway.
Also, apart from maybe missing some stale blobs, the delete will work out perfectly fine otherwise.
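To make the mechanism concrete, a minimal self-contained sketch of the tracking idea (class and method names are made up for illustration; in the PR this is the latestKnownRepoGen field in BlobStoreRepository, as the quoted hunks show):

import java.util.concurrent.atomic.AtomicLong;

// Sketch: remember the highest repository generation ever observed and prefer it over
// whatever a possibly stale blob listing returns.
class RepoGenTracker {

    private final AtomicLong latestKnownGen = new AtomicLong(-1L); // -1 = nothing known yet

    // Called after successfully writing index-N, so later reads know that N exists.
    void onGenerationWritten(long gen) {
        latestKnownGen.accumulateAndGet(gen, Math::max);
    }

    // Resolve which generation to load: never go below a generation we already know exists,
    // even if an eventually consistent listing failed to return the latest index-N blob.
    long generationToLoad(long generationFromListing) {
        return latestKnownGen.updateAndGet(known -> Math.max(known, generationFromListing));
    }
}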


// Randomly filter out the latest /index-N blob from a listing to test that tracking of it in latestKnownRepoGen
// overrides an inconsistent listing
private Map<String, BlobMetaData> maybeMissLatestIndexN(Map<String, BlobMetaData> listing) {
Member Author


I am aware that this does not cover all possible inconsistent-listing scenarios, only the scenario of missing an index-N that is already known (tracked in the latestKnownRepoGen field), but correctly handling that scenario is the only thing fixed here for now. In my opinion it's also the most likely scenario in practice (an inconsistent listing after back-to-back operations without a master failover).
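For illustration, a self-contained sketch of what such a filter could look like (the generic map values, the randomness source and the index-N name parsing are assumptions made for this sketch; the real helper operates on BlobMetaData as in the quoted hunk):

import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

final class ListingFilters {

    // Randomly drop the highest index-N entry from a listing to simulate a stale,
    // eventually consistent view of the repository's root blobs.
    static <T> Map<String, T> maybeMissLatestIndexN(Map<String, T> listing) {
        if (ThreadLocalRandom.current().nextBoolean()) {
            return listing; // half the time, hand back the listing unchanged
        }
        return listing.keySet().stream()
            .filter(name -> name.startsWith("index-"))
            .max(Comparator.comparingLong((String name) -> Long.parseLong(name.substring("index-".length()))))
            .map(latest -> {
                final Map<String, T> filtered = new HashMap<>(listing);
                filtered.remove(latest); // pretend the newest index-N blob is not visible yet
                return filtered;
            })
            .orElse(listing);
    }
}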

Member


Looks sufficient to me

@original-brownbear
Member Author

This should be good for review now :)

Contributor

@ywelsch left a comment


I've left some comments

}
final long genToLoad = latestKnownRepoGen.updateAndGet(known -> Math.max(known, generation));
if (genToLoad != generation) {
logger.warn("Determined repository generation [" + generation
Contributor


should this be warn level? In safeRepositoryData you've just logged this as debug.

Also, this warning is confusing to a user. Perhaps we could talk about eventually consistent repositories here.

Member Author


You're right. Let's just make this debug. I wouldn't necessarily start talking about eventual consistency here; it's not the only thing that might lead to this warning, since concurrent modifications of the repo will have the same result.

Contributor


In hindsight, I wonder if we should log this at info level, just so that we get some stats on how often this logic saves the day on Cloud

Member Author

@original-brownbear Nov 14, 2019


Right now I'd assume/hope the answer here is "never" :D (with standard snapshotting ... other functionality/manual action/... may trigger this obviously) but yea. Let's do info and verify :)

Member Author

@original-brownbear left a comment


Thanks @ywelsch all addressed I think :)

@@ -920,6 +963,12 @@ private RepositoryData getRepositoryData(long indexGen) {
return RepositoryData.snapshotsFromXContent(parser, indexGen);
}
} catch (IOException ioe) {
// If we fail to load the generation we tracked in latestKnownRepoGen we reset it.
Member


I'm wondering if resetting is the right thing to do here. If the content of the repo has been deleted (or the bucket/folder moved, permissions changed, etc.), maybe we should keep the last generation seen around and let the user sort out the issue and re-register the repository?

Member Author


We talked about that yesterday and I figured we decided not to do that (yet). I'm of the same opinion, but it would be quite a change in behavior for something we just want to land as a short-term fix.
Maybe we should move to that kind of stricter approach in 7.x once we start tracking the repo generation in the CS permanently, but not do any big experiments for now? :)
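As a minimal self-contained sketch of the reset behavior under discussion (names and exception handling are illustrative only; per the quoted hunk the real logic lives in BlobStoreRepository#getRepositoryData):

import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.concurrent.atomic.AtomicLong;

// Sketch: if the generation we tracked cannot be loaded (blob deleted externally, bucket
// moved, permissions changed, ...), forget the tracked value so that a later attempt falls
// back to determining the generation from a fresh listing instead of failing forever.
class TrackedGenerationLoader {

    interface IndexBlobReader {
        byte[] read(long generation) throws IOException;
    }

    private static final long NO_KNOWN_GENERATION = -1L;
    private final AtomicLong latestKnownGen = new AtomicLong(NO_KNOWN_GENERATION);

    byte[] load(long indexGen, IndexBlobReader reader) {
        try {
            return reader.read(indexGen);
        } catch (IOException ioe) {
            // Only reset if nobody has bumped the tracked generation in the meantime.
            latestKnownGen.compareAndSet(indexGen, NO_KNOWN_GENERATION);
            throw new UncheckedIOException("failed to read index-" + indexGen, ioe);
        }
    }
}

Whether to reset like this or to keep the stale tracked generation around and force the user to re-register the repository is exactly the trade-off discussed above; the stop-gap goes with the more lenient reset.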

Member


Rah, I'd already forgotten about this discussion, sorry. But I'm good with the plan.

Contributor

@ywelsch left a comment


LGTM (left one comment about logging)

@original-brownbear
Member Author

Jenkins run elasticsearch-ci/2 (random X-pack failure)

@original-brownbear
Member Author

Thanks Yannick & Tanguy!

@original-brownbear merged commit 37c58ca into elastic:master Nov 14, 2019
@original-brownbear deleted the stopgap-repo-gen-solution branch November 14, 2019 21:30
original-brownbear added a commit to original-brownbear/elasticsearch that referenced this pull request Nov 14, 2019
original-brownbear added a commit to original-brownbear/elasticsearch that referenced this pull request Nov 14, 2019
original-brownbear added a commit that referenced this pull request Nov 15, 2019
original-brownbear added a commit that referenced this pull request Nov 15, 2019