Memcache Support #2096

Merged
ilixiaocui merged 1 commit into opencurve:master from add_global_cache on Dec 8, 2022

Conversation

ilixiaocui
Contributor

What problem does this PR solve?

Issue Number: #xxx

Problem Summary:

What is changed and how it works?

What's Changed:

How it Works:

Side effects (Breaking backward compatibility? Performance regression?):

Check List

  • Relevant documentation/comments is changed or added
  • I acknowledge that all my contributions will be made under the project's license

@ilixiaocui ilixiaocui force-pushed the add_global_cache branch 15 times, most recently from e5a62fe to 81efaa0 Compare December 1, 2022 12:24
@ilixiaocui ilixiaocui force-pushed the add_global_cache branch 11 times, most recently from 676140c to 359e7ca Compare December 6, 2022 06:29
@ilixiaocui
Contributor Author

recheck

@ilixiaocui ilixiaocui force-pushed the add_global_cache branch 2 times, most recently from e7512e8 to dc4fb31 Compare December 6, 2022 12:27
@@ -59,7 +60,7 @@ list:
@bash util/build.sh --stor=$(stor) --list

build:
@bash util/build.sh --stor=$(stor) --only=$(only) --dep=$(dep) --release=$(release) --os=$(os)
@bash util/build.sh --stor=${stor} --only=$(only) --dep=$(dep) --release=$(release) --ci=$(ci) --os=$(os)
Contributor

What's the 'ci' option used for?

Contributor Author

[image]

if (!mdsClient_->AllocOrGetMemcacheCluster(fsInfo_->fsid(),
&kvcachecluster)) {
LOG(ERROR) << "FLAGS_supportKVcache = " << FLAGS_supportKVcache
<< ", but AllocOrGetMemcacheCluster fail";
Contributor

more space before "fail"

return false;
}

// std::string ip[3] = {"10.182.2.46", "10.182.2.47", "10.182.2.48"};
Contributor

Delete unneeded code

Contributor Author

done

}

bool KVClientManager::Get(std::shared_ptr<GetKvCacheContext> task) {
LatencyGuard guard(&g_kvClientMetric->kvClientGet.latency);
Contributor

This 'Get' function can reuse the 'Get' function on line 84.

assert(nullptr != task->value);
return Get(task->key, task->value, task->offset, task->length);

Contributor Author

done
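
As a rough illustration of the refactor (not the exact curvefs code; the surrounding types are simplified stand-ins based on the diff above), the task-based Get() can simply delegate to the key/offset/length overload:

// Illustrative sketch only -- simplified stand-ins for the curvefs types,
// showing the task-based Get() delegating to the lower-level Get() overload.
#include <cassert>
#include <cstdint>
#include <memory>
#include <string>

struct GetKvCacheContext {              // assumed shape, based on the diff
    std::string key;
    char *value = nullptr;
    uint64_t offset = 0;
    uint64_t length = 0;
};

class KVClientManager {
 public:
    // Low-level overload (the "line 84" Get referenced by the reviewer).
    bool Get(const std::string &key, char *value,
             uint64_t offset, uint64_t length) {
        // ... talk to memcached here ...
        (void)key; (void)value; (void)offset; (void)length;
        return true;
    }

    // Task-based overload simply reuses the low-level one.
    bool Get(std::shared_ptr<GetKvCacheContext> task) {
        assert(nullptr != task->value);
        return Get(task->key, task->value, task->offset, task->length);
    }
};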


bool MdsClientImpl::AllocOrGetMemcacheCluster(
uint32_t fsId, curvefs::mds::topology::MemcacheCluster *cluster) {
return false;
Contributor

Why use a fake one?

Contributor Author

it will be implemented by #2108

GenerateKVReuqest(inodeWrapper, memCacheMissRequest, dataBuf,
&kvRequest);

// read from s3
Contributor

Read from diskcache or s3?
Is the read order kvcache -> diskcache -> s3?

Contributor Author

A failure on the local cache or the remote KV cluster does not return an error code.
Only a failure to read from s3 eventually returns an error.
done.

// meaningful error code
// read from s3 not exist
// 1. may be the metaserver compaction update inode is not
// synchronized to the client. clear inodecache && get agin
Contributor

get again

Contributor Author

Yes. ReadKVRequest is already handled in a while loop.
When s3 returns an error, the retry behavior is: if the error is "not exist", the data chunk may not have been uploaded to s3 yet, and the read waits and retries according to HandleReadS3NotExist. Other errors mean the read from s3 failed and will not be retried.
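
A minimal sketch of that fallback/retry flow (purely illustrative; the enum values and helper names below are assumptions, not the real curvefs API): a read first tries the KV cache, then falls back to disk cache/s3, and only a "not exist" error from s3 triggers a bounded wait-and-retry.

// Illustrative only: simplified fallback/retry logic, not the real ReadKVRequest.
#include <chrono>
#include <string>
#include <thread>

enum class ReadResult { kOk, kNotExist, kError };   // assumed, for illustration

// Hypothetical helpers standing in for the kvcache / diskcache+s3 read paths.
ReadResult ReadFromKVCache(const std::string & /*key*/) { return ReadResult::kNotExist; }
ReadResult ReadFromDiskCacheOrS3(const std::string & /*key*/) { return ReadResult::kOk; }

bool ReadChunk(const std::string &key, int maxRetry = 3) {
    // 1. KV cache (local cache / remote memcached cluster): a miss or failure
    //    here is not an error, just fall through to s3.
    if (ReadFromKVCache(key) == ReadResult::kOk) {
        return true;
    }

    // 2. disk cache / s3, retried while the object may not be on s3 yet.
    for (int i = 0; i < maxRetry; ++i) {
        ReadResult res = ReadFromDiskCacheOrS3(key);
        if (res == ReadResult::kOk) {
            return true;
        }
        if (res == ReadResult::kNotExist) {
            // Data chunk may not have been uploaded yet (cf. HandleReadS3NotExist):
            // wait and retry.
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            continue;
        }
        return false;   // other s3 errors are not retried
    }
    return false;
}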

copy_file ./thirdparties/aws/aws-sdk-cpp/build/aws-cpp-sdk-s3-crt/libaws-cpp-sdk-s3-crt.so $docker_prefix
copy_file ./thirdparties/memcache/libmemcached-1.1.2/build-libmemcached/src/libmemcached/libmemcached.so $docker_prefix
copy_file ./thirdparties/memcache/libmemcached-1.1.2/build-libmemcached/src/libmemcached/libmemcached.so.11 $docker_prefix
Contributor

lrwxrwxrwx 1 xxxx 18 Dec 7 14:05 libmemcached.so -> libmemcached.so.11
lrwxrwxrwx 1 xxxx 22 Dec 7 14:05 libmemcached.so.11 -> libmemcached.so.11.0.0
-rwxr-xr-x 1 xxxx 225728 Dec 7 14:05 libmemcached.so.11.0.0

Do we also need to copy libmemcached.so.11.0.0?

Contributor Author

According to ldd, the binary depends directly on .so.11, and testing in the actual build environment shows that only .so.11 is needed.

return false;
}

// std::string ip[3] = {"10.182.2.46", "10.182.2.47", "10.182.2.48"};
Contributor

delete unused code

Contributor Author

done

}

// init kvcacheclient manager
// KVClientManagerOpt config;
Contributor

delete unused code

Contributor Author

done


bool MdsClientImpl::AllocOrGetMemcacheCluster(
uint32_t fsId, curvefs::mds::topology::MemcacheCluster *cluster) {
return false;
Contributor

Return false directly?

Contributor Author

it will be implemented by #2108

#define ONRETURN(TYPE, RES, KEY, ERRORLOG) \
if (RES) { \
g_kvClientMetric->kvClient##TYPE.qps.count << 1; \
VLOG(9) << "Set key = " << KEY << " OK"; \
Contributor

set key -> type key

Contributor Author

done

VLOG(9) << "Set key = " << KEY << " OK"; \
} else { \
g_kvClientMetric->kvClient##TYPE.eps.count << 1; \
LOG(ERROR) << "Set key = " << KEY << " error = " << ERRORLOG; \
Contributor

set key -> type key

Contributor Author

done
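
For what the "set key -> type key" rename means in practice, here is a compilable toy version of the macro (the metric struct and logging below are simplified stand-ins, not the real curvefs/bvar code) that stringifies the TYPE token into the log text instead of hard-coding "Set":

// Illustrative only: a simplified stand-in for ONRETURN showing how the TYPE
// token is stringified into the log message ("Set key" -> "<Type> key").
#include <iostream>
#include <string>

struct FakeCounter { void Add(int) {} };
struct FakeInterface { FakeCounter qps, eps; };
struct FakeKvClientMetric { FakeInterface kvClientSet, kvClientGet; };

static FakeKvClientMetric g_fakeMetric;

#define ONRETURN(TYPE, RES, KEY, ERRORLOG)                                   \
    do {                                                                     \
        if (RES) {                                                           \
            g_fakeMetric.kvClient##TYPE.qps.Add(1);                          \
            std::cout << #TYPE " key = " << (KEY) << " OK" << std::endl;     \
        } else {                                                             \
            g_fakeMetric.kvClient##TYPE.eps.Add(1);                          \
            std::cerr << #TYPE " key = " << (KEY)                            \
                      << " error = " << (ERRORLOG) << std::endl;             \
        }                                                                    \
    } while (0)

int main() {
    std::string key = "chunk_1_0_0";
    ONRETURN(Set, true, key, "");
    ONRETURN(Get, false, key, "memcached connection refused");
    return 0;
}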

char *databuf, uint64_t offset,
uint64_t length, int *ret) {
uint64_t start = butil::cpuwide_time_us();
*ret = s3ClientAdaptor_->GetS3Client()->Download(name, databuf, offset,
Contributor

There is only synchronous download logic here; have you considered async download?

Contributor Author

Yes, it's a known remaining issue that will be fixed by #2080.

@@ -80,6 +80,11 @@ struct SpaceAllocServerOption {
uint64_t rpcTimeoutMs;
};

struct KVClientManagerOpt {
int setThreadPooln = 4;
int getThreadPooln = 4;
Contributor

Where is the getThreadPooln parameter used?

Contributor Author

It is a reserved parameter. With libmemcached, the current overall read path is synchronous; this is also waiting for PR #2080.
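
A minimal sketch of how the option could be consumed (illustrative only: the generic queue/worker code below is an assumption, not curvefs's actual thread-pool implementation): setThreadPooln sizes a worker pool for async Set, while getThreadPooln stays reserved until async reads (#2080) land.

// Illustrative sketch only, not the real curvefs code.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct KVClientManagerOpt {
    int setThreadPooln = 4;   // workers for async Set
    int getThreadPooln = 4;   // reserved: reads are synchronous for now
};

class SimpleThreadPool {
 public:
    explicit SimpleThreadPool(int n) {
        for (int i = 0; i < n; ++i) {
            workers_.emplace_back([this] { Run(); });
        }
    }
    ~SimpleThreadPool() {
        {
            std::lock_guard<std::mutex> lk(mtx_);
            stop_ = true;
        }
        cv_.notify_all();
        for (auto &t : workers_) t.join();
    }
    void Enqueue(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lk(mtx_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();
    }

 private:
    void Run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lk(mtx_);
                cv_.wait(lk, [this] { return stop_ || !tasks_.empty(); });
                if (stop_ && tasks_.empty()) return;
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();
        }
    }
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex mtx_;
    std::condition_variable cv_;
    bool stop_ = false;
};

int main() {
    KVClientManagerOpt opt;
    SimpleThreadPool setPool(opt.setThreadPooln);   // used by async Set
    // opt.getThreadPooln is not consumed yet; Get stays synchronous until #2080.
    setPool.Enqueue([] { /* push a chunk to memcached */ });
    return 0;
}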

@cw123 cw123 mentioned this pull request Dec 7, 2022
2 tasks
@ilixiaocui ilixiaocui marked this pull request as ready for review December 7, 2022 12:03
@ilixiaocui ilixiaocui force-pushed the add_global_cache branch 3 times, most recently from 2bdaf12 to f5a0193 Compare December 8, 2022 03:08
Signed-off-by: ilixiaocui <ilixiaocui@163.com>
@cw123
Contributor

cw123 commented Dec 8, 2022

Should this PR wait for #2108 and #2080?

@ilixiaocui ilixiaocui merged commit 2c851ac into opencurve:master Dec 8, 2022
@ilixiaocui
Contributor Author

Should this PR wait for #2108 and #2080?

After #2108 is merged, we can test it.

#2080 and #2100 involve performance issues; we can wait for them.
