
Periodically flush record files instead of flushing them on std::endl #1241

Open

mint570 wants to merge 1 commit into master
Conversation

@mint570 (Contributor) commented May 17, 2023

This PR is coupled with sonic-net/sonic-swss#2782.
This change is made for performance reasons: in our testing, it provides an 8% end-to-end performance improvement.

@linux-foundation-easycla bot commented May 17, 2023

CLA Signed: the committers listed above are authorized under a signed CLA.

@@ -177,7 +177,7 @@ void Recorder::recordLine(

     if (m_ofstream.is_open())
     {
-        m_ofstream << getTimestamp() << "|" << line << std::endl;
+        m_ofstream << getTimestamp() << "|" << line << '\n';
Collaborator

this is as bad as it can get, please revert

Contributor Author

Why is this bad? This is the main change of this PR. std::endl triggers a flush, so every line written triggers a flush, which impacts performance.
(The PR is not merged yet, so there is nothing to revert.)
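
For reference, `std::endl` writes `'\n'` and then flushes the stream, so every recorded line forces a write to the underlying file. A minimal illustration of the difference (the file name and line contents here are placeholders, not the recorder's actual format):

```cpp
#include <fstream>

int main()
{
    std::ofstream ofs("example.rec", std::ofstream::app);

    // std::endl inserts '\n' and then calls ofs.flush(), pushing the
    // buffered data out to the file on every line:
    ofs << "timestamp|example line 1" << std::endl;

    // '\n' only appends to the stream's internal buffer; data reaches the
    // file when the buffer fills, on an explicit flush(), or when the
    // stream is closed/destroyed:
    ofs << "timestamp|example line 2" << '\n';

    ofs.flush();  // explicit flush, same effect as the flush done by std::endl
    return 0;
}
```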

Collaborator

yes, we want to trigger flush

Contributor Author

Is there a reason we need to flush on every line? That impacts performance. This PR changes it to flush every second instead. The drawback is that if the program crashes, some record lines might be lost.
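
For illustration only, here is a minimal sketch of the flush-every-second idea; the actual implementation lives in the coupled sonic-net/sonic-swss#2782 change, and the class and member names below are hypothetical. A background thread flushes the record stream once per second, so the per-line flush can be dropped.

```cpp
#include <atomic>
#include <chrono>
#include <fstream>
#include <mutex>
#include <thread>

// Hypothetical sketch of a once-per-second background flusher; names are
// illustrative and not taken from this PR or sonic-swss.
class PeriodicFlusher
{
public:
    PeriodicFlusher(std::ofstream& ofs, std::mutex& mtx)
        : m_ofs(ofs), m_mtx(mtx), m_running(true),
          m_thread(&PeriodicFlusher::run, this)
    {
    }

    ~PeriodicFlusher()
    {
        m_running = false;
        m_thread.join();

        std::lock_guard<std::mutex> lock(m_mtx);
        m_ofs.flush();  // final flush so buffered lines survive a clean shutdown
    }

private:
    void run()
    {
        while (m_running)
        {
            std::this_thread::sleep_for(std::chrono::seconds(1));

            // Take the same lock the writer holds while appending a line,
            // so a flush never races with a partially written record.
            std::lock_guard<std::mutex> lock(m_mtx);
            m_ofs.flush();
        }
    }

    std::ofstream& m_ofs;
    std::mutex& m_mtx;
    std::atomic<bool> m_running;
    std::thread m_thread;
};
```

With this in place the writer appends lines with `'\n'` only; a crash can still lose up to the last second of records, which is exactly the trade-off discussed in this thread.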

Collaborator

yes, the reason is to keep the logs up to the last line in case of a process crash

PS. Please make performance measurements with and without this change to check the actual gain and present the numbers here. Normally, warm boot performance is not an issue: a very high number of commands come from route entry creations, and many of those operations are packed into bulk requests of about 1k routes per single command, so during a warm boot only about 2-3k log lines are logged. I don't think that is much of a performance impact, though I have never measured it. At some point, long ago, we achieved 16k routes per second using the bulk API, so our goal of 10k/sec was reached.

Contributor Author

It showed about an 8% end-to-end improvement in our testing. Since our hardware/test setup could be very different, this is only for reference.

We have tests that program 1k routes and nexthop groups (with a batch size of 100 and a nexthop group size of 5).
Before the change:
Insert: 536 ms
Modify: 339 ms
Delete: 329 ms

After the change (together with sonic-net/sonic-swss#2782):
Insert: 472 ms
Modify: 333 ms
Delete: 299 ms

The modify case did not show much improvement.

Also, the standard library will flush automatically when the buffer is full. In our case, the buffer is about 1 KB, which holds about 10 log lines.
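
For reference, the stream's buffer size can also be enlarged with `pubsetbuf()`, which has to be called before the file is opened for the call to take effect portably. A sketch assuming a hypothetical 64 KiB buffer (not part of this PR):

```cpp
#include <fstream>
#include <vector>

int main()
{
    // Hypothetical: give the record stream a 64 KiB buffer instead of the
    // roughly 1 KB observed above, so the automatic flush-on-full happens
    // far less often.
    std::vector<char> buffer(64 * 1024);

    std::ofstream ofs;
    // pubsetbuf() must be called before open() to reliably take effect.
    ofs.rdbuf()->pubsetbuf(buffer.data(), buffer.size());
    ofs.open("example.rec", std::ofstream::app);

    for (int i = 0; i < 1000; ++i)
    {
        ofs << "timestamp|example record line\n";
    }
    return 0;  // remaining buffered data is flushed when ofs is destroyed
}
```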

Collaborator

this is not a significant difference; I would propose to leave the code as is
