
BTreeSet difference, symmetric_difference & union optimized #65375

Closed
wants to merge 1 commit into from

Conversation

@ssomers (Contributor) commented Oct 13, 2019

Alternative to #65226: instead of peeking logic specialized to MergeIter, an additional Peeking intermediate. More lines of code, but more readable, I think. Though it opens a can of worms about Peekable.

It obtains the same gains for symmetric_difference and union, and also boosts difference a few percent (if the left set is not much smaller than the right set), e.g.

test difference_random_100::vs_100                     ... bench:         680 ns/iter (+/- 5)
test difference_random_100::vs_100_future              ... bench:         675 ns/iter (+/- 9)
test difference_random_10k::vs_10k                     ... bench:     166,274 ns/iter (+/- 2,515)
test difference_random_10k::vs_10k_future              ... bench:     162,993 ns/iter (+/- 1,757)
test difference_subsets::_10_vs_100                    ... bench:         213 ns/iter (+/- 1)
test difference_subsets::_10_vs_100_future             ... bench:         194 ns/iter (+/- 5)

In addition, some cleanup of earlier changes suggested by clippy, which I can push to #65226 too.
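For readers following along, the Peeking intermediate might look roughly like this. This is a sketch reconstructed from the diff fragments quoted later in this conversation (the head/tail field names and the Peeking { head, tail: iter } constructor come from the diff; the exact type definition and bounds are assumptions):

```rust
// Hypothetical sketch of the Peeking wrapper: the first item is pulled out
// of the underlying iterator eagerly into `head`, so peeking is just
// reading a field.
struct Peeking<I: Iterator> {
    head: Option<I::Item>,
    tail: I,
}

impl<I: Iterator> Peeking<I> {
    fn new(mut iter: I) -> Self {
        let head = iter.next();
        Peeking { head, tail: iter }
    }

    // Cheap peek: no branch on whether the head was fetched yet, unlike
    // std's Peekable, which lazily fills its peeked slot.
    fn peek(&self) -> Option<&I::Item> {
        self.head.as_ref()
    }
}

impl<I: Iterator> Iterator for Peeking<I> {
    type Item = I::Item;

    fn next(&mut self) -> Option<I::Item> {
        // Relies on the underlying iterator being fused: once exhausted,
        // `tail.next()` keeps returning None, so `head` stays None.
        let item = self.head.take();
        self.head = self.tail.next();
        item
    }
}
```

With such a wrapper, difference/union can compare the two peeked heads and advance only the smaller side on each step.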

r? @bluss

@rust-highfive rust-highfive added the S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. label Oct 13, 2019
        Peeking { head, tail: iter }
    }

    fn next(&mut self) -> Option<I::Item> {
        match self.head {
Member:

Is this match necessary for correctness? I'd say it's not, since all set iterators should be fused already. So it could be removed if that's better for performance, and you'd have just this:

    item = self.head;
    self.head = self.tail.next();
    item

Not sure if it's a win, at all.

Contributor Author:

For correctness, requiring FusedIterator (like I unintentionally left it in #65226) seems solid to me. But relying on that is bad for the performance I measure. That's also why the enum in #65226 is A(Option<I::Item>) and not A(I::Item).

I don't quite understand why. A philosophical argument might be: a FusedIterator indeed keeps returning None, but nobody said it has to do so efficiently. But as far as I can tell, the map's Keys iterator can't be any quicker: it just zero-compares the length, which is red hot in the cache after being updated every iteration.

Maybe it's because I'm developing and benching this in a separate crate, which means the map Keys code cannot be inlined, and I'm mostly measuring (cross-DLL?) function calls?

Member:

The Keys code can be inlined (it is generic, so it is instantiated in the crate where it is used). One might tweak down to codegen-units=1 to experiment whether that makes a difference.

If it doesn't help, we'll just not do that change :)

@ssomers (Contributor Author), Oct 14, 2019:

Oops, I remembered wrong. Your shortcut helps, and it helped in earlier versions too. Just don't use a mem::replace one-liner instead; that's way slower.
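To make that comparison concrete, here are the two formulations side by side as free functions over a head/tail pair like the one in the diff fragments above (function names are illustrative, not the PR's actual code):

```rust
use std::mem;

// Explicit form (the "shortcut" discussed above, reported faster in these
// benchmarks): take the head, then refill it from the tail.
fn advance_explicit<I: Iterator>(head: &mut Option<I::Item>, tail: &mut I) -> Option<I::Item> {
    let item = head.take();
    *head = tail.next();
    item
}

// mem::replace one-liner (semantically identical, reported way slower here).
fn advance_replace<I: Iterator>(head: &mut Option<I::Item>, tail: &mut I) -> Option<I::Item> {
    mem::replace(head, tail.next())
}
```

Both return the old head and leave the next element of the tail in its place; the reported difference is purely in how well the optimizer handles each form.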

    .field(&self.0.a.head)
    .field(&self.0.a.tail)
    .field(&self.0.b.head)
    .field(&self.0.b.tail)
@bluss (Member), Oct 13, 2019:

This will look messier, but I'm not sure if we have any reason to dress up the internals in any way. It should serve us best to just debug print what we have. That would be an argument for printing this as .field(&self.0) or .field(&self.0.a).field(&self.0.b) though, and giving Peeking a debug impl.

Member:

The remaining fmt::Debug for Iter uses the older alignment style here (aligning methods to the .). Best to keep the style consistent in the file (keep the style as it was).

Contributor Author:

I never understood that part; I just stole the debug implementation from some other liballoc struct. I guess part of the reason is to allow the #[stable] attribute.

I'm now finally able to produce examples of this output, like:

Intersection(Iter([1, 2, 3]), Iter([1, 2, 3]))

If I simply stick a #[derive(Debug)] onto IntersectionInner, and simplify Intersection::fmt all the way down to f.debug_tuple("Intersection").field(&self.inner).finish(), the output is much clearer:

Intersection(Stitch { a: Iter([1, 2, 3]), b: Iter([1, 2, 3]) })
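As a self-contained illustration of that simplification, with stand-in types (the real liballoc definitions differ; only the Stitch variant, the field names, and the debug_tuple call follow the conversation):

```rust
use std::fmt;

// Stand-ins for the liballoc types discussed; a Vec plays the role of the
// real B-tree iterator so the derived output matches the quoted example.
#[derive(Debug)]
struct Iter(Vec<i32>);

#[derive(Debug)]
enum IntersectionInner {
    Stitch { a: Iter, b: Iter },
}

struct Intersection {
    inner: IntersectionInner,
}

// The simplified impl from the comment: delegate everything inside the
// tuple to the derived Debug of the inner enum.
impl fmt::Debug for Intersection {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_tuple("Intersection").field(&self.inner).finish()
    }
}
```

Formatting a value built from two 1..=3 iterators with {:?} then prints the Intersection(Stitch { a: ..., b: ... }) shape quoted above.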

@bluss (Member) commented Oct 13, 2019

Nice benchmarks (posted in the other PR)! Looks like a neat improvement.

        }
    }
}

fn size_hint(&self) -> (usize, Option<usize>) {
-   (0, Some(self.a.len() + self.b.len()))
+   (0, Some(self.0.a.len() + self.0.b.len()))
Member:
Same as the union upper bound: this sum potentially overflows (and then it should be None).

Both can be OK, but in that case it would be best to add a comment noting that the maximum length of any set is limited.
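An overflow-safe version of that upper bound could use checked_add, which yields None exactly when the sum overflows, matching size_hint's convention for an unknown upper bound. A sketch (the function name and length parameters are illustrative; a_len/b_len stand in for self.0.a.len() and self.0.b.len()):

```rust
// Sketch: report None as the upper bound when a_len + b_len would overflow
// usize, instead of wrapping or panicking. The lower bound stays 0 because
// the two sets may be identical.
fn union_size_hint(a_len: usize, b_len: usize) -> (usize, Option<usize>) {
    (0, a_len.checked_add(b_len))
}
```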

@ssomers (Contributor Author) commented Oct 14, 2019

These are the readings for the updated PR:

test difference_random_100::vs_100                     ... bench:         729 ns/iter (+/- 14)
test difference_random_100::vs_100_future              ... bench:         686 ns/iter (+/- 28)
test difference_random_10k::vs_10k                     ... bench:     168,942 ns/iter (+/- 5,895)
test difference_random_10k::vs_10k_future              ... bench:     160,226 ns/iter (+/- 4,405)
test difference_subsets::_10_vs_100                    ... bench:         214 ns/iter (+/- 9)
test difference_subsets::_10_vs_100_future             ... bench:         187 ns/iter (+/- 4)
test symmdiff_neg_vs_pos::_100_neg_vs_100_pos          ... bench:         880 ns/iter (+/- 123)
test symmdiff_neg_vs_pos::_100_neg_vs_100_pos_future   ... bench:         890 ns/iter (+/- 32)
test symmdiff_neg_vs_pos::_100_neg_vs_10k_pos          ... bench:      37,187 ns/iter (+/- 980)
test symmdiff_neg_vs_pos::_100_neg_vs_10k_pos_future   ... bench:      36,721 ns/iter (+/- 1,002)
test symmdiff_neg_vs_pos::_100_pos_vs_100_neg          ... bench:         901 ns/iter (+/- 26)
test symmdiff_neg_vs_pos::_100_pos_vs_100_neg_future   ... bench:         856 ns/iter (+/- 35)
test symmdiff_neg_vs_pos::_100_pos_vs_10k_neg          ... bench:      44,593 ns/iter (+/- 965)
test symmdiff_neg_vs_pos::_100_pos_vs_10k_neg_future   ... bench:      43,231 ns/iter (+/- 1,275)
test symmdiff_neg_vs_pos::_10k_neg_vs_100_pos          ... bench:      49,806 ns/iter (+/- 2,459)
test symmdiff_neg_vs_pos::_10k_neg_vs_100_pos_future   ... bench:      45,261 ns/iter (+/- 3,033)
test symmdiff_neg_vs_pos::_10k_neg_vs_10k_pos          ... bench:      88,181 ns/iter (+/- 4,752)
test symmdiff_neg_vs_pos::_10k_neg_vs_10k_pos_future   ... bench:      84,279 ns/iter (+/- 3,806)
test symmdiff_neg_vs_pos::_10k_pos_vs_100_neg          ... bench:      43,414 ns/iter (+/- 2,347)
test symmdiff_neg_vs_pos::_10k_pos_vs_100_neg_future   ... bench:      39,981 ns/iter (+/- 2,312)
test symmdiff_neg_vs_pos::_10k_pos_vs_10k_neg          ... bench:      87,481 ns/iter (+/- 3,478)
test symmdiff_neg_vs_pos::_10k_pos_vs_10k_neg_future   ... bench:      82,290 ns/iter (+/- 2,926)
test symmdiff_random_100::vs_100                       ... bench:         972 ns/iter (+/- 37)
test symmdiff_random_100::vs_100_future                ... bench:         953 ns/iter (+/- 41)
test symmdiff_random_100::vs_10k                       ... bench:      56,538 ns/iter (+/- 2,140)
test symmdiff_random_100::vs_10k_future                ... bench:      59,465 ns/iter (+/- 1,742)
test symmdiff_random_100::vs_1600                      ... bench:       9,970 ns/iter (+/- 264)
test symmdiff_random_100::vs_1600_future               ... bench:       9,692 ns/iter (+/- 502)
test symmdiff_random_10k::vs_10k                       ... bench:     209,957 ns/iter (+/- 13,843)
test symmdiff_random_10k::vs_10k_future                ... bench:     205,140 ns/iter (+/- 9,867)
test symmdiff_subsets::_100_vs_10k                     ... bench:      41,016 ns/iter (+/- 1,358)
test symmdiff_subsets::_100_vs_10k_future              ... bench:      39,083 ns/iter (+/- 1,158)
test symmdiff_subsets::_10_vs_100                      ... bench:         427 ns/iter (+/- 25)
test symmdiff_subsets::_10_vs_100_future               ... bench:         433 ns/iter (+/- 22)
test union_neg_vs_pos::_100_neg_vs_100_pos             ... bench:         791 ns/iter (+/- 41)
test union_neg_vs_pos::_100_neg_vs_100_pos_future      ... bench:         902 ns/iter (+/- 14)
test union_neg_vs_pos::_100_neg_vs_10k_pos             ... bench:      36,296 ns/iter (+/- 3,422)
test union_neg_vs_pos::_100_neg_vs_10k_pos_future      ... bench:      42,735 ns/iter (+/- 2,068)
test union_neg_vs_pos::_100_pos_vs_100_neg             ... bench:         849 ns/iter (+/- 34)
test union_neg_vs_pos::_100_pos_vs_100_neg_future      ... bench:         915 ns/iter (+/- 43)
test union_neg_vs_pos::_100_pos_vs_10k_neg             ... bench:      41,275 ns/iter (+/- 5,920)
test union_neg_vs_pos::_100_pos_vs_10k_neg_future      ... bench:      42,866 ns/iter (+/- 1,007)
test union_neg_vs_pos::_10k_neg_vs_100_pos             ... bench:      43,137 ns/iter (+/- 1,527)
test union_neg_vs_pos::_10k_neg_vs_100_pos_future      ... bench:      40,187 ns/iter (+/- 1,012)
test union_neg_vs_pos::_10k_neg_vs_10k_pos             ... bench:      78,250 ns/iter (+/- 2,655)
test union_neg_vs_pos::_10k_neg_vs_10k_pos_future      ... bench:      83,164 ns/iter (+/- 2,745)
test union_neg_vs_pos::_10k_pos_vs_100_neg             ... bench:      44,690 ns/iter (+/- 1,496)
test union_neg_vs_pos::_10k_pos_vs_100_neg_future      ... bench:      43,874 ns/iter (+/- 1,177)
test union_neg_vs_pos::_10k_pos_vs_10k_neg             ... bench:      85,101 ns/iter (+/- 1,054)
test union_neg_vs_pos::_10k_pos_vs_10k_neg_future      ... bench:      85,770 ns/iter (+/- 2,906)
test union_random_100::vs_100                          ... bench:         942 ns/iter (+/- 39)
test union_random_100::vs_100_future                   ... bench:         936 ns/iter (+/- 45)
test union_random_100::vs_10k                          ... bench:      49,211 ns/iter (+/- 1,340)
test union_random_100::vs_10k_future                   ... bench:      50,612 ns/iter (+/- 1,617)
test union_random_100::vs_1600                         ... bench:       8,754 ns/iter (+/- 312)
test union_random_100::vs_1600_future                  ... bench:       8,708 ns/iter (+/- 227)
test union_random_10k::vs_10k                          ... bench:     199,802 ns/iter (+/- 5,530)
test union_random_10k::vs_10k_future                   ... bench:     191,034 ns/iter (+/- 3,066)
test union_subsets::_100_vs_10k                        ... bench:      36,560 ns/iter (+/- 941)
test union_subsets::_100_vs_10k_future                 ... bench:      40,408 ns/iter (+/- 1,325)
test union_subsets::_10_vs_100                         ... bench:         422 ns/iter (+/- 8)
test union_subsets::_10_vs_100_future                  ... bench:         449 ns/iter (+/- 23)

Tests that are theoretically (and in practice) unchanged are pruned from this list. You can see that halfway through, the future gets bleak. The future is bright, though, when I build & run only these same benches (using --features merge), as I've always been doing lately. Each run is quite reproducible: the more benches are built, the slower they all become, but those named *_future suffer much more. Running fewer benches by passing a name filter gives the same timings as running them all, so it's the build that makes the difference. It seems the more code is mixed in, the less the optimizer focuses, and the future code needs more of its attention.

Building only MergeIter-based tests, the story is:

test symmdiff_neg_vs_pos::_100_neg_vs_100_pos        ... bench:         815 ns/iter (+/- 4)
test symmdiff_neg_vs_pos::_100_neg_vs_100_pos_future ... bench:         440 ns/iter (+/- 13)
test symmdiff_neg_vs_pos::_100_neg_vs_10k_pos        ... bench:      34,895 ns/iter (+/- 513)
test symmdiff_neg_vs_pos::_100_neg_vs_10k_pos_future ... bench:      19,752 ns/iter (+/- 340)
test symmdiff_neg_vs_pos::_100_pos_vs_100_neg        ... bench:         787 ns/iter (+/- 28)
test symmdiff_neg_vs_pos::_100_pos_vs_100_neg_future ... bench:         458 ns/iter (+/- 10)
test symmdiff_neg_vs_pos::_100_pos_vs_10k_neg        ... bench:      40,745 ns/iter (+/- 1,139)
test symmdiff_neg_vs_pos::_100_pos_vs_10k_neg_future ... bench:      25,244 ns/iter (+/- 209)
test symmdiff_neg_vs_pos::_10k_neg_vs_100_pos        ... bench:      41,470 ns/iter (+/- 815)
test symmdiff_neg_vs_pos::_10k_neg_vs_100_pos_future ... bench:      24,493 ns/iter (+/- 659)
test symmdiff_neg_vs_pos::_10k_neg_vs_10k_pos        ... bench:      77,740 ns/iter (+/- 2,157)
test symmdiff_neg_vs_pos::_10k_neg_vs_10k_pos_future ... bench:      45,414 ns/iter (+/- 1,195)
test symmdiff_neg_vs_pos::_10k_pos_vs_100_neg        ... bench:      37,147 ns/iter (+/- 1,795)
test symmdiff_neg_vs_pos::_10k_pos_vs_100_neg_future ... bench:      19,770 ns/iter (+/- 562)
test symmdiff_neg_vs_pos::_10k_pos_vs_10k_neg        ... bench:      79,054 ns/iter (+/- 2,147)
test symmdiff_neg_vs_pos::_10k_pos_vs_10k_neg_future ... bench:      45,242 ns/iter (+/- 767)
test symmdiff_random_100::vs_100                     ... bench:         884 ns/iter (+/- 22)
test symmdiff_random_100::vs_100_future              ... bench:         463 ns/iter (+/- 15)
test symmdiff_random_100::vs_10k                     ... bench:      50,738 ns/iter (+/- 464)
test symmdiff_random_100::vs_10k_future              ... bench:      32,554 ns/iter (+/- 1,100)
test symmdiff_random_100::vs_1600                    ... bench:       8,841 ns/iter (+/- 232)
test symmdiff_random_100::vs_1600_future             ... bench:       4,164 ns/iter (+/- 140)
test symmdiff_random_10k::vs_10k                     ... bench:     196,240 ns/iter (+/- 4,330)
test symmdiff_random_10k::vs_10k_future              ... bench:     147,610 ns/iter (+/- 5,148)
test symmdiff_subsets::_100_vs_10k                   ... bench:      37,499 ns/iter (+/- 250)
test symmdiff_subsets::_100_vs_10k_future            ... bench:      21,950 ns/iter (+/- 755)
test symmdiff_subsets::_10_vs_100                    ... bench:         394 ns/iter (+/- 7)
test symmdiff_subsets::_10_vs_100_future             ... bench:         249 ns/iter (+/- 13)
test union_neg_vs_pos::_100_neg_vs_100_pos           ... bench:         715 ns/iter (+/- 25)
test union_neg_vs_pos::_100_neg_vs_100_pos_future    ... bench:         540 ns/iter (+/- 43)
test union_neg_vs_pos::_100_neg_vs_10k_pos           ... bench:      32,009 ns/iter (+/- 1,268)
test union_neg_vs_pos::_100_neg_vs_10k_pos_future    ... bench:      26,652 ns/iter (+/- 1,340)
test union_neg_vs_pos::_100_pos_vs_100_neg           ... bench:         778 ns/iter (+/- 31)
test union_neg_vs_pos::_100_pos_vs_100_neg_future    ... bench:         574 ns/iter (+/- 19)
test union_neg_vs_pos::_100_pos_vs_10k_neg           ... bench:      38,332 ns/iter (+/- 1,003)
test union_neg_vs_pos::_100_pos_vs_10k_neg_future    ... bench:      29,655 ns/iter (+/- 2,412)
test union_neg_vs_pos::_10k_neg_vs_100_pos           ... bench:      38,999 ns/iter (+/- 2,350)
test union_neg_vs_pos::_10k_neg_vs_100_pos_future    ... bench:      27,371 ns/iter (+/- 2,276)
test union_neg_vs_pos::_10k_neg_vs_10k_pos           ... bench:      71,462 ns/iter (+/- 3,014)
test union_neg_vs_pos::_10k_neg_vs_10k_pos_future    ... bench:      52,400 ns/iter (+/- 1,993)
test union_neg_vs_pos::_10k_pos_vs_100_neg           ... bench:      36,790 ns/iter (+/- 1,011)
test union_neg_vs_pos::_10k_pos_vs_100_neg_future    ... bench:      22,742 ns/iter (+/- 836)
test union_neg_vs_pos::_10k_pos_vs_10k_neg           ... bench:      76,380 ns/iter (+/- 3,300)
test union_neg_vs_pos::_10k_pos_vs_10k_neg_future    ... bench:      54,141 ns/iter (+/- 2,291)
test union_random_100::vs_100                        ... bench:         815 ns/iter (+/- 3)
test union_random_100::vs_100_future                 ... bench:         551 ns/iter (+/- 32)
test union_random_100::vs_10k                        ... bench:      47,455 ns/iter (+/- 1,681)
test union_random_100::vs_10k_future                 ... bench:      38,165 ns/iter (+/- 1,165)
test union_random_100::vs_1600                       ... bench:       8,403 ns/iter (+/- 247)
test union_random_100::vs_1600_future                ... bench:       6,313 ns/iter (+/- 779)
test union_random_10k::vs_10k                        ... bench:     191,842 ns/iter (+/- 3,554)
test union_random_10k::vs_10k_future                 ... bench:     159,478 ns/iter (+/- 2,855)
test union_subsets::_100_vs_10k                      ... bench:      34,860 ns/iter (+/- 967)
test union_subsets::_100_vs_10k_future               ... bench:      26,622 ns/iter (+/- 837)
test union_subsets::_10_vs_100                       ... bench:         393 ns/iter (+/- 5)
test union_subsets::_10_vs_100_future                ... bench:         292 ns/iter (+/- 10)

Anyway, looking into cargo-benchcmp now.

@ssomers (Contributor Author) commented Oct 14, 2019

Same story told in benchcmp's words:

cargo bench --features merge >bench.txt
cargo benchcmp union_actual union_future bench.txt

gives (hurray!)

 name                         union_actual ns/iter  union_future ns/iter  diff ns/iter  diff %  speedup
 ::parted_100_neg_vs_100_pos  698                   501                           -197  -28.22%   x 1.39
 ::parted_100_neg_vs_10k_pos  31,470                23,384                      -8,086  -25.69%   x 1.35
 ::parted_100_pos_vs_100_neg  775                   512                           -263  -33.94%   x 1.51
 ::parted_100_pos_vs_10k_neg  38,795                28,711                     -10,084  -25.99%   x 1.35
 ::parted_10k_neg_vs_100_pos  39,186                26,613                     -12,573  -32.09%   x 1.47
 ::parted_10k_neg_vs_10k_pos  71,591                51,126                     -20,465  -28.59%   x 1.40
 ::parted_10k_pos_vs_100_neg  36,520                22,280                     -14,240  -38.99%   x 1.64
 ::parted_10k_pos_vs_10k_neg  74,588                51,317                     -23,271  -31.20%   x 1.45
 ::random_100_vs_100          797                   588                           -209  -26.22%   x 1.36
 ::random_100_vs_10k          47,535                37,914                      -9,621  -20.24%   x 1.25
 ::random_100_vs_1600         8,427                 6,216                       -2,211  -26.24%   x 1.36
 ::random_10k_vs_10k          187,674               154,503                    -33,171  -17.67%   x 1.21
 ::random_10k_vs_160k         988,090               848,980                   -139,110  -14.08%   x 1.16
 ::subset_100_vs_10k          35,474                25,948                      -9,526  -26.85%   x 1.37
 ::subset_10_vs_100           401                   295                           -106  -26.43%   x 1.36

But instead:

cargo bench --features diff,intersect,merge,stagger >bench2.txt
cargo benchcmp union_actual union_future bench2.txt

gives (booh!)

 name                         union_actual ns/iter  union_future ns/iter  diff ns/iter  diff %  speedup
 ::parted_100_neg_vs_100_pos  837                   890                             53   6.33%   x 0.94
 ::parted_100_neg_vs_10k_pos  34,725                39,303                       4,578  13.18%   x 0.88
 ::parted_100_pos_vs_100_neg  829                   850                             21   2.53%   x 0.98
 ::parted_100_pos_vs_10k_neg  39,003                39,779                         776   1.99%   x 0.98
 ::parted_10k_neg_vs_100_pos  47,165                46,271                        -894  -1.90%   x 1.02
 ::parted_10k_neg_vs_10k_pos  81,220                86,729                       5,509   6.78%   x 0.94
 ::parted_10k_pos_vs_100_neg  44,245                42,507                      -1,738  -3.93%   x 1.04
 ::parted_10k_pos_vs_10k_neg  84,030                82,151                      -1,879  -2.24%   x 1.02
 ::random_100_vs_100          854                   857                              3   0.35%   x 1.00
 ::random_100_vs_10k          48,966                51,663                       2,697   5.51%   x 0.95
 ::random_100_vs_1600         8,483                 9,333                          850  10.02%   x 0.91
 ::random_10k_vs_10k          197,703               187,357                    -10,346  -5.23%   x 1.06
 ::random_10k_vs_160k         1,012,570             1,035,545                   22,975   2.27%   x 0.98
 ::subset_100_vs_10k          38,788                44,800                       6,012  15.50%   x 0.87
 ::subset_10_vs_100           428                   456                             28   6.54%   x 0.94
