I benchmarked a build of a large wiki (my home wiki), and it was spending
quite a lot of time sorting; `CORE::sort` was called only 1138 times, but
was still flagged as the #1 time sink. (I'm not sure I trust NYTProf fully
about that, FWIW, since it also said 27238263 calls to `cmp_age` were
the #3 time sink, and I suspect it may not entirely accurately measure
the overhead of so many short function calls.)
`pagespec_match_list` currently always sorts *all* pages first, and then
finds the top M that match the pagespec. That's inefficient when M is
small (as for example in a typical blog, where only 20 posts are shown,
out of maybe thousands).
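For a sense of the asymmetry: a full sort is O(N log N), while selecting the
top M is only O(N log M). A quick sketch in Python (the page list and `ctime`
field are made up for illustration, and `heapq` just stands in for whatever
partial-selection code would be used):

```python
import heapq
import random

random.seed(0)
# Hypothetical page metadata: a few thousand pages with creation times.
pages = [{"name": "post%d" % i, "ctime": random.random()} for i in range(5000)]

# What pagespec_match_list effectively does today: sort everything,
# then keep only the 20 newest.
newest_full = sorted(pages, key=lambda p: p["ctime"], reverse=True)[:20]

# Top-20 selection without sorting the other ~4980 pages.
newest_topm = heapq.nlargest(20, pages, key=lambda p: p["ctime"])

assert newest_full == newest_topm
```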
As [[smcv]] noted, it could be flipped, so the pagespec is applied first,
and then the smaller matching set is sorted. But checking pagespecs is likely
more expensive than sorting. (Also, influence calculation complicates this.)
Another option, when there is a limit of M pages to return, might be to
cull the top M pages without sorting the rest.
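That culling could look roughly like this (a Python sketch, not the actual
implementation; `matches` and `key` are hypothetical stand-ins for the
pagespec test and sort spec, with the pagespec test assumed to be the
expensive step):

```python
def top_matches(candidates, matches, key, m):
    """Return the m best matching candidates without sorting the rest."""
    best = []
    it = iter(candidates)
    # Phase 1: take the first m candidates that match.
    for page in it:
        if matches(page):
            best.append(page)
            if len(best) == m:
                break
    best.sort(key=key)
    # Phase 2: any remaining candidate can only displace the current
    # worst of the m best, so do the cheap comparison before the
    # expensive match test.
    for page in it:
        if key(page) < key(best[-1]) and matches(page):
            best.pop()             # drop the current worst
            best.append(page)
            best.sort(key=key)     # re-sort the small m-element window
    return best

# Usage: the 3 smallest even numbers, scanning the list only once.
top3 = top_matches([9, 4, 7, 2, 8, 6, 1, 3, 10], lambda x: x % 2 == 0, lambda x: x, 3)
# top3 == [2, 4, 6]
```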
> The patch below implements this.
> But, I have not thought enough about influence calculation.
> I need to figure out which pagespec matches influences need to be
> accumulated for in order to determine all possible influences of a
> `pagespec_match_list` call.
> The old code accumulates influences from matching all successful pages
> up to the num cutoff, as well as influences from an arbitrary (sometimes
> zero) number of failed matches. The new code does not accumulate influences
> from all the top successful matches, only from an arbitrary group of
> successes and some failures.
> Also, by the time I finished this, it was not measurably faster than
> the old method. At least not with a few thousand pages; it
> might be worth revisiting this sometime for many more pages? [[done]]
    diff --git a/IkiWiki.pm b/IkiWiki.pm
    index 1730e47..bc8b23d 100644
    @@ -2122,36 +2122,54 @@ sub pagespec_match_list ($$;@) {
     	delete @params{qw{num deptype reverse sort filter list}};
    -	# when only the top matches will be returned, it's efficient to
    -	# sort before matching to pagespec,
    -	if (defined $num && defined $sort) {
    -		@candidates=IkiWiki::SortSpec::sort_pages(
    -			$sort, @candidates);
    +	# Find the first num matches (or all), before sorting.
     	my $accum=IkiWiki::SuccessReason->new();
    -	foreach my $p (@candidates) {
    -		my $r=$sub->($p, %params, location => $page);
    +	for ($i=0; $i < @candidates; $i++) {
    +		my $r=$sub->($candidates[$i], %params, location => $page);
     		error(sprintf(gettext("cannot match pages: %s"), $r))
     			if $r->isa("IkiWiki::ErrorReason");
    +			push @matches, $candidates[$i];
     		last if defined $num && ++$count == $num;
    +	# We have num matches, but they may not be the best.
    +	# Efficiently find and add the rest, without sorting the full list of
    +	if (defined $num && defined $sort) {
    +		@matches=IkiWiki::SortSpec::sort_pages($sort, @matches);
    +		for ($i++; $i < @candidates; $i++) {
    +			# Comparing candidate with lowest match is cheaper,
    +			# so it's done before testing against pagespec.
    +			if (IkiWiki::SortSpec::cmptwo($candidates[$i], $matches[-1], $sort) < 0 &&
    +			    $sub->($candidates[$i], %params, location => $page)
    +				# this could be done less expensively
    +				# using a binary search
    +				for (my $j=0; $j < @matches; $j++) {
    +					if (IkiWiki::SortSpec::cmptwo($candidates[$i], $matches[$j], $sort) < 0) {
    +						splice @matches, $j, $#matches-$j+1, $candidates[$i],
    +							@matches[$j..$#matches-1];
     	# Add simple dependencies for accumulated influences.
    -	my $i=$accum->influences;
    -	foreach my $k (keys %$i) {
    -		$depends_simple{$page}{lc $k} |= $i->{$k};
    +	my $inf=$accum->influences;
    +	foreach my $k (keys %$inf) {
    +		$depends_simple{$page}{lc $k} |= $inf->{$k};
    -	# when all matches will be returned, it's efficient to
    -	# sort after matching
    +	# Sort if we didn't already.
     	if (! defined $num && defined $sort) {
     		return IkiWiki::SortSpec::sort_pages(
    @@ -2455,6 +2473,12 @@ sub sort_pages {
     		IkiWiki::pagetitle(IkiWiki::basename($a))
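The "this could be done less expensively using a binary search" comment in
the patch refers to the linear scan that inserts a new match into the sorted
window. In Python terms (plain numbers standing in for pages under a sort
spec; this is an illustrative sketch, not ikiwiki code), the bounded window
with binary insertion looks like:

```python
import bisect

def insert_bounded(window, key, m):
    """Insert key into the sorted window, keeping at most the m smallest."""
    if len(window) == m:
        if key >= window[-1]:
            return               # not better than the current worst
        window.pop()             # make room by dropping the worst
    bisect.insort(window, key)   # O(log m) search, O(m) shift

window = []
for k in [9, 4, 7, 2, 8, 6]:
    insert_bounded(window, k, 3)
# window == [2, 4, 6]
```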
This would be bad when M is very large, and particularly, of course, when
there is no limit and all pages are being matched on. (For example, an
archive page shows all pages that match a pagespec specifying a creation
date range.) Well, in this case, it *does* make sense to flip it: filter by
pagespec first, and do a (quick)sort second. (No influence complications,
either.)
> Flipping when there's no limit is now implemented, and it knocked 1/3 off
> the rebuild time of my blog's archive pages. --[[Joey]]
Adding these special cases will be more complicated, but I think it gives the
best of both worlds. --[[Joey]]
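The combined behaviour, sketched in one place (Python again; `match` and
`sort_key` are hypothetical stand-ins for the pagespec and sort spec, and
this is only an outline of the dispatch, not ikiwiki's actual code):

```python
def match_list(pages, match, sort_key, num=None):
    if num is None:
        # No limit: flip it -- apply the pagespec first, then sort
        # only the (usually smaller) matching set.
        return sorted((p for p in pages if match(p)), key=sort_key)
    # Limit of num pages: keep a bounded window of the best matches,
    # doing the cheap comparison before the expensive pagespec test.
    best = []
    for p in pages:
        if len(best) == num and sort_key(p) >= sort_key(best[-1]):
            continue              # cannot beat the current worst
        if match(p):
            best.append(p)
            best.sort(key=sort_key)
            if len(best) > num:
                best.pop()        # drop the displaced worst
    return best
```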