X-Git-Url: http://git.vanrenterghem.biz/git.ikiwiki.info.git/blobdiff_plain/9adb841c92be53fd1af796e4aae4437d3c098b54..dd0981824d1d919f431d75af47e07447d460e1b4:/doc/todo/optimisations.mdwn?ds=sidebyside

diff --git a/doc/todo/optimisations.mdwn b/doc/todo/optimisations.mdwn
index 4e8118756..582c03ef1 100644
--- a/doc/todo/optimisations.mdwn
+++ b/doc/todo/optimisations.mdwn
@@ -1,25 +1,8 @@
-* Render each changed page only once. Currently pages are rendered up to 4
-  times in worst case (8 times if there's an rss feed).
-
-  The issue is that rendering a page is used to gather info like the links
-  on the page (and other stuff) that can effect rendering other pages. So it
-  needs a multi-pass system. But rendering the whole page in each pass is
-  rather obscene.
-
-  It would be better to have the first pass be a data gathering pass. Such
-  a pass would still need to load and parse the page contents etc, but
-  wouldn't need to generate html or write anything to disk.
-
-  One problem with this idea is that it could turn into 2x the work in
-  cases where ikiwiki currently efficiently renders a page just once. And
-  caching between the passes to avoid that wouldn't do good things to the
-  memory footprint.
-
-  Might be best to just do a partial first pass, getting eg, the page links
-  up-to-date, and then multiple, but generally fewer, rendering passes.
-
-* Don't render blog archive pages unless a page is added/removed. Just
-  changing a page doesn't affect the archives as they show only the title.
+Ikiwiki has already been optimised a lot, however..
 
 * Look at splitting up CGI.pm. But note that too much splitting can slow
   perl down.
+
+* The backlinks calculation code is still O(N^2) on the number of pages.
+  If backlinks info were stored in the index file, it would go down to
+  constant time for iterative builds, though still N^2 for rebuilds.
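The backlinks item added above lends itself to a small illustration. Below is a minimal sketch, in Python with hypothetical names (ikiwiki itself is written in Perl, and this is not its actual code), of the difference between recomputing backlinks by scanning every page's links — the O(N^2)-flavoured full rebuild — and incrementally updating a backlinks map kept in the index when a single page changes:

```python
# Hypothetical sketch of the optimisation discussed above.
# `links` maps each page name to the list of pages it links to,
# roughly what ikiwiki's index records per page.

def backlinks_full(links):
    """Full rebuild: scan the links of every page (cost grows with
    total pages x links, the expensive case described above)."""
    back = {}
    for page, targets in links.items():
        for target in targets:
            back.setdefault(target, set()).add(page)
    return back

def backlinks_update(back, page, old_targets, new_targets):
    """Incremental build: when one page changes, only touch the
    targets whose link set actually changed -- cost is proportional
    to that page's links, not to the whole wiki."""
    for target in set(old_targets) - set(new_targets):
        back.setdefault(target, set()).discard(page)
    for target in set(new_targets) - set(old_targets):
        back.setdefault(target, set()).add(page)
    return back
```

The incremental path only works if the backlinks map survives between runs, which is why the todo item proposes storing it in the index file.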