+> That's difficult to do while keeping reasonable speed. Ikiwiki needs to
+> know about all the links between pages before it can know which pages it
+> needs to rebuild, so it can update backlink lists, update links to point
+> to new/moved pages, etc. Currently it accomplishes this with a first pass
+> that scans new and changed files and quickly finds all the wikilinks
+> using a simple regexp. If it had to render each whole page before it
+> could scan for hrefs with an HTML parser, that would make it at least
+> twice as slow, or would require it to cache all the rendered pages in
+> memory to avoid re-rendering. I don't want ikiwiki to be slow or use
+> excessive amounts of memory. YMMV. --[[Joey]]
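+>
+> For illustration, that first pass can be as simple as one regexp over
+> each new or changed file. The sketch below is only a rough picture of
+> the idea (the `[[link]]` pattern and the `%links` hash are assumptions
+> for this example, not ikiwiki's actual code):
+>
+>     use strict;
+>     use warnings;
+>
+>     my %links;   # page name => wikilink targets found on that page
+>
+>     sub scan_page {
+>         my ($page, $content) = @_;
+>         # One cheap regexp pass finds the wikilinks without rendering anything.
+>         while ($content =~ /\[\[([^\]\s]+)\]\]/g) {
+>             push @{$links{$page}}, $1;
+>         }
+>     }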
+
+>> Or you could cache the incomplete page, containing only the body text, on
+>> disk. That body text should rarely need re-rendering, since most changes
+>> only affect link targets, and which pages exist is known before a single
+>> page is rendered. Then, after backlinks have been resolved, it would
+>> suffice to feed the cached body text into the template. Plugins like
+>> inline would still demand extra rendering after the pages they depend on
+>> have been rendered, but such pages should not be that frequent, nor
+>> inline that many other pages in full. (And for 'archive' pages we don't
+>> need to remember much information from the semi-inlined pages.) It would
+>> help if the HTMLizer could return data structures instead of HTML text,
+>> so these data structures could simply be cached in some quickly-loadable
+>> form (which I suppose perl itself has support for). Regexp hacks are so
+>> ugly compared to actually parsing a properly-defined syntax...
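+>>
+>> As a rough example, Perl's Storable module can write and reload such
+>> structures quickly. This is only a sketch under assumed names (the cache
+>> location, the page name, and the shape of the parsed structure are all
+>> made up for illustration):
+>>
+>>     use strict;
+>>     use warnings;
+>>     use Storable qw(nstore retrieve);
+>>
+>>     my $cachedir = ".page-cache";   # hypothetical cache directory
+>>     my $page     = "index";
+>>
+>>     # Hypothetical parsed form of a page: body chunks plus unresolved links.
+>>     my $parsed = {
+>>         body  => [ "some text ", { link => "OtherPage" }, " more text" ],
+>>         links => [ "OtherPage" ],
+>>     };
+>>
+>>     mkdir $cachedir unless -d $cachedir;
+>>     nstore($parsed, "$cachedir/$page.cache");
+>>
+>>     # Later, once backlinks are resolved, reload cheaply and fill the template.
+>>     my $cached = retrieve("$cachedir/$page.cache");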
+
+A related possibility would be to move much of the "preprocessing" to after
+HTML generation as well (thus avoiding some conflicts with the htmlifier),
+by using special tags for the preprocessor directives. (The old
+preprocessor could simply replace links and directives with appropriate
+placeholder tags that the htmlifier is expected to let through as-is.
+Possibly the htmlifier plugin could configure the format of those tags.)
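+
+As a rough sketch of that idea (the `<wikilink>` placeholder tag and the
+helper functions below are invented for illustration, not an existing
+ikiwiki interface):
+
+    use strict;
+    use warnings;
+
+    # Before htmlification: replace wiki syntax with placeholder tags that
+    # the htmlifier is expected to pass through untouched.
+    sub preprocess {
+        my $text = shift;
+        $text =~ s{\[\[([^\]\s]+)\]\]}{<wikilink target="$1" />}g;
+        return $text;
+    }
+
+    # After htmlification, once final link targets are known, expand them.
+    sub expand_placeholders {
+        my ($html, $urlfor) = @_;   # $urlfor: page name => final URL
+        $html =~ s{<wikilink target="([^"]+)" />}{
+            my $target = $1;
+            my $url = $urlfor->{$target} // "$target.html";
+            qq{<a href="$url">$target</a>};
+        }ge;
+        return $html;
+    }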
+
+> Or by using postprocessing, though there are problems with that too, and
+> it doesn't solve the link scanning issue.