X-Git-Url: http://git.vanrenterghem.biz/git.ikiwiki.info.git/blobdiff_plain/7e987bbfeeea0d93a7d636de5124ecf15ef730be..a01e0679f4134f849473f8a98cb43f5a4aa8d7d8:/doc/tips/convert_mediawiki_to_ikiwiki.mdwn?ds=sidebyside

diff --git a/doc/tips/convert_mediawiki_to_ikiwiki.mdwn b/doc/tips/convert_mediawiki_to_ikiwiki.mdwn
index 1e5b912a9..9719d9a7e 100644
--- a/doc/tips/convert_mediawiki_to_ikiwiki.mdwn
+++ b/doc/tips/convert_mediawiki_to_ikiwiki.mdwn
@@ -8,6 +8,9 @@ converting some of the Mediawiki conventions into Ikiwiki ones.
 The following instructions describe ways of obtaining the current version of
 the wiki. We do not yet cover importing the history of edits.
 
+Another set of instructions and conversion tools (which imports the full history)
+can be found at 
+
 ## Step 1: Getting a list of pages
 
 The first bit of information you require is a list of pages in the Mediawiki.
@@ -27,15 +30,16 @@ that this script is sensitive to the specific markup used on the page, so
 if you have tweaked your mediawiki theme a lot from the original, you will
 need to adjust this script too:
 
+    import sys
     from xml.dom.minidom import parse, parseString
 
-    dom = parse(argv[1])
+    dom = parse(sys.argv[1])
     tables = dom.getElementsByTagName("table")
     pagetable = tables[-1]
    anchors = pagetable.getElementsByTagName("a")
     for a in anchors:
         print a.firstChild.toxml().\
-            replace('&amp;,'&').\
+            replace('&amp;','&').\
             replace('&lt;','<').\
             replace('&gt;','>')
 
@@ -114,7 +118,7 @@ into an ikiwiki tag name using a script such as
     pattern = r'\[\[Category:([^\]]+)\]\]'
 
     def manglecat(mo):
-        return '[[!tag %s]]' % mo.group(1).strip().replace(' ','_')
+        return '\[[!tag %s]]' % mo.group(1).strip().replace(' ','_')
 
     for line in sys.stdin.readlines():
         res = re.match(pattern, line)
@@ -131,5 +135,7 @@ most of the Mediawiki syntax.
 
 [[sabr]] used to explain how to [import MediaWiki content into
 git](http://u32.net/Mediawiki_Conversion/index.html?updated), including full
-edit history, but as of 2009/10/16 that site is not available.
+edit history, but as of 2009/10/16 that site is not available. A copy of the
+information found on this website is stored at 
+
 
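The page-list script patched in the first code hunk above can be restated as a self-contained Python 3 sketch (the diff's version targets Python 2's `print` statement; the `page_titles` helper name and the saved `Special:Allpages` input file are assumptions for illustration, not part of the patched page):

```python
import sys
from xml.dom.minidom import parse, parseString

def page_titles(dom):
    # As in the patched script: the last <table> on a rendered
    # Special:Allpages page holds the anchors whose text is a page title.
    pagetable = dom.getElementsByTagName("table")[-1]
    titles = []
    for a in pagetable.getElementsByTagName("a"):
        # toxml() re-escapes entities in the text node, so undo
        # &amp;/&lt;/&gt; exactly as the original script does.
        text = a.firstChild.toxml()
        titles.append(text.replace('&amp;', '&')
                          .replace('&lt;', '<')
                          .replace('&gt;', '>'))
    return titles

if __name__ == "__main__" and len(sys.argv) > 1:
    # Usage sketch: python3 pagelist.py saved-allpages.html
    for title in page_titles(parse(sys.argv[1])):
        print(title)
```

As the surrounding text warns, this is sensitive to the exact markup of your theme's `Special:Allpages` output; adjust the table selection if your skin differs.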
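Similarly, the category-mangling hunk can be sketched in Python 3. The backslash this commit adds (`'\[[!tag %s]]'`) only keeps ikiwiki from expanding the directive when rendering the documentation page itself; the text a converter emits should be a plain `[[!tag ...]]`. The `convert` wrapper and the use of `re.sub` (instead of the hunk's `re.match` loop over stdin) are assumptions for illustration:

```python
import re

# Same pattern as the hunk: matches Mediawiki "[[Category:Foo Bar]]" markers.
PATTERN = r'\[\[Category:([^\]]+)\]\]'

def manglecat(mo):
    # Spaces in the category name become underscores in the ikiwiki tag name.
    return '[[!tag %s]]' % mo.group(1).strip().replace(' ', '_')

def convert(line):
    # Rewrite every category marker on the line as an ikiwiki tag directive.
    return re.sub(PATTERN, manglecat, line)
```

A filter over stdin, as in the original, would just apply `convert` to each input line and write the result out.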