[[!meta date="2008-10-20 16:55:38 -0400"]]

Mediawiki is a dynamically-generated wiki which stores its data in a
relational database. Pages are marked up using a proprietary markup. It is
possible to import the contents of a Mediawiki site into an ikiwiki,
converting some of the Mediawiki conventions into Ikiwiki ones.

The following instructions describe ways of obtaining the current version of
the wiki. We do not yet cover importing the history of edits.

Another set of instructions and conversion tools (which imports the full history)
can be found at <http://github.com/mithro/media2iki>

## Step 1: Getting a list of pages

The first bit of information you require is a list of pages in the Mediawiki.
There are several different ways of obtaining these.

### Parsing the output of `Special:Allpages`
Mediawikis have a special page called `Special:Allpages` which lists all the
pages for a given namespace on the wiki.

If you fetch the output of this page to a local file with something like

    wget -q -O tmpfile 'http://your-mediawiki/wiki/Special:Allpages'
You can extract the list of page names using the following python script. Note
that this script is sensitive to the specific markup used on the page, so if
you have tweaked your mediawiki theme a lot from the original, you will need
to adjust this script too:

    import sys
    from xml.dom.minidom import parse

    dom = parse(sys.argv[1])
    tables = dom.getElementsByTagName("table")
    pagetable = tables[-1]
    anchors = pagetable.getElementsByTagName("a")
    for a in anchors:
        print(a.firstChild.toxml()
               .replace('&amp;', '&')
               .replace('&lt;', '<')
               .replace('&gt;', '>'))
Also, if you have pages with titles that need to be encoded to be represented
in HTML, you may need to add further processing to the last line.
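For that further processing, the standard library's `html.unescape` handles any entity the page markup may use. A minimal sketch (the page title below is a hypothetical example, not one from your wiki):

```python
import html

def decode_title(title):
    """Decode any HTML entities remaining in a scraped page title."""
    return html.unescape(title)

print(decode_title('Fish &amp; Chips'))  # Fish & Chips
```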
Note that by default, `Special:Allpages` will only list pages in the main
namespace. You need to add a `&namespace=XX` argument to get pages in a
different namespace. (See below for the default list of namespaces.)

Note that the page names obtained this way will not include any namespace
specific prefix: e.g. `Category:` will be stripped off.
### Querying the database

If you have access to the relational database in which your mediawiki data is
stored, it is possible to derive a list of page names from this. With mediawiki's
MySQL backend, the page table is, appropriately enough, called `page`:

    SELECT page_namespace, page_title FROM page;
As with the previous method, you will need to do some filtering based on the
namespace.

### Namespaces

The list of default namespaces in mediawiki is available from
<http://www.mediawiki.org/wiki/Manual:Namespace#Built-in_namespaces>. Here are
reproduced the ones you are most likely to encounter if you are running a
small mediawiki install for your own purposes:
Index | Name      | Example
------|-----------|--------
3     | User talk | User_talk:Jon
6     | File      | File:Barack_Obama_signature.svg
10    | Template  | Template:Prettytable
14    | Category  | Category:Pages_needing_review
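Rows from the database query can be turned back into full page titles by re-adding the namespace prefix. A minimal sketch, assuming the default namespace numbering above (extend the map for any custom namespaces your wiki defines):

```python
# Map namespace indices to title prefixes (default MediaWiki namespaces;
# 0 is the main namespace and carries no prefix).
NAMESPACES = {
    0: '',
    3: 'User_talk:',
    6: 'File:',
    10: 'Template:',
    14: 'Category:',
}

def full_title(namespace, title):
    """Rebuild a full page title from a (page_namespace, page_title) row."""
    return NAMESPACES.get(namespace, '') + title

print(full_title(14, 'Pages_needing_review'))  # Category:Pages_needing_review
print(full_title(0, 'Main_Page'))              # Main_Page
```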
## Step 2: fetching the page data

Once you have a list of page names, you can fetch the data for each page.

### Method 1: via HTTP and `action=raw`

You need to create two derived strings from the page titles: the
destination path for the page and the source URL. Assuming `$pagename`
contains a pagename obtained above, and `$wiki` contains the URL to your
mediawiki's `index.php` file:

    src=`echo "$pagename" | tr ' ' _ | sed 's,&,\&amp;,g'`
    dest=`echo "$pagename" | tr ' ' _ | sed 's,&,__38__,g'`

    mkdir -p `dirname "$dest"`
    wget -q "$wiki?title=$src&action=raw" -O "$dest"

You may need to add more conversions here depending on the precise page titles
used in your wiki.
If you are trying to fetch pages from a different namespace to the default,
you will need to prefix the page title with the relevant prefix, e.g.
`Category:` for category pages. You probably don't want to prefix it to the
output page, but you may want to vary the destination path (i.e. insert an
extra directory component corresponding to your ikiwiki's `tagbase`).
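The shell fragments above can also be sketched as a small Python helper. This is a hedged sketch, not a finished importer: the wiki URL is an assumption you must adjust, and only the URL and path construction are exercised here (the `fetch` function performs the actual network download):

```python
import os
import urllib.parse
import urllib.request

WIKI = 'http://your-mediawiki/wiki/index.php'  # assumption: your index.php URL

def raw_url(pagename):
    """Build the action=raw source URL for a page title."""
    return WIKI + '?' + urllib.parse.urlencode(
        {'title': pagename.replace(' ', '_'), 'action': 'raw'})

def dest_path(pagename):
    """Build the destination file name, escaping '&' the way ikiwiki expects."""
    return pagename.replace(' ', '_').replace('&', '__38__') + '.mdwn'

def fetch(pagename):
    """Fetch one page into its destination file (network side effect)."""
    dest = dest_path(pagename)
    d = os.path.dirname(dest)
    if d:
        os.makedirs(d, exist_ok=True)
    with open(dest, 'wb') as f:
        f.write(urllib.request.urlopen(raw_url(pagename)).read())

print(raw_url('Main Page'))  # http://your-mediawiki/wiki/index.php?title=Main_Page&action=raw
```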
### Method 2: via HTTP and `Special:Export`

Mediawiki also has a special page `Special:Export` which can be used to obtain
the source of the page and other metadata, such as the last contributor or the
revision history.

You need to send a `POST` request to the `Special:Export` page. See the source
of the page fetched via `GET` to determine the correct arguments.

You will then need to write an XML parser to extract the data you need from
the result.
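A hedged sketch of such a parser, using `xml.etree.ElementTree` from the standard library. The sample document below is illustrative, not real `Special:Export` output; the tag names (`page`, `title`, `revision`, `text`) match the MediaWiki export format, but the real export also carries an XML namespace, which this sketch strips for simplicity:

```python
import xml.etree.ElementTree as ET

def extract_pages(xml_text):
    """Yield (title, wikitext) pairs from a MediaWiki export document."""
    root = ET.fromstring(xml_text)
    for elem in root.iter():
        # Drop any XML namespace prefix so tag names compare cleanly.
        elem.tag = elem.tag.split('}')[-1]
    for page in root.iter('page'):
        yield page.findtext('title'), page.findtext('revision/text')

# Illustrative sample only -- not real Special:Export output:
SAMPLE = """<mediawiki>
  <page>
    <title>Main Page</title>
    <revision><text>Hello, ''world''.</text></revision>
  </page>
</mediawiki>"""

for title, text in extract_pages(SAMPLE):
    print(title)  # Main Page
```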
### Method 3: via the database

It is possible to extract the page data from the database with some
well-crafted queries.

## Step 3: format conversion

The next step is to convert Mediawiki conventions into Ikiwiki ones.
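One difference beyond categories (covered below) is link syntax: Mediawiki writes `\[[Page|label]]` while ikiwiki writes `\[[label|Page]]`. A hedged sketch of that single conversion; it deliberately ignores the many other constructs (external links, images, templates) a full converter must handle:

```python
import re

# Matches a Mediawiki-style link: \[[Page|label]]
LINK = re.compile(r'\[\[([^]|]+)\|([^]]+)\]\]')

def convert_links(text):
    """Swap the Page and label halves of each link into ikiwiki order."""
    return LINK.sub(lambda m: '[[%s|%s]]' % (m.group(2), m.group(1)), text)
```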
Mediawiki uses a special page name prefix to define "Categories", which
otherwise behave like ikiwiki tags. You can convert every Mediawiki category
into an ikiwiki tag name using a script such as the following:

    import re
    import sys

    pattern = r'\[\[Category:([^\]]+)\]\]'

    def manglecat(mo):
        return '\[[!tag %s]]' % mo.group(1).strip().replace(' ', '_')

    for line in sys.stdin.readlines():
        if re.search(pattern, line):
            sys.stdout.write(re.sub(pattern, manglecat, line))
        else:
            sys.stdout.write(line)
## Step 4: Mediawiki plugin or Converting to Markdown

You can use a plugin to make ikiwiki support Mediawiki syntax, or you can
convert pages to a format ikiwiki understands.

### Step 4a: Mediawiki plugin

The [[plugins/contrib/mediawiki]] plugin can be used by ikiwiki to interpret
most of the Mediawiki syntax.

The following things are not working:

* spaces and other funky characters ("?") in page names

### Step 4b: Converting pages

#### Converting to Markdown

There is a Python script for converting from the Mediawiki format to Markdown in [[mithro]]'s conversion repository at <http://github.com/mithro/media2iki>. *WARNING:* While the script tries to preserve everything it can, Markdown syntax is not as flexible as Mediawiki's, so the conversion is lossy!
    # The script needs the mwlib library to work
    # If you don't have easy_install installed, apt-get install python-setuptools
    sudo easy_install mwlib

    git clone git://github.com/mithro/media2iki.git
    cd media2iki

    python mediawiki2markdown.py --no-strict --no-debugger <my mediawiki file> > output.md
[[mithro]] doesn't frequent this page, so please report issues on the [github issue tracker](https://github.com/mithro/media2iki/issues).
## Scripts

### media2iki (python)

There is a repository of tools for converting MediaWiki to Git based Markdown wiki formats (such as ikiwiki and github wikis) at <http://github.com/mithro/media2iki>. It also includes a standalone tool for converting from the Mediawiki format to Markdown. [[mithro]] doesn't frequent this page, so please report issues on the [github issue tracker](https://github.com/mithro/media2iki/issues).
### mediawiki2gitikiwiki (ruby)

[[Albert]] wrote a ruby script to convert from mediawiki's database to ikiwiki at <https://github.com/docunext/mediawiki2gitikiwiki>.

### levitation (xml to git)

[[scy]] wrote a python script to convert from mediawiki XML dumps to git repositories at <https://github.com/scy/levitation>.
### git-mediawiki

There's now support for mediawiki as a git remote:

<https://github.com/moy/Git-Mediawiki/wiki>
### mediawikigitdump

[[Anarcat]] wrote a python script to convert from a mediawiki website to ikiwiki at git://src.anarcat.ath.cx/mediawikigitdump.git/. The script doesn't need any special access or privileges and communicates with the documented API (so it's a bit slower, but allows you to mirror sites you are not managing, like parts of Wikipedia). The script can also incrementally import new changes from a running site, through RecentChanges inspection. It also supports mithro's new Mediawiki2markdown converter (which I have a copy here: git://src.anarcat.ath.cx/media2iki.git/).
> Some assembly is required to get Mediawiki2markdown and its mwlib
> gitmodule available in the right place for it to use... perhaps you could
> automate that? --[[Joey]]

> > You mean a debian package? :) media2iki is actually a submodule, so you need to go through extra steps to install it. mwlib being the most annoying part... I have fixed my script so it looks for media2iki directly in the submodule and improved the install instructions in the README file, but I'm not sure I can do much more short of starting to package the whole thing... --[[anarcat]]

>>> You may have forgotten to push that, I don't see those changes.
>>> Packaging the python library might be a good first step.
> Also, when I try to run it with -t on www.amateur-radio-wiki.net, it
> fails on some html in the page named "4_metres". On archiveteam.org,
> it fails trying to write to a page filename starting with "/". --[[Joey]]

> > can you show me exactly which commandline arguments you're using? also, I have made improvements to the converter too, also available here: git://src.anarcat.ath.cx/media2iki.git/ -- [[anarcat]]

> > > Not using your new converter, just the installation I did earlier:
    fetching page 4 metres from http://www.amateur-radio-wiki.net//index.php?action=raw&title=4+metres into 4_metres.mdwn
    Unknown tag TagNode tagname='div' vlist={'style': {u'float': u'left', u'border': u'2px solid #aaa', u'margin-left': u'20px'}}->'div' div
    Traceback (most recent call last):
      File "./mediawikigitdump.py", line 298, in <module>
        fetch_allpages(namespace)
      File "./mediawikigitdump.py", line 82, in fetch_allpages
        fetch_page(page.getAttribute('title'))
      File "./mediawikigitdump.py", line 187, in fetch_page
        c.parse(urllib.urlopen(url).read())
      File "/home/joey/tmp/mediawikigitdump/mediawiki2markdown.py", line 285, in parse
      File "/home/joey/tmp/mediawikigitdump/mediawiki2markdown.py", line 76, in parse_node
      File "/home/joey/tmp/mediawikigitdump/mediawiki2markdown.py", line 88, in on_article
        self.parse_children(node)
      File "/home/joey/tmp/mediawikigitdump/mediawiki2markdown.py", line 83, in parse_children
        self.parse_node(child)
      File "/home/joey/tmp/mediawikigitdump/mediawiki2markdown.py", line 76, in parse_node
      File "/home/joey/tmp/mediawikigitdump/mediawiki2markdown.py", line 413, in on_section
        self.parse_node(child)
      File "/home/joey/tmp/mediawikigitdump/mediawiki2markdown.py", line 76, in parse_node
      File "/home/joey/tmp/mediawikigitdump/mediawiki2markdown.py", line 83, in parse_children
        self.parse_node(child)
      File "/home/joey/tmp/mediawikigitdump/mediawiki2markdown.py", line 76, in parse_node
      File "/home/joey/tmp/mediawikigitdump/mediawiki2markdown.py", line 474, in on_tagnode
        assert not options.STRICT
    zsh: exit 1     ./mediawikigitdump.py -v -t http://www.amateur-radio-wiki.net/
    joey@wren:~/tmp/mediawikigitdump>./mediawikigitdump.py -v -t http://archiveteam.org
    fetching page list from namespace 0 ()
    fetching page /Sites using MediaWiki (English) from http://archiveteam.org/index.php?action=raw&title=%2FSites+using+MediaWiki+%28English%29 into /Sites_using_MediaWiki_(English).mdwn
    Traceback (most recent call last):
      File "./mediawikigitdump.py", line 298, in <module>
        fetch_allpages(namespace)
      File "./mediawikigitdump.py", line 82, in fetch_allpages
        fetch_page(page.getAttribute('title'))
      File "./mediawikigitdump.py", line 188, in fetch_page
        f = open(filename, 'w')
    IOError: [Errno 13] Permission denied: u'/Sites_using_MediaWiki_(English).mdwn'
    zsh: exit 1     ./mediawikigitdump.py -v -t http://archiveteam.org
> > > > > I have updated my script to call the parser without strict mode and to trim leading slashes (and /../, for that matter...) -- [[anarcat]]

> > > > > > Getting this error with the new version on any site I try (when using -t only): `TypeError: argument 1 must be string or read-only character buffer, not None`
> > > > > > bisecting, commit 55941a3bd89d43d09b0c126c9088eee0076b5ea2 broke it.
> > > > > > --[[Joey]]

> > > > > > > I can't reproduce here, can you try with -v or -d to try to trace down the problem? -- [[anarcat]]
    fetching page list from namespace 0 ()
    fetching page 0 - 9 from http://www.amateur-radio-wiki.net/index.php?action=raw&title=0+-+9 into 0_-_9.mdwn
    Traceback (most recent call last):
      File "./mediawikigitdump.py", line 304, in <module>
      File "./mediawikigitdump.py", line 301, in main
        fetch_allpages(options.namespace)
      File "./mediawikigitdump.py", line 74, in fetch_allpages
        fetch_page(page.getAttribute('title'))
      File "./mediawikigitdump.py", line 180, in fetch_page
        f.write(options.convert(urllib.urlopen(url).read()))
    TypeError: argument 1 must be string or read-only character buffer, not None
    zsh: exit 1     ./mediawikigitdump.py -v -d -t http://www.amateur-radio-wiki.net/