Note that by default, `Special:Allpages` will only list pages in the main
namespace. You need to add a `&namespace=XX` argument to get pages in a
different namespace. (See below for the default list of namespaces.)

Note that the page names obtained this way will not include any
namespace-specific prefix: e.g. `Category:` will be stripped off.
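
If your wiki is recent enough to expose `api.php`, the same list can be
obtained through the web API instead of scraping the `Special:Allpages` HTML.
Below is a minimal sketch of that approach; the API URL is a placeholder you
will need to adjust, and it assumes the python `requests` module is installed.
Note that, unlike the `Special:Allpages` listing, the API returns full titles
including the namespace prefix.

    #!/usr/bin/env python
    # Minimal sketch: list every page in one namespace through api.php.
    # The API URL below is a placeholder; adjust it for your installation.
    import requests

    API_URL = "http://example.com/w/api.php"

    def list_pages(namespace=0):
        params = {
            "action": "query",
            "list": "allpages",
            "apnamespace": namespace,
            "aplimit": "500",
            "format": "json",
            "continue": "",   # ask for the modern continuation format
        }
        while True:
            data = requests.get(API_URL, params=params).json()
            for page in data["query"]["allpages"]:
                # The API returns the full title, e.g. "Category:Foo".
                yield page["title"]
            if "continue" not in data:
                break
            params.update(data["continue"])

    if __name__ == "__main__":
        for title in list_pages(namespace=14):   # 14 = Category, see table below
            print(title)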

If you have access to the database in which the mediawiki data is stored,
it is possible to derive a list of page names from it. With mediawiki's
MySQL backend, the page table is, appropriately enough, called `page`:

    SELECT page_namespace, page_title FROM page;

As with the previous method, you will need to do some filtering based on the
namespace.
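
That filtering can be done directly in the query. As an illustration, here is
a small python sketch that pulls the titles of one namespace from the
database; the PyMySQL module, the connection credentials, and the unprefixed
`page` table name are assumptions to adapt to your own setup.

    #!/usr/bin/env python
    # Minimal sketch: list the pages of one namespace straight from the
    # MediaWiki database.  Credentials, database name and table name are
    # placeholders; adjust them (and any table prefix) for your install.
    import pymysql

    conn = pymysql.connect(host="localhost", user="wiki",
                           password="secret", database="wikidb")
    try:
        with conn.cursor() as cur:
            # 0 = Main namespace; see the table below for other indices.
            cur.execute(
                "SELECT page_title FROM page WHERE page_namespace = %s", (0,))
            for (title,) in cur.fetchall():
                # page_title is stored without the namespace prefix and with
                # underscores instead of spaces.
                if isinstance(title, bytes):
                    title = title.decode("utf-8")
                print(title.replace("_", " "))
    finally:
        conn.close()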

### Namespaces

The list of default namespaces in mediawiki is available from <http://www.mediawiki.org/wiki/Manual:Namespace#Built-in_namespaces>. The ones you are most likely to encounter when running a small mediawiki install for your own purposes are reproduced here:

[[!table data="""
Index | Name | Example
0 | Main | Foo
1 | Talk | Talk:Foo
2 | User | User:Jon
3 | User talk | User_talk:Jon
6 | File | File:Barack_Obama_signature.svg
10 | Template | Template:Prettytable
14 | Category | Category:Pages_needing_review
"""]]

## Step 2: fetching the page data
Once you have a list of page names, you can fetch the data for each page.
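
One straightforward way to do this is through MediaWiki's raw export of a
single page (`index.php?title=Foo&action=raw`). Here is a minimal sketch of
that approach; the wiki URL, the example titles and the output file extension
are placeholders to adjust for your own setup.

    #!/usr/bin/env python
    # Minimal sketch: fetch the raw wikitext of each page with the standard
    # "action=raw" handler and write it to one file per page.
    # WIKI_URL is a placeholder; the `requests` module is assumed.
    import requests

    WIKI_URL = "http://example.com/w/index.php"

    def fetch_page(title):
        """Return the raw wikitext of a single page."""
        r = requests.get(WIKI_URL, params={"title": title, "action": "raw"})
        r.raise_for_status()
        return r.text

    if __name__ == "__main__":
        for title in ["Main Page", "Category:Pages_needing_review"]:
            text = fetch_page(title)
            # The ".mediawiki" extension is an assumption: use whatever
            # extension your ikiwiki setup expects for mediawiki markup.
            filename = title.replace("/", "_").replace(" ", "_") + ".mediawiki"
            with open(filename, "w", encoding="utf-8") as f:
                f.write(text)
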
The [[plugins/contrib/mediawiki]] plugin can be used by ikiwiki to interpret
most of the Mediawiki syntax.

The following things are not working:

* templates
* tables
* spaces and other funky characters ("?") in page names

## Scripts

[[sabr]] used to explain how to [import MediaWiki content into
git](http://u32.net/Mediawiki_Conversion/index.html?updated), including full
edit history, but as of 2009/10/16 that site is not available. A copy of the
information found on this website is stored at <http://github.com/mithro/media2iki>.

[[Albert]] wrote a ruby script to convert from mediawiki's database to ikiwiki at <https://github.com/docunext/mediawiki2gitikiwiki>.

[[Anarcat]] wrote a python script to convert from a mediawiki website to ikiwiki at <http://anarcat.ath.cx/software/mediawikigitdump.git/>. The script doesn't need any special access or privileges and communicates with the documented API (so it's a bit slower, but allows you to mirror sites you are not managing, like parts of Wikipedia). The script can also incrementally import new changes from a running site, through RecentChanges inspection.

[[scy]] wrote a python script to convert from mediawiki XML dumps to git repositories at <https://github.com/scy/levitation>.