* Use precalculated backlinks info when determining if files need an update
due to a page they link to being added/removed. Mostly significant if
there are lots of pages.
* Remove duplicate link info when saving index. In some cases it could
pile up rather badly. (Probably not the best way to deal with this
problem.)
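
The duplicate-link fix keeps only the first occurrence of each link and preserves order, rather than sorting the list. A standalone sketch of that idiom, using made-up link targets rather than the real savestate() data:

    #!/usr/bin/perl
    # Order-preserving dedupe: keep only the first occurrence of each link.
    use strict;
    use warnings;

    my @links = qw(foo bar foo baz bar);   # made-up link targets

    my %count;
    my @unique = grep { ++$count{$_} == 1 } @links;

    # Prints "link=foo link=bar link=baz"
    print join(" ", map { "link=$_" } @unique), "\n";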
"ctime=$pagectime{$page} ".
"src=$pagesources{$page}";
$line.=" dest=$_" foreach @{$renderedfiles{$page}};
"ctime=$pagectime{$page} ".
"src=$pagesources{$page}";
$line.=" dest=$_" foreach @{$renderedfiles{$page}};
- $line.=" link=$_" foreach @{$links{$page}};
+ my %count;
+ $line.=" link=$_" foreach grep { ++$count{$_} == 1 } @{$links{$page}};
if (exists $depends{$page}) {
$line.=" depends=".encode_entities($depends{$page}, " \t\n");
}
if ($config{aggregate}) {
IkiWiki::loadindex();
aggregate();
+ expire();
savestate();
}
IkiWiki::unlockwiki();
$feed->{expireage}=defined $params{expireage} ? $params{expireage} : 0;
$feed->{expirecount}=defined $params{expirecount} ? $params{expirecount} : 0;
delete $feed->{remove};
+ delete $feed->{expired};
$feed->{lastupdate}=0 unless defined $feed->{lastupdate};
$feed->{numposts}=0 unless defined $feed->{numposts};
$feed->{newposts}=0 unless defined $feed->{newposts};
+ elsif ($data->{expired} && exists $data->{page}) {
+ unlink pagefile($data->{page});
+ delete $data->{page};
+ delete $data->{md5};
+ }
my @line;
foreach my $field (keys %$data) {
+sub expire () { #{{{
+ foreach my $feed (values %feeds) {
+ next unless $feed->{expireage} || $feed->{expirecount};
+ my $count=0;
+ foreach my $item (sort { $IkiWiki::pagectime{$b->{page}} <=> $IkiWiki::pagectime{$a->{page}} }
+ grep { exists $_->{page} && $_->{feed} eq $feed->{name} && $IkiWiki::pagectime{$_->{page}} }
+ values %guids) {
+ if ($feed->{expireage}) {
+ my $days_old = (time - $IkiWiki::pagectime{$item->{page}}) / 60 / 60 / 24;
+ if ($days_old > $feed->{expireage}) {
+ debug("expiring ".$item->{page}." ($days_old days old)");
+ $item->{expired}=1;
+ }
+ }
+ elsif ($feed->{expirecount} &&
+ $count >= $feed->{expirecount}) {
+ debug("expiring ".$item->{page});
+ $item->{expired}=1;
+ }
+ else {
+ $count++;
+ }
+ }
+ }
+} #}}}
+
sub aggregate () { #{{{
eval q{use XML::Feed};
die $@ if $@;
displaytime($feed->{lastupdate});
$feed->{error}=0;
}
} #}}}
sub add_page (@) { #{{{
my $backlinks_calculated=0;
sub calculate_backlinks () { #{{{
+ return if $backlinks_calculated;
%backlinks=();
foreach my $page (keys %links) {
foreach my $link (@{$links{$page}}) {
sub backlinks ($) { #{{{
my $page=shift;
- calculate_backlinks() unless $backlinks_calculated;
+ calculate_backlinks();
my @links;
return unless $backlinks{$page};
# render changed and new pages
foreach my $file (@changed) {
- # if any files were added or removed, check to see if each page
- # needs an update due to linking to them or inlining them
+ # rebuild pages that link to added or removed pages
-FILE: foreach my $file (@files) {
- next if $rendered{$file};
- my $page=pagename($file);
- foreach my $f (@add, @del) {
- my $p=pagename($f);
- foreach my $link (@{$links{$page}}) {
- if (bestlink($page, $link) eq $p) {
- debug("rendering $file, which links to $p");
- render($file);
- $rendered{$file}=1;
- next FILE;
- }
- }
+ foreach my $f (@add, @del) {
+ my $p=pagename($f);
+ foreach my $page (keys %{$backlinks{$p}}) {
+ my $file=$pagesources{$page};
+ next if $rendered{$file};
+ debug("rendering $file, which links to $p");
+ render($file);
+ $rendered{$file}=1;
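
The rewritten loop above walks the precomputed %backlinks map instead of rescanning every page's link list for each added or removed file. A rough standalone sketch of the same idea, with made-up page names and without the bestlink() resolution the real calculate_backlinks() performs:

    #!/usr/bin/perl
    # Sketch: find pages that need re-rendering because they link to an
    # added or removed page, using an inverted backlinks map.
    use strict;
    use warnings;

    # Forward links: page => [ pages it links to ] (made-up names).
    my %links = (
        "index"   => ["news", "todo"],
        "news"    => ["todo"],
        "sandbox" => [],
    );

    # Invert once: %backlinks{target}{referrer} = 1
    my %backlinks;
    foreach my $page (keys %links) {
        $backlinks{$_}{$page} = 1 foreach @{$links{$page}};
    }

    # Referrers of an added/removed page come straight out of the map,
    # with no per-page rescan of %links.
    foreach my $changed ("todo") {
        foreach my $referrer (keys %{ $backlinks{$changed} || {} }) {
            print "rebuild $referrer, which links to $changed\n";
        }
    }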
* Improve login/register process, the login dialog has only name and
  password fields, which allows more web browsers to recognise it as a login
  field, and is less confusing.
-
- -- Joey Hess <joeyh@debian.org> Mon, 30 Oct 2006 18:26:55 -0500
+ * Implemented expiry options for aggregate plugin.
+ * Use precalculated backlinks info when determining if files need an update
+ due to a page they link to being added/removed. Mostly significant if
+ there are lots of pages.
+ * Remove duplicate link info when saving index. In some cases it could
+ pile up rather badly. (Probably not the best way to deal with this
+ problem.)
+
+ -- Joey Hess <joeyh@debian.org> Wed, 1 Nov 2006 00:00:10 -0500
ikiwiki (1.31) unstable; urgency=low
[[template id=plugin name=aggregate included=1 author="[[Joey]]"]]
[[tag type/useful]]
-This plugin allows content from other blogs to be aggregated into the wiki.
-Aggregate a blog as follows:
+This plugin allows content from other feeds to be aggregated into the wiki.
+Aggregate a feed as follows:
\[[aggregate name="example blog"
feedurl="http://example.com/index.rss"
* `name` - A name for the feed. Each feed must have a unique name.
Required.
-* `url` - The url to the web page for the blog that's being aggregated.
+* `url` - The url to the web page for the feed that's being aggregated.
Required.
* `dir` - The directory in the wiki where pages should be saved. Optional,
if not specified, the directory is based on the name of the feed.
will look for feeds on the `url`. RSS and atom feeds are supported.
* `updateinterval` - How often to check for new posts, in minutes. Default
is 15 minutes.
-* `expireage` - Expire old items from this blog if they are older than
- a specified number of days. Default is to never expire on age. (Not yet
- implemented.)
-* `expirecount` - Expire old items from this blog if there are more than
+* `expireage` - Expire old items from this feed if they are older than
+ a specified number of days. Default is to never expire on age.
+* `expirecount` - Expire old items from this feed if there are more than
the specified number total. Oldest items will be expired first. Default
- is to never expire on count. (Not yet implemented.)
-* `tag` - A tag to tag each post from the blog with. A good tag to use is
- the name of the blog. Can be repeated multiple times. The [[tag]] plugin
+ is to never expire on count. See the example below.
+* `tag` - A tag to tag each post from the feed with. A good tag to use is
+ the name of the feed. Can be repeated multiple times. The [[tag]] plugin
must be enabled for this to work.
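
For example, a feed could be capped at its 30 most recent items with a directive like this (the feed name and urls are placeholders, and 30 is an arbitrary value):

    \[[aggregate name="example blog"
    url="http://example.com/"
    feedurl="http://example.com/index.rss"
    expirecount=30]]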
Note that even if you are using subversion or another revision control