#include <sys/file.h>
extern char **environ;
-char *newenviron[$#envsave+6];
+char *newenviron[$#envsave+7];
int i=0;
void addenv(char *var, char *val) {
@wrapper_hooks
$envsave
newenviron[i++]="HOME=$ENV{HOME}";
+ newenviron[i++]="PATH=$ENV{PATH}";
newenviron[i++]="WRAPPED_OPTIONS=$configstring";
#ifdef __TINYC__
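
The hunk above comes from the C source that ikiwiki generates for its
wrappers, which is why Perl variables such as `$#envsave` and `$ENV{HOME}`
appear inside it; the array bound grows from `$#envsave+6` to `$#envsave+7`
because one more fixed entry (PATH) is now stored. The same whitelisting
idea, restated as a plain Perl sketch (the wrapped command path is a
placeholder and the saved CGI variables are omitted):

    # Not ikiwiki code: keep only the whitelisted variables from the current
    # environment, and pass the serialized configuration through a single
    # variable, as the generated wrapper does in C.
    use strict;
    use warnings;

    my %keep = map { $_ => $ENV{$_} } grep { defined $ENV{$_} } qw(HOME PATH);
    %ENV = (%keep, WRAPPED_OPTIONS => 'serialized config would go here');
    exec '/path/to/wrapped/program' or die "exec failed: $!";
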
* meta: Ensure that the url specified by xrds-location is absolute.
* attachment: Fix attachment file size display.
+ * Propagate PATH into wrapper.
+ * htmlbalance: Fix compatibility with HTML::Tree 4.0. (smcv)
-- Joey Hess <joeyh@debian.org> Sun, 19 Sep 2010 20:13:06 -0400
>
> Using `%pagestate` to store the cut content when scanning would be
> one way to fix this bug. It would mean storing potentially big chunks
-> of page content in the indexdb. --[[Joey]]
+> of page content in the indexdb. [[done]] --[[Joey]]
</div>
Any suggestions gladly received. -- [[Jon]]
+
+> Well, you *should* be able to do things like this, and in my testing, I
+> *can*. I used your exact example above (removing the backslash escape)
+> and invoked it as:
+> \[[!template id=test href=himom.png size=100x]]
+>
+> And got just what you would expect.
+>
+> I don't know what went wrong for you, but I don't see a bug here.
+> My guess, at the moment, is that you didn't specify the required href
+> and size parameters when using the template. If I leave those off,
+> I of course reproduce what you reported, since the img directive gets
+> called with no filename, and so assumes the size parameter is the image
+> to display.. [[done]]? --[[Joey]]
--- /dev/null
+[[!template id=gitbranch branch=smcv/ready/htmlbalance author="[[smcv]]"]]
+[[!tag patch]]
+
+My one-patch htmlbalance branch fixes incompatibility with HTML::Tree 4.0.
+From the git commit:
+
+ The HTML::Tree changelog says:
+
+ [THINGS THAT MAY BREAK YOUR CODE OR TESTS]
+ ...
+ * Attribute names are now validated in as_XML and invalid names will
+ cause an error.
+
+ and indeed the regression tests do get an error.
+
+--[[smcv]]
+
+[[done]] --[[Joey]]
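
The HTML::Tree 4.0 behaviour quoted in that commit message can be reproduced
in a few lines of Perl (an illustrative sketch, not the branch's patch; the
attribute name is deliberately malformed):

    use strict;
    use warnings;
    use HTML::Element;

    # Build an element carrying an attribute name that is not a valid XML name.
    my $el = HTML::Element->new('p', 'bad attr' => 'x');
    $el->push_content('hello');

    # Older HTML::Tree versions serialized this anyway; per the changelog
    # entry quoted above, 4.0 validates attribute names in as_XML and raises
    # an error instead.
    my $xml = eval { $el->as_XML() };
    print defined $xml ? $xml : "as_XML failed: $@";
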
[[jerojasro]]
+> How is this a bug? It's perfectly legal html for a class attribute to
+> put an element into multiple classes. [[notabug|done]] --[[Joey]]
--- /dev/null
+[[!comment format=mdwn
+ username="https://www.google.com/accounts/o8/id?id=AItOawmlZJCPogIE74m6GSCmkbJoMZiWNOlXcjI"
+ nickname="Ian"
+ subject="comment 1"
+ date="2010-09-24T19:01:08Z"
+ content="""
++1 for a \"revert\" web plugin which at least handles the simple cases. -- Ian Osgood, The TOVA Company
+"""]]
OpenID, and see how OpenID works for you. And let me know your feelings about
making such a switch. --[[Joey]]
-[[!poll 66 "Accept only OpenID for logins" 21 "Accept only password logins" 36 "Accept both"]]
+[[!poll 66 "Accept only OpenID for logins" 21 "Accept only password logins" 37 "Accept both"]]
###Pretty Painless
I just tried logging in with OpenID and it Just Worked. Pretty painless. If you want to turn off password authentication on ikiwiki.info, I say go for it. --[[blipvert]]
+> I doubt I will. The new login interface basically makes password login
+> and openid coexist nicely. --[[Joey]]
+
###LiveJournal openid
One caveat to the above is that, of course, OpenID is a distributed trust system, which means you do have to think about the trust aspect. A case in point is livejournal.com, whose OpenID implementation is badly broken in one important respect: if a LiveJournal user deletes his or her journal, and a different user registers a journal with the same name (this is actually quite a common occurrence on LiveJournal), they in effect inherit the previous journal owner's identity. LiveJournal does not even have a mechanism in place for a remote site to detect that a journal has changed hands. It is an extremely dodgy situation which they seem to have *no* intention of fixing, and the bottom line is that the "identity" represented by a *username*.livejournal.com token should not be trusted as to its long-term uniqueness. Just FYI. --[[blipvert]]
+
+----
+
+Submitting bugs in the OpenID components will be difficult if OpenID must be working first...
> absolute urls that have been fixed since Brian filed the bug. --[[Joey]]
[[wishlist]]
+
+----
+
+[[!template id=gitbranch branch=smcv/https author="[[smcv]]"]]
+[[!tag patch]]
+
+For a while I've been using a configuration where each wiki has an HTTP and
+an HTTPS mirror, and updating one automatically updates the other, but
+that seems unnecessarily complicated. My `https` branch adds `https_url`
+and `https_cgiurl` config options which can be used to provide an HTTPS
+variant of an existing site; the CGI script automatically detects whether
+it was accessed over HTTPS and switches to the other one.
+
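As a concrete illustration, a setup file using these options might contain
something like the following (a hypothetical excerpt; the example.org URLs
are placeholders):

    url => "http://wiki.example.org/",
    cgiurl => "http://wiki.example.org/ikiwiki.cgi",
    https_url => "https://wiki.example.org/",
    https_cgiurl => "https://wiki.example.org/ikiwiki.cgi",
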
+This required some refactoring, which might be worth merging even if
+you don't like my approach:
+
+* change `IkiWiki::cgiurl` to return the equivalent of `$config{cgiurl}` if
+ called with no parameters, and change all plugins to indirect through it
+ (then I only need to change that one function for the HTTPS hack; a rough
+ sketch of that fallback appears below)
+
+* `IkiWiki::baseurl` already has similar behaviour, so change nearly all
+ references to `$config{url}` to call `baseurl` (a couple of references
+ specifically wanted the top-level public URL for Google or Blogspam rather
+ than a URL for the user's browser, so I left those alone)
+
+--[[smcv]]
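
To make the first bullet concrete, the no-argument fallback described there
might look roughly like this (a sketch based on the description above, not
the contents of the branch; the URL is a placeholder and the query-string
handling is simplified):

    use strict;
    use warnings;

    our %config = (cgiurl => "http://wiki.example.org/ikiwiki.cgi");

    sub cgiurl {
        my %params = @_;
        # With no parameters, hand back the configured CGI URL, so callers
        # no longer need to read $config{cgiurl} directly.
        return $config{cgiurl} unless %params;
        return $config{cgiurl}.'?'.join('&', map { "$_=$params{$_}" } sort keys %params);
    }

    print cgiurl(), "\n";
    print cgiurl(do => 'edit', page => 'index'), "\n";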