GET to do POST's job -- when is it okay? (micropledge.com)
4 points by benhoyt on June 13, 2007 | 9 comments



I have one state-modifying GET in my app - it's the password reset page, where the link goes out via e-mail and I can't trust e-mail clients to allow a form submission. It uses an unforgeable one-time-use reset code, so I don't need to worry about crawlers. Once someone has visited the page, the page becomes invalid.

If I were in your situation, I'd document.write out a link that submits the form via Javascript. Then, in the noscript tag, I'd put a normal unstyled Submit button. That way, Javascript-enabled browsers get the fully-styled link with a minimum of hacks (which tend to break on lots of edge cases). Javascript-disabled browsers degrade gracefully - they won't look as pretty, but the functionality is there.
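
A minimal sketch of that approach (the form action, id, and class names are made up):

  <form id="reset_form" method="post" action="/confirm_reset">
    <input type="hidden" name="code" value="ONE-TIME-CODE">
    <script type="text/javascript">
      // Javascript-enabled browsers get a styled link that submits
      // the form, so the actual request is still a POST.
      document.write('<a class="styled_link" href="#" ' +
          'onclick="document.getElementById(\'reset_form\').submit();' +
          ' return false;">Reset my password<\/a>');
    </script>
    <noscript>
      <!-- Graceful degradation: a plain, unstyled submit button. -->
      <input type="submit" value="Reset my password">
    </noscript>
  </form>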


Your password reset link should lead to a page with a button the user pushes to submit a POST that resets his password. Otherwise a user's password could be reset without his knowledge.


It comes from an e-mail sent to the address associated with the account, though. The only way he can get the reset code is to have it e-mailed to him, so unless he's forwarded the e-mail on to someone else (why would you do this?), the reset is coming from him.

I wouldn't put up a GET page where anyone could reset anyone else's password - that would be silly. (Though fun in a chaotic way...)

Edit: To clarify, the workflow goes like this:

1) Unauthenticated user visits forgot_password, enters either a username or an e-mail address, and submits a POST back to the system with those parameters.

2) POST handler generates a unique reset code and embeds it into an e-mail that's sent to the e-mail address associated with the account.

3) User clicks on a link in that e-mail and visits a GET page that resets the password to a random one and tells the user what it is. Also logs the user in.

4) There is a link on that page that lets the user change it instantly, so they don't have to use the randomly-generated password permanently.

A simple POST to reset the password doesn't work, because a "forgot password" link necessarily requires that the user not be authenticated (otherwise, they haven't forgotten their password ;-)). So you need the extra e-mail verification step so people can't change other people's passwords.
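
Step 2 hinges on the reset code being unguessable and single-use; here's a rough sketch of generating one (Node-style Javascript; storing and expiring the code is elided):

  var crypto = require('crypto');

  // 20 random bytes -> 40 hex characters: effectively unforgeable.
  // The code should be saved against the account and invalidated
  // after its first use.
  function makeResetCode() {
    return crypto.randomBytes(20).toString('hex');
  }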


Say Joe User has an account at your site, and Arnie Asshole knows Joe's email address, and he knows that Joe uses a web accelerator. So he goes to your site and puts in Joe's email address. Joe goes to look at the resulting email he gets in his webmail, and bam, his password has been reset without his knowledge, because his web accelerator followed the link in the email.

The workflow should be like this:

1) as in your example

2) as in your example

3) User clicks on a link in that email and visits a GET page. The GET page has a form with a single button ("Really Reset My Password" or something), or maybe it has a form right there for him to enter his new password.

4) He clicks the submit button, and the form submits right back to that same url, with the reset code embedded in the url, but this time it's a POST.

5) Your site's code detects it's a POST this time, and changes the password.

I was never suggesting you bypass the email verification step. Just add an extra screen between clicking the link in the email, and the password actually being reset.
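
In code, steps 3 through 5 might look roughly like this (a Node-style sketch; the route name and the resetPasswordForCode helper are made up):

  var http = require('http');
  var url = require('url');

  // Hypothetical helper: look up the one-time code, invalidate it,
  // and actually change the account's password.
  function resetPasswordForCode(code) { /* ... */ }

  http.createServer(function (req, res) {
    var parsed = url.parse(req.url, true);
    if (parsed.pathname !== '/reset_password') {
      res.writeHead(404);
      res.end();
      return;
    }
    var code = parsed.query.code || '';

    if (req.method === 'GET') {
      // Step 3: GET is safe. It only shows a confirmation form that
      // POSTs back to this same url, reset code still in the query.
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end('<form method="post" action="/reset_password?code=' +
              encodeURIComponent(code) + '">' +
              '<input type="submit" value="Really Reset My Password">' +
              '</form>');
    } else if (req.method === 'POST') {
      // Steps 4-5: the state change happens only on the POST, which
      // no accelerator or spam filter will issue on its own.
      resetPasswordForCode(code);
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Your password has been reset.');
    }
  }).listen(8000);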


> It comes from an e-mail sent to the address associated with the account, though. The only way he can get the reset code is to have it e-mailed to him, so unless he's forwarded the e-mail on to someone else (why would you do this?), the reset is coming from him.

So what happens when your fancy new spam filter follows the link in your email to see how spammy the page is? You end up locked out of your account, logged out, with a random password you can't retrieve.

It's really not hard to come up with scenarios where GETs are automatically performed. The HTTP 1.1 specification was written with this in mind. Assuming that it's not going to happen is simply an unnecessary risk.


Remember that the spec's concept of GET for read-only requests ("safe" methods, in the spec's language) and POST for requests that change data on the server is just a recommendation.

There are places where breaking convention makes sense, e.g. using a POST for a read-only query if you allow foreign-language (non-ascii character) parameters.

If your only objection to POST is stylistic, I'd try the javascript suggestion by nostrademons.

Also, if you're going to go down the slippery slope of javascript and into ajax, note that XMLHttpRequest will let you do either GET or POST just as easily, by changing the method parameter in the open statement: http://developer.apple.com/internet/webcontent/xmlhttpreq.html
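
For example (endpoints and parameters are made up):

  // Read-only query: GET, parameters percent-encoded into the url.
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/query?q=' + encodeURIComponent('caf\u00e9'), true);
  xhr.send(null);

  // State-changing request: switch the verb to POST and move the
  // parameters into the request body.
  var xhr2 = new XMLHttpRequest();
  xhr2.open('POST', '/reset_password', true);
  xhr2.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
  xhr2.send('code=' + encodeURIComponent('abc123'));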


In regards to ajax applications, is idempotence an important consideration? I use ajax/GETs to change server state; does this really matter?

inklesspen's objections to GETs changing server state seem to be based on web-accelerators following links. If web accelerators do not parse js, then inklesspen's objections are not relevant. Am I correct?


> In regards to ajax applications, is idempotence an important consideration? I use ajax/GETs to change server state; does this really matter?

Whether you use ajax or not, remember that all GET requests are formed, ultimately, into a url string.

So all the usual GET limitations apply, and there will always be cases where you wouldn't want to use GET (e.g., the list of parameters is very long, or the parameters contain non-ascii characters), regardless of whether ajax is involved.
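
To make that concrete, here's roughly how an ajax GET's parameters end up inside the url string (values are made up):

  // Every parameter must be percent-encoded into the url itself, and
  // the total length is capped by whatever limit the browser, proxy,
  // or server imposes.
  var params = { q: '\u65e5\u672c\u8a9e', page: '2' };
  var pairs = [];
  for (var key in params) {
    pairs.push(encodeURIComponent(key) + '=' + encodeURIComponent(params[key]));
  }
  var uri = '/search?' + pairs.join('&');
  // uri is now "/search?q=%E6%97%A5%E6%9C%AC%E8%AA%9E&page=2"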

> If web accelerators do not parse js, then inklesspen's objections are not relevant. Am I correct?

Yes, you're correct.

Because most of the accelerators/bots/spiders/etc. out there right now do not have javascript engines, they simply cannot click on an ajax GET link.

But don't be surprised if that changes in the future: some bot authors probably know they're being foiled by their inability to execute scripts, so I wouldn't be shocked to see javascript-capable crawlers clicking ajax GET links eventually.


> inklesspen's objections to GETs changing server state seem to be based on web-accelerators following links. If web accelerators do not parse js, then inklesspen's objections are not relevant. Am I correct?

No. GETs are defined to be safe by the HTTP specification. Web accelerators only get talked about so much because the problems GWA caused were so widely publicized. In reality, anything that sees the URI can conceivably cause problems.

By abusing GETs in the way that you are, you're essentially gambling that no software is going to see and follow your links automatically. Right now, software like that might not exist. But you never know what fancy Firefox extension, UserJS script or proxy magic somebody could release tomorrow.

When it's so easy to use HTTP properly, is it really worth taking that kind of risk?



