"User privacy is enhanced as the issuer does not learn which web application is making the request as the request is mediated by the browser."
Every web application nowadays sends you a welcome, onboarding, or reminder email after the verification. (No user privacy enhancement)
So we get a new process that solves nothing, but makes everything complicated. (And complicated helps the big and hurts the little in the long run)
Not verified, but this feels like a Google draft that closes the web.
The convenience advantage is significant, and it goes farther than convenience, since it’s very common for services to have their verification mail blocked or sent to spam. (Bonus pain: there’s no user-visible difference between delayed and blocked mail.)
The privacy advantage is also significant and real: no, not every web app sends an onboarding reminder, and the current state of web apps came to be without this functionality, so you can expect behaviour changes for those services that value the privacy, plus new services/authentication options to spring up that weren’t previously possible.
> it’s very common for services to have their verification mail blocked or sent to spam
So instead, there’s no verification mail and it’s the next message, the one that you actually wanted, that gets blocked or sent to spam.
The “privacy advantage” that the issuer can’t learn the identity of the application that wants to send mail seems to me to be a significant functional liability. If it instead produced a token that said to the email service provider “see, the message was invited”, now that would be useful. (It would raise concerns of its own, but it would at least be useful.)
Now THAT would be an interesting idea to implement... My Gmail matches my username, and I can't even begin to count the number of services, systems and people that don't understand how to get their own email address and have entered mine instead.
Example: you can place orders from MLB online without verifying your email, and then you get marketing emails regularly. In that case, I was able to call the very senior citizen who thought he could just use any address he wanted.
I can't remember the dating app that let someone sign up on mobile using my email address... I hijacked the account (password recovery) and changed the prompts to "I'm an idiot that doesn't know how email works." ...
> The privacy advantage is also significant and real
Depends which privacy: currently, if I input an email into xyz, no one can trust that this email belongs to me.
In the future, every email input could verify that the email belongs to me; that screams abuse, and more new things that try to fix the old.
Nowadays, email inputs are just plain inputs.
If they gain the ability to automatically verify an email address through JavaScript, there’s a high risk that this feature could be abused by scam or phishing sites.
I think there is benefit to this because folding some identity primitives into the browser helps the user (in UX, in security). This was certainly true of password managers.
The other comments talk about how you will need to have a fallback. That is certainly true. But just because you have to have a fallback doesn't mean you can't improve things.
> Every web application nowadays send you a welcome, onboarding, reminder after the verification. (No user privacy enhancement)
But would they need to if they could trust info coming from the browser?
I thought this initially; the privacy thing looks like a non-issue and is confusing. But the advantage is stated in the preceding paragraph: the user doesn't need to leave the signup flow and doesn't need to open their email.
The auth mechanism flows through cookies. Assuming the email provider offers a webmail interface and the user is signed in, this could be seamless, although I'm not certain the cookie could be safely read cross-site without risk or without being blocked by the browser.
It wouldn't be simple to implement, but not impossible, and it sounds like it would cost nothing to the user; it could work behind the scenes. Like, as a user you are logged in to Gmail or Zoho Mail in your browser. You sign up for another service and you don't get a confirmation email, just a welcome email. No fucks are given, it just works.
Mobile does this with autofilling auth codes sometimes with sms, so there's precedent.
Congrats OP the idea looks feasible. I'm usually the ackshually guy looking for the nitpick, but it looks nice. Will check the technicals later, cause the devil is in the details.
I can't tell you how many times email verification context switches made me completely lose track of what I was doing.
There's literally no worse context switch than having to go into your inbox, wait for an email, then come back to the appropriate tab to complete registration or login.
There are probably dozens, maybe hundreds, of services I never finished registering for all on account of this problem.
I worked authc/authz and security for a large fintech and we constantly butted heads against the growth folks. They fought hard and eventually won the right to do account creation and IDV without email verification. You don't have to verify your email until you're already making transactions, and that does wonders for growth. We're still accountable for all the stringent KYC regulations, of course.
What's worse is that the email is often delayed at the sender (cheap bulk email services) or the receiver (greylisting), but for no reason I can fathom they have a short expiration date.
What's worse, they are often unique AND delivered out of order AND have no timestamp or sequence number. So you get to guess which is the newest, since using any other fails, and the one that would succeed often times out before it can be used.
Having an expiration date as short as 15 minutes seems insane and counter productive.
And when a customer fat fingered their email address and that fintech company didn't bother verifying email addresses, policy probably prohibited granting a request from the email address owner to remove their address from the account because they're not the financial account owner. Fortunately for that company, financial institutions seem to avoid Gmail's spam filter no matter how many times I mark those emails as spam.
> There's literally no worse context switch than having to go into your inbox, wait for an email, then come back to the appropriate tab to complete registration or login.
Then maybe it's something the customer isn't interested in in the first place.
Most of the time mail just works for me; the only issues are occasional greylisting, and then it takes hours.
I can understand it from the company side, but I'm not sure how well it really works when someone uses a mail app on mobile and on desktop isn't even logged into the mail account.
Cool, so if I want to use myname+yourdomainname.here@myemail.com to register on your application I now first have to go to some third party(?*) to verify that myname+yourdomainname.here@myemail.com is valid**. And then, once I've gone through the hassle of that, I have to go back to your website to use the third party service to verify my email. Thanks I guess...
* It's not clear if this service would be provided by a third party (in which case, the problem has merely just been moved) or the email provider. It sounds like the former, but in case it's the latter, then this doesn't have as big an impact I guess.
** While _I_ as the owner of an email address can decisively know that all emails of the form `myname+<whatever>@myemail.com` will go to me, you as the owner of a website attempting to verify my email cannot know that. The standards specify that + is valid in an email user part, but they do not require plus addressing to work.
On second glance, the validator is dictated by the domain owner so this falls into the "in case it's the latter, then this doesn't have as big an impact I guess" category.
I'll put this on the backlog of things to implement if I'm incredibly bored and want to weaken the security of my infrastructure.
What is your first paragraph referring to? This whole standard is trying to eliminate the context switching that happens when a website wants to verify your email.
Perhaps you mistook the two bullet points outlining what currently happens as goals for the standard?
The new standard relies on some possibly third party (at least that seems somewhat implied here) which has a database of email addresses which it can attest exist and which is tied to some user authentication.
If the email address isn't yet known to this third party (or, you are not logged in), there _will_ be a context switch which in my example case will occur for every registration since I use a per-entity email address.
+ is not a unique convention. I have dynamic rewrite sets so that all mail with specific prefixes goes to my mailbox:
<my initials>-site-<companyname>@<my domain> go to my personal mailbox
<my partner's initials>-app-<appname>@<her domain> go to my partner's mailbox
<daughter initials>-account-<entity name>@<my domain> go to my daughter's account
Sure, you could in theory set up the server-side verification mechanism for these patterns too. I am just stating that the +suffix stuff is not the only approach in use.
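As a rough illustration only (the initials, patterns and domains here are invented, and a real setup would live in the MTA's alias/rewrite tables rather than application code), the routing boils down to something like:

```python
import re

# Invented patterns/domains, illustrating prefix-based routing to different mailboxes.
ROUTES = [
    (re.compile(r"^ab-site-[\w.-]+@example\.org$"), "me@example.org"),
    (re.compile(r"^cd-app-[\w.-]+@partner\.example$"), "partner@partner.example"),
    (re.compile(r"^ef-account-[\w.-]+@example\.org$"), "daughter@example.org"),
]

def route(recipient: str) -> str | None:
    """Return the real mailbox for a rewritten address, or None to reject it."""
    for pattern, mailbox in ROUTES:
        if pattern.fullmatch(recipient.lower()):
            return mailbox
    return None
```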
You are using a workaround for your privacy and to prevent spam (and it's not solid at either).
The protocol proposes to alleviate a UX burden. The back and forth.
it would need Google (and other email providers supporting the + trick) to allow you to certify your ownership of a wildcard set of email addresses, i.e. anything matching what's before the +, and the protocol would work just the same. Absolutely reducing some friction without adding the extra burden your trick currently involves.
> You are using a workaround for your privacy, and to prevent spam (not solid at either).
Neither, I do it so I can track which companies sold my email address on without my permission so I can put them on my shit list / report them to my government / shame them on the internet / whatever.
> The protocol proposes to alleviate a UX burden. The back and forth.
That seems to be _one_ aspect but that assumes you're logged into whatever email verification provider is in use.
> it would need Google (and other email provider supporting the + trick) to allow you to certify your ownership of a wild card set of email addresses, i.e anything matching what's before the + and the protocol would work just the same. Absolutely reducing some friction without adding you the extra burden your trick currently involves.
You assume that it's the email provider which has to implement this, which isn't so clear to me.
Only the email provider can attest that + addressing is in place, if a third party is involved, they can only explicitly match on full email addresses.
Like I said in my original comment, if it's the email provider that has to implement this, then the bulk of my issue is gone. Aside from the fact that now, as my own email provider, I have to implement this protocol somehow (easier said than done, given that my current infrastructure approach is aimed at moving as many things as possible into a non-internet-facing network).
I personally use a different, but still perfectly compliant suffix character. So just simplistic +suffix filtering isn’t complete. I’ve also considered using a double suffix and having the first one be required, so if someone cut off all suffixes it would go into junk anyway.
Regexing them out will break mail deliverability if the mail system doesn't do plus addressing. And you cannot know from a third party perspective if someone just likes putting plusses in their email address or if the plus is for plus addressing.
I make a point of telling companies that the point of +yourdomainname on email addresses is to avoid having their email and newsletters go through the extra-aggressive and strict spam filtering that occurs without the +appendix. They as a company benefit from better delivery and lower support costs, and I enjoy the accountability of knowing who is using my email address. It is a nice win-win solution for both.
Ages ago we intentionally configured MTAs to prevent enumeration and validation of email addresses. This appears to be a convoluted way to unwind that change, and in my opinion it would be heavily abused by shady email marketing groups on day 1. With all due respect, I would never implement this in a company and would fight it. I choose my battles carefully before presenting them to the board until groups such as NCC [1] have reviewed the implementation concepts and details. All it would take is one poorly coded application using this incorrectly for it to be abused, i.e. the devil is in the implementation details, otherwise known as the weakest link. Having NCC validate every single implementation is going to get very expensive.
The ideas proposed in here aren't bad, but it does seem like you'll need to maintain two user flows as a site owner because:
1) Not all email providers will implement this, and
2) Users may not be signed into their email at the moment they signup
As a developer, I would find it easier to have one "verification code" flow for all users rather than fragmenting the process; it's much easier to document for your support staff. Again, not a bad proposal but perhaps not very useful in practice.
I thought Mozilla Persona aka BrowserID handled this email validation well with a fallback provider that used the same flow (and also implemented the OIDC work for obvious existing social providers like Gmail/Google Accounts). Though obviously not well enough, because that fallback provider was seen as a large expense and shut down without a replacement, killing the Mozilla Persona effort.
But that does relate to I keep wanting an email claim for Passkeys. A user's browser/OS could verify an email address once and then associate it with a Passkey. Passkeys might be a good place for that (as Persona/BrowserID suggested). Obviously some browsers could lie about verifying the email address in the claim and there might still need to be more steps to it, but if you are already taking Passkeys it doesn't necessarily add an entirely different flow to accept a verified email claim from a Passkey (and/or decide you don't trust that Passkey's claim and trigger your regular verification code flow).
* It's lowering the friction to the site identifying the user (separate from the identification done now by the more sophisticated third-party tracking by surveillance companies like Google and Meta), even for sites that previously couldn't justify the friction of trying to do that.
* It's putting surveillance companies even more in the loop, building on the recent "log in with [surveillance company]" buttons, while existing login methods are destroyed through dark pattern practices or simply removed.
* It can be a ready-made platform, waiting for the next authoritarian government directives that say, now that everyone is hooked up or can easily be hooked up, turn on oppressive feature X, Y, or Z for all targeted Web sites/people.
I don't know if this is the solution, but we desperately need one. It's to the point where "email bombing" is forcing service providers to add captchas to login and registration because those forms are being abused as mail-flooders.
One other problem is there isn't a way to definitively know that a given OIDC provider is authoritative for a given email. Although, this spec could probably be simplified by just having a DNS record that specifies the domain to use for OIDC for emails on that domain.
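Something like the following, purely hypothetical, record and lookup is what I mean (the `_oidc-issuer` name and TXT format are invented here; nothing like it is standardized):

```python
import dns.resolver  # dnspython

# Hypothetical record, e.g.:  _oidc-issuer.example.com. TXT "https://login.example.com"
def issuer_for_email(email: str) -> str | None:
    domain = email.rsplit("@", 1)[-1]
    try:
        answers = dns.resolver.resolve(f"_oidc-issuer.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        value = b"".join(rdata.strings).decode()
        if value.startswith("https://"):
            return value  # treat this as the issuer for the whole domain
    return None
```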
Another is that there is a lot of variance in OIDC and OAuth implementations, so getting login to work with any arbitrary identity provider is quite difficult.
I wouldn't mix OAuth and OIDC up when thinking about this. OAuth is a chaotic ecosystem, but OIDC is fairly well standardized.
OIDC actually does have a discovery mechanism standardized to convert an email address into an authoritative issuer. Then, it has a dynamic registration mechanism standardized so that an application could register to new issuers automatically. Those standards could absolutely be improved, but they already exist.
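For the curious, the email-to-issuer step is the WebFinger lookup from OpenID Connect Discovery 1.0; roughly this (Python sketch, error handling and caching omitted):

```python
import requests

REL = "http://openid.net/specs/connect/1.0/issuer"

def discover_issuer(email: str) -> str | None:
    """OIDC Discovery: ask the email domain's WebFinger endpoint who the issuer is."""
    domain = email.rsplit("@", 1)[-1]
    resp = requests.get(
        f"https://{domain}/.well-known/webfinger",
        params={"resource": f"acct:{email}", "rel": REL},
        timeout=5,
    )
    if resp.status_code != 200:
        return None
    for link in resp.json().get("links", []):
        if link.get("rel") == REL:
            return link.get("href")  # the authoritative issuer URL
    return None
```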
The problem is that no one that mattered implemented them.
If you want to get anywhere with something like this, you need buy-in from the big email providers (Google, Microsoft, Yahoo, and Apple) and the big enterprise single sign-on providers (Ping, OneIdentity, and Okta). All of those companies already do OIDC fairly well. If they wanted this feature to exist, it already would.
Instead, it seems like big tech is all-in on passkeys instead of fixing single sign on.
It's more of an invisible feature than a protocol.
The signup protocol and user flow is the same if the feature is supported or not. You just skip a step if the convenience feature is supported.
With SSO the user is inconvenienced with an additional option at sign up and login, and there's the risk of duplicate accounts. Also stronger vendor lock in.
Additionally, some corporate or personal policies might prefer to NEVER use SSO, even if it is sometimes accepted. I hate being presented with option to login with email or login with Google, and I don't know which I signed up with.
God forbid I accidentally make an account with SSO and another with email but the same email. I'd rather just always use email, it's supposed to be a convenience, the advantages are lost when it goes south once
The Verified Email Protocol got renamed to BrowserID, and Persona was its reference implementation.
This looks broadly similar to that, but with some newer primitives (SD-JWT) and a focus on autocomplete as an entrypoint to the flow. If I recall correctly, the entire JOSE suite (JWT, JWK, JWE, etc.) was still under active iteration while we were building Persona.
And hey, I applaud the effort. Persona got a lot of things right, and I still think we as an industry can do better than Passkeys.
I haven't managed to formulate the exact issue yet, but if I squint, I swear there's a path to track and/or deanonymize someone visiting your site. If you have any kind of previous information about the user, such as Meta, or Google or etc, you could easily try and see if the user holds any number of emails you think they might hold. From there on out we're practically back to third party cookie tracking.
The key mitigation is that the protocol - as envisioned - is mediated by the user agent; you as a website cannot silently fire off probes that tell you anything.
This could easily be done by malicious JS, an ad script, or the website itself, which then, as the RP, gets the output of 6.4: the email and email_verified claims.
I'm guessing that this proposal requires new custom browser (user-agent) code just to handle this protocol?
Like a secure <input Email> element that makes sure there is some user input required to select a saved one, and that the value only goes to the actual server the user wants, that cannot be replaced by malicious JS.
You'd have to make an authenticated cross-origin request to the issuer, which would be equivalent to mounting a Cross-Site Request Forgery (CSRF) attack against the target email providers.
Even if you could send an authenticated request, the Same Origin Policy means your site won't be able to read the result unless the issuer explicitly returns appropriate CORS headers including `Access-Control-Allow-Origin: <* or your domain>` and `Access-Control-Allow-Credentials: true` in its response.
Browsers can exempt themselves from these constraints when making requests for their own purposes, but that's not an option available to web content.
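To make that concrete, here is a sketch (Python/Flask, with an invented endpoint and payload) of the headers an issuer would have to deliberately send before any other origin's page could read a credentialed response; absent them, the browser withholds the body from the calling script:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.get("/whoami")  # invented endpoint, for illustration only
def whoami():
    resp = jsonify(email="user@example.com", email_verified=True)
    # Without both of these opt-in headers, cross-origin JS never sees the body.
    resp.headers["Access-Control-Allow-Origin"] = "https://relying-party.example"
    resp.headers["Access-Control-Allow-Credentials"] = "true"
    return resp
```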
> I'm guessing that this proposal requires new custom browser (user-agent) code just to handle this protocol?
Correct; which is going to be the main challenge for this to gain traction. We called it the "three-way cold start" in Persona: sites, issuers, and browsers are all stuck waiting for the other two to reach critical mass before it makes sense for them to adopt the protocol.
Google could probably sidestep that problem by abusing their market dominance in both the browser and issuer space, but I don't see the incentive nor do I see it being feasible for anyone else.
I largely agree, but I still think there's a compelling argument that blinding the issuer implicitly precludes API gatekeeping or censorship. Sites wouldn't need to pre-register with any issuer, nor could the issuer refuse to provide tokens on the basis of where they'll be used.
This is sort of missing the point of email verification. It's to test that the email from this particular site is deliverable and visible to the user, not just that it's a legitimate address known to work by some third party.
A user may make a typo in the email, and that email might still be a valid address known to work (but for another, unrelated person). The user's email agent (such as Gmail or Outlook) can mark the email as unimportant and make it hard to notice, or even mark it as spam. All these issues are much better to find and iron out before the user finds themself unable to communicate, or successfully bound to an email they cannot access.
The whole point of email verification is to make certain that a channel of alternative communication exists for a case when the user is unable to identify themself normally, for whatever reason. A working email alone is not always sufficient for a successful credentials reset, but it almost always makes it much easier when the user has it.
> A user may make a typo in the email, and that email might still be a valid address known to work (but for another, unrelated person).
That won't verify. The issuer should check whether the request has valid session cookies for the email address that should be verified. This also implies that it just won't work for any service that uses sessions with a short timeout.
"There are privacy implications as the email transmission informs the mail service the applications the user is using and when they used them."
Not really, as I can enter any email on a service login page that uses magic links for auth. The owner of that email will receive the login link but that doesn't mean they tried to login on that system.
Not really indeed. You're right that false positives are possible with such a system, but false negatives are not. That means that you're leaking information about when a user didn't use a service, as well as partial information about when they did (which you could combine with other data to tell you something meaningful).
Is there a nonce relay vulnerability here? You try to verify your email with site A. Site A starts an email verification with site B. Site B sends a nonce to A, A relays the nonce to the user. The user generates the proof, sends it to A. Then A sends it to B.
Skimming that I'm thinking yes, sure, why not, but this repo is missing useful context. Who are you, authors? Why should I bother learning this protocol? Is anyone using or going to use this? If it's new, has it been shopped around at conferences? Any related research?
And specifically Sam Goto (Google, FedCM) and Dick Hardt (Hello, OAuth 2 spec writer).
This was originally thought up a couple (5-6) years ago alongside FedCM and Privacy Sandbox, but before SD-JWT was fully baked, so it wasn't as clean. The use of SD-JWT is much better for privacy.
Hard to see how this provides substantial benefits over OIDC. Either one requires support from the email provider, but one is already standardized and has widespread support.
Well, the problem is simply user base. There is no point in being a provider if you have 100 users. On the other hand, despite OIDC being standardised, there are way too many ways of implementing it. It is essentially impossible to have "wildcard" support for OIDC providers. How do I know? I just implemented one myself. For example, providers usually support only one or very few authorisation flows, so in reality you would likely end up with a lot of failed attempts to sign up with some "3rd world" provider.
PS: just take PKCE where the provider has no way of communicating whether it is supported, or required, at all.
I have just added OIDC support for bring-your-own-SSO to our application, and it wasn’t as bad as you make it sound: As long as the identity provider exposes a well-known OpenID configuration endpoint, you can figure it out (including whether PKCE is required or supported, by the way!)
The only relevant flow is authorisation code with PKCE now (plus client credentials if you do server-to-server), and I haven’t found an identity provider yet that wouldn’t support that. Yes, that protocol has way too many knobs providers can fiddle with. But it’s absolutely doable.
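For anyone curious what that looks like, the whole discovery step is roughly this (Python sketch; the issuer URL is illustrative):

```python
import requests

def provider_metadata(issuer: str) -> dict:
    """Fetch the provider's published OpenID configuration."""
    url = f"{issuer.rstrip('/')}/.well-known/openid-configuration"
    resp = requests.get(url, timeout=5)
    resp.raise_for_status()
    return resp.json()

meta = provider_metadata("https://accounts.example.com")  # illustrative issuer
supports_pkce = "S256" in meta.get("code_challenge_methods_supported", [])
auth_endpoint = meta["authorization_endpoint"]
token_endpoint = meta["token_endpoint"]
```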
I didn't say it was impossible, just impractical, and that is why the majority of services that use SSO only support Google, Apple, Twitter or Facebook. You write the code specific to these few providers once and are done with it for good. There is little reason to invest time and money in adding generic support for other providers. It's just the way it is. If the OIDC protocol were streamlined a bit, we could easily have universal support. But then again, these big providers would likely be stuck on the current version and not bother adjusting to the new, simpler version, if it were to come to be.
With metadata endpoint, things become much easier, that is true.
Though how would you implement it? Like, a user comes to your website and wants to sign in with some foo.bar provider; do you force the user to paste in the domain where you go look for the metadata? What about Facebook or Google, do you give them special treatment with prepared buttons or do you force users to still put in their domains? What about people using your flow to "DDoS" some random domain...?
FedCM offers some hope here, where the browser gets some capability to announce the federation domains to the RP. It's not straightforward though, of course. In this case it's inverted: you are providing the URL of the MCP server, and the MCP server is providing the URL of an authz server it supports. The client uses the metadata lookup to know whether it should include PKCE bits or not.
This isn’t fundamental to its design, though. It’s a result of providers wanting to gate access to identities for various reasons. The protocol presented here does nothing to address this gating.
With DCR (dynamic client registration) you can have an unlimited number of providers. Basically, just query the well-known endpoint and then use regular OAuth with a random secret.
There's also a proposal to add stateless ephemeral clients.
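Sketch of that flow (Python; the provider URL and client details are illustrative, and a confidential-client variant would get a client_secret back from the same call):

```python
import requests

# 1. Read the registration endpoint from the provider's metadata.
meta = requests.get(
    "https://accounts.example.com/.well-known/openid-configuration", timeout=5
).json()

# 2. Register dynamically (RFC 7591); the fields below are illustrative.
registration = requests.post(
    meta["registration_endpoint"],  # absent on providers that haven't enabled DCR
    json={
        "client_name": "Example RP",
        "redirect_uris": ["https://rp.example/callback"],
        "grant_types": ["authorization_code"],
        "token_endpoint_auth_method": "none",  # public client + PKCE
    },
    timeout=5,
).json()

client_id = registration["client_id"]  # then proceed with the normal code flow
```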
DCR is cool, but I haven't seen anyone roll it out. I know it has to be enabled per-tenant in Okta and Azure (which nobody does), and I don't think Google Workspace supports it at all yet. It's a shame that OIDC spent so long and got so much market-share tied to OAuth client secrets, especially since classic OpenID had no such configuration step.
This is because the MCP folks focus almost entirely on the client developer experience at the expense of implementability
CIMD is a bit better and will supplant DCR, but overall there's yet to be a good flow here that supports management in enterprise use cases.
This is sorta interesting, but it fails on several levels. First, email verification as it exists currently is fairly simple, there are a lot of different ways to do it, and it works universally for all email addresses (as long as you don't expire codes too fast for servers that use greylisting).
This protocol solves a pretty contrived problem ("By sending the email verification code, the inbox provider knows the user is using that service!") by making email verification exponentially more complex, with only one correct flow, and will only work for domains that have opted in and configured this protocol.
Importantly, the protocol seems to rely on 1st party web cookies, which means you could no longer run a "pure" MTA that offers IMAP; you would need to have some web interface where your users can log in, even if there is no webmail functionality.
The bigger question is: why would the company who is hosting the email have any economic incentive to invest time and money in implementing and maintaining this protocol which currently has zero adoption? It's a chicken-and-egg with no upside.
> This protocol solves a pretty contrived problem ("By sending the email verification code, the inbox provider knows the user is using that service!")
I agree with a lot of what you are saying, but I think the main motivation is actually trying to reduce friction for the user to verify their email, which is good for the user, because it makes registration easier, and good for the company, because fewer users bounce at the email registration step.
But yeah, this is quite complicated, and there isn't a lot of motivation for email providers to implement it.
If my memory serves, this is the same wolf in sheep’s clothing that the attestation based Web Environment API was, from the same kinds of very interested parties. (Edit: I may be misremembering the name of that proposed API.)
It’s not about efficient, effective solutions. It’s about control. Something you have to look at with WICG and W3C is the source of proposals and drafts.
To be honest, I am kinda wondering why mail servers do not publish, on some HTTP service:
- whom they accept mail from, and under which conditions
- who's blocked and why
- perhaps hashed-and-salted email addresses for verification (rough sketch below)
- how much spam (as the receiver understands it) happened from where
- that you can produce tokens with hashcash, so unknown senders can verify themselves with that, per mail/receiver
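For the hashed-address idea, I mean something like this (purely speculative, nothing standardized; note that a public salt still allows offline guessing of likely local parts):

```python
import hashlib
import secrets

salt = secrets.token_hex(16)  # would be published alongside the digests

def digest(address: str) -> str:
    return hashlib.sha256((salt + address.lower()).encode()).hexdigest()

published = {digest(a) for a in ["alice@example.org", "bob@example.org"]}

def address_exists(candidate: str) -> bool:
    # A sender can check a recipient without the server handing out its user list outright.
    return digest(candidate) in published
```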
I'm creating an email/messaging protocol to solve spam, which is a different problem from verifying sign-ups to stop spam sign-ups as discussed in this thread, but it is relevant for people in this thread interested in the issue of spam. It is directly compatible with email/messengers, including all of the large email providers, has a low false-negative rate compared to current spam filtering, and is free for senders.
Check this profile for the email if you wanna ask for more info or get updates.
Many applications need a way to contact a user (security breach, password reset). If one only has a username and forgets the password, there's no way to re-verify the user.
There are many ways to re-verify the user if they forget a password. Some may even be more secure than sending an email. Simplest is a set of single-use reset codes that could be generated at signup or later on, like the ones to remove 2FA.
In the case of security procedures, I'd argue that there is some room for tough beans. Reducing security to cater for carelessness seems like a really bad compromise to me, one that I see far too often.
This is an absurd position, and potentially illegal - for paid services.
You have a business relationship between the company and a person. Whether that person remembers the password or not is immaterial to whether they have the legal right to anything they purchased in the app.
> Many applications need a way to contact a user … password reset
At this point the password is pointless, you might as well just use the email address. Or perhaps a distinct username and email address, but then there would probably be a “forgot username” workflow making that as pointless as the separate password.
* prevent signing up for someone else (validate it is you who owns the email)
* poor man's MFA, although please allow me to use TOTP instead (probably the three most legitimate reasons from a user perspective; email validation prevents you from making a typo)
* send ads and notifications (legitimate from the provider's perspective, they want campaigns to succeed, and email validation makes them sure emails land)
Most people want a way to recover their account if they lose those creds, especially when you ask them once they’ve lost their creds.
It’s also a rudimentary PoW system against bots. And people who don’t want to share their email can use a temp email service, so it’s no skin off their back.
> And people who don’t want to share their email can use a temp email service, so it’s no skin off their back.
That seems to be a better option for bots than for actual users: if you care about the account, you probably would not want to make its password resettable via a service like that. Or even via a regular email provider you do not trust, and those could easily be the only kind available.
Ultimately this is akin to password requirements. They are a bother but the average user is just much too careless to be trusted with their own security.
Weird that no one said this yet: To verify users' legitimacy. If you make effort to block 10 minute email services it works kinda well and slows down bots
Without traceability, any app that can be used for abuse will be. (An HN reader used an anonymous mail service to send me some hate speech and tell me to kill myself within the last day. The service they used to do it obviously does not care, but also cannot do anything about it, because they don't know who used their service to do it.)
The onus is on you here… but, I think I know where you’re going with this. In terms of number of email addresses people have and use, vs number of usernames people have and use, you might be right that some people have 1 or 2 email addresses and many usernames.
Email masking has become easier to use, and many people use `+addressing` to uniquely tie their email to the service for spam prevention / tracking, which would make stuffing harder.
In these cases, email would be much more unique and a better protection against stuffing. HOWEVER, it’s not obvious how Email verification protocol would work for these types of things.
Credential stuffing happens when a user signs up on Website B with account information matching the information they used when setting up their account on Website A, and the operator of either Website A or Website B can use those credentials to access the user's account with the other operator.
If websites authenticate with username and password combo chosen by the user, then credential stuffing is neutralized if the user avoids re-using the same combo, effected by the user selecting at least one of a different password or the selection of a different username.
If instead of a username, an email address is required to register, that generally results in one less degree of freedom; rather than being able to create a username with Website B that differs from the username they created on Website A, absent the use of a wildcard/catch-all mailbox or forwarding service (which are not straightforward to set up, and almost nobody has one), the user is required to disclose an existing email address.
(It also increases the surface area for attacks, since the malicious website, now knowing the user's email address, can attempt credential stuffing with the user's email provider itself.)
You can balk at whether or not these are negligible differences, but it's non-zero. Therefore, all other things held equal, then strictly speaking it is more robust.
> If instead of a username, an email address is required to register, that generally results in one less degree of freedom [...]
It "generally" doesn't, because the average user isn't randomly generating usernames per-site, just like they're not randomly generating passwords per-site. If they're randomly generating usernames per site, they'll need some sort of system to keep track of it, which is 90% of the way to using a password manager (and therefore randomized passwords, immune to credential stuffing). For it to practically make a difference, you'd need someone who cares about security enough to randomize usernames, but for whatever reason doesn't care enough about security to randomize passwords.
To start with, randomly generated usernames weren't mentioned, and they are not a prerequisite.
> It "generally" doesn't, because the average user isn't randomly generating usernames per-site
What other people do, whether average users or not, doesn't matter. When average user Alice is registering accounts on Websites A and B, the fact that average user Bob doesn't use different usernames for his accounts doesn't change the fact that if Alice would have otherwise registered account agirl on one site and pie_maker26 on the other, but instead has been forced to enter her email address, then that has a non-zero effect on risk.
For the claim as stated to be untrue, the difference in risk would need to be zero.* But it isn't zero. The claim as stated is true.
> For it to practically make a difference, you'd need someone who cares about […]
That's not true. Users who are exposed to lower risk by accident are still exposed to lower risk. It's not a prerequisite for the user to care at all, nor does it require them to understand any of this or to be trying to adhere to any particular scheme to achieve a certain outcome. The only thing that matters is what they're doing—and whether what they're doing increases or decreases risk. Intent doesn't matter.
* or it would need to be somehow less risky when email addresses are required in place of where a username otherwise would be, but that's not the case, either
> To start with, randomly generated usernames weren't mentioned, and they are not a prerequisite.
I've seen sites randomly generate passwords for users as well. Does that mean users reusing their passwords at all is a prerequisite? Moreover if we're really accepting "whether average users or not, doesn't matter", I can also say that using emails doesn't decrease security because you can use randomized emails, as others have mentioned. At some point you have to constrain yourself to realistic threat models, otherwise the conversation gets mired in lawyering over increasingly implausible scenarios. For instance, by asking for emails at registration, you can more easily perform 2fa, whereas you can't do that with only a username/password combination[1].
[1] before you jump to say "but can ask for an email with username/password too!", keep in mind the original claim that username/password is better was in response to a comment asking "Why must apps require email?".
Surprising proposal. Normally I'd review the credentials of the authors but it's late Sunday night so nevermind.
I like the idea in general - an OIDC-like flow without needing any a priori setup. But, the RP has only a signed token with the pubkey in DNS, so this doesn't prove anything about the user unless the RP also verifies against some trusted and known email providers. This is absolutely awful for the Internet and makes sure power stays concentrated. PLEASE don't let this become a thing.
Second, this doesn't improve privacy. Most RPs will send an email right at signup, or soon thereafter. Thus the email provider does learn of the individual's association with that web application.
A last issue that's immediately obvious, is that you have to use a webmail interface.
In extension of that spirit, some spam could be eliminated if more people would turn on address verification in their SMTP servers, which makes the delivery peers symmetric.
Do you mean source or destination address verification or both?
Source address verification doesn't really mean anything (no-reply@example.co.uk), and destination verification is obvious and, as far as I am aware, pretty much everyone does it already.
With source address verification (and server validation) it is guaranteed that the mail comes from the server that controls the sender's mail address and that this address does indeed exist. By symmetric I mean that both servers then resolve each other the same way, both check whether their side's mailbox exists, and they share the time during which this happens, so you can't use it for DoS, since it takes your time as well.
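A sender-address callout is roughly the following (Python sketch; real verifiers add caching, rate limits and lenient handling of ambiguous answers, and servers configured like the catch-all setup described elsewhere in this thread will accept everything anyway):

```python
import smtplib
import dns.resolver  # dnspython

def sender_exists(address: str) -> bool:
    """Ask the sender domain's MX whether it would accept mail for this mailbox."""
    domain = address.rsplit("@", 1)[-1]
    mx = sorted(dns.resolver.resolve(domain, "MX"), key=lambda r: r.preference)
    host = str(mx[0].exchange).rstrip(".")
    with smtplib.SMTP(host, 25, timeout=10) as server:
        server.ehlo()
        server.mail("")                   # null envelope sender, as bounces use
        code, _ = server.rcpt(address)
        return 200 <= code < 300          # 250/251 = mailbox accepted
```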
I am a little sad the original pretty interesting FedCM work got reduced to this. There was some neat work underway to allow using identity providers without the site even knowing the provider! https://github.com/w3c-fedid/FedCM/issues/677
On the rare occasions where I would care about this as a user, I make a throwaway account on an anonymous service. If I don't want my email service to know I have an account with you then I don't trust you to handle my main address either.
> Verifying control of an email address is a frequent activity on the web today and is used both to prove the user has provided a valid email address
LOL WUT??
This is also ideal in “war dialling” eMail servers to get accurate lists of what eMail accounts exist on said server. This has been the case since marketing first hit the Internet.
Do you really want all of your legitimate eMail addresses to end up on spam lists? Because this is how you get complete and unabridged lists of your domain’s valid eMail addresses onto spam lists.
It’s why my own eMail server is set up to quietly confirm and accept any and all eMail sent to the domain - regardless of username employed. Even invalid eMail accounts get confirmed and incoming eMails to them get accepted.
Anything not sent to a valid account then drops into a catch-all account for further processing. Occasionally I’ll get eMail where the username was misspelled - it happens - and I just forward it to the appropriate family member.
The rest get reported as spam. And I enjoy making every last report. Enjoy ending up on a blacklist.
Ha! Thank you, I misunderstood who was behind this proposal but since it's W3C it's something that would directly be implemented by the browser itself.
I have a couple of problems with this, although kudos for the author and I won't dismiss this project's usefulness or value.
1) Email shouldn't be used for this purpose. It is inherently insecure. Many have tried, you won't succeed.
2) The subject line of the email should not contain verification details (code), it shouldn't even imply the content of the email. "A secure message from <insert site>" is enough.
3) The device receiving the verification message is often not the same device that initiated the process. It is very important that users are able to easily type out the code in the webapp, instead of what many do: require a link to be opened.
4) Alright, use email, but don't treat it as a special or absolute means of contacting users. The whole "contact user" aspect should be abstracted to a point. Any messaging app that the user would like to use should be usable. There are dozens of them, and all of them should be abstracted from the webapp. Managing API keys and integrations sounds like a nightmare, and that is one big reason no one is doing it. But again, that's my gripe; this is a solvable problem, services and libraries to make it easier should exist, and where they don't, the developers of the application should take on the costs associated with supporting them. Maybe not dozens but a handful of messaging protocols, based on target audience, could be used (e.g. Signal, WhatsApp, WeChat, VK, Telegram, Bluesky, Twitter) - 7 API keys to rotate once every few months and you've just made billions of potential users happy!
5) Perhaps the problem is a lack of a "secure address resolution layer" to messaging? Without requiring api keys and all of that, it should be possible to resolve the address of a recipient, encrypt a message to them, using their public key, and simply send it. Messaging apps should support a standard protocol of receiving external messages this way. The protocol should also allow including a "reply" address?
I didn't say that, you added that part. It is used for auth; it isn't secure.
Email is less secure than SMS, unless you encrypt your email (even then..). With email, there are multiple middle parties that can just read the message. Forget malicious insiders, it is more than reasonable to assume at least one MTA out there is compromised. Mail server CVE's aren't that rare.
Furthermore, despite email being used for auth, as you correctly claimed, email clients aren't secured like authentication applications or password managers are. For most people, a compromise of their email account means a compromise of most of their other accounts.
Even furthermore, not only is email used for authentication, email is being used to revoke, reset and tamper with other authentication methods and account security in general. You don't just log in to apps via email; your password, MFA, account changes, etc. can all be done by someone controlling your email (and, more and more, your phone number/SIM these days).
End to end encryption is all the rage on sites like HN, but I'm shocked when those same people have no problem using email for sensitive operations.