One thing I've always wondered is how security researchers feel justified in releasing tools like the one in this blog post to the public. I can almost certainly say that the bad or creepy uses for an automated email-to-phone-number tool massively outweigh the good reasons for having one. Does he get a pass because he's doing this for "research" and it's a grey area anyway? Does he feel better because he talked to the companies who exposed the vulnerability, and it's neutered now?
I think the idea is to highlight the bad security practices that allow this, in the hope that these companies patch the holes (in this case, by reducing the data leaked in the password reset process).
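To make the leak concrete: the core of the technique is that different services mask different digits of the same phone number on their password-reset pages, so an attacker who triggers resets on several services can intersect the hints. A minimal sketch of that intersection step (the mask patterns below are hypothetical, not the actual tool's output):

```python
# Hypothetical illustration of why masked phone hints on password-reset
# pages leak information: each service hides different positions, so the
# hints can be merged ('*' = hidden digit).
def merge_hints(hints):
    """Combine equal-length masked phone hints into one partial number."""
    length = len(hints[0])
    merged = []
    for i in range(length):
        # Collect every non-masked digit seen at position i.
        digits = {h[i] for h in hints if h[i] != "*"}
        if len(digits) > 1:
            raise ValueError(f"conflicting hints at position {i}")
        merged.append(digits.pop() if digits else "*")
    return "".join(merged)

# Service A reveals the last two digits, service B the area code:
print(merge_hints(["********34", "512*******"]))  # -> "512*****34"
```

Every digit recovered this way cuts the remaining brute-force space by a factor of ten, which is why reducing what the reset page reveals matters.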
A GREAT example of this was when Firesheep forced Facebook (and countless other sites) into embracing HTTPS. Firesheep was a Firefox plugin that anyone could run on a public Wi-Fi network (e.g. a coffee shop) to instantly start hijacking the sessions of anyone on the same network who logged in to anything over HTTP (it captured session cookies rather than passwords, but the effect was much the same). At the time, Facebook was HTTP by default. So it made the news and forced Facebook to require HTTPS basically overnight. Many other companies followed suit, and it's likely fair to say that the release of that plugin single-handedly accelerated HTTPS adoption by a considerable margin.
I don't know that this release will be that impactful, but it's certainly better than having this be a technique that only black hats know about.
Similar to how journalists feel justified in publishing stories that have negative repercussions for the parties being reported on. One way of assessing these decisions is answering the question: "Is more harm than good done by releasing this information to the public?"
From my perspective, I'm happy that Martin Vigo released this information (in 2019), as it helped me inform my employers (and now my clients) of additional threat-model vectors to consider before deciding how best to perform password resets.
Also in his defense:
1) He originally released a rather crippled form of the PoC
2) It requires a Twilio account, which raises the barrier to entry and provides a data point for analysts were the tool to be used criminally.
> Similar to how journalists feel justified in publishing stories that have negative repercussions for the parties being reported on. One way of assessing these decisions is answering the question: "Is more harm than good done by releasing this information to the public?"
That method leads to the worst evils in the world. Many have used it to justify everything from 'it's ok to take these poor people's land and give it to a megacorp, because we'll get a factory', to 'it's ok to silence these journalists because it's for the public good', to 'it's ok to kill my enemies because I think they are bad', to 'it's ok to commit genocide against this group because the world will be better off without them'.
Who am I, or who are you, to decide what is good or bad, or how good or bad, or to weigh those things for others? Beyond our obvious cognitive limitations (as humans, we are too flawed cognitively and morally to make judgments for others) and lack of legitimacy (who elected us?), there is our inherent bias - 'good' is what is good from our perspective, based on our biases, subject to our ignorance of others.
That's why human rights exist: it's their right, and you can't make that decision for them; it's up to the person involved. If you think their land, etc. is so important, then ask them - it's up to them whether they want to do it. They have property rights, speech rights, etc., and nobody can abridge them; and in the limited circumstances where they can be abridged, there is a whole infrastructure of legitimacy (democracy), protection from corruption (separation of powers, juries, etc.), and process (law, due process).
I cannot follow your thread from a security researcher sharing tools to put pressure on an insecure website, to a megacorporation stealing someone's land.
I think there's a good ethical argument for releasing the knowledge, but not so much the tool. I think the open secret is that most people who go into cybersecurity do so because they enjoy breaking security through clever methods rather than actually helping others stay secure - but security research is legal and hacking random targets isn't.
I'm in the security industry, and this is absolutely correct. There are definitely many who carefully release PoCs when appropriate (giving vendors enough time to patch, etc.), but a LOT of these tool releases are done mostly to show off how smart we are and get clout. You see this big time every summer, as researchers all scramble to get a Defcon tool talk slot with some new thing they wrote, before immediately abandoning it post-con.
Obviously, it's not like anything can or should be done to change this, as it's mostly just human nature, and keeping the security industry capable of operating legally and in the open is paramount. But sometimes people just wanna brag. And they get big mad about it and sputter about how literally any possible end justifies literally any actual means if you point it out (see: the other person responding to the top level comment lol)
When arguing with an executive about why their company’s security posture needs to be updated, there is nothing quite as effective as an off-the-shelf demo.
The bad guys already know these and a million more exploits, so personally I'm fine with these guys exposing the industry's dirty laundry, especially if it shames them into doing something. It also leaves the company no "we did not know" defense when it comes to legal action.
> I can almost certainly say that the number of bad or creepy uses for an automated email to phone number generating tool massively outweighs the good reasons for having one
Meanwhile, I can almost certainly say that the number of ways to bury your head in the sand instead of simply facing an uncomfortable problem massively outweighs the good reasons for doing so anyway.
A person who is in need of money and lacking in empathy will not fail to use any technique available, so it is good to know the defenses against such techniques, or at least be aware of them.
"Creepy" arguments (appeals to shame or disgust) are fallacies.
Security researcher types are well aware of the good-actor motivations behind white-hat-hackerdom. Is it wrong that I can buy a book on lockpicking? Would I be seen by some as a bad parent if I taught it to my kid when he expressed curiosity about it?