I don't see how that would have helped in this case. This was not a resource at a known location that was supposed to be available only to logged-in users. It was a resource the admins didn't know about, available at an unknown URL that was exposed to the public internet due to a configuration error. Are you going to write a test case for every possible URL on your server to make sure it's not being exposed?
Something that could work is including a random hash as a hidden first email inside every client, and then regularly searching outbound traffic for that hash. But that would be rather expensive.
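To sketch what I mean (purely illustrative: the marker value, log location, and file naming below are all assumptions), the "regularly searching" part could be as small as a scheduled script grepping captured egress/response logs for the planted marker:

```python
# canary_scan.py -- sketch: look for a planted canary marker in captured
# outbound traffic / response logs. Paths and the marker value are placeholders.
import gzip
import pathlib
import sys

CANARY_MARKER = "c4n4ry-3f1a9b2d7e"          # the random hash planted in every client
LOG_DIR = pathlib.Path("/var/log/egress")     # wherever outbound traffic gets logged

def scan_file(path: pathlib.Path) -> list[str]:
    """Return the lines of `path` that contain the canary marker."""
    opener = gzip.open if path.suffix == ".gz" else open
    hits = []
    with opener(path, "rt", errors="replace") as fh:
        for line in fh:
            if CANARY_MARKER in line:
                hits.append(line.rstrip())
    return hits

if __name__ == "__main__":
    found = False
    for log_file in sorted(LOG_DIR.glob("*.log*")):
        for hit in scan_file(log_file):
            found = True
            print(f"ALERT {log_file}: {hit}")
    sys.exit(1 if found else 0)   # non-zero exit so a scheduler can page someone
```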
n=1, head of security at a fintech. We perform automated scans of external-facing sensitive routes and pages after deploys, checking for PII, PAN, and SPI indicators, kicked off by GitHub Actions. We also use a WAF with two-person config change reviews (change management), which helps prevent routes or parts of web properties from being made public unexpectedly through continuous integration and deployment practices (balancing dev velocity with security/compliance concerns).
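Not our actual pipeline, obviously, but a minimal sketch of what such a post-deploy scan can look like, assuming a hand-maintained list of external routes; the URLs and patterns are placeholders:

```python
# pii_scan.py -- sketch of a post-deploy scan of external-facing routes for
# PII/PAN indicators. Routes and patterns are illustrative placeholders.
import re
import sys
import requests

ROUTES = [
    "https://app.example.com/exports/",
    "https://app.example.com/api/v1/users",
]

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PAN_RE = re.compile(r"\b(?:\d[ -]?){13,19}\b")   # candidate card numbers, confirmed with Luhn

def luhn_ok(digits: str) -> bool:
    """Luhn checksum, used to cut down false positives on PAN-looking numbers."""
    nums = [int(c) for c in digits if c.isdigit()]
    checksum = 0
    for i, n in enumerate(reversed(nums)):
        if i % 2 == 1:
            n *= 2
            if n > 9:
                n -= 9
        checksum += n
    return checksum % 10 == 0

findings = []
for url in ROUTES:
    body = requests.get(url, timeout=10).text
    if EMAIL_RE.search(body):
        findings.append((url, "email address in response body"))
    for candidate in PAN_RE.findall(body):
        if luhn_ok(candidate):
            findings.append((url, "Luhn-valid PAN-like number in response body"))
            break

for url, reason in findings:
    print(f"ALERT {url}: {reason}")
sys.exit(1 if findings else 0)   # fail the GitHub Actions job on any finding
```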
Not within the resources of all orgs of course, but there is a lot of low hanging fruit through code alone that improves outcomes. Effective web security, data security, and data privacy are not trivial.
You don't need to check every one though. Or any. You create a known account with known content in it (similar to your hash idea) and monitor that.
Even if they never got around to automating it and were highly laissez-faire, manually checking that account with those test cases, say, once a month would have caught this within 30 days. That still sucks, but it's at least an order of magnitude less suck than the situation they're in now.
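A hand-rolled version of that monthly check might look like the sketch below, where the endpoints and canary marker are made up; the only real assertion is "unauthenticated requests must never return the canary account's content":

```python
# canary_account_check.py -- sketch: verify the canary account's content is not
# reachable without credentials. Endpoints and marker are illustrative only.
import sys
import requests

CANARY_MARKER = "canary-account-7f3e91"       # unique string stored in the canary account
ENDPOINTS = [
    "https://mail.example.com/inbox/canary",  # places the account's data could surface
    "https://app.example.com/users/canary",
]

failures = []
for url in ENDPOINTS:
    resp = requests.get(url, timeout=10, allow_redirects=False)  # deliberately no auth
    if resp.status_code == 200 and CANARY_MARKER in resp.text:
        failures.append(f"{url} served canary content without authentication")
    elif resp.status_code not in (301, 302, 401, 403, 404):
        failures.append(f"{url} returned unexpected status {resp.status_code}")

for f in failures:
    print("ALERT:", f)
sys.exit(1 if failures else 0)
```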
If the screenshot in the article isn't edited, this was an HTTP service exposed to the internet on an unusual port (81). I'd propose the following test cases:
1) Are there any unexpected internet-facing services?
* Once per week (or per month, if there are thousands of internet-facing resources) use masscan or similar to quickly check for any open TCP ports on all internet-facing IPs/DNS names currently in use by the company.
* Check the list of open ports against a very short global allowlist of port numbers. In 2024, that list is probably just 80 and 443.
* Check each host/port combination against a per-host allowlist of more specific ports. e.g. the mail servers might allow 25, 465, 587, and 993.
* If a host/port combination doesn't match either allowlist, alert a human.
Edit: one could probably also implement this as a check when infrastructure is deployed, e.g. "if this container image/pod definition/whatever is internet-facing, check the list of forwarded ports against the allowlists". I've been out of the infrastructure world for too long to give a solid recommendation there, though.
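A sketch of how the scan-plus-allowlist comparison for check 1 could be glued together, assuming masscan's grepable list output (-oL); the target file, allowlists, scan rate, and the assumed output line format are placeholders:

```python
# port_allowlist_check.py -- sketch of check 1: run masscan against the org's
# internet-facing addresses and alert on any open port not on an allowlist.
import subprocess
import sys

TARGETS_FILE = "internet_facing_ips.txt"          # one IP/CIDR per line
GLOBAL_ALLOW = {80, 443}                           # ports allowed everywhere
PER_HOST_ALLOW = {                                 # host-specific exceptions
    "198.51.100.25": {25, 465, 587, 993},          # e.g. the mail server
}

# masscan needs root/CAP_NET_RAW; -oL writes a grepable list of open ports.
subprocess.run(
    ["masscan", "-p1-65535", "-iL", TARGETS_FILE, "--rate", "1000", "-oL", "scan.txt"],
    check=True,
)

alerts = []
with open("scan.txt") as fh:
    for line in fh:
        if line.startswith("#") or not line.strip():
            continue
        # assumed -oL line format: "open tcp <port> <ip> <timestamp>"
        _state, _proto, port, ip = line.split()[:4]
        port = int(port)
        if port in GLOBAL_ALLOW or port in PER_HOST_ALLOW.get(ip, set()):
            continue
        alerts.append(f"{ip}:{port} is open and not on any allowlist")

for a in alerts:
    print("ALERT:", a)
sys.exit(1 if alerts else 0)
```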
2) Every time an internet-facing resource is created or updated (e.g. a NAT or load-balancer entry from public IP to private IP is changed, a Route 53 entry is added or altered, etc.), automatically run a vulnerability scan using a tool that supports customizing the checks. Make sure the list of checks is curated to pre-filter any noise ("you have a robots.txt file!"). Alert a human if any of the checks come up positive.
OpenVAS, etc. should easily flag "directory listing enabled", which is almost never something you'd find intentionally set up on a server unless your organization is a super old-school Unix/Linux software developer/vendor.
Any decent commercial tool (and probably OpenVAS as well) should also have easily flagged content that disclosed email addresses, in this case.
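Even without a full scanner, a dead-simple custom check catches the two findings mentioned above; the patterns and default target below are illustrative only:

```python
# custom_checks.py -- sketch of two curated checks for point 2: directory
# listing enabled, and email addresses disclosed in an unauthenticated response.
import re
import sys
import requests

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
DIR_LISTING_MARKERS = ("<title>Index of /", "Parent Directory")  # common autoindex strings

def check_url(url: str) -> list[str]:
    findings = []
    resp = requests.get(url, timeout=10)
    if any(marker in resp.text for marker in DIR_LISTING_MARKERS):
        findings.append("directory listing appears to be enabled")
    emails = set(EMAIL_RE.findall(resp.text))
    if emails:
        findings.append(f"{len(emails)} email address(es) disclosed")
    return findings

if __name__ == "__main__":
    # In practice this would be fed the public IP/hostname that just changed.
    target = sys.argv[1] if len(sys.argv) > 1 else "http://203.0.113.5:81/"
    problems = check_url(target)
    for p in problems:
        print(f"ALERT {target}: {p}")
    sys.exit(1 if problems else 0)
```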
3) Pay for a Shodan account. Set up a recurring job to check every week/month/whatever for your organization name, any public netblocks, etc. Generate a report of anything that was found during the current check that wasn't found during the previous check, and have a human review it. This one would take some more work, because there would need to be a mechanism for the human(s) to add filtering rules to weed out the inevitable false positives.
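For the recurring Shodan query, the official shodan Python library keeps the job small; the query string, the result fields read, and the state-file handling below are my assumptions rather than anything specific to this incident:

```python
# shodan_diff.py -- sketch of point 3: query Shodan for the org's footprint and
# report anything not seen in the previous run.
import json
import os
import shodan

API_KEY = os.environ["SHODAN_API_KEY"]
QUERY = 'org:"Example Corp"'          # could also be net:203.0.113.0/24, hostname:example.com, ...
STATE_FILE = "shodan_seen.json"

api = shodan.Shodan(API_KEY)
results = api.search(QUERY)           # note: first page of results only in this sketch

current = {f"{m['ip_str']}:{m['port']}" for m in results["matches"]}

previous = set()
if os.path.exists(STATE_FILE):
    with open(STATE_FILE) as fh:
        previous = set(json.load(fh))

new = sorted(current - previous)
for entry in new:
    print("NEW EXPOSURE:", entry)     # hand this list to a human for review / filter rules

with open(STATE_FILE, "w") as fh:
    json.dump(sorted(current), fh)
```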
There was rather a lot of NATO coordination in the US-led invasions of both Iraq and Afghanistan. None of the military missions in these countries were in response to the Article V mutual defense clause of the NATO treaty. It's very easy to see how these operations (and therefore the NATO alliance) would be seen as aggressive to these countries.
This is false. Standard decoding algorithms like beam search can "backtrack" and are widely used in generative language models.
It is true that exhaustive search over all sequences is exponential in the sequence length, so heuristics (like a fixed beam width) are used to keep the runtime practical, and this limits the "backtracking" ability. But that limitation is purely for computational convenience's sake and not something inherent in the model.
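To make this concrete, here is a toy beam search over an arbitrary next-token model (the model function below is a made-up stand-in). With a beam width of 1 it degenerates to greedy decoding and commits to the locally best first token; with a wider beam it effectively "backtracks" and returns the better sequence:

```python
# beam_search.py -- toy beam search over an arbitrary next-token distribution.
import math
from typing import Callable

def beam_search(
    log_probs: Callable[[tuple[str, ...]], dict[str, float]],
    beam_width: int,
    max_len: int,
) -> tuple[str, ...]:
    beams: list[tuple[float, tuple[str, ...]]] = [(0.0, ())]   # (cumulative logprob, prefix)
    for _ in range(max_len):
        candidates = []
        for score, prefix in beams:
            for token, lp in log_probs(prefix).items():
                candidates.append((score + lp, prefix + (token,)))
        # keep only the top-k prefixes; a previously leading prefix can be dropped here
        beams = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_width]
    return max(beams, key=lambda b: b[0])[1]

# Toy model: the locally attractive first token "a" leads to weak continuations,
# while the less likely first token "b" leads to a much better overall sequence.
def toy_model(prefix: tuple[str, ...]) -> dict[str, float]:
    if prefix == ():
        return {"a": math.log(0.6), "b": math.log(0.4)}
    if prefix[-1] == "a":
        return {"x": math.log(0.3), "y": math.log(0.3), "pad": math.log(0.4)}
    return {"x": math.log(0.9), "y": math.log(0.05), "pad": math.log(0.05)}

print(beam_search(toy_model, beam_width=1, max_len=2))  # greedy: ('a', 'pad'), prob 0.24
print(beam_search(toy_model, beam_width=2, max_len=2))  # wider beam recovers ('b', 'x'), prob 0.36
```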
One of the reasons I've intentionally decided not to become independently wealthy is that I want to have to explain to other people why I'm doing things. Part of my work is "charity-ish", and by not being able to do things entirely on my own, I'm forced to improve my communication skills and involve other people in these charity activities. I think that ultimately improves the final outcome, even if the process is immensely more frustrating.
I am referring to technical work specifically, where most of the time people don't even know what they want until they see it.
Creating mock-ups and going back and forth costs time and money.
While most of the time what I do is good enough, I am getting tired of people trying to block work until unimportant details are "discussed".
It would not be frustrating if customers were willing to pay for mock-up work and then for the actual work, but what most want is a working application right away, not a mockup; and yet they also want to waste time blabbing about details that would be clear in a mock-up or in a first version of the app.
Could be so with charity, or depending on the field, I suppose. I think innovating or figuring out new tech is different: if something is easily understood and explainable, it's probably already been done, and if you have a great idea that hasn't been done, it's probably because it's really hard to explain or to sell others on.
I suspect the numbers would be worse if you looked at households instead of individuals due to declining marriage rates (but I'm not willing to put in the effort to find numbers).
I don't understand what you're saying. To me, if the rate stays flat but represents fewer married couples, then the same rate actually means more homes are owned by people of that age.
Meaning: if a guy and girl are married and own a house together, that counts as 2 people towards the homeowner bucket.
If they are not married, they'd need to each own a house for the same rate to hold.
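With made-up numbers just to show the mechanics (assuming married owners own one home jointly and unmarried owners each own their own):

```python
# Toy numbers only: how many owned homes are needed to hold a fixed *individual*
# homeownership rate as the share of married owners changes.
ADULTS = 100
INDIVIDUAL_RATE = 0.6            # 60% of adults count as homeowners, held constant

for married_share in (1.0, 0.5, 0.0):
    owners = ADULTS * INDIVIDUAL_RATE
    married_owners = owners * married_share   # married owners share one home per couple
    single_owners = owners - married_owners   # unmarried owners each need their own home
    homes = married_owners / 2 + single_owners
    print(f"married share {married_share:.0%}: {homes:.0f} owned homes "
          f"for the same {INDIVIDUAL_RATE:.0%} individual rate")
# Prints 30, 45, 60 owned homes: a flat individual rate with falling marriage
# implies more homes owned per capita.
```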
And the next step after this epiphany is that you still have to remember to take the phone with you places, not leave it behind, and worry about it getting dropped in the toilet by a toddler. Even so, this tool still has a lot of benefits.
Do you happen to have a link to the proposal I can see and share with a class? I'm teaching a few lectures about some "weird" stuff this semester, and this would be a great example.
Then computer security. Unlike the internet or jet engines, these have not panned out as foundational research (except perhaps for some of the HIV work).
In what world is computer security not a foundational topic? There are lots of reasons to critique the way NSF/NIH/DOD/etc. allocate funding, but this is definitely not one of them.
These exercises are writing mathematical proofs that basic machine learning algorithms behave correctly. They are "pen and paper" not because you are manually solving a large equation that a machine would normally solve, but because we don't have automated theorem provers capable of proving interesting machine learning theorems. I would expect a typical 1st year grad student to be using a resource like this.
If you don't understand the purpose of proofs, then this resource is not aimed at you.
I believe you are incorrect. According to Wikipedia:
> The Lindy effect is a theorized phenomenon by which the future life expectancy of some non-perishable things, like a technology or an idea, is proportional to their current age.
This implies that things that have been around for a short period of time do in fact have a short expected lifespan. You're correct that "A implies B does not mean B implies A as well", but that assumption is not needed.
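Put slightly more formally (with c as my notation for the proportionality constant and T for the thing's total lifetime):

```latex
\mathbb{E}[\,T - t \mid T > t\,] = c\,t
```

Small t makes the left-hand side small directly, so no converse of any implication is needed.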