
I live in SLC, and I saw one of these show up over a year ago. The location was about here: https://www.google.com/maps/place/40%C2%B047'58.0%22N+111%C2...

It was a very well-built setup that I assumed (as did everyone else, apparently) had been put there by the government in some capacity, such as for weather observations. It wasn't hidden or hard to find by any means, and it sat right on a fairly popular trail. I would bet that during nice summer weather a few hundred people per week walked past it.


The High-Vis vest effect.


Author here. Thank you. I haven't made the repo for this doc public, but the LaTeX setup is based on an older document of mine here: https://github.com/0xdabbad00/research/tree/master/emet_unco...


Thanks!


Thank you for posting this, but doesn't "Show HN" mean you're linking to something you've made? I'm the creator of flaws.cloud


Scott, thanks for creating this! Julien from Mozilla referred it to me back at AppSec, and I've been using it as a devops security primer for folks in my org ever since. Really clever teaching tool. I've had it in mind to create an Azure equivalent when time permits!


Thanks for putting this together Scott - I remember running through the exercises a year or so ago and realising how awesome some of these mistakes are. I ended up turning the S3 bucket stuff into a conference presentation, after bruteforcing *.s3.amazonaws.com for valid buckets, and checking their permissions/ACLs.

Great for bug bounties, or in UpGuard’s situation, a tonne of publicity from private data being accessible from public buckets.


I live in Salt Lake City, Utah. I do AWS security consulting. I would love this. I've lived all over the US (Boulder, SF, Austin, DC, and more) and because I do remote work I can work anywhere, and I chose to move here as the best place for my interests.


From https://summitroute.com/blog/2017/01/07/news_summaries/ , some have already been mentioned.

- Downclimb (my own), for weekly infosec news summaries: https://summitroute.com/blog/2017/03/12/downclimb/

- Bulletproof TLS, monthly, for crypto and TLS news: https://www.feistyduck.com/bulletproof-tls-newsletter/issue_...

- Mobile security news, monthly: http://www.mulliner.org/blog/blosxom.cgi/security/mobile_sec...

- This week in 4n6, weekly DFIR: https://thisweekin4n6.com/2017/03/12/week-10-2017/


I'm the creator, if you have any questions.


I'm the author of the blog post in question (and have no idea why HN won't let me reply to the original post). For some clarity here, the "untrusted" third party is Google, whom I don't want reading my data but do trust not to modify my backups. However, in the article I do gzip the files before encrypting them, and gzip has a CRC-32 check, so if the files were modified, gunzip would error out after decryption.

I used openssl because I find GPG awkward to use on servers.

The full article is clearer that I have only one private key; for each nightly backup I'm generating an AES key and encrypting it with the public key.
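
If it helps, a rough sketch of that flow (the file names here are made up, not taken from the article):

    # Generate a random per-backup key (used as the AES passphrase)
    openssl rand -hex 32 > backup.key

    # Compress and encrypt the archive with that key
    tar czf - /data | openssl enc -aes-256-cbc -salt -pass file:backup.key -out backup.tgz.enc

    # Encrypt the per-backup key with the long-lived public key, then remove the plaintext key
    openssl rsautl -encrypt -pubin -inkey backup_public.pem -in backup.key -out backup.key.enc
    rm backup.key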


I don't know why HN disallows replies in certain circumstances. If you had replied to the original post, I never would have seen your reply.

If you don't care whether the third party can modify your data, then OK. If you did care, though, does this checksum stop "tar xvzf foo.tgz" from writing modified data immediately, or does it only tell you at the end?


Probably just at the end. My goal is disaster recovery (AWS disappears as a service to the world because an Amazon employee accidentally `rm -rf`'s everything, or my own admin `rm -rf`'s my account by accident). For those situations I'm going to need to extract the data somewhere and then rebuild, so checking at the end of the extraction and unzip is fine for my needs.


If you're doing backups for your business, I've written on how to properly encrypt backups[1] and how to use Google Compute Engine for backups[2]. I'm working on write-ups for AWS and Azure that should go up within the next few weeks.

[1] https://summitroute.com/blog/2016/12/25/creating_disaster_re...

[2] https://summitroute.com/blog/2016/12/25/using_google_for_bac...


This is certainly one way to do backups. Two things which come to mind on first reading:

1. You're encrypting backups but not authenticating them; someone with access to your archives could trivially truncate an archive or replace one archive with another, and there's a nontrivial chance that they could splice parts of different archives together (a sketch of adding an HMAC for this is below).

2. Every archive you're creating is stored as a separate full archive; this will probably limit you to keeping a handful of archives at once. With a more sophisticated archival tool, you could store tens of thousands of backups in the same amount of space.
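
For point 1, a minimal sketch of bolting an HMAC onto the existing openssl flow (the key and file names are hypothetical, and in practice you'd avoid putting the key on the command line):

    # Compute an HMAC over the encrypted archive with a separate secret key
    openssl dgst -sha256 -hmac "$(cat hmac.key)" -out backup.tgz.enc.hmac backup.tgz.enc

    # Before restoring, recompute and compare; tampering or truncation shows up as a mismatch
    openssl dgst -sha256 -hmac "$(cat hmac.key)" backup.tgz.enc | diff - backup.tgz.enc.hmac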


These are both accurate.

For 1, I ensure that an attacker cannot modify my archives after they've been uploaded by giving the backup service "put"-only privileges. This is unfortunately not possible with GCE from the article, as I point out in a warning banner there, but it is with AWS, which I'll post about soon. My use case is primarily to have a backup in the event of a devops mistake or a malicious attacker (ransomware), so I assume that if someone had write access to my archives they would just delete them; authenticating them isn't as big of a concern, although it would still be a good idea just to ensure the files aren't corrupted in some other way.
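
As a rough sketch, the "put"-only idea on AWS looks something like this (the user, policy, and bucket names here are hypothetical):

    # Allow the backup user to upload new objects but not list, read, or delete them.
    # Note that PutObject alone can still overwrite an existing key unless bucket versioning is on.
    aws iam put-user-policy --user-name backup-writer --policy-name put-only \
      --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": "s3:PutObject",
          "Resource": "arn:aws:s3:::my-backup-bucket/*"
        }]
      }'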

For 2, my storage costs currently aren't significant (100GB of archives per day, which means pennies per day for all of them), but eventually I plan on sending just diffs. I also wanted to create and send backups in the simplest possible way to help people get up and running as fast as possible, which meant limiting myself to the "openssl" command and other basic commands. The other, smarter solutions I'm aware of are either tied to a service (e.g. tarsnap) or don't keep the data encrypted at the backup location.


Not bad! Can I recommend that you try out the per-object storage classes and lifecycle policies? Particularly if folks are going to be effectively rsyncing things, it's really handy to minimize the combination of retrieval fees and storage fees (this really depends on the manner of backup, incremental versus full, etc.).

Also not mentioned is that each service supports versioning, which, for backups that aren't block-based, can be an alternative DR plan (e.g., don't allow some users to delete the last version).
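
A rough sketch of both settings with gsutil (the bucket name is hypothetical):

    # lifecycle.json (roughly): move objects to colder storage after 30 days, e.g.
    #   {"rule": [{"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
    #              "condition": {"age": 30}}]}
    gsutil lifecycle set lifecycle.json gs://my-backup-bucket

    # Keep old versions so a compromised writer can't silently destroy the last good backup
    gsutil versioning set on gs://my-backup-bucket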

All in all, a good start (complete with helpful screenshots!). Looking forward to the guides on S3 and Azure Blobs.

Disclosure: I work on Google Cloud.


Assuming the program has no vulnerabilities, these will do nothing. They are mitigations for memory corruption vulns, such as buffer overflows and use-after-frees. Sanitizing inputs doesn't really help for some of these.

These mitigations are "free" to add to programs in the sense that they are just compiler flags. Usually the reason they aren't being used is that the build process for a tool is really old and isn't using the latest version of Visual Studio or some other compiler. Some examples of this are Google Drive, which is still compiled with Visual Studio 2008, and 7-Zip, which seems to use VS 2005.
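
For reference, the relevant Visual Studio switches look roughly like this (a hypothetical build line; recent toolchains enable most of these by default):

    rem /GS = stack cookies, /guard:cf = CFG, /DYNAMICBASE + /HIGHENTROPYVA = ASLR, /NXCOMPAT = DEP
    cl /GS /guard:cf app.c /link /DYNAMICBASE /HIGHENTROPYVA /NXCOMPAT /guard:cf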


Cool. Are there other tools on your list to check a binary executable?


For static analysis to check for best practices, there aren't any free tools that come to mind that give you easily digestible info; you usually end up doing some manual reversing and needing a lot of extra knowledge. HBGary once created a tool called FingerPrint that provided some of this extra info, but that tool disappeared. I extracted a lot of what it does into YARA signatures: http://0xdabbad00.com/2011/01/09/fingerprinting-using-yara/

The original purpose of HBGary's tool was fingerprinting malware, but what it was really doing was extracting info about malware authors' build environments that could be learned from the executables they created.

Other things you would want to check for, though, are:

- Are the executables code signed?

- When run, look in Process Explorer (on Windows) to see whether it runs with DEP/ASLR/CFG/x64, and ensure every DLL it loads has ASLR and CFG.

- Run with Wireshark capturing and ensure any network traffic is HTTPS.

There are thousands of ways for all these checks to give incorrect information with regard to whether the developers follow best practices or not, but if you do enough steps they'll add up to a decently accurate understanding.
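
A couple of those checks, sketched as hypothetical command lines (both tools ship with Visual Studio / the Windows SDK):

    rem Check whether the binary is code signed
    signtool verify /pa some_tool.exe

    rem Check the DLL characteristics for ASLR ("Dynamic base"), DEP ("NX compatible"), and CFG ("Guard")
    dumpbin /headers some_tool.exe | findstr /i "Dynamic NX Guard"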


I'm the author, if you have any questions.

