
We have a library that puts the payload in an S3 bucket under a random key; the bucket has an expiration policy of a few days. Then we generate an HTTP link to the object and send an SQS message with this URL in the metadata. The reader library gets the data from S3 and doesn't even have to remove it; it will disappear automatically later.
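Roughly, the sender and reader ends look like the sketch below. The bucket and queue names are made up, boto3 is assumed, and the few-day expiration lives in an S3 lifecycle rule on the bucket rather than in this code.

    import json
    import uuid

    import boto3

    s3 = boto3.client("s3")
    sqs = boto3.client("sqs")

    BUCKET = "large-payload-bucket"  # hypothetical; a lifecycle rule expires objects after a few days
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # hypothetical

    def send_large(payload: bytes) -> None:
        # Store the payload under a random key; the bucket's expiration policy cleans it up later.
        key = str(uuid.uuid4())
        s3.put_object(Bucket=BUCKET, Key=key, Body=payload)
        url = f"https://{BUCKET}.s3.amazonaws.com/{key}"
        # The SQS message itself stays tiny: the URL rides along as a message attribute.
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"payload": "offloaded to S3"}),
            MessageAttributes={
                "payload_url": {"DataType": "String", "StringValue": url},
            },
        )

    def receive_large() -> bytes | None:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MessageAttributeNames=["payload_url"],
            MaxNumberOfMessages=1,
        )
        for msg in resp.get("Messages", []):
            url = msg["MessageAttributes"]["payload_url"]["StringValue"]
            key = url.rsplit("/", 1)[-1]
            body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
            # No s3.delete_object needed: the lifecycle rule removes the object automatically.
            return body
        return None

The expiration policy itself is a one-time bucket setting, something like:

    s3.put_bucket_lifecycle_configuration(
        Bucket=BUCKET,
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-payloads",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "Expiration": {"Days": 3},  # "a few days"; pick whatever retention fits
                }
            ]
        },
    )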

We do it "by ourselves", not using the provided lib, because that way it works for both SQS and SNS. The provided lib only supports SQS.

Also, our messages typically aren't very big, so we only do this when the payload size demands it.
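For the size check, a minimal sketch (the 262,144-byte figure is the standard SQS message size limit; send_large and QUEUE_URL are the made-up names from the sketch above):

    SQS_LIMIT_BYTES = 262_144  # 256 KB cap on an SQS message (body + attributes)

    def send(payload: str) -> None:
        if len(payload.encode("utf-8")) <= SQS_LIMIT_BYTES:
            # Small enough: send the payload inline as a normal SQS message.
            sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=payload)
        else:
            # Too big: take the S3 detour described above.
            send_large(payload.encode("utf-8"))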




Wouldn't sending and later retrieving millions of S3 objects be expensive?


Messaging in our system is not high volume, so not in this case. Also, as I said, these big messages are pretty rare.


Gotcha. That makes sense.



