Hacker News | hiroshi3110's comments

>Why are they killing the app?

To cut maintenance costs, I guess.


That app hasn't been touched in years. I doubt the cost was that high.


App stores have been getting more annoying every year, with new requirements even for apps that aren't being updated. I just archived 3 apps I'd built for people over the years and hosted for free; the juice isn't worth the squeeze anymore.


In Japan, iodized salt is banned as a food additive, because we can get iodine from seaweeds like kombu.


Why would that be a reason to ban iodized salt?


Too much iodine can lead to thyroid problems, and this has been an issue in some subgroups in Japan: https://academic.oup.com/jcem/article/107/6/e2634/6516999


Even without iodized salt, Japan is one of the highest consumers of iodine worldwide. [1]

But I agree: iodized salt may be pointless in Japan, but so is a law banning it (assuming OP is correct and it is in fact banned).

[1]: https://anaturalhealingcenter.com/documents/Thorne/articles/...


IIUC it's not specifically banned by name in any law; it's just not on the whitelist of approved food additives. Industrialized food manufacturing in late-20th-century Japan was wild, and as a result additives are managed on an approval basis rather than by individual bans.




Good move, but if you have data in GCS in a storage class colder than Nearline, it may still cost you.


I got a DM from API Brew. It reminds me of the original Firebase (remember? Not the current bloated mobile platform Google acquired, just the realtime database), and it looks promising to me... Good luck to the API Brew team.


How about GKE and containerd?



How about Kubernetes CronJobs on Spot VMs on GKE Autopilot? https://cloud.google.com/kubernetes-engine/pricing If your job only consumes 1 vCPU / 1 GB of memory, it costs about $10/month; if it only runs 1/100 of the month, the cost should be about $0.10/month.
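For illustration, here is a minimal sketch of such a CronJob using the official Python kubernetes client. The name, image, and schedule are placeholders I made up; cloud.google.com/gke-spot: "true" is the node selector GKE uses to request Spot VMs.

    # Sketch: daily CronJob sized at 1 vCPU / 1 GB, scheduled on Spot VMs.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() inside the cluster

    container = client.V1Container(
        name="worker",
        image="us-docker.pkg.dev/my-project/repo/worker:latest",  # placeholder
        resources=client.V1ResourceRequirements(
            requests={"cpu": "1", "memory": "1Gi"},  # the sizing discussed above
        ),
    )

    cronjob = client.V1CronJob(
        api_version="batch/v1",
        kind="CronJob",
        metadata=client.V1ObjectMeta(name="cheap-batch-job"),  # placeholder name
        spec=client.V1CronJobSpec(
            schedule="0 3 * * *",  # once a day at 03:00
            job_template=client.V1JobTemplateSpec(
                spec=client.V1JobSpec(
                    template=client.V1PodTemplateSpec(
                        spec=client.V1PodSpec(
                            containers=[container],
                            restart_policy="Never",
                            # Ask Autopilot to place the Pod on Spot capacity.
                            node_selector={"cloud.google.com/gke-spot": "true"},
                        ),
                    ),
                ),
            ),
        ),
    )

    client.BatchV1Api().create_namespaced_cron_job(namespace="default", body=cronjob)

At spot pricing, a job this size running ~1% of the month should land around the $0.10/month figure above.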


Hi, are those caches available in parallel?

CircleCI's remote Docker has a restriction that only one job can access the same remote Docker engine at a time. Say job A builds an image, then jobs B and C try to use the same remote Docker, but only one of them will have the cache.

Google Cloud Build has no cache at all.

I don't know about GitHub Actions.


Yup, the caches for each architecture are available in parallel, and multiple builds for a single architecture can simultaneously use the same build machine for a single project. So we don't limit concurrency.

I believe Cloud Build has no persistent caching, so you're forced to save and load a remote cache, which incurs network latency that can slow the build to some extent. Cloud Build with Kaniko also expires the layer cache after 6 hours by default.

GitHub Actions is similar, except that it can store Docker cache through GitHub's Cache API via the `cache-to=gha` and `cache-from=gha` directives. However, this has limitations, like a total cache size of 10GB per repository, and you also incur network latency loading/saving that cache.
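As a sketch of what that looks like (not Depot's API): a CI script step can invoke buildx with the gha cache backend, assuming buildx is installed and the step has the Actions cache credentials exported (ACTIONS_RUNTIME_TOKEN / ACTIONS_CACHE_URL).

    import subprocess

    # Build with GitHub's cache backend; mode=max also exports
    # intermediate layers, not just the final image's layers.
    subprocess.run(
        [
            "docker", "buildx", "build",
            "--cache-from", "type=gha",
            "--cache-to", "type=gha,mode=max",
            "--tag", "myapp:ci",  # placeholder tag
            ".",
        ],
        check=True,
    )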

With Depot, the cache is kept on a persistent disk, so there's no need to save/load it or incur network latency doing so. It's ready to be used by any build that comes in for a given project.


Sounds great! Thanks.

