moca's comments | Hacker News

Both Google and Microsoft have published their API design guidelines, which are very much RESTful. Both companies have applied their guidelines to a broad set of products at very large scale. So far, we have not seen alternatives that would be applicable or scalable to companies at Google or Microsoft scale.

API design is like UI design, car design, or clothing design. It is a form of craft, which takes time and skill. API design exists to give customers the best experience; it was never meant to save time or work for the designers.

Disclosure: I am a contributor to the Google API Design Guide.


Except those carefully designed URLs are often hidden behind a client-side RPC library. So what was the point of doing it REST-style in the first place?


This guide has been in use since 2014, including for the recently launched Cloud Spanner API.

Disclaimer: co-author of the design guide.


Many Google APIs were created before this guide existed. New APIs published at https://github.com/googleapis follow it. Having the same API available via both REST and gRPC is very valuable, as gRPC often provides 10x the performance.


Thanks for the comment. The error handling chapter will be published in a few weeks. For now, you can refer to https://github.com/googleapis/googleapis/blob/master/google/....

Disclaimer: I am one of the co-authors.


The canonical error codes used by Google, mapped to HTTP status codes: https://github.com/googleapis/googleapis/blob/master/google/...
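
For illustration, here is the commonly cited mapping written out as a Python dict. This is a partial sketch; the code.proto file at the link above is the authoritative source:

  # Canonical gRPC error codes and the HTTP status codes they are
  # conventionally mapped to (illustrative subset, not the source of truth).
  GRPC_TO_HTTP = {
      "OK": 200,
      "INVALID_ARGUMENT": 400,
      "FAILED_PRECONDITION": 400,
      "OUT_OF_RANGE": 400,
      "UNAUTHENTICATED": 401,
      "PERMISSION_DENIED": 403,
      "NOT_FOUND": 404,
      "ALREADY_EXISTS": 409,
      "ABORTED": 409,
      "RESOURCE_EXHAUSTED": 429,
      "CANCELLED": 499,
      "UNKNOWN": 500,
      "INTERNAL": 500,
      "DATA_LOSS": 500,
      "UNIMPLEMENTED": 501,
      "UNAVAILABLE": 503,
      "DEADLINE_EXCEEDED": 504,
  }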


The proxy is packaged as a Docker image, so it can run anywhere Docker is supported. For performance and convenience, the proxy and the server typically run next to each other, but that is not required.


Have you considered using centralized configuration storage (such as S3 or similar) with access control and an audit trail? That makes it easier to update configs without restarting all the servers.
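
A minimal sketch of what I mean, using boto3 (the bucket name, key, and poll interval are hypothetical):

  import json
  import threading

  import boto3

  s3 = boto3.client("s3")
  config = {}

  def refresh_config():
      # Bucket policies provide the access control; S3 access logging /
      # CloudTrail provides the audit trail mentioned above.
      obj = s3.get_object(Bucket="my-app-config", Key="prod/config.json")
      config.update(json.loads(obj["Body"].read()))
      timer = threading.Timer(60, refresh_config)  # re-poll every minute
      timer.daemon = True
      timer.start()

  refresh_config()

Servers pick up config changes on the next poll, with no restart.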


Agreed. If you use any SDK built on top of the Application Default Credentials library [0], it should work automatically and transparently across different environments. The complexity is handled by the library.

[0] https://cloud.google.com/docs/authentication
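
For example, a minimal Python sketch using the google-auth library; the same code picks up credentials from a local gcloud login, a key file pointed to by GOOGLE_APPLICATION_CREDENTIALS, or the GCE/GKE metadata server. The storage URL is just one example endpoint, and some environments may require explicit scopes:

  import google.auth
  from google.auth.transport.requests import AuthorizedSession

  credentials, project_id = google.auth.default()  # project_id may be None
  session = AuthorizedSession(credentials)  # refreshes tokens as needed
  resp = session.get("https://storage.googleapis.com/storage/v1/b",
                     params={"project": project_id})
  print(resp.json())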


If we look at how much technology has advanced over the last few decades, the economy produces more than enough output for basic needs, which could be distributed to everyone without much burden.

We could simply offer a basic income to everyone, then get rid of most special-case rules: food stamps, minimum wage, all kinds of deductibles, flexible spending accounts, childcare subsidies. We could tax all income at a fixed rate and be done with it. All of this could be tracked trivially by a simple computer system; the only thing needed is identity verification once every year or two.
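
A toy calculation (all numbers hypothetical) shows why this works: a flat tax combined with a basic income is already effectively progressive, with no extra rules needed:

  UBI = 12_000      # hypothetical annual basic income
  FLAT_RATE = 0.30  # hypothetical flat tax rate

  for gross in (20_000, 50_000, 200_000):
      net_tax = gross * FLAT_RATE - UBI  # the basic income acts as a refundable credit
      print(f"{gross}: effective rate {net_tax / gross:.0%}")
  # -> 20000: -30%, 50000: 6%, 200000: 24%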

We would have a much greater society and economy, and avoid the need for "job creation like Walmart".


For consumer applications, you are limited by the ecosystems, such as Android, iOS, or Windows, so you can't choose the kernel anyway.

For cloud applications, what matters more is the system architecture, not the kernel. If we assume Docker is the preferred way to deploy an app, we only need very limited features from the kernel:

- If I specify the CPU and RAM requirements, the kernel simply allocates them to the container. There is no need for complicated dynamic scheduling and balancing (sketched in the example below).

- The network within the same data center has much lower latency than a hard disk (<<1ms vs ~10ms). We would be much better off using network storage or a network database.

- If we assign a network address to each container, the kernel can deliver network packets to the container, and the container uses its own CPU and RAM to process its own traffic. This avoids the situation where the kernel spends a lot of CPU/RAM processing network traffic on behalf of all containers and then has to find a correct way to charge the cost back to them.

If we do all of the above, cloud applications need very few kernel features, since those features are replaced by cloud services like storage and databases. This would make developing and deploying cloud services easier and more productive.
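
A minimal sketch of the first point in the list above, using the Docker SDK for Python (the image name, network, and limits are hypothetical; assumes the docker package and a local daemon):

  import docker

  client = docker.from_env()
  container = client.containers.run(
      "my-cloud-app:latest",    # hypothetical image
      detach=True,
      nano_cpus=2_000_000_000,  # pin 2 CPUs (1 CPU = 1e9 nano-CPUs)
      mem_limit="4g",           # fixed 4 GB of RAM, no dynamic rebalancing
      network="appnet",         # user-defined network, so the container gets its own address
  )
  print(container.id)

The resources are declared once, up front; the kernel just has to honor them.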


A single disk seek takes a few milliseconds. If you need to do any kind of disk verification, it will take longer than a few milliseconds.


The "few milliseconds" was a response to "Your system will start faster because it doesn't have to do things like initialize the various pseudo-filesystems Linux has, or initialize the virtual file system, or any number of other tasks." Doing that init takes very little time and zero disk accesses. That and a few megabytes of ram are all you save by using a microkernel instead of linux in this situation.

Linux won't access the hard drive if it's unnecessary, same as the microkernel.

