Who's getting approved for a house at $15 an hour? And now you're hoping your parents can support you, as if everyone has a healthy family life. As a not-so-fun fact, many homeless people are orphans who aged out of the system. They end up straight on the streets.
Can you clarify? From https://www.uber.com/us/en/ride/uberblack/, the only obvious location dependency is that drivers must "meet state- or local-level livery regulations" -- I think this just means that for this subset of drivers, Uber chooses to actually verify that they're following the law? Which I know is a deviation from their usual model, but seems like it's very literally the least they could do. Is there more per-geo (per-country?) variation in what "Black" means that's not documented there?
The main trick is in how you build up its context for the problem. What I do is think of it like a colleague I'm trying to explain the bug to: the overall structure is conversational, but I interleave both relevant source chunks and detailed, complete observations about the anomalous program behavior. I'll typically send a first message building up context about the program/source, and then build up the narrative context for the particular bug in a second message. This sets it up with basically perfect context to infer the problem, and sets you up for easy reuse: you can back up, clear that second message, and ask something else, reusing the detailed program context given by the first message.
Using it on the architectural side, you can follow a similar procedure, but instead of describing a bug you're describing the architectural revisions you've gone through, what your experience with each was, what your objectives with a potential refactor are, where your thinking's at as far as candidate reformulations, and so on. Then finish with a question that doesn't overly constrain the model; you might retry from that conversation/context point with a few variants, e.g.: "what are your thoughts on all this?" or "can you think of better primitives to express the system through?"
I think there are two key points to doing this effectively:
1) Give it full, detailed context with nothing superfluous, and express it within the narrative of your real world situation.
2) Be careful not to "over-prescribe" what it says back to you. These models are very "genie-like": they'll often give you exactly what you ask for in a rather literal sense, in incredibly dumb-seeming ways, if you're not careful.
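The two-message pattern described above can be sketched in code. This is a hedged illustration, not anyone's actual tooling: the function names, the toy source chunk, and the bug text are all made up, and the message dicts just use the `{"role": ..., "content": ...}` shape most chat-completion APIs accept.

```python
# Sketch of the two-message context-building pattern (names illustrative).

def build_context_message(source_chunks, program_notes):
    """Message 1: durable program/source context, built once and reused."""
    parts = ["Here's the program we're working on.\n"]
    for path, chunk in source_chunks:
        parts.append(f"--- {path} ---\n{chunk}\n")
    parts.append(f"Notes on observed behavior:\n{program_notes}")
    return {"role": "user", "content": "\n".join(parts)}

def build_bug_message(observations, question):
    """Message 2: the narrative for one particular bug. Cheap to swap out."""
    return {"role": "user", "content": f"{observations}\n\n{question}"}

context_msg = build_context_message(
    [("scheduler.py", "def pick_next(tasks): ...")],  # hypothetical file
    "Single-threaded event loop; tasks are expected to run to completion.",
)
bug_msg = build_bug_message(
    "Under load, pick_next sometimes returns a task that already finished.",
    "What are your thoughts on all this?",
)
conversation = [context_msg, bug_msg]

# To ask something else about the same program, back up to message 1 and
# append a fresh second message -- the expensive first message is reused:
reused = conversation[:1] + [build_bug_message("A different anomaly...", "Ideas?")]
```

The point of the split is exactly the reuse on the last line: the expensive, carefully assembled program context never has to be rebuilt when you change the question.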
Not OP but I’ll give my perspective. I have no problem fixing someone else’s bugs up to a point, but I don’t want to be the one cleaning up the mess of someone who didn’t take the time to do proper testing, yet is getting rated higher than me for clearing more tasks last month because they cut corners.
Modern microkernels deliver stability, security, and performance (look it up if you want the details). Back when I did CS we were talking about this as the next big thing in operating systems. It didn't happen - common operating systems instead expanded in scope, including things like a web browser and support for a gazillion pieces of hardware, rather than trying to "do things right".
The game changer part is of course in terms of the broader tech war. What we have here might be a consumer operating system that is technologically better than what is on offer from Apple, Google, and Microsoft. Built by a vilified Chinese company.
This is not a game changer. Microkernels have been a reality for ages. See QNX or even Fuchsia. I don’t know what "modern" microkernel means. The architectural concepts haven’t changed.
There are reasons nobody uses true microkernels. IPC is slow, and the gains are limited compared to the strategies all broadly used kernels already employ. They aren't purely monolithic kernels anymore: everyone, including Linux and Windows, has slowly but surely been shifting more and more things to user space in isolated processes.
Hongmeng might be an interesting kernel. It might also not be. Sadly it’s proprietary, and there are very few benchmarks not published by Huawei. Personally I won’t hold my breath for this one.
>IPCs are slow and the gains are limited compared to the strategies all broadly used kernels already use.
The problem you are describing is a characteristic of 1st-generation microkernels, and was solved by Jochen Liedtke in the mid-90s with the introduction of 2nd-generation microkernels.
seL4 is a 3rd generation microkernel.
>I don’t know what "modern" microkernel means.
To get up to date, a good resource is Gernot Heiser's blog[0], read from oldest to newest.
It’s not about being up to date. What you call modern here is just recent. It doesn’t fundamentally diverge from the historical architecture.
Even seL4's fast IPC, which is not actually a full IPC mechanism but works well in seL4's bare-bones context, remains in fact slower than good old syscalls.
The fundamental question remains the same: “Is this worth the costs (in terms of both efficiency and design complexity)?”
To me, the answer is muddy here. Sometimes yes, sometimes probably not. I think that’s why hybrid approaches are now generalised, but no one is really shipping a microkernel outside of industrial applications.
Sorry. Of course you're right - the game-changing part is that there is now an advanced consumer OS owned by a Chinese company. It being a microkernel is a small but important part.
I'm in a similar boat (grandfathered from Gplay), but remain apprehensive that the prices will continue to rise and the window will continue to shift towards enshittification.
I'm not worried as long as Google keeps their generous revenue split. The way it's set up now, effectively 50-50, means that the incentives of the creators and the service are aligned. Both parties want as much viewership as possible. If Google stops sharing revenue, then Google has an incentive that doesn't align with the creators, as the quality of the product effectively isn't important, and the quality creators will leave to another platform.
When that happens, I'll likely just move to Nebula.
It can when workloads are relatively predictable day-over-day but have low lows and high peaks. For example, my team has a service whose daily traffic peak is >5x our traffic minimum.
Scaling on traffic and resource demand gives us a higher average utilization rate for the hardware we pay for, especially when peaks are short-lived - an hour out of 24, for example.
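The utilization argument above can be made concrete with back-of-envelope arithmetic. The numbers here are illustrative, not the commenter's real figures: assume demand sits at its minimum for 23 hours and spends 1 hour at a 5x peak, and that provisioned capacity must always cover current demand.

```python
# Illustrative numbers only: 5x peak-to-min ratio, 1-hour peak per day.
min_load = 100.0                 # arbitrary units of demand
peak_load = 5 * min_load
hours_at_peak, hours_at_min = 1, 23

# Total demand over the day, in load-hours.
demand_hours = peak_load * hours_at_peak + min_load * hours_at_min

# Static provisioning: pay for peak capacity around the clock.
static_capacity_hours = peak_load * 24
static_utilization = demand_hours / static_capacity_hours

# Ideal autoscaling: capacity tracks demand exactly.
autoscaled_utilization = demand_hours / demand_hours

print(round(static_utilization, 2), autoscaled_utilization)  # 0.23 1.0
```

Real autoscaling never tracks demand perfectly (instances take time to spin up, and you keep headroom), so the true gain lands somewhere between these two bounds - but with short peaks, static provisioning leaves most of the fleet idle most of the day.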