Airbnbs only make sense in rural areas. Hotels are more competitive on price and location inside cities. Cities have enough to do that I don't want to be in my room much during a trip, so space and a TV simply don't matter to me.
Even in cities, would you rather have a clean, well-furnished apartment with plenty of space, a nice TV, a nice kitchen, and a nice view, or a hotel room? The only pros I can think of by comparison are room service, housekeeping, and the concierge, if you actually make use of those services.
Fair enough. You're right about me wanting to avoid interacting with people, especially when tipping is involved. But keep in mind, many people travel with others, and Airbnb is still the better option if you want to party a bit or have kids. Can't beat a private pool at an Airbnb house, or being able to entertain guests at a fancy one.
Some of us also like not having to check every lamp and decoration in a bedroom/bathroom wondering if there are hidden cameras in it placed there by the "host".
Nice article. I discovered this myself recently, but in a slightly different context: Taking the union of two sets represented as arrays. To my surprise, sorting the arrays and iteratively computing their union was faster than a hashset.
Others in this thread have mentioned that computing the hash takes time. This is true. Let n be the size of the array. When n is small, the constant per-element cost of hashing dominates the O(n log n) cost of sorting. Now consider what happens as n becomes large: virtually every hash-table access becomes a cache miss, and the constant cost of a cache miss is far larger than the extra log n comparisons per element that sorting pays. At the time, I did the math and determined that n would need to be some unreasonably gargantuan quantity (on the order of the number of stars in the visible universe) before log n is big enough to overtake the cost of the cache misses.
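For concreteness, here is a sketch of the two approaches in Rust (my own illustration, not the code I benchmarked back then), assuming the input arrays have already been sorted and deduplicated:

    use std::collections::HashSet;

    // Union of two sorted, deduplicated slices via a two-pointer merge: O(n + m),
    // and every memory access is sequential.
    fn sorted_union(a: &[u64], b: &[u64]) -> Vec<u64> {
        let mut out = Vec::with_capacity(a.len() + b.len());
        let (mut i, mut j) = (0, 0);
        while i < a.len() && j < b.len() {
            match a[i].cmp(&b[j]) {
                std::cmp::Ordering::Less => { out.push(a[i]); i += 1; }
                std::cmp::Ordering::Greater => { out.push(b[j]); j += 1; }
                std::cmp::Ordering::Equal => { out.push(a[i]); i += 1; j += 1; }
            }
        }
        out.extend_from_slice(&a[i..]);
        out.extend_from_slice(&b[j..]);
        out
    }

    // The hash-based alternative: O(n + m) on paper, but each insert is
    // effectively a random memory access once the table outgrows cache.
    fn hash_union(a: &[u64], b: &[u64]) -> HashSet<u64> {
        a.iter().chain(b.iter()).copied().collect()
    }

    fn main() {
        let a = vec![1, 3, 5, 7];
        let b = vec![2, 3, 6, 7, 9];
        assert_eq!(sorted_union(&a, &b), vec![1, 2, 3, 5, 6, 7, 9]);
        assert_eq!(hash_union(&a, &b).len(), 7);
    }

The merge walks both arrays front to back, which is exactly the access pattern prefetchers like; the hash version pays a potential cache miss per element.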
>And before you mention the 3rd world needs crypto as a "usecase"
I'm an American. I got into crypto completely by accident. I needed to quickly send a large amount of cash to a family member out-of-state. I couldn't wire the money due to transfer limits on my family member's account. My banker told me it would take up to a week for a personal check to clear. My bank charges $10 for a cashier's check. FedEx told me it would be $65 to overnight the check.
Someone suggested I try Bitcoin. I purchased the amount in Bitcoin on Binance for no fees. I sent the Bitcoin to my family member for a transaction fee <$3. They received the amount within an hour. Within another hour, they had converted back to fiat.
Bitcoin solved a problem for me, as an American. I'd be glad to learn of a cheaper, easier alternative.
Egg production fell 6.6% [1]. Egg prices rose 60%.
The author knows exactly how markets work: Prices are high because there is not enough competition. Cal-Maine is tricking people into tolerating higher prices by loudly complaining about every small problem they face. They're literally bragging about it on investor calls.
> Egg production fell 6.6% [1]. Egg prices rose 60%.
It seems plausible to me that the price elasticity of demand[1] for eggs is small - that is to say, even if the price of eggs doubles, people will only reduce their egg consumption by a small amount.
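As a rough back-of-the-envelope using the numbers quoted above (and treating the 6.6% production drop as the change in quantity consumed, which is a simplification):

    implied elasticity ≈ %ΔQ / %ΔP ≈ -6.6% / +60% ≈ -0.11

A magnitude that far below 1 is what "small" means here: it takes a large price increase to shave off even a small amount of consumption.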
If the prices jumped by 60%, and now the shelves were full of an abundance of expensive eggs, that would be suggestive that the reason for the spike in price is pure profiteering.
But I have seen no evidence that a large number of eggs are going unused because prices have spiked. So I think it's plausible that the increased prices really do reflect demand outstripping the reduced supply.
Eggs are a scarce resource. When the supply of eggs falls faster than the demand for eggs, the amount of resources required to obtain eggs will go up. Those resources can be money, or they can be time spent going from store to store searching for eggs, or they can be social connections in terms of knowing who can get you eggs and leaning on those connections. And those who have the ability to procure more eggs will now be able to demand more resources for those eggs.
You can say that it's unfair that Cal-Maine is getting a windfall here, instead of letting the grocery stores have a windfall, or instead of requiring people to burn time and gas looking for eggs while the price of the eggs they find remains the same. But I don't think you can say that Cal-Maine is the reason there is an egg shortage.
>It seems plausible to me that the price elasticity of demand[1] for eggs is small
Agreed.
>If the prices jumped by 60%, and now the shelves were full of an abundance of expensive eggs, that would be suggestive that the reason for the spike in price is pure profiteering.
Price gouging, one form of profiteering [1], “is the practice of increasing the prices of goods, services, or commodities to a level much higher than is considered reasonable or fair” [2]. Whether the product still sells has nothing to do with whether it's price gouging.
Cal-Maine’s profit margin is up 7,890% year-over-year [3]. This is textbook profiteering.
>And those who have the ability to procure more eggs will now be able to demand more resources for those eggs.
What one can do is rarely what one should do. The people who can’t afford eggs could resort to violence. But where would society be then?
>or instead of requiring people to burn time and gas looking for eggs while the price of the eggs they find remains the same.
Or, as with toilet paper, meat, and several other essential items during the pandemic, grocery stores could limit the number of eggs customers can purchase to keep shelves stocked.
The majority of egg production goes to commercial production, food service, etc.; consumers get whatever is left over, so a small drop in production can be very noticeable at the grocery store.
The percentages are irrelevant. The smaller the elasticity of demand, the greater the price change needed to bring supply and demand back in line. Most egg use is commercial and not easily changed; those buyers simply pay the higher price and use the same number of eggs. Thus we see a sharp reaction in the segment where egg demand is more flexible: consumers. The high egg prices drive some consumers to eat something else, bringing demand back down.
There isn't an oversupply of eggs: since egg prices went nuts I have seen many an empty shelf, and I have never seen eggs on clearance.
This is not a completely hashed-out thought. But I'll share it and see what others think.
My impression is that the simplest way to improve energy efficiency is to simplify hardware. Silicon is spent isolating software from other software, and time is spent copying data from kernel space to user space. Shift the burden of correctness to compilers, and use proof-carrying code to convince the OS that a binary is safe. Let hardware continue managing what it's good at (e.g., out-of-order execution). But I want a single address space with absolutely no virtualization.
Some may ask "isn't this dangerous? what if there are bugs in the verification process?" But isn't this the same as a bug in the hardware you're relying on for safety? Why is the hardware easier to get right? Isn't it cheaper to patch a software bug than a hardware bug?
A good reason why memory virtualization has not been "disrupted" yet seems to be fragmentation. Almost all low-level code relies on the fact that process memory is contiguous, that it can be extended arbitrarily, and that data addresses cannot change (see Rust's `Pin` trait). This is an illusion ensured by the MMU (aside from security).
A "software replacement for MMU" would thus need to solve fragmentation of the address space. This is something you would solve using a "heavier" runtime (e.g. every process/object needs to be able to relocate). But this may very well end up being slower than a normal MMU, just without the safety of the MMU.
> This is an illusion ensured by the MMU (aside from security).
Even in places where DMA is fully warranted, an IOMMU gets shoehorned in. I don't think there's any running away from the costs to be paid for security (not least for power-efficiency reasons).
But in this case the job of the hardware is to prevent the software from doing things, and it pays a constant overhead to do so, whereas static verification integrated into a compiler would be a one-time cost.
Arbitrarily complex programs make even defining what is and isn't a bug arbitrarily complex.
Did you want the computer to switch off at a random button press? Did you want two processes to swap half their memory? Maybe, maybe not.
A second problem to consider is that verification is arbitrarily harder than simply running a program, often to the extent of being impossible, even for sensible and useful functionality. This is why programs that get verified either don't allocate or do bounded allocation. But unbounded allocation is useful.
It is possible to push proven or sandboxed parts across the kernel boundary. Maybe we should increase those opportunities?
Also, separate address spaces simplify separate threads, since the threads do not need to keep updating a single shared address space. So L1 and L2 caches should definitely give address separation. Page tables are one way to maintain that illusion for the shared resource of main memory... probably a good thing.
That's not to say there isn't a lot of space to explore your idea. It is probably an idea worth following.
One final thought: verification is complex because computers are complex. Simplifying how processes interact at the hardware level shifts the burden of verification from arbitrarily long-running, arbitrarily complex, and constantly changing software to verifying fixed, predefined limitations on functionality. The second of those has got to be easier to verify.
I like this idea, and given today's technology it feels like something that could be accomplished and rolled out in the next 30 years.
If the compiler (like Rust's) can prove that out-of-bounds memory is never accessed, the hardware/kernel/etc. don't need to check at all anymore.
And your proof technology isn't even that scary: just compile the code yourself. If you trust the compiler and the compiler doesn't complain, you can assume the resulting binary is correct. And if a bug/0day is found, just patch and recompile.
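As a minimal sketch of the kind of guarantee being described (my own example; note that Rust today still emits runtime bounds checks rather than handing the kernel a proof):

    // My sketch: in safe Rust there is simply no way to express an unchecked
    // out-of-bounds access, which is the property a trusting kernel would rely on.
    fn sum(values: &[u64]) -> u64 {
        // Iteration cannot go out of bounds by construction.
        values.iter().sum()
    }

    fn main() {
        let data = vec![1u64, 2, 3];
        println!("{}", sum(&data));

        // Checked access: returns None instead of reading past the end.
        assert_eq!(data.get(10), None);

        // Direct indexing like data[10] compiles, but it carries a bounds
        // check and panics rather than touching memory it doesn't own.
    }

The open question is turning "the compiler inserted checks" into "the compiler proved the checks unnecessary" in a form the OS can verify, which is where proof-carrying code comes in.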
The reality is that we do want to run code developed, compiled, and delivered by entities we don't fully trust and who don't want to provide us the source or the ability to compile it ourselves. And we also want to run code that dynamically generates other code as it runs, e.g. JIT compilers, embedded scripting languages, JavaScript in browsers, etc.
Removing these checks from the hardware is possible only if you can do without them 100% of the time; if you can only trust 99% of the binaries you execute, that's not enough, and you still need this "enforced sandboxing" functionality.
Perhaps instead of distributing program executables, we can distribute program intermediate representations and then lazily invoke the OS's trusted compiler to do the final translation to binary. Someone suggested a Vale-based OS along these lines; it was an interesting notion.
I do not believe such OSes can ever be secure, given how often vulnerabilities are found in web browsers' JS engines alone. Besides, AFAIK the only effective mitigation against all Spectre variants is using separate address spaces.
My understanding is that's more or less what Microsoft was looking at in their Midori operating system. They weren't explicitly looking to get rid of the CPU's protection rings, but they ran everything in ring 0 and relied on their .NET verification for protection.
eBPF does this, but its power is very limited and it has significant issues with isolation in a multi-tenant environment (like in a true multi-user OS). Beyond this one experiment, proof-carrying code is never going to happen on a larger scale: holier-than-thou kernel developers are deathly allergic to anything threatening their hardcore-C-hacker supremacy, and application developers are now using Go, a language so stupid and backwards it's analogous to sprinting full speed in the opposite direction of safety and correctness.
I do agree. But being able to combine old ideas in new ways is also intelligence. LLMs have memorized a ton of information, and learned “information combinators” to compose them. All that’s missing is a clean way for LLMs to engage in the scientific method.
When I ask the question, what I really mean is "is there a mechanical structure that guarantees the correct output?" For example, we can train neural networks to perform functions such as "and", "xor", etc., and convince ourselves the network has "really learned" what it means to calculate the function.
Is that true for interpreting programming languages? If so, a bug isn't just "I haven't seen a similar enough example". It reflects a deeper mistake that will likely occur again.
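Here's a minimal sketch of what "convincing ourselves" can look like when the input domain is finite (my own example, with hand-wired weights standing in for trained ones):

    // My sketch: a 2-2-1 network whose hidden units compute OR and AND, with
    // the output computing "OR and not AND", i.e. XOR. The weights are
    // hand-wired rather than trained, which keeps the checking step honest.
    fn step(x: f64) -> u8 {
        if x > 0.0 { 1 } else { 0 }
    }

    fn network(a: u8, b: u8) -> u8 {
        let (a, b) = (a as f64, b as f64);
        let h_or = step(a + b - 0.5);  // fires when at least one input is 1
        let h_and = step(a + b - 1.5); // fires only when both inputs are 1
        step(h_or as f64 - h_and as f64 - 0.5)
    }

    fn main() {
        // With only four possible inputs, "really learned" can be checked
        // mechanically by enumerating the whole domain against the spec.
        for a in 0..=1u8 {
            for b in 0..=1u8 {
                assert_eq!(network(a, b), a ^ b, "mismatch at ({}, {})", a, b);
            }
        }
        println!("network matches XOR on every input");
    }

The catch is that interpreting a programming language has no such finite input domain to enumerate, which is why a bug there points at a deeper structural mistake rather than a missing example.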
Is it wise to compare spending within defense budgets between these countries? Much AI research in the US is conducted by private firms. How does the current defense budget compare to historical defense budgets, such as during the early days of the digital computer in the Cold War?
On a related note: yes, Chinese researchers have models that perform certain tasks well. But are those models useful in the contexts the author mentions?
To be sure, I don't disagree that AI research needs to be funded. I'm just genuinely curious about these points.
I switched from Firefox to Chrome several years ago. After the announcement of the planned changes, I tried switching back to Firefox. I just couldn't stick with it -- Chromium-based browsers are faster (e.g., [1]).
I decided to give Brave a try because it's Chromium-based. It's been great. I did have to modify a few settings to get the look and feel how I like, but I found it easy, and I expect others may not care. Brave's Shields block ads without the need for extensions like uBlock. I don't have any complaints so far.
There is a funny notion that professors work for students. They don't. They hardly even work for the university. It is more correct to say that they are affiliated with the university.
At top universities, professors bring in grant money or else they lose their jobs. The university takes a portion of each grant before any of the money is spent. Then, out of what remains, the professor pays the university for their graduate students' tuition and stipends, for the use of university technology, and for rent on research space. On top of that, the university only pays professors for 9 months out of the year, so another chunk of grant money finances the professor's salary for the remaining 3 months.
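To make the flow concrete, here is a purely hypothetical breakdown (illustrative numbers only; actual overhead rates, stipends, and salaries vary widely by institution):

    $1,000,000 award (hypothetical)
      ~$333,000  indirect costs ("overhead") kept by the university,
                 assuming a ~50% rate charged on direct costs
      ~$667,000  direct costs remaining, out of which the professor pays
                 (all figures hypothetical):
                   - graduate student stipend plus tuition, per student per year
                   - up to 3 months of the professor's own summer salary
                   - equipment, facility fees, travel, publication charges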
Top universities scrape billions off research grants annually. Those grants are won by faculty members, and they dwarf salaries: faculty members bring more cash into the university than their salaries cost.
In reality, professors finance the university. Students are mistaken if they think they are their professor's employer.