pi-rat's comments

> I don't own an iPhone so I don't know for sure if old messages clog iPhones too but the article hints that this is the case.

> Maybe Apple is happy to sell new phones with more storage to people that run out of space. Maybe iMessage is a tiny storage eater, maybe not. Photos, videos, vocal messages are on the phone forever too?

iOS has a built-in tool that helps you identify and clean up space hogs (Settings → General → iPhone Storage). Its first recommendation is usually to remove large message attachments: it shows a list of all attachments in descending size, you select the ones you want to remove and hit delete. It also offers to automatically delete messages after some time, but that's a global option, not per chat, so it's not usable if you only want some conversations to be ephemeral.


We had a fleet of dedicated Hetzner boxes with 10 Gbit; it's just an option you get and pay a little extra for. Generally had good performance as well.


We got a day, or maybe even a week, of Linux on the desktop at least :)


With a CrowdStrike kernel driver, so technically not a Microsoft/Windows issue.


HFT generally gets credited with:

- Increased liquidity. Ensures there's actually something available to trade globally, and swiftly moves it to where it's lacking.

- Tighter spreads. The difference between buying and then selling again is smaller, which is often good for the "actual users" of the market.

- Global prices / fewer geographical price differences. You can generally trust you're getting the right price no matter which venue you trade at, as any arbitrage opportunity has likely already been executed on.

- etc.


> Tighter spreads. The difference between buying and then selling again is smaller, which is often good for the "actual users" of the market.

I just wanted to highlight this one in particular: the spread is tighter because HFTs eat the spread and reduce the mispricing that other market players could benefit from. The spread is disappearing because of rent-seeking by the HFTs.


What needs to be pointed out is that the rent and the spread are the same thing in this equation. Before the rise of HFT, actual people performed these functions, and the spread/rent-seeking was a lot higher.
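
A toy sketch of that point, with invented numbers: the amount a trader pays crossing the spread is exactly the amount the market maker collects, so tightening the spread shrinks the rent by the same amount.

    # Toy illustration (numbers are made up): a trader's round-trip cost
    # across the spread equals the market maker's gross revenue.
    def round_trip_cost(bid: float, ask: float, quantity: int) -> float:
        # Buy at the ask, immediately sell back at the bid.
        return (ask - bid) * quantity

    # Wide, human-market-maker-era quotes:
    print(f"{round_trip_cost(bid=99.90, ask=100.10, quantity=1_000):.2f}")  # 200.00
    # Tight, HFT-era quotes:
    print(f"{round_trip_cost(bid=99.99, ask=100.01, quantity=1_000):.2f}")  # 20.00
    # Either way, trader cost == maker revenue: the "rent" and the
    # "spread" are the same number; competition just made it smaller.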


Not that different from the ProDesk/EliteDesk small form factors, IMHO. There's a steady supply of cheap used ones on most online marketplaces (there are probably millions of them being cycled out of offices every year).

Get a new one as a replacement, swap over the NVMe. It doesn't have to be the same CPU; Linux handles the rest.


I worked for an FX market maker, and we just used "yards".

It removes any confusion in European and American collaboration.

It has old roots dating back to Cockney trading slang: "The Old Lady just bought half a yard of cable."
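
For readers outside the trade, a minimal decoding of that sentence: a "yard" is one billion (from "milliard"), which is exactly why it dodges the old European/American ambiguity around "billion", and "cable" is slang for the GBP/USD pair.

    # "Yard" = 10**9, from "milliard"; unambiguous in markets where
    # "billion" historically meant 10**9 (short scale) in the US but
    # 10**12 (long scale) in much of Europe.
    YARD = 1_000_000_000

    # "The Old Lady just bought half a yard of cable":
    # the Bank of England bought 500 million of GBP/USD.
    print(f"{0.5 * YARD:,.0f}")  # 500,000,000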


Link to implementation: https://github.com/maghoff/id30


Following the same vague "can be used to grow piracy" logic, a pretty big percentage of GitHub repos could be slapped with a DMCA notice and taken down...


> "can be used to grow piracy"

By that metric, GitHub should be taken down.


The whole internet, really.


Computers should simply be banned in the US since they facilitate piracy.


To end piracy, all creators must halt all creation.


In what way is ZFS a good filesystem for lightweight devices like watches, phones, tablets, and single-SSD/NVMe laptops?

APFS and ZFS have very different design goals.

That said, they obviously could have supported both, but from their perspective it makes sense to design for the 90%+ case.


In what way isn't it? Any potential shortfalls could have been made up for with a little engineering elbow grease.

What are the advantages APFS has over ZFS, for lightweight systems or otherwise?


> The APFS engineers I talked to cited strong ECC protection within Apple storage devices. Both NAND flash SSDs and magnetic media HDDs use redundant data to detect and correct errors. The Apple engineers contend that Apple devices basically don't return bogus data. NAND uses extra data, e.g. 128 bytes per 4KB page, so that errors can be corrected and detected. (For reference, ZFS uses a fixed size 32 byte checksum for blocks ranging from 512 bytes to megabytes. That's small by comparison, but bear in mind that the SSD's ECC is required for the expected analog variances within the media.) The devices have a bit error rate that's low enough to expect no errors over the device's lifetime.

https://arstechnica.com/gadgets/2016/06/a-zfs-developers-ana...

Dominic Giampaolo wrote BeFS, Spotlight, and now APFS.
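
For scale on the 32-byte figure in the quote: one of ZFS's selectable checksum algorithms is SHA-256, whose digest is exactly 32 bytes no matter how large the block is. A quick sketch:

    import hashlib
    import os

    # One fixed-size checksum covers a whole block, whether it's 512
    # bytes or a megabyte. (ZFS also offers fletcher2/fletcher4; SHA-256
    # is just the easiest to demo, since its digest is exactly 32 bytes.)
    for block_size in (512, 4096, 128 * 1024, 1024 * 1024):
        block = os.urandom(block_size)
        digest = hashlib.sha256(block).digest()
        print(f"{block_size:>8}-byte block -> {len(digest)}-byte checksum")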

In my 15-ish years running ZFS at home, the only time I've had corruption was when there were also noticeable hardware issues (cables, drives, enclosures). ZFS made them easy to deal with, but it wouldn't have helped if I hadn't already been running RAIDZ or mirrors. I've not looked recently, but in the past ZFS was extremely RAM-hungry and relatively CPU-expensive, not necessarily something optimized for mobile devices or battery life.


ZFS expects to have a huge cache, and defaults to 50% of memory, separate from any other filesystem cache. For advanced features, it requires a certain amount of cache per TB of storage.

For single-disk, non-checksummed, non-deduplicated storage, it's a lot of wasted code that a device with a "mere" gigabyte of RAM doesn't need. So APFS hits most of their needs: volume management + journaling + a better on-disk layout for SSDs.
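
A small sketch of checking that cache budget, assuming Linux with OpenZFS loaded (the sysfs path below is the standard module-parameter location; a value of 0 means "use the built-in default", i.e. the roughly-half-of-RAM figure above):

    # Compare the ZFS ARC cap to total RAM. Linux/OpenZFS only; the
    # parameter path may differ on other platforms or versions.
    def read_int(path: str) -> int:
        with open(path) as f:
            return int(f.read().strip())

    def mem_total_bytes() -> int:
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1]) * 1024  # reported in kB
        raise RuntimeError("MemTotal not found")

    ram = mem_total_bytes()
    # 0 means "default"; approximate that as half of RAM.
    arc_max = read_int("/sys/module/zfs/parameters/zfs_arc_max") or ram // 2
    print(f"ARC cap: {arc_max / 2**30:.1f} GiB of {ram / 2**30:.1f} GiB "
          f"({100 * arc_max / ram:.0f}% of RAM)")

Capping it is just a matter of writing a byte count to that same parameter (or setting zfs_arc_max in /etc/modprobe.d/zfs.conf), which is how people keep the ARC small on memory-constrained machines.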


Am noob. But as I understand it:

ZFS features like dedupe and data protection require a lot of RAM and run in the background;

filesystems optimized for different media (HDD, SSD, WORM, etc.) make different design choices.


Dedupe is an optional and often misunderstood feature. It's a solution to a niche problem, not something you should enable just because it exists.

Data protection doesn't require lots of memory, but the way it's implemented means you can't just use the filesystem cache to back memory-mapped files 1:1, the way you can with a less reliable filesystem that modifies file content in place without checksumming.

ZFS doesn't have to be the memory hog it's made out to be. A lot of the issues with ZFS on 32-bit systems came from the fact that, pre-Meltdown, the kernel heap had to fit into <1 GiB of address space shared with memory-mapped I/O. Often the cache had to be restricted to ~300 MiB, and ZFS tried to allocate contiguous 128 KiB buffers from it; since it was designed for 64-bit virtual address spaces, there was no support for chaining multiple smaller allocations to back a large block.


The big problem with APFS is that it was designed by people who believe in magical hardware that doesn't let the filesystem observe whole classes of errors...

