I was tempted to make a finger gun at this post and take a photo to show how ridiculous this is, but I'm afraid I'd (deservedly) get hammered by dang for it.
I find the recommendations genuinely horrible. I only get channels like RealLifeLore, Adam Something, etc., with sometimes very shallow takes on politics. And after marking those channels with "I don't want to see this", I get their hundreds of copycats.
I mostly watch channels like Aswath Damodaran, mCoding, etc., which are not related to the contents of the mentioned channels at all, yet their recommendation engine still shows me all this stuff.
Talking with friends: some report that most recommendations align with their interests, even surprising ones in categories they hadn't expressed interest in (hadn't yet watched or liked a video on the theme). However, like you, other friends report routinely awful recommendations across the board. My experience is somewhere in the middle.
I wish there were a way to see how it places you in various demographics, with a corresponding 'slider' to move yourself in/out of topics.
Most of my recommendations are OK, but after 10 or so, YouTube starts recommending MMA videos, football, the latest K-Pop song, "I am pregnant!" trending crap. None of it is relevant; I keep telling it I'm not interested, yet YouTube keeps recommending trending topics.
It just doesn't bloody care.
I added another reminder to my todo list to get rid of Premium and go rogue. My biggest issue is watching on Android TV and the iPad: I'll need a good ad-blocking setup, because no way in hell will I watch ads.
Hmm, in Germany children and teens can still enter into contracts, as long as they can pay for them with their monthly pocket money. Every single thing you buy in a supermarket here is also an implicit contract (Kaufvertrag, a purchase contract).
So, you could theoretically rent servers and buy ad space on AdSense, as long as your pocket money covers it. On the other hand, this is a liability for Google and $provider, as most of them are post-pay. If a teen ever decides to spawn an A1000 instance with 36 cores and cannot pay, the contract is void and Google has no claim to compensation.
Pre-paid stuff like a small instance on vultr.com should work fine, though.
I seem to remember that the contract is valid if a minor of that age typically has that amount of money available and can understand the consequences of the contract. So the business owner does not need to know how much money the particular kid gets. Even if the particular kid gets exceptionally little pocket money, the contract is still valid and the parents are responsible for its legal implications.
Of course, what is typical for a certain age might be up to interpretation by the courts in the worst case (and occasionally it is).
I have not heard that it would be outdated as a concept. What would the replacement be? To my understanding it is generally recommended to give minors some money, and parents should not interfere much even if they disagree with how it's spent.
I have no idea what the term would be in English-speaking countries, or whether "pocket money" sounds archaic; the translation sounds fine to me, though I'm not a native speaker.
The sum might depend a lot on the country. I don't know any 16-year-olds, so I can't tell.
https://www.wiado.de/taschengeldparagraph-was-ist-das/ has a table. It doesn't look completely unreasonable for Germany. Please note that this is not a law, so if a case goes to court, the judge might arrive at slightly different figures.
Unrelated to this project, but I dislike the obsession with `unsafe` within the Rust community.
Sometimes I need to dereference a raw pointer (rare!).
Sometimes I actually know what I'm doing (very rare!!).
Sometimes I rigorously tested my code (exceptionally rare!!!).
When I see people making PRs (to e.g. Actix) to change unsafe code to safe code in an API the user *never* sees, which results in a performance penalty, just for the sake of not using the word "unsafe" in the code, I get mad. I totally understood Nikolay's reaction back then. Random people opened PRs and flamed him without knowing anything about the internals and the consequences.
The unsafe keyword means that I know what I'm doing. Just trust me for once, please.
Edit: if you actually want to know what you're doing too, I recommend writing some linked lists. I hate linked lists with a passion; I think they're a bad data structure, and you should use Vec 90% of the time and VecDeque the other 10%. But they help you understand what you're spending your electricity on.
Why should I? Trusting random people is exactly why C(++) libraries are under constant attack through use-after-free and buffer overflow exploits. You can use `unsafe` in your code just fine, but don't expect others to just trust that you know what you're doing. There's no clear way to distinguish an expert in ownership and multithreading semantics from someone who copy-pasted their unsafe code from Stackoverflow.
I trust libraries that don't use `unsafe` more than I trust libraries that say they know what they're doing. It's nothing personal, it's just a preference for the type of bugs and vulnerabilities I'd like to avoid if I can.
As for whether the user sees it or not, that's irrelevant. The library can be buggy and I would never know. I'd rather have the borrow checker verify that the code isn't buggy than take your word for it. I know the borrow checker isn't perfect and I know there are good reasons why one would use `unsafe` in their code, but if possible I'd like the code I (re)use to be as safe as possible.
Actix is a library that very loudly proclaims "trust me, I know what I'm doing". Some people believe the authors; I prefer to use safer alternatives at the cost of minor performance penalties. Power to you if you disagree, but that's your choice and opinion as much as the library authors' is.
I don't think writing linked lists is enough to learn how to use `unsafe` code. You'd have to write a multithreaded linked list at the very least to get an understanding of why safe Rust has all of these limitations. Even then you may never actually hit the race conditions when you run your code, but at least it's a start.
I, for one, know that I'm not a capable enough Rust programmer to write well-tested, provably correct, multithreaded pointer magic for performance optimization, and I don't care enough to learn that art at the moment. If I were to publish a Rust crate, I'd much prefer the code to be at a level I can trust myself to maintain, which means no unsafe code. You may be better versed in the necessary semantics than I am, but as a library owner I'd need to be able to maintain your code if you open a PR against my library, which means you'd have to dumb down your unsafe code for me, sorry.
The problem is, do people know what they are doing?
I didn't follow the whole Actix situation carefully, but here is a discussion where someone found 15 ways to trigger undefined behaviour in safe code, caused by the unsafes in Actix:
Personally, I'd accept halving the speed of my project to reduce the possibility of remote security holes. We live in a dangerous world these days, and we should take every chance to minimise the risk of serious security issues.
What does it matter if a user never interacts with that API or not?
Rust is focused around -safety- and performance. I would rather have a slight performance hit and safe code, rather than trusting some random person to 100% correctly write unsafe code. Which is why tools like cargo-audit and cargo-geiger exist. IIRC Nikolay didn't communicate well about -why- unsafe was used, and just closed PRs that converted unsafe code to safe code.
> The unsafe keyword means that I know what I'm doing. Just trust me for once, please.
No, it means you think you know what you're doing.
It's more likely that you don't know what you're doing and/or are unnecessarily invoking unsafe for convenience, than the opposite. Theoretically I can look at your code and see if it's correct... or I could just use projects that don't use unsafe at all and save the time/headache.
When it comes to web server frameworks and security, I would like to see as little unsafe usage as possible, plus documentation of exactly why it's needed. Which is why people switched to Warp/Tower, and now Axum, which forbids unsafe code entirely.
If all I cared about were eking out every last bit of performance at the cost of safety, I wouldn't be using Rust in the first place.
I think the different philosophies you see around `unsafe` may come down to two related distinctions that both come up here:
#1: Low-level versus applications programming. In the former, unsafe is a regular part of (at least certain layers of) the code; i.e., you're working with memory (MMIO etc.) as a core operation, so you'll need `unsafe`. Things get ambiguous for peripheral typestates, owned singletons for register blocks, and the like; the line blurs between what you're using the ownership model for and which APIs should be marked `unsafe`. For higher-level uses like desktop programs and web servers, you may not need any `unsafe` at all.
#2: Libraries versus programs.
This is directly related to your main point: if you're using someone else's code as a dependency, unsafe can be a liability when you don't know why it's used. This is one aspect of the broader question of whether you can/should trust any given dependency, balancing not reinventing wheels against learning a library's quirks, edge cases, subtle bugs, complexity, etc. A spin on this is building infrastructure specifically; I think Actix's creators and users may have had different opinions on that.
It's all context-dependent. You're right that people shouldn't just drop into a project they don't understand and demand that all unsafes be factored out, but just because an unsafe block is internal and carefully vetted that doesn't mean it's totally fine and chill either.
Question: assume you have a modern Android phone like a Samsung Galaxy S20. When you factory reset your phone, you need to log in with your Google account to use it again (Factory Reset Protection). This is meant to make stolen phones worthless.
How does this work when your account has been banned? Is my phone a brick afterwards? Someone should try this out.
Some might see Occam's razor in this, but I could imagine it being a real edge case:
When I had an Android (OnePlus) device I would often factory reset it while tinkering, go to bed, and continue whatever I wanted to do the next day. Of course my phone had its bootloader unlocked and TWRP recovery, since I was trying to get LineageOS + spoofed signatures + microG to work.
So basically the scenario is: factory reset your phone, go to bed, wake up with a banned Google account, and have a new brick decoration for your garden.
Would be funny if they didn't think of this possibility.