This is why we should get rid of as many of the distinct types and uses of plastic as possible, keep the remaining ones as non-toxic as possible, and put some kind of explicit tax on manufacturers to cover the full cost of cleanup.
If there were 2 types of plastic packaging and they were very obviously visually distinct, it would be much easier for anyone involved in the process to sort them (whether at home or at a recycling plant) compared to the current system with hundreds of distinct types of often visually indistinguishable mixed-material packaging, much of which we can't do anything with except chuck it in a landfill.
Yeah. At first I primarily used Kagi to move away from Google as a company, hoping for results that were equally good. But Google search actually feels crappy now in comparison.
> but I've never encountered a QA team that actually writes the tests for engineering.
I have a few times. But the only common thing in the QA industry is that every company does it differently and thinks they're doing it the "normal way".
> What’s the point in degrading the UX without telling the user 1) why the UX is degraded, 2) what they can do about it?
Because 1) everyone in the Apple world knows, and 2) they want the answer to "What can be done about it" to be "Shame your peers into switching to an iPhone".
And it works. A little too well, especially with younger folks.
I have never once felt any shame for using Android, nor have I felt any pressure to switch to Apple. If anyone in my social circle tried that sort of nonsense, I'd never stop ridiculing them about it.
It is. I live in NYC and stopped going to CVS late at night because there is such a looter problem in my neighborhood that I can't buy shit at that time. The looters are there with bags, ransacking the aisles, while I wait behind hoping they leave what I came to buy. Sometimes if it's locked they can't take that, but I still have to wait for them to finish before going through the aisle lol.
I wonder why they keep those stores open sometimes.
Throughout most of the rest of the country, CVS and Walgreens are shuttering stores all over the place and blaming it on theft. I have my doubts about theft being the culprit, but my nearest CVS is 20 minutes away, and there used to be 3 between here and there (which is likely the real reason they are closing them).
Not OP, but when I lived at 123rd and Lex, the A&P was looted constantly. Like OP, I just stood back and hoped they didn't go after my staples (beans, rice, spices).
FiDi. Only after dark (never seen it during the day), and not all of the pharmacies get hit as bad. There is like a million pharmacies around and I've only seen one closed so it can't be that bad, but damn it's annoying. Like, can you leave one bottle of shampoo please?
> which are now more or less a requirement if you don't want to lose your sanity while browsing the web
For a long time I wondered why people said this. I don't use ad blockers and didn't feel it was that bad.
Then 2 things made me understand. First, I pay for YouTube. If you don't and don't block their ads, they seriously test your patience as you browse. I tried it for less than an hour before I couldn't take it anymore.
The second was looking for torrents and hacks (it was for a legitimate and unambiguously legal purpose too, no gray area, but long story!). Those sites are literally impossible to use without ad blockers. Same thing for tools related to diagnosing PC issues. It's ads on top of ads on top of scams, all trying to get you to install some adware while you navigate the site looking for the actually legitimate tools.
I very rarely do either of those things (YouTube without subscription, and navigating the "gray" web), so I never realized just how fucking awful it can get.
Hell, I'm a YouTube Premium subscriber and I still have an ad-skipper for all of the in-video promotions. It's not quite as bad as direct YouTube ads have gotten but it's still a noticeable change in itself.
It's a pretty common issue. If you have 2-3 services, it's pretty easy to manage. And if you have 1000, you likely have the infra to manage them and get the full benefit.
But if you have 20 engineers and 60 services, you're likely in a world of pain. That's not microservices, it's a distributed monolith, and it's the one model that doesn't work (but the one everyone seems to end up with).
A "microservice" solves scaling issues for huge companies. If you have 60 microservices, you should probably have 600 engineers (10 per) to deal with them. If you're completely underwater and have 10 services per engineer, you're 100% absolutely play-acting "web-scale" for an audience of really dumb managers/investors.
With proper devops tooling and a half-decent design, even a junior engineer can manage several microservices without issues. Since microservices are about scaling people as much as they are about scaling tech, 10 people on one service is a lot to me in that world.
The best company I worked at had about 5-10 deployables per engineer on average and it worked really well. They were small, deployed almost instantly, dependencies were straightforward, etc.
Monoliths work fine too, it's just different tradeoffs.
I ended up getting into a few arguments with the overexcited engineer at my last place. He wanted microservices. I said it was just going to add complexity. The app was already a mess, and adding network calls rather than function calls wasn't going to help. We had a small team: 3 backend devs (one of them doing mostly devops) and two frontend.
It's not clear to me what you mean by dealing here. Do you mean developing? If so, I completely agree. If you mean deployments, a small number of engineers can manage hundreds of them easily.
It depends on how you do it. We have 5 engineers and around 50 services, and it’s much easier for us to maintain that than it was when we had a monolith with a couple of services on top.
Though to understand why this is, you would have to know just how poorly our monolith was designed. That’s sometimes the issue with monoliths, though: they allow you to “cut corners”, and suddenly you end up with this huge spiderweb of a database where nobody knows who pulls what from it, because everyone has connected some sort of thing to it, and now your monolith isn’t really a monolith anymore. Which isn’t how a monolith is supposed to work, but is somehow always how it ends up working anyway.
I do agree though, that the “DevOps” space for “medium-non-software-development” IT departments in larger companies is just terrible. We ended up outsourcing it, sort of, so that our regular IT operations partner (the ones which also help with networking, storage, backups, security and so on) also handle the management part of our managed Kubernetes cluster. So that once something leaves the build pipeline, it’s theirs. Which was surprisingly cheap by the way.
I do get where you’re coming from of course. If we had wanted to do it ourselves, we’d likely need to write “infrastructure as code” that was twice the size of the actual services we deploy.
> That’s sometimes the issue with monoliths, though: they allow you to “cut corners”
I find this hard to relate to: the idea that you'd have the discipline and culture to do microservices well when you can't do it with a monolith.
More likely is you migrate away from the monolith you never invested in fixing, and once you get to microservices you either call it a mistake and migrate back, or you have to eventually finally invest in fixing things.
Perhaps your microservice rewrite goes well because you now know your domain after building the monolith, that is another option.
With the microservice architecture it’s easier to lock things down. You can’t have someone outside of your team just grab access to a dataset or similar, because Excel can’t get a connection directly into your DB. Which is an argument you could rightfully make for monoliths too, except in my experience someone always finds a sneaky way into the data on monoliths, but it’s too hard for them to do so with microservices.
If you gave me total control over everything, I’d probably build a couple of monoliths with some shared modules. But every time the data is centralised, it always somehow ends up being a total mess. With microservices you’ll still end up with a total mess in parts of the organisation, but at least it’ll be in something like PowerBI or even your data warehouse and not directly in your master data.
Or to put it differently: for me, microservices vs monoliths is almost completely an organisational question and not a technical one.
It's not like microservices don't also give you chances to mess your data up. It's hard to do transactions across boundaries, you have to deal with eventual consistency, sometimes there is no single source of truth.
I struggle to see how microservices fix this for people, having worked primarily with them for the past 6 years.
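To sketch what I mean by transactions across boundaries, here's a rough, hypothetical example (the service URLs and endpoints are made up for illustration): in a monolith this whole flow would be one database transaction, but split across two services there is nothing to atomically roll back when the second call fails.

```typescript
// Hypothetical checkout flow spanning an orders service and a payments service.
// The URLs and endpoints below are illustrative only, not a real API.
const ORDERS_URL = "http://orders.internal/api/orders";
const PAYMENTS_URL = "http://payments.internal/api/charges";

async function checkout(cartId: string, amountCents: number): Promise<void> {
  // Step 1: the orders service records the order.
  const orderRes = await fetch(ORDERS_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ cartId }),
  });
  if (!orderRes.ok) throw new Error("order creation failed");
  const { orderId } = await orderRes.json();

  // Step 2: the payments service charges the customer.
  const payRes = await fetch(PAYMENTS_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ orderId, amountCents }),
  });

  // If step 2 fails, the order already exists and there is no shared
  // transaction to roll back. You're left with a compensating action
  // (cancel the order), retries, or an eventually consistent cleanup job,
  // plus a window where the data disagrees with itself.
  if (!payRes.ok) {
    await fetch(`${ORDERS_URL}/${orderId}/cancel`, { method: "POST" });
    throw new Error("payment failed; order cancelled via compensating action");
  }
}
```

None of that is impossible to get right, but it's exactly the kind of consistency work a monolith gives you for free.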
The thing with microservices is that shit doesn't have to infect everything. If someone in another team is clueless, they'll mess up their microservices but not anyone else's. If they are in a monolith, it's 50/50 based on how much clout they have (either they mess it up for everyone, or they get talked down and don't get to mess up their stuff either).
Unless it's a shared library, which is why good microservices architectures limit the shared surface as much as possible.
> it's 50/50 based on how much clout they have (either they mess it up for everyone, or they get talked down and don't get to mess up their stuff either)
This still happens with microservices though; people can still make terrible architecture decisions and stand up terrible services you depend on.
I have worked with a company that had around 8 developers and 30 'microservices'. They wanted the front end team (fully remote, overseas, different language, culture) to go micro front end. They are awesome at presentations and getting funded tho. A common theme in European startups.
A distributed monolith isn't defined by how many services you have; a better question is how many services you need to redeploy/update to make a change.
Yes, by the time you get to thousands of services you have hopefully moved past the distributed monolith, if you built one.
Even being used to it (from other languages), it's super jarring in JavaScript because it's redundant. JS has other, better ways to achieve the same result, so it's generally a step down just to be more familiar to non-JS devs.
Windows with JAWS or NVDA is fairly mainstream, and its accessibility features are pretty well understood and supported. VoiceOver less so. That only covers one facet of accessibility of course, but they have like 80 percent market share in that segment.
To be fair, they're not assuming anything, and Remix is like the first thing in the list, which is one of the major React frameworks right now. The name is not as descriptive obviously, but it's like someone saying SvelteKit and not mentioning Svelte proper.