These are all points that were brought up in the article as to why voice recording is less useful than all of the other tracking mechanisms advertisers have available.
Oh man, NYC taxis. I swore them off after one too many "Oh, the card reader is broken" lines. I just explain I'm traveling for work and my company card is the only way they'll get paid, et voilà, the card reader is magically working again! Whereas not only do Uber and Lyft only take digital payment, but both offer direct integration into my corporate expense system. Bring on the robo taxis: one less scammer in the loop.
OP's point is funnily accurate if you consider that a certain 'real-estate' person in government could stand to benefit from the buildings being sold at the lowest price (recovering market), and that same person will likely be friends with the buyers (lots of negotiating power).
They never mentioned whether it’d actually be beneficial for the government.
Can’t wait for all of our federal agency buildings to be named after different members of the Trump family once they’ve changed hands! /s
A while back I worked with someone who wrote a script to scroll through ebooks he purchased, screenshot each page, and then aggregate the screenshots into a single PDF file.
The simplicity of the approach seemed pretty awesome.
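A minimal sketch of that kind of script in Python, assuming a desktop reader window that advances pages with the right-arrow key; the page count, delay, key binding, and output filename are all placeholders:

    import time
    import pyautogui  # pip install pyautogui pillow

    PAGE_COUNT = 300        # hypothetical page count
    RENDER_DELAY = 1.0      # seconds to let each page render

    pages = []
    for _ in range(PAGE_COUNT):
        # capture the current page as a PIL image
        pages.append(pyautogui.screenshot().convert("RGB"))
        pyautogui.press("right")   # advance to the next page
        time.sleep(RENDER_DELAY)

    # stitch all screenshots into one multi-page PDF via Pillow
    pages[0].save("book.pdf", save_all=True, append_images=pages[1:])

You'd want to crop to the reader window and tune the delay, but that's the whole idea.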
The article title is: "Energy Department Acts to Lower Prices and Increase Consumer Choice with Household Appliances", which says nothing of substance.
The only part of the article that explained what was done states: "Today’s actions postpone the efficiency standards for the following home appliance rules:"
Would a title of "US energy department indefinitely postpones efficiency standards for home appliances" sound less deceptive to you?
> The title of the submission is nowhere in line with what the linked page even talks about. Flagging for outright lying. All the press release mentions is that they are not moving forward on new standards that Biden was pushing for while he was in office. Existing energy efficiency standards for appliances are still in effect.
This is a very odd take. The title is based on the only part of the article that isn't PR fluff and describes what was done:
> Today’s actions postpone the efficiency standards for the following home appliance rules:
Central Air Conditioners
Clothes Washers and Dryers
General Service Lamps
Walk-In Coolers and Freezers
Gas Instantaneous Water Heaters
Commercial Refrigeration Equipment
Air Compressors
I can't change the title, but if it's worth changing, someone will do it or it will just be resubmitted.
I think "new" would make sense and I accidentally conflated them also writing about the EPA rolling back *existing* standards for other appliances in this article.
Over the years I've spent a lot of time talking engineers and managers out of using serverless AWS options for various reasons. I've found that most non-infra-focused engineers and managers see serverless marketed as "simpler" and "cheaper".
It's often the opposite, but most people don't see that until after they've built their infrastructure around it and gotten locked in, and then the surprise bills, difficult-to-diagnose system failures, and hard limitations start rolling in.
A bit of early skepticism, and alternative solutions with a long-term perspective in mind, often go a long way.
I've seen successful serverless designs but the complexity gets pushed out of code and into service configuration / integration (it becomes arch spaghetti). These systems are difficult to properly test until deployed to cloud. Also, yeah, total vendor lock in. It works for some teams but is not my preference.
If you have low traffic or low server utilization, such as with B2B applications, "full container on serverless" can be insanely cheap. Running FastAPI, Django, Rails, etc. on Lambda when you've only got a few hits during the day and almost none at night is very cost effective.
We do this at my current job for most of our internal tools. Not a setup I would choose on my own, but serviceable. Using a simple handler function that uses mangum [0] to translate the event into a request compatible with FastAPI, it mostly Just Works™ the same in AWS as it does locally. The trade-off is somewhat harder troubleshooting, and there are some cases where it can be difficult to reproduce a bug locally because of the different server architectures.
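For reference, the wiring is roughly this (the route is a placeholder; the Mangum adapter call is the standard pattern from its docs):

    from fastapi import FastAPI
    from mangum import Mangum

    app = FastAPI()

    @app.get("/health")
    def health():
        return {"status": "ok"}

    # Lambda entry point: Mangum translates the API Gateway / ALB event
    # into an ASGI request that FastAPI understands.
    handler = Mangum(app)

Locally you run the same app under uvicorn; on AWS, Lambda invokes handler.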
What's also surprising is people getting excited and "certified" on AWS (and attending AWS conferences, lol), and job postings requiring you to "know" AWS for a developer position. Why on earth do I have to know AWS to develop software? Isn't that supposed to be covered by DevOps or sysadmins? If one word could define AWS it would be: overengineered. The industry definitely does not need all that machinery to build things, probably a fraction of what is offered and way, way simpler.
Because if you hire a DevOps in the original sense of the term, they need to know AWS (assuming that's the cloud vendor the company posting the job is using). DevOps means develop and operate; that was the raging new concept. Since the actual sysadmin work of setting up hardware is no longer needed when hosting on AWS, the developer takes on the hosting and operation. But now that cloud infrastructure has become so damn complicated, most DevOps roles define the "dev" as developing and maintaining the zoo of tools and configurations, leaving no time for actual development of the product; that is handled by another team. And we are back full circle to the times before DevOps. Our company still runs on the old definition, and it is manageable.
Because the roles are increasingly blurring and require both dev and ops knowledge. AWS gives you a lot of power if you buy into it, but of course that comes with a whole set of trade-offs. There won't be less cloud in the future, no matter one's personal feelings about it.
Our team kinda thinks the same thing about serverless, but despite that we have some things built with it. And the paradoxical thing is that these issues have just never materialized; the serverless stuff is overwhelmingly the most stable part of our application. It's kinda weird and we don't fully trust it, but empirically serverless works as advertised.
My experience: the system went down every time we had significant load. The reasons varied, but all were triggered by load. We switched to ECS + Aurora: the problem is gone, and the bill has only slightly increased.
As with many of these things, I've seen it time and time again: the initial setup can be simple, and perhaps that's a good thing for an org that needs to get moving quickly, but very soon you start to see the limitations, the more advanced stuff just isn't possible, and now you're stuck, locked in, and have to rebuild from the ground up.
Sometimes you have to do the (over) engineering part yourself, so that someone else isn't making the decisions on your behalf.
Just a counterpoint, but my experience has been the opposite.
Take a legacy application that runs on a single server and is struggling to scale, with a spaghetti of config files all over the box and security issues that have gone unpatched because no one wants to touch the box; it's too dangerous to change anything, as everything is intertwined. No backups, or unverified backups. No redundancy. No reproducible environments, so things work locally but not on remote. It takes devs days to pin down issues.
Then break it down and replace it with individual fully managed AWS services that can scale independently, so I don't need to worry about upgrades.
Yeah, the total bill for services is usually higher. But that’s because now there’s actually someone (AWS) maintaining each service, rather than completely neglecting it.
The key to all this is to use IaC such as AWS CDK so that code is the single source of truth. No ClickOps!
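As a rough sketch of what that looks like with CDK in Python (stack name, runtime, and asset path are illustrative, not from the setup above):

    from aws_cdk import Stack, aws_lambda as _lambda, aws_apigateway as apigw
    from constructs import Construct

    class InternalToolStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)

            # Lambda function packaged from a local directory
            fn = _lambda.Function(
                self, "ApiHandler",
                runtime=_lambda.Runtime.PYTHON_3_12,
                handler="main.handler",            # module.function entry point
                code=_lambda.Code.from_asset("app/"),
            )

            # Expose it behind an API Gateway REST API
            apigw.LambdaRestApi(self, "InternalToolApi", handler=fn)

Everything above lives in version control and goes out through cdk deploy, so the console stays read-only.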
I mostly understand what you’re saying and I hope I didn’t come across as saying serverless is *never* a good idea.
In my experience it's just very often not the best choice. I will always keep an open mind about solutions, but serverless often comes with shortcomings that teams don't notice until it's too late. I've seen it happen enough times that I feel I must keep a healthy dose of skepticism when anyone proposes swapping a small part of the stack for serverless options.
I’ve been working with AWS for about 8 years now and have worked on supporting dozens of projects across 5 different orgs ranging from small 5-person shops to 1000+ people. I have only seen a handful of cases where serverless makes sense.
> Take a legacy application that runs on a single server and is struggling to scale, with a spaghetti of config files all over the box and security issues that have gone unpatched because no one wants to touch the box; it's too dangerous to change anything, as everything is intertwined. No backups, or unverified backups. No redundancy. No reproducible environments, so things work locally but not on remote. It takes devs days to pin down issues.
All of these issues are easily possible with many serverless options. What it really sounds like is that your org went from a legacy architecture that built up a ton of tech debt over the course of years, with no one owning the basics (i.e., backups), to a new one with less tech debt, since it was built specifically around the current issues you were facing. In 3-5 years I wouldn't be surprised to see the pendulum swing back in the other direction as staffing and resources change hands and new requirements emerge.
I just moved to Oakland and pay $50/month for 10 Gbps fiber with Sonic. I thought that was just the norm here after staying in a few SFH BnBs in the area that all had the same. I’m kinda surprised SF doesn’t have better connectivity.
Is the issue just a matter of different regulations increasing the cost of installation across the Bay?
Where is this mythical "working poor" that drives into midtown Manhattan every day for work? Do you have any stats whatsoever on the number of people who would be impacted? Maybe even a salary range you consider to be "working poor"?