_Number would not go up sufficiently steeply_, would be the major concern, not collapse. Microsoft might end up valued as (whisper it) a normal mature stable company. That would be something like a quarter to a half of what it's currently valued. For someone paid mostly in options, this is clearly a problem (and people at the top in these companies mostly _are_ compensated with options, not RSUs; if the stock price halves, they get _nothing_).
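To make that arithmetic concrete, here's a toy sketch (Python, with entirely made-up grant numbers) of why a halved stock price wipes out an option holder but only halves an RSU holder, assuming the options were struck at the grant-date price:

```python
# Toy comparison of option vs. RSU payoffs. All numbers are illustrative.
grant_price = 400.0   # hypothetical strike, set at the grant-date stock price
shares = 10_000       # hypothetical grant size

def option_value(stock_price: float) -> float:
    # An option is only worth the amount by which the stock exceeds the strike.
    return max(stock_price - grant_price, 0.0) * shares

def rsu_value(stock_price: float) -> float:
    # An RSU is just a share: its value tracks the stock price directly.
    return stock_price * shares

for price in (800.0, 400.0, 200.0):  # doubled, flat, halved
    print(f"stock={price:>6.0f}  options={option_value(price):>12,.0f}  "
          f"rsus={rsu_value(price):>12,.0f}")
```

If the stock halves from 400 to 200, the RSUs are still worth 2,000,000 here, while the options are worth exactly zero.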
The cost of the boat sinking is also very high, and that’s looking like the more likely scenario. Watching your competitors sink huge amounts of capital into a probably sinking boat is a valid strategy. The growth path they were already on was fine, no?
I hate to be cagey here but I just really don’t want to make anyone’s life harder than it needs to be by revealing their identity. Microsoft is a really tough place to be an employee right now.
> I mean, they've made all of the progress up to now in essentially the last 5 years
I have to challenge this one. Research on natural language generation and machine learning dates back to the 50s; it only recently came together at scale in a way that became useful, but tons of the hardest progress was made over many decades, and very little fundamental innovation happened in the last 5 years. The recent innovation has mostly been bigger scale, better data, minor architectural tweaks, and reinforcement learning from human feedback and other such fine-tuning.
We're definitely in the territory of splitting hairs, but I think most of what people call modern AI is the result of the transformer paper. Of course, that was built off the back of decades of research.
The article seems full of made-up things. The "coworker" isn't a real person but some kind of "composite of people", which makes me wonder whether the "she" is simply a random made-up person.
Then it says: "Engineers don't try because they think they can't." They don't try AI is what I understand, but that contradicts the whole article, that every engineer in Seattle is actively using AI, even forced too.
Then it says: "now believes she's both unqualified for AI work", why would they believe that? She's supposedly has been using AI constantly, has not been part of those "layed off", so must be a great AI talent.
Finally it says: "now believes she's both unqualified for AI work and that AI isn't worth doing anyway. She's wrong on both counts, but the culture made sure she'd land there." Which is completely usubstantiated and also coming from a person trying to grift us with their AI product which they want to promote and sell.
I don't know, it read like a shill article from a grifter.
Pain actually has a lot of objective parts to it. There are real chemical and mechanical processes involved. You could even argue the subjective part might be smaller than people think. Mindset can change the experience, but different people might just have different "pain functions" to begin with.
Same idea with hunger and weight gain or loss. Hunger is a biological process. You can push through it, but people also experience it differently because their actual hunger mechanisms differ, not just because they "interpret" it differently.
I don't care about the objective parts; the chemical and mechanical processes would have been exactly the same if it had been Lisa's lip that was bruised and bleeding instead of my own, or the lip of another boy halfway around the world, but it wouldn't have mattered to me in the same way.
A lot of people formed their view of remote work during Covid, but that wasn’t real remote work. Companies had no idea what they were doing, everyone scrambled, nothing in the culture or processes actually changed.
And it wasn’t just working from home, it was lockdown. Total isolation. Of course people missed human contact and blamed it on remote work.
Even the "productivity" data people cite is skewed, because most of it compares lockdown productivity to normal-life office productivity, which isn’t a fair comparison at all.
I think the issue is that the OP wasn’t giving an opinion. They stated things as facts. When you say “x is y” you’re making a truth claim, and people are going to challenge it if it sounds wrong or depends on context.
A lot of folks flip to “it’s just my opinion” only after they get pushback, but if you present something as a fact, it’s fair game to question it.
Like if someone says “apples taste bitter and have no flavor” that reads like a universal claim, so yeah people will argue. If you say “I find apples bitter and lacking flavor” that’s obviously personal taste and nobody is going to demand proof.
Nobody is asking for IMO everywhere. Just don’t frame opinions as facts or the other way around.
The cost of missing that opportunity is why they're heavily investing in AI; they don't want to miss the boat if there's going to be one.
And what else would they do? What's the other growth path?