The whole point is that, from their point of view, those decisions are rational. It's much more lucrative for a manager personally to run a smoke-and-mirrors, looks-good-on-PPT AI project. To stay safe from risk, don't give the AI people too much responsibility; let them "do stuff", who cares, the point is we can now say we're an AI-driven company in the brochures, and we have something to report up to upper management. When they ask "are we also doing this deep learning thing? It's important nowadays!", we say "Of course, we have a team working on it, here's a PPT!". An actual AI project would come with much bigger risks and uncertainty. As a manager, I might be blamed for messing up real company processes if we actually rely on the AI. If it's just there but doesn't actually do anything, it's a net win for me.
Note how this is not how things run when there are real goals that ML/AI can directly improve and the results show up immediately on the bottom line, like ad and recommendation optimization at YouTube or Netflix, or core product value at Tesla, etc.
The bullshit-PowerPoint AI with frustrated and confused engineers happens in companies where that connection is less direct and everyone has only a nebulous idea of what they would even want out of the AI system ("extract valuable business knowledge!").
What would you categorize as shiny in this case? "Spam detection, image labeling, event parsing, text classification" can be implemented in lots of ways, both simple and shiny.
Either way, I don't think it matters much, because people can't really tell simple from shiny as long as the buzzword bullet points are there.
The point is rather that the job of the data science team is to deliver prestige to the manager, not data science solutions to actual practical problems. It's enough if they work on toy data, show "promising results", and have percentages and impressive-looking charts and numbers on the PowerPoint slides.
I've heard from many data scientists in such situations that they get no input on what they should actually do, so they make up their own questions and tasks to model, which often have nothing to do with actual business value; they just toy around with their models, produce accuracy percentages, and that's enough.