I don't think so. Software-powered decisions do share one feature with any bureaucracy: inscrutability. Both the IRS and the Google datacenter are black boxes that are enormously powerful, automated, and dangerous. The ham-handed way in which they behave can sometimes be traced to the virtue of simplicity: do it the same way every time, without regard to context. Any programmer recognizes this as the simplest possible program! And if you decide to build in more "policy wiggle room", it often has the perverse effect of being immediately and ruthlessly exploited, in ways that are difficult or impossible to police.
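To make that concrete, here's a toy sketch (all names and thresholds are invented for illustration) of the "simplest possible program" for a bureaucratic decision, next to a version with added wiggle room — and note how every override path is a new surface to exploit:

```python
INCOME_LIMIT = 50_000  # invented threshold for illustration

def rigid_policy(application):
    # The simplest possible program: one rule, applied the same way
    # every time, with no regard to context. Ham-handed but predictable.
    return application["income"] < INCOME_LIMIT

def flexible_policy(application, valid_hardship_codes):
    # "Policy wiggle room": an override path for special cases.
    # Anyone who learns (or guesses, or trades) a valid code is approved,
    # and auditing who deserved the exception is hard after the fact.
    if application.get("hardship_code") in valid_hardship_codes:
        return True
    return application["income"] < INCOME_LIMIT
```

The rigid rule wrongly denies legitimate edge cases; the flexible rule approves anyone holding the magic token. That trade-off is the point of the paragraph above, not a flaw in this particular sketch.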
The new thing with software decisions is the unaccountability of the responsible humans. It's like the corporate veil, but much worse, because this one is real and physical. A human can say not only that they didn't perform the action (the same is true of a human underling), but also that a) I didn't set the policy, It did, and b) I don't know where It runs, or how to change It or stop It. I imagine it will be quite the fashion for the hyper-wealthy C-suite to delegate to an AI and enjoy life, continued high remuneration, and decreased accountability and liability.
The other thing is that, in general, software accelerates complexity and never reduces it. We live in a truly science-fiction era: our society can cheaply reproduce devices (chips) with 10^10 microscopic states, and we forbid each other from looking at, or even knowing, what those states are. We buy and sell these tiny machines without really knowing what's inside them. We connect them to the internet! There is just so much space for things to hide, it's frightening. Now consider that a typical bureaucracy tends to do more poorly the more complex its inner workings become; then add thousands, millions of computers running ten generations of programmers' blood-soaked code (note: a programmer generation is about 5 years).
The IRS conducted audits based on political affiliation under Obama. I'm aware of some of these groups, and they were lucky to have really good accountants who kept great records and donated their time to handle the paperwork.
The news captured many examples of these selective audits, which were based primarily on political whims. I'm still waiting for the audit hammer to be thrown at Black Lives Matter over its money-laundering tactics and commingling of funds.
This is not wholly accurate. While it is true that under the Obama administration some conservative groups were inappropriately targeted by the IRS, a report released by the Treasury Department's Inspector General in 2017 found that such inappropriate targeting dated back to 2004 and had affected liberal organizations as well, meaning the misconduct was non-partisan in nature [1].
Accountability is not magic. There are lots of problems out there with no great solutions, or no solutions at all. People who say "increase accountability and things will work better" are usually people who haven't had to deal with such problems. This is where Values have their biggest impact on Outcomes. There are lots of corporate robots who appear to be mindlessly optimizing for ladder-climbing, wealth, power, etc., but often their Values are the only thing preventing them from turning into Putin, Epstein, etc.