I’m not convinced by that framing, for many reasons, but it’s worth noting that, so far, it seems that more than a few “laws of the universe” end with the phrase “empirical observations will fall along the following probability distribution”, which is not how we usually understand “absolute”.
This is just bureaucracy, which has existed long before AI.
Ok, it's not just bureaucracy, because the mistakes that AI makes in favor of efficiency are different from those made by a long chain of overworked and sometimes corrupt people. But the root issue is not AI; it's the badly-implemented policies that the AI is designed to carry out. Look at any other strained system (food stamps, DHS, IRS, or, outside government, Google and Amazon customer support): we have real humans carrying out bad policies, and it's not much better.
Most people really do have morals and don't like hurting others, but those morals only go so far when you're part of an underfunded system, apathetic from burnout, and you literally don't have the resources or ability to make the morally correct choice. All this debate on how AI lacks empathy and common sense places way too much value on humans, who also lack empathy and common sense; and while an AI programmed to do something unethical won't hesitate, history and experiments show that some humans won't either.
We definitely should be learning and understanding the limits of AI and its hold on ethics, and not delegate policies like laws and income to 100% automated systems. But AI and improved efficiency really can help a lot of government institutions, especially when the AI takes over the mundane and the people in charge handle the more difficult cases, because then those people can spend their "human" empathy and decision-making where it counts.
I don't think so. Software-powered decisions do share a feature with any bureaucracy: inscrutability. Both the IRS and the Google datacenter are black boxes that are enormously powerful, automated, and dangerous. The ham-handed way in which they behave can sometimes be traced to the virtue of simplicity: do it the same way every time, without regard to context. Any programmer recognizes this as the simplest possible program! And if you decide to build in more "policy wiggle room", this often has the perverse effect of being immediately and ruthlessly exploited in ways that are difficult or impossible to police.
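To make that concrete, here is a minimal sketch (all names and thresholds here are invented for illustration) of the contrast between the context-free rule and the exploitable wiggle room:

    # Hypothetical sketch. The "simplest possible program": one fixed rule,
    # applied the same way every time, with no regard for context.
    def decide_rigid(case: dict) -> str:
        return "deny" if case["reported_debt"] > 0 else "approve"

    # Add "policy wiggle room" and the extra input is immediately gameable:
    # anyone who learns the hardship threshold can claim exactly enough.
    def decide_with_wiggle_room(case: dict) -> str:
        if case["reported_debt"] > 0 and case.get("claimed_hardship", 0) < 10_000:
            return "deny"
        return "approve"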
The new thing with software decisions is the unaccountability of responsible humans. It's like the corporate veil, but much worse. This one is... real, physical. A human can say not only that they didn't perform the action (the same is true of using a human underling), but now they can also say a) I didn't set the policy, It did, and b) I don't know where It runs, or how to change It or stop It. I imagine it will be quite the fashion for the hyper-wealthy C-suite to delegate to an AI and enjoy life, the continued high remuneration, and the decrease in accountability and liability.
The other thing is that, in general, software accelerates complexity and never reduces it. We live in a truly science-fiction era: our society can cheaply reproduce devices (chips) with 10^10 microscopic states, and we forbid each other from looking at, or even knowing, what those states are. We buy and sell these tiny machines, and we don't really know what's inside them. We connect them to the internet! There is just so much space for things to hide, it's frightening. Now consider that a typical bureaucracy tends to do more poorly the more complex its inner workings; now add thousands, millions of computers, with 10 generations of programmers' blood-soaked code (note: programmer generations are about 5 years long).
The IRS did audits based on political affiliations under Obama. I'm aware of some of these groups, and they were lucky to have really good accountants that kept great records and donated their time to handle the paperwork.
The news captured many examples of these selective audits that were based primarily on political whims. I'm still waiting to see the audit hammer thrown at Black Lives Matter for its money-laundering tactics and commingling of funds.
This is not wholly accurate. While it is true that under the Obama administration some conservative groups were inappropriately targeted by the IRS, a report released by the Treasury Department's Inspector General in 2017 found that such inappropriate targeting dated back to 2004 and had affected liberal organizations as well, meaning the misconduct was non-partisan in nature [1].
Accountability is not magic. There are lots of problems out there with no great solutions, or no solutions at all. People who say "increase accountability and things will work better" are usually people who haven't had to deal with such problems. This is where Values have their biggest impact on Outcomes. There are lots of corporate robots who appear to be just mindlessly optimizing for ladder climbing, wealth, power, etc., but oftentimes their Values are the only thing preventing them from turning into Putin, Epstein, etc.
> This is just bureaucracy, which has existed long before AI.
It's not inherent to bureaucracy. It's inherent to bureaucracy that's starved of resources and forced to implement policies which are contrary to the supposed purpose of the organization the bureaucracy manages. More to the point, blaming it on bureaucracy absolves the people in charge, who are really responsible for things like idiotic means-testing that punishes people for getting jobs, or C-levels who see customer support as a cost center.
Probably the difference is that we have systems in place: religion and other social structures. Machines can have systems too, but theirs are less open to interpretation, for better or worse.
> Interest in the possibilities afforded by algorithms and big data continues to blossom as early adopters gain benefits from AI systems that automate decisions as varied as making customer recommendations, screening job applicants, detecting fraud, and optimizing logistical routes.
No doubt optimizing logistical routes is a problem for algorithms and has been studied since, idk, the late 1930s? With the invention of linear programming? Or maybe later, but much earlier than the current wave.
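For concreteness, a toy instance of that classic formulation (every number below is invented), solvable with off-the-shelf LP tooling:

    # A toy transportation problem: ship from 2 warehouses to 3 stores at
    # minimum cost. This is the textbook linear-programming formulation of
    # route/shipping optimization; all numbers are made up.
    from scipy.optimize import linprog

    cost = [4, 6, 9,    # warehouse A -> stores 1..3
            5, 3, 8]    # warehouse B -> stores 1..3
    supply = [60, 40]         # units available at A and B
    demand = [30, 40, 30]     # units each store needs

    # Each warehouse ships at most its supply (<=).
    A_ub = [[1, 1, 1, 0, 0, 0],
            [0, 0, 0, 1, 1, 1]]
    # Each store receives exactly its demand (==).
    A_eq = [[1, 0, 0, 1, 0, 0],
            [0, 1, 0, 0, 1, 0],
            [0, 0, 1, 0, 0, 1]]

    res = linprog(cost, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
                  bounds=[(0, None)] * 6)
    print(res.x, res.fun)   # optimal shipments and total cost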
In contrast, screening job applicants is certainly done aggressively now, no doubt, but there's no evidence imo that scanning does more than reduce interviewer workload.
And product recommendations are one of those constantly talked-of and generally bullshitty applications that the average person can verify the ineffectiveness of.
Which is to say, perhaps algorithms today wind up being used because they allow the meta-corp to act as a blind juggernaut; that is the primary advantage of the approach, with the secondary advantage being to justify that operation. "Greater efficiency" sometimes results, but is more often an excuse.
I think the same may apply when people are trained and experienced in algorithms (and other technology) but not in values (and other aspects of the humanities). The extreme corruption I perceive in SV could be, IMHO, partly a consequence.
Most issues in SV, such as free speech, fraud, AI's impact on society, disinformation and misinformation, labor rights, the concentration of wealth and power, the role of government, mob rule online, narcissism and megalomania in leaders, the corruption of power, etc. etc. are mainly humanities issues. Many in technology have disparaged and avoided humanities education - itself an act of basic egocentric bias and a lack of skepticism - and it shows in the outcomes.
This is absolutely true. The typical tech CEO is a guy who got one B in a college history course and decided the whole subject was just so far beneath his genius as to be completely unworthy of study. And so... we get what we've currently got.
Paul Graham, for his flaws, was at least a painter.
I know, right? Peter Thiel is so much more enlightened, with his philosophy and law degrees. Or Zuckerberg with his deep interest in psychology and fluency in Latin and devotion to the Stoics.
Where is the evidence that people with Humanities knowledge are more ethical than those without? Where is the evidence that STEM people are some kind of psychopath stereotype?
Related: How many STEM people have knowledge in the Humanities? How many Humanities people have STEM knowledge?
There's one side that narcissistically decided that they're above the "cold facts and rules" of the Sciences. There's one side that devalues the other, and it's not the STEM one.
I was just kidding. I had a serious point though: If we blame the failings of tech on lack of humanities education, as the comment I was replying to did, we should trust those tech companies whose leaders do have that education more.
> Most issues in SV, such as free speech, fraud, AI's impact on society, disinformation and misinformation, labor rights, the concentration of wealth and power, the role of government, mob rule online, narcissism and megalomania in leaders, the corruption of power, etc. etc. are mainly humanities issues.
It's weird how absolutely none of these are actually classed as humanities subjects.
1. Free Speech: Political Science
2. AI's impact on society: Computer Science, Political Science, CS-oriented philosophy (which all ABET-accredited programs require)
3. Concentration of wealth and power: Economics, political science
4. Mob rule online: Sociology
5. Narcissism and megalomania: Psychology
6. Corruption of power: political science
7. Fraud: ???
8. Disinformation and Misinformation: Political science (these things are determined by the ruling class, not science).
It's especially common to group sociology, anthropology, political science, and psychology into the humanities (I wish they were), but to most people they are considered sciences. Just not hard sciences. This is all to say there are actual fields, with methods extending beyond "well, how do YOU feel about it?", specifically catered to address each of these issues. The uncultured swine in STEM you refer to are just doing their jobs. No highfalutin education in philosophizing will stop someone from making a bad algorithm to feed their family.
That's specious and argumentative (and was addressed in the quoted assertion). Here's where the lack of humanities fails us: almost all the responses are specious arguments that advance neither us nor the discussion.
In my opinion, the problem is much bigger than just a class of algorithms (AI) and is largely a side-effect of late-capitalism.
The primary issue is that software systems are inherently rule-driven (algorithmic) and that encoded rules are just that--encoded, fixed; for the duration of a release cycle, a system will operate in a fixed way. Private companies are the ones designing and developing these systems, and as far as private companies are concerned the only value they care about is profit. Ethics hardly enters the discussion when it comes to extracting the maximum capital possible. As long as there's no pre-established juridical ruling against it, the capitalist is not going to eliminate any approach for ethical reasons. Worse, because the algorithm is the product, these rules are carefully guarded, and as the product grows in popularity it is not only the experience but the algorithmic mode of reasoning itself that begins to dominate--people are complacent about, e.g., horrendous privacy practices because an ethics of digital privacy was never a question to begin with, and as soon as companies rolled out products built on this absence it became normalized.
Algorithms in themselves do not make values wither. A society in which anything other than monetary value was never given consideration in the first place establishes an environment in which values wither, whether or not that erosion is realized by digital or mechanical means. It all comes back to capital, baby. In other words, algorithms are one means of realizing the obfuscation and a-ethical modus operandi that the grand abstraction of near-unbridled capital already permits.
Socrates once had to admit that he did not understand the issue of names, because he was not able to afford the 50 drachma course on the matter. I'm now similarly embarrassed. I do not understand what it means for algorithms to rule (are laws algorithms? are procedures?) or for values to wither. I can only ask whether the fundamental problem in the robodebt scandal was not the shift in the burden of proof.
Laws are not algorithms, because a human legal system has wiggle room. You can negotiate, persuade, and the system is capable of dealing with unanticipated input. An algorithm is a strict series of steps, executed based on pre-determined allowable inputs. Algorithms are preferred by the ones defining them because A) they can be automated and B) they deflect blame: "It's not my fault, it's just policy" or "it's just the way the program works". They thus act as a proxy for the ruling class to operate through.
"Values" are ideas that we, as a society, "value". Compassion, understanding, etc. These tend to involve nuance, which a system designed in advance and applied blindly to a given circumstance inherently lacks.
Thus, in case you aren't simply feigning ignorance, the title could be expanded to: "When individuals with existing power over others design rigid systems to be applied indiscriminately to those beneath them for both convenience and to redirect the perception of responsibility, it leads to inhumane consequences."
I would push further because the difference is essential and massive. Human legal systems do not have 'wiggle room'. They are living, breathing bodies open to interpretation and change. In fact, radical change and radical interpretation.
There are times when laws have been strict (think of the US three-strikes rule or, a US 90s favorite, the Singaporean caning sentence for vandalism); however, the more rigid the set of rules, the more authoritarian a system is assumed to be, and authoritarianism is generally assumed not to be favorable.
Alternately, the rule of law means equality in the eyes of the law. Each and every person should be subject to the same rules, despite the variation in interpretation making for a variety of outcomes.
Algorithms lack this, and we're seeing the consequences in everything from mundane customer service interactions to the Chinese social credit system.
The "wiggle room" built into the lega system is because reasoning of any sort is always carried out under conditions of epistemic uncertainty. This is built into legal systems through concepts such as burden of proof, reasonable doubt etc. that acknowledge that the legal system makes mistakes frequently due to a lack of, or distorted information, and therefore there should always be avenues through which decisions can be challenged.
The second concept that is embedded (at least theoretically) in legal systems is the concept of procedural justice, that an outcome is Just iff all participants in the procedure that led to that outcome consider it Just. The most obvious example of this is trial by a jury of your peers.
In light of this you can see two obvious problems with these types of algorithmic systems. First, the people using them tend to treat the model as perfect, as if it can spit out guilty / not guilty answers; second, they are often black boxes, where it is impossible for the subject to challenge the outcome, as they don't know how it has been reached.
Ultimately I think these systems need to have a human in the loop and to be reasonable (in the sense that they can be reasoned about).
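As a sketch of what "human in the loop" could look like in code (the names and the threshold here are hypothetical, not any deployed system): the model only triages, adverse or contested cases always go to a person, and every decision carries a stated reason so the subject can challenge it.

    from dataclasses import dataclass

    @dataclass
    class Decision:
        outcome: str   # "approve" or "refer"
        reason: str    # stated basis, so the subject can challenge it

    # Hypothetical triage rule: automation may grant, but never finally deny.
    def decide(score: float, contested: bool) -> Decision:
        if score >= 0.9 and not contested:
            return Decision("approve", f"model confidence {score:.2f} >= 0.90")
        return Decision("refer", "adverse, uncertain, or contested: a human caseworker decides")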
> Laws are not algorithms, because a human legal system has wiggle room.
Is this "wiggle room" the fundamental difference between laws and algorithms, so fundamental that we have to fear values withering when algorithms rule, but not when laws rule?
Is it the reason the ruling class can't operate through laws, only through algorithms? Does it prevent humans executing policies from saying that "it's not my fault, it's just policy"?
> Is this "wiggle room" the fundamental difference between laws and algorithms, so fundamental that we have to fear values withering when algorithms rule, but not when laws rule?
It's more that human values have never been formalized before. It's not clear they can be formalized in a succinct form. This is actually the core problem of AI safety: if (when) we build a human-level-smart general AI, it will have some set of values, but unless those values are perfectly in line with ours, this exercise will most likely turn lethal to humanity.
Or, conversely, it seems that fully specifying what human values are is equivalent to building an aligned, human-level general AI.
Thus, the fundamental difference between laws and algorithms is that laws are executed by humans. The shared value system is implicitly embedded in the system. Even the best algorithms we can come up with today can't replicate that, which means treating their output as binding will result in judgements we'd generally consider immoral, unjust and wrong.
> this "wiggle room" the fundamental difference between laws and algorithms, so fundamental that we have to fear values withering when algorithms rule, but not when laws rule?
Yes. No rule is clairvoyant. Not all exceptions can be anticipated; exceptions have to be adapted to or stomped out. The law aims to do the former. Algorithms deliver the latter.
> the ruling class can't operate through laws, only through algorithms
Dictators’ decrees are closer to algorithms than law. They’re absolute in a way laws are not. To the degree they diverge, it’s in the enforcement, which is a sloppier version of the law.
Shifting the burden of proof wasn't the only change. They also retroactively shifted the burden of record-keeping, and removed the presumption of clerical error when they started automatically punishing people for being suspect.
From The Onion I recently learned about psychopathy[0], one[1] of the constituents of which is meanness.
> Meanness entails deficient empathy, lack of affiliative capacity, contempt toward others, predatory exploitativeness, and empowerment through cruelty or destructiveness.
I would not be surprised if any algorithm selected to be effective would wind up externalising its costs on others, hence displaying meanness.
[1] the other aspects are Disinhibition and Boldness. Comparing to Aristotle's Virtues, do we have reasonable matches between:
Boldness : excessive Fortitude
Disinhibition : insufficient Temperance
Meanness : Wisdom (knowing how to act for one's own benefit) without Justice (knowing how to act for everyone's benefit)?
And I say to any creature who may be listening, there can be no justice so long as laws are absolute. Even life itself is an exercise in exceptions.
-- Jean-Luc Picard