Saturday 13 April 2019

The Right to Rational Pessimism ~ The Parable of the Mountain Climber // @destandaard @mboudry

This article flowed spontaneously from the pen on Saturday 13 April and was released as a free (no paywall) supplement to De Standaard of that day. A supplementary paragraph to the article "Alle soorten pessimisten hebben ongelijk" ("All kinds of pessimists are wrong"; link without paywall) by @MBoudry
-oOo-

Pessimism in good times is paradoxically understandable and surprisingly rational. The better off you are, the more aware you become of all you stand to lose. The higher your position on the mountain, the finer the view, but also: the greater the chance your fear of heights will kick in.

Sure, stay perfectly zen and Buddha, but perhaps better not to look down along the rock face.

And conversely: only at the bottom of Hell is the optimistic slogan "The only way is up" an undisputed, tautologically absolute certainty.

-oOo-

Optimism is undeniably a more congenial complement to contentment: yes, things are good now, better than ever even; but we stubbornly hold the right (or rather, the moral duty) to keep searching for, and believing in, "even better".

Still, the pessimist at the party is not entirely wrong: voicing "potential loss" is rationally defensible. It is understandable to convert that awareness into due caution. Nor does it conflict with an appreciation of how good things are right now. It is simply good advice:

The mountain climber who reaches an unfamiliar height had best set a secure, reassuring anchor in the rock wall before he contentedly looks around, enjoying the result achieved. After that he can, in full confidence and with peace of mind, give in again to the restlessness of chasing the next altitude record. The solo climber who throws this advice to the wind is "needlessly reckless".

-oOo-

Despite their firm images of it, the actual future of both the optimist and the pessimist is, for now, unknown; and that is why their versions of it are both true at once. They exist in a kind of quantum-theoretical superposition of equal probability. The wave function of probabilities only collapses into the reality we want when we take action ourselves: by consciously choosing to durably anchor the progress we have already achieved. Only then do we secure the direction of progress: "Ever better!"


Wednesday 10 April 2019

A bot's license to kill.

This is a lengthy reaction to FLI Podcast: Why Ban Lethal Autonomous Weapons?
(LAW == Lethal Autonomous Weapon)

First off: thanks to the makers for the insightful podcast. Even if it made me cringe often, it also made me question, rethink and better argue my own beliefs and concerns in this area.

However... :)

There are three elements I sorely missed, or rather, I oppose three implicit assumptions that seem to go unquestioned:
  1. why weapons
  2. why robots vs humans
  3. why not consider promoting them

In reverse order:

Where are the opposing views? 

While I consider myself on the "ban" side, I really missed an honest investigation into what benefits there are. What could we gain from having (even massive) access to these LAWs?
Seriously:
  • These life-and-death decisions, and their execution, really look like one of those "dirty jobs" we'd happily offload to autonomous systems, right? Less stress for humans, less strain on the human conscience, ...
  • Humans are known to be error-prone, doubly so in stressful situations. There is no reason to see this as a benefit. When we point to that single case of (only humanly possible) mercy, we are probably ignoring the ten needless victims that same human made.
  • Everyone having one of those LAWs might sound horrifyingly dystopian, but it also kind of "levels the playing field": better avoiding power imbalance, distributing the potential use of (protective) lethal violence more evenly. In this (utopian) view we all get assigned a robotic guardian angel, one to be reckoned with. Hell, yeah!

Better examples can surely be found; but in any case: I really think AI is in the corner of "large potential beneficial progress" - I fail to see an a priori reason why this could not also apply to managing, controlling, countering and balancing lethal violence.
-oOo-

The strange case of two distinct ethics

Again and again I am puzzled by this artificial (and once more: upfront, unquestioned) divide between robots and humans in ethical questions. I understand these robots are new, and obviously not human. With their development new ethical questions arise, new insights are to be found. But these are to be extensions, completions, adaptations of whatever ethics we already have: there is no second set of ethics to consider. This is about us.
I've written at length about this in my blog series on AI ethics: Turing's mirror - Turing's Duck - Turing's Razor
To me the separating screen in Turing's test really functions as John Rawls' "veil of ignorance" or the blindfold on Lady Justice: "What if we didn't know whether it was a man or a bot?"

So for me, unwilling to make a priori human-vs-bot distinctions, this podcast was a treasure trove of cringeworthy moments, almost every time the 'human' is explicitly mentioned:
  • When "human dignity" is considered - as if the value of one's life is measured by the way it is taken away.
  • When killing robots are said to "dehumanize war" - as if man needed help to do that, really?
  • When we seek "meaningful human control" - as if we really trust humans?

As an aside: I live in Flanders, Belgium, near the visible remains and the memory of the devastation of World War I. Maybe the CCW should have been held over here, instead of in Geneva?

I loved the "do no harm" approach. But when it comes up, it gets to be the prerogative of the "medical community" (some special brand of humans?). Why is this not considered the core of the broader "human nature" (military and law-enforcement communities included)? And if so, why would it not naturally be expanded into a "universal guiding principle" that also drives and governs automatons?

In our own history we continue to abuse abstract divisions to separate, to blame, to dehumanize, to oppress and finally to kill. Let us stop now. AI challenges should help us overcome differences and divisions between humans; let us not need one more distinction to achieve that.
-oOo-

LnAWs

So, obviously, I find it awkward to hear fifty minutes of talk about banning lethal autonomous weapons that just takes for granted the natural existence of lethal non-autonomous weapons.
Whatever rules we have on the latter should naturally be extended and made applicable to these new ones. And if the new ones make us reconsider, it must be because we hadn't thought well enough about the old ones in the first place? (No, I'm not baiting for NRA-style reactions. This is not about the US.)
Our current approach to lethal weapons shows we can be organisationally realistic and legislatively optimistic at the same time.
Legislation already states "You should not kill". Still, we understand black sheep will emerge, so we introduce "law enforcement", a state-organized monopoly on violence, to be used to avoid greater harm / more violence. This tends to work. The strange thing is: we know individuals within law enforcement fail. Apparently this proclaimed "meaningful human control" is not flawless. This shows our trust goes not to those individuals, but to the institution.
We are in fact *not* looking for individual human control; we are satisfied with an organized, controlling society. A society defining and adjusting the policies and rules of engagement. One that supervises and reports back on actual execution. One that disciplines the abuse of trust. One we are part of ourselves.
-oOo-

To close: I really think (paradoxically) that calling to "Ban LAWs" is the worst possible strategy for achieving a ban on LAWs (or rather LWs of any kind).
In dossiers like this (strongly opposed ethical matters of principle) the solution never comes from those digging in or standing their ground, nor from those declaring their views "non-negotiable". In this case I would expect more practical results from carefully leaving the moral high ground for a negotiable position around some objectively verifiable test. From changing the narrative from "Never cross this line" to "OK, but only if it passes this level".
You might have guessed: in my world view it is passing that very test that makes any law enforcer deserving of "our (society's) level of trust", and that test should be applicable to both man and bot. If we end up in a future where we can gradually raise the bar in that test, we should consider that a good thing. Even if those levels grow to be achievable only by 'them'.
One might recognize in this strategy a vague reflection of what Turing did in the paper that introduced "the imitation game": facing the claim that "computers will never be intelligent", he simply turned 'intelligence' into a falsifiable assertion: an objective test. By hiding the nature of the participants behind the screen, he abandoned the a priori principle that it (intelligence) could not be achieved by one of them. In return we got an interesting race of "ever better".
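
To make that negotiable test a bit more tangible, here is a minimal, purely illustrative sketch in Python. Every name, scenario and threshold below is hypothetical, invented for this post, not taken from the podcast or any real system. The one property it illustrates: the certifier scores anonymized decisions against a bar that society can raise, and it never learns whether the candidate behind the screen is man or bot.

```python
import random

# Purely hypothetical sketch: a blinded "license to enforce" certification.
# The certifier never learns whether a candidate is human or bot; it only
# scores anonymized decisions against scenario expectations (the "screen").

class Candidate:
    """A human officer or a bot; the test must not care which."""
    def __init__(self, name, decide):
        self.name = name      # known only outside the test
        self.decide = decide  # function: situation -> chosen action

def certify(candidates, scenarios, bar):
    """Return the anonymous ids of candidates clearing the (raisable) bar."""
    blinded = list(candidates)
    random.shuffle(blinded)  # Turing's screen / Rawls' veil: hide who is who
    passed = []
    for anon_id, candidate in enumerate(blinded):
        hits = sum(candidate.decide(s["situation"]) == s["acceptable_action"]
                   for s in scenarios)
        if hits / len(scenarios) >= bar:
            passed.append(anon_id)
    return passed

# Toy scenarios built around "do no harm": de-escalate where possible.
scenarios = [
    {"situation": "suspect fleeing",  "acceptable_action": "hold fire"},
    {"situation": "hostage at risk",  "acceptable_action": "de-escalate"},
    {"situation": "unarmed civilian", "acceptable_action": "hold fire"},
]

officer = Candidate("human officer", lambda s: "hold fire")
bot = Candidate("autonomous unit",
                lambda s: "de-escalate" if "hostage" in s else "hold fire")

# Society sets the bar and may raise it over time:
# "OK, but only if it passes this level."
print(certify([officer, bot], scenarios, bar=0.9))
```

Nothing inside certify() references humanity; the entire negotiation reduces to where we set the bar, and how fast we dare to raise it.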

-oOo-

This comment displayed some agency in stubbornly requiring publication here, to balance the irony and frustration of getting killed off (i.e. marked as spam, probably for containing a perfectly valid URL) by, surely, a robot!