Thursday, 22 June 2017

Robots and Humans: the 360° ethics view in "Turing's Mirror"

How should robots and humans relate to each other? What level of autonomy or self-consciousness in a robot would grant it the status of deserving liability and culpability for any harm it causes? When do we consider robots to be sentient, and how are we to relate to fellow humans who obviously abuse such bots?

Few people ponder the questions above.  For me it has grown into an increasingly fun way to think about our own human ethics.  The only trick needed for this fruitful crossover is blurring the presupposed distinction between us and them.

That blurring takes essentially two forms, depending on your level of 'AI belief'. But as we will see, the net effect stays roughly the same. (Although this distinction keeps yielding rather engaged disputes that strongly follow the adage: "There are no bigger fights than between two people saying the same thing in different ways.")

On one extreme of the spectrum you have the fundamentalists of what I call the Penrose group (after Roger Penrose, author of The Emperor's New Mind, my personal introduction to this line of thinking). Their basic conviction is that forged minds built from clever algorithms will never reach sufficient levels of humanness.  In this view there remains something (almost God-like) inspired in us that fundamentally escapes materialistic simulation.
For this group the blurring exercise takes the form of considering transhumanism: the emergence of cyborg-like, artificially augmented humans.

The pragmatic dreamers on the other side are still in the historic "Turing" camp. For them, sufficient levels of human imitation are all that is required. If it walks and talks like a duck, there is no real point in pretending it is not one.  No need to be fundamental or self-serving about it.  Beauty here is in the eye of the beholder: the observer is part of the test.

The clever setup of the Turing test factors in both the 'fooled ya!' imitation capabilities of the AI and the powers of discernment of the interrogator.  Passing the test blurs the line we can, or are willing to, draw between types of actors. In this article: actors we make moral judgements about.


Arriving at this same blurred destination from the distinct positions often found in the field, we see the true ethical divide more sharply.

The worry is not really about new clashing parties defined by their nature or the purity of their human descent. Rather, these new advances in technology offer us fresh thought experiments on the age-old real issue: the ethics of relations between unequal parties.

When one party has leverage over the other, should that not come with a responsibility to take care? And if so, should the most sustainable form of such 'taking care' not oblige it to avoid its own 'burden derived from privilege' by actively working towards a more balanced, and thus more stable, power equilibrium?


Any dystopian prospect on the outcome of this question comes from a rather grim view of our own track record as laid down in history. The more optimistic view not only sees progress; it also recognizes that even the pessimists use some natural and universal sense of 'Quality' and 'Goodness' to gauge the state of affairs at any time.  From there it should follow that we will always be able to select the best role models, whatever their nature.


-oOo-

Much of this borrows from the African concept of Ubuntu: I become a person through the existence of other people. My connection with others is what grows kindness in me. My conscience and self-image are but the harvest of that.

If robots are to provide ever-growing training in anthropomorphizing inanimate objects, can we hope that it helps us unlearn our habit of dehumanizing fellow humans too?

-oOo-

I'd hate to be accused of original thoughts, so here is my due list of references and accolades to:
- Nell Watson for the recent inspiring talk and her availability for discussion afterwards
- Kate Darling for (among other things) making me think about "Who is Johnny"
- Sam Harris for (in at least one podcast) introducing me to Kate and her work, and for a perspective on the shape of any moral landscape
- Kevin Kelly for (not) busting my myth.
- Bennie Mols for Turing's Tango

-oOo-

This article is part of a series: Turing's mirror - Turing's Duck - Turing's Razor

-oOo-
Update
Some weeks after publishing this, news articles emerged on AI bots inventing their own gibberish languages, with less than comforting headlines.  The resulting Twitter discussion prompted me to bundle the essence of the above into: