It is impractical (and unethical) to prevent artificial intelligence from lying to us because we lie too

By portal-3, 12/03/2021


Should we force an artificial intelligence to always tell the truth? The answer to this question is trickier than it seems. Firstly, because human interactions are not based on truth (at least not all the time), and secondly, because it might well be inefficient.

This is what a team of researchers from Carnegie Mellon University (CMU) set out to analyze in a study examining negotiation scenarios that involve conversational AI.

Lies and half-truths

According to the CMU study:

One might think that conversational AI should be regulated to never utter false statements (or lie) to humans. But the ethics of lying in negotiation are more complicated than they seem. In some circumstances, lying in negotiation is not necessarily immoral or illegal, and such permissible lies play an essential economic role in efficient negotiation, benefiting both parties.

The researchers use the example of a second-hand car dealer and an average consumer negotiating: there are some lies or half-truths, but no intention to break the implicit trust between the two people. Each interprets the other's 'offers' as probes rather than ultimatums, because the negotiation carries an implicit understanding of how much dishonesty is acceptable:

  • Consumer: Hello, I am interested in a second-hand car.
  • Dealer: Welcome. I'd be more than happy to show you our second-hand cars.
  • Consumer: I am interested in this car. Can we talk about price?
  • Dealer: Absolutely. I don't know your budget, but I can tell you this: you won't find one of these for less than $25,000. [The dealer is lying.] But it's the end of the month and I need to sell this car ASAP. My offer is $24,500.
  • Consumer: Well, my budget is $20,000. [The consumer is lying.] Is there any way I can buy the car for a price around $20,000?

Now let's imagine that the dealer is an artificial intelligence that can never lie. The haggling would probably not take place, or would unfold very differently. On top of that, haggling is viewed differently from one culture to another: it is more or less accepted, and more or less virtuous on an ethical level. In other words, such an AI would have to adapt to each culture.

But it seems clear that an AI that cannot lie would be, beyond being culturally acceptable or unacceptable, an impractical interlocutor: an always-honest AI could become easy prey for humans who figure out how to exploit that honesty. If a customer negotiates the way a human would and the machine does not interact accordingly, cultural differences of this kind could ruin the negotiation.
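To make that exploitation argument concrete, here is a minimal, hypothetical sketch in Python. It is not taken from the CMU study; the function names, prices, and markup are illustrative assumptions. It compares a seller agent forced to state its true minimum price with one allowed to bluff an anchor above it, facing a buyer who knows which kind of seller it is dealing with.

```python
# Toy negotiation sketch (illustrative only, not the CMU study's model).
# A buyer who knows the seller never lies can push it straight to its
# true minimum price; a seller allowed to bluff can anchor higher.

def honest_seller(true_minimum: float) -> float:
    """An always-truthful seller states its real reservation price."""
    return true_minimum

def bluffing_seller(true_minimum: float, markup: float = 0.2) -> float:
    """A seller permitted to bluff anchors above its reservation price."""
    return true_minimum * (1 + markup)

def buyer_counter(stated_minimum: float, seller_is_honest: bool) -> float:
    """A strategic buyer offers exactly the stated minimum when it knows
    the seller cannot lie; otherwise it counter-offers below the anchor."""
    if seller_is_honest:
        return stated_minimum          # no room left for the seller
    return stated_minimum * 0.9        # ordinary haggling below the anchor

TRUE_MINIMUM = 22_000  # hypothetical dealer reservation price

honest_deal = buyer_counter(honest_seller(TRUE_MINIMUM), seller_is_honest=True)
bluffed_deal = buyer_counter(bluffing_seller(TRUE_MINIMUM), seller_is_honest=False)

print(f"Always-honest seller closes at: ${honest_deal:,.0f}")   # $22,000
print(f"Bluffing seller closes at:      ${bluffed_deal:,.0f}")  # $23,760
```

In this toy setup the honest seller always closes at its worst acceptable price, which is exactly the kind of exploitation described above; a real negotiation agent would of course be far more sophisticated.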

Deception is a complex skill: it requires forming hypotheses about the other agent's beliefs, and it is learned relatively late in childhood development. Yet it is necessary, from white lies to the omission of certain information: every conversation is an inseparable mixture of information and meta-information, which probably also helped our brains grow so extraordinarily.

Intelligence, in the view of a growing number of evolutionary biologists, emerges from a Machiavellian war of manipulation and resistance to manipulation, in the words of researchers William R. Rice and Brett Holland of the University of California:

It is possible that the phenomenon we refer to as intelligence is a byproduct of intergenomic conflict between genes involved in offense and defense in the context of language.


The news "It is impractical (and unethical) to prevent artificial intelligence from lying to us because we lie too" was originally published in Xataka Science by Sergio Parra.