Lately, as a result of Covid-19, each of us picks the expert who says what we want to hear, and we repeat his opinions without examining them very closely. They are ready-to-wear expert opinions. And that is possible for a twofold reason: experts are sometimes wrong, and there are many things that are not yet known and about which one can legitimately hold different opinions.
Are the experts wrong, then? Aren't they experts? Yes, they are, but things are not that simple.
Peter Principle
There are learned fools: ignorant people who hide behind academic titles but lack true erudition because of their narrow-mindedness and their lack of curiosity and humility. There are also experts blinded by ideology, and others who have gotten where they are by means that are far from meritocratic.
Formulated by Laurence J. Peter, professor of educational sciences at the University of Southern California, the Peter Principle (or Peter's principle of incompetence) states that people who do their jobs well are promoted to positions of greater responsibility, until they reach a position whose demands they can no longer meet: their maximum level of incompetence.
A group of researchers from the University of Minnesota, the Massachusetts Institute of Technology (MIT) and Yale University verified this principle by studying more than one hundred American companies in which the employees who performed best in their roles were promoted to management positions. Once promoted, those new bosses turned out not to be competent in a leadership role.
That is, one can be good at a specific thing and still fail when given more responsibility. This also happens because many promotion mechanisms have little or nothing to do with meritocracy: one can be promoted for being a conformist (someone who questions nothing within the company and acts as a mere executor of what the hierarchy dictates), for latent deficiencies (promotion on grounds of seniority, or of loyalty and a certain effectiveness in the previous position), and, best of all, for flattering superiors: plain old boot-licking.
This certainly explains why many companies, even large ones or multinationals, seem to be hanging by a couple of threads. They look like houses of cards about to collapse at a breath. I have had the luck, or bad luck, to work for many such companies, to see them from the inside, and to be frightened by what I saw. Truly frightened. It is as if no one were at the wheel; as if those in charge of hiring professionals only got it right by chance. Naturally, this also explains the so-called Pareto principle: it is enough for at least 20 percent of the workers to do their job well for the company not to go under, even if the remaining 80 percent do it poorly. The same pattern can be found in many other organizations: Wikipedia, for example.
Two basic errors
Philip Tetlock, then the youngest member of a committee of the United States National Academy of Sciences, began research in 1984 on the knowledge and judgment of experts. For 20 years he studied experts (usually advisors on political and economic issues): political scientists, economists, lawyers, diplomats and so on. They ranged from journalists to university professors, and more than half held doctorates.
Tetlock's method for evaluating the quality of these experts' opinions was to ask them to make precise, quantifiable predictions (between them, they answered a total of 27,450 questions) and then wait to see whether those predictions came true. The point is that they rarely did. The experts kept failing, and their inability to predict the future was just another symptom of their glaring failure to fully understand the complexities of the present.
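To see how "precise, quantifiable predictions" can actually be graded against outcomes, here is a minimal sketch using the Brier score, a standard way to score probabilistic forecasts. Note that the article does not name Tetlock's exact metric, so the choice of the Brier score here is an illustrative assumption:

```python
# Sketch: grading probabilistic forecasts with the Brier score.
# ASSUMPTION: the Brier score is a common forecast-scoring rule;
# the article does not specify the metric Tetlock used.

def brier_score(forecasts, outcomes):
    """Mean squared gap between predicted probabilities (0.0-1.0)
    and what actually happened (0 or 1).
    0.0 is perfect; 0.25 is no better than always guessing 50%."""
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident expert who is often wrong...
overconfident = brier_score([0.9, 0.9, 0.8], [0, 1, 0])
# ...versus a cautious expert who hedges toward the evidence.
cautious = brier_score([0.6, 0.7, 0.4], [0, 1, 0])
print(overconfident > cautious)  # confident-but-wrong scores worse
```

The key property is that the score punishes confident misses far more than hedged ones, which is exactly why overconfident pundits fare poorly when their predictions are tracked over time.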
The lesson we should draw from Tetlock's research is that we should not assume experts are as sure of themselves as they seem; that experts know more than we do, but not much more; and that experts are still ignorant of far more than they admit (especially in the social sciences). Most interesting of all, if we want to know how far to trust an expert, Tetlock found that the experts who make the most mistakes share one or both of the following characteristics:
- They appear in the media. The longer they spent lecturing on television, for example, the more incompetent they proved to be.
- They find it hard to change their mind. Popularly, a reputation for changing your mind makes you a "weather vane," someone who doesn't have things clear. In fact, the opposite is true: a reliable expert is one who admits his errors and continually corrects course in light of new circumstances or new data.
In the following video, I discuss some of these issues in more depth:
In short: this is why science imposes such tough demands: because science trusts neither experts nor scientists. Science was designed precisely to overcome the fallibility of human beings, and scientists are human beings too. So 'science' and 'scientist' could hardly be more different things. Scientists are human, and all humans have an ideology, or fall prey to biases, prejudices and assorted nonsense. That is what science exists for: to discard claims that fail its strict verification methods. Science is the last line of defense against the defects of human thought, including those of experts: faith, dogma, revelation, authority, charisma, mysticism, divination, visions, hunches or... the political ideology you support with the fervor of a believer.
The article "The two characteristics shared by the experts who make the most mistakes" was originally published in Xataka Science by Sergio Parra.