By Noah Solomon
Special to the Financial Independence Hub
Although the use of artificial intelligence (AI) has become increasingly popular over the past several years, the idea that algorithms make better decisions than humans has been around for over 60 years.
In 1954, psychologist Paul Meehl published Clinical versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence. Following extensive research, Meehl concluded that mechanical, data-driven algorithms were better at predicting human behavior than trained clinical psychologists.
Meehl’s controversial findings, which challenged the assumption of human rationality in modern economic theory, have since been corroborated by several scholars, including Daniel Kahneman, who was awarded the 2002 Nobel Memorial Prize in Economic Sciences for his work on the psychology of judgment and decision-making. In his most recent book, Thinking, Fast and Slow, Kahneman illustrates that in situations involving uncertainty and unpredictability, simple algorithms consistently match or outperform humans and their “complex” decision-making criteria.
Terrifying but True
In The Undoing Project, author Michael Lewis describes a study conducted by the Oregon Research Institute on radiologists’ x-ray diagnoses.
The experiment started with the creation of a simple algorithm for determining the likelihood of an ulcer being malignant. The algorithm was based on an equal weighting of seven criteria that the doctors in the study had identified as being important for diagnostic purposes. The researchers then took a sample of 96 x-rays of different stomach ulcers and asked the doctors to judge the probability of cancer in each x-ray on a scale ranging from “definitely malignant” to “definitely benign.” Without telling the doctors, researchers showed them each ulcer twice.
The doctors’ diagnoses were all over the map. The experts didn’t agree with each other. More surprisingly, when presented with duplicates of the same ulcer, the doctors contradicted themselves and rendered more than one diagnosis. On the other hand, the model-driven diagnoses were far more accurate. The simple algorithm had not only outperformed the group of doctors as a whole, but actually managed to outperform even the most accurate doctor. If patients wanted to know whether they had cancer, they were better off using the algorithm than they were asking a radiologist to study an x-ray.
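The equal-weighting scheme described above can be sketched in a few lines of Python. This is a minimal illustration only: the criterion names and the example ratings below are hypothetical placeholders, not the actual variables or data from the Oregon Research Institute study.

```python
# Sketch of an equal-weighted diagnostic score, in the spirit of the
# Oregon Research Institute model. The seven criterion names and the
# example ratings are hypothetical, not the study's real variables.

CRITERIA = [
    "ulcer_size", "crater_shape", "margin_regularity",
    "fold_pattern", "location", "base_texture", "contour",
]

def malignancy_score(ratings: dict) -> float:
    """Average the 0-to-1 ratings across all criteria, weighting
    each one equally (no criterion counts more than another)."""
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

# Example: a case rated high on three criteria and low on the rest.
case = {c: 0.0 for c in CRITERIA}
case["ulcer_size"] = 1.0
case["crater_shape"] = 1.0
case["margin_regularity"] = 1.0
print(round(malignancy_score(case), 3))
```

The point of the equal weighting is its simplicity: unlike a human expert, the formula applies the same rule to every case, every time, which is exactly the consistency the doctors in the study lacked.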
It’s about what A.I. doesn’t have
AI is often lauded for its ability to do what humans cannot. Algorithms can rapidly perform complex analysis on massive quantities of data to find patterns and identify predictive signals. Notwithstanding this clear advantage, it is the specifically “human” characteristics that AI doesn’t have that perhaps constitute its largest advantage over human decision-making.