Dr R spends most of his time alone, in the dark. On a typical day, he examines digital images on monitors or studies series of black-and-white X-rays, thinks about what he sees, and writes his reports.
Radiology is a discipline divided into two processes: detection and interpretation. The radiologist must first see, then analyze what he perceives: what it means, and what might explain the findings. This binary process repeats second by second, minute by minute, hour by hour. The changes he must notice may be a 1-millimeter thickening in the wall of a structure, or a slight increase in the density of an organ. He must remain constantly alert and focused to catch these small changes, which are essential for making a diagnosis. There is always the risk of missing a crucial detail. A heavy workload, unfavorable working conditions (people constantly entering the room, questions, conversations, ringing phones, stale air, extreme heat or cold, and so on), and the fatigue, dissatisfaction, and intimidation these often cause all increase the risk of skipping important details.
The radiologist is usually expected to look at and evaluate images very quickly. Making the diagnosis at first glance is seen as a sign of good training. During radiology training, systematic examination of every anatomical element in the film is taught; the aim is to accelerate this systematic review over time until, finally, the disease reflected in the image is "understood at a glance". Although this method is generally successful, even very experienced radiologists are known to miss important findings.
Dr R was startled when a colleague approached him with an MRI of a knee. "What do you think of this case?" the colleague asked. Dr R looked at the MRI and said, "Anterior cruciate ligament tear." This is a very common sports injury. His colleague then placed Dr R's own report in front of him. The report read: "The anterior cruciate ligament is normal." "That's just great," said Dr R. "It's incredible that I looked at the film once and said one thing, and now I see that I missed it the first time." Dr R had relied heavily on the "first look" and had not systematically checked every part of the knee.
In a study by Dr J Potchen at Michigan State University, which measured the "reading" performance of more than 100 radiologists on 60 chest x-rays, the findings were as follows: when asked "Is the film normal?", radiologists disagreed with one another about 20% of the time (this is called "inter-observer variability"). When a single radiologist viewed the same films a second time, his assessments deviated from his previous ones by 5% to 10% (this is called "intra-observer variability"). These variations are normal and inherent to the nature of the work.
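The difference between the two measures can be illustrated with a minimal sketch. The readings below are made up for illustration; they are not data from Potchen's study:

```python
# Illustrative sketch of inter- vs intra-observer variability,
# using made-up "normal"/"abnormal" readings (not Potchen's data).

def disagreement_rate(readings_a, readings_b):
    """Fraction of films on which two sets of readings differ."""
    pairs = list(zip(readings_a, readings_b))
    return sum(a != b for a, b in pairs) / len(pairs)

# Two radiologists read the same five films (inter-observer):
dr_a = ["normal", "abnormal", "normal", "normal",   "abnormal"]
dr_b = ["normal", "normal",   "normal", "abnormal", "abnormal"]
print(disagreement_rate(dr_a, dr_b))  # 2 of 5 films read differently -> 0.4

# The same radiologist reads the same films a second time (intra-observer):
dr_a_second = ["normal", "abnormal", "normal", "abnormal", "abnormal"]
print(disagreement_rate(dr_a, dr_a_second))  # 1 of 5 -> 0.2
```

The same function captures both concepts; what changes is only whose readings are being compared.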
In a mammography screening study, 110 radiologists examined the mammograms of 148 women. The rate of detecting existing cancers (sensitivity) ranged from 59% to 100%, and the rate of correctly identifying non-cancers (specificity) ranged from 38% to 98%. The overall rate of correct diagnosis generally ranged from 73% to 97%. Diagnostic errors, whether "false positive" or "false negative", have a great impact on the patient: a false negative, that is, a missed cancer, delays treatment and deprives the patient of the advantages of early diagnosis. A false positive, that is, labeling normal anatomical structures, or formations that do not require biopsy, as "suspicious", exposes the patient to unnecessary interventions. These unnecessary interventions cause not only physical but also psychological and financial harm, and they further complicate subsequent radiological follow-up and interpretation.
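How sensitivity and specificity are computed from a set of readings can be shown in a short sketch. The counts below are hypothetical, chosen only to illustrate the arithmetic; they are not taken from the study:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Compute sensitivity and specificity from confusion-matrix counts.

    tp: cancers correctly called suspicious (true positives)
    fn: cancers missed (false negatives)
    tn: non-cancers correctly called normal (true negatives)
    fp: non-cancers wrongly called suspicious (false positives)
    """
    sensitivity = tp / (tp + fn)  # share of real cancers that were detected
    specificity = tn / (tn + fp)  # share of non-cancers correctly cleared
    return sensitivity, specificity

# Hypothetical reading of 148 mammograms: 40 cancers, 108 non-cancers.
sens, spec = sensitivity_specificity(tp=32, fn=8, tn=81, fp=27)
print(f"sensitivity = {sens:.0%}")  # 32/40 -> 80%
print(f"specificity = {spec:.0%}")  # 81/108 -> 75%
```

The trade-off described in the text falls directly out of these two ratios: lowering the threshold for calling a film "suspicious" raises sensitivity (fewer missed cancers) at the cost of specificity (more false positives and unnecessary biopsies).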
Physicians in other branches are often unaware that diagnoses can differ between radiologists, and even between two readings by the same radiologist. Many fail to appreciate the radiologist's role in radiological diagnosis and think that the machine (the radiology device) delivers the diagnosis. In reality, everything the radiologist sees depends on how he or she uses the machine. Doctors in other branches overestimate what the devices can do and place such misplaced confidence in high-tech images that many send their patients to radiology without a clinical history or even an examination, and then shape their own workup according to the radiology report.
One of the 60 films in Dr J Potchen's study cited above was missing a collarbone. 60% of radiologists did not notice the missing collarbone. When clinical data were added to the exercise and radiologists were told that these 60 films had been taken for an "annual check-up", 58% of radiologists again missed it and read the film as normal. However, when told that the x-rays were "part of a series of studies to find cancer", 83% of radiologists identified the missing collarbone. This shows that a specific clinical cue can seriously affect performance, because in that case the radiologist performs a systematic search instead of relying solely on his immediate impression. If the clinician does not give the radiologist sufficient information about the patient's history and examination findings and merely conveys the question in his own mind ("Is there a clot in the lung?", "Is the palpable lump in the left breast cancer?"), the radiologist tailors the examination to answer only that question, and other important findings can be missed.
Radiology is not the only field in which the results of observation and evaluation vary among physicians. ECGs, for example, can be read differently by different doctors. A study in which 100 ECG tracings were evaluated by 10 experts is a good example: a person who has had a heart attack and goes to Dr A has a 20% chance that it will be missed, while a person who has not had a heart attack and goes to Dr B has a 26% chance of being told that they have had one.
In one study, 13 pathologists evaluated 1,001 cervical biopsies under the microscope, then repeated the evaluation after enough time had passed for them to forget their initial assessments. Agreement with their own previous readings averaged 87% among senior pathologists, while agreement between senior and junior pathologists was only 51%. It is worth emphasizing once again: the accuracy of a diagnosis depends far more on the doctor than on the medical device.
Personality, mood, and past experience also affect radiologists' decision-making performance. Some radiologists are reluctant to take risks and so tend to call the normal "abnormal" (such diagnoses are "false positives"). Others are bolder and tend to read certain abnormalities as "normal" ("false negatives"). Still others request additional films before making their final decision.
Missing a cancer is devastating for the radiologist; therefore the mentality of "better the patient endure a biopsy than I miss a cancer" prevails. This attitude is extremely common in highly sensitive areas such as breast cancer. One radiologist who had been accused of missing a breast cancer afterwards studied each case at great length, and his report usually ended with a recommendation for biopsy.
Physicians tend to stop searching, and therefore stop thinking, when they detect a significant finding that could explain the patient's clinical condition. This applies even more to radiologists. For example, if the internist has noted that the patient has fever, cough, and yellow sputum (signs of pneumonia), the radiologist will look for pneumonia on the chest x-ray and may miss another abnormality, such as a cancer. Similarly, a surgeon reporting a suspicious firmness in the patient's left breast may forget to ask the radiologist to evaluate the right breast as well; if the radiologist has conditioned himself to perform only the examination the surgeon requested (a left-breast ultrasound, say), he may miss a cancer in the right breast, or other cancer foci in the left breast that could be found by another method (mammography, for example).
* This article contains excerpts of examples and opinions from the chapter "The Eye of the Beholder" in Dr Jerome Groopman's book How Doctors Think. The title of this article is a deliberate reference to that book.