Artificial Intelligence in Rhinology, Otolaryngology, and Beyond
Should AI in healthcare look and feel “human”?
Zara M. Patel, MD, Chair, AAO-HNSF Rhinology and Allergy Education Committee
Not a week has passed over the past year without the topic of artificial intelligence (AI) dominating the public conversation in our country and around the world. Deep machine learning, neural networks, and large language models have become phrases overheard at neighborhood parks and children’s birthday parties; they are no longer the province of the precious few who actually know how these models are developed and trained.
My first foray into the world of machine learning was spurred by my interest in inverted papilloma (IP) and the knowledge that certain imaging characteristics could be used to predict conversion of IP to cancer (IP-SCC).1 That initial study taught me two basic tenets that apply to all AI: first, to develop a highly accurate predictive algorithm, you need thousands of data points at a minimum; and second, the quality of any learning model depends directly on the quality of the data you feed it. This has led me to expand our data collection of IP and IP-SCC to over 20 institutions worldwide, and we hope this will allow more accurate prediction of conversion, even in places lacking highly skilled radiology colleagues who can pick up on the nuances of CT and MRI.
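For readers curious what a model like the one in reference 1 looks like in practice, the sketch below shows the general shape of a 3D convolutional neural network that classifies an MRI volume into two categories. It is purely illustrative: the layer sizes, input dimensions, and architecture here are assumptions for demonstration, not the published model.

```python
import torch
import torch.nn as nn

class Toy3DCNN(nn.Module):
    """Illustrative 3D CNN for binary classification of an MRI volume.
    Not the architecture from the cited study."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),  # one input channel: a grayscale MRI volume
            nn.ReLU(),
            nn.MaxPool3d(2),                             # halve each spatial dimension
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # average each feature map to a single value
        )
        self.classifier = nn.Linear(32, 2)               # two classes, e.g., IP vs. IP-SCC

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = Toy3DCNN()
fake_scan = torch.randn(1, 1, 64, 64, 64)  # batch of one hypothetical 64x64x64 volume
print(model(fake_scan).shape)              # torch.Size([1, 2])
```

Even a toy like this makes the two tenets concrete: the weights learn nothing beyond what the labeled volumes used to fit them contain, so both the number and the quality of those training examples set a hard ceiling on the model’s accuracy.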
It also led to the opportunity to discuss AI and rhinology with a few of my colleagues on a panel at a recent American Rhinologic Society meeting, a conversation we later developed into an editorial.2 We discussed the remarkable amount of research already published using machine learning and neural networks in rhinology, otolaryngology, and healthcare as a whole. These studies range from analyses of radiologic, endoscopic, histologic, serologic, demographic, and socioeconomic data used to build algorithms that guide diagnosis and management, to examinations of how AI can serve our clinics, operating rooms, and healthcare institutions and systems by increasing the quality and decreasing the cost of care. There is no question that the revolution and evolution of machine learning is upon us in otolaryngology and healthcare, as in every other realm of our lives.
The opportunity and danger therein are one and the same. As we diligently perform research studies and feed those data points into neural networks, all the biases and faults inherent in our work as human beings will be absorbed directly by these algorithms. A constant awareness of that fact, and a persistent striving toward equity, inclusion, and fairness in this process, are the only imaginable way forward.
A question I posed to my panel was whether we truly wanted AI in healthcare to look and feel “human.” We typically lean first into the idea that the more humanized we can make AI, the more relatable, trustworthy, and therefore effective it can be in treating other human beings. But this ignores the major failings of our profession in the past, and the irrefutable fact that precisely because doctors and scientists are also human, they have fallen prey again and again to the biases of their times: to racism, sexism, and simple assumptions made in the absence of data. So should we instead try to steer AI and its multiplicity of healthcare applications away from being “human”? Would that actually be better for our patients and ourselves?
Perhaps the question is moot at this point. AI has already grown past what many consider to be a point of no return, possibly beyond human “control,” and there are definitely those who have a fairly dark view of what AI will bring to this world and the people who inhabit it. I choose a different view.
I recently read about a monk whose main activism is outreach to the tech community working in the field of AI.3 He sees real danger in AI and believes that, without intervention, it will eventually enslave human beings and control our lives. To prevent this, he is trying to convince the engineers building AI to infuse “enlightenment” into the system. His idea of enlightenment is, of course, a specific form of Zen Buddhism, and many may not agree with that exact philosophy. But I found the very idea intriguing. What if we could purposefully imbue AI with a universal, agreed-upon good? What if we could imbue it with empathy?
Across religions, philosophies, politics, and cultures, empathy stands as a universal good. A quote by Walt Whitman often comes to mind when I am treating patients: “I do not ask the wounded person how he feels, I myself become the wounded person.” But, of course, we consider this act of feeling a very human thing. So instead of asking how “human” we want to make AI, perhaps we should ask, can we teach AI the best part of being human? Can we teach it to not just absorb the brilliance and errors and flaws of our minds, but also the great expanse of our hearts?
According to George Eliot, “The highest form of knowledge is empathy, for it requires us to suspend our egos and live in another’s world.” If empathy is knowledge, and we can teach AI knowledge, we should be able to teach it empathy. And if, in addition to learning all of human history, science, and medicine, AI can learn and actually integrate empathy into its processing, our future as both doctors and human beings is bright.
References
1. Liu GS, Yang A, Kim D, Hojel A, Voevodsky D, Wang J, Tong CCL, Ungerer H, Palmer JN, Kohanski MA, Nayak JV, Hwang PH, Adappa ND, Patel ZM. Deep learning classification of inverted papilloma malignant transformation using 3D convolutional neural networks and magnetic resonance imaging. Int Forum Allergy Rhinol. 2022 Aug;12(8):1025-1033. doi: 10.1002/alr.22958. Epub 2022 Jan 18.
2. Gudis DA, McCoul ED, Marino MJ, Patel ZM. Avoiding bias in artificial intelligence. Int Forum Allergy Rhinol. 2023 Mar;13(3):193-195. doi: 10.1002/alr.23129.
3. The Atlantic. June 2023. https://www.theatlantic.com/ideas/archive/2023/06/buddhist-monks-vermont-ai-apocalypse/674501/. Accessed August 15, 2023.