ALGORITHMIC AUTHORITY VS. HUMAN TOUCH: A NARRATIVE REVIEW OF PATIENT TRUST AND CLINICAL AUTONOMY IN AI-ASSISTED DIAGNOSTICS

Authors

DOI:

https://doi.org/10.31435/ijitss.1(49).2026.4963

Keywords:

Artificial Intelligence in Medicine, Patient Trust, Clinical Autonomy, Automation Bias, Explainable AI (XAI), Shared Decision-Making

Abstract

Introduction: Contemporary medicine is undergoing an unprecedented transformation driven by the integration of advanced artificial intelligence (AI) and large language models (LLMs) into clinical workflows. While these technologies demonstrably enhance diagnostic precision, their implementation creates a fundamental paradox: gains in technological efficacy often correlate with a decline in patient trust, a phenomenon known as the "AI trust gap." This review examines the tension between algorithmic authority and the necessity of the "human touch," analyzing the impact of digital innovations on clinical autonomy and the patient-physician-AI triad.

Materials and Methods: This study presents a detailed analysis of 44 peer-reviewed scientific articles published between 2022 and 2026. The review focuses on Clinical Decision Support Systems (CDSS) across key diagnostic areas, including radiology and pathology. The analysis encompasses the psychological mechanisms of AI acceptance, the risks of automation bias, and the potential of Explainable AI (XAI) to restore clinical transparency.

Key Findings: Research reveals that the mere disclosure of AI involvement can reduce patient trust (with trust ratings dropping from 0.50 to 0.30–0.34 in experimental settings). A "paradox of knowledge" was identified, whereby higher patient literacy regarding AI correlates with increased skepticism. Regarding clinical autonomy, a dichotomy exists: junior clinicians are prone to automation bias, while experts face the risk of "deskilling." The review also discusses the emerging "Algorithmic Consultant" role and the necessity of "Triadic Decision-Making," in which AI serves as a transparent partner rather than a black-box authority.

Conclusions: The integration of AI requires a reconfiguration of medical practice from a technology-first approach to a human-centered design. Preserving clinical autonomy depends on adopting a "trust but verify" model and implementing XAI strategies to mitigate transparency barriers. The success of algorithmic medicine relies on maintaining the physician's judgment as the cornerstone of care, ensuring that AI functions as a supportive co-pilot.

References

Woods, S. S., Greene, S. M., Adams, L., Cordovano, G., & Hudson, M. F. (2025). From e-patients to AI patients: The tidal wave empowering patients, redefining clinical relationships, and transforming care. Journal of Participatory Medicine, 17, e75794. https://doi.org/10.2196/75794

Chen, C., & Cui, Z. (2025). Impact of AI-assisted diagnosis on American patients’ trust in and intention to seek help from health care professionals: Randomized, web-based survey experiment. Journal of Medical Internet Research, 27. https://doi.org/10.2196/66083

Zondag, A. G. M., Rozestraten, R., Grimmelikhuijsen, S. G., Jongsma, K. R., van Solinge, W. W., Bots, M. L., Vernooij, R. W. M., & Haitjema, S. (2024). The effect of artificial intelligence on patient-physician trust: Cross-sectional vignette study. Journal of Medical Internet Research, 26. https://doi.org/10.2196/50853

Parchmann, N., Orzechowski, M., Brefka, S., & Steger, F. (2025). Evaluation of an AI-based clinical decision support system for perioperative care of older patients: Ethical analysis of focus groups with older adults. JMIR Aging, 8. https://doi.org/10.2196/71568

Traylor, D. O., Kern, K. V., Anderson, E. E., & Henderson, R. (2025). Beyond the screen: The impact of generative artificial intelligence (AI) on patient learning and the patient-physician relationship. Cureus. https://doi.org/10.7759/cureus.76825

Postle, R. D., & Forster, B. B. (2025). Patient perspectives of artificial intelligence in medical imaging. SAGE Preprints. https://doi.org/10.1177/08465371241298597

Park, H. J. (2024). Patient perspectives on informed consent for medical AI: A web-based experiment. Digital Health, 10. https://doi.org/10.1177/20552076241247938

McGhee, K. N., Barrett, D. J., Safarini, O., Elkassem, A. A., Eddins, J. T., Smith, A. D., & Rothenberg, S. A. (2025). Patient preferences for artificial intelligence in medical imaging: A single-center cross-sectional survey. Journal of Imaging Informatics in Medicine. https://doi.org/10.1007/s10278-025-01629-w

Foresman, G., Biro, J., Tran, A., MacRae, K., Kazi, S., Schubel, L., Visconti, A., Gallagher, W., Smith, K. M., Giardina, T., et al. (2025). Patient perspectives on artificial intelligence in health care: Focus group study for diagnostic communication and tool implementation. Journal of Participatory Medicine, 17. https://doi.org/10.2196/69564

Grosser, J., Düvel, J., Hasemann, L., Schneider, E., & Greiner, W. (2025). Studying the potential effects of artificial intelligence on physician autonomy: Scoping review. JMIR Preprints. https://doi.org/10.2196/59295

Ratkevičiūtė, K., & Aliukonis, V. (2025). Exploring opportunities and challenges of AI in primary healthcare: A qualitative study with family doctors in Lithuania. Healthcare, 13. https://doi.org/10.3390/healthcare13121429

Mache, S., Bernburg, M., Würtenberger, A., & Groneberg, D. A. (2025). Artificial intelligence in primary care: Support or additional burden on physicians’ healthcare work?—A qualitative study. Clinical Practice, 15. https://doi.org/10.3390/clinpract15080138

Frei, A. L., Khan, A., Oberson, R., Reinhard, S., Banz, Y., Meeuwsen, F., Janowczyk, A., Grobholz, R., Dawson, H. E., Lugli, A., et al. (2025). Computer-aided tumor cell fraction (TCF) estimation by medical students, residents, and pathologists improves inter-observer agreement while highlighting the risk of automation bias. Virchows Archiv. https://doi.org/10.1007/s00428-025-04163-w

Agur Cohen, D., Heymann, A. D., & Levkovich, I. (2025). Partners in practice: Primary care physicians define the role of artificial intelligence. Healthcare, 13. https://doi.org/10.3390/healthcare13161972

Choudhury, A., & Chaudhry, Z. (2024). Large language models and user trust: Consequence of self-referential learning loop and the deskilling of health care professionals. JMIR Preprints. https://doi.org/10.2196/56764

Marwaha, J. S., Yuan, W., Poddar, M., Elsamadisi, P., & Brat, G. A. (2025). The algorithmic consultant: A new era of clinical AI calls for a new workforce of physician-algorithm specialists. NPJ Digital Medicine. https://doi.org/10.1038/s41746-025-01960-0

Wallace, P. J. (2024). Gaining trust: Lessons and opportunities for artificial intelligence in health care. The Permanente Journal, 28(3), 168–171. https://doi.org/10.7812/TPP/24.064

Abbas, Q., Jeong, W., & Lee, S. W. (2025). Explainable AI in clinical decision support systems: A meta-analysis of methods, applications, and usability challenges. Healthcare, 13. https://doi.org/10.3390/healthcare13172154

Prinster, D., Mahmood, A., Saria, S., Jeudy, J., Lin, C. T., Yi, P. H., & Huang, C. M. (2024). Care to explain? AI explanation types differentially impact chest radiograph diagnostic performance and physician trust in AI. Radiology, 313. https://doi.org/10.1148/radiol.233261

Carriero, A., de Hond, A., Cappers, B., Paulovich, F., Abeln, S., Moons, K. G., & van Smeden, M. (2025). Explainable AI in healthcare: To explain, to predict, or to describe? Diagnostic and Prognostic Research, 9, 29. https://doi.org/10.1186/s41512-025-00213-8

Cabitza, F., & Parimbelli, E. (2026). Let XAI generate reliability metadata, not medical explanations. Computer Methods and Programs in Biomedicine, 273, 109090. https://doi.org/10.1016/j.cmpb.2025.109090

Zamir, M. T., Khan, S. U., Gelbukh, A., Felipe Riverón, E. M., & Gelbukh, I. (2025). Explainable AI-driven analysis of radiology reports using text and image data: Experimental study. JMIR Formative Research, 9. https://doi.org/10.2196/77482

Alkhanbouli, R., Matar Abdulla Almadhaani, H., Alhosani, F., & Simsekler, M. C. E. (2025). The role of explainable artificial intelligence in disease prediction: A systematic literature review and future research directions. BMC Medical Informatics and Decision Making. https://doi.org/10.1186/s12911-025-02944-6

Agrawal, R., Gupta, T., Gupta, S., Chauhan, S., Patel, P., & Hamdare, S. (2025). Fostering trust and interpretability: Integrating explainable AI (XAI) with machine learning for enhanced disease prediction and decision transparency. Diagnostic Pathology, 20. https://doi.org/10.1186/s13000-025-01686-3

Liu, Y., Liu, C., Zheng, J., Xu, C., & Wang, D. (2025). Improving explainability and integrability of medical AI to promote health care professional acceptance and use: Mixed systematic review. JMIR Preprints. https://doi.org/10.2196/73374

Yang, M., Chen, H., Hu, W., Mischi, M., Shan, C., Li, J., Long, X., & Liu, C. (2024). Development and validation of an interpretable conformal predictor to predict sepsis mortality risk: Retrospective cohort study. Journal of Medical Internet Research, 26. https://doi.org/10.2196/50369

Kücking, F., Hübner, U., Przysucha, M., Hannemann, N., Kutza, J. O., Moelleken, M., Erfurt-Berge, C., Dissemond, J., Babitsch, B., & Busch, D. (2024). Automation bias in AI-decision support: Results from an empirical study. In Studies in Health Technology and Informatics (pp. 298–304). IOS Press. https://doi.org/10.3233/SHTI240871

Nguyen, T. (2024). ChatGPT in medical education: A precursor for automation bias? JMIR Preprints. https://doi.org/10.2196/50174

Wang, D. Y., Ding, J., Sun, A. L., Liu, S. G., Jiang, D., Li, N., & Yu, J. K. (2023). Artificial intelligence suppression as a strategy to mitigate artificial intelligence automation bias. Journal of the American Medical Informatics Association, 30, 1684–1692. https://doi.org/10.1093/jamia/ocad118

Hedman, M., Kosuta, V., Lindmark, M., Sandström, J., Trinh, B., Sundvall, P. D., Rystedt, K., Werner, M., Öhberg, F., & Lundberg, T. (2025). Diagnostic accuracy of otitis media with and without a fictitious AI support among physicians in primary care and medical students. Scandinavian Journal of Primary Health Care. https://doi.org/10.1080/02813432.2025.2571936

Ahsan, Z. (2025). Integrating artificial intelligence into medical education: A narrative systematic review of current applications, challenges, and future directions. BMC Medical Education, 25. https://doi.org/10.1186/s12909-025-07744-0

Kim, S. H., Schramm, S., Riedel, E. O., Schmitzer, L., Rosenkranz, E., Kertels, O., Bodden, J., Paprottka, K., Sepp, D., Renz, M., et al. (2025). Automation bias in AI-assisted detection of cerebral aneurysms on time-of-flight MR angiography. La Radiologia Medica, 130, 555–566. https://doi.org/10.1007/s11547-025-01964-6

Hasanzadeh, F., Josephson, C. B., Waters, G., Adedinsewo, D., Azizi, Z., & White, J. A. (2025). Bias recognition and mitigation strategies in artificial intelligence healthcare applications. NPJ Digital Medicine. https://doi.org/10.1038/s41746-025-01503-7

Faust, O., Salvi, M., Barua, P. D., Chakraborty, S., Molinari, F., & Acharya, U. R. (2025). Issues and limitations on the road to fair and inclusive AI solutions for biomedical challenges. Sensors, 25. https://doi.org/10.3390/s25010205

As’ad, M., Faran, N., & Joharji, H. (2025). AI-supported shared decision-making (AI-SDM): Conceptual framework. JMIR AI, 4. https://doi.org/10.2196/75866

Abbasgholizadeh Rahimi, S., Cwintal, M., Huang, Y., Ghadiri, P., Grad, R., Poenaru, D., Gore, G., Zomahoun, H. T. V., Légaré, F., & Pluye, P. (2022). Application of artificial intelligence in shared decision making: Scoping review. JMIR Medical Informatics, 10, e36199. https://doi.org/10.2196/36199

Osmanodja, B., Sassi, Z., Eickmann, S., Hansen, C. M., Roller, R., Burchardt, A., Samhammer, D., Dabrock, P., Möller, S., Budde, K., et al. (2024). Investigating the impact of AI on shared decision-making in post-kidney transplant care (PRIMA-AI): Protocol for a randomized controlled trial. JMIR Research Protocols, 13. https://doi.org/10.2196/54857

Auf, H., Nygren, J., Lundgren, L. E., Petersson, L., & Svedberg, P. (2025). Healthcare professionals’ perspectives on AI-driven decision support in young adult mental health: An analysis through the lens of a shared decision-making framework. Frontiers in Digital Health, 7. https://doi.org/10.3389/fdgth.2025.1588759

Medical AI and AI for medical sciences: An editorial. (2025). JMA Journal, 8, 38–39. https://doi.org/10.31662/jmaj.2024-0355

Aljuraid, R. (2025). The illusion of control: AI chatbot dependency and the threat to clinical autonomy. Studies in Health Technology and Informatics, 332, 211–215. https://doi.org/10.3233/SHTI251529

Mwogosi, A. (2025). Ethical and privacy challenges of integrating generative AI into EHR systems in Tanzania: A scoping review with a policy perspective. Digital Health, 11. https://doi.org/10.1177/20552076251344385

Pham, T. (2025). Ethical and legal considerations in healthcare AI: Innovation and policy for safe and fair use. Royal Society Open Science. https://doi.org/10.1098/rsos.241873

Xie, H., Dai, X., Xie, J., Lei, S., Zeng, J., Yang, J., & Zhou, Y. (2025). Artificial intelligence adoption in surgery: Cognition, usage patterns and implementation barriers of DeepSeek among healthcare professionals in China’s tertiary hospitals. Journal of Multidisciplinary Healthcare, 18, 7719–7737. https://doi.org/10.2147/JMDH.S538723

Shiferaw, K. B., Roloff, M., Waltemath, D., & Zeleke, A. A. (2023). Guidelines and standard frameworks for AI in medicine: Protocol for a systematic literature review. JMIR Preprints. https://doi.org/10.2196/47105

Published

2026-03-25

How to Cite

Maciej Kokoszka, Michalina Chodór, Julia Maria Kuczkowska, Judyta Bordakiewicz, Zuzanna Michalska, Donata Pokorska, Julia Świechowska, Zuzanna Zarzycka, Ingrid Samberger, & Magdalena Wiciak. (2026). Algorithmic authority vs. human touch: A narrative review of patient trust and clinical autonomy in AI-assisted diagnostics. International Journal of Innovative Technologies in Social Science, 3(1(49)). https://doi.org/10.31435/ijitss.1(49).2026.4963