Physician Adoption of AI Assistant

Ting Hou, Meng Li, Yinliang (Ricky) Tan, Huazhong Zhao

Research output: Contribution to journal

Abstract

Problem definition: Artificial intelligence (AI) assistants, software agents that perform tasks or services for individuals, are among the most promising AI applications. However, little is known about the adoption of AI assistants by service providers (i.e., physicians) in a real-world healthcare setting. In this paper, we investigate the impact of AI smartness (i.e., whether the AI assistant is powered by machine learning intelligence) and of AI transparency (i.e., whether physicians are informed of the AI assistant) on physician adoption. Methodology/results: We collaborate with a leading healthcare platform to run a field experiment in which we compare physicians' adoption behavior (adoption rate and adoption timing) of smart and automated AI assistants under transparent and non-transparent conditions. We find that smartness increases the adoption rate and shortens the adoption timing, whereas transparency only shortens the adoption timing. Moreover, the impact of AI transparency on the adoption rate is contingent on the smartness of the AI assistant: transparency increases the adoption rate only when the AI assistant is not equipped with smart algorithms, and fails to do so when it is. Managerial implications: Our study can guide platforms in designing their AI strategies. Platforms should improve the smartness of their AI assistants. If such an improvement is too costly, the platform should make the AI assistant transparent, especially when it is not smart.
Original language: English
Number of pages: 18
Journal: Manufacturing and Service Operations Management
DOIs
Publication status: Published - 17 Jul 2024

Keywords

  • Chatbot
  • Field experiment
  • Generative AI
  • Health intelligence
  • Medical platform
  • Operational transparency

Indexed by

  • SSCI
  • SCIE
  • ABDC-A*
  • FT