TY - JOUR
T1 - Physician Adoption of AI Assistant
AU - Hou, Ting
AU - Li, Meng
AU - Tan, Yinliang (Ricky)
AU - Zhao, Huazhong
PY - 2024/7/17
Y1 - 2024/7/17
AB - Problem definition: Artificial intelligence (AI) assistants—software agents that can perform tasks or services for individuals—are among the most promising AI applications. However, little is known about the adoption of AI assistants by service providers (i.e., physicians) in a real-world healthcare setting. In this paper, we investigate the impact of AI smartness (i.e., whether the AI assistant is powered by machine learning intelligence) and of AI transparency (i.e., whether physicians are informed about the AI assistant). Methodology/results: We collaborate with a leading healthcare platform to run a field experiment in which we compare physicians’ adoption behavior, that is, adoption rate and adoption timing, of smart and automated AI assistants under transparent and non-transparent conditions. We find that smartness increases the adoption rate and shortens the adoption timing, whereas transparency only shortens the adoption timing. Moreover, the impact of AI transparency on the adoption rate is contingent on the smartness level of the AI assistant: transparency increases the adoption rate only when the AI assistant is not equipped with smart algorithms and fails to do so when the assistant is smart. Managerial implications: Our study can guide platforms in designing their AI strategies. Platforms should improve the smartness of their AI assistants; if such an improvement is too costly, they should make the AI assistant transparent, especially when it is not smart.
KW - Chatbot
KW - Field experiment
KW - Generative AI
KW - Health intelligence
KW - Medical platform
KW - Operational transparency
UR - https://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=ceibs_wosapi&SrcAuth=WosAPI&KeyUT=WOS:001270697500001&DestLinkType=FullRecord&DestApp=WOS_CPL
U2 - 10.1287/msom.2023.0093
DO - 10.1287/msom.2023.0093
M3 - Journal
SN - 1523-4614
JO - Manufacturing and Service Operations Management
JF - Manufacturing and Service Operations Management
ER -