AI and employees in tandem: Consequences of disclosing human involvement

Prof. Dr. Oliver Hinz (Photo: Markus Haßfurter)

Should companies that use AI chatbots be transparent about the human employees working behind the scenes? Do customer interactions change when human involvement is disclosed? If so, how and why? A recent paper co-authored by Oliver Hinz, published in Information Systems Research, offers answers to these questions.

The findings suggest that disclosing potential human involvement before or during an interaction leads customers to adopt a more human-oriented communication style. Rather than typing simple keyword-style queries, customers tend to use longer, more complex, and more natural sentences when human involvement is disclosed than when it is not. This effect is driven by customers’ impression management concerns: customers are more concerned about making a good impression when they know that human employees may step in if the chatbot is unable to respond. Ultimately, the more human-oriented communication style increases employee workload. Because the language used is more complex, fewer customer requests can be handled automatically by the chatbot and more must be delegated to a human. These findings are important because they help us understand how customers respond to human-AI hybrid service agents, and they reveal how transparency about human involvement can place additional workload on the employees working in tandem with AI.

If you are interested in this paper, you can find further information here.