Ongoing projects

Bauer, K., Jussupow, E., Heigl, R., Vogt, B., & Hinz, O. (2024). All Just in Your Head? Unraveling the Side Effects of Generative AI Disclosure in Creative Tasks. (Revise and Resubmit Information Systems Research).

Generative text-to-image artificial intelligence (GenAI) tools, such as Midjourney and DALL-E, enable the rapid production of synthetic visual artifacts and are thereby reshaping creative processes. However, as regulatory and organizational mandates for disclosing GenAI use gain traction, understanding their downstream implications for creators and consumers becomes critical. While prior research has focused on the direct effects of GenAI disclosure on perceived creativity, this paper explores an overlooked dynamic: the indirect effects of anticipated disclosure on creators' use of GenAI. Using a nested mixed-methods approach, we conducted two experiments to investigate how creators adapt their effort and reliance on GenAI when anticipating disclosure. The results reveal that creators reduce their effort and delegate more creative agency to GenAI, leading to the production of images perceived as more novel but less visually appealing and of lower clarity. These behavioral shifts are driven by changing second-order effort beliefs, that is, creators' expectations about how others would recognize the human effort that went into the creative process. Our findings underscore the socio-technical dynamics of GenAI adoption and highlight how disclosure policies influence creative practices and redefine humans' role therein. We discuss the practical implications of our results for platforms considering the implementation of disclosure policies, emphasizing the importance of balancing transparency about GenAI use with highlighting invested human effort.

Yang, C., Bauer, K., Li, X., & Hinz, O. (2024). My Advisor, Her AI and Me: Evidence from a Field Experiment on Human-AI Collaboration and Investment Decisions. (Revise and Resubmit Management Science).
Amid ongoing policy and managerial debates on keeping humans in the loop of AI decision-making processes, we investigate whether human involvement in AI-based service production benefits downstream consumers. Partnering with a large savings bank, we produced pure AI and human-AI collaborative investment advice, passed it to the bank's customers, and investigated their advice consumption in a field experiment. On the production side, contrary to concerns that humans might inefficiently override AI outputs, our findings show that having a human banker in the loop of AI-based financial advisory, by giving her the final say over the advice provided, does not compromise the quality of the advice. More importantly, on the consumption side, we find that customers are more likely to align their final investment decisions with helpful advice from this human-AI collaboration than with pure AI advice, especially when facing riskier investments. In our setting, this increased reliance on human-AI collaborative advice also leads to higher material welfare for consumers. Additional analyses from the field experiment, along with an online controlled experiment, indicate that the persuasive efficacy of human-AI collaborative advice cannot be attributed to consumers' belief in increased advice quality resulting from complementarities between human and AI capabilities. Instead, the consumption-side benefits of human involvement in the AI-based service largely stem from human involvement serving as a peripheral cue that enhances the affective appeal of the advice. Our findings indicate that regulations and guidelines should adopt a consumer-centric approach by fostering environments where human capabilities and AI systems can synergize effectively to benefit consumers while safeguarding consumer welfare. These nuanced insights are crucial for managers who face decisions about offering pure AI or human-AI collaborative services and for regulators advocating for keeping humans in the loop.
Bauer, K., Grunewald, A., Hett, F., Jagow, J., & Speicher, M. (2024). Testing and interpreting the effectiveness of causal machine learning: an economic theory approach. Available at SSRN 5013225. (Submitted to Journal of Political Economy).
We demonstrate how causal machine learning (CML) enables treatment targeting that improves intervention effectiveness without context-specific historical data. In a field experiment with nearly 500,000 participants at an online retailer, CML-based targeting transforms an ineffective loss-framing intervention into one generating an 11% revenue increase. By combining data from the RCT with a behavioral measurement experiment, we further document a strong correlation between CML-predicted targeting and individual loss aversion. Hence, CML implicitly captures established theoretical constructs, enhancing both the interpretability and transparency of its outputs. Furthermore, CML outperforms targeting based directly on measured loss aversion, demonstrating its ability to uncover heterogeneity beyond existing models.
von Zahn, M., Liebich, L., Jussupow, E., Hinz, O., & Bauer, K. (2024). Please take over: XAI, delegation of authority, and domain knowledge. (Revise and Resubmit Information Systems Research).
Humans and artificial intelligence (AI) often possess complementary capabilities that can lead to substantial efficiency gains through collaboration. A potentially effective strategy to leverage these complementary capabilities involves humans allocating tasks between themselves and AI. Echoing Adam Smith's principles of efficient labor division, synergies emerge when humans assign tasks to AI where it has a higher likelihood of success and retain other tasks for themselves. However, previous studies indicate that human task allocation to AI is frequently suboptimal, thus forgoing potential gains in efficiency. A primary obstacle is humans' inadequate understanding of the scope and limits of their own task-relevant knowledge, i.e., a lack of metaknowledge. This paper explores whether explainable AI (XAI) can improve human metaknowledge and thereby enhance delegation efficiency in human-AI collaborations. We devise a formal model and empirically validate its theoretical predictions through an incentivized field experiment with professional real estate experts in Germany. In our field study, experts decide whether to evaluate apartments themselves or to delegate these tasks to an AI. After task allocation, both the experts and the AI independently assess the apartments assigned to them. We exogenously vary whether the AI is a black box or provides explanations of its learned logic of how apartment characteristics determine prices. Our findings reveal that explanations of the AI's pricing logic substantially increase both the frequency and quality of delegation decisions, fostering more effective human-AI collaboration and task performance. We show that the improvement in delegation quality is largely due to experts' improved understanding of the scope and limits of their own task-relevant knowledge. Our results indicate that explainability can be a crucial catalyst that enhances not only humans' understanding of the AI's capabilities but also of their own, leading to better delegation. Our findings have implications for the design of AI systems in collaborative delegation settings.
Zacharias, J., von Schenk, A., Klockmann, V., Hinz, O., & Bauer, K. (2024). Decentralized Feature Selection for Machine Learning. (Under Review at Management Science).
In the age of machine learning (ML), consumers' personal data is widely used for personalized product recommendations. To address privacy concerns, regulations increasingly grant consumers control over their data. One implementation is an "opt-out of information use" feature that allows consumers to specify which of their collected personal data ML-powered recommender systems may harness. However, we conjecture that such features may have an unintended side effect: withholding data could inadvertently reveal insights about consumers' latent characteristics, thereby enhancing targeting possibilities. Through a controlled, pre-registered experiment, we evaluate both consumers' perceptions and the technical consequences of such opt-out features in the context of a typical search problem. Our results show that these features increase consumers' sense of control over the system and alleviate privacy concerns for those who actually withhold information. Paradoxically, the decision to withhold information can simultaneously be harnessed to improve the ML model's predictive accuracy. From a policy perspective, we highlight the need for additional regulations on how organizations may use information-withholding decisions, particularly when consumers' interests conflict with those of the recommender provider.
Chen, J., Heigl, R., von Zahn, M., Hinz, O., & Bauer, K. (2024). Artificial (Partisan) Intelligence? Political Affiliations and Human-AI Interaction. (Under Review at Journal of Management Information Systems).
Political polarization is intensifying globally, shaping group identities and influencing behaviors. This raises the question of whether discriminatory behaviors based on political affiliations extend to interactions with artificial intelligence (AI) systems developed by others. To explore this, we investigate how shared or opposing political affiliations between users and AI developers influence human-AI interaction. Drawing on the "computers-as-social-actors" paradigm and social identity theory, we develop a formal framework and test the derived hypotheses through a survey on ChatGPT and a controlled experiment in which we exogenously vary the disclosure of the AI developers' political affiliation before users interact with the AI. Our results show that opposing political affiliations with developers reduce users' likelihood of accessing and utilizing AI systems, indicating an ingroup bias rather than changes in trust or accuracy perceptions. Our study offers insights for AI development outsourcing and upcoming regulations, such as the European Union's AI Act, which mandates transparency about developer identity.
Nofer, M., Abdel-Karim, B., Bauer, K., & Hinz, O. (2023). The effect of discontinuing machine learning decision support. SAFE Working Paper (No. 370). (Revise and Resubmit at Business & Information Systems Engineering).
Advances in Machine Learning (ML) have led organizations to increasingly implement predictive decision aids to enhance employees’ decision-making performance. While such systems improve organizational efficiency in many contexts, they may inadvertently impact the development of human decision-making skills. Drawing on cognitive theories, this study examines how the use of ML-based decision aids impacts skill development and performance, particularly in scenarios where access to the system is disrupted, such as during system discontinuance, or when the system exhibits bias or errors. Using a novel experimental design tailored to address organizational challenges and endogeneity concerns, we identify causal effects of ML reliance on skill development in decision making. Specifically, we demonstrate that reliance on ML predictions can hinder the development of critical decision-making skills, resulting in significant performance drops when the system becomes unavailable. Furthermore, we find that the extent of trust in the system's predictions strongly influences the severity of this skill deficit. These findings highlight the need for thoughtful integration of ML decision aids, emphasizing the importance of balancing reliance with skill retention to mitigate risks associated with temporary or permanent system disruptions.
Bauer, K., Hett, F., Chen, Y., & Kosfeld, M. (2024). Group Identity and Belief Formation: Implications for Political Polarization. (Under Review at Journal of Political Economy).
To evaluate the impact of group identity on belief formation, we conducted online experiments before and after the 2020 US presidential election. We elicit participants' beliefs about future unemployment statistics and provide relevant news summaries. We find that people pay money to avoid information from political outgroups and attribute lower weight to this information when updating beliefs. An intervention that removes the labels of information sources decreases outgroup information avoidance by 50%, an effect driven by groupish participants. A debiasing intervention that equalizes the instrumental value of information sources reduces only universalists' information avoidance. We establish source utility as a key mechanism contributing to polarization.
Liebich, L., Kosfeld, M., & Bauer, K. (2024). Decoding GPT's hidden rationality of cooperation. (Under Review at the European Conference on Information Systems).
As large language models (LLMs) increasingly interact with humans in everyday life, understanding the strategic dimensions of their cooperation behavior becomes crucial. This paper contributes to our understanding of LLM strategic behavior, specifically that of OpenAI's GPT, by experimentally shedding light on the evolution of its cooperative behavior in the context of a social dilemma. We pose two questions: (i) Has GPT learned to effectively cooperate with humans in strategic situations? (ii) Can established models of human cooperation behavior explain GPT's cooperative strategies? Utilizing behavioral economic paradigms to assess the cooperative strategies of GPT-3.5 Turbo, GPT-4, GPT-4 Turbo, and GPT-4o, we find that, overall, GPT effectively cooperates with humans, a tendency that becomes more pronounced as model iterations evolve. Estimating a behavioral model that incorporates conditional concerns for welfare as the strategic motive driving cooperation in our social dilemma, we can explain between 84.5% and 100% of GPT's strategies using a sparse model with only two parameters. GPT's cooperation with humans thus emerges as highly strategic behavior that maximizes the well-being of the human it interacts with while balancing self-preservation. Methodologically, this study underscores how behavioral economic models, traditionally applied to human behavior, can be used to explain the strategic profiles of LLMs.
Bauer, K., von Siemens, F., & Kosfeld, M. (2023). Monetary Incentives, Self-Selection, and Coordination of Motivated Agents for the Production of Social Goods. SAFE Working Paper (No. 318). (Under Review at Games and Economic Behavior).
We study, theoretically and empirically, the effect of incentives on the self-selection and coordination of motivated agents to produce "social" goods in the presence of positive effort complementarities. Theory predicts that lowering incentives increases social-good production via the self-selection and coordination of motivated agents into low-incentive work environments. We test this prediction in a novel lab experiment that allows us to isolate the effect of self-selection cleanly. Results show that social-good production more than doubles if incentives are low, but only if self-selection is possible. The analysis identifies a crucial role of incentives in the matching and coordination of motivated agents.

Selected publications

Bauer, K., von Zahn, M., & Hinz, O. (2023). Expl(AI)ned: The impact of explainable artificial intelligence on users' information processing. Information Systems Research, 34(4), 1582-1602. (ISR Best Paper Award 2024, AIS Senior Scholar Award 2024)
Bauer, K., & Gill, A. (2024). Mirror, mirror on the wall: Algorithmic assessments, transparency, and self-fulfilling prophecies. Information Systems Research, 35(1), 226-248.
von Zahn, M., Bauer, K., Mihale-Wilson, C., Jagow, J., Speicher, M., & Hinz, O. (2024). Smart green nudging: Reducing product returns through digital footprints and causal machine learning. Marketing Science. (forthcoming)
Bauer, K., Heigl, R., Hinz, O., & Kosfeld, M. (2024). Feedback loops in machine learning: A study on the interplay of continuous updating and human discrimination. Journal of the Association for Information Systems, 25(4), 804-866.
Knickrehm, C., & Bauer, K. (2024). GPT, Emotions, and Facts. Proceedings of the International Conference on Information Systems (ICIS) 2024.
Nofer, M., Bauer, K., Hinz, O., van der Aalst, W., & Weinhardt, C. (2023). Quantum computing. Business & Information Systems Engineering, 65(4), 361-367.
Bauer, K., Hinz, O., van der Aalst, W., & Weinhardt, C. (2021). Expl(AI)n it to me – Explainable AI and Information Systems Research. Business & Information Systems Engineering, 63, 79-82.