Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
MD4 - BO4: Human-machine interaction
Time: Monday, 27 June 2022, 16:00–17:30

Session Chair: Bryce Hunter McLaughlin
Location: Forum 8


Presentations

On the Fairness of Machine-Assisted Human Decisions

Talia Gillis1, Bryce McLaughlin2, Jann Spiess2

1Columbia University; 2Stanford University

In this project, we study the fairness implications of using machine learning to assist a human decision-maker. Relative to a baseline where machine decisions are implemented directly, we show in a formal model that the inclusion of a biased human decision-maker can reverse common relationships between accuracy and fairness. Specifically, we document that excluding information about protected groups from the prediction may fail to reduce, and may even increase, ultimate disparities.
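
A minimal simulation sketch of this mechanism (the setup, variable names, and parameters below are our own illustrative choices, not the authors' model): the human blends the machine score with a private, group-biased signal using precision weights, so blinding the machine to the protected attribute makes its score noisier and shifts weight onto the biased signal.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Illustrative synthetic setup: latent quality s depends on a feature x
# and on membership in protected group g.
g = rng.integers(0, 2, n)                   # protected-group indicator
x = rng.normal(0.0, 1.0, n)
s = x + 0.6 * g + rng.normal(0.0, 0.5, n)   # true quality

# Machine predictions of s, with and without access to g.
m_with = x + 0.6 * g   # E[s | x, g]; error variance 0.25
m_blind = x + 0.3      # E[s | x];    error variance 0.25 + 0.09 = 0.34

# The human combines the machine score with a private signal carrying an
# unrecognized bias against g = 1, weighting by the precisions they
# perceive (the bias is invisible to them; only the noise term counts).
h = s - 1.0 * g + rng.normal(0.0, 0.5, n)   # biased private signal

def blend(m, var_m, h, var_h=0.25):
    w = (1.0 / var_m) / (1.0 / var_m + 1.0 / var_h)
    return w * m + (1.0 - w) * h

for name, m, var_m in [("with g", m_with, 0.25), ("blind ", m_blind, 0.34)]:
    accept = blend(m, var_m, h) > 0.0       # accept iff blended score > 0
    gap = accept[g == 0].mean() - accept[g == 1].mean()
    print(f"{name}: selection-rate gap = {gap:+.3f}")

In this toy parameterization the blinded pipeline yields the larger selection-rate gap: the less informative machine score cedes weight to the biased human signal.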



Automation and Sustaining the Human-Algorithm Learning Loop

Christina Imdahl1, Kai Hoberg2, William Schmidt3

1Eindhoven University of Technology, The Netherlands; 2Kuehne Logistics University, Germany; 3Cornell University, USA

In many practical settings, a human reviews recommendations from a decision support algorithm and either approves or adjusts each recommendation. Automation may reduce an ML system's longer-term ability to predict effective adjustments and lead to predictive performance degradation over time. We empirically demonstrate this effect and show how to incorporate the loss of learning into the automation decision.
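
The automation decision described here lends itself to a small sketch (the rule, names, and parameters below are hypothetical, not the authors' estimator): each automated case saves a review cost but removes the (recommendation, adjustment) pair that future retraining would have learned from, so a forward-looking rule prices that lost signal.

import numpy as np

rng = np.random.default_rng(1)

def automate(confidence, review_cost=1.0, learning_weight=2.0):
    """Hypothetical rule: automate a case only when the saved review cost
    exceeds a shadow price on the learning signal lost by skipping the
    human, a price that falls as model confidence rises."""
    lost_learning = learning_weight * (1.0 - confidence)
    return review_cost > lost_learning

confidence = rng.uniform(0.0, 1.0, 100_000)   # model confidence per case
auto = automate(confidence)
print(f"automated share:           {auto.mean():.0%}")
print(f"cases still feeding model: {(~auto).mean():.0%}")

A myopic rule that automates every case maximizes short-run savings but starves the retraining feed, which is the degradation mechanism the abstract documents.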



Algorithmic Assistance with Recommendation-Dependent Preferences

Bryce Hunter McLaughlin, Jann Lorenz Spiess

Stanford University Graduate School of Business, United States of America

We provide a stylized model in which a principal chooses a classifier D with known properties for a Bayesian decision-maker, who observes the output of D before determining their own label in a binary classification problem. The decision-maker's utility deviates from the principal's whenever they take an action that contradicts the classifier. We characterize the optimal posterior decision and show how the optimal classifier for assistance depends on the decision-maker's prior.
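
One way to make the model concrete (the notation below is ours, not necessarily the authors'): give the principal symmetric classification payoffs and charge the decision-maker a lump-sum cost c > 0 for contradicting the recommendation d = D(x).

% Hypothetical formalization; notation is ours, not the authors'.
% Decision-maker's utility: the principal's payoff minus a deviation cost.
\[
  U_{\mathrm{DM}}(a, y, d) \;=\; \mathbf{1}\{a = y\} \;-\; c\,\mathbf{1}\{a \neq d\},
  \qquad d = D(x), \quad c > 0.
\]
% Given the posterior p = Pr(y = 1 | x, d), the optimal posterior decision
% follows the recommendation on a band of width c around one half:
\[
  a^{*}(p, d) \;=\;
  \begin{cases}
    1, & p > \tfrac{1 + c}{2}, \\
    0, & p < \tfrac{1 - c}{2}, \\
    d, & \text{otherwise.}
  \end{cases}
\]

The width-c compliance band is what ties the design problem to the decision-maker's prior: the classifier only moves decisions whose posteriors land inside the band, so the optimal D depends on where those posteriors sit.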


