Conference Paper / Proceeding / Abstract
Towards a Cognitive Model for Inferring Dynamic Fairness Perception to Support Fairer Human-Robot Collaboration
HRI Companion '26: Companion Proceedings of the 21st ACM/IEEE International Conference on Human-Robot Interaction, Pages: 272 - 276
Swansea University Author: Muneeb Ahmad
© 2026 Copyright held by the owner/author(s). This work is licensed under a Creative Commons Attribution 4.0 International License.
DOI (Published version): 10.1145/3776734.3794398
| Field | Value |
|---|---|
| Published in: | HRI Companion '26: Companion Proceedings of the 21st ACM/IEEE International Conference on Human-Robot Interaction |
| ISBN: | 9798400723216 |
| Published: | New York, NY, USA: Association for Computing Machinery (ACM), 2026 |
| URI: | https://cronfa.swan.ac.uk/Record/cronfa71477 |
| Abstract: | Current research on measuring human perceptions of fairness in Human-Robot Teams (HRTs) has primarily focused on subjective metrics, such as rating statements either during or at the conclusion of interactions. This suggests a gap in examining the dynamic and evolving nature of fairness perceptions objectively during human-robot collaboration. In this paper, we introduce a novel cognitive model that enables individuals to perceive fairness dynamically throughout an HRT experiment. This model is inspired by the Bayesian Theory of Mind, allowing us to infer perceptions of fairness in real-time. The core idea of the model is that fairness perception stems from a person's ongoing inference about the bias in a robot's value function. We establish an equation that translates this inference into a perceived fairness value, which is based not only on the inferred bias but also on the confidence of that inference. A qualitative comparison of the model's performance with a previous human-robot collaboration study suggests that it can effectively capture key trends in human fairness perception dynamically. These findings highlight the model's potential applicability, and it may be utilized in resource distribution algorithms in HRTs to promote fairer collaboration. |
| Item Description: | Short Paper |
| Keywords: | Human-Robot Interaction, Fairness, Task or Resource Allocation, Bayesian Theory of Mind, Second-order Theory of Mind |
| College: | Faculty of Science and Engineering |
| Funders: | This work was supported in part by JSPS KAKENHI Grant Number JP24K20846. |
| Start Page: | 272 |
| End Page: | 276 |
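The abstract's core idea — a Bayesian inference over the bias in a robot's value function, translated into a perceived-fairness value that also depends on the confidence of that inference — can be illustrated with a minimal sketch. This is NOT the authors' actual model or equation (the paper's formulation is not reproduced in this record); the likelihood, the bias grid, and the fairness mapping below are all illustrative assumptions chosen only to show the general Bayesian-Theory-of-Mind shape of the approach.

```python
import numpy as np

def update_posterior(prior, bias_grid, allocation, noise=0.1):
    """One Bayesian update of the belief over the robot's bias b.

    Assumption (hypothetical): `allocation` is the fraction of a shared
    resource the robot kept for itself (0..1), and a bias b predicts the
    robot keeps roughly 0.5 + b, observed with Gaussian noise.
    """
    likelihood = np.exp(-((allocation - (0.5 + bias_grid)) ** 2)
                        / (2 * noise ** 2))
    posterior = prior * likelihood
    return posterior / posterior.sum()

def perceived_fairness(posterior, bias_grid):
    """Map the bias posterior to a fairness value in [0, 1].

    Illustrative rule (not the paper's equation): fairness decreases
    with the magnitude of the inferred bias, scaled by confidence,
    here taken as 1 minus the normalized posterior entropy.
    """
    mean_bias = float(np.dot(posterior, bias_grid))
    entropy = -np.sum(posterior * np.log(posterior + 1e-12))
    confidence = 1.0 - entropy / np.log(len(bias_grid))
    return 1.0 - confidence * min(1.0, abs(mean_bias) / 0.5)

# Discretized hypothesis space over the bias parameter, uniform prior.
bias_grid = np.linspace(-0.5, 0.5, 101)
belief = np.ones_like(bias_grid) / len(bias_grid)

# A robot that keeps roughly half the resource each round should be
# inferred as unbiased, yielding a high perceived-fairness value.
for allocation in [0.50, 0.52, 0.48, 0.50]:
    belief = update_posterior(belief, bias_grid, allocation)

fairness = perceived_fairness(belief, bias_grid)
```

Because the posterior is updated after every observed allocation, `fairness` can be recomputed at each step of an interaction, which matches the abstract's emphasis on tracking fairness perception dynamically rather than with a single post-hoc rating.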

