STÓR

Pre-processing Techniques to Mitigate Against Algorithmic Bias

Heidarpour Shahrezei, Maliheh and Loughran, Roisin and McDaid, Kevin. Pre-processing Techniques to Mitigate Against Algorithmic Bias. IEEE. (Submitted)

PDF - Accepted Version (352kB)

Abstract

A significant portion of current AI research is focused on ensuring that model decisions are fair and free of bias. Such research should consider not merely the algorithm but also the datasets, metrics and approaches used. In this paper, we apply several pre-processing techniques to achieve fair results for classification tasks by assigning instance weights, sampling and changing class labels. We used two well-known classifiers, Logistic Regression and Decision Tree, performing experiments on a popular dataset in the fairness domain. This research aims to compare the effects of different pre-processing techniques on the resulting confusion matrix elements and the derived fairness metrics. We found that the Massaging technique with the Logistic Regression classifier produced the Disparate Impact value closest to one, while for the Decision Tree classifier, Reweighting and Uniform Sampling outperformed Massaging on all of our fairness metrics and both sensitive attributes.
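The quantities named in the abstract can be illustrated in a short sketch. Disparate Impact is the ratio of positive-outcome rates between the unprivileged and privileged groups (a value of one indicates parity), and Reweighting (in the style of Kamiran and Calders) assigns each instance the weight w(s, y) = P(S=s)·P(Y=y) / P(S=s, Y=y). This is a minimal illustrative sketch, not the paper's implementation; all function and variable names are assumptions.

```python
from collections import Counter

def disparate_impact(y_pred, sensitive, unprivileged=0, privileged=1):
    """DI = P(y_pred=1 | S=unprivileged) / P(y_pred=1 | S=privileged).

    Values close to 1 indicate similar positive-prediction rates
    across the two groups.
    """
    unpriv = [yp for yp, s in zip(y_pred, sensitive) if s == unprivileged]
    priv = [yp for yp, s in zip(y_pred, sensitive) if s == privileged]
    return (sum(unpriv) / len(unpriv)) / (sum(priv) / len(priv))

def reweighting_weights(y, sensitive):
    """Per-instance weights w(s, y) = P(S=s) * P(Y=y) / P(S=s, Y=y).

    Instances in under-represented (group, label) combinations receive
    weights above 1, so a weight-aware classifier treats the groups
    as if labels were independent of the sensitive attribute.
    """
    n = len(y)
    p_s = Counter(sensitive)          # counts per sensitive-attribute value
    p_y = Counter(y)                  # counts per class label
    p_sy = Counter(zip(sensitive, y)) # joint counts
    return [(p_s[s] / n) * (p_y[yy] / n) / (p_sy[(s, yy)] / n)
            for s, yy in zip(sensitive, y)]
```

Such weights can be passed, for example, to a classifier that accepts per-sample weights (e.g. scikit-learn's `fit(X, y, sample_weight=...)`).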

Item Type: Article
Subjects: Computer Science
Research Centres: Regulated Software Research Centre
Depositing User: Maliheh HeidarpourShahrezaei
Date Deposited: 26 Feb 2024 14:34
Last Modified: 26 Feb 2024 14:34
License: Creative Commons: Attribution-Noncommercial-Share Alike 4.0
URI: https://eprints.dkit.ie/id/eprint/871
