Our aim is to study bias and fairness aspects in AI-based systems (e.g., search engines, recommender systems, computational social systems). One specific example is the investigation of popularity bias, i.e., the overrepresentation of interactions with popular items in AI training data, which can lead to the unfair treatment of unpopular items and/or of users with little interest in popular items.
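As a concrete illustration of how popularity bias can be detected in interaction data, a simple skew measure counts what share of all interactions goes to the most popular items. This is a minimal sketch, not part of the service itself; the function name, parameters, and toy log are our own illustrative choices:

```python
from collections import Counter

def popularity_skew(interactions, top_fraction=0.2):
    """Share of all interactions that involve the most popular items.

    `interactions` is a list of (user, item) pairs; values close to 1.0
    indicate a strong popularity bias in the data.
    """
    counts = Counter(item for _, item in interactions)
    ranked = sorted(counts.values(), reverse=True)
    top_n = max(1, int(len(ranked) * top_fraction))
    return sum(ranked[:top_n]) / sum(ranked)

# Toy log: one "blockbuster" item dominates the interactions.
log = [("u1", "i1"), ("u2", "i1"), ("u3", "i1"), ("u4", "i1"),
       ("u1", "i2"), ("u2", "i3"), ("u3", "i4"), ("u4", "i5")]
print(popularity_skew(log))  # 0.5: the top 20% of items receive half of all interactions
```

In practice, richer metrics (e.g., the Gini coefficient over item interaction counts) serve the same purpose; the idea of comparing the popular head of the catalogue against the long tail stays the same.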
We offer consultancy and data science methods to detect and mitigate biases and fairness issues in your data and/or algorithms.
This includes activities such as:
- Exploring data and detecting potentially harmful consequences of hidden biases and/or fairness issues
- Developing evaluation methods, procedures, and metrics to measure biases and fairness issues in your data and/or algorithms
- Consultancy regarding other AI blind spots in your data, e.g., privacy and transparency issues
- Developing novel algorithms to mitigate biases and fairness issues in your system
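To sketch what a mitigation algorithm can look like, one common idea is to reweight training interactions inversely to item popularity, so that rare items are not drowned out by blockbuster items during model fitting. The helper below is a hypothetical, minimal example assuming interaction logs as (user, item) pairs; the `beta` parameter controlling the strength of the correction is our own illustrative choice:

```python
from collections import Counter

def inverse_popularity_weights(interactions, beta=1.0):
    """Per-interaction sample weights proportional to 1 / popularity**beta.

    With beta=0 all interactions weigh the same; larger beta values boost
    interactions with less popular items more strongly.
    """
    counts = Counter(item for _, item in interactions)
    return [1.0 / counts[item] ** beta for _, item in interactions]

# Toy log: item i1 is four times as popular as item i2.
log = [("u1", "i1"), ("u2", "i1"), ("u3", "i1"), ("u4", "i1"), ("u1", "i2")]
print(inverse_popularity_weights(log))  # [0.25, 0.25, 0.25, 0.25, 1.0]
```

Such weights can be passed to any learner that accepts per-sample weights (e.g., a `sample_weight` argument); the actual mitigation strategy chosen for a system would of course depend on the data and algorithms at hand.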
SPECIAL ACCESS CONDITIONS