To Err is Machine: Biases, Failure and Fairness in AI
November 29 @ 6:00 pm - 8:00 pm
NB: Doors open at 5.30 pm
Fifth in a series of public discussions, this event will cover issues around machine biases, failures and fairness.
About the event
Everybody makes mistakes – and so, sometimes, do machines. Despite their perceived capabilities and our trust in them, they are, like us, prone to biases. Machine biases arise from current machine learning methods and the training data they use. Such systems may be brittle and fail in unexpected ways, or be black boxes whose decision-making processes are hard to understand and interpret.
Neither the public, nor software engineers and data scientists, nor regulators are fully aware of the extent of machine biases and failures.
But as these systems become part of our daily lives, machine errors can have critical implications for recruitment, medical diagnosis, the safety of automated systems and political debate.
Come and listen to our panel discuss the consequences of AI system failures and the methods used to develop, deploy and utilise these systems.
In particular, we will try to establish: How well do we understand the decision-making processes of different AI systems? How can we ensure public accountability and transparency of decisions made by AI? How can we prevent algorithmic bias and other undesirable consequences? We will also ask whether we should refrain from using AI systems for certain decisions, such as criminal sentencing.
Come and join the discussion.
This event is co-organised by EdIntelligence (Machine Learning Society), School of Informatics, University of Edinburgh and Edinburgh Futures Institute.
The event will be followed by a drinks reception/social.
NB This event might be streamed live, recorded and/or photographed.