Why the law must play a central role in the governance of Artificial Intelligence

Public authorities are making decisions about you based on Artificial Intelligence – but how do we ensure there is adequate accountability in law for these powerful technologies and their applications?

AI offers us many benefits: AI systems are proving better than humans at diagnosing some forms of cancer and at predicting forest fires. In future, they could help to identify children at risk of abuse and prevent ‘high-risk’ individuals from escaping investigation and enforcement.

Yet like any new technology, AI systems (whether physical robots or software-enabled services) have flaws and unintended effects. Much of the concern expressed so far has focused on discrimination and bias, particularly against minority groups. For example, facial recognition (or ‘face-rec’), used to scan crowds in public places for suspected criminals, is more likely to falsely identify people from minority groups as suspects. Crime prediction systems are similarly prone to wrongly flagging people who live in deprived areas or belong to minority groups, and there is little reliable evidence that such systems are accurate or effective in preventing or deterring crime.

Yet there are even more fundamental issues at stake, striking at the very foundation of freedom and democracy, at least for societies that claim to be committed to respect for human rights and individual dignity.

'Lawyers and legal institutions must play a greater role in ensuring the safety and accountability of advanced data and analytics technologies,' says Professor Karen Yeung at the University of Birmingham. 'Facial recognition,' Karen says, 'fundamentally reverses the presumption of liberty on which British constitutional culture has long rested. It proceeds on the premise that "everyone is under suspicion and the state is entitled to monitor and identify individuals in real time".'

Karen argues that the public need to know what analytics are being applied to them if they are to participate in legal discussion and public deliberation. 'Without this, we are placing excessive faith in technology companies to, in essence, "mark their own homework". They proclaim that they will aspire to meet voluntary ethics guidelines, but these lack any meaningful oversight or enforcement, let alone democratic input.'

Karen is optimistic that laws and legal frameworks can deal with AI’s emerging risks. However, we need to move fast: AI is already operating in our daily lives, and we need to watch out for an 'unholy alliance between government and the tech industry'. She says: 'Governments say "if we can just grow the tech industry, then we will attract entrepreneurs, stimulate the economy, increase government revenue, and thereby enhance the well-being of everyone, so we mustn’t regulate because this would stifle innovation". This, in my view, is one of the main reasons why the law has been sidelined.'

Visit the University website to hear more from Karen.