
AI safety measures: a comparative chart

19 July 2024

As technology continues to develop rapidly, legislators and regulators around the world are racing to keep up – or indeed catch up – by implementing measures to protect those whose interests are affected by AI systems.

Unsurprisingly, there is no global standard on AI regulation. But when we look at the existing and proposed safeguards intended to protect individuals where AI systems are introduced into the workplace, clear categories emerge.

Set out in the table below is an analysis of how measures in key existing and proposed legislation could be categorised on this basis.

This is by no means a comprehensive list – draft legislation is being debated and progressed around the world. For example, Canada’s AI and Data Act envisages a risk-based approach to regulation; the US Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence supports the creation of standards for trustworthy AI; and in the UK, the Department for Science, Innovation and Technology has published detailed guidance on the responsible use of AI in recruitment. However, it will be interesting to see the extent to which the “Brussels effect” sees the approach taken in the EU spread around the world.

