While AI is widely used by law enforcement agencies, rights groups say it is also abused by authoritarian regimes for mass and discriminatory surveillance. Critics also worry about the violation of people’s fundamental rights and data privacy rules.
The Vienna-based EU Agency for Fundamental Rights (FRA) urged policymakers in a report issued on Monday to provide more guidance on how existing rules apply to AI and ensure that future AI laws protect fundamental rights.
“AI is not infallible, it is made by people – and humans can make mistakes. That is why people need to be aware when AI is used, how it works and how to challenge automated decisions,” FRA Director Michael O’Flaherty said in a statement.
FRA’s report comes as the European Commission, the EU executive, considers legislation next year to cover so-called high-risk sectors such as healthcare, energy, transport and parts of the public sector.
The agency said AI rules must respect all fundamental rights and include safeguards to ensure they do, including a guarantee that people can challenge decisions taken by AI and that companies can explain how their systems reach those decisions.
It also said there should be more research into the potentially discriminatory effects of AI so Europe can guard against them, and that the bloc must further clarify how data protection rules apply to the technology.
FRA’s report draws on more than 100 interviews with public and private organisations already using AI, with the analysis focusing on uses of the technology in Estonia, Finland, France, the Netherlands and Spain.