Artificial intelligence: Faced with privacy risks, the UN calls for a moratorium on certain systems | UN Info
The UN justifies the urgent need to establish these safeguards, or a moratorium on the sale and use of artificial intelligence (AI) systems, by the "serious risk of harm to human rights".
"Artificial intelligence technologies can have negative, even catastrophic effects if used without taking into account the way they affect human rights," said Michelle Bachelet, High Commissioner of Human RightsThe UN.
Faced with AI's ability to fuel human rights violations on a "colossal scale", Ms. Bachelet's office is calling on the world to act now. "The higher the risks to human rights, the stricter the legal obligations governing the use of AI technologies should be," said the High Commissioner.
Unsplash/Possessed Photography | Artificial intelligence (AI) holds great promise for improving health care delivery and medicine around the world, but only if ethics and human rights are placed at the heart of its design.
AI systems used to determine who can benefit from public services
Ms. Bachelet wants a risk assessment of the various systems that rely on artificial intelligence. "And since assessing and taking account of the risks can take some time, states should impose moratoriums on the use of potentially high-risk technologies," she said.
The report, which was requested by the Human Rights Council, focuses on how these technologies have often been put in place without their operation or their impact being properly evaluated. It analyzes how AI, including profiling, automated decision-making and other machine-learning technologies, affects people's right to privacy and other rights, in particular the rights to health, education, freedom of movement, freedom of peaceful assembly and association, and freedom of expression.
The document looks at one aspect of AI systems: they are sometimes used to determine who can benefit from public services, or to decide who has a chance of being recruited for a job.
According to the UN, many people have already been "treated unjustly because of AI", for example being denied social security benefits because of faulty AI tools, or being arrested because of a facial recognition error.
The challenges posed by predictive methods
The report also examines how states and businesses have often rushed to deploy AI applications "without exercising due diligence".
On this point, the document highlights in particular the increasingly frequent use of AI-based systems by the police, including predictive methods. "The inferences, forecasts and surveillance carried out by AI tools, including the search for patterns of human behavior, also raise serious questions," the document states.
On another level, the biased data sets on which AI systems rely can lead to discriminatory decisions. "And these risks are even higher for groups that are already marginalized," warns Ms. Bachelet's office.
The other challenge relates to biometric technologies, which are increasingly becoming a solution of choice for states, international organizations and technology companies. These technologies, which include facial recognition, are increasingly used to identify people in real time and remotely, potentially allowing unlimited surveillance of individuals.
Unsplash/Possessed Photography | Greater transparency is needed as to how states design and use AI.
Companies and states invited to show "greater transparency"
The report reiterates calls for a moratorium on their use in public spaces, especially since the data used to inform and guide AI systems may be faulty, discriminatory, out of date or irrelevant.
"We cannot afford to continue to try to make up for the train as regards the AI and allow it to be used without almost control and to repair the consequences on human rights afterwards,"insisted the High Commissioner.
More generally, the UN invites businesses and states to show "greater transparency in how they design and use AI".
According to the report, "the complexity of the data environment, algorithms and models that underlie the development and functioning of AI systems, as well as intentional recovery of information from government and private actorsare factors that undermine the means available to the public to truly understand the effects of AI systems on human rights and society ”.