In 2019, border guards in Greece, Hungary, and Latvia began testing an AI-powered lie detector. The system, called iBorderCtrl, analyzed facial movements to try to detect signs that a person was lying to a border agent. The trial was driven by nearly $5 million in European Union research funding and almost 20 years of research at Manchester Metropolitan University, in the United Kingdom.
The trial generated controversy. Psychologists have widely reported that polygraphs and other technologies built to detect lies from physical attributes are unreliable, and errors were reported in iBorderCtrl as well. Media reports indicated that its lie-prediction algorithm did not work, and the project’s own website acknowledged that the technology “may pose risks to fundamental human rights.”
This month, Silent Talker, a company spun off from Manchester Met that made the technology behind iBorderCtrl, was dissolved. But that’s not the end of the story. Lawyers, activists, and lawmakers are pushing for a European Union law to regulate AI that would ban systems claiming to detect human deception in migration, citing iBorderCtrl as an example of what can go wrong. Former Silent Talker executives could not be reached for comment.
The ban on AI lie detectors at borders is one of thousands of amendments to the AI Act being considered by officials from EU countries and members of the European Parliament. The legislation is intended to protect the fundamental rights of EU citizens, such as the right to live free from discrimination or to claim asylum. It labels some AI use cases “high risk,” some “low risk,” and prohibits others. Those lobbying to change the AI Act include human rights groups, unions, and companies like Google and Microsoft, which want the law to draw a distinction between those who make general-purpose AI systems and those who deploy them for specific uses.
Last month, advocacy groups including European Digital Rights and Platform for International Cooperation on Undocumented Migrants called for the law to ban the use of AI polygraphs that measure things like eye movement, tone of voice or facial expression at borders.
Statewatch, a civil liberties nonprofit, released an analysis warning that the AI Act, as written, would allow the use of systems such as iBorderCtrl, adding to Europe’s existing “publicly funded border AI ecosystem.” The analysis calculated that, over the past two decades, around half of the €341 million ($356 million) in funding for AI use at the border, such as migrant profiling, has gone to private companies.
Using AI lie detectors at borders effectively creates new immigration policy through technology, labeling everyone a suspect, says Petra Molnar, associate director of the nonprofit Refugee Law Lab. According to her:
“You have to prove that you are a refugee, and you are presumed a liar unless you prove otherwise,” she says. “This logic underpins everything. It underpins AI lie detectors, and it underpins more surveillance and pushback at borders.”
Molnar, who is an immigration attorney, says people often avoid eye contact with border or migration officials for innocuous reasons — such as culture, religion or trauma — but this is sometimes misinterpreted as a sign that a person is hiding something. Humans often struggle with intercultural communication or talking to people who have suffered trauma, she says, so why would people believe a machine can do better?
The first draft of the AI Act, released in April 2021, listed social credit scores and real-time use of facial recognition in public places as technologies that would be banned outright. It labeled emotion recognition and AI lie detectors for law enforcement or border enforcement as high risk, meaning deployments would have to be listed in a public registry. Molnar says that doesn’t go far enough and that the technology should be added to the banned list.
Dragoș Tudorache, one of two rapporteurs appointed by the European Parliament to lead the amendment process, said lawmakers introduced amendments this month, and he expects a vote by the end of 2022. In April, Parliament’s rapporteurs recommended adding predictive policing to the list of banned technologies, saying it “violates the presumption of innocence as well as human dignity,” but did not suggest adding AI border polygraphs. They also recommended categorizing as high risk systems that triage patients in healthcare or decide whether people receive health or life insurance.
As the European Parliament proceeds with the amendment process, the Council of the European Union will also consider amendments to the AI Act. There, officials from countries such as the Netherlands and France have argued for a national security exemption to the AI Act, according to documents obtained through a freedom-of-information request by the European Center for Not-for-Profit Law.
Vanja Skoric, the organization’s program director, says a national security exemption would create a loophole through which AI systems that jeopardize human rights, such as AI-driven polygraphs, could slip into the hands of police or border agencies.
Final steps to pass or reject the law could take place late next year. Before members of the European Parliament presented their amendments on June 1, Tudorache told WIRED in an interview:
“If we get amendments by the thousands, as some people anticipate, the work to actually produce some compromise among thousands of amendments will be huge.”
He now says around 3,300 amendments to the AI Act have been proposed but thinks the legislative process could be completed by mid-2023.
Concerns that data-driven predictions can be discriminatory are not just theoretical. An algorithm deployed by the Dutch tax authority between 2013 and 2020 to detect possible child benefit fraud was found to have harmed tens of thousands of people and led to more than 1,000 children being placed in foster care. The flawed system used signals such as whether a person held a second nationality to flag cases for investigation, and it had a disproportionate impact on immigrants.
The Dutch benefits scandal could have been prevented or lessened if Dutch authorities had produced an impact assessment for the system, as the AI Act proposes, which could have raised red flags, says Skoric. She argues that the law must include a clear explanation of why a model earns a given label, as when the rapporteurs moved predictive policing from the high-risk category to a recommended ban.
Alexandru Circiumaru, European public policy lead at the Ada Lovelace Institute, a UK-based independent research and human rights group, agrees, saying the AI Act needs to better explain the methodology by which a type of AI system is moved between the high-risk and banned categories. He asks: “Why are these systems included in these categories now, and why weren’t they included before? What is the test?”
More clarity on these issues is also needed to prevent the AI Act from stifling potentially empowering algorithms, says Sennay Ghebreab, founder and director of the Civic AI Lab at the University of Amsterdam. Profiling can be punitive, as in the Dutch benefits scandal, and he supports a ban on predictive policing.
However, other algorithms can be useful, for example to help resettle refugees by matching people with locations based on their backgrounds and skills. A 2018 study published in Science calculated that a machine-learning algorithm could expand employment opportunities for refugees by more than 40% in the United States and more than 70% in Switzerland, at low cost.
“I don’t believe we can build perfect systems,” Ghebreab says. “But I believe we can continually improve AI systems by looking at what went wrong and getting feedback from people and communities.”
Many of the thousands of suggested changes to the AI Act will not make it into the final version of the law. But Refugee Law Lab’s Petra Molnar, who has suggested nearly two dozen changes, including banning systems like iBorderCtrl, says it is an important moment to clarify which forms of AI should be banned and which deserve special care. According to the lawyer:
“This is a really important opportunity to think about how we want our world to be, how we want our societies to be, what it really means to practice human rights in reality, not just on paper. It’s about what we owe each other, what kind of world we’re building and who’s been excluded from these conversations.”
This article is a translation of a piece written by Khari Johnson for WIRED.
The post Limits on the use of AI generate debate in Europe (and the rest of the world) appeared first on ADNEWS.