As artificial intelligence applications become more advanced, lawmakers worldwide are grappling with the possibility of unintended consequences: not just potential existential danger to humanity but also the more immediate risks of job losses, discrimination and copyright infringement.
The European Union, representing 450 million citizens across Europe, is a frontrunner in this regulatory race. Last Friday member nations signed on to the AI Act, which had been agreed upon last December by the European Council—a group of E.U. leaders that shapes the union’s political agenda—and the European Parliament. The act is expected to become law this year and would impose sweeping limits on companies whose AI tools are used in Europe, potentially restricting how these tools are developed and used across the globe. Since the act’s announcement, though, its text has changed because of internal political wrangling and lobbying, according to a recently leaked draft. And some experts are still worried about what seems to be left out.
The AI Act is one of many recent pieces of E.U. legislation that tackle tech issues, says Catalina Goanta, an associate professor of private law and technology at Utrecht University in the Netherlands. The act bans emotion-recognition software in workplaces and schools, prohibits racist and discriminatory profiling systems, and sets out a strict ethical framework that companies building AI tools must follow.
To be effective, such regulations have to be applied across industries as a one-size-fits-all solution, Goanta explains—a tall order in the fast-moving tech sector, where new products drop weekly. “The struggle has been in finding a robust balance”…