Ours is an era of new possibilities, but also new dangers. The fourth industrial revolution means the potential for human rights violations is now greater than ever before. And nowhere are the pros and cons of society’s digital transformation more vivid than in the fierce debate on artificial intelligence regulation that’s rocking the European Union.
When IRL doesn’t quite translate
AI will alter the world radically: it’s so omnipresent that we’re often oblivious to it. Anything that touches our lives so dramatically needs to be regulated. As tech is being designed, the rules that govern it are being written. In the EU, a founding principle guiding rule writing is that nothing should be permitted in the digital world that’s not permitted in the real world. Although the principle itself is uncontroversial, in practice laws designed for the physical world cannot simply be translated into the digital one.
It’s a balancing act
Regulators must ask themselves two questions. First, does the new technology violate any fundamental human rights? Second, does the regulatory framework hamper Europe’s potential to innovate? There is a genuine risk of over-regulation, which would undermine innovation and competitiveness, and the whole continent would lose out in the long term. Yet under-regulation would endanger consumer rights. Clearly, a balance must be struck.
The back of the pack
A complicated conundrum, yet regulators must act with haste. It has become apparent that whoever leads the world in the field of AI will lead the world more generally. New technologies will ramp up productivity and lock in national security. This is a modern-day space race that all countries want to win.
Sadly, the EU is flailing. Of the 200 most important digital companies in the world, only eight come from Europe. China and the US are investing far more heavily in research and development. In fairness, the EU has indeed made plans for major investments and has a smattering of ongoing programmes. But these efforts are both insufficient and lacking the necessary involvement of all member states.
Crying out for a framework
The European approach to regulating AI systems is based on assessing the risk each system poses to fundamental rights.
The EU’s approach is rooted in the fact that not every application requires the same intensity of legal treatment, because real-world impact varies by application. Put simply, gamers won’t suffer (much) if something goes wrong in a video game’s algorithm. But if a doctor treats someone incorrectly because of an algorithm, that’s a big deal. Products intended for use in such high-risk sectors will have to comply with stricter rules.
A grim warning
The AI framework we end up with must not allow the construction of a social scoring system nor the implementation of mass surveillance. It must not endanger the democratic order or manipulate the most vulnerable. Unfortunately, such things are becoming a reality in some parts of the world. China’s social credit system powered by AI solutions is one example.
The regulating authorities must also address all criticisms and listen to the businesses impacted by new rules. The business community is most concerned about the ambiguities of some requirements, such as what exactly counts as adequate data quality, or how technical documentation for high-risk systems must be maintained. More flexible regulation for start-ups is also a priority for the private sector. It is now especially important to listen to those who will have to apply these rules on a daily basis.
Watch this space
It’s obvious that modern technologies have the potential to make the world a better place to live.
However, they also have the potential to produce the exact opposite, which would leave us in a real dystopia. Europe has a history as a global standards setter – and it’s a reputation that we need to live up to.
Karlo Ressler MEP is Vice-Chair of the Special Committee on Artificial Intelligence in a Digital Age (AIDA) in the European Parliament.