Artificial intelligence (AI) is a game-changing technology. It is forcefully influencing global security, reshaping international power dynamics and revolutionising the world of work.
Against this backdrop and in a world first, Members of the European Parliament reached a historic consensus earlier this month on comprehensive new legislation designed to govern the development and use of AI technologies.
This marks our commitment to responsible digital leadership and an ethical future for artificial intelligence in Europe.
As one of the leading negotiators working on the regulation, I was proud to see European lawmakers endorse an approach that aims to make the most of the benefits of this emerging technology while also setting up protections against its risks.
I have always said that the bedrock for regulating AI must be a risk-based approach. Future development of the technology needs to be human-centric and responsible.
We cannot allow AI to grow in an unfettered manner. This is why the EU is actively implementing safeguards and establishing boundaries.
The objective of the AI Act is simple: to protect users from possible risks, promote innovation and encourage the uptake of safe, trustworthy AI in the EU.
This will mean that companies developing large language models and generative AI will have to follow new transparency rules in Europe if they wish to continue operating in the 27 Member States.
Chatbots and AI systems capable of creating manipulated narratives and images, such as deepfakes, will have to clearly indicate that their content is AI-generated.
Now to the challenge: implementation and establishing a gold standard for AI regulation while developing a technological advantage for Europe in the field.
While this will be a fine balancing act, it can – and must – be done.
Leading companies in the AI field, including Google, Microsoft and Meta, acknowledge that regulation of AI, perhaps the biggest governance challenge of our time, is necessary.
It is now crucial for European leaders to take a far more active role in promoting the positive productive possibilities for AI, particularly in the areas of education, healthcare and tackling climate change.
We must deepen our relationships with industry, universities and up-and-coming innovators to strengthen our development and research capacity and invest in public digital infrastructure that will stand the test of time.
The AI Act will enable greater innovation, market competition and certainty, which will help the uptake of AI within society. People expect AI to be safe, and regulation can ensure that it is.
Brussels has been the first mover to put guardrails in place for AI; the rest of the world will have to follow.
Deirdre Clune is a Member of the European Parliament for Ireland and is the lead negotiator for the EPP Group on the Artificial Intelligence (AI) Act.