With GenAI tools such as ChatGPT, Gemini and Perplexity becoming household names, and business adoption increasing, how can organisations – and their employees – ensure that GenAI is safe to use in the workplace?
Generative AI is revolutionising the workforce by augmenting how we work rather than replacing the human role, says Nicola Flannery, digital trust and transparency lead at Deloitte Ireland. “This technology is transforming job functions and creating new roles, requiring a reskilling of 40 per cent of the global workforce. While 300 million jobs globally are impacted, the emergence of 97 million new roles is anticipated.
“The success of integrating GenAI hinges on a renewed focus on human-centric approaches, ensuring that technology complements rather than competes with human capabilities.”
Generative AI, if done properly, can have a significant impact on the workforce, agrees David O’Sullivan, director, privacy, digital trust and AI governance, at Forvis Mazars. “Many are using GenAI for various aspects of their role and at varying levels. Some roles are more appropriate for GenAI than others, but it can have its place nearly everywhere. We are seeing it across multiple sectors, including the public sector, healthcare, financial services and others.”
O’Sullivan says a multitude of use cases is emerging, the most successful being those that are specific and designed to solve a particular problem.
Adoption of GenAI is increasing year on year; however, many organisations still have either no policy at all or a restrictive one that does not allow for the use of GenAI, says O’Sullivan. “The cat is out of the bag in many ways, so despite policy or lack thereof, many of the workforce are using GenAI anyway, known as ‘shadow AI’.”
In Ireland, 73 per cent of adults are aware of GenAI, with almost half having used it, according to the most recent Deloitte Digital Consumer Trends report, Flannery says. “However, usage remains sporadic, with 46 per cent using it less than monthly due to concerns and issues.”

Despite this, two-thirds of those using GenAI for work report a productivity boost, although only a quarter of companies actively encourage its use, she points out.
Businesses are increasingly recognising the potential of GenAI, yet many lack formal policies to guide its integration, says Flannery. While 24 per cent of companies encourage its use, there is a need for strategic planning to harness its benefits fully. Business leaders are urged to develop AI policies, redesign roles to incorporate a human/AI mix, and identify new skills required.
“Effective change management, including training and support for innovation, is crucial for smooth integration. Transparency and communication are key to addressing concerns about job displacement, as 60 per cent of respondents worry about AI reducing job availability.”
Many businesses are still early in their AI journey and are only starting to think of the different ways they can use it as they try to understand the risks, how to manage them and what governance is needed, says O’Sullivan. “As AI agents become more common, it is possible we will see a change in how workflows are designed as organisations try to leverage the technology in more detail.”
The benefits depend on the use case, but there are clear advantages in many areas, including research, data analysis, creating new content, reviewing content, and even getting started with building new strategies, he adds. “For individuals, proper use of GenAI could free up time to focus on more value-adding work that is more fulfilling and meaningful.”
Key risks include a lack of AI literacy or fluency, leading either to over-caution in the use of AI or to misuse and over-reliance without understanding the risks, cautions Flannery. “A lack of data governance, including data integrity, can also lead to misinformation or hallucinations, as well as data privacy and security risks. Other risks can stem from bias in training data, lack of transparency and ethical and legal challenges.”
The key to implementing GenAI safely is to carry out a risk-benefit analysis and to have a transparent AI strategy hinged on the value that can be realised from GenAI.
“Clear usage policies are important, [as is] ensuring a trust-by-design approach is taken to any AI development or vendor policies,” says Flannery. “Other important considerations are investment in upskilling and training, strong data governance and controls, sandboxes for safe experimentation, use cases and proofs of concept that build in human-centricity and transparency, and risk analyses which take bias, ethics, data privacy and security into account.”
A starting point is to examine the principles of trustworthy AI from the European Commission that the Irish Government adopted in its guidelines on the use of AI in the public service, says O’Sullivan. “Making sure there is a strong policy, good governance and effective training are all key to safely using GenAI in business. If done properly, this will allow accountability for decision making and use of AI, compliance with various laws and alignment with corporate strategy.
“Since February this year, AI literacy training has been a requirement under the EU AI Act, but despite compliance requirements, good training is very beneficial to ensuring that organisations maximise their investment while managing risk.”