Anyone remotely wired into technology newsfeeds – or any newsfeeds for that matter – will know that AI (artificial intelligence) is the topic of the moment. In the past 18 months alone, we’ve borne witness to the world’s first AI Safety Summit, a bizarre and highly public leadership drama at one of the world’s top AI companies, and countless prophecies of doom. And yet, even after all that, it seems businesses have largely failed to take meaningful action on AI.
In early May 2024, ISACA, a leading IT governance association, released new research revealing the extent of the AI problem. The crux of the issue is that most businesses use AI and are worried about it, but too few are actually doing anything to manage it. Let’s look at some of the key findings from ISACA’s report so we can better understand what they call “The AI Reality.”
AI Risks
We’ll start with the most concerning findings. According to a poll of 3,270 digital trust professionals:
- 70% of organizations use AI
- 60% use generative AI
- Only 15% have AI policies
- 40% don’t offer any AI training
Untrained or uninformed staff present a huge risk for businesses using AI. Careless AI use can result in serious ethical, financial, reputational, or even legal consequences, especially in the workplace. For example, a staff member using AI unthinkingly could inadvertently:
- Expose sensitive company information
- Reflect existing biases or prejudices in their work
- Breach intellectual property laws
And much more.
It’s astonishing that nearly 18 months after the launch of ChatGPT and the entrance of generative AI tools into the public consciousness, many businesses are either unaware of or actively ignoring the enormous risks associated with AI in the workplace.
Stranger still, 60% of respondents reported being worried or very worried that bad actors will exploit AI, and 81% named disinformation/misinformation as the top risk, yet only 35% said that AI risks are an immediate priority for their organization. It’s unclear what exactly drives this discrepancy, but one might speculate that while digital trust professionals understand AI risks, decision-makers in their organizations don’t.
Similarly, during a period of economic difficulty, many organizations may be overzealous in pursuing AI’s productivity and financial benefits while remaining reluctant to spend money on addressing AI risks. This is obviously an unwise approach, but perhaps an understandable one.
AI Uses
But let’s step away from the doom and gloom and look a little closer at how AI benefits the modern workplace. Somewhat predictably, ISACA’s findings reveal that AI is used for:
- Increasing productivity (35%)
- Automating repetitive tasks (33%)
- Creating written content (33%)
Of course, these are all highly valuable capabilities for the modern workforce. In today’s ultra-competitive, ultra-saturated market, any advantage an organization can gain is essential for business success and continuity, and AI can grant those advantages.
AI’s ability to automate repetitive tasks is a particularly interesting use case. Task automation has always been an almost ubiquitous business goal and one that has changed the world countless times over. The printing press allowed for the mass production of literature and was a critical factor in the birth of Protestantism; in the 18th and 19th centuries, automated machines replaced manual labor and brought about the Industrial Revolution; today, automation processes in smart factories have facilitated production on a scale never before seen.
However, task automation has a dark side: job losses.
AI and the Job Market
ISACA’s respondents were reasonably confident that AI will change the job market: 45% said that many jobs will be eliminated over the next five years, and 80% said that many jobs will be modified. This is a sensible assumption given the impact of technological advancements throughout history:
- The printing press eventually put scribes out of work.
- The Industrial Revolution rendered countless skilled roles obsolete.
- Smart factories drastically changed the skills needed to work in the manufacturing sector.
Considering these examples, why should AI be any different?
Fortunately, however, digital trust professionals are cautiously optimistic about their place in the AI age: 78% said AI will have a neutral or positive impact on their careers. More encouraging still, 85% of those professionals accept that they must increase their AI skills and knowledge within two years. The future is clearly bright for the digital trust sector.
Overall, it’s clear that digital trust professionals are confident they can handle AI's introduction in the workplace, but the wider business world seems woefully underprepared. As advancements in AI come thick and fast and more organizations implement the technology into their business processes, more decision-makers must wake up to the need for improved AI training and policies; it may cost now, but failing to address AI risks will cost a lot more in the future.
Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor and do not necessarily reflect those of Tripwire.