The release of ChatGPT last November transformed public awareness, perception, and discourse about Artificial Intelligence (AI). Yet AI has long existed in now-familiar technologies, devices, and processes. Perhaps one of the most common uses of AI is the Google search engine: search engines rely on AI to scan the internet and return relevant results within seconds. Other examples of AI in daily life include voice assistants such as Siri and Alexa, image recognition for face-unlock features in smart devices, and Google Maps.
If AI is already part of our everyday experience, why is there such fervent alarm over ChatGPT? One possible reason is that the general public now has access to a tool powerful enough to create endless possibilities. Another is the fear that tools like ChatGPT could upend society as we know it. And to some extent, they will: there are already instances of threat actors using the tool to maximize the impact of cyberattacks.
Defining AI
There is a misconception about what AI is. It is easy to assume that a chatbot like ChatGPT is an intricate creation from a cauldron of science and sorcery, waiting to be conjured by a curious user. While it appears to be magical, AI is developed through a combination of interdisciplinary fields such as computer science, neuroscience, psychology, and linguistics. It is the ability of computer systems to perform tasks that would normally require human intelligence, such as recognizing patterns, learning from experience, and making decisions.
AI development is based on the idea that human thought can be modeled and simulated by a machine. Some of the key technologies and techniques used in AI include machine learning, natural language processing, neural networks, robotics, and expert systems. According to a report by PwC, AI can be categorized into four types, depending on the level of human involvement: assisted, automated, augmented, and autonomous intelligence.
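To make the machine-learning piece concrete, here is a minimal sketch of a system that "learns from experience": a classifier fitted to labeled examples and then asked to make decisions about data it has never seen. The dataset and model are illustrative choices only, not anything prescribed by the PwC report.

```python
# A minimal sketch of "learning from experience": a classifier
# trained on labeled examples, then asked to decide on new input.
# Dataset and model choice are illustrative, not prescriptive.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)  # the "experience"
print(f"Accuracy on unseen data: {model.score(X_test, y_test):.2f}")  # the "decisions"
```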
Concerns about AI Bias
AI is revolutionizing many aspects of our lives, from healthcare and transportation to education, business operations, finance, media, and energy. At the same time, there is growing concern that AI bias could adversely disrupt society. Documented instances of AI bias include racially biased mortgage algorithms, gender-biased recruiting tools, racial disparities in facial recognition and in clinical predictions, and the suppression of language deemed undesirable.
Back in 2018, Gartner predicted that 85% of AI projects would deliver erroneous outcomes due to bias in data. According to DataRobot’s State of AI Bias Report, 36% of organizations surveyed suffered losses due to AI bias, including lost revenue (62%), lost customers (61%), and lost employees (43%). And despite efforts to reduce AI bias through testing, the report found that 77% of organizations discovered bias even after testing.
The focus has been on a machine’s ability to sense, think, learn, and respond to external signals. To address the issue of bias, however, the emphasis must shift to how AI is trained to simulate human intelligence. AI bias exists because of human bias: humans decide which data is selected to train models, which patterns those models are tuned to find, and how the models themselves are structured. The output a model produces reflects those human decisions and assumptions.
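As a concrete illustration, consider the minimal sketch below. It trains a model on synthetic "historical" hiring data in which one group was hired less often at the same skill level; the model faithfully learns and reproduces that disparity. Every name and number here is hypothetical.

```python
# A minimal sketch of how skewed training data yields biased
# decisions. The "hiring" data is synthetic and hypothetical;
# a real audit would use fairness toolkits and domain review.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
group = rng.integers(0, 2, n)   # 0 or 1: a protected attribute
skill = rng.normal(0, 1, n)     # true qualification signal

# Historical labels encode human bias: group 1 was hired less
# often at the same skill level, and the model trains on that.
hired = (skill + rng.normal(0, 0.5, n) - 0.8 * group) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The trained model reproduces the historical disparity.
for g in (0, 1):
    rate = model.predict(np.column_stack([skill, np.full(n, g)])).mean()
    print(f"Predicted hire rate for group {g}: {rate:.2f}")
```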
“We will not be able to stop AI from training on different data,” says Dr. Dimitry Mihaylov, AI Scientist and Associate Professor with the National University of Singapore. “It is more important to label different AI engines based on data they were trained in. AI trained on 18+ content can’t be used in applications for children.”
AI and ESG Implications
The rapid advancement of AI has raised important questions about its impact on how businesses align with environmental, social, and governance (ESG) expectations. At its core, ESG addresses how organizations implement policies that affect environmental and social issues. AI bias leads to social and economic inequalities, undermining organizations’ commitments to equity, diversity, and inclusion, as well as to racial and gender justice.
Modern environmental sustainability efforts must also account for how big data is collected and used to train models. Depending on the assumptions, preferences, and decisions of data collectors, AI bias can cause models to behave and respond in ways that benefit specific interests. Likewise, AI-powered technologies such as autonomous vehicles or drones could contribute to congestion, pollution, and other negative impacts if they are not designed and used responsibly.
With 74% of companies lacking the capabilities necessary to prevent bias in the data used to train AI models, it is critical that appropriate governance structures are in place to keep AI bias out of organizations’ AI programs. There are also regulatory implications: frameworks such as the EU AI Act, the US FTC Act, and China’s rules governing AI development should factor into organizations’ planning, as noncompliance could affect investment decisions and customer trust.
Dr. Mihaylov argues that “limiting and stopping AI is impossible.” While he agrees that “regulations are important,” he concludes that “lawmakers should be very flexible and focus on segmenting AI applications.”
AI’s Potential to Advance ESG Goals
AI can play a significant role in advancing ESG goals through sustainable development. For example, AI-powered technologies such as machine learning and predictive analytics can help organizations optimize their resource use, reduce their carbon footprint, and minimize waste. AI can also be used to track and monitor environmental impacts, such as water usage or air pollution, and to identify potential risks and opportunities related to ESG.
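As one hypothetical illustration of the monitoring use case, the sketch below flags water-usage readings that deviate sharply from the norm using a simple statistical check; a production system would use richer models and real telemetry, but the idea is the same. All data and thresholds here are invented.

```python
# A minimal sketch of AI-assisted environmental monitoring:
# flag water-usage readings that deviate sharply from typical
# behavior. The data is synthetic; the threshold is illustrative.
import numpy as np

rng = np.random.default_rng(7)
usage = rng.normal(100.0, 5.0, 96)          # liters/interval, a normal day
usage[[30, 31, 70]] = [160.0, 155.0, 30.0]  # injected leaks / outage

mean, std = usage.mean(), usage.std()
z_scores = (usage - mean) / std

for i in np.where(np.abs(z_scores) > 3)[0]:
    print(f"Interval {i}: {usage[i]:.0f} L looks anomalous (z={z_scores[i]:.1f})")
```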
AI can be leveraged to support social causes and promote diversity and inclusion. For instance, AI can be used to analyze large datasets and identify patterns of inequality, or to design and implement equitable policies and processes. It can also be used to support education, training, and job placement programs, particularly for disadvantaged or marginalized groups.
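A hypothetical sketch of that kind of analysis: compare an outcome rate across groups in a dataset to surface a disparity worth investigating. The dataframe, column names, and values are invented for illustration.

```python
# A minimal sketch of scanning a dataset for patterns of
# inequality: compare an outcome rate across groups.
# The dataframe and column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,    1,   0,   0,   0,   1,   0,   1],
})

rates = df.groupby("group")["approved"].mean()
print(rates)
print(f"Disparity (max - min approval rate): {rates.max() - rates.min():.2f}")
```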
An ESG-driven AI Strategy
AI presents businesses with enormous opportunities for growth. One report estimates that AI could contribute up to $15.7 trillion to the global economy by 2030. One way for businesses to benefit from this growth is to incorporate ESG into their AI strategies. An ESG-driven strategy will help organizations develop and implement AI-powered solutions that are both effective and socially and environmentally responsible. There is also an opportunity for AI developers to engage with ESG experts to ensure that AI bias is effectively reduced and that AI training and development align with ESG goals and values.
AI is still evolving, and its potential and its pitfalls are both enormous. Creating responsible AI solutions will require collaboration among businesses, governments, and AI developers. Incorporating ESG values into AI development will not only benefit businesses; it will also benefit society by creating a more sustainable and equitable future for all.
About the Author:
Funso Richard is an Information Security Officer at a healthcare company and a GRC Thought Leader. He writes on business risk, cybersecurity strategy, and governance.
Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor and do not necessarily reflect those of Tripwire, Inc.