Inaccurate and Opaque GenAI: The Hidden Risks for Businesses

Source: Computerworld

The Inaccuracy and Opacity of GenAI for Business Use

Large language models (LLMs), the algorithmic platforms on which generative AI (genAI) tools like ChatGPT are built, have been found to be highly inaccurate when connected to corporate databases. Moreover, these models are becoming increasingly opaque, making it difficult for businesses to rely on them for accurate and transparent results. Two studies, including one conducted by Stanford University, shed light on the challenges posed by LLMs in the context of business use.

The Growing Difficulty in Tracking Data Genesis

The Stanford University study revealed that as LLMs continue to ingest massive amounts of information and grow in size, the origin of the data they use is becoming harder to track down. This lack of transparency poses a significant challenge for businesses that need to ensure the safety and reliability of the applications built on commercial genAI foundation models. Additionally, it hampers the ability of academics to rely on these models for research purposes.

Implications for Businesses and Consumers

The inaccuracy and opacity of LLMs have far-reaching implications. From a business perspective, the lack of transparency makes it difficult for regulators to pose the right questions and take appropriate action. This poses a risk to consumer protection, as it becomes harder for consumers to understand the limitations of these models or seek redress for any harms caused. Furthermore, the Stanford study highlights the parallels between the lack of transparency in commercial foundation models and the vast ecosystem of mis- and disinformation on social media, which has led to consumer protection concerns.

Concerns from Executives and Security Risks

A separate survey conducted by cybersecurity and anti-virus provider Kaspersky Lab found that almost all senior executives (95%) believe genAI tools are regularly used by employees, with more than half (53%) stating that it is now driving certain business departments. However, the same survey revealed that 59% of executives express deep concerns about genAI-related security risks that could jeopardize sensitive company information and lead to a loss of control over core business functions.

The Problem with LLMs: Inaccuracy and Lack of Context

LLMs, including GPT-4, have faced criticism over their accuracy since release. A study by data.world, a data cataloging platform provider, tested LLMs connected to SQL databases and found they answered basic business queries accurately only 22% of the time; on intermediate and expert-level queries, accuracy fell to 0%. The study suggests that LLMs lack the internal business context that accurate answers require.
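To make the kind of measurement concrete, here is a minimal sketch of how text-to-SQL accuracy can be scored: a model-generated query counts as correct only if it returns the same result set as a hand-written gold query. This is an illustration in the spirit of the study, not its actual methodology; the table, queries, and function names are invented for this example, and the "model" queries are hard-coded stand-ins for real LLM output.

```python
import sqlite3

def results_match(conn, gold_sql, candidate_sql):
    """A candidate query is correct only if its rows match the gold query's rows."""
    gold = conn.execute(gold_sql).fetchall()
    try:
        candidate = conn.execute(candidate_sql).fetchall()
    except sqlite3.Error:
        return False  # malformed SQL counts as a miss
    return sorted(gold) == sorted(candidate)

def accuracy(conn, cases):
    """Fraction of (gold, candidate) query pairs whose result sets agree."""
    return sum(results_match(conn, g, c) for g, c in cases) / len(cases)

# Toy corporate table for the benchmark
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "EU", 100.0), (2, "US", 250.0), (3, "EU", 50.0)])

cases = [
    # basic query: the (hypothetical) model's SQL agrees with the gold query
    ("SELECT SUM(amount) FROM orders WHERE region = 'EU'",
     "SELECT SUM(amount) FROM orders WHERE region = 'EU'"),
    # harder query: the model aggregates the wrong thing (a count, not a sum)
    ("SELECT region, SUM(amount) FROM orders GROUP BY region",
     "SELECT region, COUNT(*) FROM orders GROUP BY region"),
]
print(accuracy(conn, cases))  # 0.5 on this toy pair
```

Result-set comparison like this is deliberately strict: a query that is syntactically valid but semantically wrong, the failure mode the study highlights, scores zero.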

The Need for Transparency and Accuracy

Transparency is crucial for public accountability, scientific innovation, and effective governance of digital technologies. The Stanford study evaluated 10 LLMs and found that the mean transparency score was just 37%. Without transparency, regulators cannot pose the right questions or take appropriate action. To address the accuracy issues, companies need to invest in strong data foundations, such as Knowledge Graph representation of their SQL databases, to increase the accuracy of LLM-powered question-answering systems.
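As one way to picture what a "Knowledge Graph representation of a SQL database" can mean in practice, the sketch below flattens a table's rows into (subject, predicate, object) triples, which could then be supplied to an LLM as explicit business context. This is a simplified illustration under my own assumptions, not data.world's implementation; the table, column, and function names are invented for the example.

```python
import sqlite3

def table_to_triples(conn, table, key_column):
    """Turn each row of a table into (subject, predicate, object) triples,
    using the key column to mint a subject identifier per row."""
    cols = [c[1] for c in conn.execute(f"PRAGMA table_info({table})")]
    triples = []
    for row in conn.execute(f"SELECT * FROM {table}"):
        record = dict(zip(cols, row))
        subject = f"{table}/{record[key_column]}"
        for col, value in record.items():
            if col != key_column:
                triples.append((subject, col, value))
    return triples

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, segment TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Acme', 'enterprise')")
for t in table_to_triples(conn, "customers", "id"):
    print(t)  # e.g. ('customers/1', 'name', 'Acme')
```

The point of the graph form is that relationships the schema leaves implicit become explicit, named edges, which is the kind of internal business context the studies found LLMs lacking.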

Overall, the inaccuracy and opacity of LLMs pose significant challenges for businesses relying on genAI tools for various applications. The lack of transparency inhibits regulators, academics, and consumers from fully understanding the limitations and potential harms of these models. Addressing these issues is crucial to ensure the safe and reliable use of genAI in the business world.

The Impact of Inaccurate and Opaque GenAI for Business Use

The inaccuracy and opacity of large language models (LLMs) in the context of business use have significant effects on various stakeholders. These effects range from challenges in decision-making and trust erosion to potential security risks and limitations in leveraging genAI tools for business optimization.

Challenges in Decision-Making and Innovation

The inaccuracy of LLMs connected to corporate databases makes it harder for businesses to make informed decisions. Given the low accuracy rates found in the studies, relying on these models to answer business queries can distort decision-making, and wrong numbers may end up in front of the board or regulatory bodies, with severe consequences. Without reliable information, businesses struggle to develop effective strategies, and innovation suffers.

Erosion of Trust and Consumer Protection

The opacity of LLMs and the lack of transparency in their functioning erode trust among consumers and stakeholders. When businesses cannot fully understand the limitations and potential harms of these models, it becomes challenging to ensure consumer protection. Consumers may not be aware of the model’s limitations or be able to seek redress for any harms caused. This lack of transparency creates a barrier to building trust between businesses and their customers, which can have long-term negative effects on brand reputation and customer loyalty.

Security Risks and Loss of Control

The use of genAI tools across business departments raises security concerns: as the Kaspersky survey shows, executives worry that genAI-related breaches could expose sensitive company information and cause a loss of control over core business functions. The lack of transparency and accuracy in LLMs compounds this exposure, leaving businesses more vulnerable to data breaches and less able to protect proprietary information.

Limitations in Leveraging Data for Optimization

The inaccuracy of LLMs connected to SQL databases limits the ability of businesses to leverage their data effectively. When LLMs cannot provide accurate responses to business queries, it hampers the optimization of key performance indicators, metrics, and strategic planning. Businesses invest significant resources in cloud data warehouses, business intelligence tools, and data systems to leverage data for better decision-making. However, the limitations of LLMs in providing accurate and context-specific responses hinder the potential value that can be derived from these investments.

The Need for Transparency, Accuracy, and Strong Data Foundations

To mitigate the effects of inaccurate and opaque genAI for business use, there is a need for increased transparency, accuracy, and strong data foundations. Regulators need transparency to pose the right questions and take appropriate action. Businesses need accurate and transparent genAI tools to make informed decisions and build trust with consumers. Investing in strong data foundations, such as Knowledge Graph representation of SQL databases, can increase the accuracy of LLM-powered question-answering systems and improve the reliability of genAI tools for business optimization.

Overall, the impact of the inaccuracy and opacity of genAI for business use is far-reaching. It affects decision-making, trust, security, and the ability to leverage data effectively. Addressing these effects requires a concerted effort to improve transparency, accuracy, and data foundations in the development and use of genAI tools.
