Beyond the Hype: Real-World Custom Implementations of Generative AI

By Justin “Hutch” Hutchens | Trace3 Innovation Principal

The Value of Custom-Tailored Solutions

It has been over a year since OpenAI released ChatGPT, sparking renewed global interest in artificial intelligence (AI). Despite the time that has passed, many organizations are still grappling with what role generative AI (GenAI) should play within their operations. Anyone who has used this technology knows how powerful and impressive it can be. Yet despite these capabilities, many organizations have struggled to operationalize GenAI.

In most cases, the initial step in GenAI adoption has been enabling generic third-party implementations – new interfaces and feature sets bolted onto existing solutions. While these implementations can improve operational efficiency within those specific products, they only scratch the surface of what is possible. To unlock the true potential of GenAI, organizations must consider use cases tailored to their unique business operations. Many organizations face bottlenecks and inefficiencies that are uniquely their own, and these challenges do not have simple, turn-key solutions. Simply purchasing a product and pressing “GO” will not resolve most of them. Using GenAI effectively against these problems requires creativity, thoughtfulness, and a deep understanding of both the business processes involved and the capabilities and limitations of GenAI.


Innovation-GPT: A Real-World Case Study

The Trace3 Innovation Team recently embarked on its own journey to identify ways that GenAI could be leveraged to streamline and optimize its business processes, specifically regarding the research and interaction with data on emerging technology solutions that the team is tracking. In this write-up, we will provide an overview of the end-to-end process the team used to build these capabilities and discuss the lessons learned along the way. Additionally, we will explore how organizations can leverage a similar approach to identify opportunities to streamline their operations using GenAI, how to design and architect custom solutions related to those opportunities, and important factors to consider throughout the process.

The solution, dubbed Innovation-GPT, leverages large language models (LLMs) and a custom retrieval-augmented generation (RAG) architecture to streamline two distinct business processes: research, and knowledge management and retrieval.

1. Research

When new funding events occur, the Trace3 Innovation Team manually researches the company and its solution by searching the web (the company’s website, press releases, and other third-party perspectives). To streamline this process, the team created a solution that crawls and scrapes the websites of newly funded enterprise technology companies, compiles the data into robust company profiles, aggregates those profiles into structured records (JSON) organized by category, generates verbose metadata annotations for each record, vectorizes the data, and stores it for subsequent retrieval.


Figure 1. Custom automated research solution using generative AI technology
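The pipeline in Figure 1 can be sketched at a high level as follows. This is a minimal, hedged illustration: the scraper, category assignment, metadata annotation, and embedding function are all hypothetical stand-ins for components that would, in practice, involve live crawling and calls to an LLM or embedding model.

```python
import json
import hashlib

def scrape_site(url: str) -> str:
    """Placeholder scraper: a real version would crawl the site and extract page text."""
    return "Acme Corp builds an AI-powered observability platform for cloud apps."

def build_profile(url: str, raw_text: str) -> dict:
    """Compile scraped text into a structured, JSON-ready company record."""
    return {
        "url": url,
        "category": "observability",  # in practice, assigned by an LLM
        "description": raw_text,
        # verbose metadata annotation; in practice, generated by an LLM
        "metadata": "AI-powered observability; cloud; monitoring",
    }

def embed(text: str, dims: int = 8) -> list[float]:
    """Toy deterministic 'embedding' via hashing; a real pipeline would
    call an embedding model instead."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:dims]]

def run_pipeline(url: str) -> dict:
    raw = scrape_site(url)
    profile = build_profile(url, raw)
    record = {"profile": profile, "vector": embed(profile["description"])}
    # In practice, the record would be upserted into a vector store here.
    return record

record = run_pipeline("https://example.com")
print(json.dumps(record["profile"], indent=2))
```

The key design point is that every stage emits structured output (a JSON record plus its vector), so the downstream retrieval layer never has to parse free-form scraped text.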


2. Knowledge Management / Retrieval

The number of companies and solutions the team tracks at any given moment amounts to an overwhelming volume of information. Keeping track of it all and recalling it effectively is particularly challenging: the innovation team must frequently sift through notes and documents to find solutions that fit specific use cases. To address this challenge, the data generated by the automated research process is integrated into a custom RAG architecture that allows real-time interaction with that data through a natural-language “chatbot” interface. The team can ask questions about companies, solutions, or use cases and get relevant answers in return.


Figure 2. Custom RAG architecture for interacting with enterprise technology solution data
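The retrieval step at the heart of Figure 2 can be sketched as follows. The toy vectors, record contents, and prompt wording are illustrative assumptions; a real system would embed the query and the stored records with the same embedding model and pass the assembled prompt to an LLM.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec: list[float], store: list[dict], k: int = 2) -> list[dict]:
    """Return the k stored records most similar to the query vector."""
    ranked = sorted(store, key=lambda r: cosine(query_vec, r["vector"]), reverse=True)
    return ranked[:k]

def build_prompt(question: str, records: list[dict]) -> str:
    """Ground the LLM's answer in the retrieved records only."""
    context = "\n".join(r["text"] for r in records)
    return (
        "Answer using only the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\n"
        f"Question: {question}"
    )

# Toy vector store; real vectors would come from an embedding model.
store = [
    {"text": "Acme Corp: AI observability platform.", "vector": [0.9, 0.1, 0.0]},
    {"text": "Beta Inc: data-loss-prevention tooling.", "vector": [0.1, 0.9, 0.0]},
]
hits = retrieve([0.8, 0.2, 0.1], store, k=1)
prompt = build_prompt("Who offers observability tooling?", hits)
```

Note that the grounding instruction in `build_prompt` is itself a risk control: it constrains the model to the retrieved data rather than its general training distribution.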

Now that we have described the solution we developed, we want to reflect on the journey of how we got here and some of the lessons learned along the way.


Identifying Opportunities for Process Enhancement

Identifying appropriate use cases for GenAI optimization requires two key elements: a thorough understanding of your business processes and a clear grasp of GenAI’s capabilities and limitations. The practice of identifying good candidates for process enhancement involves cross-referencing this understanding of GenAI capabilities with the tasks your team currently performs manually.


Understanding your Business Processes

First, you need to understand how operations are conducted within your business or organization. This can sometimes be accomplished by a knowledgeable insider who is familiar with all of the business processes, but to get an outside perspective, data flow mapping is particularly useful. Data flow mapping involves documenting the lifecycle of information as it relates to a particular business process. Data flow diagrams, the output of data flow mapping, visually illustrate how data is created, collected, validated, stored, secured, managed, enriched, shared, accessed, used, deprecated, and destroyed through each step of a given process. By understanding how data is managed throughout your processes, and recognizing the types and complexities of that data, you can establish an effective baseline for identifying possible GenAI integrations.
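As a simple illustration, a data flow map can itself be captured as structured data and walked stage by stage. The stages, actions, and systems below are hypothetical examples loosely modeled on a research process like ours, not a prescribed schema:

```python
# Illustrative data-flow map for a research process, expressed as
# structured data. Stage names, actions, and systems are hypothetical.
data_flow = [
    {"stage": "created",    "action": "funding event detected",      "system": "news feed"},
    {"stage": "collected",  "action": "company website scraped",     "system": "crawler"},
    {"stage": "validated",  "action": "analyst spot-checks profile", "system": "manual review"},
    {"stage": "stored",     "action": "JSON record vectorized",      "system": "vector store"},
    {"stage": "accessed",   "action": "queried via chatbot",         "system": "RAG interface"},
    {"stage": "deprecated", "action": "stale profiles archived",     "system": "retention job"},
]

# Walking the map stage by stage surfaces candidate steps for GenAI:
# manual, unstructured-data stages (e.g., "validated") stand out.
for step in data_flow:
    print(f'{step["stage"]:>10}: {step["action"]} ({step["system"]})')
```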


Limitations and Capabilities of GenAI

Another important consideration when determining a good candidate for custom GenAI implementations is evaluating whether you are using the correct tool for the job. While GenAI can be profoundly useful when applied to appropriate use cases, there are situations where simple automation or classic data analytics would be more suitable. The table below examines the types of use cases that are most appropriate for each, and their associated characteristics:

[Table: use-case characteristics for simple automation, classic data analytics, and generative AI]

Adopting GenAI for a particular use case involves a trade-off between generalization and predictability. GenAI models excel at generalization – they are adaptive and can handle highly variable or unexpected input values. However, this adaptability comes at the cost of predictability, as it becomes challenging to anticipate all possible outcomes and the model’s behavior in response to unexpected inputs. This trade-off can be worthwhile if the risks are appropriately managed and if the complexity of the task cannot be optimized using simpler methods.

The use of GenAI (with its associated loss of predictability) is only appropriate when the input data is highly variable and unstructured, or when the decision-making process requires complex and highly variable rationale. In such cases, simple automation and classic data analytics are insufficient to address the problem, making the additional risk of using GenAI a reasonable consideration. However, applying GenAI to use cases that could be effectively solved with automation or data analytics incurs additional cost without added benefit.
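The decision criteria above can be condensed into a rough rule of thumb. The boolean inputs and the three-way split below are our own simplification, not a formal methodology: prefer the simplest tool class whose assumptions the use case satisfies.

```python
def suggest_tool(unstructured_input: bool, complex_rationale: bool,
                 needs_statistical_insight: bool = False) -> str:
    """Return the simplest tool class that fits the use case.

    Rough heuristic only: GenAI is reserved for cases where simpler
    methods cannot handle the input or the decision logic.
    """
    if unstructured_input or complex_rationale:
        return "generative AI"  # the predictability trade-off is justified
    if needs_statistical_insight:
        return "classic data analytics"
    return "simple automation"

# Structured input, fixed rules: no need to pay the GenAI risk premium.
choice = suggest_tool(unstructured_input=False, complex_rationale=False)
# Unstructured input or variable rationale: GenAI becomes reasonable.
genai_choice = suggest_tool(unstructured_input=True, complex_rationale=True)
```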


Cross-referencing the Two

When we examined the processes of our own Trace3 Innovation Team, the most obvious candidate for improvement was the acquisition and management of the company profiles we track. In monitoring the broad market of enterprise technology solutions, the team routinely deals with highly variable, unstructured data (detailed descriptions of companies and their solutions) and consistently fields complex, variable questions about this market from both our customers and internal business units.


Managing the Risk

As previously mentioned, the use of GenAI is a trade-off, offering increased generalization at the cost of decreased predictability. GenAI generates output based on probability, not on truth or accuracy. While probability can often serve as a useful proxy for truth, the two are not necessarily aligned, so GenAI can sometimes produce results that are inaccurate or unreliable. In any GenAI implementation, these risks must be considered and appropriately managed. It is possible to address these risks while still reaping the benefits by leveraging a governance framework such as the NIST AI Risk Management Framework (AI RMF), which provides high-level guidance on how to map, measure, and manage risk related to AI model implementations.

Two specific controls our team applied to minimize the risk of problematic output were in-context learning and a human-in-the-loop approach. For the former, we provided the model with examples of expected and desirable output directly in the prompt (a technique referred to as multi-shot in-context learning). Additionally, we instructed the model to disclose when requested information was not included in its available dataset. This technique can effectively reduce the likelihood of hallucinations (a failure mode where LLMs confidently return false or inaccurate information).
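As a rough sketch, the two prompt-side controls described above might be assembled like this. The message format follows the common chat-message convention used by most LLM APIs, and the example question/answer pairs and refusal wording are hypothetical:

```python
# Illustrative prompt assembly: multi-shot in-context examples plus an
# instruction to admit missing information. All example content is hypothetical.

SYSTEM_INSTRUCTION = (
    "You answer questions about tracked enterprise technology companies. "
    "If the requested information is not in the provided data, say exactly: "
    "'That information is not in my dataset.' Do not guess."
)

# Multi-shot examples demonstrating both the desired output format and
# the desired refusal behavior when data is missing.
EXAMPLES = [
    ("What does Acme Corp do?",
     "Acme Corp builds an AI observability platform (Series B, 2024)."),
    ("What is Acme Corp's revenue?",
     "That information is not in my dataset."),
]

def build_messages(question: str) -> list[dict]:
    """Assemble a chat-message list: system instruction, then the
    multi-shot examples as alternating user/assistant turns, then the
    real question."""
    messages = [{"role": "system", "content": SYSTEM_INSTRUCTION}]
    for q, a in EXAMPLES:
        messages.append({"role": "user", "content": q})
        messages.append({"role": "assistant", "content": a})
    messages.append({"role": "user", "content": question})
    return messages

msgs = build_messages("What does Beta Inc do?")
```

Including a worked refusal among the examples tends to matter as much as the instruction itself: the model sees what admitting ignorance is supposed to look like.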

Most importantly, all output generated by the model is treated only as a starting point for the team. Every output is thoroughly fact-checked and supplemented with manual research and analysis to ensure the team consistently arrives at the best possible conclusions. As an industry, we are still in the early stages of GenAI. With proper risk management, this technology can be a force multiplier; without it, GenAI can create more problems than it is worth. It is critical for technology professionals to thoroughly understand the potential challenges and implications of any use case they consider. For most use cases, leveraging GenAI to augment human operations is a better risk-adjusted approach than fully offloading responsibility to AI. The day may come when critical processes can be confidently handed over to GenAI systems without human oversight, but we are certainly not there yet.


Final Reflection

The journey of integrating GenAI into business processes can be both challenging and rewarding. At Trace3, we have learned that the most impactful implementations often involve customizing solutions to meet the unique needs of each organization. By understanding specific pain points and leveraging the strengths of GenAI, businesses can achieve significant improvements in efficiency and innovation. However, it is crucial to approach this technology with a careful balance of enthusiasm and caution, ensuring robust risk management practices are in place. As we continue to explore the possibilities of GenAI, we invite other organizations to embark on this transformative journey with us, guided by thoughtful strategy and a commitment to continuous learning and adaptation. Together, we can harness the power of GenAI to drive meaningful change and unlock new opportunities for growth and success.

If you're interested in further exploring the possibilities of GenAI implementation in your organization, feel free to reach out to us at innovation@trace3.com.

Justin “Hutch” Hutchens is an Innovation Principal at Trace3 and a leading voice in cybersecurity, risk management, and artificial intelligence. He is the author of “The Language of Deception: Weaponizing Next Generation AI,” a book focused on the adversarial risks of emerging AI technology. He is also a co-host of The Cyber Cognition Podcast, a show that explores the frontier of technological advancement and seeks to understand how cutting-edge technologies will transform our world. Hutch is a veteran of the United States Air Force, holds a Master’s degree in information systems, and routinely speaks at seminars, universities, and major global technology conferences.