By Justin "Hutch" Hutchens | Trace3 Innovation Principal
The Trace3 team recently visited Grand Rapids, Michigan, to engage with the hacking and cybersecurity community at GrrCON 2024.
As we navigated the DeVos Place convention center, it became clear something unique was unfolding. In an ironic twist, an Amish conference was being held at the same venue and time. The contrast between these conferences and their attendees was a reminder of the trade-offs between the value technology brings and the risks that come with it. But before we dig into this innovation introspection, let's first set the stage.
On the morning of September 26, 2024, DeVos Place was bustling with people. On one side, you had the GrrCON attendees – some of the biggest tech enthusiasts around. As is typical at hacker events, attendees sported brightly colored hair, mohawks, kilts, and shirts adorned with tech logos. They carried all sorts of electronic devices – circuit board badges with LEDs and interactive interfaces, software-defined radios, large wireless antennas, and of course, their laptop computers and mobile devices. Digital labs were set up in front of vendor booths to showcase cutting-edge cybersecurity technology. There was even an entire line of dystopian electronic pods that people could step inside to interact with one another in a virtual reality mech-robot battleground.
On the other side of the venue, the Amish community gathered, representing the opposite end of the technological spectrum. They were dressed in traditional and conservative clothes. Traditional religious names were on full display on the attendee name tags. Jebediahs, Ezekiels, and Jacobs walked around with unshaven beards, overalls, and suspenders. Ruths, Abigails, and Esthers sat with their children, who remained quiet, respectful, and close by their sides. Their displays included log cabins, traditional stoves, cooking supplies, woodworking kits, and simple mechanical farming devices.
In the shared common area, the two worlds collided. Across tables, technophiles and technophobes sat in clear clusters, occasionally exchanging curious glances. I found myself observing this interaction, people-watching in what became one of the most striking social dynamics I have ever witnessed.
Eventually, I struck up a conversation with two Amish attendees. While they were friendly and non-judgmental, there was a clear disconnect between our two worlds. I attempted to explain our massive $200 billion industry dedicated to solving the problems of cybersecurity, an industry that still struggles constantly to secure our digital lives. And in that moment, I had an interesting revelation – our problems are not their problems. They do not have to worry about their identities being compromised or their private data being exposed. They do not have to think about phishing emails, intrusion detection systems, zero trust architectures, or endpoint detection. These concerns, seemingly ubiquitous for us, are not even a matter of consideration for them. They have achieved peace of mind and personal security without spending a dime – a stark contrast to the endless efforts of the cybersecurity world.
The reason cybersecurity is an ever-growing problem, despite massive investments, is the increasing complexity of the technological world. The law of unintended consequences states that as systems grow more complex, the likelihood of unexpected problems increases. By integrating technology into our lives and businesses, we expose ourselves to new risks.
This contrast between the Amish and our world of technology led me to reflect on the costs of this ever-increasing complexity. I’m not suggesting we all abandon technology, but we must be more mindful of how we approach it. More complexity in systems and organizations always comes at a cost, and we need to strike a balance between complexity and the risks it introduces.
In the era of generative AI, complexity has become fashionable. Many organizations are adopting an “AI First” strategy, often without clear value propositions. AI is becoming the new buzzword, integrated into every process and product, even when the complexity outweighs the benefits. As Oguz Acar wrote in the Harvard Business Review:
“As organizations increasingly prioritize AI above and over everything else, they risk forgetting that technology’s primary purpose is to solve problems. An AI-first approach could rapidly drive AI deployment across business operations, not because it solves “real” organizational or customer problems, but because AI implementation becomes an end in itself. The likely outcome is a lot of AI solutions in search of problems, or worse, solutions that create new problems.”[1]
Generative AI is undoubtedly powerful, but it brings with it unprecedented complexity. Key factors include:
Opaqueness: Large-scale neural networks are black boxes. When something goes wrong, it’s nearly impossible to pinpoint the failure.
Non-Deterministic Models: Unlike traditional models, generative AI outputs vary, even with the same inputs, making systems less predictable (see the sketch following this list).
Data Integration: AI solutions connected to sensitive data, as in Retrieval Augmented Generation (RAG) systems, add new layers of complexity. A common implementation strategy is data vectorization, which exacerbates the black-box problem by transforming complex, often human-understandable data (such as text, documents, and images) into high-dimensional numerical vectors that are not intuitively interpretable.
New Use Cases: Unlike traditional data models, which were used by only a small number of groups within an organization, GenAI is being adopted broadly across all departments and functions, increasing both its diffusion and its complexity.
Agentic Workflows: GenAI implementations that leverage APIs to interact with other systems add further risks as decisions are made by AI agents based on probabilistic outputs.
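To make the non-determinism point concrete, here is a minimal, self-contained sketch in Python using NumPy. The token vocabulary and probabilities are invented purely for illustration and do not come from any real model; the point is that whenever the sampling temperature is above zero, the same "prompt" can yield a different output on every call, which is exactly why agentic decisions built on these outputs become harder to predict.

```python
import numpy as np

# Toy next-token scores for one fixed prompt, standing in for the output
# layer of a generative model. (Values are invented for illustration.)
vocab = ["approve", "deny", "escalate"]
logits = np.array([2.0, 1.5, 0.5])

def sample_next_token(temperature: float, rng: np.random.Generator) -> str:
    """Sample one token; temperature=0 falls back to greedy, deterministic selection."""
    if temperature == 0:
        return vocab[int(np.argmax(logits))]
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())   # softmax with numerical stability
    probs /= probs.sum()
    return vocab[rng.choice(len(vocab), p=probs)]

rng = np.random.default_rng()

# Same "prompt", same weights, five repeated calls:
print([sample_next_token(temperature=1.0, rng=rng) for _ in range(5)])
# e.g. ['approve', 'deny', 'approve', 'escalate', 'approve'] -- varies run to run
print([sample_next_token(temperature=0.0, rng=rng) for _ in range(5)])
# ['approve', 'approve', 'approve', 'approve', 'approve'] -- greedy and repeatable
```

Lowering the temperature reduces the variability, but most real deployments keep it above zero to preserve the creative behavior that makes generative AI useful in the first place – so the unpredictability is a design trade-off, not a bug to be patched out.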
What’s the takeaway? We could eliminate cybersecurity risks by disconnecting from technology entirely, but that would mean losing the benefits it provides. The key is finding the right balance. In system design, this concept is known as elegance – the fusion of simplicity and power. As Azad Madni explains, elegant systems are simple yet powerful, offering predictability and creative functionality.[2] Organizations should prioritize simplicity in their innovation efforts. Generative AI is a remarkable tool, but not every problem requires such a complex solution. Sometimes, simpler models or conditional logic are more effective, as sketched below. As generative AI adoption grows, the focus should be on necessity. When AI is the only viable option, organizations must implement processes to minimize risks through governance and controls, such as the NIST AI Risk Management Framework (AI RMF).
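As a hypothetical illustration of that necessity-first mindset, the sketch below routes a request through plain conditional logic when a deterministic rule can answer it, and only falls back to a generative model when it cannot. The names here (extract_order_number, call_generative_model, the ORD-###### order format) are assumptions made up for this example, not a reference to any particular product or API.

```python
import re
from typing import Optional

ORDER_PATTERN = re.compile(r"\bORD-\d{6}\b")  # assumed order-number format

def extract_order_number(message: str) -> Optional[str]:
    """Deterministic rule: find an order number if one is present."""
    match = ORDER_PATTERN.search(message)
    return match.group(0) if match else None

def handle_request(message: str) -> str:
    # Prefer the simple, predictable path whenever it suffices.
    order_number = extract_order_number(message)
    if order_number:
        return f"Looking up status for {order_number}."

    # Only reach for generative AI when rules genuinely cannot answer.
    return call_generative_model(message)

def call_generative_model(message: str) -> str:
    # Placeholder standing in for a governed LLM integration,
    # subject to the controls discussed above (e.g., the NIST AI RMF).
    return "Routing to the generative assistant for an open-ended response."

print(handle_request("Where is my package? Order ORD-123456."))
print(handle_request("Can you recommend a gift for my colleague?"))
```

The first request never touches a model at all: it is answered by a regular expression that is transparent, testable, and free. Only the second, genuinely open-ended request incurs the complexity and risk of a generative system.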
While generative AI’s allure is undeniable, it’s crucial to weigh its benefits against the risks and complexities it introduces. At Trace3, we advocate for a strategic and measured approach to AI. We help organizations navigate the complexities of AI adoption and leverage the NIST AI RMF to ensure responsible, effective implementation.
[1] Acar, O. A. (2024, March 18). Is your AI-first strategy causing more problems than it’s solving? Harvard Business Review. https://hbr.org/2024/03/is-your-ai-first-strategy-causing-more-problems-than-its-solving