Generative AI and the Risk of Knowledge Leakage

by Andrew Cardno

We live in a world of knowledge. Until generative AI arrived, that knowledge mostly stayed where it was born; it lived in people’s minds and was communicated through email threads, Word documents, spreadsheets, and PowerPoint decks. That human layer of knowledge is where a tribal resort’s strategy is debated, exceptions are explained, and decisions are justified. It contains the instincts, context, and reasons behind the data.

The data sitting underneath an operation has always mattered, but much of it reflects the outcomes of that human knowledge. Organizations measure performance, revenue, labor, and countless key performance indicators, but without the lived understanding of how a tribal resort business runs, the data itself is like an island isolated from meaning.

For a long time, computers could mainly consume numbers: trends, revenue, counts, timings, and performance metrics. That supported decision-making, but the most valuable asset – narrative knowledge – remained locked inside human communication. Generative AI changes that. Now machines can read the narratives in human knowledge and connect them at speed.

When the Inbox Becomes a Training Set

Once organizations accept that generative systems can digest unstructured text, leaders have to rethink what counts as sensitive.

Imagine an agent with broad access to communications: email threads, attachments, meeting follow-ups, negotiations, incident discussions, HR issues, vendor disputes, marketing strategy, internal debates, and the reasoning behind decisions. That agent doesn’t just “see” a tribal resort. It learns it.

Over time, it gains something close to institutional understanding – how a team thinks, what it prioritizes, how it negotiates, what it tolerates, and where it compromises. Attachments matter because they often contain forecasts, budgets, pricing, and internal models. Email isn’t just communication; it’s institutional memory.

The hardest part is visibility. If a generative system extracts patterns from content, leaders may not be able to detect it. This doesn’t look like theft where files disappear or a breach where alarms fire. It can look like nothing at all, while the essential “how” of a business is distilled somewhere else.

Who Gets to Build Models on a Tribal Property’s Human Knowledge?

This is where the conversation shifts. Generative tools are real productivity multipliers, but the underlying economics reward access to high-quality text.

Model builders benefit enormously from rich written material, including books, articles, manuals, and the endless output of modern work. They gather massive datasets, curate and refine them, and compete on the breadth, depth, and quality of what their models learn.

So, organizations face an uncomfortable question: when a cloud-based assistant is used to draft an email or summarize a document, can they trust that the content won’t be used, directly or indirectly, to improve someone else’s model?

Maybe a vendor promises it won’t. Maybe the product page says “privacy-first,” but governance can’t be built on assurances alone. Incentives matter, especially as competitive pressure rises and AI investment grows. If a company is losing ground and sitting on high-value text, the temptation to broaden usage, whether quietly, gradually, or through changing terms, can be strong. And even if misuse did occur, proving it after the fact would be a struggle.

Copyright Battles Are a Warning Sign, Not a Distraction

Organizations don’t need to follow every lawsuit to read the signal: the current legal climate shows that boundaries are still being fought over and the rules are not fully settled.

When major organizations argue in court over whether copyrighted content can be used for training, it highlights two realities leaders should care about:

• Training data is extremely valuable.

• The acceptable use of content is being contested in real time.

If some companies are willing to fight to normalize broad ingestion of copyrighted work, it’s fair to ask whether internal organizational content could be treated with similar “it’s transformative” logic over time, especially when that content flows through consumer tools or poorly governed environments.

Organizations Can’t Stop Their People From Using AI

Many leaders want a clean policy: “We do not use generative AI.” In practice, that’s hard to enforce and often counterproductive. These tools compress timelines, polish writing, and reduce friction across roles. Even when they are prohibited, employees can still use them: they can paste text into a personal app, take a photo of a complicated email, or use a free tool that feels harmless because it’s convenient.

The real question isn’t whether AI enters an organization; it’s how it enters, where it touches institutional knowledge, and whether leaders control the boundaries.

The Five-Year Fork: Leakage or Sovereignty

If leaders fast-forward five years, they face a fork in the road.

Path one: gradual knowledge leakage. Human operating knowledge slowly seeps into external platforms through everyday workflow: drafts, summaries, copilots, and embedded features. This isn’t a dramatic breach; it’s a steady drip. In this future, an organization’s playbook becomes easier to imitate because it becomes machine-readable and portable.

Path two: generative data sovereignty. Organizations still embrace productivity gains, but they do it on their terms. They adopt tools and infrastructure that keep sensitive knowledge inside controlled environments. They get the multiplier without surrendering the blueprint.

For tribal gaming enterprises, this isn’t just an IT preference. Their knowledge is strategic. It reflects governance, decision rights, compliance posture, vendor leverage, and operational identity. Protecting it is part of protecting what makes an enterprise distinct.

What Can Be Done Now: Practical Steps That Reduce Risk

Leaders don’t have to solve everything at once. They can start with the highest-velocity channels where sensitive knowledge moves every day.

• Use privacy-first communications for sensitive text/voice. Move candid coordination to end-to-end encrypted tools. If a messaging app offers AI writing help, assume it can read what users type.

• Treat “free” AI as a high-risk lane. If a tool is free, assume data is part of the price – don’t feed it sensitive material.

• Plan for policy drift. Terms and features change. Build controls and governance that don’t rely solely on trust.

The Strategic Answer: Bring AI Closer to Home

There is a reason leaders are hearing more discussion about “on-prem” generative AI. It’s not just a tech trend. It’s a sovereignty trend.

When generative tools run inside an organization’s environment, the equation changes. Teams can still summarize, draft, analyze, and automate, but they don’t have to funnel sensitive communications into a third party’s ecosystem. For tribal gaming enterprises, that matters because their knowledge is not generic. It’s competitive differentiation.

The strongest organizations in the next five years won’t be the ones that avoided AI. They’ll be the ones that adopted it aggressively while protecting their institutional knowledge.

The Real Risk Isn’t Data Theft – It’s Knowledge Replication

Organizations spend years, possibly decades, building an enterprise that knows how to operate, including how to market responsibly, how to respond to changing patron behavior, how to manage risk, how to navigate compliance, how to negotiate with vendors, how to hire and train, and how to lead.

That “how” is the advantage. Generative AI makes the “how” readable. It makes the “how” transferable. It makes the “how” potentially replicable.

Adopting generative AI is the only realistic option, but leaders do have to acknowledge the shift: emails and documents are no longer just “content.” They are an operational blueprint.

Generative AI can be one of the most important technology multipliers an organization ever adopts. The question is whether a tribal resort adopts it in a way that strengthens the business or quietly dissolves what makes it unique.

Andrew Cardno is Co-Founder and Chief Technology Officer of Quick Custom Intelligence (QCI). He can be reached by calling (858) 299-5715 or by emailing [email protected].