Your employees are using AI right now. Some have permission. Most probably do not.
The research tells a consistent story: workers adopt AI tools faster than companies can write policies about them, and 68% of employees who use ChatGPT at work do so without telling their supervisor. The result: organizations face real liability while pretending the problem does not exist.
The instinct is to ban everything. Lock it down. Problem solved.
Except that approach has never worked with any technology, and it will not work here either. People find workarounds because the tools are too useful to ignore, and shadow IT becomes shadow AI, operating entirely outside your visibility.
So you need a policy. But here is the uncomfortable truth that most guides will not tell you: the policy itself matters far less than whether people will actually follow it. A perfect document that sits ignored in SharePoint protects nothing.
The Policy Extremes That Both Fail
Organizations tend to swing between two poles. Neither one works.
The Total Ban
Samsung famously banned all generative AI tools company-wide after employees accidentally leaked confidential source code by asking ChatGPT to review it. The incident made headlines and sparked corporate panic across industries.
Bans feel safe. They create clear rules and eliminate gray areas that make lawyers nervous. They also push AI usage underground where it becomes invisible and unmanageable.
One Hacker News commenter captured this tension directly: “We are forcing non-use because of compliance. There is a fear that the models will scan and steal our proprietary code.”
That fear is real. But blanket prohibition carries its own risks: employees turn to personal devices and consumer AI tools to work around restrictions, creating data exposure your IT team cannot even see.
The Free-For-All
The opposite extreme is no policy at all. Let people figure it out themselves, trust their judgment, move fast and adapt later.
This approach treats AI like a personal productivity choice rather than an organizational risk. It ignores that employees are making data handling decisions every time they paste information into a prompt, and most have never thought about training data policies or where their inputs might end up.
Some companies have gone further than laissez-faire. Another Hacker News user reported their employer’s mandate: “At least 20% of code must be AI generated with the goal of at least 80% by the end of the year.”
Aggressive adoption targets without corresponding guardrails create pressure to use AI everywhere regardless of appropriateness. When the metric is AI usage rather than output quality, people find ways to hit the number whether or not it makes sense for their specific work.
What Policies Should Actually Cover
Effective AI policies share common elements, but the emphasis varies based on your industry, risk tolerance, and organizational culture. Start with these five areas and customize from there.
Approved Tools and Access
Be specific about which AI tools are sanctioned for company use. Enterprise versions of AI tools often have different data handling terms than consumer versions, and that distinction matters.
The free version of ChatGPT and the enterprise version have fundamentally different privacy guarantees. The free version may use your inputs to train future models. The enterprise version typically does not. Your policy should reflect which versions are acceptable and which are prohibited.
Include a process for requesting new tools. AI capabilities shift rapidly, and your approved list will need updates. Build a lightweight evaluation process rather than forcing people to use outdated tools or route around restrictions.
Data Classification
This is where most policies succeed or fail. You need clear, specific rules about what information can be shared with AI tools and what cannot.
Vague guidelines like “use good judgment with sensitive data” provide no actual guidance. People need concrete categories they can apply without calling legal every time they want to draft an email.
Consider a tiered approach: data that can never be shared with AI regardless of tool (customer PII, credentials, unreleased financials); data that can be shared only with approved enterprise tools (internal documents, general business information); and data that can be shared freely (public information, general knowledge questions).
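Some teams go a step further and encode the tiers in tooling, so the check happens where people actually work rather than in a document they have to remember. A minimal sketch of that idea, with hypothetical category names and tool labels that you would replace with your own:

```python
# Hypothetical policy-as-code sketch of the three tiers described above.
# Category names, tool labels, and the approved-tool list are illustrative only.

NEVER_SHARE = {"customer_pii", "credentials", "unreleased_financials"}
ENTERPRISE_ONLY = {"internal_documents", "general_business_info"}
FREE_TO_SHARE = {"public_information", "general_knowledge"}

APPROVED_ENTERPRISE_TOOLS = {"chatgpt_enterprise", "internal_llm"}  # example list


def sharing_allowed(data_category: str, tool: str) -> bool:
    """Return True if the policy permits sending this data category to this tool."""
    if data_category in NEVER_SHARE:
        return False  # Tier 1: never leaves the company, regardless of tool
    if data_category in ENTERPRISE_ONLY:
        return tool in APPROVED_ENTERPRISE_TOOLS  # Tier 2: approved enterprise tools only
    if data_category in FREE_TO_SHARE:
        return True  # Tier 3: unrestricted
    return False  # Unknown categories default to the most restrictive answer


# Example: an internal chat plugin could run this check before a prompt is sent.
print(sharing_allowed("internal_documents", "chatgpt_free"))        # False
print(sharing_allowed("internal_documents", "chatgpt_enterprise"))  # True
```

The point is not the code; it is that the rules are simple enough to express in a dozen lines, which is a decent test of whether your tiers are simple enough for people to apply.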
The goal is not to eliminate all risk. The goal is to make the risk calculation simple enough that people can follow the rules without excessive friction.
Acceptable Use Cases
What tasks can AI support, and what is off limits?
AI excels at drafting, summarizing, brainstorming, and handling routine text manipulation. It struggles with factual accuracy, nuanced judgment, and anything requiring genuine understanding of your specific business context.
Most policies should prohibit AI from making decisions that affect individual people: hiring, firing, performance reviews, credit decisions. The combination of AI’s tendency toward confident errors and the stakes of these decisions creates unacceptable risk.
Similarly, legal documents, regulatory filings, and any communication where you need to guarantee accuracy should remain a human responsibility. AI can help with research and early drafts, but a person must own and verify the final product.
Quality Standards
Every policy should address the fact that AI makes mistakes. Hallucinations are not bugs that vendors will eventually fix. They are inherent to how large language models work.
Define review requirements based on the stakes of the output. Internal brainstorming notes might need only self-review. Customer-facing communications should require peer or manager review. Anything with legal or compliance implications needs subject matter expert verification.
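If it helps to make those tiers concrete, the same logic can be written as a simple decision rule your team can reference in a publishing checklist. The flag names and review labels below are placeholders, not a prescribed taxonomy:

```python
# Hypothetical helper expressing the review tiers above as a decision rule.
# Flag names and review labels are illustrative, not from any specific framework.

def required_review(is_customer_facing: bool, has_legal_or_compliance_impact: bool) -> str:
    """Return the minimum review an AI-assisted output needs before it ships."""
    if has_legal_or_compliance_impact:
        return "subject matter expert verification"
    if is_customer_facing:
        return "peer or manager review"
    return "self-review"


print(required_review(is_customer_facing=True, has_legal_or_compliance_impact=False))
# peer or manager review
```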
Be explicit that AI-generated statistics, quotes, and citations must be independently verified. AI confidently invents sources that do not exist and statistics that were never published. Anyone who has fact-checked AI output knows this happens constantly.
Disclosure Requirements
When must AI use be acknowledged? This question has no universal answer, but your policy should provide one for your organization.
Some contexts clearly require disclosure: any situation where someone would reasonably expect to be interacting with a human, any content where attribution matters, any regulated communication where disclosure might be legally required.
Other contexts do not necessarily need disclosure: AI used as a drafting tool with heavy human editing, AI used for research or preparation that informs human work, routine productivity use where the final output reflects human judgment.
The line between these categories involves judgment calls your policy should help people navigate.
The Mistakes That Kill Policies
Good intentions produce bad policies all the time. Watch for these failure patterns.
Writing for Lawyers Instead of Users
Policies drafted by legal teams often read like legal documents. They cover every edge case, use precise terminology, and provide comprehensive protection against liability.
They also sit unread because regular employees cannot extract actionable guidance from dense legalese. If your policy requires interpretation by someone with a law degree, it will be ignored by everyone without one.
Write for the person who needs to make a quick decision about whether they can paste something into ChatGPT. Make that decision easy. Save the comprehensive legal framework for the internal documentation your legal team maintains.
Pretending Technology Stands Still
AI capabilities change monthly. Claude, GPT, Gemini, and dozens of other tools release updates constantly. Features that were experimental become standard. New risks emerge while old ones get addressed.
Policies written for ChatGPT in 2023 may not fit the AI landscape in 2026. Build review cycles into your policy and assign ownership to someone who will actually conduct those reviews. Annual is probably too slow. Quarterly may be appropriate for fast-moving organizations.
Skipping Training Entirely
A policy that exists only in documentation is a policy that does not exist. People need training to understand rules, context to understand why, and practice to apply guidelines in real situations.
SHRM research found that 75% of workers expect their roles to shift due to AI in the next five years, but only 45% have received recent upskilling. That gap between expectation and preparation creates confusion, fear, and policy violations born from ignorance rather than malice.
Budget time for training. Make it interactive rather than just distributing a document. Include real scenarios people will actually encounter.
Creating Rules Nobody Enforces
Policies without consequences are suggestions. Decide before you publish how violations will be handled and communicate that clearly.
This does not mean treating every mistake as a firing offense. Proportional responses work better: informal correction for minor violations, formal documentation for repeated issues, serious consequences for intentional data exposure.
What kills policy credibility is inconsistency. If executives ignore rules that apply to everyone else, or if violations are handled differently depending on who commits them, your policy loses all authority.
Ignoring the Reality of Shadow AI
Pretending employees are not using personal AI tools does not make it true. An honest policy acknowledges that reality and gives people sanctioned alternatives they will actually use.
If your approved tools are significantly worse than consumer alternatives, people will use the consumer alternatives. If your approval process takes months, people will work around it. If your restrictions create excessive friction for routine tasks, people will find ways to reduce that friction.
The policy you need is one that makes compliant behavior the path of least resistance, not one that depends on people choosing the harder path because a document told them to.
Keeping Policies Practical
The best AI policies share a quality that is hard to specify but easy to recognize: they feel reasonable. They acknowledge legitimate concerns without creating bureaucratic obstacles that serve no real purpose.
Start with Principles, Then Add Rules
Before writing specific rules, articulate the principles behind them. People can apply principles to novel situations that rules cannot anticipate.
The principle might be: we want our people to benefit from AI while protecting data that could harm customers or the company if exposed. That principle guides decision-making when the rulebook does not have an answer.
Rules then become examples of the principles in action rather than an exhaustive list that implies anything not prohibited is permitted.
Make Compliance Easy
Every friction point in your policy is an opportunity for violation. Each time someone has to stop, think, navigate an approval process, or use a worse tool, you create incentive to work around the rules.
Audit your policy from the user’s perspective. What does it actually take to comply? If compliance requires significant effort, ask whether that effort produces proportional risk reduction. Sometimes the answer is yes. Often it is not.
Build Feedback Loops
Your policy will have gaps and errors. The people living under it know where those problems are. Create ways for them to surface issues without fear.
Regular surveys, anonymous feedback channels, office hours where people can ask questions without judgment: these mechanisms catch problems before they cause incidents and make people feel heard rather than policed.
Accept Imperfection
No policy eliminates all risk. Perfect compliance is not achievable or even necessarily desirable if it comes at the cost of destroying productivity.
The goal is risk reduction to acceptable levels, not risk elimination. Define what acceptable means for your organization and design a policy that achieves it without requiring perfection from everyone all the time.
Starting Without Starting Over
If you have no policy, start small. Cover the highest-risk scenarios first: what data can never be shared, what decisions AI cannot make, what review is required for external communications. You can expand from there.
If you have a policy that is not working, resist the urge to rebuild from scratch. Talk to people about what is failing. Often the problems are fixable without wholesale replacement.
Policy is not a document. It is an ongoing conversation between the organization and the people who work there about how to use powerful tools responsibly. The document just captures where that conversation has landed so far.
The organizations getting AI policy right treat it as a living thing: regularly reviewed, openly discussed, adjusted based on experience. The ones getting it wrong produce a document and forget about it until an incident forces attention.
Your team is already using AI. The question is whether they are doing it safely, and whether you have made safety the easy choice. That is what a good policy accomplishes. Not restriction for its own sake, but guardrails that let people move fast without driving off a cliff.
Which kind of policy do you have?