Everyone is talking about AI, but not everyone understands what AI is. AI, or artificial intelligence, refers to technology that enables machines to perform tasks that traditionally required human thinking—things like reading, writing, analyzing information, and making decisions.

A particularly powerful form of AI today is the large language model (or LLM): a type of AI trained on vast amounts of text, which allows it to understand and generate human language and makes it the engine behind popular tools like chatbots and AI writing assistants. Think ChatGPT or Google Gemini.

Despite how sophisticated it may seem, a large language model is not a mysterious, all-knowing brain—it is, at its core, a very powerful next-word prediction engine. When you type a question into an AI chatbot, the model is essentially doing one thing over and over: looking at the words that came before and calculating which word (or “token”) is most likely to come next, based on patterns it learned from enormous amounts of text. Think of it as a highly advanced version of the autocomplete feature on your phone, just operating at a scale and sophistication that can produce remarkably human-like responses.
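To make that concrete, here is a deliberately tiny sketch of the prediction loop in Python. The probability table is a hand-made stand-in we invented for illustration; a real LLM scores every token in a vocabulary of tens of thousands using billions of learned parameters, but the loop itself (look at the context, score the candidates, append the winner) is the same idea.

```python
# A toy next-word predictor: the same loop an LLM runs, at microscopic scale.
# The probability table below is invented for illustration; a real model
# learns these scores from enormous amounts of text.

NEXT_WORD_PROBS = {
    "the":      {"contract": 0.40, "company": 0.35, "model": 0.25},
    "contract": {"is": 0.60, "terminates": 0.40},
    "is":       {"binding": 0.70, "signed": 0.30},
}

def generate(prompt: str, steps: int = 3) -> str:
    words = prompt.lower().split()
    for _ in range(steps):
        candidates = NEXT_WORD_PROBS.get(words[-1])
        if not candidates:
            break  # our toy table has no prediction for this word
        # Append the most probable next word (real chatbots usually sample
        # from the distribution instead of always taking the top choice).
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate("The"))  # -> "the contract is binding"
```

That is the whole trick: the model never "knows" anything in the human sense. It predicts, one token at a time, and at sufficient scale the predictions are good enough to read as understanding.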

Now that we have a sense of what AI is and how it works, the natural next question for any business owner or executive is: so, what does this mean for my business? Three takeaways for you:

(1) Establish a clear and realistic internal AI policy—one that acknowledges employees are already using AI and channels that use safely and productively, rather than pretending it is not happening.

(2) Choose your AI vendors carefully, with particular attention to how they handle your data during the relationship and upon termination: who owns it, whether it is used to train their models, and what rights you retain. Also, make sure those answers are locked into a contract specifically designed for AI, not a recycled IT agreement.

(3) Document your human oversight. Keep records showing that real people are reviewing, approving, and monitoring AI-driven decisions, because that paper trail may be your strongest shield against liability if something ever goes wrong. AI is not going away, and neither is the legal responsibility that comes with using it.

1) You Have to Have an Internal AI Policy, or Your AI Policy is “Do Whatever You Want.”

      If your company has no AI policy, then your AI policy is effectively this: “Use it however you want.” And if your policy is simply to ban AI altogether, here is the hard truth: your employees are already using it; they just are not telling you. Ignoring AI does not make the risks go away. It just means those risks are invisible to you. The smarter approach is to establish a clear internal AI policy that sets the ground rules for responsible use: defining which AI tools are approved, what types of information may (and may not) be entered into those tools, who is accountable for AI-generated outputs, and how employees should flag errors or concerns.[1]

      Questions to Ask When Creating Your Internal AI Policy:

      • What business functions or tasks are employees currently using AI for—officially or unofficially?
      • Which AI tools and platforms are approved for use, and who decides what gets added to the list?
      • What types of data or information are employees allowed (or not allowed) to input into AI tools?
      • Who is responsible for reviewing, validating, and approving AI-generated outputs before they are used or shared?
      • How should employees report mistakes, errors, or unexpected results from AI tools?
      • What training or resources do employees need to use AI responsibly and effectively?
      • How will we monitor compliance with our AI policy and update it as technology evolves?
      • What happens to employees who violate the policy?

      A well-crafted AI policy does not stifle productivity but unlocks it, giving employees the confidence to use AI effectively while protecting the company from the legal, security, and reputational risks that come with unguided use.

      2) Think Twice Before Using Public AI Tools Like ChatGPT for Work—and Choose Your Vendor Carefully

      Having a strong internal policy is only the first step. Equally important is understanding how and where your employees are using AI in their day-to-day work, especially when it comes to public, off-the-shelf tools. Free, publicly available AI tools are tempting. They are powerful, easy to use, and cost nothing. But for business use, they come with a serious hidden cost: your data. When you type confidential business information, client details, or proprietary strategies into a public AI model, that information may be used to train the model further, potentially exposing it to other users down the line. From a legal standpoint, this could mean that your trade secrets are no longer “secret”—and once that protection is lost, it is difficult to get back. Worse, if your team is working on an invention and shares those details with a public AI tool before a patent application is filed, you may have just created a “public disclosure.” This means the information is no longer truly private, and a competitor could potentially access it and use it to challenge or preempt your patent rights entirely.

      In a recent article, we noted that a federal court in the Southern District of New York issued a landmark ruling in United States v. Heppner, holding that a client’s conversations with a publicly available, non-enterprise generative AI platform are not protected by attorney-client privilege or the work product doctrine. The court reasoned that sharing privileged information with a public AI system is equivalent to sharing it with a third party—meaning privilege is waived—and that AI platforms are not confidential, as their terms of service permit data review and third-party disclosure.

      This is why avoiding publicly available AI tools and exercising great care in selecting an AI vendor matter. Not all AI products are created equal when it comes to protecting your data and intellectual property. When evaluating an AI vendor, your company should be asking:

      • Does this vendor use my data to train their models?
      • Who owns the outputs the AI generates?
      • What happens to my data if I end the relationship?

      These are not just technical questions; they are contract terms. Standard IT agreements were not designed with AI in mind and often fall dangerously short.

      Key Issues to Address in Your AI Vendor Contracts:

      • Training Data Usage: Specify how your data can (and cannot) be used for training the vendor’s AI models.
      • Output Ownership: Clarify who owns the output generated by the AI: your company, the vendor, or someone else.
      • Performance Standards: Set clear expectations for the reliability, accuracy, and availability of the AI system.
      • Liability and Risk: Define who is responsible—and what remedies are available—if the AI system fails or causes harm.
      • Data Rights: Be wary of broad definitions of “usage data” or “service data” that grant the vendor excessive rights to your information.
      • Linked Terms: Watch for additional terms incorporated by reference (such as through hyperlinks) and review them carefully.

      Choosing the right AI tools and vendors is about more than just features and price. It is about protecting your company’s data, intellectual property, and legal interests. Overlooking these issues can expose your business to significant risks that standard contracts and policies may not cover. As we will explore in the next section, understanding and addressing AI liability is just as critical as selecting the right technology itself.

      3) You Cannot Blame the Robot: AI Liability and Why Human Oversight Matters

      The legal community in the U.S. is still figuring out exactly who should be held responsible when AI causes harm[2]—but one thing is already clear: you cannot simply point to an AI and say “the AI did that, not me” to escape liability. As a company, if you deploy AI and something goes wrong, the responsibility lands on you. This is especially true for so-called “agentic” AI—sophisticated AI systems that can take actions, make decisions, and even execute tasks on your behalf with little to no human involvement. Think of an AI that autonomously drafts and sends contracts, makes purchasing decisions, or manages customer interactions without anyone reviewing what it is doing. The less human oversight there is, the greater the legal exposure when things go sideways.

      What Can Your Company Do? Key Questions to Consider:

      • Are there human checkpoints in place?
        For any high-stakes decision or action, is a real person reviewing and approving what the AI is doing before it happens?
      • Who is allowed to use AI, and for what tasks?
        Have you set clear rules around which employees can use which AI tools, and for which specific business functions?
      • Do you have escalation paths?
        Is there a process for employees to escalate issues or unexpected outputs produced by AI systems?
      • Are you keeping proper records?
        Are you maintaining logs, alerts, and documentation of how your AI systems are being used? Could you demonstrate to a court or regulator that your company took “reasonable care”? (For a concrete picture, see the sketch after this list.)
      • Is there an incident response plan?
        Do you have a procedure in place for when things go wrong—one that allows you to reconstruct what happened, preserve evidence, and respond quickly?
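      To make the record-keeping point concrete, below is a minimal sketch, in Python, of two of the controls above: a human checkpoint that gates a high-stakes AI action, and an audit record of each review. Every name, field, and value here is our own invented illustration, not a legal or regulatory format; what your records must actually show is a question for your counsel and compliance team.

```python
# A minimal, hypothetical sketch of a human checkpoint plus audit trail.
# All names (file, fields, reviewer) are invented for illustration only.

import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit_log.jsonl"  # hypothetical append-only log file

def human_checkpoint(action: str, ai_output: str, reviewer: str) -> bool:
    """Require explicit human sign-off before an AI-proposed action proceeds,
    and record the decision either way."""
    print(f"AI proposes: {action}\n--- Draft output ---\n{ai_output}\n")
    answer = input(f"{reviewer}, approve this action? (yes/no): ")
    approved = answer.strip().lower() == "yes"
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "output_summary": ai_output[:200],  # keep the log entry short
        "human_reviewer": reviewer,         # the accountable person, by name
        "approved": approved,               # explicit sign-off, never implied
    }
    with open(AUDIT_LOG, "a") as f:         # the paper trail
        f.write(json.dumps(record) + "\n")
    return approved

# Example: the AI drafted a refund letter; a named manager must approve it
# before anything is sent, and the decision is logged either way.
if human_checkpoint(
    action="Send customer refund letter",
    ai_output="Dear customer, we are issuing a full refund of $125.00 ...",
    reviewer="J. Doe, Customer Ops Manager",
):
    print("Approved: proceed with sending.")
else:
    print("Rejected: action blocked, decision logged.")
```

      Even a record this simple captures the kinds of elements that may matter if you ever need to demonstrate reasonable care: what the AI proposed, who reviewed it, when, and what they decided.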

      Conclusion

      Artificial intelligence is rapidly reshaping the business landscape, but its benefits come with real responsibilities. By setting clear internal policies, choosing your AI vendors wisely, and ensuring strong human oversight, your company can unlock AI’s potential while minimizing risk. Thoughtful, proactive management, not avoidance or blind adoption, will set your organization up for success in this new era.


      [1] See Megan Pekarske, But Wait: Things to Consider Before Adopting AI Tools In Your Hospice, Husch Blackwell News & Insights (Apr. 23, 2025), https://www.huschblackwell.com/newsandinsights/but-wait-things-to-consider-before-adopting-ai-tools-in-your-hospice.

      [2] See Heidi Salow & Shannon Kapadia, California Legislature Advances Sweeping AI Bill: Implications for Businesses and Developers of “Companion Chatbots”, Byte Back (Sept. 19, 2025), https://www.bytebacklaw.com/2025/09/california-legislature-advances-sweeping-ai-bill-implications-for-businesses-and-developers-of-companion-chatbots/; Kimberly Chew, Hilda Akopyan & Nick Morgan, Emerging Legal Challenges: Artificial Intelligence and Product Liability, Product Perspective: Complex Tort & Product Law (Oct. 20, 2025), https://www.productlawperspective.com/2025/10/emerging-legal-challenges-artificial-intelligence-and-product-liability/; Kimberly Chew, K. Snyder & C. Pert, How Physicians Might Get in Trouble Using AI (or Not Using AI), Mo. Med., May-June 2025, at 169, PMID: 40747395; PMCID: PMC12309835.