Almost every enterprise software contract signed before 2023 has the same problem: it wasn't written for a world where the vendor is using AI to process your data, generate outputs on your behalf, or make decisions that affect your customers. The standard data processing agreements, liability clauses, and SLA structures that legal teams rely on were designed for a different technology environment. They are not protecting you against the risks that actually matter now.
The Four AI Risk Gaps in Standard Vendor Contracts
1. Training data provisions
Many AI vendors' default terms permit the use of customer data to improve or train their models. This isn't always clearly disclosed. Your DPA may prohibit the vendor from sharing your data with third parties while simultaneously permitting internal use for model training. For organizations subject to GDPR, CCPA, or sector-specific regulations, this creates a compliance exposure that standard privacy review won't catch.
What to look for: Does the vendor's DPA explicitly prohibit use of your data, including input prompts, outputs, and usage logs, for model training purposes? If the answer isn't a clear yes, assume no.
2. AI-generated output liability
When an AI system embedded in a vendor's platform produces an incorrect analysis, a discriminatory decision, or a harmful recommendation, who is liable? Standard contracts assign liability based on software defects and service failures. AI outputs don't fit cleanly into either category. They're not bugs. They're probabilistic outputs that can be wrong in ways that harm real people.
Most vendor contracts are silent on this. Organizations deploying AI in customer-facing or decision-making contexts need explicit contractual language that addresses who bears liability when AI outputs cause harm.
3. Subprocessor AI components
Your vendors use vendors. When your CRM vendor integrates an AI summarization tool from a third-party AI provider, that third party's data handling practices become your exposure, even if you've never heard of them. Standard subprocessor disclosure requirements don't always capture AI components, particularly when they're integrated as features rather than standalone services.
The question to ask every vendor with AI features: "What AI providers process our data, under what terms, and what is your contractual right to change those providers without notice?"
4. Explainability and audit rights
Regulatory frameworks including the EU AI Act, NYDFS guidance, and emerging US state AI laws are beginning to require that organizations using AI in consequential decisions be able to explain those decisions. If you're using a vendor's AI to make or inform decisions about customers, employees, or counterparties, you need the ability to audit how those decisions were made.
Standard contracts provide audit rights for data access logs and security practices. They rarely provide the right to inspect model behavior, decision logic, or the ability to produce audit trails for AI-assisted decisions.
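Where a vendor does expose model metadata, an internal audit trail for AI-assisted decisions can start as a structured record written at decision time. A minimal sketch, with illustrative field names that are not drawn from any regulation or vendor API; hashing the inputs keeps PII out of the trail while still letting you prove what the model saw:

```python
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class AIDecisionRecord:
    """One audit-trail entry for an AI-assisted decision."""
    decision_id: str
    vendor: str
    model_version: str            # as reported by the vendor, if exposed
    input_digest: str             # SHA-256 of inputs, so PII need not be stored
    output_summary: str
    human_reviewer: Optional[str] # who reviewed or overrode the output, if anyone
    timestamp: str


def record_decision(decision_id: str, vendor: str, model_version: str,
                    inputs: dict, output_summary: str,
                    human_reviewer: Optional[str] = None) -> AIDecisionRecord:
    # Canonicalize inputs before hashing so identical inputs produce
    # identical digests regardless of dict key order.
    digest = hashlib.sha256(
        json.dumps(inputs, sort_keys=True).encode()
    ).hexdigest()
    return AIDecisionRecord(
        decision_id=decision_id,
        vendor=vendor,
        model_version=model_version,
        input_digest=digest,
        output_summary=output_summary,
        human_reviewer=human_reviewer,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
```

The point of the sketch is contractual, not technical: you can only populate `model_version` if the contract obligates the vendor to report it.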
The Vendor Assessment Questions You Need to Ask
- Does your AI use any of our data for model training, fine-tuning, or evaluation? Under what terms, and can we opt out?
- What third-party AI providers do you use, and what data do they have access to?
- Can you provide a data flow diagram that includes all AI components?
- What is your incident response procedure for AI-related failures that affect our customers?
- What accuracy and bias testing do you perform on your AI models, and can you share those results?
- How do we produce an audit trail for AI-assisted decisions for regulatory purposes?
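Teams running these questions across a vendor portfolio sometimes encode the questionnaire as data so unanswered items stay visible. A hedged sketch; the question keys and the "None means unanswered" convention are assumptions, not a standard:

```python
# Each key maps to one of the assessment questions above; None = unanswered.
ASSESSMENT_QUESTIONS = [
    "trains_on_our_data",
    "third_party_ai_providers",
    "data_flow_diagram_available",
    "ai_incident_response_procedure",
    "bias_testing_results_shared",
    "audit_trail_supported",
]


def open_gaps(answers: dict) -> list:
    """Return the questions a vendor has not answered (or answered None)."""
    return [q for q in ASSESSMENT_QUESTIONS if answers.get(q) is None]
```

An empty `answers` dict returns the full list, which is exactly the state most vendor files are in before the first assessment call.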
Contract Language to Add Before Signing
For any vendor deploying AI in a material capacity, your legal team should push for:
AI subprocessor notification: Obligation to notify you before adding any AI provider as a subprocessor, with a 30-day opt-out window.
Output disclaimer and liability allocation: Clear language on who is responsible when AI-generated outputs are incorrect or harmful.
Explainability commitment: Obligation to provide decision audit trails sufficient for regulatory compliance in your applicable jurisdictions.
AI incident response SLA: Separate SLA for incidents involving AI system failures, including rollback capability and notification timelines.
Not every vendor will accept all of these. Tier your requirements to risk: the vendor providing your core AI-powered customer decisioning platform gets the full treatment. The vendor providing AI-assisted meeting notes gets a lighter review. The risk framework determines the negotiation investment.
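The tiering logic above can be sketched as a lookup from risk tier to required clauses. The tier names and clause labels here are illustrative, not a published framework; your own risk assessment draws the real cut lines:

```python
# Illustrative mapping from vendor risk tier to the clauses worth
# spending negotiation leverage on. Labels are hypothetical.
REQUIRED_CLAUSES = {
    "high": [   # e.g. core AI-powered customer decisioning platform
        "ai_subprocessor_notification",
        "output_liability_allocation",
        "explainability_commitment",
        "ai_incident_response_sla",
    ],
    "medium": [
        "ai_subprocessor_notification",
        "output_liability_allocation",
    ],
    "low": [    # e.g. AI-assisted meeting notes
        "ai_subprocessor_notification",
    ],
}


def missing_clauses(tier: str, signed_clauses: set) -> list:
    """Clauses the negotiation still needs to win for a vendor at this tier."""
    return [c for c in REQUIRED_CLAUSES[tier] if c not in signed_clauses]
```

The design choice that matters is the asymmetry: a high-tier vendor with three of four clauses is still a gap; a low-tier vendor with one clause is done. The function makes that visible instead of leaving it to memory.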
Third-party AI risk is the next wave of vendor risk management. The organizations building these frameworks now will be ready for it. The ones that aren't will discover the gap when an auditor or regulator asks for something their contracts don't provide.