AI contracts

In today's rapidly evolving digital landscape, the promise and allure of generative artificial intelligence (GenAI) have captivated businesses globally. Recent data from the McKinsey Global Institute suggests that over 60% of enterprises have adopted, or at least experimented with, AI to enhance their operational efficiency and customer experience.

The pace of recent technological advances is forcing tech giants and other software providers (from Zoom to OpenAI) to regularly update their terms of service. For example, Google has recently updated its terms and is now offering to indemnify paying users of its generative AI products against claims that either the training data or the output infringes a third party's rights. This underscores the importance of ensuring that existing contractual frameworks remain fit for purpose in the GenAI age.

The procurement of AI technology presents new challenges that generic SaaS, software development and licence contracts are unlikely to cover. If you are procuring a fancy new AI tool, ensuring that your business is adequately protected can be tricky. On one hand, you may be tempted to lean on broad, generic provisions such as requiring the supplier to provide the AI tool “in accordance with good industry practice”. On the other hand, you could seek to impose detailed, specific obligations mandating transparency, explainability and auditability of the supplier’s AI tool and its associated training data.

However, neither approach works particularly well. In an industry that is in a state of flux and lacks recognised benchmarks, what counts as “good industry practice” is likely to be highly debatable, difficult to enforce and, therefore, of little comfort to a customer. At the same time, indiscriminately copying and pasting obligations from the EU’s draft Artificial Intelligence Act (“EU AI Act”) or similar legislative initiatives or guidelines (as we have also seen!) can be premature and fail to capture the nuanced aspects of AI software delivery.

Emerging model clauses

In the absence of an established market practice, various communities and institutions are devising their own AI-specific contractual clauses. An example is the UK’s Society for Computers and Law, which has recently unveiled sample clauses for transactions involving AI systems. While the clauses are short-form and generic, they serve as a useful checklist of issues to think about when considering AI vendor contracts.

In parallel, the European Commission has published an updated version of its EU model contractual AI clauses for piloting in procurements of AI (“EU AI SCCs”). The EU AI SCCs contain provisions specific to AI systems covered by the EU AI Act (which is currently in the final stages of negotiations between the EU lawmakers). The clauses come in two flavours, for high-risk and non-high-risk systems, and are drafted as a schedule that can be easily appended to an agreement. While the EU AI SCCs were originally prepared for public organisations procuring AI systems, given the breadth of the EU AI Act’s scope, they contain learnings for any organisation procuring (or providing) AI systems.

AI, however, presents a unique conundrum: its omni-use nature – where the same technology can be used to identify a malignant tumour or an adversary on the battlefield – demands that any such contractual clauses be highly contextualised and that their operational feasibility be discussed with the relevant teams.

Any short-cuts? Unfortunately not, but here are our 5 tips.

If you are short on time to do a deep dive into the above-mentioned contractual clauses, there are some (relatively) easy ways to help de-risk your contracts with suppliers of AI services. Below, we have compiled a list of five things that you can do right now. These, of course, should be considered in addition to the usual provisions found in software agreements, including (but not limited to!) those relating to software specifications, acceptance, payment or liability.

1. Consider use of GenAI materials and tools

It is difficult not to draw parallels between the use of open-source software and materials generated using GenAI. Many software development agreements include provisions restricting the use of open-source software (“OSS”), as the inclusion of OSS in deliverables produced for a customer may jeopardise the customer’s rights in, and ability to commercially exploit, those deliverables. Similarly, given the difficulty of establishing data provenance when using GenAI tools, the inclusion of GenAI materials in deliverables creates a risk of inadvertently infringing third-party intellectual property rights. Depending on the context, your business may not be prepared to accept this risk.

Another complicating factor is that many jurisdictions around the world require human input for works to be protected by copyright. For that reason, content created by AI tools may not be protected by intellectual property rights (see the recent decision of a US judge in respect of an artwork created by computer scientist Stephen Thaler using his “Creativity Machine”).

While, as a starting position, you might include a restriction on using GenAI tools or incorporating GenAI materials into the deliverables, this should, ideally, trigger a broader conversation with your service provider about the practicalities of the project. That conversation should cover the types of tools being used, the types of data that the parties envisage being input into the relevant AI system, what the output will be used for, and how all of that aligns with your policy on responsible AI use (if you have one). Some of the concerns arising from the use of AI systems may also be addressed through technical measures, such as running the relevant AI models on your own tenant.

2. Bolster confidentiality and IP provisions

It is a known risk that data input into AI tools may be copied, retained and further processed so that the tool can improve its performance and produce better, more tailored results.

Mitigation strategies should include both operational and contractual measures. The former include conducting thorough due diligence on the supplier, understanding the security controls that apply to the tool and limiting the data that is provided to it. The latter could involve adopting a multi-layered approach to intellectual property (for example, by distinguishing between high-value and low-value data and applying different restrictions to each), requesting exclusivity, or restricting the supplier’s key personnel from working for competitors (as working with certain types of data is a valuable skill that the vendor’s personnel could go on to deploy on engagements with your competitors).

3. Factor in future regulatory changes

Traditional outsourcing agreements often contain a clause that deals with regulatory changes, addressing what happens if those changes affect the provision of the services and which party should bear the associated costs. These types of provisions are standard in heavily regulated industries such as finance, but are rarely seen in simpler software development agreements. This may need to change.

The EU AI Act, among other things, includes a list of “high risk” AI systems and makes them subject to strict requirements. As certain AI solutions become restricted or prohibited by regulation, the value of the agreement to each party will change. While it may not yet be possible to transpose all of the requirements of the EU AI Act (as it has yet to be finalised) into contractual terms, the parties should, at the outset, consider the potential impact of future regulatory changes (and who will pick up the cost of any such changes) by including appropriate protections in their AI supplier agreements.

4. Expect the unexpected

The rapid pace of AI developments in the past year mandates an approach that anticipates not only regulatory but also technological change. Traditional outsourcing and SaaS agreements already, to some extent, cater for the evolving nature of technology via provisions such as those relating to updates, upgrades or change control. However, some of the changes in the AI world are likely to be more abrupt and will require a swift response. Building in additional mechanisms that allow you to pause ongoing work and shift resources to another project, easily change project priorities or methodologies, or “circuit break” or roll back an AI solution or system that does not perform as anticipated could prove vital in withstanding future seismic changes in AI.

5. Check your insurance requirements!

Your business’ ability to claim for breach of a supplier’s obligation or under an indemnity will ultimately depend on that supplier’s financial standing. While the AI market is brimming with an increasing number of start-ups and scale-ups offering a variety of novel AI tools, many of these will falter and cease to exist. Therefore, unless you are contracting with one of the established tech giants, checking the insurance provisions in your agreements is not merely a procedural detail but a crucial consideration to ensure viable recourse if things go wrong.

If you need any support, please get in touch and our team can help you navigate these tricky contracts or help prepare an AI addendum.
