Compliance Guide

SRA-compliant AI content workflows for law firms

AI can accelerate your firm's content production. But publishing on behalf of a regulated practice means compliance isn't optional. Here's how to use AI without putting your practising certificate at risk.

Updated February 2026
Written by legal SEO specialists
References SRA Standards and Regulations

Why SRA compliance matters for AI content

Every piece of content published on a law firm’s website is subject to the SRA Standards and Regulations. This applies equally whether the content was written by a senior partner, a marketing assistant, an external agency, or an AI model. The SRA does not distinguish between production methods — it evaluates the output.

This creates a specific challenge with AI-generated content. Large language models like ChatGPT, Claude, and Gemini produce fluent, professional-sounding text. They also produce factual errors, outdated legal references, and occasionally fabricated citations — all with the same confident tone. In a non-regulated context, an inaccurate blog post is embarrassing. In a regulated legal context, it’s a compliance risk that could trigger SRA investigation.

The firms using AI most effectively for content are not the ones publishing raw AI output. They’re the ones that have built review workflows specifically designed to catch the errors AI introduces, while still benefiting from the speed AI provides. This guide describes that workflow.

This is not an argument against using AI. AI-assisted content production is faster, more consistent, and — when properly reviewed — produces results that match or exceed fully manual processes. The argument is for using AI within a compliance framework that protects your firm, your clients, and your practising certificates.

The specific SRA rules that apply

Understanding which SRA provisions apply to AI-generated content is the foundation of a compliant workflow. The relevant rules are scattered across several SRA documents, so we’ve consolidated the key provisions here.

Code of Conduct for Firms

Paragraph 8.9 requires that all publicity material is “not misleading.” This is the broadest and most frequently applied provision. AI content risks breaching this rule when it overstates success rates, implies guaranteed outcomes, misrepresents fee structures, or describes services the firm doesn’t actually offer. AI models tend toward optimistic, generalised language — exactly the kind that raises regulatory flags.

Paragraph 3.1 requires managers to ensure compliance with the SRA’s regulatory arrangements. This means the managing partner or COLP (Compliance Officer for Legal Practice) has oversight responsibility for all published content, including AI-generated material. Delegating content creation to AI doesn’t delegate the compliance obligation.

SRA Transparency Rules

For firms that provide services to the public, the Transparency Rules mandate publication of pricing information for specific service areas: residential conveyancing, probate, immigration applications, motoring offences, employment tribunal claims, and licensing applications for individuals.

AI can draft fee pages based on your current pricing, but the published information must be accurate and current. If your fees change and the AI-drafted page isn’t updated, the firm is in breach — regardless of who or what originally wrote the page.

SRA Principles

Principle 2 — act in a way that upholds public trust and confidence in the solicitors’ profession. Publishing low-quality, inaccurate, or misleading content undermines this trust, whether produced by AI or otherwise.

Principle 7 — act in the best interests of each client. While website content doesn’t constitute client advice, informational pages that provide inaccurate guidance could lead prospective clients to make poor decisions before instructing a solicitor.

These principles don’t prohibit AI content. They require that whatever you publish meets the standard of accuracy and professionalism expected of a regulated firm.

Common compliance pitfalls with AI content

Based on our review of law firm websites across the UK, these are the most frequent SRA compliance issues in AI-assisted content. Every one of them has appeared on actual law firm websites.

Overstated success language

AI models default to promotional language. Phrases like “our solicitors consistently achieve outstanding outcomes” or “we have an exceptional track record in employment tribunals” sound professional but may breach paragraph 8.9 if they cannot be substantiated. The SRA expects claims to be verifiable. If your firm has a 78% success rate in unfair dismissal claims, you can say so — with the caveat that past results don’t guarantee future outcomes. If you don’t track success rates, don’t imply them.

Inaccurate legal information

This is the most dangerous pitfall. AI models sometimes state incorrect limitation periods, misquote fee levels, confuse English and Scottish law, or reference superseded legislation. A page that states “you have six years to bring an employment tribunal claim” (the correct limitation for most claims is three months less one day) could mislead a prospective client into missing their deadline. That’s not just a compliance issue — it’s a genuine harm.

Fabricated citations

AI models occasionally generate references that don’t exist — invented case names, non-existent statutory provisions, or fabricated statistics. On a law firm website, a fabricated citation undermines credibility with both readers and the SRA. Every citation in AI-assisted content must be verified against the primary source.

Implied guarantees

“We will resolve your dispute” or “You will receive compensation” are guarantees that no solicitor can make, and no AI-generated page should contain. These phrases sometimes appear in AI drafts because the model is optimising for persuasive language without understanding regulatory constraints. Review every page for language that implies certainty of outcome.
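Human review of every page is the control that matters, but it can be supplemented by a simple automated pre-check. The Python sketch below flags guarantee-style phrasing in a draft; the pattern list is a hypothetical starter set, not an SRA-approved list, and a match is a prompt for human review rather than a verdict.

```python
import re

# Illustrative only: a starter list of guarantee-style phrases. Any real
# pattern list must be curated and maintained by your compliance reviewer.
GUARANTEE_PATTERNS = [
    r"\bwe will (?:win|resolve|recover)\b",
    r"\byou will (?:receive|get|be awarded)\b",
    r"\bguaranteed?\b",
    r"\bcertain to succeed\b",
]

def flag_guarantee_language(draft: str) -> list[str]:
    """Return the sentences in a draft that contain guarantee-style phrasing."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", draft):
        if any(re.search(p, sentence, re.IGNORECASE) for p in GUARANTEE_PATTERNS):
            flagged.append(sentence.strip())
    return flagged

draft = (
    "Our team advises on unfair dismissal claims. "
    "We will resolve your dispute quickly, and you will receive compensation."
)
for sentence in flag_guarantee_language(draft):
    print("REVIEW:", sentence)
```

A script like this catches only the phrasings it has been told about; it reduces reviewer workload but never replaces the review itself.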

Outdated fee information

AI models have training data cutoffs and cannot access your firm’s current fee schedule. A page drafted in January might reference fee levels that changed in April. Fee information must be verified against current rates at the point of publication and checked quarterly thereafter — particularly for services covered by the SRA Transparency Rules.

The compliance-safe content workflow

This workflow is designed to capture the speed benefits of AI while maintaining the compliance standards a regulated firm requires. We use this process for every client in our AI SEO automation service.

Stage 1: AI-assisted drafting

AI generates a structured first draft based on a detailed content brief. The brief specifies the target keyword, the page’s purpose, the intended audience, the jurisdiction (England and Wales unless stated otherwise), and any specific practice area details. The more specific the input, the fewer compliance issues in the output.

At this stage, the content is a working document — not a publishable page. It contains useful structure and broadly accurate information, but it has not been verified for legal accuracy, fee correctness, or SRA compliance.

Stage 2: Legal accuracy review

A qualified practitioner reviews every legal statement in the draft. This is non-negotiable. The reviewer checks:

  • Are limitation periods correctly stated?
  • Are court procedures accurately described?
  • Do fee ranges reflect current pricing?
  • Are legal definitions correct?
  • Is the jurisdictional scope clear (England and Wales, not UK-wide unless specified)?
  • Have any legislative changes since the AI’s training data affected the content?

The reviewer marks corrections directly in the document. Common corrections include updating fee ranges, adjusting procedural descriptions to reflect current Practice Directions, and removing statements that conflate English and Scottish law.

Stage 3: SRA compliance review

A separate review — either by your COLP, a compliance-aware partner, or a reviewer trained in SRA requirements — checks the content against the compliance checklist:

  • No misleading claims about outcomes or success rates
  • No implied guarantees
  • Fee information is accurate and current
  • Testimonials and case studies are genuine and consented
  • Disclaimers are present and appropriate
  • Content does not constitute legal advice (for informational pages)
  • Transparency Rule requirements are met (for applicable service pages)

This step takes 10–15 minutes per page and prevents the vast majority of compliance issues from reaching publication.
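One way to make the checklist enforceable in a content pipeline is to represent it as data and gate publication on it. The Python sketch below is illustrative; the field names are shorthand for the checklist items above, not an SRA artefact.

```python
from dataclasses import dataclass, fields

@dataclass
class ComplianceChecklist:
    """Stage 3 checklist as data. Field names are illustrative shorthand."""
    no_misleading_claims: bool = False
    no_implied_guarantees: bool = False
    fees_accurate_and_current: bool = False
    testimonials_genuine_and_consented: bool = False
    disclaimers_present: bool = False
    not_legal_advice: bool = False
    transparency_rules_met: bool = False

def ready_to_publish(checklist: ComplianceChecklist) -> bool:
    # A page is publishable only when every single item has been signed off.
    return all(getattr(checklist, f.name) for f in fields(checklist))

page = ComplianceChecklist(no_misleading_claims=True, no_implied_guarantees=True)
print(ready_to_publish(page))  # False: five items still unreviewed
```

The design point is the all-or-nothing gate: a partially completed checklist blocks publication by default, matching the "no shortcuts" rule of the workflow.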

Stage 4: SEO optimisation

With the content legally accurate and SRA compliant, the SEO specialist handles final optimisation: title tag, meta description, heading structure, internal links to relevant service pages and supporting resources, schema markup, and keyword placement. This stage doesn’t introduce new content — it ensures the approved content is visible and effective in search.

Stage 5: Author attribution and publication

The content is attributed to a named author — typically the solicitor who reviewed it — with their credentials visible on the page. This attribution serves both EEAT purposes (Google rewards content by identifiable experts) and compliance purposes (it’s clear who within the firm has taken responsibility for the content’s accuracy).

Publication happens only after every review stage is complete. No shortcuts. No publishing a draft “to get it live” while waiting for review. The workflow exists because the consequences of publishing non-compliant content on a regulated firm’s website are real and potentially serious.

Chatbots and interactive AI: additional considerations

AI chatbots on law firm websites introduce compliance considerations beyond static content, because the AI is generating responses in real time rather than from pre-approved text.

Disclosure requirement

The SRA’s overarching principle of transparency means visitors must know they’re interacting with an AI system, not a solicitor. This disclosure should be:

  • Visible at the start of every conversation (not buried in terms and conditions)
  • Worded clearly: “You’re chatting with our AI assistant. For specific legal advice, please speak with one of our solicitors.”
  • Impossible to miss — not a small-text disclaimer below the chat window

Our AI chatbot service implements this disclosure as a mandatory opening message that cannot be removed or modified by the chatbot itself.
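In code terms, the safest pattern is for the surrounding application to send the disclosure, not the language model. A minimal sketch, using hypothetical class and variable names rather than any real chatbot API:

```python
# Sketch of a wrapper that guarantees the AI disclosure opens every chat.
# Names are illustrative; adapt to whatever chat framework you actually use.

AI_DISCLOSURE = (
    "You're chatting with our AI assistant. "
    "For specific legal advice, please speak with one of our solicitors."
)

class CompliantChatSession:
    def __init__(self) -> None:
        self.transcript: list[str] = []

    def start(self) -> str:
        # The disclosure is emitted by the wrapper, never generated by the
        # model, so the model cannot remove, shorten, or rephrase it.
        self.transcript.append(AI_DISCLOSURE)
        return AI_DISCLOSURE

    def reply(self, user_message: str, model_response: str) -> str:
        if not self.transcript:
            # Defensive: never respond before the disclosure has been shown.
            self.start()
        self.transcript += [user_message, model_response]
        return model_response

session = CompliantChatSession()
print(session.start())
```

Because the opening message is application code rather than model output, no prompt injection or model update can suppress it.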

Response boundaries

A law firm chatbot must not:

  • Assess a visitor’s legal position (“based on what you’ve described, you likely have a strong case”)
  • Predict outcomes (“you would probably receive compensation of around £X”)
  • Recommend specific legal actions (“you should issue proceedings”)
  • Claim or imply it’s a solicitor or legal professional

It can:

  • Provide general information about your services and fees
  • Describe typical processes and timescales
  • Answer factual questions drawn from your approved website content
  • Qualify enquiries by practice area and urgency
  • Offer to connect the visitor with a qualified solicitor

The boundary between information and advice isn’t always obvious, which is why proper configuration and regular testing are essential. We conduct monthly compliance reviews of chatbot conversation logs to identify any responses that approach the boundary.
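Those boundaries can be partially enforced with a pre-send filter that swaps a boundary-crossing response for a safe referral. The sketch below is a crude illustration using assumed substring markers; it cannot replace proper configuration or regular human review of conversation logs.

```python
# Sketch of a pre-send filter on chatbot output. The marker phrases are
# illustrative examples only; real deployments need broader checks.

BLOCKED_MARKERS = [
    "you likely have a strong case",
    "you would probably receive",
    "you should issue proceedings",
]

SAFE_FALLBACK = (
    "I can't assess your legal position, but one of our solicitors can. "
    "Would you like me to arrange a callback?"
)

def guard_response(model_response: str) -> str:
    """Replace a boundary-crossing response with a safe referral."""
    lowered = model_response.lower()
    if any(marker in lowered for marker in BLOCKED_MARKERS):
        return SAFE_FALLBACK
    return model_response

print(guard_response("Based on what you've described, you likely have a strong case."))
```

Substring matching is deliberately simple here; the broader principle is that anything resembling advice or outcome prediction is intercepted before it reaches the visitor.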

Data handling

The ICO’s UK GDPR guidance applies to all personal data collected through chatbot conversations. Visitors must be informed about data collection before the conversation begins. Consent mechanisms must be clear. Data retention periods must be defined and enforced. And the chatbot provider’s data processing must meet UK GDPR standards — including the requirement that personal data is not used to train AI models without explicit consent.

Building an AI policy for your firm

An internal AI policy protects your firm by establishing clear rules about how AI tools are used in your practice. Several professional indemnity insurers now ask whether firms have an AI usage policy as part of their renewal process.

What the policy should cover

Permitted tools. Specify which AI tools staff may use for firm work. Consumer-tier tools (ChatGPT’s free version, for example) typically use conversations for training data and are unsuitable for any input that could identify clients. Enterprise-tier tools with contractual data protections are appropriate for content drafting and research.

Content review requirements. All AI-assisted content must complete the full review workflow before publication. No exceptions for “quick updates” or “minor changes.” Define who has review and approval authority.

Client data prohibition. AI tools must not be used to process client-identifying information unless the tool has a compliant Data Processing Agreement, data is stored on UK/EEA infrastructure, and the specific use case has been approved by your DPO or data protection lead.

Accountability. The named author on any published content is responsible for its accuracy. Using AI as a drafting tool does not transfer responsibility to the tool or its provider.

Training. Staff who use AI tools should understand both the capabilities and the limitations — particularly the tendency of AI models to generate confident but incorrect statements. A 30-minute training session covering the basics is sufficient for most firms.

Template structure

A functional AI policy for a law firm needn’t exceed two pages. Cover: purpose and scope, approved tools, prohibited uses, content review workflow, data protection requirements, and the signature of the managing partner or COLP confirming adoption. Review annually or when SRA guidance on AI evolves.

Data protection when using AI tools

Data protection is the area where AI use in law firms carries the most tangible risk. The consequences of a data breach involving client information processed through an AI tool are severe — both regulatorily (ICO enforcement) and professionally (SRA investigation, professional indemnity implications).

The fundamental rule

Do not input client-identifying information into any AI tool unless you have verified that the tool’s data handling meets UK GDPR standards and a Data Processing Agreement is in place. This includes client names, case details, addresses, financial information, and any data that could identify an individual or their legal matter.

Consumer vs enterprise AI tools

Consumer-tier AI tools — including free versions of ChatGPT, Google Gemini, and similar services — typically include terms that allow the provider to use conversations for model training. This means client data entered into these tools could be processed and potentially reproduced in responses to other users. This is fundamentally incompatible with the duty of confidentiality owed to legal clients.

Enterprise-tier tools from the same providers typically offer contractual commitments that data is not used for training, is processed on specific infrastructure, and is subject to a formal Data Processing Agreement. If your firm uses AI tools that process any personal data, the enterprise tier is the minimum acceptable standard.

Practical safeguards

For content creation workflows — which is the focus of this guide — the data protection risk is manageable because website content should never contain client-identifying information anyway. The risk increases if staff begin using AI tools for tasks beyond content creation: drafting client correspondence, summarising case files, or preparing court documents. Your AI policy should draw clear lines between content-creation use (lower risk, manageable with standard precautions) and case-related use (higher risk, requiring enterprise tools and specific approval).

The Law Society has published guidance on technology adoption in legal practice that covers data protection considerations. The ICO’s UK GDPR guidance provides the regulatory framework for all data processing, including AI-assisted workflows.

For a comprehensive approach to AI-powered content that maintains compliance at every stage, explore our AI SEO automation service — where every piece of content passes through the workflow described in this guide before publication.

Common questions

The questions that usually decide whether a firm books a call, starts with an audit, or keeps comparing options.

16 questions, answered clearly and without filler.

Can't find your answer? We'll point you to the right next step.


Does the SRA have specific rules about AI-generated content?

The SRA has not published rules that specifically address AI-generated content as a distinct category. Instead, existing principles apply: content published by or on behalf of a regulated firm must be accurate, not misleading, and compliant with the SRA Standards and Regulations regardless of how it was produced. The production method is irrelevant — the compliance obligation applies equally to content written by a partner, a marketing agency, or an AI tool.

Can my law firm be sanctioned for publishing AI-generated content?

Your firm won't be sanctioned for using AI as a tool. It could face regulatory action if AI-assisted content makes misleading claims, provides inaccurate legal information, implies guaranteed outcomes, or fails to meet the standards of the SRA Code of Conduct for Firms. The sanction relates to the content itself, not how it was produced. This is why human review by someone who understands both the law and the SRA's expectations is essential.

Do we need to tell clients that our website content was created with AI?

There is no SRA requirement to disclose AI assistance in content creation. The SRA's concern is that content is accurate, not misleading, and complies with advertising standards — not how it was drafted. That said, if your firm's brand positioning emphasises personal expertise and human touch, transparency about your content process is a matter of brand consistency rather than regulatory obligation.

What SRA principles apply to law firm marketing content?

The core principles are found in the SRA Standards and Regulations. Paragraph 8.9 of the Code of Conduct for Firms requires that marketing is 'not misleading'. The SRA Transparency Rules require published pricing information for certain services. Principle 2 (public trust) and Principle 7 (acting in the best interests of clients) apply broadly. AI-generated content must meet all of these standards — and the responsibility for ensuring compliance rests with the firm, not with the tool.

How do we handle fee information in AI-generated content?

The SRA Transparency Rules require that certain firms publish pricing information for specific services — residential conveyancing, probate, immigration, motoring offences, employment tribunals, and licensing for individuals. AI can draft fee pages based on your current pricing, but the figures must be verified by your firm before publication and updated whenever fees change. Publishing outdated or inaccurate fee information breaches the Transparency Rules regardless of whether AI drafted the page.

Can AI content include client testimonials or success stories?

The SRA permits testimonials and case studies provided they are genuine, not misleading, and do not breach client confidentiality without consent. AI cannot verify whether a testimonial is genuine or whether a client has consented to their case being referenced. These elements must be sourced and verified by a human. If AI drafts a case study framework, a solicitor must confirm that the facts are accurate, the outcome is not overstated, and any necessary consents are in place.

What happens if AI content contains incorrect legal information?

Publishing incorrect legal information on your firm's website creates two risks: regulatory (the SRA may consider it misleading under the Code of Conduct) and professional (a prospective client who relies on inaccurate published guidance may have grounds for complaint). AI models sometimes state incorrect legal positions confidently — misquoting limitation periods, confusing jurisdictional rules, or citing superseded legislation. This is why every legal statement in AI-assisted content must be verified by a qualified practitioner before publication.

How should we handle disclaimers on AI-assisted content?

Standard website disclaimers — stating that content is for general information only and does not constitute legal advice — should appear on all informational pages regardless of how the content was produced. For AI-specific risks, consider adding: 'This content was reviewed by [named solicitor] on [date]. For advice on your specific situation, please contact us.' This provides accountability, a freshness signal, and a clear call to action.

Can AI chatbots on our website be SRA compliant?

Yes — with proper configuration. The chatbot must clearly disclose that the visitor is interacting with an automated system. It must not give legal advice, make outcome predictions, or represent itself as a solicitor. It must handle personal data in compliance with UK GDPR. And any information it provides must be accurate and drawn from approved firm content. Our AI chatbot service includes all of these safeguards as standard.

Who is responsible for AI content compliance — the firm or the agency?

The firm. The SRA holds the regulated entity responsible for all marketing published on its behalf, regardless of who produced it. If your agency publishes AI-generated content that breaches SRA guidelines, your firm bears the regulatory risk. This is why your content workflow must include a firm-side review step — even if you trust your agency completely. We build this review step into every content production cycle.

Should we have an AI policy for our firm?

Yes — and many professional indemnity insurers are beginning to require one. A basic AI policy should cover: which AI tools staff may use, what types of content can be drafted with AI assistance, the mandatory review process before publication, how client data is handled (AI tools must not process client-identifying information), and who has final approval authority. The policy doesn't need to be lengthy — a one-page document covers most firms' needs.

Can we use AI to draft client communications, not just website content?

AI can assist with drafting standard client communications — engagement letters, status updates, procedural explanations. However, any communication that contains legal advice, case-specific guidance, or information that might influence a client's decisions must be reviewed and approved by the supervising solicitor. The SRA's duty of competence (Principle 4) applies to all client communications, and delegating to AI without review would likely fall short of this standard.

What are the data protection implications of using AI for content?

Do not input client-identifying information into AI tools unless you have a data processing agreement with the provider and the tool's data handling meets UK GDPR standards. Most consumer AI tools (ChatGPT's free tier, for example) use conversations for model training — making them unsuitable for any data that could identify a client or their matter. Use enterprise-tier tools with contractual data protections, or anonymise all input data before processing.

How do we maintain EEAT when using AI for content?

EEAT — Experience, Expertise, Authoritativeness, and Trustworthiness — is maintained by ensuring every piece of content is attributed to a named, qualified author. That author's credentials (SRA number, qualifications, years of experience) should be visible on the page. The content itself must contain evidence of genuine expertise: specific fee ranges, realistic timescales, practical process descriptions, and nuanced advice that could only come from someone who practises in the area. AI assists with production. Humans provide the expertise.

What should our AI content review checklist include?

At minimum: (1) Are all legal statements accurate under current law in England and Wales? (2) Are fee ranges current and compliant with SRA Transparency Rules? (3) Does any language imply guaranteed outcomes? (4) Are testimonials or case references genuine and consented? (5) Is the content attributed to a named, qualified author? (6) Are disclaimers present and appropriate? (7) Do all external links point to authoritative, current sources? (8) Has a qualified solicitor reviewed and approved the content? If any answer is 'no', the content is not ready for publication.

Are other UK law firms using AI for content, and has the SRA commented?

Yes — a significant number of UK law firms are using AI tools in their marketing and content workflows. The SRA has acknowledged the increasing use of AI across the legal sector and has published risk assessments relating to AI in legal services. Their position is consistent: firms must ensure that AI use does not compromise the principles of the SRA Standards and Regulations. The SRA has not prohibited AI use — but it has made clear that firms remain fully accountable for output quality and compliance.
Want compliant AI content done for you?

AI-powered content that meets every standard

Book a free 30-minute visibility review. We'll look at your current rankings and local presence, and tell you exactly where the biggest opportunities are, without wasting your time on theatre.