Deepfakes and Attorney Ethical Rules

Arizona is changing its ethical rules for attorneys. Effective January 1, 2026, a new comment will be added to Rule 3.3. The rule says a lawyer may not offer evidence that the lawyer knows to be false. If a lawyer suspects there may be an issue with the evidence, but does not know it to be false, the lawyer should resolve those doubts in favor of the client. In the era of AI-generated evidence and deepfakes, it is far easier for parties to falsify evidence. Under the current rule, unless the lawyer knows the evidence is false, they can offer it in court. Lawyers have no obligation to further investigate evidence that may be false, but they cannot ignore an obvious falsehood.

For Arizona lawyers, this changes in January. The new comment adds a duty of “reasonable inquiry” if you “reasonably believe that evidence has been materially altered or generated with intent to deceive the court.”

What does reasonable inquiry mean? In the era of deepfakes, it can be nearly impossible to tell that something is not real.

I’ve spoken to thousands of lawyers over the last year on this issue. In an AI CLE, I show real and AI-faked pictures and ask the audience to guess which is which. Of the thousands of lawyers who have seen this presentation, only a handful have correctly identified every AI-altered image. We are terrible at determining what is real and what is fake.

This gets even harder with new tools released by Apple and Google. Right now, in Google Photos, there is an “AI Enhance” button you can click to improve your images. Is this now considered a deepfake? Where is the line between real and AI, between admissible and not?

Maybe the rule means you must examine the metadata to see if a file is genuine. But how are you supposed to know what real metadata looks like versus AI-faked metadata? A photo can contain dozens of EXIF fields (EXIF is a type of embedded metadata), and their values vary from phone to phone and camera to camera. Some attorneys and experts have even found that AI can hallucinate metadata, making the metadata of AI-generated documents appear real.
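To make the metadata problem concrete, here is a minimal Python sketch of the kind of sanity check an examiner might run over EXIF fields already extracted into a dictionary (for example, with an EXIF library or a tool like exiftool). The tag names (Make, Model, DateTimeOriginal, Software) are standard EXIF tags, but the helper name and the “suspicious” heuristics are my own illustrative assumptions, and, as the article notes, a generator can fabricate plausible values for every one of these fields, so this is emphatically not a reliable deepfake detector:

```python
# Illustrative only: a crude consistency check over EXIF fields that
# have already been extracted into a dict. This is NOT a reliable
# deepfake detector; AI tools can hallucinate plausible values for
# every one of these tags.

# Tags a genuine phone or camera photo usually carries.
CAMERA_TAGS = ["Make", "Model", "DateTimeOriginal"]

def flag_exif_concerns(exif: dict) -> list[str]:
    """Return reasons this file's metadata deserves closer scrutiny."""
    concerns = []
    for tag in CAMERA_TAGS:
        if not exif.get(tag):
            concerns.append(f"missing typical camera tag: {tag}")
    software = str(exif.get("Software", ""))
    # Hypothetical watchlist; real editing and generation tools vary widely.
    for editor in ("Photoshop", "Midjourney", "Firefly"):
        if editor.lower() in software.lower():
            concerns.append(f"Software tag names an editing/AI tool: {software}")
            break
    return concerns

# A file with no camera tags and an editor in its Software field
# raises several flags at once:
print(flag_exif_concerns({"Software": "Adobe Photoshop 25.0"}))
```

Even a check like this only tells you where to look harder; an empty or odd result proves nothing by itself, which is exactly the problem with treating metadata review as the “reasonable inquiry.”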

Does reasonable inquiry mean an attorney has to hire an expert to opine on whether a piece of evidence is real? That could drastically increase case costs and make it difficult to litigate smaller cases, or cases in which the parties lack the budget for experts.

This appears to be an ethical minefield. It’s already tough to separate real content from deepfakes. As AI continues to improve, this will become more challenging. Attorneys cannot be expected to be experts in AI deepfake detection, nor can they be expected to hire experts for every case to review all images, videos, audio, and documents, and check for signs of AI manipulation.

As far as I am aware, Arizona is the first state to enact such a change. However, several other states have discussed making similar changes.

If you are an Arizona attorney, be careful with this change. Be sure to document your file on how you handle any evidence you think may have been manipulated, altered, or falsified. At a minimum, you should question the person who provided the possibly altered evidence to explore the origins. Consider bringing in an expert if the case is sufficiently large and the evidence is critical to your claim or defense.
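One concrete way to document the file, sketched below in Python, is to record a cryptographic hash of each piece of evidence as received, along with who provided it and when, so you can later show the copy in your possession was never altered. The filename, log fields, and helper names here are illustrative assumptions, not a prescribed practice:

```python
# Illustrative sketch: log a SHA-256 hash of an evidence file as
# received, so any later alteration of your copy can be demonstrated.
import hashlib
import json
from datetime import datetime, timezone

def hash_file(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def log_evidence(path: str, provided_by: str, note: str = "") -> dict:
    """Build a simple intake record to append to the case file log."""
    return {
        "file": path,
        "sha256": hash_file(path),
        "provided_by": provided_by,
        "received_utc": datetime.now(timezone.utc).isoformat(),
        "note": note,
    }

# Hypothetical example file standing in for a real photo:
with open("photo_001.jpg", "wb") as f:
    f.write(b"example bytes standing in for a real photo")
print(json.dumps(log_evidence("photo_001.jpg", "client",
                              "origin questioned; client says iPhone"), indent=2))
```

A hash proves your copy did not change after intake; it says nothing about whether the file was genuine before it reached you, which is why the questioning and, where warranted, the expert review still matter.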

AI-alterations and deepfakes are already causing problems in court. The problem is only going to get worse in the coming months. Unfortunately, I do not think this ethical rule change is the solution.  
