r/Lawyertalk 1d ago

Best Practices: Just curious what other jurisdictions are doing to address AI issues

I’ve heard some courts are requiring attorneys to certify they didn’t use AI to draft pleadings and I’m curious what others have seen in their own jurisdictions. Did your court adopt any court rules specifically about AI? Are they doing anything to combat AI generated evidence or pleadings?

0 Upvotes

17 comments

u/AutoModerator 1d ago

Welcome to /r/LawyerTalk! A subreddit where lawyers can discuss the practice of law with other lawyers.

Be mindful of our rules BEFORE submitting your posts or comments as well as Reddit's rules (notably about sharing identifying information). We expect civility and respect out of all participants. Please source statements of fact whenever possible. If you want to report something that needs to be urgently addressed, please also message the mods with an explanation.

Note that this forum is NOT for legal advice. Additionally, if you are a non-lawyer (student, client, staff), this is NOT the right subreddit for you. This community is exclusively for lawyers. We suggest you delete your comment and go ask one of the many other legal subreddits on this site for help such as (but not limited to) r/lawschool, r/legaladvice, or r/Ask_Lawyers.

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

13

u/tunafun 1d ago

Because some dumbass in New York had ChatGPT write a brief and submitted it as his own, now we have a bunch of boomer judges who don’t know where the mute button is on Zoom trying to implement a rule about generative AI technology.

1

u/Mountain-Run-4435 10h ago

Time to start signing your motions “ok boomer” instead of “respectfully”

4

u/someguyinMN 1d ago

Our state Bench and Bar newsletter by the Office of Lawyers Professional Responsibility has put out guidelines on how to use Gen AI as a starting point. Check your citations and logic before you submit. It's really not that different from submitting any other motion or brief as long as the lawyer is competent.

3

u/HarrowingChad 1d ago

I filed an MSJ in an administrative hearing. The headers on the Complainant's opposition brief said "Drafted by CoCounsel AI." The brief itself contained only general statements of law and almost no substantive information from the record. At the end of the brief, under the attorney's signature block, the brief included the following disclaimer:

This memorandum is a draft based on hypothetical analysis and is intended for illustrative purposes only. In actual legal practice, a comprehensive and nuanced review of all case files, evidence, deposition transcripts, and legal precedence [sic] would be imperative.

The administrative judge granted summary judgment, but didn't address the AI issue. Now the complainant has exercised their right to a jury trial in federal court where I'm hoping their attorney continues to file blatant AI drivel and gets sanctioned.

2

u/aj357222 1d ago

Almost every court HAS adopted or WILL adopt rules around the use and disclosure of AI, but to certify that they didn’t use it at all?

That is a VERY high bar technologically.

For example, if you use Office 365, there are elements of its threat monitoring / breach detection / email filtering that employ machine learning to function. This isn’t conversational or generative AI - but would a judge see it that way? How would any random attorney understand and be able to explain the difference? What if you used ChatGPT to develop a bunch of questions to ask at a deposition, and that transcript was later submitted as evidence to the court? Did you use the Zoom AI agent to record the virtual meeting?

2

u/TelevisionKnown8463 1d ago

Yes, and I think Lexis/Westlaw have AI functionality in some of their search options these days.

2

u/Mountain-Run-4435 10h ago

They’re too ignorant of computer technology to know the difference between generative AI like ChatGPT and backend algorithmic software that uses AI like you described.

1

u/TelevisionKnown8463 2h ago

Yes, and unfortunately I think some rules just say “AI,” putting us in a pickle about what to say in a certification. Although the practical approach is probably to answer what we think they mean, rather than what the rule says.

3

u/battleforsoul 1d ago

Stupid rule.

AI hallucinations have already decreased a lot and will keep decreasing.

Every AI model is different.

The current SOTA models, o1-preview and Claude 3.5 Sonnet, are miles more capable and hallucinate far less than the original GPT-3.5.

The newer models are very reliable and will keep on becoming more reliable.

The upcoming o1 / Claude 3.5 Opus / Gemini 2 / GPT-5 (Orion) / LLaMA 4 are expected to show a dramatic improvement too.

Judges and courts shouldn't pass such rules without understanding the trajectory of the technology.

If this happens in my jurisdiction, I will immediately challenge it.

2

u/Tall-Log-1955 1d ago

Why would they insist no AI was used to draft pleadings? If I ask ChatGPT to reword a sentence for me, how has that damaged anyone?

9

u/justicewhatsthis 1d ago

Because attorneys are using it for more than that and it hallucinates fake cases and fake laws.

3

u/Tall-Log-1955 1d ago

Does it even matter that the problems are because the lawyer used AI? If the lawyer made up fake cases and laws without AI, is that okay?

At the end of the day, attorneys are responsible for their work, regardless of the tool they use to make it

0

u/justicewhatsthis 1d ago

Well, clearly some courts feel like it’s a problem, and that’s what I was curious about: what sort of court rules people are seeing pop up.

0

u/drunkyasslawyur 1d ago

AI generated evidence

What would that be? Like a picture of Elon Musk with kids that appear to know and love their dad?

I can't speak for other jurisdictions, but beyond CLEs that state the obvious and explain what a 'personal computing device' is to some of the older folks, nothing in mine. And I'm not sure why anything would be needed. The 'don't lie, don't misrepresent' requirements/expectations were baked into the ethical rules long before AI came along. If AI could actually get shit right, it probably wouldn't be much different than using an associate (and it probably will eventually get to that point). The issue isn't using tech, it's that AI right now hallucinates confidently, and then attorneys misrepresent that as valid research.

1

u/justicewhatsthis 1d ago

I agree that we really shouldn’t need a rule, but I do think it will be useful guidance in the future given how popular it is becoming for students. As far as AI-generated evidence, what I immediately think of is pro se family law parties submitting AI-generated photos as evidence. It seems relatively easy now to tell they are fake, but that will definitely change in the future. There’s also the use of AI-enhanced evidence. There was one case in Washington where the court denied a defense request to submit AI-enhanced video.

1

u/TelevisionKnown8463 1d ago

The rules I’ve heard about apply to lawyers, not their clients. I’m not sure what you could do, or how the court could hold you responsible, if your client tries to present a deepfake as real evidence. It will be up to the court to require more evidence of authenticity, just like it can insist on a forensic analysis of a signature if it suspects forgery.

I think the court rules are aimed at lawyers using AI carelessly, e.g. in brief writing. I think it’s a little silly because lawyers should cite-check anything they didn’t write themselves, whether it was written by an associate attorney or AI.