
The Top 5 Issues with AI-Generated Legal Content

by Kristi Patrice Carter, JD

Artificial intelligence (AI) is transforming how businesses create content. For attorneys, it speeds legal research and analysis, assists with document review, automates routine tasks, and more. In essence, it frees attorneys to focus on high-value work, such as marketing or preparing cases for settlement or trial. Business Wire recently reported, “Over 47% of legal professionals use AI within their legal practice.” Despite this widespread adoption, attorneys who rely on AI-generated content must understand its pitfalls and take steps to mitigate the risks. Here are the top five issues attorneys must know when using AI-generated content:

#1 It May or May Not Be Accurate

AI-generated content lacks analytical depth and genuine legal reasoning. Because no human is conducting a thorough analysis, the AI is merely crafting a plausible response to a prompt. This is very problematic for attorneys because it can lead to misinterpretations of statutes, inappropriate applications of case law, or flawed arguments. There have been instances where AI models fabricated statutes and cases, citing nonexistent or contradictory precedents. Attorneys who rely on such inaccurate legal content expose themselves to court-imposed sanctions and malpractice suits. Instead, they must ensure that any AI-generated content is factual and properly verified. Lawyers who fail to do so risk damaging their reputations and facing serious professional repercussions.

#2 Ethical Concerns for Lawyers

Attorneys take an oath; as such, they must abide by strict ethical rules, including the duties of diligence and competence. The American Bar Association (ABA) and state bar associations stress that lawyers must use technology competently and ensure adherence to professional ethical standards. Relying on incorrect, misrepresented, or misleading AI-generated content without proper attorney review may result in serious ethical violations and sanctions. For instance, in Beaumont, Texas, a federal judge penalized attorney Brandon Monk for submitting an AI-generated court filing that included nonexistent quotations and case law. Monk was ordered to pay a $2,000 fine and attend a course on generative AI.

#3 Data Security and Confidentiality Concerns

Lawyers must realize that many AI tools store the sensitive data users enter, which can lead to breaches. Many tools will not generate responses until the user inputs information, and some also draw on documents previously uploaded into their systems. Attorneys who input sensitive client information to create customized responses expose clients to data leaks, unauthorized access, or violations of attorney-client privilege. Further, most AI platforms do not guarantee data security, which can put lawyers out of compliance with state regulations and their confidentiality obligations. The ABA Standing Committee on Ethics and Professional Responsibility addresses this in Formal Opinion 512, which states that lawyers and law firms using generative artificial intelligence (GAI) must consider their ethical obligations, take reasonable steps to protect client information, and charge only reasonable fees when using GAI tools.

#4 Lacks Authority

Although AI-generated content can appear authoritative, it is typically low-quality, rehashed material already found all over the internet. It may not correctly address local legal nuances, and it often lacks jurisdiction-specific accuracy because laws vary by state and country. AI-generated content may also violate ethics rules. Because it often resembles existing material, it does little to establish attorneys as credible legal authorities.

#5 It Is Not Creative

AI content is often based on past data that has been rehashed repeatedly. It is not original, profound, or creative; it is a carbon copy of an idea and lacks passion and empathy. In most instances, AI-generated content is read and quickly forgotten because it does not instill deep emotions or forge a connection with the reader. Although this type of legal writing may be sufficient for an interoffice memo, it will not work for marketing or for building a unique, reputable brand and a solid reputation as an industry leader.

Search engines like Google also penalize legal sites that use rehashed AI-generated content. They want to deliver a unique user experience and do not see copycat content as worthwhile or useful. As a result, they treat it as duplicate content and rank it below original, legally sound material, making it far less likely that viewers will ever see it.

Conclusion

AI-generated content can help streamline lawyers’ tasks, but professionals who publish incorrect or faulty AI-generated content are at risk of malpractice suits and court sanctions. To lessen risks, attorneys must double-check all information to ensure that it is accurate, up-to-date, valuable, ethical, and not contradictory. By doing this, they will preserve client trust, their reputations, and the integrity of the profession.

How KPC Marketing Can Help

For the reasons mentioned above, KPC Marketing does not use AI-generated content. Our marketing writers hold JDs or are licensed attorneys with more than 10 years of experience in the legal field. We produce unique, high-quality, well-written, compelling, informative, and legally accurate content that appeals to both human readers and search engines. Contact KPC Marketing today at 866-457-2627 or visit www.kpcmarketing.com for a free initial consultation.