Facial Recognition — Surveillance in the Lakes Region - What are the REAL Consequences?
CONTINUATION OF PREVIOUS SURVEILLANCE BLOG POST. Because of space limitations, this is a follow-up outreach blog post to the Lakes Region community. It follows our previous post: “Impacts of Surveillance “Statecraft” in the Lakes Region — “Takings” of Personal Data”. LINK The subject of this blog post, in combination with the previous one, applies to the Lakes Region in many tangible ways. For example, with Laconia’s recent adoption of “social districts”, where community members are now allowed to drink outdoors under the permitting ordinance, will Facial Recognition software be deployed by the tethered drones (also recently purchased by Laconia) pursuant to the Requests for Proposals revealed in our previous blog post? Will license plate reader cameras and geofencing DATA collection from cell phones also be deployed from drones and/or hand-held cameras, as is done in other municipalities? “2 New Hampshire cities vote to OK "social districts" that allow public drinking” (CBS News, Nov. 5, 2025)
This post drills a bit deeper on Facial Recognition in the context of misleading or fabricated evidence presented in courtrooms, whether it be video footage, photographs, voice recordings, location DATA, or metadata. Some readers may question whether AI-generated content has progressed to the point where it cannot easily (or at all) be distinguished from real video footage, photographs, voice recordings, location DATA and metadata. The short answer is YES IT HAS! The video below carries the following description: “Hyper-realistic AI-generated news videos are flooding social media, making it harder to tell real reports from fakes. Experts warn the technology is advancing so quickly that misinformation can spread before it’s verified, raising new concerns about trust in what we see online.”
Please watch the video all the way through and use your own Critical Thinking skills.
Facial Recognition and the collection of personal, private DATA from generally unsuspecting Americans may appear to be new technology, yet it has been deployed for at least a decade, and its use to collect DATA is now pervasive in public and commercial applications of which readers may be unaware. One of the venues addressing its impacts is the courtroom, where current DEEP FAKE technology surfaces in challenges to evidence (video footage, photographs, voice recordings, location DATA and metadata) as lawyers contest the authenticity and veracity of that evidence.
SOURCE National Law Review Aug. 2025: “When a high school principal's voice went viral making racist and antisemitic comments, the audio seemed authentic enough to destroy careers and inflame community tensions. Only later did forensic analysis reveal the recording was a deepfake created by the school's athletic director. The incident, requiring two forensic analysts to resolve the nature of the recording, illustrates a fundamental challenge facing the legal system: as AI-generated content becomes indistinguishable from human-created content, how do courts determine authenticity?
This challenge extends beyond theoretical concerns. Courts nationwide are grappling with synthetic evidence in real cases, from criminal defendants claiming prosecution videos are deepfaked to civil litigants using AI-generated content to support false claims. …”
SOURCE: On Friday, a federal judicial panel convened in Washington, DC, to discuss the challenges of policing AI-generated evidence in court trials, according to a Reuters report. The US Judicial Conference’s Advisory Committee on Evidence Rules, an eight-member panel responsible for drafting evidence-related amendments to the Federal Rules of Evidence, heard from computer scientists and academics about the potential risks of AI being used to manipulate images and videos or create deepfakes that could disrupt a trial.
The meeting took place amid broader efforts by federal and state courts nationwide to address the rise of generative AI models (such as those that power OpenAI’s ChatGPT or Stability AI’s Stable Diffusion), which can be trained on large datasets with the aim of producing realistic text, images, audio, or videos.
In the published 358-page agenda for the meeting, the committee offers up this definition of a deepfake and the problems AI-generated media may pose in legal trials:
A deepfake is an inauthentic audiovisual presentation prepared by software programs using artificial intelligence. Of course, photos and videos have always been subject to forgery, but developments in AI make deepfakes much more difficult to detect. Software for creating deepfakes is already freely available online and fairly easy for anyone to use. As the software’s usability and the videos’ apparent genuineness keep improving over time, it will become harder for computer systems, much less lay jurors, to tell real from fake.
During Friday’s three-hour hearing, the panel wrestled with the question of whether existing rules, which predate the rise of generative AI, are sufficient to ensure the reliability and authenticity of evidence presented in court.
“Deepfakes in the Courtroom: Problems and Solutions”
SOURCE: (© Illinois State Bar Association) By George Bellas “The explosion of artificial intelligence (AI) has significantly impacted the practice of law. While it has improved legal research, drafting, and automating repetitive tasks, the impact of AI in the courtroom must still be confronted. The increased intrusion of AI into the legal world as a whole and the courtroom creates many challenges, both practically and ethically, in the context of litigation.
High on the list are so-called “deepfakes,” a term that refers to altered or completely fabricated AI-generated images, audio, or video, that are also extremely realistic, making them difficult to discern from reality.1 In a sense, they’re AI’s version of photoshopping.
And the ease with which deepfakes can be created poses significant problems for courts in handling video and image evidence. We can no longer assume a recording or video is authentic when it could easily be a deepfake.
As Judge Herbert B. Dixon, Jr. of the Superior Court of the District of Columbia recently observed, “Because deepfakes are designed to gaslight the observer… any truism associated with the ancient statement ‘seeing is believing’ might disappear from our ethos.”2
Deepfakes, which first appeared in 2017,3 have been used for purposes ranging from doctored porn clips, to spoof and satire, to fraud and other crimes, as noted in a joint presentation last January by the ABA’s Task Force on Law & AI, and The Bolch Judicial Institute at Duke Law School.4 They also have appeared in the form of fictional social media accounts and voice clones. They can be created in a minute or less. We may be looking at a future in which entire movies are made using only a single scene. In the courtroom context, deepfakes will impact evidence authenticity, witness credibility, and the integrity of the judicial process, not only because of deepfakes themselves but also because genuine evidence now can be alleged to be false, requiring this to be disproven. …” SEE the SOURCE for the entire article, with extensive analysis of real cases and a full bibliography of citations.
Deep Fakes Are Not Just Being Used in Court Litigation — Theft of Personal, Private Data Is Increasing Exponentially Across America — Theft of Banking Information from Phones
Are Current Laws Sufficient to Combat Deepfakes?
© 2025 Amy Swaner. All Rights Reserved. May use with attribution and link to article, LINK
Deepfakes are used for 3 specific purposes: to embarrass, mislead, or entertain. Various forms of media have been used for these exact three purposes for centuries. Though deepfakes seem new, they are just the latest iteration of doctored images. Their real malignancy comes from the accessibility, ease, and accuracy of these newest doctored images. Rather than skill and intelligence, a GenAI user needs only imagination and access to a powerful GenAI tool. If these deepfakes are just the latest iteration of a long-standing harm, can existing laws sufficiently combat them? We’ll look at the harms of deepfakes, and consider whether existing laws are sufficient to provide adequate remedies.
Torts
The accessibility and sophistication of deepfake technology have expanded the landscape of potential tortious conduct. When examining the legal remedies available to victims of harmful deepfakes, several established tort doctrines promise recourse. But are they sufficient? While new technology may present novel factual scenarios, the fundamental legal principles governing intentional misrepresentation, reputational harm, and negligent conduct remain applicable.
Generally, when the analysis focuses on the underlying tortious behavior rather than the technological means used to perpetrate it, tort laws are sufficient. The following tort frameworks are particularly relevant to addressing deepfake harms, with fraud and defamation being the most frequently applicable.
Fraud
Fraud is a deliberate deception intended to secure unfair or unlawful gain or to deprive a victim of a legal right. GenAI Deepfakes have already wreaked havoc in the pursuit of fraud, such as a deepfake video that convinced a worker to release $25 million to a fake CFO. The worker was skeptical of the transaction until he was on a video chat with several people, including the CFO. All of the other “people” on the video chat were deepfakes. Consider the following two scenarios, which show how analogous typical fraud claims and deepfake fraud claims are.
Scenario #1 For example, a startup founder is seeking investments for their new AI software company. To attract funding, the founder:
Falsely claims that the company has secured contracts with major corporations.
Alters financial statements to show inflated revenue and profits.
Provides fake customer testimonials and case studies to mislead investors.
Relying on this information, investors contribute millions to the startup. Later, it is revealed that the contracts were fake, revenue was exaggerated, and customer testimonials were fabricated. The company collapses, and the investors lose their money.
This type of harm falls directly under common law and statutory fraud. It is knowingly false claims that are intended to mislead and change behavior for the benefit of the speaker at the expense of the one misled.
Scenario #2 Now in the same situation, imagine that the startup founder uses GenAI to create a deepfake video of a CEO announcing false company information to mislead potential investors. The founder could use Runway ML Gen-2 to create videos of how the imaginary new software looks and works, and videos of testimonials by imaginary customers. The founder could use a free GenAI tool such as ChatGPT, Grok, Gemini, or others to create fake financials. Although GenAI helped to create the misleading information, the intent and the result are the same: to manipulate and mislead investors. The same laws that were used to punish the “analog” perpetrator of fraud can be used to punish the AI perpetrator. The same holds true for defamation.
Defamation
With GenAI readily accessible, cheap, and easy to use, with such realistic output, it's no wonder that we have seen an explosion of deepfakes, including defamatory deepfakes. Defamation is a false statement presented as fact that causes injury or damage to a person's reputation. Milkovich v. Lorain Journal Co., 497 U.S. 1 (1990). However, despite being false, defamation still enjoys some level of First Amendment protection.
Recent examples of defamatory deepfakes include manipulated videos of Donald Trump appearing to make inflammatory statements about minority groups, and fabricated footage of Kamala Harris apparently calling for extreme policy positions she never actually endorsed.
Public figures pursuing defamation claims must prove that the defamer acted with actual malice—knowledge of falsity or reckless disregard of the truth. New York Times v. Sullivan, 376 U.S. 254 (1964). This standard becomes particularly relevant with deepfakes, as the very nature of their creation implies knowledge of falsity.
Scenario #1 For instance, if political Candidate A accuses a competitor, Candidate Z, in online forums and on social media sites of accepting bribes and engaging in human trafficking, this is actionable defamation if it is false and done with actual malice. Candidate Z could successfully sue for defamation and recover damages if Candidate Z can prove reputational damage.
Scenario #2 Suppose Candidate B used GenAI to create fake posts on social media sites alleging that Candidate Y received bribes. They could even use GenAI to create a video of Candidate Y receiving bribes. They could also use GenAI to create a video showing Candidate Y holding people against their will and forcing them into activity that makes it appear they are being trafficked. Creating and distributing these deepfake videos showing a political candidate accepting bribes and engaging in human trafficking would be considered defamatory. So long as Candidate Y can prove that these videos were false and damaged their reputation, Candidate Y can use the exact same laws as Candidate Z in Scenario #1.
Deepfakes used to perpetrate Fraud and Defamation can be combatted using the same laws used to punish their low-tech analogs. Thus far, no new law is needed.
Negligence
Negligence principles provide a beneficial legal framework for addressing deepfake harms that occur without malicious intent but still result from a creator's failure to exercise reasonable care. Under traditional negligence doctrine, deepfake creators and distributors may be held liable if they breach their duty of care to foreseeable victims through actions such as inadequate disclosure of synthetic content, insufficient security measures protecting deepfake technology, or careless distribution that enables harmful misuse.
The standard of care for deepfake creators is evolving alongside the technology, with courts likely to consider industry best practices, available safeguards, and the potential severity of harm when determining liability. For instance, a creator who fails to implement readily available watermarking technology on a realistic deepfake, or fails to identify it as GenAI-created, might be found negligent if that content is subsequently mistaken for authentic material and causes demonstrable harm. As deepfake technology becomes more mainstream, we can expect negligence law to play an increasingly important role in establishing the boundaries of responsible creation and distribution, particularly in cases where intent to harm cannot be established, but foreseeable damage nonetheless occurs.
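To make the disclosure point above concrete, here is a minimal, illustrative sketch (Python with the Pillow imaging library, which is our assumption; the metadata key names are hypothetical) of how a creator might label an image as GenAI-created before distributing it. It is a plain disclosure label, not a tamper-resistant forensic watermark, and metadata of this kind is easily stripped; robust provenance systems such as C2PA cryptographically sign such assertions instead.

```python
# Minimal sketch: embed an explicit "AI-generated" disclosure label in PNG metadata.
# Assumes the Pillow library (pip install Pillow). This is a disclosure label,
# NOT a tamper-resistant forensic watermark; the key names below are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(src_path: str, dst_path: str, tool_name: str) -> None:
    """Copy an image, adding metadata that discloses it as AI-generated."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai-generated", "true")   # hypothetical key name
    meta.add_text("generator", tool_name)   # which GenAI tool produced it
    img.save(dst_path, format="PNG", pnginfo=meta)

def is_labeled_synthetic(path: str) -> bool:
    """Check whether an image carries the disclosure label."""
    return Image.open(path).info.get("ai-generated") == "true"

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    label_as_synthetic("deepfake_raw.png", "deepfake_labeled.png", "ExampleGenAITool")
    print(is_labeled_synthetic("deepfake_labeled.png"))  # True
```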
Crimes
Extortion and Blackmail
Deepfakes have created powerful new tools for extortion and blackmail. Perpetrators typically create compromising synthetic media—often of a sexual or embarrassing nature—and threaten to release it unless the victim complies with demands for money, sexual favors, or other concessions. The criminal nature of this conduct is clear under both federal and state statutes prohibiting extortion, blackmail, and coercion, regardless of whether the threatened content is authentic or synthetic. The psychological impact on victims can be severe even when they know the content is falsified, as the potential public humiliation remains a credible threat. Courts have consistently held that such uses of deepfake technology fall outside constitutional protection, as they constitute true threats and criminal solicitation rather than protected expression. Prosecution of these cases presents unique challenges, among the most difficult being jurisdictional issues when perpetrators operate across borders and technical hurdles in tracing the origin of anonymous deepfakes.
AI-Generated Non-Consensual Intimate Imagery or “DeepFake Porn”
This is commonly referred to as "AI-generated nonconsensual intimate imagery" or "AI-generated NCII." It's also sometimes called "deepfake pornography" when specifically referring to fabricated sexual content that uses someone's likeness without their consent.
Currently, no comprehensive federal law specifically targets AI-generated nonconsensual intimate imagery, leaving a patchwork of state regulations to address this growing concern. Several states have taken the initiative to fill this regulatory gap.
California leads with two significant pieces of legislation: AB 602 and AB 1280, both explicitly designed to combat sexually explicit deepfakes. Similar protective measures have been enacted in Texas, Virginia, New York, and Minnesota. In Illinois, the Biometric Information Privacy Act (BIPA), though not created specifically for deepfakes, has proven applicable in certain cases involving the unauthorized use of biometric identifiers.
At the federal level, several promising legislative approaches have been proposed but not yet enacted. The DEFIANCE Act (Disrupt Explicit Forged Images and Non-Consensual Edits Act) represents one of the most comprehensive attempts to address deepfake harms. Similarly, the proposed DEEPFAKES Accountability Act (Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act) and the Preventing Harmful Image Manipulation Act aim to create federal frameworks for combating this technology when used maliciously. For the time being, a comprehensive AI law, even for something as important as preventing NCII, is unlikely to be forthcoming.
In the absence of specific legislation, victims often turn to existing legal frameworks for recourse. Copyright laws offer protection when the original images belong to the victim. Defamation laws, right of publicity claims, and anti-harassment statutes can provide alternative legal avenues depending on the specific circumstances of each case.
The rapidly evolving nature of generative AI technology presents significant challenges for lawmakers and courts alike. These challenges are further complicated by jurisdictional issues, particularly when content is created or hosted internationally, placing it beyond the reach of domestic laws. Legal experts widely acknowledge that despite these various approaches, significant protection gaps remain, leaving many victims without clear or effective legal remedies. As technology continues to advance, the law struggles to keep pace, highlighting the urgent need for more comprehensive legal frameworks.
Revenge porn laws, originally designed to criminalize the nonconsensual distribution of real intimate images, struggle to address the unique challenges posed by AI-generated deepfake pornography. Many existing laws require that the images be authentic, meaning they must depict the actual victim rather than a synthetic creation. This loophole allows perpetrators to claim that no real photo or video was used, evading liability.
In 2019, Virginia became the first state to amend its existing "revenge porn" laws to include provisions against nonconsensual sexual deepfakes. Virginia Section 18.2-386.2. The amendment made it unlawful to create a non-consensual sexual image “by any means whatsoever.” Virginia strengthened its stance by enacting Senate Bill 731. Texas also expanded its statutes to cover AI-generated nonconsensual intimate imagery (NCII). California passed two laws to combat deepfake porn (SB 926 and SB 981—both of which became effective January 1, 2025).
To date there is no comprehensive federal law that criminalizes the creation or dissemination of deepfake pornography. As a result, victims must rely on a patchwork of state laws, civil claims for defamation or invasion of privacy, or intellectual property laws, none of which were originally designed for this type of harm. The absence of clear legal recourse leaves victims vulnerable, making it imperative for lawmakers to adopt federal legislation specifically targeting AI-generated NCII, ensuring that perpetrators face appropriate legal consequences and victims have effective remedies.
Section 230 of the Communications Decency Act generally shields online platforms from liability for user-generated content, meaning that websites and social media companies are not legally responsible for hosting or distributing deepfakes, even if they cause significant harm. This legal immunity presents a major challenge in combating malicious deepfakes. I will explore the implications and impacts of Section 230 and potential reforms in a subsequent article.
Copyright and Trademark
Deepfakes pose significant challenges to copyright and trademark law by blurring the lines between original and derivative works. When deepfakes incorporate copyrighted images, videos, or audio without permission, they may constitute direct copyright infringement. However, at times infringement may be inadvertent. This becomes particularly problematic when AI systems are trained on massive datasets of copyrighted materials without proper licensing or attribution. The transformative nature of deepfakes further complicates matters, as courts must determine whether these AI-generated works qualify as "fair use" or represent substantial copying of protected elements. For a more in-depth discussion of AI and …
From a trademark perspective, deepfakes can dilute or tarnish valuable marks by associating brands or personalities with unauthorized or potentially damaging content. When a deepfake depicts a celebrity endorsing a product they never actually promoted, it may constitute trademark infringement or false endorsement. Likewise, deepfakes featuring branded products in inappropriate contexts may damage brand reputation and consumer perception. The Rogers test, which balances trademark rights against artistic expression (Rogers v. Grimaldi, 875 F.2d 994 (2d Cir. 1989)), becomes increasingly difficult to apply when AI-generated content blurs the line between artistic commentary and commercial exploitation.1 As deepfake technology becomes more sophisticated and widespread, both copyright and trademark doctrines face mounting pressure to adapt to these novel forms of potential infringement that were inconceivable when these legal frameworks were established.
Legal Protections for Deepfakes
First Amendment Protections
The First Amendment protects the freedom of speech and expression, among other things. It allows people and entities to express themselves through words and images without governmental interference. The Supreme Court has consistently held that freedom of expression is a fundamental right essential to democratic society. New York Times v. Sullivan, 376 U.S. 254 (1964). All sorts of speech enjoy these protections, including political speech (Brandenburg v. Ohio, 395 U.S. 444 (1969)), and to a limited extent, even defamatory statements. Gertz v. Robert Welch, Inc., 418 U.S. 323 (1974).
The United States Supreme Court has held that technologies that aid humans in expression receive First Amendment protection, such as the printing press, video recorders, and the internet. Reno v. ACLU, 521 U.S. 844 (1997). By this line of reasoning, to the extent GenAI helps users draft text or create expressive images, its output will be protected. But do these First Amendment protections extend to deliberately falsified images and video created using GenAI? Surprisingly, the answer is yes, to some extent. These falsified statements and images fall into three categories: parody, satire, and defamation. We considered defamation above, and so will consider parody and satire in turn.
Parodies
Although parodies are not new—think of the Spaceballs movie spoofing the Star Wars movies back in the late 1980s—they are easier to create and potentially more realistic using GenAI. Three well-circulated GenAI parodies are:
Better Call Trump—Donald Trump appeared as a Saul Goodman character explaining money laundering to Jared Kushner. Created by YouTubers who were showcasing the abilities of DeepFace GenAI software.
Fortune Telling—Snoop Dogg appeared as “Miss Cleo” reading the futures of other celebrities through tarot cards. Created by BrianMonarch.
Convenience Store Holdup—Donald Trump, Joe Biden, Elon Musk, Barack Obama, Kamala Harris, and other celebrities are shown attempting to hold up a convenience store. Created by AI Video Creations.
Parodies are generally protected as a form of speech under the First Amendment. Courts have recognized parody as entertainment and as a beneficial form of social commentary. Campbell v. Acuff-Rose Music, 510 U.S. 569 (1994). This protection was powerfully affirmed in Hustler Magazine v. Falwell, 485 U.S. 46 (1988), where the Supreme Court held that parodies of public figures, even when intended to cause emotional distress, are protected by the First Amendment. The case involved a parody advertisement suggesting that televangelist Jerry Falwell had engaged in an incestuous relationship: clearly false and potentially emotionally harmful content, but nevertheless protected as parody. This precedent is particularly relevant to deepfakes, as it suggests that even highly manipulated content may receive constitutional protection when presented as parody. To maintain this protection, however, parodies must steer clear of unprotected speech such as obscenity or incitement to lawless action.
In addition, trademark laws protect the identifying image(s) or phrase(s) closely associated with a particular product or service if they are used incidentally. If an incidental use of a trademark or service mark adds to the deeper meaning of the parody, its use is generally acceptable under trademark law. Rogers v. Grimaldi, 875 F.2d 994 (2d Cir. 1989).
Courts have generally found trademark protections unavailable if a parody is making a commentary on a symbol, phrase, image, or the product itself. Louis Vuitton Malletier S.A. v. Haute Diggity Dog, LLC, 507 F.3d 252 (4th Cir. 2007).
Satire
Satire is the use of humor, irony, exaggeration, or ridicule to expose and criticize people's stupidity or vices, particularly in the context of contemporary politics and other topical issues. For example, a deepfake video showing world leaders as kindergarteners arguing over toys to comment on international relations would be considered satire.
Satire is also related to parody but differs in that satire uses a creative work to criticize something else, while parody uses some elements of the original work to criticize or comment on that work itself. Both forms of expression generally receive First Amendment protection, though satire may receive slightly less protection than parody in copyright cases. This distinction was highlighted in Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569 (1994), where the Supreme Court recognized that parody has a stronger claim to fair use because it directly comments on the original work. Similarly, in Fisher v. Dees, 794 F.2d 432 (9th Cir. 1986), the court emphasized that parody, which comments on the original work, is more likely to be considered fair use than satire, which uses the original to comment on something else.
Political Speech
Although political speech doesn't receive absolute protection, it does receive the highest level of First Amendment protection. The U.S. Supreme Court clarified that political speech deals with matters of public concern and governmental affairs. Connick v. Myers, 461 U.S. 138 (1983). This becomes crucial when considering deepfakes in political contexts.
One of the central reasons political speech enjoys heightened protection is its role in ensuring an informed electorate and holding government officials accountable. Citizens United v. FEC, 558 U.S. 310 (2010), reaffirmed that restrictions on political speech, particularly those aimed at limiting its source or funding, are subject to strict scrutiny to prevent undue government interference in public discourse.
However, deepfakes complicate this protection by blurring the line between legitimate political expression and deceptive misinformation. When AI-generated media is used to falsely attribute statements or actions to candidates or public officials, it raises unique legal and ethical concerns—does such content constitute protected political speech as a satire or parody, or is it a form of fraud, defamation, or election interference? And if deepfakes are part of freedom of expression as a parody, do they undermine public trust in all media? Courts and lawmakers must grapple with this evolving challenge, balancing free expression with the integrity of democratic processes.
Recent Legal Developments
The legal response to deepfakes is evolving rapidly as states and federal lawmakers attempt to address their risks. Several states, including California, Alabama, and Arizona, have enacted legislation specifically targeting harmful deepfake applications. A number of these laws are specific to deepfakes used to sway political elections. For example, Arizona’s law prohibits the use of deepfakes of political candidates within 90 days before an election. California’s AB 2839 and Colorado’s House Bill 24-1147 mandate that deepfake content include clear disclosures indicating that the media has been artificially altered. These efforts reflect a growing recognition that deepfakes can be weaponized to deceive the public, necessitating stronger legal safeguards.
Looking Ahead
As deepfake technology becomes more sophisticated and accessible, the legal system must evolve to address new challenges while preserving constitutional protections for legitimate speech. This may require:
Development of more nuanced tests for distinguishing protected from unprotected deepfake content.
Implementation of technical solutions for authenticating and tracking the origin of synthetic media (see the sketch after this list).
Creation of expedited legal remedies for victims of malicious deepfakes.
Establishment of clear guidelines and penalties for platforms hosting or distributing deepfake content.
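As one hedged illustration of the second item above (authenticating and tracking the origin of media), the sketch below uses only the Python standard library to show the simplest form of authentication: the publisher of genuine footage releases a SHA-256 fingerprint through a trusted channel, and anyone can recompute the fingerprint of a circulating copy to see whether it is bit-identical. The file name and published hash are hypothetical; real provenance standards such as C2PA go further, binding cryptographically signed metadata to the media at capture or edit time.

```python
# Minimal sketch of content authentication by cryptographic fingerprint.
# Assumption: the original publisher has released the SHA-256 hash of the
# genuine file through a trusted channel (website, court filing, registry).
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def matches_published_hash(path: str, published_hash: str) -> bool:
    """True only if the local copy is bit-identical to the published original."""
    return sha256_of_file(path) == published_hash.lower()

if __name__ == "__main__":
    # Hypothetical values, for illustration only.
    PUBLISHED = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"
    print(matches_published_hash("bodycam_clip.mp4", PUBLISHED))
```

A mismatch does not say what was changed, only that the circulating copy is not the published original; that is often enough to trigger the closer forensic review discussed earlier.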
Conclusion
The emergence of deepfake technology is straining current legal frameworks. As we've explored throughout this article, deepfakes present unique challenges that test the boundaries of existing law while revealing gaps in our regulatory approach. While many deepfake harms can be addressed through traditional legal doctrines—from fraud and defamation to copyright and right of publicity—the unprecedented accessibility, scalability, and verisimilitude of this technology demands thoughtful reconsideration of how we balance competing interests.
The protection of legitimate expression, including parody, satire, and political commentary, must be carefully weighed against the profound threats that malicious deepfakes pose to individual dignity, public discourse, and democratic processes. As courts and legislators navigate this complex terrain, they must resist the temptation of knee-jerk regulation that might inadvertently interfere with protected speech. Instead, our legal system must develop nuanced frameworks that distinguish between creative expression and harmful manipulation, between innocent entertainment and intentional deception.
Looking forward, the most effective approach will likely combine legal innovation with technological solutions, platform accountability, and digital literacy. It will be easier to mark and verify authentic images and videos than to root out every fake. Watermarking and content provenance systems may help authenticate genuine media, while expedited legal remedies can provide swift recourse for victims. Perhaps most importantly, the legal profession must take the lead in shaping this discourse—not merely reacting to technological developments but proactively designing frameworks that protect fundamental rights while addressing novel harms. Only through such thoughtful evolution can our legal systems fulfill their essential purpose: safeguarding individual rights while preserving the shared foundations of truth upon which our democratic society depends.
However, see Jack Daniel’s Properties, Inc. v. VIP Products LLC, 599 U.S. 140, 143 S. Ct. 1578 (2023), challenging the Rogers Test.
The following article has significant relevance to the Lakes Region, which recently purchased tethered drones for surveillance purposes — for first responders and for aerial surveillance of crowd events … perhaps more.
Readers are encouraged to use their own Critical Thinking.
© 2025 Amy Swaner. All Rights Reserved. May use with attribution and link to article.
On May 19, 2025, a Washington Post investigation revealed that the New Orleans Police Department (NOPD) had been using a real-time facial recognition system in secret for over a year. Through a network of more than 200 surveillance cameras managed by the nonprofit Project NOLA, officers received live alerts identifying individuals suspected of crimes—many of them nonviolent. Washington Post, 2025. This practice violated a 2022 city ordinance (New Orleans City Code § 147-2 (2022)) that expressly limited the use of facial recognition to serious violent crimes and required written authorization and case-specific documentation before using the technology.
The New Orleans case is not an isolated lapse in policy compliance. It is a powerful example of what happens when advanced artificial intelligence tools outpace legal and ethical oversight. As AI continues to evolve—and law enforcement agencies increasingly turn to automation for efficiency and public safety—this incident underscores the urgent need for responsible governance at the intersection of law, AI, privacy, and public accountability.
This case represents a watershed moment in AI governance; a compelling warning of what happens when advanced surveillance technologies outpace legal and ethical guardrails. As law enforcement agencies nationwide embrace AI-powered tools like facial recognition, this incident lays bare how implementation without transparency or compliance can fundamentally undermine civil liberties and erode public trust. For legal professionals, cybersecurity experts, and Chief AI Officers alike, the message is clear: unchecked surveillance systems constitute a governance crisis demanding urgent attention at the intersection of law, privacy, and algorithmic accountability.
A Pattern of Deception: NOPD’s Covert Use of Facial Recognition Before 2022
The 2025 revelations about NOPD’s unlawful use of facial recognition are not an isolated breach—they are part of a documented pattern of deception and policy evasion that stretches back at least five years. As early as 2020, the New Orleans Police Department acknowledged that it was using facial recognition tools obtained through partnerships with state and federal agencies, including the FBI and Louisiana State Police. Yet this admission came only after years of deceptive public statements by city officials, who repeatedly assured the public that no such technology was in use.
On November 12, 2020, an investigation reported in The Lens, NOLA revealed that while the NOPD did not own facial recognition software, it had been quietly leveraging it through external partners, without disclosure, a policy in place, or oversight. This occurred for years, and under two separate mayoral administrations, according to The Lens. When the ACLU of Louisiana submitted a public records request that same year, the city responded that the city police department did “not employ facial recognition software,” a statement later exposed as intentionally misleading. Although the city of New Orleans did not own the technology, it was using it with the consent of other agencies. Further, NOPD spokespersons claimed that “employ” referred to ownership, not use, a distinction that critics, including the ACLU, rightly characterized as a deliberate attempt to deceive the public and evade scrutiny.
At the time, the NOPD had no records tracking the frequency, purpose, or outcomes of facial recognition use, no policy governing its deployment, and no audit mechanism in place. The Real Time Crime Center, a city-run surveillance hub, had a policy banning facial recognition, but that restriction explicitly did not apply to the NOPD, creating a loophole the department exploited. Even as the City Council debated banning surveillance tools in 2020, high-ranking officials, including the City’s Chief Technology Officer, denied on the record that the city had access to or used facial recognition—statements that were promptly contradicted by NOPD’s internal disclosures.
SOURCE: The Lens, NOLA

Undermining Civil Liberties and Public Trust
This history of misuse and concealed operation underscores a critical point: the 2025 incident is not merely the result of lax compliance but reflects a longstanding culture of institutional deception. The NOPD’s pattern of circumventing public accountability, sidestepping oversight, and misleading both the City Council and the public reveals systemic governance failures that continue to undermine legal and democratic norms.
Further, NOPD’s covert use of facial recognition capabilities is a cautionary tale of how AI technologies, when implemented without transparency, legal compliance, or ethical safeguards, can undermine civil liberties and public trust. As lawyers, legal professionals, and CAIOs, we must recognize that unchecked surveillance systems are not just a technical issue—they are an AI governance crisis. This article examines the legal violations, constitutional concerns, and cybersecurity risks associated with this unauthorized use of AI and provides best practices for how organizations can responsibly deploy AI within the framework of the rule of law.
Legal Violations
1. Municipal Authority and Ordinance Violations
New Orleans City Code § 147-2 (2022) limits the use of facial recognition technology to investigations involving specific violent crimes—namely murder, rape, terrorism, and kidnapping—and mandates a written request from an investigating officer, probable cause documentation, supervisory approval, and case-specific justification.
The NOPD’s deployment of a live, automated alert system without any record of written requests or internal review arguably violated both the letter and spirit of the ordinance. Such behavior may constitute ultra vires agency action, opening the city to liability under state administrative law doctrines or to injunctive challenges by affected individuals or civil rights organizations. It also raises broader administrative law questions regarding local agency autonomy and the enforceability of municipal guardrails on emerging technologies.
2. Fourth Amendment Concerns: Warrantless, Persistent Surveillance
In Carpenter v. United States, 138 S. Ct. 2206 (2018), the Supreme Court held that the government’s warrantless collection of historical cell-site location information constituted a search under the Fourth Amendment, emphasizing the intrusive potential of continuous digital surveillance. The reasoning in Carpenter has since been extended to analogous technologies capable of tracking or identifying individuals in public. See, e.g., Leaders of a Beautiful Struggle v. Baltimore Police Dep’t, 2 F.4th 330, 342–43 (4th Cir. 2021) (en banc) (striking down aerial surveillance system for enabling persistent surveillance).
Further, in United States v. Jones, 565 U.S. 400 (2012), Justice Sotomayor’s concurrence warned that pervasive surveillance could erode constitutional protections, suggesting the Court may require a “mosaic theory” approach to assessing searches involving modern technologies. Id. at 416 (Sotomayor, J., concurring).
Real-time facial recognition likely implicates these same concerns, as it enables undisclosed, suspicionless searches of individuals’ faces in public—without a warrant, individualized suspicion, or clear limitation in scope.
Courts have not yet definitively ruled on real-time facial recognition, but growing legal scholarship and advocacy point toward its classification as a search requiring heightened justification, especially where the technology is deployed continuously or without limitation. See Andrew Guthrie Ferguson, The Fourth Amendment and Facial Recognition: Protecting Privacy in Public, 105 Minn. L. Rev. 1105 (2021).
3. Section 1983 and Equal Protection Risks
The NOPD’s actions may also give rise to claims under 42 U.S.C. § 1983, which authorizes civil suits against state actors who deprive individuals of constitutional rights. Plaintiffs alleging unlawful arrest or surveillance based on misidentification by facial recognition could argue violations of their Fourth and Fourteenth Amendment rights. If these harms flowed from a policy or custom, municipal liability may attach under Monell v. Department of Social Services, 436 U.S. 658, 694 (1978).
Further, the racial and gender disparities associated with facial recognition software are well-documented. A comprehensive study by the National Institute of Standards and Technology found that the majority of facial recognition algorithms exhibit false positive rates up to 100 times higher for Black and Asian faces compared to white faces. See NIST Interagency Report 8280, Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects (Dec. 2019), available here. LINK
If plaintiffs can show that these disparities led to discriminatory policing outcomes or surveillance patterns, they may have a plausible Equal Protection Clause claim, particularly if the system was used disproportionately in communities of color. While proof of discriminatory intent remains challenging, disparate impact coupled with deliberate indifference to known risks may suffice under Village of Arlington Heights v. Metro. Hous. Dev. Corp., 429 U.S. 252, 265–66 (1977).
Cybersecurity and Governance Risks
The use of facial recognition technology in the New Orleans case not only raises legal red flags but also reveals serious deficiencies in cybersecurity and AI governance—two critical dimensions often overlooked in public-sector use of AI. Real-time biometric surveillance systems, like the one operated in partnership with Project NOLA, transmit highly sensitive data, such as facial vectors and GPS coordinates, across networks that may lack hardened security protocols or clear data retention policies. Without robust encryption, access controls, and audit logging, these systems are vulnerable to interception, misuse, or compromise by malicious actors.
Moreover, the decision to integrate AI-driven alerts into officers’ personal or department-issued devices introduces a new vector of cybersecurity risk. By pushing live identification data directly to individual law enforcement units without centralized logging or oversight, the NOPD created what is effectively a shadow AI system—a deployment outside formal governance and compliance frameworks. Such architectures often evade standard risk assessments and incident response protocols, creating gaps that adversaries or internal actors could exploit.
From a governance standpoint, this case illustrates a broader institutional failure: the absence of AI lifecycle management. I was unable to find evidence to indicate that the NOPD conducted a data protection impact assessment, tested for algorithmic bias, or established redress mechanisms for individuals wrongly flagged. These omissions run counter to emerging best practices and frameworks like the NIST AI Risk Management Framework (2023), which emphasizes continuous monitoring, context-sensitive deployment, and public transparency. In the absence of such controls, even well-intentioned uses of AI can lead to rights violations, mission drift, and reputational harm. This is particularly true in law enforcement, where stakes are high and public trust tends to be quite fragile.
What This Means for Legal Professionals and CAIOs
The unauthorized use of facial recognition by the New Orleans Police Department is a powerful reminder that legal, monitoring, and enforcement frameworks often lag behind technological capabilities. But this lag does not excuse institutions from their obligation to uphold constitutional rights, ensure transparency, and manage emerging technologies responsibly.
For law enforcement agencies, this means recognizing that the public’s trust in AI-enhanced policing depends not just on outcomes, but on process, oversight, and accountability. For lawyers and consultants, it means ensuring that AI tools are deployed within clear legal boundaries and that clients have governance structures robust enough to weather scrutiny from courts, regulators, and the communities they serve.
Safeguarding civil liberties in the age of AI will require enforceable policies, cross-disciplinary collaboration, and the courage to pause when compliance is uncertain. In the emerging AI age, the pattern is too often one of “deploy first, determine the risks later.” There is a premium on time, but also on taking the steps necessary to put quality AI governance into place. Perhaps the peril has never been greater. The public is watching. Bad actors are watching. And the next misstep could result not just in litigation, but in a fundamental erosion of trust that may take years to rebuild, affecting law enforcement, government systems, and the broader adoption of beneficial AI technologies.
To support legal professionals and AI leaders in turning these principles into action, we offer the following Best Practices Checklist, grounded in established legal standards, ethical frameworks, and emerging risk management guidance.
Best Practices for Lawyers, Legal Professionals and Chief AI Officers
1. Conduct Pre-Deployment Legal and Risk Assessments
Identify legal risks under constitutional, statutory, and municipal laws.
Analyze privacy and equity risks through data protection impact assessments (DPIAs).
Ensure AI tools are reviewed by legal counsel before operational use.
2. Create and Enforce Clear AI Policies
Define approved use cases, prohibited functions, and procedural requirements.
Require documented justification and supervisory sign-off for high-risk applications.
Emphasize human-in-the-loop decision-making as the default.
3. Implement Strong Cybersecurity and Audit Protocols
Require data encryption, immutable logs, and granular access controls.
Maintain centralized audit trails of AI use, queries, and decision outcomes (a minimal hash-chained logging sketch follows this list).
Include AI risk in broader cybersecurity governance programs.
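To illustrate the “immutable logs” and “centralized audit trails” items in the checklist above, here is a minimal sketch in Python (standard library only; the field names and log file path are illustrative assumptions, not a mandated schema) of a hash-chained audit log: each record stores the hash of the previous record, so any after-the-fact edit or deletion breaks the chain and is detectable on review.

```python
# Minimal sketch of a tamper-evident, hash-chained audit log for AI queries.
# Field names and the JSONL file path are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "ai_audit_log.jsonl"

def _entry_hash(entry: dict) -> str:
    """Deterministic SHA-256 over a record's fields (excluding its own hash)."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_audit_entry(actor: str, tool: str, purpose: str, case_id: str) -> dict:
    """Append one audit record whose hash chains to the previous record."""
    try:
        with open(LOG_PATH) as f:
            prev_hash = json.loads(f.readlines()[-1])["hash"]
    except (FileNotFoundError, IndexError):
        prev_hash = "GENESIS"  # first record in a new log
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor, "tool": tool, "purpose": purpose,
        "case_id": case_id, "prev_hash": prev_hash,
    }
    record["hash"] = _entry_hash(record)
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

def verify_chain() -> bool:
    """Recompute every hash; tampering with any past entry breaks the chain."""
    prev = "GENESIS"
    with open(LOG_PATH) as f:
        for line in f:
            rec = json.loads(line)
            claimed = rec.pop("hash")
            if rec["prev_hash"] != prev or _entry_hash(rec) != claimed:
                return False
            prev = claimed
    return True
```

A production system would also replicate the log off-device and restrict write access, but the chaining idea shown here is the core of tamper evidence.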
4. Ensure Transparency and Individual Redress
Publicly disclose AI tools in use, their purposes, and applicable safeguards.
Offer grievance procedures for individuals affected by erroneous or unfair decisions.
Provide meaningful human review and the ability to contest automated outcomes.
5. Govern Through Contracts and Procurement
Include AI-specific risk provisions in vendor contracts (e.g., indemnity, audit rights).
Demand disclosure of training data provenance and model performance metrics.
Require vendors to certify compliance with applicable laws and internal policies.