When the California Privacy Protection Agency (“CalPrivacy”) announced a $1.35 million settlement in September 2025 – the largest CCPA penalty to date – one of the itemized grievances stood out for any practitioner who has wrestled with a vendor redline: the company had failed to amend or enter into third-party data protection vendor contracts by regulatory deadlines.

This hints at where state privacy enforcement is heading. The consumer-facing side of privacy compliance – notices, opt-out links, cookie banners – is visible and testable. But the back-end architecture of a compliant privacy program lives at least in part in vendor contracts, and regulators increasingly treat those contracts as evidence of program maturity (or its absence). Nowhere is this more concrete than in California’s 11 CCR § 7051.

Continue Reading The Paper Trail: State Privacy Law Contracting Requirements

The lesson from the PocketOS database deletion is not that agentic AI is dangerous. It's that governance and controls were missing.

You have probably seen some version of the headline by now: “AI Agent Deletes Company’s Entire Database in 9 Seconds.” It is a compelling story. But the headline, while technically accurate, obscures the far more important lesson buried in the details.

So what actually happened? PocketOS, a small SaaS company that makes software for car rental businesses, was using a popular AI-powered code editor running on Anthropic's Claude Opus 4.6 model. The AI agent was tasked with resolving a routine issue in a staging environment. When it hit a credential mismatch, the agent decided on its own initiative to "fix" the problem by deleting a volume on Railway, the company's cloud hosting provider. The agent found a password in an unrelated file and used it to execute a deletion command. Because of the permissions made available to the agent and the way infrastructure access was configured, that single command, executed with a password valid across all systems, wiped both the production database and all associated backups.

The agent, when asked to explain itself, produced what multiple outlets described as a “confession,” acknowledging it had violated its own safety instructions. The story has gone viral. The framing in most coverage puts the AI squarely at the center of the narrative: the agent “went rogue,” it “confessed,” it acted autonomously and destroyed a business. But the reports are not entirely accurate and usually miss the point.
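The missing guardrail here is a familiar one: destructive operations should require explicit human approval before an agent can execute them. A minimal sketch of such a gate is below; the patterns, function names, and return values are illustrative assumptions for this post, not drawn from the actual tooling involved in the incident.

```python
import re

# Patterns for destructive operations an agent should never run unattended.
# Illustrative only -- a production denylist would be broader and policy-driven.
DESTRUCTIVE_PATTERNS = [
    re.compile(r"\bdrop\s+(database|table)\b", re.IGNORECASE),
    re.compile(r"\bdelete\s+volume\b", re.IGNORECASE),
    re.compile(r"\brm\s+-rf\b"),
]

def requires_human_approval(command: str) -> bool:
    """Return True if the command matches a destructive pattern."""
    return any(p.search(command) for p in DESTRUCTIVE_PATTERNS)

def gated_execute(command: str, approved: bool = False) -> str:
    """Run a command only if it is non-destructive or explicitly approved."""
    if requires_human_approval(command) and not approved:
        raise PermissionError(f"Blocked without human approval: {command!r}")
    # A real implementation would dispatch to the runtime here (omitted).
    return f"executed: {command}"
```

The gate is deliberately dumb: it does not trust the agent's own judgment about safety, which is precisely the control that was absent when the agent decided a deletion was the right "fix."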

Continue Reading The AI Didn’t Go Rogue. Guardrails Were Never There.

As another piece of harmonization legislation, the AI Act is unsurprisingly reminiscent of the GDPR in regulatory philosophy. Many of the same data principles (transparency, accuracy, security) are present, as is an explicit risk-based approach. Understanding precisely where there is overlap with your existing GDPR program is a head start in designing your AI Act compliance program. But it is also important to recognize where the two frameworks diverge. The GDPR regulates what happens to personal data: the legal basis for collection, how it is used, how long it is kept, who can access it. The AI Act generally regulates the AI system itself – namely, how it is designed, tested, documented, governed, and deployed. That difference in regulatory object creates structural differences in inputs and outputs, but the frameworks still share a great deal.

This post suggests a strategy for efficiently building a unified compliance framework for both regimes.

Continue Reading One Compliance Program for Two Frameworks: Aligning the EU AI Act and GDPR for Efficiency

Episode 14 is now live. In this episode of Consumer Counterpoint, we sit down with Chicago partner Jay Carle to discuss the launch of Seyfarth’s new D.A.T.A. Law practice group. Jay shares insights into the group’s multidisciplinary approach and how it’s designed to help clients stay ahead of emerging data and technology challenges.

Watch Episode

Over the past decade, a vibrant defense‑innovation ecosystem has emerged across the U.S. and Europe, powered by venture‑backed defense tech startups, dual‑use technology companies, and commercial‑first innovators entering national‑security markets. As these companies begin collaborating with defense agencies, they encounter compliance obligations for handling sensitive government information. For those seeking to enter the U.S. national security innovation sector, the center of attention remains safeguarding Controlled Unclassified Information (CUI).

While the recently codified Cybersecurity Maturity Model Certification (CMMC) addresses more than CUI, its principal aim is to remediate inconsistent implementation of the NIST SP 800-171 controls required to safeguard CUI under the Defense Federal Acquisition Regulation Supplement (DFARS). Whether or not a company sees itself as a "defense contractor," understanding CUI and CMMC is rapidly becoming essential for participating in this expanding global ecosystem.

Against that backdrop, this post outlines CUI’s role within CMMC, identifies the primary sources of the underlying safeguarding obligations, and explains how CMMC operationalizes verification of those requirements, especially at Level 2.

Continue Reading Safeguarding Sensitive Government Information: Why the Cybersecurity Maturity Model Certification (CMMC) Matters for the Global Defense Innovation Ecosystem

Introduction

Robotics and artificial intelligence are converging at an unprecedented pace. As robotics systems increasingly integrate AI-driven decision-making, businesses are unlocking new efficiencies and capabilities across industries from manufacturing and logistics to healthcare and real estate.

Yet this convergence introduces complex legal and regulatory challenges. Companies deploying AI-enabled robotics must navigate issues related to data privacy, intellectual property, workplace safety, liability, and compliance with emerging AI governance frameworks.

The Shift: Robotics as an AI Subset

Traditionally, robotics was viewed as a standalone discipline focused on mechanical automation. Today, robotics is increasingly powered by machine learning algorithms, natural language processing, and predictive analytics—hallmarks of AI technology.

This evolution raises critical questions for legal teams:

  • Who owns the data generated by AI-enabled robots?
  • How do we allocate liability when autonomous systems make decisions without human intervention?
  • What contractual safeguards should be in place when outsourcing robotics solutions to third-party vendors?

As robotics increasingly incorporates AI functionality, traditional contract structures for hardware procurement and service agreements require significant updates. This evolution introduces new risk categories that must be addressed through precise drafting and negotiation.

Continue Reading The AI-Driven Evolution of Robotics

On Friday, October 17, 2025, U.S. District Court Judge Vince Chhabria issued a biting Order granting defendant Eating Recovery Center, LLC's ("ERC") motion for summary judgment on plaintiff Jane Doe's claims under the California Invasion of Privacy Act (CIPA), a law enacted in 1967 to address the increasing use of wiretapping to eavesdrop on private phone calls.

On July 24, 2025, the California Privacy Protection Agency ("CPPA") unanimously voted to adopt a package of Proposed Regulations for the California Consumer Privacy Act ("CCPA"), marking a significant development in California privacy law. These cover automated decision-making technology ("ADMT"), mandatory cybersecurity audits, risk assessments, and clarifications regarding the CCPA's applicability to insurance companies. The package will move into its final review stage before formal enactment, once filed with the California Office of Administrative Law.

CCPA Steering Toward Operational Compliance

This is a clear signal that privacy compliance expectations in California are trending toward a more operational phase. The new rules are designed to give Californians greater control over how their personal information is used while pushing businesses toward higher levels of transparency and accountability, especially when automated decision-making and high-risk data processing are involved. For companies, this is more than just a theoretical update – it's a clarion call to ensure these requirements are built into day-to-day governance, technology and process design, and vendor management practices.

Continue Reading California Privacy Protection Agency (CPPA) Finally Voted to Adopt Much Debated Update to CCPA Regulations: What Your Business Should Know

The UK’s Data (Use and Access) Act received Royal Assent last Thursday, June 19th, bringing into law some significant changes to the country’s post-Brexit data protection framework, among an array of other, related rules (on matters ranging from financial conduct to smart meters and “underground assets,” which is more to do with

On June 3, 2025, the California Senate unanimously passed Senate Bill 690 (SB 690), a bill that seeks to add a “commercial business purposes” exception to the California Invasion of Privacy Act (CIPA).

After multiple readings on the Senate floor, SB 690 passed as amended, and will now proceed to the California State Assembly. SB