On Tuesday, June 13 at 1:00 p.m. Eastern, Seyfarth attorneys Kristine Argentine, John Tomaszewski, and Paul Yovanic will present at the Association of National Advertisers webinar,  “Emerging Issues Surrounding Privacy Class Actions and Compliance in 2023.”

The webinar will address the recent surge in consumer class actions, compliance considerations, and recent developments in the law related to privacy claims, including the TCPA and state mini-TCPAs, the Video Privacy Protection Act, data breach claims, biometric privacy, and claims related to the collection of data through website analytics tools such as Google Analytics, chat functions, pixels, and cookies. 

For more information and to register, click here.


You may have recently seen press reports about lawyers who filed papers with the federal district court for the Southern District of New York that included citations to cases and decisions that, as it turned out, were wholly made up; they did not exist.  The lawyers in that case used the generative artificial intelligence (AI) program ChatGPT to perform their legal research for the court submission, but did not realize that ChatGPT had fabricated the citations and decisions.  This case should serve as a cautionary tale for individuals seeking to use AI in connection with legal research, legal questions, or other legal issues, even outside of the litigation context.

In Mata v. Avianca, Inc.,[1] the plaintiff brought tort claims against an airline for injuries allegedly sustained when one of its employees hit him with a metal serving cart.  The airline filed a motion to dismiss the case. The plaintiff’s lawyer filed an opposition to that motion that included citations to several purported court decisions in its argument. On reply, the airline asserted that a number of the court decisions cited by the plaintiff’s attorney could not be found, and appeared not to exist, while two others were cited incorrectly and, more importantly, did not say what plaintiff’s counsel claimed. The Court directed plaintiff’s counsel to submit an affidavit attaching the problematic decisions identified by the airline.

Plaintiff’s lawyer filed the affidavit as directed, stating that he could not locate one of the decisions but claiming to attach the others, with the caveat that certain of the decisions “may not be inclusive of the entire opinions but only what is made available by online database [sic].”[2]  Many of the decisions annexed to this affidavit, however, were not in the format of decisions that are published by courts on their dockets or by legal research databases such as Westlaw and LexisNexis.[3]

In response, the Court stated that “[s]ix of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations”[4], using a non-existent decision purportedly from the Eleventh Circuit Court of Appeals as a demonstrative example.  The Court stated that it contacted the Clerk of the Eleventh Circuit and was told that “there has been no such case before the Eleventh Circuit” and that the docket number shown in the plaintiff’s submission was for a different case.[5] The Court noted that “five [other] decisions submitted by plaintiff’s counsel . . . appear to be fake as well.” The Court scheduled a hearing for June 8, 2023, and demanded that plaintiff’s counsel show cause as to why he should not be sanctioned for citing “fake” cases.[6]

At that point, plaintiff’s counsel revealed what happened.[7] The lawyer who had originally submitted the papers citing the non-existent cases filed an affidavit stating that another lawyer at his firm was the one who handled the research, which the first lawyer “had no reason to doubt.” The second lawyer, who conducted the research, also submitted an affidavit in which he explained that he performed legal research using ChatGPT. The second lawyer explained that ChatGPT “provided its legal source and assured the reliability of its content.” He explained that he had never used ChatGPT for legal research before and “was unaware of the possibility that its content could be false.” The second lawyer noted that the fault was his, rather than that of the first lawyer, and that he “had no intent to deceive this Court or the defendant.” The second lawyer annexed screenshots of his chats with ChatGPT, in which the second lawyer asked whether the cases cited were real. ChatGPT responded “[y]es,” one of the cases “is a real case,” and provided the case citation. ChatGPT even reported in the screenshots that the cases could be found on Westlaw and LexisNexis.[8]

This incident provides a number of important lessons. Some are age-old lessons about double-checking your work and the work of others, and owning up to mistakes immediately. There are also a number of lessons specific to AI, however, that are applicable to lawyers and non-lawyers alike.

This case demonstrates that although ChatGPT and similar programs can provide fluent responses that appear legitimate, the information they provide can be inaccurate or wholly fabricated. In this case, the AI software made up non-existent court decisions, even using the correct case citation format and stating that the cases could be found in commercial legal research databases. Similar issues can arise in non-litigation contexts as well.  For example, a transactional lawyer drafting a contract, or a trusts and estates lawyer drafting a will, could ask AI software for common, court-approved contract or will language that, in fact, has never been used and has never been upheld by any court. A real estate lawyer could attempt to use AI software to identify the appropriate title insurance endorsements available in a particular state, only to receive a list of inapplicable or non-existent endorsements. Non-lawyers hoping to set up a limited liability company or similar business structure without hiring a lawyer could find themselves led astray by AI software as to the steps involved or the forms needed to be completed and/or filed. The list goes on and on.

The case also underscores the need to take care in how questions to AI software are phrased. Here, one of the questions asked by the lawyer was simply “Are the other cases you provided fake?”[9] Asking questions with greater specificity could provide users with the tools needed to double-check the information from other sources, but even the most artful prompt cannot change the fact that the AI’s response may be inaccurate. That said, there are also many potential benefits to using AI in connection with legal work, if used correctly and cautiously. Among other things, AI can assist in sifting through voluminous data and drafting portions of legal documents.  But human supervision and review remain critical.

ChatGPT frequently warns users who ask legal questions that they should consult a lawyer, and it does so for good reason. AI software is a powerful and potentially revolutionary tool, but it has not yet reached the point where it can be relied upon for legal questions, whether in litigation, transactional work, or other legal contexts. Individuals who use AI software, whether lawyers or non-lawyers, should use the software understanding its limitations and realizing that they cannot rely solely on the AI software’s output.  Any output generated by AI software should be double-checked and verified through independent sources. When used correctly, however, it has the potential to assist lawyers and non-lawyers alike.


[1] Case No. 22-cv-1461 (S.D.N.Y.).

[2] Id. at Dkt. No. 29. 

[3] Id.

[4] Id. at Dkt. No. 31. 

[5] Id.

[6] Id.

[7] Id. at Dkt. No. 32.

[8] Id.

[9] Id.

Tennessee and Montana are now set to be the next two states with “omnibus” privacy legislation. “Omnibus” privacy legislation regulates personal information as a broad category, as opposed to data collected by a particular regulated business or collected for a specific purpose, such as health information, financial information, or payment card information. As far as omnibus laws go, Tennessee and Montana are two additional data points informing the trend we are seeing at the state level regarding privacy and data protection. Fortunately (or unfortunately, depending on your point of view), these two states have taken the model initiated by Virginia and Colorado instead of following the California model.

Is there Really Anything New?

While these two new laws may seem to be “more of the same,” the Tennessee law contains some interesting new approaches to the regulation of privacy and data protection. Although we see the usual set of privacy obligations (notice requirements, rights of access and deletion, restrictions around targeted advertising and online behavioral advertising, et cetera) in both the Tennessee and Montana laws, Tennessee has taken the unusual step of building into its law, the Tennessee Information Protection Act (“TIPA”), specific guidance on how to actually develop and deploy a privacy program.


The My Health My Data Act (“Act”) was approved by the Washington State House on April 17, 2023. The Act is now with Governor Jay Inslee for signature and is expected to be signed into law in its current form, which is broad enough that anyone with any activity in Washington should consider its scope and implications for operations. Because the Act will be enforceable through a private right of action, it has the potential to create substantial legal exposure for violations.

The Act creates new and unique consumer rights and obligations for businesses relating to the collection, sharing, and use of “Consumer Health Data” (“CHD”). It expressly aims to “close the gap between consumer knowledge and industry practice” by expanding obligations related to the processing of CHD to entities not covered by HIPAA. However, it is significantly broader in potential scope, in part due to the expansive definition of CHD (which expressly includes data that identifies past, present, or future physical or mental health status, for example, “bodily functions” and “precise location information that could reasonably indicate an attempt to receive health services or supplies”). The Act will impact a range of businesses, including advertisers, mobile app providers such as health and wellness trackers, wearable device manufacturers and, of course, healthcare and wellness industry companies and their data processors handling non-HIPAA-regulated CHD. Notably, the Act expressly addresses abortion/reproductive health services and gender-affirming care services, including by making it unlawful for any person to use a “geofence” (or virtual boundary) around a facility that provides health care services for the purposes of identifying or tracking consumers seeking such services; collecting CHD from consumers; or sending them notifications, messages, or advertisements related to their CHD or health care services. This restriction applies regardless of consent or opt-in.

Many of the Act’s definitions appear to be significantly broader than definitions within other privacy laws, meaning the Act might apply to companies that do not currently consider themselves to be collecting or processing health data (for example, a cosmetics retailer where one completes a “skin analysis” and purchases foundation for “acne-prone” skin).

The Act specifies effective dates on a provision-by-provision basis throughout. Most sections of the Act should come into effect on March 31, 2024, and three months later on June 30, 2024, for small businesses. The legislature did not include an effective date in the provision that prohibits geofencing, which could cause the prohibition to be effective as early as July 22, 2023, because under Washington law,​ bills signed into law take effect 90 days after the end of the session in which they were passed, unless they specify otherwise.

Who gets the new rights and protections under the Act?

The Act protects only “consumers” acting in an individual or household context and who are either Washington residents or natural persons whose CHD is collected in Washington state, regardless of their residency or location. This could have significant implications for companies physically situated in Washington but processing data of individuals located elsewhere.

Notably, in contrast to California’s amended CCPA, this Act expressly excludes in the definition of “consumer” an individual acting in an employment context; however, it is not clear whether this means relief only for the employer or others (including benefits providers) and whether all processing by such entities would be “in the employment context.”

Who must comply with the Act’s requirements?

The Act’s obligations apply to a “regulated entity,” defined as any legal entity that: (1) conducts business in Washington or produces or provides products or services that are “targeted” to consumers in Washington and (2) alone or jointly with others, determines the purpose and means of collecting, processing, sharing, or selling CHD. The Act does not apply to government agencies, tribal nations, or contracted service providers processing CHD on behalf of a government agency. However, a regulated entity does not have to be a for-profit entity. The Act also defines the term “small business” as another type of entity subject to the Act. The term “small business” is essentially subsumed in the term “regulated entity,” and all obligations under the Act generally also apply to small businesses, but with a short delay to the effective date for certain provisions. Whether an entity is a “small business” is determined by certain data processing volume thresholds.

For the purposes of this article, we refer only to “regulated entities.”

What data is CHD?

“CHD” under the Act is personal information that is linked or reasonably linkable to a consumer and that identifies the consumer’s past, present, or future physical or mental health status, and specifically includes:

  • Individual health conditions, treatment, diseases, or diagnoses;
  • Social, psychological, behavioral, and medical interventions;
  • Health-related surgeries or procedures;
  • Use or purchase of prescribed medication;
  • Bodily functions, vital signs, symptoms, or measurements of the information expressly identified in the definition of CHD;
  • Diagnoses or diagnostic testing, treatment, or medication;
  • Gender-affirming care information (as defined by the Act);
  • Reproductive or sexual health information (as defined by the Act);
  • Biometric data (as defined by the Act);
  • Genetic data (as defined by the Act);
  • Precise location information (as defined by the Act) that could reasonably indicate a consumer’s attempt to acquire or receive health services or supplies;
  • Data that identifies a consumer seeking “health care services,” which is defined broadly as any service provided to a person to assess, measure, improve, or learn about a person’s mental or physical health; and
  • Any information that a regulated entity, or its respective processor, processes to associate or identify a consumer with the data described above that is derived or extrapolated from non-health information (such as proxy, derivative, inferred, or emergent data by any means, including algorithms or machine learning).

There are several data category exemptions. For example, the Act will not apply to: (A) Protected Health Information (PHI) governed by HIPAA, information intermingled with PHI maintained by HIPAA-regulated entities, and health records governed by or created pursuant to other healthcare-related state and federal laws; (B) Data regulated by the Gramm-Leach-Bliley Act, Fair Credit Reporting Act, Administrative Simplification provisions of the Social Security Act, Family Educational Rights and Privacy Act, statutes and regulations applicable to the Washington Health Benefit Exchange, and certain privacy rules adopted by the Washington Office of the Insurance Commissioner; or (C) Deidentified data (data that cannot reasonably be used to infer information about, or otherwise be linked to, an identified or identifiable consumer, or a device linked to such a consumer, so long as certain requirements are met).

The obligations imposed by the Act do not restrict the collection, use, or disclosure of CHD to prevent, detect, protect against, or respond to and prosecute security incidents, theft, fraud, harassment, malicious or deceptive activities, or any activity that is illegal under Washington or federal law; to preserve the integrity or security of systems; or to investigate, report, or prosecute those responsible for any such illegal action. This is important for businesses to consider, as many data processing activities could potentially fall into this category.

What obligations are imposed by the Act?

Regulated entities (including small businesses):

  • must post a “Consumer Health Data Privacy Policy” and a “prominent” link to it on the regulated entity’s homepage. This notice must contain certain clear and conspicuous disclosures, including the categories of data collected and the purposes for collection, the sources of the data, the categories shared, a list of the categories of third parties and specific affiliates with whom it is shared, and how to exercise one’s rights under the Act.
  • may not collect any CHD except with affirmative consent for a specified purpose; or to the extent necessary to provide a product or service requested by the consumer. Under the Act the term “collecting” includes “buying, renting, accessing, retaining, receiving, acquiring, inferring, deriving, or otherwise processing CHD in any manner.”
  • may not share CHD except with affirmative consent that is “separate and distinct” from the consent to collect; or to the extent necessary to provide a product or service requested by the consumer. Importantly, the definition of “share” includes disclosures to affiliates (something that could create significant operating hurdles for group companies if they cannot squarely fit their internal sharing within the exceptions above). However, “sharing” excludes disclosure to (1) a processor in order to provide goods or services in a manner consistent with the purpose for collection disclosed to the consumer; (2) a third party with whom the consumer has a direct relationship, if certain conditions are satisfied. It also excludes disclosures of data as an asset in the M&A context, if the recipient complies with the Act.

Valid consent under this Act requires “a clear affirmative act that signifies a consumer’s freely given, specific, informed, opt-in, voluntary, and unambiguous agreement.” A consumer’s acceptance of a website’s general terms of use or privacy policy, or hovering over, muting, pausing, or closing a given piece of content, may not constitute valid consent.

  • Regulated entities must restrict access to CHD by employees, processors, and contractors to that which is necessary to provide the consumer-requested product or service or for the purposes for which the consumer provided consent.
  •  Regulated entities must establish and maintain administrative, technical, and physical data security practices satisfying a reasonable industry standard to protect CHD appropriate for the volume and nature of the data.

The Act makes it unlawful for any person to implement a “geofence” around an entity that provides in-person health care services where such geofence is used to: (1) identify or track consumers seeking health care services, (2) collect CHD, or (3) send notifications, messages, or advertisements to consumers related to their CHD or health care services. Geofence is defined as “technology that uses global positioning coordinates, cell tower connectivity, cellular data, radio frequency identification, Wi-Fi data, and/or any other form of location detection to establish a virtual boundary around a specific physical location. For purposes of this definition, ‘geofence’ means a virtual boundary that is 2,000 feet or less from the perimeter of the physical location.” Because of the broad definition of “CHD,” which covers an expansive scope of personal data, and “health care services,” which includes any services “to assess, measure, improve, or learn about a person’s mental or physical health” (e.g., a bookstore could arguably fall into the definition by offering a service that a person can use to “learn about” or “improve” their mental or physical health), the prohibition on geofencing could apply to a broad range of businesses and business activities. For example, a fitness club’s app that checks you in when entering the club could be seen as violating this prohibition. More broadly, a retailer that uses geofencing to push coupons or ads to consumers who visit supermarkets, which often have a pharmacy inside, could be seen as violating this geofencing prohibition.
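To make the “virtual boundary” concept concrete, below is a minimal, hypothetical sketch of the kind of radius check a location-aware app might perform. The 2,000-foot figure comes from the Act’s definition; everything else (the coordinates, function names, and the single-point radius model) is an illustrative assumption and is far simpler than the GPS, cell tower, Wi-Fi, and RFID techniques the statutory definition actually covers.

```python
# Illustrative sketch only: a simplified geofence membership check.
# The 2,000-foot boundary comes from the Act; the coordinates and function
# names below are hypothetical and do not reflect any real implementation.
import math

EARTH_RADIUS_M = 6_371_000          # mean Earth radius in meters
GEOFENCE_LIMIT_M = 2_000 * 0.3048   # 2,000 feet expressed in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def inside_geofence(device_lat, device_lon, facility_lat, facility_lon,
                    radius_m=GEOFENCE_LIMIT_M):
    """True if the device is within the virtual boundary around the facility."""
    return haversine_m(device_lat, device_lon, facility_lat, facility_lon) <= radius_m

# Hypothetical usage: two points roughly a city block apart.
print(inside_geofence(47.6097, -122.3331, 47.6105, -122.3340))
```

An app that triggers check-ins, notifications, or targeted ads when this kind of boundary test returns true around a facility providing health care services is the sort of functionality the Act’s geofencing prohibition is aimed at.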

Any person must obtain a consumer’s separate authorization to sell or offer to sell specific CHD, which may not be a condition on the provision of goods or services. This must be done by providing the consumer with specific plain language disclosures, including the purpose of the sale and the buyer’s name and contact information. Authorization is only valid for one year and may be revoked sooner. A copy of the signed authorization must be provided to the consumer, and both the seller and the buyer of the data must retain a copy of the authorization for six years.

Specific requirements apply to use of processors.

Consumer rights

Consumers have a number of privacy rights under the Act, including the right to:

  • confirm whether a regulated entity is collecting, sharing, or selling the consumer’s health data;
  • access CHD, including a list of all third parties and affiliates with whom the regulated entity has shared or to whom it has sold the CHD and an active email address or other online mechanism to contact such parties;
  • withdraw consent from the regulated entity’s collection and sharing of CHD;
  • delete CHD concerning the consumer; and
  • appeal a regulated entity’s refusal to take action on a request.

The Act’s deletion right is apparently nearly unfettered. A regulated entity must delete a consumer’s health data from its records, including from all parts of its network and from archived or backup systems, and will not be able to decline or delay deletion requests based on the common exceptions found in other data privacy laws, including the CCPA. Regulated entities will have just 45 days to comply with a consumer’s request. The major exception is the inability to authenticate the request using commercially reasonable efforts.

The Act includes a prohibition against discrimination in relation to exercising the consumer rights.

AG enforcement and private right of action

Violations of the Act are “an unfair or deceptive act in trade or commerce and an unfair method of competition” under Washington’s Consumer Protection Act. The Act is enforceable both by the Attorney General’s office and through a full private right of action for aggrieved consumers.

Under the Washington Consumer Protection Act, the Washington Attorney General may bring an action on behalf of the people of the state to restrain and prevent prohibited or unlawful acts (RCW 19.86.080(1)), and any person injured by deceptive acts or practices may bring a civil action to: (1) enjoin further deceptive acts or practices; (2) recover the actual damages sustained; and (3) recover reasonable attorneys’ fees and costs (RCW 19.86.090). Courts also have discretion to increase a damages award up to three times the actual damages sustained, not to exceed $25,000 (RCW 19.86.090).

Given its expansiveness and broad reach, this Act significantly impacts entities in and out of Washington that collect and process Washington residents’ personal information or that process personal information in Washington state. This is especially noteworthy for the global privacy community, given that Washington is home to some of the largest technology companies and cloud service providers in the world.

Entities doing business in and/or collecting or processing personal information in Washington should review their data inventory, collection, and sharing practices to determine if this Act applies. Such entities should be thinking about how to integrate compliance with it into their existing data privacy compliance programs.

* * *

Yana and Neeka are members of Seyfarth’s cross-disciplinary privacy & cybersecurity team and are ready to help answer any questions related to Washington’s “My Health My Data” Act.

On March 15, 2023 the Securities and Exchange Commission (“SEC”) proposed three new sets of rules (the “Proposed Rules”) which, if adopted, would require a variety of companies to beef up their cybersecurity policies and data breach notification procedures. As characterized by SEC Chair Gary Gensler, the Proposed Rules aim to promote “cyber resiliency” in furtherance of the SEC’s “responsibility to help protect for financial stability.”[1]

In particular, the SEC has proposed:

  • Amendments to Regulation S-P which would, among other things, require broker-dealers, investment companies, and registered investment advisers to adopt written policies and procedures for response to data breaches, and to provide notice to individuals “reasonably likely” to be impacted within thirty days after becoming aware that an incident was “reasonably likely” to have occurred (“Proposed Reg S-P Amendments”).[2]
  • New requirements for a number of “Market Entities” (including broker-dealers, clearing agencies, and national securities exchanges) to, among other things: (i) implement cybersecurity risk policies and procedures; (ii) annually assess the design and effectiveness of these policies and procedures; and (iii) notify the SEC and the public of any “significant cybersecurity incident” (“Proposed Cybersecurity Risk Management Rule”).[3]
  • Amendments to Regulation Systems Compliance and Integrity (“Reg SCI”) in order to expand the entities covered by Reg SCI (“SCI Entities”) and add additional data security and notification requirements to SCI Entities (“Proposed Reg SCI Amendments”).[4]

As Commissioner Hester Peirce observed, each Proposed Rule “overlaps and intersects with each of the others, as well as other existing and proposed regulations.” [5] Therefore, while each of the Proposed Rules relates to similar cybersecurity goals, each must be considered in turn to determine whether a particular company is covered and what steps the company would need to undertake should the Proposed Rules become final.

Below we discuss each set of Proposed Rules in more detail and provide some takeaways and tips for cybersecurity preparedness regardless of industry.

Proposed Reg S-P Amendments

Reg S-P, adopted in 2000, requires that brokers, dealers, investment companies, and registered investment advisers adopt written policies and procedures regarding the protection and disposal of customer records and information.[6] But, as Chair Gensler explained in a statement in support of the Proposed Reg S-P Amendments, “[t]hough the current rule requires covered firms to notify customers about how they use their financial information, these firms have no requirement to notify customers about breaches,” and the Proposed Reg S-P Amendments look to “close this gap.”[7]

In particular, “[w]hile all 50 states have enacted laws in recent years requiring firms to notify individuals of data breaches, standards differ by state, with some states imposing heightened notification requirements relative to other states,” and the SEC seeks, through the Proposed Reg S-P Amendments, to provide “a Federal minimum standard for customer notification” for covered entities.[8] This includes a definition of “sensitive customer information” which is broader than that used in at least 12 states; a 30-day notification deadline, which is shorter than timing currently mandated by 15 states (plus 32 states which do not include a notification deadline or permit delayed notifications for law enforcement purposes); and required notification unless the covered institution finds no risk of harm, unlike 21 states which only require notice if, after investigation, the covered institution does find risk of harm.[9]

Furthermore, while Reg S-P currently applies to broker-dealers, investment companies, and registered investment advisers, the Proposed Reg S-P Amendments would expand the scope to transfer agents.[10] The amendments also would apply the customer information safeguarding and disposal rules to customer information that a covered institution receives from other financial institutions, and to a broader set of information, by newly defining the term “customer information.” For non-transfer agents, that term would “encompass any record containing ‘nonpublic personal information’ (as defined in Regulation S-P) about ‘a customer of a financial institution,’ whether in paper, electronic or other form that is handled or maintained by the covered institution or on its behalf.” For transfer agents, which “typically do not have consumers or customers” for purposes of Reg S-P, the term would have a similar definition with respect to “any natural person, who is a securityholder of an issuer for which the transfer agent acts or has acted as transfer agent, that is handled or maintained by the transfer agent or on its behalf.”[11]

Proposed Cybersecurity Risk Management Rule

The Proposed Cybersecurity Risk Management Rule would impact a variety of “different types of entities performing various functions” in the financial markets, defined as “Market Entities,” including “broker-dealers, broker-dealers that operate an alternative trading system, clearing agencies, major security-based swap participants, the Municipal Securities Rulemaking Board, national securities associations, national securities exchanges, security-based swap data repositories, security-based swap dealers, and transfer agents.”[12]

As Chair Gensler explained, the Proposed Cybersecurity Risk Management Rule is designed to “address financial sector market entities’ cybersecurity,” by, among other things, requiring Market Entities to adopt written policies and procedures to address their cybersecurity risks, to notify the SEC of significant cyber incidents, and, with the exception of smaller broker-dealers, to disclose to the public a summary description of cybersecurity risks that could materially affect the entity and significant cybersecurity incidents in the current or previous calendar year.[13]

According to the SEC, these policies and procedures are “not intended to impose a one-size-fits-all approach to addressing cybersecurity risks,” and are designed to provide Market Entities “with the flexibility to update and modify their policies and procedures as needed[.]”[14] However, there are certain minimum policies and procedures that would be required, such as periodic assessments of cybersecurity risks,[15] controls designed to minimize user-related risks and prevent unauthorized system access,[16] periodic assessment of information systems,[17] oversight of service providers that receive, maintain, or process the entity’s information (including  written contracts between the entity and its service providers),[18] measures designed to detect, mitigate, and remediate cybersecurity threats and vulnerabilities,[19] measures designed to detect, respond to, and recover from cybersecurity incidents,[20] and an annual review of the design and effectiveness of cybersecurity policies and procedures (with a written report).[21] For most regulated entities, such measures are already in place.

Proposed Reg SCI Amendments

Finally, the SEC has proposed amendments to Reg SCI, a 2014 rule adopted to “strengthen the technology infrastructure of the U.S. securities markets, reduce the occurrence of systems issues in those markets, improve their resiliency when technological issues arise, and establish an updated and formalized regulatory framework” for the SEC’s oversight of these systems.[22]  Reg SCI applies to “SCI Entities,” which include self-regulatory organizations, certain large Alternative Trading Systems, and certain other market participants deemed to have “potential to impact investors, the overall market, or the trading of individual securities in the event of certain types of systems problems.”[23]

The Proposed Reg SCI Amendments would expand the definition of SCI Entity to include registered Security-Based Swap Data Repositories, registered broker-dealers exceeding a size threshold, and additional clearing agencies exempt from registration.[24] They also would broaden requirements to which SCI Entities are subject, including  required notice to the SEC and affected persons of any “systems intrusions,” which would include a “range of cybersecurity events.”[25]

Takeaways

While the Proposed Rules have not yet been adopted, companies that could be covered should take the opportunity to reevaluate their cybersecurity practices and policies, both to mitigate as much as possible the risk of a cyber-attack and to be prepared to address an attack, including meeting all notification requirements, should one occur.

Among other things, best practices include:

  • A written cyber risk assessment which categorizes and prioritizes cyber risk based on an inventory of the information systems’ components, including the type of information residing on the network and the potential impact of a cybersecurity incident;
  • A cybersecurity vulnerability assessment to assess threats and vulnerabilities; determine deviations from acceptable configurations, enterprise or local policy; assess the level of risk; and develop and/or recommend appropriate mitigation countermeasures in both operational and nonoperational situations;
  • A written incident response plan that defines how the company will respond to and recover from a cybersecurity incident, including timing and method of reporting such incident to regulators, persons or other entities;
  • A business continuity plan designed to reasonably ensure continued operations when confronted with a cybersecurity incident and maintain access to information;
  • Tabletop exercises to review and test incident response and business continuity plans;
  • Annual review of policies and procedures.

As a next step, each of the Proposed Rules will be published in the Federal Register and open for comment for sixty days following publication. Regardless of whether the Proposed Rules are adopted, they represent the SEC’s increasing awareness of, and desire to mitigate, cybersecurity incidents, and companies should prepare accordingly.


[1] Gensler, Gary, Opening Statement before the March 15 Commission Meeting (SEC, March 15, 2023).

[2] See Press Release, SEC Proposes Changes to Reg S-P to Enhance Protection of Customer Information (SEC, March 15, 2023). The full text of the Proposed Reg S-P Amendments can be found here.

[3] See Press Release, SEC Proposes New Requirements to Address Cybersecurity Risks to the U.S. Securities Markets (SEC, March 15, 2023). The full text of the Proposed Cybersecurity Risk Management Rule can be found here.

[4] See Press Release, SEC Proposes to Expand and Update Regulation SCI (SEC, March 15, 2023). The full text of the Proposed Reg SCI Amendments can be found here.
In addition, on March 15, 2023 the SEC re-opened comments on proposed cybersecurity risk management rules for investment advisors until May 22, 2023. For our analysis of these proposed rules, see How Fund Industry Can Prepare For SEC’s Cyber Proposal (Law360, March 4, 2022). The SEC is also presently considering comments on a different proposed rule mandating certain cybersecurity disclosures by public companies. See Carlson, Scott and Riley, Danny, SEC Proposes Mandatory Cybersecurity Disclosures by Public Companies (Carpe Datum Blog, April 14, 2022).

[5] Peirce, Hester, Statement on Regulation SP: Privacy of Consumer Financial Information and Safeguarding Customer Information (SEC, March 15, 2023).

[6] Proposed Reg S-P Amendments, supra n.2 at 1.

[7] Gensler, Gary, Statement on Amendments to Regulation S-P (SEC, March 15, 2023).

[8] Proposed Reg S-P Amendments, supra n.2 at 4.

[9] Id. at 4-6.

[10] Proposed Reg S-P Amendments, supra n.2, at 6-7.

[11] Id. at 74-75, 82.

[12] Proposed Cybersecurity Risk Management Rule, supra n. 3 at 9-10 (internal definitions of terms omitted).

[13] Gensler, Gary, Statement on Enhanced Cybersecurity for Market Entities (SEC, March 15, 2023).

[14] Proposed Cybersecurity Risk Management Rule, supra n. 3 at 103.

[15] Id. at 103-108.

[16] Id. at 109-112.

[17] Id. at 113-115.

[18] Id. at 115-116.

[19] Id. at 116-118.

[20] Id. at 118-124.

[21] Id. at 124-126.

[22] Proposed Reg SCI Amendments, supra n.4 at 10.

[23] Id. at 13-14.

[24] Id. at 24.

[25] Id. at 24-25.

Under China’s data protection regulatory framework, data processors are required to pass a security assessment conducted by the cybersecurity regulator before transferring certain categories or volumes of data out of China. This January, six months after the Cyberspace Administration of China (“CAC”) released the Measures on Security Assessment of Outbound Data Transfers (“Measures”), the Beijing counterpart of CAC reported the first two cases where the data processors passed the security assessments led by CAC, which sheds some light on the uncertainty and complexity of the security assessment.

Uncertainty of Reviewing Process and End of Grace Period

As disclosed by Beijing CAC, as of February 22, 2023, it had assisted more than 310 entities with their potential applications for the security assessment of outbound data transfers and had received 48 formal applications from organizations in industries such as technology, e-commerce, healthcare, finance, automotive, and civil aviation, including multinational companies. Among these applications, CAC granted two organizations approval to transfer data out of China: the Beijing Friendship Hospital of the Capital Medical University and Air China.

Pursuant to the Measures, an application for the security assessment should first be submitted to the local CAC for review. Once approved at the local level, the application will be escalated to CAC for final approval. Although the total processing time should be no longer than 57 working days as provided in the Measures, the Measures allow CAC to extend the review period if necessary, so the actual processing time can be much longer. Given that the six-month grace period for data processors ended in March 2023, multinational companies that need to transfer data out of China should prepare their security assessment applications to remain compliant.

For Multinational Companies: Challenging Yet Attainable

Because the details of the two cases approved by CAC have not yet been disclosed to the public, there is little guidance on how the security assessment is currently being conducted.

However, as disclosed by Beijing CAC, the applications from some multinational companies are currently under CAC’s review after being approved at the Beijing level. Additionally, Beijing CAC has completed the review process for six other companies; their applications will be provided to CAC for further review. While we will continue to keep an eye on CAC’s review process, we expect to see the first case of a multinational company getting through the review process soon.

Practically, more entities are likely to undergo the security assessment than the Measures strictly require. Multinational companies that need to transfer data out of China should be aware that the CAC-led security assessment is time consuming and challenging under stringent regulations. They should also be prepared for the accompanying data security compliance requirements, such as conducting self-assessments, or seek professional advice on alternatives to a security assessment.

It’s been no doubt a week of mixed emotions at the California Privacy Protection Agency (“CPPA”), which last week had its final CCPA regulations (“Regulations”) approved and filed with the California Secretary of State by the Office of Administrative Law. The final Regulations have been stated to be “effective immediately.” The result is that California employers now face a significant burden around compliance with California privacy law that they did not have previously.

Taken on its face, “effective immediately” would mean that enforcement of the regulations would be available (if not acted upon) immediately. However, as with much about the CCPA, this may not be definitive.

First, the California Administrative Procedure Act (“APA”) provides that regulations become effective on one of four quarterly dates based on when the final regulations are filed with the Secretary of State. Under the APA, the effective date would still be July 1, 2023, because the Regulations were filed between March 1 and May 31. See Cal. Gov. Code §11343.4(a)(3).

Second, Proposition 24 (the actual amendment to the CCPA) itself provides the timing of enforcement of the new provisions of the CCPA. Specifically, Cal. Civ. Code §1798.185(d) states: “Notwithstanding any other law, civil and administrative enforcement of the provisions of law added or amended by this act shall not commence until July 1, 2023.”

To complicate matters further, on March 30, 2023, the day after the new regulations were announced as finalized by the CPPA, the California Chamber of Commerce filed suit against the agency seeking declaratory and injunctive relief to delay enforcement. The lawsuit seeks to delay enforcement until one year after the Regulations were finalized. The original CPRA mandated that the Regulations be adopted by July 2022, with enforcement to begin in July 2023.  The Chamber’s suit makes many claims, including that the new Regulations are incomplete and were rushed upon business because of the CPPA’s own internal delays, and that the elimination of the safe harbor for enforcement, combined with the shortened period between regulation and enforcement, causes undue hardship. Its argument concerning the enforcement date is essentially that because the CPPA missed the regulation adoption date by approximately eight months, the enforcement date should also be shifted forward by the same period of time.  Whether this lawsuit will succeed is difficult to ascertain at this time.

All of this is to say, while the press release of the CPPA may be technically correct, the practical application of the law to businesses still seems to have some breathing room. That said, despite the continuing lack of certainty around this legislation, it is important to continue to shore up any compliance efforts businesses have underway. This is particularly important in the HR/workplace context, where businesses have had broadened obligations to job applicants, employees, owners, directors, officers and contractors.

There are some immediate actions covered businesses will need to take in any eventuality:

  1. Figure out what HR-related data is subject to the CCPA and what isn’t.
    • Review exemptions under Cal. Civ. Code §§1798.145 and 1798.146. For example, background reports on employees from consumer reporting agencies under the Fair Credit Reporting Act are likely exempt.
    • Review Federal laws that expressly preempt state law. For example, ERISA generally preempts state law and has certain record-keeping requirements that will affect how employers respond to requests for deletion.
  2. Review the new regulations for required notices and disclosures.
    • Draft an HR-related data privacy policy for employees and applicants. This is a separate requirement from the earlier “privacy notice” that was required under Cal. Civ. Code §1798.100 as all the requirements of §1798.130 are also implicated. Additionally, the regulations have distinctive requirements around “privacy policies” (under 11 Cal. Code Reg. §7011) and “privacy notices” (under 11 Cal. Code Reg. §7012).
    • “Sensitive Personal Information” now has to be specifically discussed in policies and notices. This can impact EEOC reporting data.
  3. Develop a “Service Provider Addendum” for all vendors that touch covered data.
    • The regulations require “magic language” to keep a vendor a “service provider”. If a vendor isn’t classified as either a service provider or a contractor, then it is a “third party” and businesses lose the “safe harbor” around joint liability if the vendor violates the CCPA or the regulations.

Clearly, there is much more to developing and deploying a full compliance program, but working through the above considerations will keep most businesses on course for compliance (almost) no matter what.

This just in….March 30, 2023. The California Office of Administrative Law has approved the CCPA Regulations and they are effective immediately. The text has not changed substantively since the modifications proposed late last year.

Without further ado, please read the CPPA’s announcement here.

At printing time, the final documents were to “be made available on the agency website as soon as they have been processed.”

The recent Cothron v. White Castle Illinois Supreme Court decision ruled that BIPA violations accrue with each collection, leading to skyrocketing claims – and damages. It’s critical for employers to understand what this decision means, how this decision affects them, and how to avoid the risks inherent in employee data collection.  

Our March 21, 2023 webinar covered:

  • An in-depth look at the recent Illinois decision and its ramifications
  • How to remain in compliance and avoid violations even when data collection is mandatory
  • Similar decisions, and what to expect next in this developing trend. 

You can check out the video recording here: Breaking BIPA Developments: Damages Keep Piling Up | Seyfarth Shaw LLP

Seyfarth Synopsis: Since ChatGPT became available to the public at large in November 2022, employers have been wondering, and asking their employment lawyers, “What kind of policies should we be putting in place around the use of ChatGPT in the workplace?”  Although at this stage it is difficult to imagine all of the different ways ChatGPT, and its subsequent iterations, could be used by employees in the workplace, it is important to consider some of the more obvious usage cases and how employers might choose to address them in workplace policies.

What is ChatGPT?

ChatGPT is a form of artificial intelligence (AI) — an AI language model that is trained to interact in a conversational way.  At its most basic level, AI is a computer system able to perform tasks that normally require human intelligence.  In order to achieve this, AI needs to be trained.  First, massive data sets are fed into a computer algorithm.  Then the trained model is evaluated in order to determine how well it performs in making predictions when confronted with previously unseen data.  For ChatGPT, it is predicting the next word in a given context to provide that conversational tone for which it has become known.  Lastly, the AI goes through a testing phase to find out if the model performs well on large amounts of new data it has not seen before.  This is the phase in which ChatGPT finds itself. 
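For readers curious what “predicting the next word in a given context” looks like in miniature, here is a deliberately toy, hypothetical sketch: a simple bigram counter that learns next-word frequencies from a tiny sample of text. It bears no resemblance to ChatGPT’s actual architecture or training pipeline; the sample text and names are made up purely for illustration.

```python
# A tiny, hypothetical illustration of "predicting the next word in a given
# context." Real large language models use neural networks trained on massive
# corpora; this bigram counter only shows the basic idea of learning next-word
# statistics from training text and then being tested on new input.
from collections import Counter, defaultdict

training_text = (
    "the employee filed the report the employee reviewed the contract "
    "the lawyer reviewed the contract"
)

# "Training": count which word tends to follow which.
bigrams = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    bigrams[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequently observed follower of `word`, if any."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

# "Testing" on a word the model has seen vs. a gap in its knowledge.
print(predict_next("the"))          # a word it saw often after "the"
print(predict_next("arbitration"))  # None: the model has never seen this word
```

Even at this toy scale, the limitations discussed below are visible: the model can only echo patterns present in the data it was trained on, and it simply has no answer for inputs it has never seen.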

Legal Risks for Employers

Given how AI is trained and learns, significant issues can arise for employers when employees use ChatGPT to perform their job duties.  One big concern when employees obtain information from a source like ChatGPT in connection with their work is accuracy and bias. 

ChatGPT’s ability to supply information as an AI language model is only as good as the information from which it has learned and on which it has been trained.  Although ChatGPT has been trained on vast swaths of information from the Internet, by its very nature as AI, there are and will continue to be some gaps in ChatGPT’s knowledge base.  The most obvious example of such a gap is that the current version of ChatGPT was only trained on data sets available through 2021.  On top of that, one needs to keep in mind that not everything that appears on the Internet is true, and so there will be some built-in accuracy problems with information provided by ChatGPT given the data on which it was trained.  Thus, with respect to legal risk for employers, if employees are relying on ChatGPT for information in connection with work and not independently fact-checking that information for accuracy, obvious problems can arise depending on how the employee uses the information and to whom the information is provided.  It would therefore make sense for employers to have policies that put guardrails on when and to what extent it is permissible for employees to obtain information from ChatGPT in connection with their work.

There is also the question of inherent bias in AI.  The EEOC is focused on this issue as it relates to the employment discrimination laws it enforces, and state and local legislators are proposing, and in some jurisdictions have already passed, legislation that places restrictions on the use of AI by employers.  As described above, the information AI provides is necessarily dependent on the information upon which it is trained (and those who make decisions about what information the AI receives).  This bias could manifest itself in the types of information ChatGPT offers in response to questions presented in “conversation” with it.  Also, if ChatGPT is consulted regarding decision-making in employment, this could lead to claims of discrimination, as well as compliance issues under state and local laws that require notice of the use of AI in certain employment decisions and/or audits of AI before using it in certain employment contexts.  Because of the risks of bias in AI, employers should include in their policies a general prohibition on the use of AI in connection with employment decisions absent approval from the legal department.

The other big concern for employers when thinking about how employees might use ChatGPT in connection with work is confidentiality and data privacy.  Employers are naturally concerned that employees will share proprietary, confidential and/or trade secret information when having “conversations” with ChatGPT.  Although ChatGPT represents that it does not retain information provided in conversations, it does “learn” from every conversation.  And of course, users are entering information into the conversations with ChatGPT over the Internet, and there is no guarantee of the security of such communications.  Thus, while the details of how exactly confidential employer information could be impacted if revealed by an employee to ChatGPT remain to be seen, prudent employers will include in employee confidentiality agreements and policies prohibitions on employees referring to or entering confidential, proprietary or trade secret information into AI chatbots or language models, such as ChatGPT.  A good argument could be made that information is not being treated as a “trade secret” if it is given to a chatbot on the Internet.  On the flip side, given how ChatGPT was trained on wide swaths of information from the Internet, it is conceivable that employees could receive and use information from ChatGPT that is trademarked, copyrighted and/or the intellectual property of another person or entity, creating legal risk for the employer.

Other Employer Concerns

In addition to these legal concerns, employers should also consider to what extent they want to allow employees to use ChatGPT in connection with their jobs.  Employers are at an important crossroads in terms of determining whether and to what extent to embrace or restrict the usage of ChatGPT in their workplaces.  Employers will need to weigh the efficiency and economy that could be achieved by employees using ChatGPT to perform such tasks as writing routine letters and emails, generating simple reports, and creating presentations against the potential loss in developmental opportunities for employees in performing such tasks themselves.  ChatGPT is not going away, and in fact, a new and improved version should be out within the year. 

Employers will ultimately need to address the issue of its use in their workplaces, particularly because the next iteration is going to be even better.  For all of the risks ChatGPT can present, it can also be leveraged to employers’ benefit.  The discussion has just started.  Employers – like ChatGPT – will likely be learning and beta testing on this for a bit.