As we move into 2023, biometric information privacy remains a constantly evolving field, with states enacting new statutes, technology advancing, plaintiffs raising new theories, and cases being filed daily. For these reasons, keeping up with biometric laws can be a daunting task.

On February 7, 2023, we led a webinar looking at some of the recent developments in this ever-changing area of law, and how companies can adapt. Topics included:

  • Questions that have finally been answered, and which areas remain unresolved
  • How to remain in compliance and avoid violations
  • What’s next for information privacy and protection

You can check out the video recording here: The Here and Now of BIPA: Updates and Developments in Biometric Privacy.

In a January 11, 2023 op-ed published in the Wall Street Journal, President Joe Biden urged “Democrats and Republicans to come together to pass strong bipartisan legislation to hold Big Tech accountable.”  He warned that the “risks Big Tech poses for ordinary Americans are clear. Big Tech companies collect huge amounts of data” about technology users, including “the places we go,” and argued that “we need serious federal protections for Americans’ privacy. That means clear limits on how companies can collect, use and share highly personal data,” including location data.

Potential Privacy Rules—Legislation or Regulation?

With Republicans taking charge in the House of Representatives and Democrats retaining control of the Senate in the upcoming legislative term, it seems an inauspicious time for passage of comprehensive national privacy legislation.  The American Data Privacy and Protection Act had broad bipartisan support and appeared to have momentum in Congress in the latter half of 2022, but foundered in large part due to resistance from California privacy regulators concerned that federal legislation would preempt the California Consumer Privacy Act (CCPA). 

Inaction by Congress is not going to stop privacy regulation in the United States, however, and without a comprehensive national policy, businesses face an increasingly complex patchwork of laws and rules.  In addition to California’s privacy law, enacted by that state in 2018, the Virginia Consumer Data Protection Act took effect on January 1, 2023, and similar laws in Colorado, Connecticut, and Utah will take effect during the year.  Meanwhile, the Federal Trade Commission (FTC) appears poised to issue its own privacy rules after announcing that it was “exploring rules to crack down on harmful commercial surveillance and lax data security” in an August 2022 Advance Notice of Proposed Rulemaking.

The FTC’s notice met fierce opposition from members of Congress and industry participants during the public comment period, which closed in November 2022.  Three Republican senators submitted a letter warning that new FTC privacy rules would “only add to the compliance burden facing small businesses” and that “Congress is the only appropriate venue for developing rules for data privacy and security and to set a truly national standard.”  The Alliance for Automotive Innovation submitted a comment encouraging the FTC to eschew rulemaking in favor of working with Congress to develop a comprehensive national privacy law, while the National Automobile Dealers Association submitted a comment questioning whether privacy issues even fell within the scope of the FTC’s authority to regulate unfair or deceptive acts or practices.

After reviewing the public comments it has received, the FTC may decide to issue a formal notice of proposed rulemaking; at least three FTC commissioners appear to agree that national privacy regulation is needed.  With state privacy laws and potential FTC rulemaking threatening to impose an increasingly heavy regulatory burden on businesses, Congress may have no choice but to act in 2023.

“Big Tech,” Antitrust Enforcement, and Automakers

Meanwhile, as reflected in President Biden’s January 11 op-ed, “Big Tech” remains a bipartisan target of choice for perceived anticompetitive abuses; this focus on “Big Tech” could have an impact on automakers, as well.  In a high-profile November 2, 2022 letter sent to FTC Chair Lina Khan and Jonathan Kanter, head of the Antitrust Division of the U.S. Department of Justice (DOJ), Senator Elizabeth Warren called for increased oversight of “Big Tech’s expansion into the automotive industry,” warning that in her view, technology companies “are leveraging their market power in the mobile operating system, digital app markets, and data infrastructure spheres to become the dominant players in the automotive sphere.”

According to Senator Warren, these companies are using “all-or-nothing” bundling tactics to expand their anticompetitive grasp of the automobile market; for example, Google requiring automakers to purchase an entire suite of services in order to access popular apps like Google Maps.  She also expressed concern that “Big Tech is also laying the groundwork for potentially anticompetitive uses of data generated by its new role in the automobile industry,” including the development of autonomous vehicles, and warned that if these technology companies use their access to massive quantities of location and other vehicle data “to obtain an advantage over companies that are shut out of the market, the effects will be difficult to reverse.” 

Senator Warren urged the FTC and DOJ to exercise their oversight authority to deter such abuses, and to review with skepticism potential acquisitions by “Big Tech” companies of emerging companies developing competing technologies.  Congress substantially increased the budgets of both the FTC and the DOJ Antitrust Division at the end of 2022, and automakers should anticipate increased scrutiny for “Big Tech” partners in 2023.

On 16 November 2022, EU Regulation 2022/2065, better known as the Digital Services Act (“DSA”), came into force. The DSA is a key development in the regulation of online services in the European Union (“EU”), with an impact on online services as significant as that which the General Data Protection Regulation (“GDPR”) had, from 25 May 2018, upon the collection, use, transfer, and storage of data originating in the EU.

Ambit

The DSA sets out rules and obligations for digital services providers that act as intermediaries in their role of connecting consumers with goods, services, and content.  

Its goal is to regulate and control the dissemination of illegal or harmful content online, to provide more consumer protection in online marketplaces, and to introduce safeguards for internet users and users of digital services. It also introduces new obligations for major online platforms and search engines to prevent those platforms from being abused.

The DSA applies to a wide range of providers of:

  •  Intermediary services offering network infrastructure such as internet access providers, domain name registrars, and other providers of what is described as ‘mere conduit’ or ‘caching’ services;
  • Hosting services such as cloud and web hosting services;
  • Online platforms bringing together sellers and consumers such as online marketplaces, app stores, collaborative economy platforms and social media platforms; and
  • Very large online platforms and very large online search engines that are used to disseminate content and information.

The DSA applies in the EU, and to those providers outside the EU that offer their services in the EU. If a provider is not established in the EU, they will have to appoint a legal representative within the EU.

The DSA splits providers into tiers. The most heavily regulated tier covers Very Large Online Platforms (“VLOP”s) and Very Large Online Search Engines (“VLSE”s). The main criterion that will bring a provider within the scope of the DSA as a VLOP or VLSE is whether it operates a platform serving more than 45 million monthly active end users located in the EU (roughly 10% of the EU population).

Features

The DSA introduces:

  • Mechanisms allowing users to flag illegal content online, and for platforms to cooperate with specialised “trusted flaggers” to identify and remove illegal content;
  • New rules to trace sellers on online marketplaces, and a new obligation on online marketplaces to randomly check against existing databases whether products or services on their sites comply with the law;
  • Safeguards on moderation of content on platforms, giving users a chance to challenge platforms’ content moderation decisions when their content gets removed or restricted;
  • Transparency on the algorithms used for recommending content or products to users;
  • New obligations to protect minors on any platform in the EU;
  • A requirement for VLOPs to mitigate abuse of their systems, which could lead to, for example, disinformation or election manipulation, cyber violence against women, or harm to minors online;
  • Bans on targeted advertising on online platforms that profiles children or is based on special categories of personal data such as ethnicity, political views, or sexual orientation;
  • A ban on the use of “dark patterns” in the interfaces deployed by online platforms, which appears to cover designs that might manipulate users into making choices they do not intend to make; and
  • New rights for users, including a right to complain to a platform, seek out-of-court settlements, complain to their national authority in their own language, or seek compensation for breaches of the rules. Representative organisations will be able to defend user rights for large scale breaches of the law.

Obligations

The DSA takes an asymmetric approach to obligations, setting them out based on the category or type of services provided. Providers are matched with different and cumulative sets of obligations, commensurate with their role and significance within the digital services market.

The first set of obligations applies to all providers of intermediary services, and includes requirements to:

  1. Establish two single points of contact, for direct communication with regulatory authorities and with their users, respectively;
  2. Designate a legal representative within the EU if they are a foreign-established provider. The representative will be held responsible for a provider’s non-compliance, without prejudice to the provider’s own liability for non-compliance;
  3. Set out clear terms and conditions covering any restrictions that the provider may apply to the services provided, such as the policies, procedures, or tools used in content moderation, with additional reporting obligations related to the provider’s type or category; and
  4. Publish annual transparency reports regarding any content moderation activities the provider engaged in, with additional reporting obligations related to their type or category.

The second set of obligations then applies to all hosting services, including all online platforms, where providers must:

  1. Implement notice and action mechanisms. The mechanism must allow users to notify the provider of alleged illegal content alongside supporting information. Notices under this mechanism will serve as actual notice to the provider, which may override certain limitations on liability or shields granted under the DSA; and
  2. Report criminal offences to the law enforcement or judicial authorities of the relevant EU Member State(s) when the provider has information indicating that a criminal act threatening a person’s life or safety will take place.

The third set of obligations applies to online platforms, and includes requirements to:

  1. Publish on their platform, at least once every six months, monthly active user figures averaged over the preceding six months. This obligation also extends to online search engines;
  2. Implement a complaint and redress system for users regarding the provider’s decisions on content moderation;
  3. Engage, in good faith, with out-of-court dispute settlement bodies certified under the DSA and abide by their decisions;
  4. Prioritize notices filed by trusted flaggers designated under the DSA;
  5. Implement measures and protections against misuse, including systems for complaint handling, warning, review, and suspension;
  6. Publish annual transparency reports, with greater reporting obligations, including information on out-of-court dispute settlements and reports on suspensions or other protections against misuse;
  7. Follow Commission guidelines regarding interface design and advertising;
  8. Adhere to advertising rules under the DSA by presenting transparency information regarding the advertiser, allowing users to declare whether submitted content is a commercial communication, and abstaining from using special categories of personal data to target advertisements to users;
  9. Protect minors by abstaining from targeting advertising at minors using their personal data, and by implementing appropriate measures as may be issued by the Commission.

In addition to the above, VLOPs and VLSEs are then subject to more onerous requirements to:

  1. Implement risk management and crisis response mechanisms, including the appointment of a compliance officer;
  2. Share data with the relevant authorities and researchers;
  3. Adhere to codes of conduct pertaining to data access, transparency, and advertising as well as pay a supervisory fee; and
  4. Adopt external, independent risk and accountability measures.

The Commission will also further consult on and develop applicable standards, codes of conduct, and crisis protocols, where applicable, to further clarify providers’ obligations.

What is Illegal?

The DSA sets out EU-wide rules that cover detection, flagging, and removal of illegal content, as well as a new risk assessment framework for VLOPs and VLSEs on how illegal content spreads on their services. However, and crucially, the DSA does not define what is considered “illegal.”

Rather, what constitutes illegal content is defined by other laws, either at the EU level or at the individual Member State (national) level. For example, terrorist content, child sexual abuse material, and illegal hate speech are defined at the EU level and are accordingly illegal across the whole EU. When content is illegal only in one Member State, as a general rule it should be removed by providers only in the territory where it is illegal, and not across the whole EU.

Enforcement

A similar approach is taken to enforcement, with a mechanism which consists of Member State (national) level and EU level cooperation, which will supervise how providers adapt their systems to the new requirements. The supervision of the rules will be shared between the Commission—primarily responsible for VLOPs and VLSEs—and Member States, responsible for any smaller providers based in or operating out of their Member State.

Penalties

Breaches of the DSA can attract significant financial penalties, imposed through the same two-tier supervisory mechanism described above.

Member States will have to designate competent authorities—Digital Services Coordinators—by 17 February 2024. Each Digital Services Coordinator will be an independent authority responsible for supervising the providers based in its Member State and for participating in the EU cooperation mechanism of the DSA. Coordinators will have the authority to carry out investigations, conduct audits, accept undertakings or commitments from providers on how they will remedy infringements, and impose penalties, including financial fines.

Each Member State must clearly specify the penalties in its national laws in line with the requirements set out in Article 52 of the DSA, meaning that the maximum fines they can include within their own laws are as follows:

  • Failure to comply with an obligation under the DSA carries a maximum fine of 6% of the annual worldwide turnover of the infringing provider in the preceding financial year; and
  • Failure to supply correct, complete, or accurate information, failure to reply to or rectify incorrect, incomplete, or misleading information, or failure to submit to an inspection carries a maximum fine of 1% of the annual income or worldwide turnover of the infringing provider or person in the preceding financial year.

Alternatively, Member States may impose periodic penalty payments up to a maximum amount of 5% of the average daily worldwide turnover or income of the infringing provider concerned in the preceding financial year per day, calculated from the date specified in the decision concerned.

For VLOPs and VLSEs, however, the Commission has sole and direct supervision and enforcement powers and can, in the most serious cases, impose fines of up to 6% of the annual worldwide turnover of the infringing VLOPs or VLSEs.

Note that the 6% maximum fine is higher than the maximum that can be imposed for a breach of the GDPR (4% of annual worldwide turnover).
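To put these ceilings in context, below is a minimal sketch of the maximum exposure for a hypothetical provider. The turnover figure is invented for illustration; actual fines are set by regulators and courts within these caps.

```python
# Illustrative only: DSA fine ceilings applied to a hypothetical provider.
# The turnover figure below is invented; actual penalties are set by the
# regulators within these caps and depend on the infringement.

annual_turnover_eur = 10_000_000_000  # hypothetical preceding-year worldwide turnover

noncompliance_cap = 0.06 * annual_turnover_eur         # 6% cap: breach of a DSA obligation
information_cap = 0.01 * annual_turnover_eur           # 1% cap: information failures
periodic_daily_cap = 0.05 * annual_turnover_eur / 365  # 5% of average daily turnover, per day
gdpr_cap = 0.04 * annual_turnover_eur                  # GDPR's 4% ceiling, for comparison

print(f"DSA non-compliance cap:      EUR {noncompliance_cap:,.0f}")   # EUR 600,000,000
print(f"DSA information-failure cap: EUR {information_cap:,.0f}")     # EUR 100,000,000
print(f"Periodic penalty per day:    EUR {periodic_daily_cap:,.0f}")  # EUR 1,369,863
print(f"GDPR cap, for comparison:    EUR {gdpr_cap:,.0f}")            # EUR 400,000,000
```

On these figures, the DSA’s headline cap is half again as large as the GDPR’s.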

Impact

Whilst the DSA is already in force, it will not be fully applicable across the EU until 17 February 2024. In the months leading up to this date, EU Member States must empower their national authorities to enforce the DSA’s rules. For VLOPs and VLSEs, which are supervised directly by the Commission, the impact will be felt sooner. For example, all online platforms which are not considered small enough to escape the rules must publish data on the number of active monthly users by 17 February 2023.

As for the law itself, it is a clear reaction to concerns, not unique to the EU, that online platforms can be used to spread misinformation or for illegal purposes. It also seems that the introduction of the law was accelerated by concerns over the involvement of bad actors in election campaigns, as well as concerns about the impact online platforms, and specifically social media, may have upon minors and society in general.

When the GDPR came into force, many other jurisdictions such as Singapore and China passed or amended laws to adopt broadly similar provisions; it remains to be seen whether the same could happen with the DSA. That said, given the increasing global concern as to how major social media platforms can be misused, one would expect other jurisdictions to be watching the rollout of the DSA with interest.

Ransomware attacks have become one of the most common and pervasive cybercrimes perpetrated against U.S. companies. A bad actor, often from overseas, gains access to a company’s network storage or application platforms and uploads malware that encrypts every file it can access. A message or text file is usually left behind with instructions on how to contact the attacker to pay a ransom for the decryption key. In the worst case, a ransomware attack can freeze business operations by effectively removing access to the company’s critical systems and rendering them useless. Aside from the business impact, what legal implications does a ransomware attack create?

Privacy

The greatest legal concern is one of privacy. By definition, ransomware attacks gain access to the internal systems maintained or owned by a business. However, not all ransomware attacks are created equal, and privacy obligations differ from one attack to another.

The most benign ransomware attack is one that encrypts data in an identifiable location confirmed not to contain any personal information of employees or customers, and that can be easily restored from clean backups. If, however, information that meets the definition of personal data (including PII or PHI) is affected, further legal analysis is required to determine whether the business has additional legal responsibilities. In that evaluation, the availability of reliable system logs, network traffic records, and other forensic information becomes critical. For example, some state data breach notification laws do not require notification to potentially affected individuals unless information was obtained by the unauthorized attacker; in other words, unless data was copied or exfiltrated by the attacker, there is no breach. Other states, however, define a data breach as the unauthorized acquisition of, or access to, certain categories of protected information. In states that include “access” in their definition of a breach, a ransomware attacker who is able to remotely browse through a network environment and select the target systems or files for an attack has obtained access. If the malware operates independently and there was no external access beyond the execution of the computer code, it is arguable that there has been no unauthorized access by a person. It can be difficult to obtain concrete information as to whether the attack resulted in the loss of data, but mere encryption, without more, is arguably a “better case scenario” compared to one involving the loss or removal of information.

Hackers have caught on to this. In some cases, a ransomware assailant will provide proof that it has accessed personal information and can publish it on the dark web. These “proof of life” attacks provide a snippet of the personal information—for example, one of many Social Security numbers stored on the now-encrypted database—and the hackers threaten to publish all of the personal information if their demands are not met. Unfortunately, even though ransomware attackers, when paid, almost always live up to their end of the bargain by providing decryption keys and deleting exfiltrated data, the fact that information has been obtained by unauthorized individuals is unquestionably a breach, even if the attackers agree to delete it. This means that, if personal information is involved, an attack that includes exfiltration will most likely trigger a reporting obligation.

Congress has introduced several bills that would require the reporting of a ransomware attack to the Department of Homeland Security within a certain time frame, usually 24 to 72 hours, and mandatory reporting obligations for some industries are already in place. It is unclear, however, what obligations the attacked party will incur or whether the exfiltration of personal information will modify those obligations.

Intellectual Property

Many companies maintain their “secret sauce” as a trade secret. Whether a company develops software, manufactures adhesive, or trades on Wall Street, trade secret protection is paramount for the intangible assets of a company that are not patented. A ransomware attack can result in the exfiltration and possible publication of a trade secret—an act that would eliminate any protection for the trade secret at hand. And victims of such attacks are often surprised to learn that their cyber insurance does not cover such a loss. Important trade secrets should therefore be kept under proverbial lock and key to protect against exploitation or publication by ransomware attackers.


Ransomware attacks take many forms. Many involve the exfiltration of, or unauthorized access to, employee or customer personal information or trade secrets, which can lead to catastrophic loss for a company with a large privacy or trade secret footprint. In addition to practicing good network and data security, employee training, and record retention to minimize the impact of attacks, it is imperative (and in some states required) that businesses have a written information security response program for the management and remediation of cyberattacks. In investigating and responding to an incident, it is important to determine what type of ransomware attack has occurred so that the company can assess the resulting privacy notification obligations and intellectual property losses associated with the attack.

We strongly recommend consulting capable outside legal counsel and experienced computer forensic experts in the response to, remediation of, and investigation of a ransomware incident. The reasonableness of a business’s safeguards, the adequacy of its investigation, and the speed of its remediation response could all be subject to scrutiny in the event of litigation or a regulatory investigation. A proper team of internal stakeholders, counsel, and forensic investigators should collaborate on the investigation, documentation, remediation, insurance, customer and governmental notifications, law enforcement, and public relations questions in swift and, where necessary, legally privileged discussions. Companies can also mitigate their risk by securing personal information and trade secrets behind updated network controls; employing encryption; conducting regular training and anti-phishing exercises; and deploying more secure multi-factor authentication for workers and external users.

We have seen a market driven push for companies to embrace diversity and inclusion (D&I) policies over the last few years, which reflects a key shift in social and cultural norms for many organisations. Increasingly, consumers, staff and senior business leaders expect proactive steps to be taken for D&I objectives. Research demonstrates a strong business case for promoting diversity, although some suggest that viewing it through a lens of fairness is more effective. Regardless of the rationale, there are very sound reasons for companies to be embracing a diverse and inclusive workforce.

In pursuit of this objective, global businesses might assume that diversity reporting obligations apply in Australia in the same way they do in other jurisdictions and that overseas policies will be suitable for use here. With the best of intentions, following guidance from reputable external organisations focussed on general strategies to promote D&I, businesses might default to policies and practices designed overseas.

So what’s the problem? Many companies are unaware of the local compliance requirements in Australia that must be met when collecting diversity data and implementing these programs:

  • Under the Privacy Act 1988 and state and territory health records laws (where they apply), the collection of ‘sensitive information’ and ‘health information’ is strictly regulated. A very common requirement is that the collection of the information be ‘reasonably necessary’ for, or directly related to, the company’s functions or activities.
  • D&I programs will often be unlawful where these involve making employment decisions on the basis of anti-discrimination protected attributes, unless they fall within one or more of the ‘special measures’ exemptions (also known as ‘positive discrimination’ or ‘affirmative action’). ‘Special measures’ exemptions are only available in limited circumstances and some states (e.g. NSW) require a formal order of a Tribunal.
  • Some anti-discrimination laws also make it unlawful to collect information that will be used for unlawful discrimination, meaning that if the D&I program is unlawful then collecting the data to support it could also be a separate breach.

While it may be ironic that anti-discrimination laws pose roadblocks to genuine D&I initiatives, the fact remains that careful legal assessment is needed before proceeding to roll out global policies locally.

For example, while it is becoming more common for people to disclose diversity attributes, it may be very difficult to justify any practice of generally asking for or collecting this information for future unknown D&I purposes, without having any specific need for the information at the point when it is collected. Aside from Workplace Gender Equality Agency reporting in relation to gender (for entities with 100 or more employees), there are no other general D&I reporting obligations in Australia that would justify collecting data about diversity attributes. (This is in contrast to some overseas jurisdictions where this information is required to be collected and reported on.)

Many businesses are not aware that this increasingly common practice could be unlawful. It may also expose the business to unnecessary legal risk where job applicants are asked about diversity before recruitment decisions have been made, because an unsuccessful job applicant may more easily believe that they were discriminated against on this basis in the selection process.

To be compliant, businesses should:

  • Consider anonymised collection strategies in preference to identified data where there could be a question about whether it is ‘reasonably necessary’ or not;
  • Only collect sensitive information about individuals where there is a business case supporting the fact that it is ‘reasonably necessary’ and all privacy and health records obligations for its collection are met; and
  • Ensure D&I initiatives based on protected attributes are supported by a business case that can be relied on to show that the applicable requirements for ‘special measures’ in each jurisdiction have been satisfied, noting that this may require applications to be made to a tribunal beforehand (e.g. in NSW).

These steps can add complexity and red tape to D&I initiatives. However, ensuring that D&I steps are all locally compliant in our view demonstrates real commitment to these laudable goals and will best position businesses to become more diverse and inclusive in the longer term.

As we have been covering, the Supreme Court overturned Roe v. Wade in Dobbs v. Jackson Women’s Health Organization, leaving it to the states to regulate access to abortion within their territories. The Biden Administration’s response is taking shape: the President has directed federal agencies to consider what they can and should do to protect women’s health and privacy. Over the last few weeks, those agencies have been weighing in.

Initially, during the week of June 27th, we saw the following agency activity:

  • Tri-Agency Guidance re Contraceptive Coverage: On June 27th, the agencies responsible for enforcing the provisions of the Affordable Care Act (ACA) — the Departments of Health and Human Services, Labor, and Treasury — issued a letter directed to health plans and insurers “reminding” them that group health plans must cover, without cost-sharing, birth control and contraceptive counseling for plan participants. They note that they are concerned about a lack of compliance with this mandate, and that they will be actively enforcing it.
  • HHS Guidance re HIPAA Privacy: Shortly after, HHS issued guidance regarding the privacy protections HIPAA offers for reproductive health care services covered under a health plan, including abortion services. This guidance reminds covered entities that HIPAA permits, but does not require, disclosure of PHI when such disclosure is required by law, for law enforcement purposes, or to avert a serious threat to health or safety. The guidance described the following scenarios, in which disclosure without an individual’s authorization would breach HIPAA’s privacy obligations:
    • “Required by Law:” An individual goes to a hospital emergency department while experiencing complications related to a miscarriage during the tenth week of pregnancy. A hospital workforce member suspects the individual of having taken medication to end their pregnancy. State or other law prohibits abortion after six weeks of pregnancy but does not require the hospital to report individuals to law enforcement. Where state law does not expressly require such reporting, HIPAA would not permit a disclosure to law enforcement under the “required by law” provision.
    • “For Law Enforcement Purposes:” A law enforcement official goes to a reproductive health care clinic and requests records of abortions performed at the clinic. If the request is not accompanied by a court order or other mandate enforceable in a court of law, HIPAA would not permit the clinic to disclose PHI in response to the request.
    • “To Avert a Serious Threat to Health or Safety:” A pregnant individual in a state that bans abortion informs their health care provider that they intend to seek an abortion in another state where abortion is legal. The provider wants to report the statement to law enforcement to attempt to prevent the abortion from taking place. However, HIPAA would not permit this as a disclosure to avert a serious threat to health or safety, because a statement indicating an individual’s intent to get a legal abortion, or any other care tied to pregnancy, does not qualify as a serious and imminent threat to the health and safety of a person or the public, and such a disclosure generally would be inconsistent with professional ethical standards.

On Friday, July 8th, the Biden administration issued an “Executive Order on Protecting Access to Reproductive Healthcare Services.” The Executive Order creates the Interagency Task Force on Reproductive Healthcare Access and instructs different agencies in broad brushstrokes in four areas:

  • Access to Services: The Secretary of Health and Human Services is to identify possible ways to:
    • protect and expand access to abortion care, including medication abortion, and other reproductive health services such as family planning services;
    • increase education about available reproductive health care services and contraception; and
    • ensure all patients receive protections for emergency care afforded by law.

The Secretary of Health and Human Services is directed to report back to the President in 30 days on this point.

  • Legal Assistance: The Attorney General and Counsel to the President will encourage lawyers to represent patients, providers and third parties lawfully seeking reproductive health services.
  • Physical Protection: The Attorney General and Department of Homeland Security will consider ways to ensure safety of patients, providers, third parties, and clinics, pharmacies and other entities providing reproductive health services.
  • Privacy and Data Protection: Agencies also will consider ways to address privacy threats (e.g., the sale of sensitive health-related data and digital surveillance), protect consumers’ privacy when they seek information about reproductive health care services, and strengthen protections under HIPAA with regard to reproductive healthcare services and patient-provider confidentiality laws.

It did not take long for the agencies to respond:

  • On Monday, July 11th, in a letter to health care providers, HHS Secretary Xavier Becerra said that the federal Emergency Medical Treatment and Active Labor Act requires health care providers to stabilize a patient in an emergency health situation. Given the Supremacy Clause of the Constitution, that statute takes precedence over conflicting state law. As a result, that stabilization treatment could include abortion services if needed to protect the woman’s life.
  • Also on Monday, the Federal Trade Commission announced that it is taking action to ensure that sensitive medical data, including location tracking data on electronic applications, is not illegally shared. The FTC gave several examples of existing enforcement activity and noted it will aggressively pursue other violations.

We are certain to see more responses to the Executive Order and will continue to monitor and update this space as developments unfold. Should you have any questions, please contact your Seyfarth attorney.

To learn more about the Dobbs decision, we invite you to join us on Wednesday, July 13 at 3 p.m. Central for a webinar entitled  “Post-Dobbs Implications for Employers and Employer Plan Sponsors.” For more information and to register, click here.

Introduction

On March 9, 2022, the U.S. Securities and Exchange Commission (“SEC”) proposed mandates for cybersecurity disclosures by public companies. If adopted, these mandates would give investors a deeper look into public companies’ cybersecurity risk, governance, and incident reporting practices. SEC Chair Gary Gensler noted in a statement regarding the proposed mandates that cybersecurity incidents are a growing risk with “significant financial, operational, legal, and reputational impacts.”

“The interconnectedness of our networks, the use of predictive data analytics, and the insatiable desire for data are only accelerating, putting our financial accounts, investments, and private information at risk. Investors want to know more about how issuers are managing those growing risks.” – Gary Gensler, SEC Chairperson

Required Disclosures

According to the SEC, the proposed mandates would require information to be disclosed in a “consistent, comparable, and decision-useful manner” and fall in the following categories:

  1. Mandatory, ongoing disclosures on public companies’ governance, risk management, and strategy related to cybersecurity risks. Under the proposal, some examples of disclosed information would include: the company’s cybersecurity policies and procedures; how the company assesses and manages cybersecurity risks; how cybersecurity risks and incidents might impact the company’s financials; and management’s role in overseeing cybersecurity risks.
  2. Mandatory, timely cybersecurity incident reporting. The proposed mandates would require companies to disclose incidents on Form 8-K within four business days after a company determines it has experienced a “material” cybersecurity event. To the extent the information is known at the time of the Form 8-K filing, a company would have to disclose:
    • “When the incident was discovered and whether it is ongoing;
    • A brief description of the nature and scope of the incident;
    • Whether any data was stolen, altered, accessed, or used for any other unauthorized purpose;
    • The effect of the incident on the registrant’s operations; and
    • Whether the registrant has remediated or is currently remediating the incident.”

When To Report A Material Cybersecurity Incident

The proposed trigger to report an incident depends on when the company determines that the cybersecurity incident it has experienced is material. While the materiality determination may coincide with the date the incident is discovered, it may also develop over time. The goal of the SEC’s proposal is to mandate reports about what is material to investors. Though the expectation is that companies will be prompt in making this determination (and will report within four business days), it is unclear how long the determination might take, due to a number of factors. The SEC pointed to various cases that address what constitutes material information in the context of cybersecurity incident disclosures, including TSC Industries, Inc. v. Northway, Inc.,[i] Basic Inc. v. Levinson,[ii] and Matrixx Initiatives, Inc. v. Siracusano.[iii]
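To illustrate the timing mechanics, here is a minimal sketch of computing a four-business-day filing window from the date of the materiality determination. It counts weekdays only and ignores federal holidays, which a real compliance calendar would have to account for.

```python
from datetime import date, timedelta

def form_8k_deadline(determination: date, business_days: int = 4) -> date:
    """Illustrative only: step forward one calendar day at a time,
    counting only weekdays (Mon-Fri). Federal holidays are ignored."""
    d = determination
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return d

# Example: materiality determined on Thursday, June 2, 2022
print(form_8k_deadline(date(2022, 6, 2)))  # -> 2022-06-08 (the following Wednesday)
```

The point of the sketch is that an intervening weekend stretches the calendar window: four business days from a Thursday determination runs to the following Wednesday.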

In general, an incident is material if company information available to shareholders is altered or compromised, or “if there is a substantial likelihood that a reasonable shareholder would consider it important” when making an investment decision.[iv] The SEC recommends that companies make a careful, objective assessment of the incident and determine whether a reasonable investor would consider the incident to be material. Some examples of cybersecurity incidents that would trigger the necessity for Form 8-K disclosures under the proposed rule could be:

  • An unauthorized compromising of information assets such as data, technological systems, and networks. These incidents can stem from intentional attacks or from the accidental exposure of the information assets. Compromised data includes sensitive business information, intellectual property files, and personally identifiable information.
  • An incident that causes technology systems to be interrupted, degraded, or inoperative; and
  • An incident where a cybercriminal makes a ransom demand or threatens to expose company information to the public.

It remains to be seen how the final rule will come out, but this standard of “materiality” would cover much more than what most companies previously considered to be “material.”

Concerns From Commissioner Peirce

The SEC voted 3-1 in support of the proposed amendments, with Commissioner Hester Peirce dissenting. Commissioner Peirce’s main concern was that the proposed rules are too dismissive of the need to work with other agencies on issues of cybersecurity. Her other concerns included that the changes would lead to the micromanagement of both boards of directors and management of public companies, and that, in light of the SEC’s 2018 Guidance, the proposed rules are unnecessary.

“We have an important role to play in ensuring that investors get the information they need to understand issuers’ cybersecurity risks if they are material. This proposal, however, flirts with casting us as the nation’s cybersecurity command center, a role Congress did not give us.” – Hester M. Peirce, SEC Commissioner

Proposal Timeline

The public is free to comment on SEC proposals for up to 30 days after publication in the Federal Register or up to 60 days after the proposal was made, whichever is longer. After that, the SEC will consider the public’s input and determine the next steps toward a final rule. Unless the SEC takes steps to fast-track the process, the timeline from proposed rule to final rule averages about 450 days, so companies impacted by the SEC’s proposal could expect some version of the cybersecurity disclosure requirements to take effect no earlier than late 2022, and more likely sometime in mid-2023, depending upon how much priority the agency places on enacting the regulation.

[i] TSC Industries v. Northway, 426 U.S. 438, 449 (1976).

[ii] Basic Inc. v. Levinson, 485 U.S. 224, 232 (1988).

[iii] Matrixx Initiatives, Inc. v. Siracusano, 563 U.S. 27 (2011).

[iv] TSC Industries v. Northway, 426 U.S. 438, 449 (1976).

Introduction

On March 15, 2022, President Biden signed into law the Cyber Incident Reporting for Critical Infrastructure Act of 2022. The Act will require critical infrastructure organizations (defined below) to report cyber attacks to the Cybersecurity and Infrastructure Security Agency (CISA) within 72 hours. The Act also creates an obligation to report ransomware payments within 24 hours.

According to the Federal Bureau of Investigation’s 2021 Internet Crime Report, released on March 23, 2022, cyber incidents rose 7% from 2020, with potential losses topping $6.9 billion. Many of the most threatened organizations fall into the critical infrastructure sector, and in 2021 alone, cyber incidents caused oil and food shortages as well as supply chain disruptions. With cyber incidents reaching all-time highs in 2021, the legislation aims to protect U.S. critical infrastructure entities and to aid the investigation of cyber crimes moving forward. The Act’s reporting obligations are intended to ensure that the government can support the response, mitigation, and protection of both private and public companies covered under the Act. Within 24 months, CISA’s Director is required to issue a proposed rule, and must issue a final rule 18 months after making the proposal. The legislation also authorizes the Director of CISA to issue future regulations to amend or revise that rule.

Covered Entities

While the reporting obligations will not be in effect until the Director of CISA clarifies in the final rule which entities are officially covered, the Act refers to Presidential Policy Directive 21 (2013) to provide some guidance. With reference to the Directive, the industries that might be covered as critical infrastructure entities include: chemical; commercial facilities; communications; critical manufacturing; dams; defense industrial base; emergency services; energy; financial services; food and agriculture; government facilities; healthcare and public health; information technology; nuclear reactors, materials, and waste; transportation systems; and water and wastewater systems. The 72-hour reporting obligation is triggered when a covered entity “reasonably believes” that it has experienced a “substantial” cyber incident. Covered entities will also have 24 hours to report any ransom payment, even if the ransomware attack does not fall within the defined coverage of cyber incidents. If a covered entity both pays a ransom and suffers a substantial cyber incident, it may submit a single report to CISA.
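As a rough illustration of the two windows (the timestamps below are hypothetical, and the final CISA rule will define the precise triggers):

```python
from datetime import datetime, timedelta

# Illustrative only: the Act's two reporting windows. The 72-hour clock
# runs from when the entity reasonably believes a substantial covered
# incident occurred; the 24-hour clock runs from the ransom payment.
INCIDENT_WINDOW = timedelta(hours=72)
RANSOM_WINDOW = timedelta(hours=24)

belief_formed = datetime(2022, 5, 2, 9, 30)  # hypothetical timestamp
ransom_paid = datetime(2022, 5, 3, 14, 0)    # hypothetical timestamp

print("Incident report due to CISA by:", belief_formed + INCIDENT_WINDOW)
print("Ransom-payment report due by:", ransom_paid + RANSOM_WINDOW)
```

Unlike the SEC’s proposed four-business-day window, these clocks run in hours, so weekends and holidays provide no relief.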

Covered Cyber Incidents

The Act directs CISA, in the final rule, to include a clear description of the types of substantial cyber incidents that would trigger a reporting obligation. A covered incident, at a minimum, would include a “substantial loss of confidentiality, integrity, or availability of such information system or network, or a serious impact on the safety and resiliency of operational systems and processes;” a disruption of operations due to a denial-of-service attack on an entity’s network or technology systems; or unauthorized access to, or disruption of, operations caused by a compromised supply chain or service provider. The Act adds that the final rule should also highlight considerations such as the sophistication of the tactics used in the attack, the sensitivity of the data at issue, the number of individuals actually or potentially affected by the attack, and the potential impacts on industrial control systems. In finalizing the rule, CISA’s Director will need to issue regulations regarding which entities and incidents are covered; the manner, timing, and form of reports; and the necessary steps for information preservation.

The Expanded Role of the Cybersecurity and Infrastructure Security Agency

The legislation expands CISA’s role in managing cyber reporting for the U.S.’s critical infrastructure sector. Among the responsibilities described in the Act are CISA’s oversight of rulemaking, assessment of reported incidents, enforcement, coordination and information sharing with other federal agencies, and advancement of other Federal cyber initiatives. Once the final rule is enacted, CISA will conduct an outreach and education campaign on current and upcoming cybersecurity initiatives. Some of the initiatives mentioned in the Act are below:

  • Cyber Incident Reporting Council: The Council is to “coordinate, deconflict, and harmonize Federal incident reporting requirements.” It would be led by the Department of Homeland Security in consultation with the Attorney General and other Federal agencies.
  • Ransomware Vulnerability Warning Pilot Program: CISA will be required to implement this program no later than one year after the law’s enactment. The program’s goal, leveraging existing authorities and technologies, will be to develop procedures for identifying information systems at risk for ransomware attacks, and to notify the owners and operators of those vulnerable systems.
  • Ransomware Threat Mitigation Activities: To mitigate ransomware threats, CISA will establish a Joint Ransomware Task Force in consultation with the FBI, the National Cyber Director, and the Attorney General. The task force is “to coordinate an ongoing nationwide campaign against ransomware attacks and identify and pursue opportunities for international cooperation.” In carrying out these responsibilities, there will be a priority on implementing intelligence-driven systems that disrupt cyber criminals. To do so, the task force will consult “with relevant private sector, State, local, Tribal, and territorial governments and international stakeholders to identify needs and establish mechanisms.”

Guidance for Organizations

The Act’s reporting obligations will not take effect until CISA implements a final rule. Companies may get involved in the rulemaking process once CISA releases the proposed rule in the Federal Register. When the proposed rule is issued within the next two years, public comments will be taken into consideration for a period of 30 to 60 days. If a company wishes to notify authorities of malicious cyber activity in the meantime, it can use the FBI’s Internet Crime Complaint Center (IC3) or the CISA Incident Reporting System. While waiting for the rule to be drafted, companies should be taking steps to bolster internal cybersecurity protocols. CISA’s website provides updates, resources, and tools for organizations, as well as individuals, to strengthen their security procedures. The final rule for mandatory reporting may be a few years out, but organizations and individuals should protect themselves and their assets now.

There’s been a lot of debate in mainstream and social media in the past week about major Australian corporates removing pay secrecy clauses from their employment contracts. The Financial Services Union is keeping sustained pressure on employers in that industry to remove the clauses from their employment contracts. The Labor Party has made it known that, if elected, it intends to amend the Fair Work Act to prohibit these kinds of clauses, as part of their commitment to achieving gender pay equity.

The Australian position on pay secrecy clauses is different to that of other leading economies. Pay secrecy clauses have been made legally unenforceable in the United States of America and the United Kingdom, with the worthy aim of decreasing discrimination and disempowerment of employees. In 2021, the European Union also announced a proposal to make pay transparency a binding measure for its member states.

But there are sound reasons for employers to include pay secrecy clauses in employment contracts. As with all complex issues, there are trade-offs that must be considered in arriving at a balanced final position. Requiring employees to keep their pay levels confidential can assist with preventing workplace tension and conflict, particularly in sectors where a significant proportion of pay is discretionary. Pay secrecy clauses can also provide an easy ‘out’ for employees who aren’t comfortable divulging their remuneration to others.

Before making any decisions about removing pay secrecy clauses from your employment contracts, there are some important practical considerations to work through:

  1. What exactly are you prepared to allow? Whilst an employer may be open to removing pay secrecy clauses, there may still be good reasons to moderate employees’ public statements that could potentially damage the employer’s brand or reputation. If appropriate, set clear boundaries around when and with whom employees are permitted to discuss their pay.
  2. Protect employees who don’t want to disclose. How will you ensure that workers who don’t wish to share their private pay information don’t feel pressured to do so? Consider developing a communication policy to guide behaviours and expectations around disclosures.
  3. Quarantine employees’ choices about disclosing their pay from other decision-making processes. Employees must not be dismissed or subject to other adverse action because they have made complaints or enquiries about their pay, or (if pay secrecy prohibitions are introduced) because they have exercised, or propose to exercise, any right to disclose or withhold their pay details. Be clear on the proper process and channels for raising genuine complaints. Consider training your leaders on effectively separating an employee’s disclosure (or not) from other decisions about their access to promotions or other opportunities, disciplinary action or termination, and handling sensitive pay discussions, queries, and complaints appropriately.
  4. Be prepared to answer tough questions about pay gaps. There are good reasons to remove pay secrecy clauses if that is the only way to ensure transparency about pay. Employers can also consider alternative approaches such as providing detailed information about pay that does not identify individual employees. Whichever policy position is taken, arm yourself with knowledge: do pay differentials exist in your workforce? Are there sound merit-based reasons for the gaps, or is gender (or another protected characteristic) the underlying reason, and if so, what is being done to address this? Understanding the reason for gaps in pay, whether based on gender or any other attribute, requires a detailed analysis of data, including a regression analysis that can help tease out the relationships between gender or other attributes and variable matters such as percentage pay rises or discretionary pay (see the sketch after this list).
  5. Be mindful of privacy obligations. Disclosing details about an individual’s pay data for purposes other than those directly related to the employment relationship with that individual (for example, as part of broader pay equality initiatives) without their informed consent may expose the employer to a privacy complaint. If you need to share pay data, can this be done at an aggregated, anonymised level?
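For point 4, here is a minimal sketch of the kind of regression analysis described above, using Python’s statsmodels library. The data frame and column names are invented for illustration; a real analysis would use the employer’s own pay data and a richer set of merit-based controls, collected in line with the privacy obligations noted in point 5.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy pay data; all columns and values are invented for illustration.
df = pd.DataFrame({
    "base_pay":     [70_000, 72_000, 90_000, 85_000, 118_000, 120_000, 95_000, 98_000],
    "gender":       ["F", "M", "F", "M", "F", "M", "F", "M"],
    "job_level":    [1, 1, 2, 2, 3, 3, 2, 2],
    "tenure_years": [1, 2, 4, 3, 8, 7, 5, 5],
})

# Regress log pay on gender plus legitimate merit-based factors. After
# controlling for job level and tenure, a statistically significant
# gender coefficient flags a residual gap that warrants closer review.
model = smf.ols("np.log(base_pay) ~ C(gender) + C(job_level) + tenure_years", data=df).fit()
print(model.summary())
```

On real data, the same approach extends naturally to other protected attributes and to variable pay components such as bonuses or percentage pay rises.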

It’s unlikely that removing pay secrecy clauses will resolve gender pay gaps in and of itself – the question is whether it is a necessary step along the way in light of alternative measures that may not have the same unintended consequences. And when well-executed, pay transparency might also be leveraged as a powerful motivational and cultural factor.

In the second program in the 2022 Trade Secrets Webinar Series, Seyfarth partners Jesse Coleman, Dan Hart, and Caitlin Lane discussed how to identify the greatest threats to trade secrets, provided tips and best practices for protecting trade secrets abroad, and covered enforcement mechanisms and remedies internationally and in the US.

As a follow up to this webinar, our team wanted to highlight:

  • US law provides two key statutes with civil remedies for protecting trade secrets where the misappropriation occurs extraterritorially: ITC Section 337 (19 U.S.C. § 1337) and the Defend Trade Secrets Act (18 U.S.C. § 1837), each with different remedies, applicability requirements, and pros and cons.
  • Employers should ensure that their employment agreements include favorable choice-of-law, venue, and forum-selection clauses to increase the likelihood that any subsequent legal proceeding for trade secret misappropriation occurs in a location that is likely to recognize and protect the company’s intellectual property.
  • Employers should form a well-rounded, strategic approach to global defense of trade secrets and leverage multiple protective mechanisms including restrictive covenants, notice periods, contractual agreements and statutory protections.
  • Restrictive covenants should be tailored for jurisdictional requirements and nuances – one size does not fit all when it comes to protecting trade secrets across multiple countries.
  • Employers should implement a holistic strategy for protecting trade secrets at every stage of the employment relationship, from onboarding to pre-litigation enforcement efforts post-termination, with coordination between HR, Legal, IT, and other stakeholders within the company.
  • Practical measures should also be taken to protect confidential information and trade secrets, including limiting access to sensitive information, using exit interviews, and (provided that applicable privacy laws are followed) monitoring use of company IT resources and conducting forensic investigations of departing employees’ computer devices.