In September of this year, with SB 327, California stepped into the vanguard of information-age law by passing a cybersecurity regulation on the Internet of Things. SB 327 adds new sections to the Cal. Civil Code, specifically §1798.91.04 et seq. While this seems to be a good thing, the larger questions are: what does it do, and how far does it reach?

What Does It Do?

In a nutshell, the law requires the manufacturer of a “connected device” to 1) equip it with reasonable security feature(s) appropriate to a) the nature and function of the device; b) the information it may collect, contain, or transmit; and 2) ensure those security features are designed to protect the device and any information within from unauthorized access, destruction, use, modification, or disclosure.

Recognizing the inherent ambiguity that “reasonable security feature(s)” may cause, the drafters of the law fortunately provided some clarification: if the device is subject to authentication outside a local area network, a security feature is deemed reasonable if either the device is equipped with a unique preprogrammed password, or the device requires the user to generate a new means of authentication before access is granted for the first time.

Note that this is a “reasonableness test” just for the authentication aspect of the device. The rest of the requirements in Cal. Civil Code §1798.91.04(a) will still mandate reasonable security beyond just the authentication aspects of the device.
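To make that authentication floor concrete, the logic might look something like the first-boot sketch below. This is only an illustration under assumed names (provision_device, allow_remote_access, and the field names are invented for this post), not language drawn from the statute or from any particular product:

    import secrets

    def provision_device(device_id: str) -> dict:
        """Hypothetical first-boot provisioning reflecting the two alternatives
        described above: a password unique to each device, or a forced
        credential change before first outside-the-LAN access."""
        return {
            "device_id": device_id,
            # Alternative 1: a unique preprogrammed password (never a shared default).
            "password": secrets.token_urlsafe(12),
            # Alternative 2: block remote access until the user sets a new
            # means of authentication.
            "must_change_on_first_login": True,
        }

    def allow_remote_access(device_state: dict, credential_changed: bool) -> bool:
        # Access from outside the local area network is granted only once the
        # user has generated a new credential (or the device never required it).
        return credential_changed or not device_state["must_change_on_first_login"]

Either alternative on its own would satisfy the statute’s authentication safe harbor; the sketch simply shows both side by side.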

How Far Does It Reach?

“Connected device” is defined quite broadly. Under the definition, all the IoT devices we have discussed in previous posts should be covered by the law. The law also makes it apparent that manufacturers are the primary parties subject to the requirements, and it applies to manufacturers located anywhere, even outside of California, if they sell or offer devices for sale in California.

Why Does This Matter?

This law will have far-reaching effects because the world we live in is a connected one. The Internet of Things increasingly influences everyone’s life, and businesses that manufacture devices are increasingly making those devices “connected.”

Until now, no such cybersecurity law existed. The legal landscape governing when OEMs had to incorporate cybersecurity was a veritable wild west. Adoption of this measure now mandates that security be “baked” into the device before any human user intervention. “Reasonable security” is now “table stakes” for anyone selling a smart device in California – which is nearly everyone.

Of course, there is much debate about what “reasonable security” might be. Under SB 327 there is some guidance, but it is still limited. Section 1798.91.04 does provide a floor for authentication requirements with the mandate that either a unique preprogrammed password will be provided or the user will not be able to use the device until the password is changed. However, there is still some question as to what the remaining requirements will need to be to “protect the device and any information within from unauthorized access, destruction, use, modification, or disclosure.” Still, California has taken the vanguard position in regulating IoT devices specifically. The federal government and other states have not looked at this question from a “connected device” perspective. Most other laws imposing cybersecurity requirements speak of a “system,” which can include devices, but can also include other controls (e.g., network security, physical security, etc.).

So Is This a Problem?

Just a few observations to keep in mind:

  • First, this is a California-specific state law. There is no federal law on this issue. This can create preemption and constitutionality questions – adding to the uncertainty of compliance.
  • Second, “reasonable security” outside the authentication protocols of the device is still ambiguous. This leaves businesses either looking to standards like the NIST guidelines, which can be overwhelming, or taking the risk that their security is deemed inadequate “after the fact.”
  • Third, SB 327 expressly carves out third-party software from being subject to this title. However, the interconnectivity of such third-party software may well be the source of a security breach – the NIST guidelines recognize this. As such, is it reasonable security to not consider how a device interacts with third-party software? This approach seems to fail to consider how devices (and software) are built today.

Fortunately, SB 327 does not include a private right of action – so the plaintiffs’ bar will be limited in what it can do. Unfortunately, city and county attorneys do have authority to enforce the law. This means that an activist city attorney may well force a device manufacturer into court.

In any event, SB 327 can be seen as the beginning of a trend in which OEMs’ responsibilities expand beyond merely making sure their devices are safe to also making sure the software inside the device is safe.

At the end of June, the California legislature passed AB 375, the California Consumer Privacy Act of 2018.  The Act contains a number of concepts that would be familiar to those who are working to bring their companies and organizations into compliance with GDPR.  The new law defines a category of “Personal Information” that radically departs from the traditional definition of Personal Data commonly found in various state data privacy laws, which usually ties an individual’s name to other identifiers like a social security number, account number, or other factors.  Instead, the California Act defines “Personal Information” as information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.  It does not, mercifully, include publicly available information, but it still comes closer to a GDPR-like definition of “personal data” than any other US law.

The Act provides California residents some rights that also appear familiar.  For example:

  • Consumers can request a copy of all the Personal Information a business has collected;
  • Consumers have the right to request that the business delete their Personal Information (subject to some exceptions), and a right to direct a company to not share their Personal Information with third parties; and
  • Consumers can request that a business disclose the categories of information it has collected, the sources of information, the purpose for the collection and/or its sale of the information, and the third parties with whom the information is shared.

These certainly sound like concepts that could be referenced as The Right to Access; The Right to Be Forgotten; and Data Portability.

Business requirements include:

  • Meaningful notifications to consumers at the point of contact where Personal Information is collected;
  • Updated online privacy notices to include the types of Personal Information collected, the purpose of collection, and rights information;
  • Implementation of Data Security measures to protect Personal Information;
  • Providing training to employees handling Personal Information or involved in consumer inquiries;
  • The inclusion of provisions in contracts with third parties with whom Personal Information is shared to include data privacy protections and restrictions on disclosure; and
  • The inclusion of a “do not sell my personal information” option on public-facing interfaces and websites that collect personal information. Companies must take measures to not discriminate against users who opt out, but at the same time they can offer price incentives to those who choose to opt in.

The Act takes effect on January 1, 2020.  It has approximately the same two-year “runway” period that GDPR provided in 2016 (leading up to May 25, 2018) for companies to gear up their compliance.  This law has potentially widespread impact, but some of the mechanisms of its application remain unclear, due in some degree to its broadly worded language.  In this way, it is also similar to the GDPR.

The challenge with implementation for large companies is the same as with every other state-level data privacy law – it is often virtually impossible to reliably identify who the “California” consumers are, thereby making it, as a practical necessity, a global requirement for all publicly facing systems and applications and all users.

We recommend that most companies prioritize and stage their compliance today, focusing on GDPR in the short term, but a California (or, as a practical matter, potentially nationwide) compliance strategy should be included in late 2018 and 2019 IT and privacy compliance plans.

Since its enactment a decade ago, the Illinois Biometric Information Privacy Act (BIPA) has seen a recent spike in attention from employees and consumers alike. This is due, in large part, to the technological advancements that businesses use to service consumers and keep track of employee time.

What Is The BIPA?

Intending to protect consumers, Illinois was the first state to enact a statute to regulate use of biometric information. The BIPA regulates the collection, use, safeguarding, handling, storage, retention, and destruction of biometric identifiers and information. The statute defines biometric identifiers to include a retina or iris scan, fingerprint, or scan of hand or face geometry. Furthermore, the statute defines biometric information as any information, regardless of how it is captured, converted, stored, or shared, based on an individual’s biometric identifier used to identify an individual. Any person aggrieved by a violation of the act may sue to recover actual or statutory damages or other appropriate relief. A prevailing party may also recover attorneys’ fees and costs.

Since September of 2017, there have been more than thirty-five class action BIPA lawsuits, with no single industry being targeted. The more commonly sued industries include healthcare facilities, manufacturing, and hospitality.

The drastic increase in litigation is largely attributable to employers’ attempts to prevent “buddy punching,” which occurs when one employee punches in for a co-worker on systems that do not require biometric data to clock in or out. For example, in Howe v. Speedway LLC, the class alleges that defendants violated the BIPA by implementing a finger-operated clock system without informing employees about the company’s policy on the use, storage, and ultimate destruction of the fingerprint data. Businesses engaging in technological innovation have also come under attack from consumers. In Morris v. Wow Bao LLC, the class alleges that Wow Bao unlawfully used customers’ facial biometrics to verify purchases at self-order kiosks.

Recent Precedent

In Rivera v. Google Inc., the District Court for the Northern District of Illinois explained that a “biometric identifier” is a “set of biometric measurements” while “biometric information” is the “conversion of those measurements into a different, useable form.” The court reasoned that “[t]he affirmative definition of “biometric information” does important work for the Privacy Act; without it, private entities could evade (or at least arguably could evade) the Act’s restrictions by converting a person’s biometric identifier into some other piece of information, like mathematical representation or, even simpler, a unique number assigned to a person’s biometric identifier.” Thus, a company could be liable for the storage of biometric information, in any form, including an unreadable algorithm.

More recently, in Rosenbach v. Six Flags, the Illinois Appellate Court, Second District, confirmed that the BIPA is not a strict liability statute that permits recovery for mere violation. Instead, consumers must prove actual harm to sue for a BIPA violation. The court reasoned that the BIPA provides a right of action to persons “aggrieved” by a statutory violation, and an aggrieved person is one who has suffered an actual injury, adverse action, or harm. Vague allegations of harm to privacy are insufficient. The court opined that, if the Illinois legislature intended to allow for a private cause of action for every technical violation of the BIPA, the legislature could have omitted the word “aggrieved” and stated that every violation was actionable. The court’s holding that actual harm is required is consistent with the holdings of federal district courts on this issue.

Damages and Uncertainty

Plaintiffs and their counsel are attracted to the BIPA because it provides for significant statutory damages as well as attorneys’ fees and costs. The BIPA allows plaintiffs to seek $1,000 for each negligent violation, and $5,000 for each intentional or reckless violation, plus attorneys’ fees and costs.

To date, all claims have been filed as negligence claims, and, thus, it is unclear what a plaintiff must show to establish an intentional violation. Similarly, the law is unsettled on whether the statutory damages are awarded per claim or per violation. A per violation rule would exponentially increase a defendant’s potential liability. For example, some plaintiffs are currently seeking $1,000 or $5,000 for each swipe of a fingerprint to clock in or out.

How To Protect Your Business

To avoid a costly mistake when retaining biometric data, businesses should:

  1. provide employees or consumers with a detailed written policy that includes why and how the data will be collected, stored, retained, used, and destroyed;
  2. require a signed consent before collecting the data;
  3. implement a security protocol to protect the data; and
  4. place an appropriate provision in vendor contracts (e.g., for data storage) to require vendors to adhere to the law and report any data breaches.

Consent can be obtained in different ways. For example, employers may condition employment upon an individual’s consent to a data retention policy, and companies can require consumers to accept a click-through consent before accessing a company’s website or application.
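By way of illustration only, a biometric enrollment workflow might gate collection on a recorded, affirmative consent along the lines of the sketch below. The names (ConsentRecord, enroll_fingerprint, and so on) are hypothetical placeholders, not terms drawn from the statute or from any vendor’s product:

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class ConsentRecord:
        subject_id: str
        policy_version: str   # the written retention/destruction policy shown to the individual
        consented_at: datetime

    def enroll_fingerprint(subject_id: str, consent: Optional[ConsentRecord], template: bytes) -> None:
        """Hypothetical enrollment gate: refuse to collect or store biometric
        data unless an affirmative consent has been captured first."""
        if consent is None or consent.subject_id != subject_id:
            raise PermissionError("No recorded consent; biometric collection blocked.")
        store_template(subject_id, template, consent)

    def store_template(subject_id: str, template: bytes, consent: ConsentRecord) -> None:
        # Placeholder for encrypted storage tied to a retention/destruction schedule.
        print(f"Stored template for {subject_id} under policy v{consent.policy_version} "
              f"(consent given {consent.consented_at.isoformat()})")

The same gate works for an employee time clock or a consumer kiosk; the point is simply that collection cannot proceed without a consent record tied to the policy the individual actually saw.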

For questions or additional information, please contact Esther Slater McDonald at emcdonald@seyfarth.com or Paul Yovanic Jr. at pyovanic@seyfarth.com.

Seyfarth Shaw Offers Data Privacy & Protection in the EU-U.S. Desktop Guide and On-Demand Webinar Series

On May 25, 2018, the EU General Data Protection Regulation (“GDPR”) will impose significant new obligations on all U.S. companies that handle personal data of any EU individual. U.S. companies can be fined up to €20 million or 4% of their global annual revenue for the most egregious violations. What does the GDPR’s upcoming effective date mean for your business?

Seyfarth’s eDiscovery and Information Governance (eDIG) and Global Privacy and Security (GPS) practitioners are pleased to announce the release of Data Privacy & Protection in the EU-U.S.: What Companies Need to Know Now, which describes GDPR’s unique legal structure and remedies, and includes tips and strategies in light of the GDPR’s upcoming effective date.

How to Get Your Desktop Guide:

To request the Data Privacy & Protection in the EU-U.S. Desktop Guide as a pdf or hard copy, please click the button below:

GDPR Webinar Series

From August through October of 2017, Seyfarth Shaw’s attorneys provided high-level discussions on risk assessment tools and remediation strategies to help companies prepare for and reduce the cost of EU GDPR compliance. Each segment is one hour long and can be accessed on-demand at Seyfarth’s Carpe Datum Law Blog and The Global Privacy Watch Blog.

For updates and insight on GDPR, we invite you to click here to subscribe to Seyfarth’s Carpe Datum Law Blog and here to subscribe to Seyfarth’s The Global Privacy Watch Blog.

Cross-posted from Employment Law Lookout.

Seyfarth Synopsis:  A string of recent class action lawsuits regarding businesses’ use of employees’ biometric data should put employers on heightened alert regarding compliance with various state biometric privacy laws.

As biometric technology has become more advanced and affordable, more employers have begun implementing procedures and systems that rely on employees’ biometric data. “Biometrics” are measurements of individual biological patterns or characteristics such as fingerprints, voiceprints, and eye scans that can be used to quickly and easily identify employees.  However, unlike social security numbers or other personal identifiers, biometrics are biologically unique and, generally speaking, immutable.  Thus, unlike a bank account or a social security number, which can be changed if it is stolen, biometric data, when compromised, cannot be changed or replaced, leaving an affected individual without recourse and at a heightened risk for identity theft.  Given the serious repercussions of compromised biometric data, a number of states have proposed or passed laws regulating the collection and storage of biometric data.  And plaintiffs’ attorneys are taking notice, as the number of class action lawsuits in this area has surged in recent months.

Currently, there are three states that have statutes regulating the collection and storage of biometric data: Illinois, Texas, and Washington.  In 2008, Illinois passed the Biometric Information Privacy Act (“BIPA”).  Texas followed suit in 2009, and Washington passed its biometric privacy law in 2017.

When you bring to mind someone “hacking” a computer, one of the images that likely comes up is a screen of complex code designed to crack through your security technology.  While there is a technological element to every security incident, the issue usually starts with a simple mistake made by one person.  Hackers understand that it is far easier to trick a person into providing a password, executing malicious software, or entering information into a fake website than to crack an encrypted network — and hackers prey on the fact that you think “nobody is targeting me.”

Below are some guidelines to help keep you and your technology safe on the network.

General Best Practices

Let’s start with some general guidelines on things you should never do with regards to your computer or your online accounts.

First, never share your personal information with any individual or website unless you are certain you know with whom you are dealing.  Hackers often will call their target (you) pretending to be a service desk technician or someone you would trust.  The hacker then asks you to provide personal information such as passwords, login IDs, computer names, etc., all of which can be used to compromise your accounts.  The best thing to do in this case, unless you are expecting someone from your IT department to call you, is to politely end the conversation and call the service desk back on a number provided to you by your company.  Note, this type of attack also applies to websites.  Technology exists for hackers to quickly set up “spoofed” websites, or websites designed to look and act the same as legitimate sites with which you are familiar.  In effect this is the same approach as pretending to be a legitimate IT employee; however, here the hacker entices you to enter information (username and password) into a bogus site in an attempt to steal the information.  Be wary of links to sites that are sent to you through untrusted sources or email.  If you encounter a site that doesn’t quite look right or isn’t responding the way you expect it to, don’t use the site.  Try to access the site through a familiar link.
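As a rough sketch of that last point (the expected domain below is just a placeholder), a link can be checked against the domain you actually intend to visit before you use it:

    from urllib.parse import urlparse

    def looks_like_expected_site(link: str, expected_domain: str) -> bool:
        """Return True only if the link's host is the expected domain or one of
        its subdomains. Look-alike hosts such as 'example.com.evil.net' fail."""
        host = (urlparse(link).hostname or "").lower()
        expected = expected_domain.lower()
        return host == expected or host.endswith("." + expected)

    # Placeholder example:
    # looks_like_expected_site("https://login.example.com.evil.net/reset", "example.com")  -> False

Nothing about this check is a substitute for caution, but it illustrates why reading the full host name of a link matters before you click.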

Cross-posted from Carpe Datum Law

On May 25, 2018, the EU General Data Protection Regulation (“GDPR”) will impose significant new obligations on all U.S. companies that handle personal data of any EU individual. U.S. companies can be fined up to €20 million or 4% of their global annual revenue for the most egregious violations. What does the GDPR’s upcoming effective date mean for your business?

Our experienced eDiscovery and Information Governance (eDIG) and Global Privacy and Security (GPS) practitioners will present a series of four one-hour webinars from August through October of 2017. The presenters will provide a high-level discussion of risk assessment tools and remediation strategies to help companies prepare for and reduce the cost of EU GDPR compliance.

The General Data Protection Regulation is coming, and along with it, a significant expectation of increased harmonization in the privacy rules across the EU. Considering the 60-plus articles which directly impose obligations on controllers and processors, this isn’t an unreasonable sentiment. However (as is often the case with the EU), reality is a bit more complicated than what the expectations reflect.

The complexity that remains even under the GDPR stems from what are known as “opening clauses”. These clauses permit a Member State to modify the provisions of the Article in which the clause resides. In effect, the opening clauses permit the Member State to introduce a more restrictive application of the GDPR obligation via local legislation.

These opening clauses are particularly important to note as there are a number of them (around 30% of the directly applicable Articles have opening clauses), and many of them address an already complicated area of data protection law – employment. While there are a number of companies that have a large consumer impact in the EU, there are just as many (if not more) that have workers in the EU, or have clients who have workers in the EU. As a consequence, the implementation of the GDPR doesn’t fully mitigate the patchwork quilt of local law when it comes to labor & employment law. This is because of both the opening clauses in a number of related Articles and the plain text of Article 88.

The lack of consistency in HR-related data protection is particularly concerning with the advances in workforce management, monitoring, and the use of personal devices in the workplace (e.g. Bring Your Own Device, or “BYOD” environments). One of the ways that the regulators have attempted to address this very real issue around inconsistent GDPR obligations is with an update to the 2001 Article 29 Working Party opinion on data protection of employees. The new opinion, published on 23 June 2017, provides an update to the recommendations which were put in place prior to the age of social media and pervasive computing (i.e. Internet of Things).

While not mandatory, the Opinion does operate somewhat as a roadmap to the way regulators in the EU will consider enforcement – both in breach situations and in accountability situations (i.e., when an entity has to “show” how it is compliant). The Opinion is also instructive, as much of the analysis revolves around the concept of “proportionality”.

This balancing of the legitimate interests between employees and employers was not a commonly used method of legitimizing processing under Directive 95/46/EC and its local implementing legislation. However, it seems that this is the direction the Working Party is taking.  This may be seen as both a good and bad situation. On one hand, it indicates that the regulators are starting to understand the complexity of the modern workplace, and how rigid bright-line rules won’t really work. On the other hand, it would seem to require a significant amount of analysis by data protection experts (which is subsequently documented) showing the balance of interests doesn’t harm the employee.

In any event, at least in the realm of employment law, the GDPR isn’t going to be quite the panacea that many of us were hoping for. It is still going to be a complex, difficult to manage, area of law for the foreseeable future.

The 2017 edition of The Legal 500 United States recommends Seyfarth Shaw’s Global Privacy & Security Team as one of the best in the country for Cyber Law (including data protection and privacy). In addition, based on feedback from corporate counsel, the co-chairs of Seyfarth’s group, Scott A. Carlson and John P. Tomaszewski, and Seyfarth partners Karla Grossenbacher (head of Seyfarth’s National Workplace Privacy Team) and Richard D. Lutkus were recommended in the editorial. Richard Lutkus is also listed as one of 14 “Next Generation Lawyers.”

The Legal 500 United States is an independent guide providing comprehensive coverage on legal services and is widely referenced for its definitive judgment of law firm capabilities.

Cross-posted from Carpe Datum Law

Recently, a widespread global ransomware attack has struck hospitals, communications companies, other types of businesses, and government offices around the world, seizing control of affected computers until the victims pay a ransom.  This widespread ransomware campaign has affected various organizations, with reports of tens of thousands of infections in as many as 99 countries, including the United States, United Kingdom, Spain, Russia, Taiwan, France, and Japan.  The software can run in as many as 27 different languages.  The latest version of this ransomware variant, known as WannaCry, WCry, or Wanna Decryptor, was discovered the morning of May 12, 2017, by an independent security researcher and has spread rapidly.

The risk posed by this ransomware is that it enumerates any and all of your “user data” files like Word, Excel, PDF, PowerPoint, loose email, pictures, movies, music, and other similar files.  Once it finds those files, it encrypts that data on your computer, making it impossible to recover the underlying user data without providing a decryption key.  Also, the ransomware is persistent, meaning that if you create new files on the computer while it’s infected, those will be discovered by the ransomware and encrypted immediately with an encryption key.  To get the decryption key, you must pay a ransom in the form of Bitcoin, which provides the threat actors some minor level of anonymity.  In this case, the attackers are demanding roughly $300 USD.  The threat actors are known to choose amounts that they feel the victim would be able to pay in order to increase their “return on investment.”

The ransomware works by exploiting a vulnerability in Microsoft Windows.  The working theory right now is that this ransomware was based on the “EternalBlue” exploit, which was developed by the U.S. National Security Agency and leaked by the Shadow Brokers on April 14, 2017.  Despite the fact that Microsoft had patched this particular vulnerability in March 2017, many Windows users had still not installed the security patch, and all Windows versions preceding Windows 10 are subject to infection.

The spread of the malware was stemmed on Saturday, when a “kill switch” was activated by a researcher who registered a previously unregistered domain to which the malware was making requests.  However, multiple sources have reported that a new version of the malware had been deployed, with the kill switch removed.  At this time, global malware analysts have not observed any evidence to substantiate those claims.

You should remain vigilant and do the following:

  • Be aware and have a security-minded approach when using any computer. Never click on unsolicited links or open unsolicited attachments in emails, especially from sources you do not already know or trust.
  • Ensure that your antivirus and anti-malware are up-to-date.
  • Apply Security Updates! Enable automatic updates and reboot weekly.  Systems that are receiving automatic updates should already be protected against this malware.  If you aren’t sure, visit https://support.microsoft.com/en-us/help/3067639/how-to-get-an-update-through-windows-update
  • Back up your data! The main risk of ransomware is losing your data.  If you perform regular backups, you won’t have to worry about ransomware.  Make sure you utilize a backup system that is robust enough to have versioning so that unencrypted versions of your files are available to restore.  Make sure your backup system isn’t overwriting your unencrypted backups with the encrypted ones! (A minimal sketch of what versioning looks like follows this list.)
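On the backup point, “versioning” simply means each backup run lands in its own dated folder rather than overwriting the last one, so an unencrypted copy survives even if the live files are later encrypted. A minimal, hypothetical sketch (the paths and the function name are placeholders):

    import shutil
    from datetime import datetime
    from pathlib import Path

    def versioned_backup(source: str, backup_root: str) -> Path:
        """Copy `source` into a new timestamped folder under `backup_root`.
        Because every run writes to a fresh folder, older (unencrypted)
        versions are never overwritten by a later, possibly infected, run."""
        stamp = datetime.now().strftime("%Y-%m-%d_%H%M%S")
        destination = Path(backup_root) / stamp
        shutil.copytree(source, destination)
        return destination

    # Placeholder paths:
    # versioned_backup("C:/Users/me/Documents", "E:/backups/documents")

A real backup tool adds pruning and off-machine storage, but the versioning idea is the part that matters against ransomware.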

If your organization is the victim of a ransomware attack, please contact law enforcement immediately.

  1. Contact your FBI Field Office Cyber Task Force immediately to report a ransomware event and request assistance. These professionals work with state and local law enforcement and other federal and international partners to pursue cyber criminals globally and to assist victims of cybercrime.
  2. Report cyber incidents to US-CERT and the FBI’s Internet Crime Complaint Center.