Breach Notification Law: Yahoo’s Breach and the Duty to Disclose

Last week, Yahoo disclosed that in 2014 it suffered one of the largest data breaches in history, with at least 500 million Yahoo accounts compromised.  Given the timing of its acquisition deal with Verizon, Yahoo has been criticized for failing to notify its customers of the breach sooner.  Yahoo reportedly was aware of the loss of information as early as July 2016, the same month it was revealed that Verizon would acquire Yahoo.  Did Yahoo have a duty to disclose the “breach”?

A “breach” generally indicates unauthorized acquisition of data, as compared to an “incident,” in which unauthorized access is merely attempted.  Under California’s breach notification law (Yahoo is headquartered in California), disclosure must be made in the most expedient time possible and without unreasonable delay following the discovery or notification of a breach, unless notification would impede a criminal investigation.  The law requires a business or a government agency that owns or licenses computerized data that includes personal information to notify any California resident whose unencrypted personal information was, or is reasonably believed to have been, acquired by an unauthorized person.  The California Office of Privacy Protection provides guidance that notice should be given within 10 business days.

But the notification requirements for Yahoo are further complicated by the fact that each state’s law covers only the personal information of that state’s residents.  Thus, a company like Yahoo, which has customers in all 50 states, is subject to many separate breach notification laws.  Currently, every state except Alabama, New Mexico, and South Dakota, as well as the District of Columbia, has its own breach notification law.  In states such as Connecticut and New Jersey, and in the U.S. territory of Puerto Rico, notification may be triggered by discovery of unauthorized access alone.

Data breach notification is intended to give individuals early warning so they can take protective action when their personal information has been compromised.  Practically speaking, one can argue that in Yahoo’s case the information was stolen over two years ago, so any unauthorized use could have occurred well before the two-month delay in disclosing the breach.  Nonetheless, breach notification laws nationwide require otherwise.

Ninth Circuit Issues Two Recent Decisions Further Defining Liability Under the Computer Fraud and Abuse Act

In July, the Ninth Circuit Court of Appeals issued two decisions intended to clarify liability under the federal Computer Fraud and Abuse Act, 18 U.S.C. § 1030 (“CFAA”).  The CFAA imposes criminal penalties and civil damages upon whoever “knowingly and with intent to defraud, accesses a protected computer without authorization, or exceeds authorized access, and by means of such conduct furthers the intended fraud and obtains anything of value . . . .”  For a more complete explanation of the CFAA, please click here.  The two cases are United States v. Nosal (Nosal II), No. 14-10037, 2016 U.S. App. LEXIS 12382 (9th Cir. July 5, 2016), and Facebook, Inc. v. Power Ventures, Inc., No. 13-17102, 2016 U.S. App. LEXIS 12781 (9th Cir. July 12, 2016).

In Nosal II, the Ninth Circuit, for a second time, considered the scope of the CFAA as applied to defendant Nosal.  The first time was in United States v. Nosal (Nosal I), 676 F.3d 854 (9th Cir. 2012) (en banc), which held that exceeding the terms of use of a computer to which access was authorized was not a violation of the CFAA.  The court in Nosal I rejected Nosal’s liability for downloading confidential information from his then employer’s database to use at a new enterprise.  Although Nosal was authorized to access the database as a current employee, the downloading violated the employer’s confidentiality and computer use policies.  However, violating such terms of use did not constitute a violation of the CFAA; the court distinguished between access restrictions and use restrictions and held that the “exceeds authorized access” prong of section 1030(a)(4) of the CFAA “does not extend to violations of [a company’s] use restrictions.”  (Id. at 863.)  The court affirmed the district court’s dismissal of the five CFAA counts related to Nosal’s conduct.

Undaunted, the United States Attorney prosecuted Nosal in Nosal II under a different factual scenario.  In addition to accessing and downloading computer material during his employment, Nosal also accessed and downloaded material after his employment terminated.  Even though his former employer had revoked his access to its computers, Nosal enlisted the aid of his former executive assistant, who still had permitted access, to access those computers.

The court in Nosal II concluded that Nosal’s use of the executive assistant to access computers to which he had no permitted access violated the CFAA, holding:

“Without authorization” is an unambiguous, non-technical term that, given its plain and ordinary meaning, means accessing a protected computer without permission.  This definition has a simple corollary:  once authorization to access a computer has been affirmatively revoked, the user cannot sidestep the statute by going through the back door and accessing the computer through a third party.

(Nosal II, 2016 U.S. App. LEXIS 12382 at *4.)

The second case, Facebook, Inc. v. Power Ventures, Inc., No. 13-17102, 2016 U.S. App. LEXIS 12781 (9th Cir. July 12, 2016), applied Nosal I and Nosal II to a specific and complex fact pattern.  In Facebook, the defendant Power Ventures, Inc. (“Power”) operated a social website with the following concept:  “Individuals who already used other social networking websites could log on to Power.com and create an account.  Power would then aggregate the user’s social networking information.  The individual, a ‘Power’ user, could see all contacts from many social networking sites on a single page.  The Power user thus could keep track of a variety of social networking friends through a single program and could click through the central Power website to individual social networking sites.”  (Id. at *4.)

Power instituted a promotional campaign to generate more users for its site.  It did so by encouraging Facebook users to refer Facebook “friends” to Power’s website.  This campaign utilized Facebook to transmit messages both external and internal to Facebook.  Upon becoming aware of Power’s promotional campaign, Facebook transmitted a cease and desist letter to Power instructing Power to terminate its activities.  Facebook also attempted to block Power’s access to Facebook.  Power sought to circumvent the block and continued its promotion.

Facebook filed an action alleging, inter alia, violation of the CFAA.  The district court granted summary judgment in favor of Facebook and awarded damages.  The Ninth Circuit affirmed the district court’s ruling on the CFAA claim while reversing on certain other issues.  The court remanded for reconsideration of appropriate remedies and a recalculation of damages under the CFAA.

In reaching its conclusion, the Ninth Circuit reviewed both Nosal I and Nosal II:

From those cases, we distill two general rules in analyzing authorization under the CFAA.  First, a defendant can run afoul of the CFAA when he or she has no permission to access a computer or when such permission has been revoked explicitly.  Once permission has been revoked, technological gamesmanship or the enlisting of a third party to aid in access will not excuse liability.  Second, a violation of the terms of use of a website – without more – cannot be the basis for liability under the CFAA.

(2016 U.S. App. LEXIS 12781 at *17-18.)

The court ruled that Power’s original access to the Facebook website through Facebook users (who were also Power users) did not violate the CFAA.  The permission of the Facebook user was sufficient to avoid liability.  However, once Facebook served Power with the cease and desist letter, Power no longer had permission to access the Facebook website.  That letter superseded any permission attributable to any Facebook user.

Whether Nosal I, Nosal II and Facebook provide a clear enough road map to computer users and legal counsel as to what constitutes a CFAA violation remains to be seen.  What we do know is that:

(a)     violation of terms of use alone is not a violation of the CFAA;

(b)     one cannot circumvent revocation of the right of access to computers through the use of third parties or other “gamesmanship”; and

(c)     a timely cease and desist letter can revoke permission which third parties may have previously and legitimately provided.

Ninth Circuit Rules on Meaning of “Without Authorization” under Computer Fraud and Abuse Act

Last month, the Ninth Circuit affirmed the criminal conviction of an individual for accessing a computer “without authorization” in violation of the Computer Fraud and Abuse Act (“CFAA”).  U.S. v. Nosal (9th Cir., July 5, 2016).

The CFAA imposes criminal penalties on whoever “accesses a protected computer without authorization, or exceeds authorized access . . .” 18 U.S.C. § 1030.

“Without authorization”, the court ruled, means “accessing a protected computer without permission.”

In Nosal, a former employee of an executive search firm conspired with former colleagues to obtain confidential source lists and client contact data to start a competing search firm. Among other counts, Nosal was charged with violating the CFAA. The question before the three-judge panel was whether Nosal conspired to access a protected computer “without authorization” when he and his accomplices used the login credentials of Nosal’s former assistant to access the search firm’s proprietary information. Affirming Nosal’s conviction, the court held that Nosal did act “without authorization” when he continued to access data after his former employer rescinded permission to access its computer system.

Judge Reinhardt issued a vehement dissent in the case. He argued that the case was about simple password sharing, and the majority opinion “threatens to criminalize all sorts of innocuous conduct engaged in daily by ordinary citizens.” He emphasized that the CFAA is a criminal statute and must be construed more narrowly than a civil statute. A better interpretation of “without authorization”, he urged, is accessing a computer without the permission of either the system owner or a legitimate account holder. Though the facts of this case were distasteful, he noted that the court’s ruling had broader legal implications and thus he could not endorse the majority’s opinion.

Interestingly, this is the second time the Ninth Circuit has interpreted provisions of the CFAA in this case, and the first case touched on the very issues the dissent addressed. In the prior opinion, an en banc panel of the Ninth Circuit ruled on the meaning of “exceeds authorized access.” The court held that “exceeds authorized access” refers to restrictions on access to information, not restrictions on how that information is used. Accordingly, the court determined that Nosal did not violate the CFAA when he had his former colleagues access information from the firm’s confidential databases and send it to him. Those colleagues were authorized to access the data. The CFAA, the court ruled, was not intended to impose criminal liability for violations of private computer use policies. The en banc panel construed the statute narrowly, “so that Congress will not unintentionally turn ordinary citizens into criminals.”

The majority’s recent opinion will likely not be the last word on this issue. Nosal will be filing a Petition for Rehearing and Rehearing en Banc. If rehearing is granted, it remains to be seen whether Judge Reinhardt’s position favoring a narrow construction of the term “without authorization” will be followed.

Regulators Nationwide Weigh in on CPUC Litigation

In May, we posted a blog on litigation filed by telecom providers and trade associations to prevent the California Public Utilities Commission (CPUC) from requiring Plaintiffs to turn over competitively sensitive data to a third party. Plaintiffs allege that disclosure of that data would violate regulations issued by the Federal Communications Commission (FCC) regarding the confidential status of that information. There is now a new party at the table. The National Association of Regulatory Utility Commissioners (NARUC) filed an amicus brief asking the Court to side with the CPUC, and permit disclosure of the data.

For background, after the complaint was filed, the Northern District of California granted Plaintiffs’ motion for a preliminary injunction, blocking the CPUC from sharing the companies’ ostensibly sensitive data with the third party. That order was based on the Court’s finding that the telecom Plaintiffs were likely to prevail on their argument that the CPUC order is preempted by FCC regulations, and because the telecom Plaintiffs “overwhelmingly” demonstrated that they would face irreparable harm if disclosure occurs.

NARUC then filed its amicus brief requesting that the Court deny Plaintiffs’ claims and permit the CPUC order to stand. NARUC framed the question before the Court as follows:

[D]o States have the ability to obtain and to use under state law broadband data, including granular, disaggregated, carrier-specific subscription data, which telecommunications carriers may (or may not) also submit to the FCC on the FCC’s Form 477?

NARUC argues in the affirmative. In particular, NARUC cites a 2010 opinion and order from the FCC that concluded the collection and use of broadband data by states was not preempted by federal law. NARUC also points to the great need for disclosure of data by regulated entities to state regulators, suggesting that it would be poor policy to prevent public utility regulators from accessing the data at issue in this litigation.

Interestingly, NARUC’s amicus brief focuses on the preemption argument and does not attempt to address or reconcile the impetus for the litigation—privacy concerns.

This litigation highlights two areas of tension in the privacy sphere. The first is the tension between ensuring privacy and data security and conducting regulatory activities, whether for the promotion of health, safety, or environmental wellbeing. The second tension is whether privacy protections should stem from the federal government, the states, or the industry itself. Without clear guidelines governing how to balance these competing policies, courts are often asked to decide significant privacy questions through legal doctrine and not the substance of the implicated rights. That is the case here, where the Court is left to grapple with a traditional legal question—whether the FCC’s regulations preempt state regulatory actions—rather than the propriety of requiring disclosure of sensitive information in the context of a public utility regulatory proceeding.

Despite Adoption of EU-US Privacy Shield, Uncertainties Remain

You may recall our previous posts about the drafting and negotiation of the EU-US Privacy Shield, the law designed to implement a data sharing agreement that sets specific privacy standards for companies sharing information between the European Union and the United States.  The Privacy Shield has now been adopted and will go into effect on August 1.  After that date, companies can apply for Privacy Shield certifications.  While it may seem to be time to rejoice for consumers and American companies doing business overseas, quite a few questions remain about the new law.

First, it is uncertain whether the Privacy Shield will be able to withstand judicial scrutiny.  The prior law governing information sharing between the United States and the European Union, the EU-US Safe Harbor program, was invalidated by the Court of Justice of the European Union, necessitating the development of the Privacy Shield in the first place.  Certainly the new law, which the Wall Street Journal has described as “more robust than its predecessor”, was drafted to comport with the EU’s recent General Data Protection Regulation and with that prior ruling in mind.  Nonetheless, there will be legal challenges to the Privacy Shield by consumers.  United States laws on privacy and data security get tougher by the day, but they remain market friendly and fall short of the consumer protection offered by European courts.  Based on history, one cannot say with full comfort that the new law will withstand judicial rigor.

Second, the recent vote on Brexit complicates the situation.  Will a law similar to the Privacy Shield be adopted in Great Britain?  If not, will the US and Great Britain adopt different standards to govern the transfer of information?  Will those standards be higher or lower than the EU’s?  And what impact will discussions between the EU and Great Britain in the privacy realm have on United States companies?  Presently, these questions remain unanswered, although one can certainly presume that some standards will be negotiated and put in place in the near future.

Despite those questions, the adoption of the Privacy Shield still provides companies with some comfort because they now know the standards by which they may operate.  With respect to the EU, companies may now do away with whatever patchwork solution they crafted after the Safe Harbor agreement was abolished, and know the framework for any information sharing going forward.  In addition, those standards will mesh better with existing EU privacy laws.  This is a signal to companies that do business in Europe and seek Privacy Shield certification to review their policies and procedures on the storage and transfer of information.

White House Commission on Cybersecurity Seeks Input from Stakeholders on Future of Digital Landscape

Last week, the White House Commission on Enhancing National Cybersecurity held its third of six scheduled meetings around the nation.  President Obama established the Commission by Executive Order in February 2016 with the goal of “recommending bold, actionable steps that the government, private sector, and the nation as a whole can take to bolster cybersecurity in today’s digital world.”

Key private and public stakeholders were invited to be panelists to discuss three main topics: Addressing Security Challenges to the Digital Economy; Collaborating to Secure the Digital Economy; and Innovating to Secure the Future of the Digital Economy.   Among the invited panelists were executives and security officers from Google, Facebook, the University of California, The William and Flora Hewlett Foundation, and Kleiner Perkins Caufield & Byers, who provided their input and recommendations on research & development, technology, and innovation.

One of the key points addressed by the panel was the trade-off between mandated security standards and innovation.  The Commission asked panelists how companies can be incentivized to keep investing in security instead of simply meeting the minimum standards.  Weighing safety against costs is an issue for any company, but that balancing is particularly pronounced in the fast-moving technology industry where it could be prohibitively expensive to ensure security standards keep up with evolving technology.  In response, a few panelists recommended the creation of a new federal agency to oversee cybersecurity.

Throughout the day, the panelists emphasized the need for education about cyber threats and security, both for current consumers, and the current and future workforce.  Panelists also called for greater transparency and information sharing about cyber attacks and user data requests, and greater international collaboration.

One particularly interesting portion of the day occurred when the Commission asked panelists what the next “cyber-Pearl Harbor” might be, quoting former Secretary of Defense Leon Panetta.  One panelist explained that he was “terrified” of the growth of the Internet of Things, and called for technology to be maintained and secured for the lifetime of the product it is attached to, which in many cases—cars, for example—is 10 years or more.  Another panelist warned about increasing machine intelligence.

Ultimately, as one panelist commented, technology alone will not solve cyber threats; there will also need to be smart policy to both solve the crises of the moment and plan for the future.

An audio recording of the meeting is made available by UC Berkeley’s Center for Long-Term Cybersecurity.

The Commission will deliver its final report to the President by December 1, 2016.

Data Breach Response Costs Continue to Rise

SEC Chair Mary Jo White recently opined that cyber security is the biggest risk facing the United States financial system.  Companies should take heed of that warning in light of the release of the 2016 Cost of Data Breach Study by IBM and the Ponemon Institute, which showed that average response costs for data breaches continue to rise.

According to that study, the total average cost of data breach incidents to companies located in the United States increased from $6.53 million to $7.01 million between 2014 and 2015.  That represents an approximately 7% increase in the average cost of responding to a data breach.  The study observed that, over its 11-year history, there has not been significant fluctuation in response costs.  While this might seem comforting, IBM and the Ponemon Institute note that this indicates another chilling likelihood: “it is a permanent cost organizations need to be prepared to deal with and incorporate in their data protection strategies.”  In addition, the biggest financial consequence to organizations that have experienced a breach is not the cost of responding to the data breach (which is onerous), but the loss of business.
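As a quick sanity check of that percentage (a back-of-the-envelope calculation from the figures above, not a number taken from the study itself):

$$\frac{\$7.01\text{M} - \$6.53\text{M}}{\$6.53\text{M}} \approx 0.073 \approx 7\%$$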

This sobering analysis supports Chair White’s concerns about data security risks and underscores the severity of the problem.  Given the staggering cost of data breaches, a firm should plan well in advance to mitigate its exposure to such costs.  For example, the study also found that while half of all data breaches resulted from a malicious attack, “23 percent of incidents were caused by negligent employees, and 27 percent involved system glitches that included both IT and business process failures.”  That means half of the breaches might have been avoided with proper training and IT protocols.  Given that data breaches aren’t going to go away, companies should consider factors noted in the study that decrease the cost of responding to data breaches, including “[i]ncident response plans and teams in place, extensive use of encryption, employee training, BCM involvement or extensive use of DLP.”  Companies may also wish to consider cyber-insurance policies to help bear the costs associated with data breaches.  But given the alarming costs detailed in the study, companies should keep in mind the old adage that an ounce of prevention is worth a pound of cure.

SEC Fines Morgan Stanley for Data “Breach”

Last week, the SEC fined Morgan Stanley $1 million for two data security failures resulting in the exposure of personal information of 730,000 of its customers. Andrew Ceresney, the director of enforcement at the SEC, stated, “Given the dangers and impact of cyberbreaches, data security is a critically important aspect of investor protection…we expect SEC registrants of all sizes to have policies and procedures that are reasonably designed to protect customer information.” But this wasn’t your typical data breach situation, and it is an important lesson for companies.

That data “breach” occurred when a former employee downloaded customer records, which were then hacked and posted on-line. That former employee has been suspended by the SEC, but the fine against Morgan Stanley is the real story because this wasn’t a breach of the company’s defenses.

The SEC’s fine was based on two control failures.

First, the SEC asserted that Morgan Stanley failed to prevent its employees from accessing customer records they were not authorized to view. While some controls were in place to prevent such access through its internal data portals, a glitch permitted Morgan Stanley employees to access all customer records through a certain type of report.

Second, while the company had a control in place to prevent the use of thumb drives to download customer records, there was no corresponding control for uploading records on-line. Combined, those holes allowed one employee to access customer records and upload them to the internet.

This is an important lesson for companies because unlike most breaches we hear about in the news, this was not a hack of the company itself. Morgan Stanley is being fined for its employee’s bad acts.  Would the SEC fine Morgan Stanley if one of its employees printed customer records and then left them in a briefcase on a subway train? Perhaps.

The takeaway for companies is that they may be liable for cyber theft even where their own external defenses are in good shape and have not been breached. Time to brush up on internal policies and protocols.

Courts Continue to Grapple with Data Breach Claims

Our last few blogs have focused on litigation under the Video Privacy Protection Act, including the recent ruling from the First Circuit in Yershov v. Gannett Satellite Information Network, Inc., 2016 U.S. App. LEXIS 7791 (1st Cir. Apr. 29, 2016).  Elsewhere, courts have been trying to figure out what to do about consumer lawsuits.  In a recent ruling in the multi-district class action litigation arising from the enormous data breach at Anthem Blue Cross, Judge Lucy Koh of the Northern District of California issued an order denying in part a motion to dismiss, permitting those claims to survive.

That lawsuit arose after Anthem Blue Cross suffered a massive data breach in which 80 million of its users had their data hacked and compromised.  Naturally, many lawsuits followed, naming upwards of 40 entities affiliated with Anthem Blue Cross.  The plaintiffs alleged over a dozen state and federal claims, but those myriad causes of action can all be boiled down to a simple premise: the plaintiffs would like to be compensated for costs associated with the data breach.  Blue Cross asserted a number of defenses.  There are two that are of particular interest.

First, on this motion Anthem Blue Cross contended the plaintiffs had no connection with Anthem Blue Cross that would provide standing for a breach of contract action.  Judge Koh rejected this argument.  For plaintiffs under California law, she accepted the argument that individual or group insurance policies sufficed to meet that standard.  More interesting was her ruling on New Jersey breach of contract claims, where she stated that a privacy policy referenced in an informational booklet provided to a consumer was sufficient.  That ruling would seemingly provide consumers quite a bit of latitude to allege a contractual relationship with entities that have suffered a data breach.

Second, Anthem Blue Cross, like all other data breach defendants, has argued that consumers lacked standing because no one has actually been hurt or damaged as a result of the data breach.  This argument has been bolstered because Anthem Blue Cross has been bearing all of the costs and fees to date for credit monitoring and other services.  It is worth referencing because this issue of standing and damages in the data breach context has been a difficult one for courts to solve.  While the claims in the Anthem Blue Cross case survived, other courts weighing this issue have gone the other way.  For example, in a recent Third Circuit decision, Storm, et al. v. Paytime Inc., the Court followed its prior Reilly v. Ceridian decision in affirming a district court ruling that the class plaintiffs did not have standing until stolen data had been used.  The important takeaway on this point is that there really is no single standard.

So what is the right answer?  Should Anthem Blue Cross be forced to pay damages that may be hypothetical?  But isn’t it likely that at some point those consumers will be harmed?  And must those consumers really wait until something bad happens and then file litigation piecemeal?  This dilemma will continue to cause headaches for judges.