FTC Commissioner Brill Urges State AGs to Up the Ante

This post was written by Divonne Smoyer and Christine Czuprynski.

Businesses that think they know what privacy issues are on the minds of the state attorneys general (AGs) should be aware that AGs are being urged to take action, either on their own or in concert with the FTC, on key cutting-edge privacy issues. At a major meeting of state AGs this week at the Conference of Western Attorneys General, FTC Commissioner Julie Brill, one of the highlighted speakers at the event, emphasized the importance of the AGs’ role in privacy regulation, and encouraged AGs to collaborate and cooperate on privacy investigations consistent with FTC efforts.

Commissioner Brill, a former assistant AG in two influential state attorney general offices, Vermont and North Carolina, outlined for the AGs several high-level privacy priorities for the FTC, including: (1) user-generated health information; (2) the Internet of Things; and, (3) mobile payments and mobile security. She invited the states to follow these and other privacy issues, and to complement the FTC’s actions in these areas in appropriate ways.

Also a focus: the Commission’s “Big Data” data broker report. Commissioner Brill emphasized her concerns about data broker practices, including their use of terms to describe and categorize individuals, such as “Urban Scramble,” “Mobile Mixers,” “Rural Everlasting,” and “Married Sophisticates.” She stressed that the information gathered by data brokers about these groups may allow businesses to make inferences about people, which in turn could impact access to credit, and in other ways. She pointed out that the FTC unanimously called for legislation to increase transparency and provide consumers with meaningful choices about how their data is used.

Building on her comments about data brokers, Commissioner Brill voiced concerns about the United States’ sectoral approach to privacy law, stressing the need to fill gaps in areas outside those sector-specific laws. With Congress focused elsewhere on privacy issues, state action may be the best option for filling those gaps. This is not the first time Commissioner Brill has called on the states to take decisive action, and it won’t be the last.

Finally, Commissioner Brill addressed the FTC’s case against Wyndham in particular, noting that the FTC is aggressively fighting challenges to its Section 5 authority. She reminded the states that they have an interest in this fight given that state UDAP statutes share a common blueprint as so-called “mini-FTC Acts,” and invited collaboration on future challenges.

It is likely that many of the states will take action consistent with Commissioner Brill's urging.

UK set to implement emergency Data Retention and Investigatory Powers Bill

This post was written by Cynthia O'Donoghue, Angus Finnegan and Kate Brimsted.

In April, the Court of Justice of the European Union (‘Court’) declared Directive 2006/24/EC on the Retention of Data to be invalid, creating uncertainty for telecommunications operators across the region. In a controversial move, the UK Government has now passed the Data Retention and Investigatory Powers Act 2014 (‘Act’) using emergency procedures.

Formulated in 2006, the Directive aimed to harmonise the laws of Member States in relation to the retention of data. It introduced an obligation on telecommunications operators to retain a wide range of traffic and location data, which could then be accessed by national authorities for the purpose of detecting and investigating serious crime. The Directive was implemented in the UK through the Data Retention (EC Directive) Regulations 2009.

In its judgment, the Court stated that the obligation to retain communications data, and the ability of national authorities to access them, constituted an interference with both Articles 7 and 8 of the Charter of Fundamental Rights. Whilst data retention satisfied an objective of general interest, the interference was neither proportionate nor limited to what was strictly necessary. The Court was also concerned that the data collected “may allow very precise conclusions to be drawn concerning the private lives of the persons whose data has been retained.”

The Act seeks to maintain the status quo by preempting any legal challenge to the Regulations, and allows the Secretary of State to issue a notice requiring the retention of all data, or specific categories of data, for a period of 12 months. Whilst the effect of the Act is largely similar to its predecessor, the language used is more expansive and appears to be capable of encompassing a broader range of data.

The Act also amends certain provisions of the Regulation of Investigatory Powers Act 2000, allowing for the extra-territoriality of warrants in certain circumstances. This is a major step not only for UK interception powers, but for interception powers globally. Last month, we reported that Microsoft would continue to challenge a U.S. court ruling that effectively allowed an extra-territorial warrant to be issued; it appears that a legal basis for similar powers may be introduced through the back door in the UK.

It is unclear whether the Act will be a temporary piece of legislation, staying in place until a more permanent solution is implemented at EU level, or whether it will be permanent. However, one positive effect will be that telecommunications operators will know what their retention obligations are. That is not the case in almost all other Member States at present.

Has Facebook been evil? It's down to the regulators to decide

This post was written by Cynthia O'Donoghue and Kate Brimsted.

In June, Facebook came under public scrutiny after it was revealed that the company carried out research in 2012 that manipulated the News Feeds of 689,000 users. Several regulators are now poised to investigate Facebook’s conduct.

The study exposed users to a greater proportion of either positive or negative content in their News Feeds in order to observe the effect on the way they used the site. It found that “emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness.”

Facebook’s behavior will now be scrutinized by data protection regulators, with the UK’s Information Commissioner’s Office indicating on 1 July that it will work with the Irish Data Protection Commissioner to learn more about the circumstances surrounding the research. The regulators are likely to be particularly interested in the terms of use and privacy policy that applied at the time of the research, and whether they contained adequate notices.

Meanwhile, on 3 July, the Electronic Privacy Information Center (‘EPIC’) filed a formal complaint with the U.S. Federal Trade Commission, requesting that the regulatory body undertake an investigation of Facebook’s practices. The FTC has not yet responded to this request.

Although perhaps an extreme example, this issue highlights the challenges that organisations can face when using data for a purpose that goes beyond what users would expect. Given the mysterious algorithms that underlie what any Facebook user sees (contrary to common belief, it is not simply a chronological list of activities), it is arguable that the issue here arises out of functionality that is not far removed from Facebook’s everyday operations. It will be interesting therefore to see whether the regulators take any robust action.

Italian Data Protection Authority issues new EU guidelines

This post was written by Cynthia O’Donoghue, Kate Brimsted, and Matthew N. Peters.

In early May the Italian data protection authority (“Garante”) issued “Simplified Arrangements to Provide Information and Obtain Consent Regarding Cookies” (“Guidelines”).  These are intended to provide clarity on the application of Legislative Decree No. 69/2012 (the “2012 Act”), which implemented the EU Cookie Directive in Italy.

The Guidelines synthesize the findings of a public consultation and set out simple methods for informing website users about the use of cookies and procuring their consent.

Key topics include:

i) Distinguishing technical cookies from profiling cookies: technical cookies (browsing/session cookies, first-party analytics cookies and functional cookies) only require users to be clearly informed, while profiling cookies, which create a user profile and allow the website operator and third parties to carry out marketing and promotional activities, require users’ consent.

ii) A ‘double decker’ approach to informing users and obtaining consent: summary cookie information is provided by means of a ‘banner’ on a website landing page, with more detailed information included in a full privacy notice that is linked from the banner.

iii) Linking to the consent and privacy notices of any third parties that also place cookies on a user’s device, so users remain fully informed and retain their ability to consent.

iv) Implementation and sanctions: Garante has given data controllers one year from the date of publication of the Guidelines to meet these requirements. Failure to do so carries a range of sanctions, including a maximum fine of €300,000 and ‘naming and shaming’.
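For website operators, the distinction the Guidelines draw can be restated as a simple gate: technical cookies may be set once the user has been clearly informed, while profiling cookies need prior, explicit consent. The following TypeScript sketch illustrates that logic only; the type and function names are hypothetical, not anything prescribed by the Garante.

```typescript
// Hypothetical sketch of the consent gate described in the Guidelines:
// technical cookies require only clear notice; profiling cookies require
// the user's prior, explicit consent.
type CookieCategory = "technical" | "profiling";

interface ConsentState {
  noticeShown: boolean;      // summary banner displayed on the landing page
  profilingConsent: boolean; // user affirmatively accepted profiling cookies
}

function maySetCookie(category: CookieCategory, state: ConsentState): boolean {
  if (category === "technical") {
    // Browsing/session, first-party analytics and functional cookies:
    // the user only needs to be clearly informed.
    return state.noticeShown;
  }
  // Profiling cookies: notice plus explicit consent.
  return state.noticeShown && state.profilingConsent;
}
```

Under this gating, a first-party analytics cookie could be set as soon as the banner is displayed, but an advertising cookie would have to wait for an affirmative act by the user.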

European Commission Releases Cloud Computing Service Level Agreements

This post was written by Cynthia O’Donoghue and Kate Brimsted.

Back in 2012, the European Commission (‘Commission’) adopted the Cloud Computing Strategy to promote the adoption of cloud computing and ultimately boost productivity. In June 2014, the Cloud Select Industry Group – Subgroup on Service Level Agreements published Standardisation Guidelines for Cloud Service Level Agreements (‘Guidelines’) as part of this strategy.

To achieve standardisation of Service Level Agreements (‘SLAs’), the Guidelines call for action “at an international level, rather than at national or regional level”, and cite three main concerns. Firstly, SLAs are usually applied over multiple jurisdictions, and this can result in the application of differing legal requirements. Secondly, the variety of cloud services and potential deployment models necessitate different approaches to SLAs. Finally, the terminology used is highly variable between different service providers, presenting a difficulty for cloud customers when trying to compare products.

A number of principles are put forward to assist organisations through the development of standard agreements, including technical neutrality, business model neutrality, world-wide applicability, the use of unambiguous definitions and comparable service level objectives, standards and guidelines that span customer types, and the use of proof points to ensure the viability of concepts.

The Guidelines also cover the common categories of service level objectives (‘SLOs’) typically covered by SLAs relating to performance, security, data management and data protection.  In particular, SLOs cover availability, response time, capacity, support, and end-of-service data migration, as well as authentication and authorization, cryptography, security incident management and reporting, monitoring, and vulnerability management.  Some of the important data-management SLOs cover data classification, business continuity and disaster recovery, as well as data portability. The personal data protection SLOs address codes of conduct, standards and certification, purpose specification, data minimization, use, retention and disclosure, transparency and accountability, location of the personal data, and the customer’s ability to intervene.

The Commission hopes the Guidelines will facilitate relationships between service providers and customers, and encourage the adoption of cloud computing and related technologies.

European Commission releases communication on building a data-driven economy, calling for a rapid conclusion to data-protection reform

This post was written by Cynthia O'Donoghue and Kate Brimsted.

In July, the European Commission (‘Commission’) published a communication titled “Towards a thriving data-driven economy” (‘Communication’), setting out the conditions that it believes are needed to establish a single market for big data and cloud computing. The Communication recognizes that the current legal environment is overly complex, creating “entry barriers to SMEs and [stifling] innovation.” In a press statement, the Commission also called for governments to “embrace the potential of Big Data.”

The Communication follows the European Council’s conclusions of 2013, which identified the digital economy, innovation and services as potential growth areas. The Commission recognizes that for “a new industrial revolution driven by digital data, computation and automation,” the EU needs a data-friendly legal framework and improved infrastructure.

Citing statistics about the amount of data being generated worldwide, the Commission believes that reform of EU data-protection laws and the adoption of the Network and Information Security Directive will ensure a “high level of trust fundamental for a thriving data-driven economy.” To this end, the Commission seeks a rapid conclusion to the legislative process.

The Commission’s vision of a data-driven economy is founded on the availability of reliable and interoperable datasets and enabling infrastructure, facilitating value and using Big Data over a range of applications.

Achieving a data-driven economy will require coordination among Member States and the EU. The key framework conditions include digital entrepreneurship, open data incubators, development of a skills base, a data market monitoring tool, the identification of sectorial priorities, and the availability of infrastructure for a data-driven economy, alongside regulatory issues relating to consumer and data protection, including data-mining and security.

In an atmosphere of increasingly complex regulation anticipated by the Draft Data Protection Regulation and rulings of Europe’s senior courts, a positive slant on the use of data should be refreshing to organisations that depend on it in their operations. The test for the recommendations will be in how the Commission and the EU seek to implement them.

Apps and Data Privacy - New Guidelines from the German DPAs

This post was written by Dr. Thomas Fischl and Dr. Alin Seegel.

Under the auspices of the Bavarian state data protection authority, the so-called Düsseldorfer Kreis (an association of all German data privacy regulators for the private sector) published guidelines for developers and providers of mobile apps on June 23.  As mobile applications increasingly become a focus for regulators, the guide sets out the data privacy and technical requirements that apply to app development and operation, and provides practical examples.

In spring, the Bavarian data privacy regulatory agency had randomly selected 60 apps for closer examination. In the process, the agency looked at privacy notices and compared them with the type of data that, at first glance, was transmitted.  In its conclusion, the agency noted that “every app provides some data privacy information, but that this information cannot be adequately reviewed.”  Based on this finding, the agency has more closely examined 10 apps, and subsequently created an orientation guide for app-developers and app-providers.

Among other things, the 33-page guide addresses the applicability of German data privacy laws, the legal bases permitting the collection and processing of personal data in the context of operating a mobile application, technical data privacy, and the notification obligations to be adhered to by the app provider. In addition to the legal notice, the latter include an app-specific privacy statement and other legal obligations.

With regard to app development, the German DPAs’ guide recommends the use of privacy-protective default settings (“privacy by default”) to ensure that the app can later be offered without deficiencies in data privacy.
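In practice, "privacy by default" means every optional data-collecting feature starts switched off until the user opts in. A minimal TypeScript illustration follows; the guide does not prescribe any particular implementation, and all setting names here are hypothetical.

```typescript
// Hypothetical illustration of "privacy by default": an app's optional
// data-collecting features all start disabled; the user must opt in.
interface AppPrivacySettings {
  shareLocation: boolean;
  sendUsageAnalytics: boolean;
  personalizedAds: boolean;
}

// Returns the most privacy-protective configuration, so the app can be
// installed and used before the user changes anything.
function defaultPrivacySettings(): AppPrivacySettings {
  return {
    shareLocation: false,
    sendUsageAnalytics: false,
    personalizedAds: false,
  };
}
```

The design point is that the factory, not the user, bears the burden: a user who never opens the settings screen ends up with the least data collection, not the most.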

Regarding technical data privacy, the guide elaborates on secure data transmission, as well as the application’s access to the location data of the respective device.

In addition to the above aspects, the guide addresses specific issues arising during the development of mobile applications, such as the integration of functions for payments or apps for young people and children.

For the future, regulators can be expected to be even more concerned with infringements related to apps, and will also be expected to initiate procedures to impose fines. The guidelines are a must-read for every app developer making apps available in Germany and throughout Europe.

U.S. extraterritorial data warrants: yet another reason for swift Data Protection reform, says EU Commission

This post was written by Kate Brimsted.

In May, we reported that a U.S. magistrate judge had upheld a warrant requiring Microsoft to disclose emails held on servers in Ireland to the U.S. authorities. The ruling has now attracted the attention of Brussels, with the Vice-President of the European Commission, Viviane Reding, voicing her concern.

Microsoft had argued before the court that the warrant, which was issued under the Stored Communications Act, should be quashed. This was because it amounted to an extraterritorial warrant, which U.S. courts were not authorised to issue under the Act. In summary, the court ruled that the warrant should be upheld, noting that otherwise the U.S. government would have to rely on the “slow and laborious” procedure under the Mutual Legal Assistance Treaty, which would place a “substantial” burden on the government.

In a letter to Sophie in’t Veld, a Dutch MEP, Ms Reding noted that the U.S. decision “bypasses existing formal procedures”, and that the Commission is concerned that the extraterritorial application of foreign laws may “be in breach of international law”. In light of this, Ms Reding states that requests should not be directly addressed to companies, and that existing formal channels such as the Mutual Legal Assistance Treaty should be used in order to avoid companies being “caught in the middle” of a conflict of laws. She also advocates that the EU institutions should work towards the swift adoption of the EU data protection reform.  Ms Reding further reported that the Council of Ministers has agreed with the principle reflected by the proposed Regulation – and consistent with the recent Google Spain decision – that “EU rules should apply to all companies, even those not established in the EU (territorial scope), whenever they handle personal data of individuals in the EU”.

Florida Strengthens Data Breach Notification Law

This post was written by Divonne Smoyer and Christine N. Czuprynski.

Florida’s new data breach notification law, effective July 1, 2014, follows a recent trend of expanding the definition of personal information and requiring entities to notify state attorney general offices or other regulators. The Florida Information Protection Act, signed into law June 20, repeals the existing data breach notification law and imposes new requirements on covered entities.

First, the definition of personal information has been expanded. Personal information includes those data points that are present in most data breach notification laws – an individual’s name in combination with Social Security number, driver’s license number, or financial account number with its corresponding security code or password – but also includes medical history and health insurance policy number. In addition, the definition now includes a user name or email address in combination with a password or some other information that allows access to an online account.

The Florida law requires notification to be made to the affected individuals, the state Department of Legal Affairs with the attorney general’s office, and credit reporting agencies, under certain circumstances. Notification to individuals and to the attorney general must occur within 30 days after determination of the breach or reason to believe a breach occurred. Florida already allows an entity to conduct a risk-of-harm analysis to determine if notification is required, and the new law retains that right. An entity is not required to notify individuals if it “reasonably determines that the breach has not and will not likely result in identity theft or any other financial harm to the individuals whose personal information has been accessed.” That determination must be documented in writing and maintained for five years, and must be provided to the attorney general within 30 days. If an entity determines that notification to individuals is required, such notification should include the date of the breach, a description of the information compromised, and contact information for the entity.

Notification to the attorney general must include a description of the breach, the number of Floridians affected, information regarding any services being offered, a copy of the notice, and contact information for an individual who can provide additional information. Upon request, an entity must also provide a copy of any police report or incident report, as well as a computer forensic report and internal policies relating to breaches. These sensitive documents – forensic reports and internal policies – do not have to be disclosed in any other state.

The new law also requires entities to take reasonable measures to protect and secure data in electronic form containing personal information.

Plaintiffs Take Another Blow In Video Privacy Protection Act (VPPA) Class Action Against Hulu and Facebook

This post was written by Lisa B. Kim and Paul Bond.

On June 17, 2014, Magistrate Judge Laurel Beeler of the Northern District of California denied class certification for the proposed class of Hulu and Facebook users alleging that their personal information was transmitted to Facebook in violation of the Video Privacy Protection Act (VPPA).  We’ve written about this VPPA case before. At the end of April, the court granted Hulu’s motion for summary judgment as to disclosures Hulu made to comScore (a third-party analytics provider), but denied it as to disclosures made to Facebook.

In denying class certification, Judge Beeler found that the class was not ascertainable because the only manner in which to identify the class would be by class members self-reporting via an affidavit.  The court reasoned that this would be inappropriate here because the higher dollar amount involved with VPPA violations (i.e., $2,500) required some form of verification using objective criteria.  The court further noted that the claims alleged here could not be easily verified.  Based on the record before it, the court could not tell how a potential class member could reliably establish whether s/he logs into Facebook and Hulu from the same browser, logs out of Facebook, clears cookie settings, and uses software to block cookies.  These factors matter because plaintiffs’ disclosure theory turns on the transmission of a particular cookie to Facebook, which would potentially occur only for Hulu users who watched a video on hulu.com after logging into Facebook from the same computer and web browser in the previous four weeks, using default settings.
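The ascertainability problem can be restated as a predicate over browser-side state that no objective record captures. A sketch of the conditions under plaintiffs' theory (field and function names are hypothetical, introduced only for illustration):

```typescript
// Hypothetical restatement of plaintiffs' disclosure theory: the cookie
// would have reached Facebook only if all of these conditions held in a
// given user's browser.
interface BrowserState {
  watchedVideoOnHulu: boolean;            // watched a video on hulu.com
  loggedIntoFacebookSameBrowser: boolean; // within the previous four weeks
  clearedCookies: boolean;                // user cleared cookie settings
  blocksCookies: boolean;                 // software blocks cookies (non-default)
}

function cookieLikelyTransmitted(s: BrowserState): boolean {
  return (
    s.watchedVideoOnHulu &&
    s.loggedIntoFacebookSameBrowser &&
    !s.clearedCookies &&
    !s.blocksCookies
  );
}
```

Because each of these facts lives only in an individual user's browsing history and settings, there is no objective record against which class membership could be verified, which is why the court found self-reporting affidavits inadequate.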

Relatedly, the court also found that while there were common questions of law or fact that pertained to the class, those common questions did not “predominate,” as required by FRCP 23(b)(3).  The court held that substantial issues about whether class members remained logged into Facebook and whether they would clear or block cookies indicated that common issues did not predominate over individual ones. 

The court denied class certification without prejudice, but noted that it was unaware of how plaintiffs could overcome these issues given the current factual record.  It will be interesting to see whether the plaintiffs take another attempt at certifying the class and how this ruling impacts other VPPA cases pending around the nation.

CFPB Proposes Changes to the Annual Privacy Notice: There is Still Time To Comment

This post was written by Timothy J. Nagle and Christopher J. Fatherley.

In December 2011, the Consumer Financial Protection Bureau (CFPB) published a Federal Register (FR) notice [76 FR 75825] on “Streamlining Inherited Regulations.”  These regulations consist of federal consumer financial laws that were transferred to CFPB authority under the Dodd-Frank Wall Street Reform and Consumer Protection Act from seven other federal agencies. Among the regulations that were identified as opportunities for “streamlining” was the annual privacy notice required by Regulation P (“Privacy of Consumer Financial Information”) issued by the Federal Reserve [12 CFR Part 216].  In its fall 2013 “Statement of Regulatory Priorities,” the Bureau continued the process by stating its intent to publish a notice of proposed rulemaking “to explore whether to modify certain requirements under the Gramm-Leach-Bliley Act's implementing Regulation P under which financial institutions provide annual notices regarding their data sharing practices.”

The CFPB issued its proposed rule (“Amendment to the Annual Privacy Notice Requirement Under the Gramm-Leach-Bliley Act (Regulation P)”) on May 13, 2014 [79 FR 27214].  The amendment describes an “alternate delivery method” for the annual disclosure that financial institutions could use in specified situations.  These circumstances are consistent with the purpose of section 503 of the Gramm-Leach-Bliley Act (GLBA), which requires financial institutions to provide initial notice upon entering into a relationship with a customer, and then annually thereafter.

A financial institution may (but is not required to) use the alternate delivery method if its practices satisfy five criteria:

  • It does not share customer nonpublic personal information with nonaffiliated third parties in a manner that would trigger opt-out rights under GLBA.  Financial institutions are not required to provide opt-out rights to customers when sharing information with third-party service providers, pursuant to joint marketing agreements or in response to a formal law enforcement request.  However, using an example mentioned in the notice, a bank would be required to provide such rights to its mortgage customers whose personal information it intends to sell to an unaffiliated home insurance company.  In this latter situation, the new alternative notice process would not be available.
  • It does not include in its annual notice the separate opt-out notice required under section 603(d)(2)(A)(iii) of the Fair Credit Reporting Act (FCRA) when a financial institution shares information about a consumer with its affiliates.  Such activity is excluded from the definition of a “consumer report” in FCRA, but notice to the consumer and an opportunity to opt out are required.  Financial institutions are required to include this disclosure in the annual privacy notice.  Therefore, if a financial institution does share such information internally and does not provide a separate disclosure, it may not take advantage of the “alternate delivery method.”
  • The annual notice is not the only notice used to satisfy the Affiliate Marketing Rule in section 624 of FCRA.  Financial institutions are not required to include this opt-out notice in the annual privacy notice, but many do.  If a financial institution shares information about a consumer with an affiliate for marketing purposes, it may use the new delivery process only if it independently satisfies the section 624 disclosure requirement.
  • The information contained in the prior year’s notice (e.g., information sharing practices) has not changed.
  • The institution uses the Model Privacy Form Under the Gramm-Leach-Bliley Act published in 2009 [74 FR 62890] for its annual privacy notice.

Financial institutions that satisfy the above criteria may discontinue mailing the annual privacy notice if they provide notice by other means described in the proposed rule.  Institutions using the alternate delivery method will be required to post the privacy notice continuously and conspicuously on their website, deliver an annual reminder on another notice or disclosure of the availability and location of the notice, and provide a toll-free telephone number for customers to request that a paper copy of the notice be mailed to them.  While GLBA and Regulation P provide for notice in written or electronic form, most financial institutions mail the notices at substantial cost.  This action by the CFPB is intended to balance the cost considerations with the benefit to consumers of the annual notice and the potential for confusion where an institution’s practices have not changed.  And small financial institutions, which are less likely to share customer information in a way that triggers customer opt-out rights, would benefit from the cost savings with no harm to the customer. 

In the proposed rule, the CFPB requested comment and information regarding the practical aspects of the changes, such as the number of financial institutions that change their policies, deliver notices electronically, or combine the FCRA and privacy notices. The initial rule provided only 30 days to comment, but this has been extended to July 14, 2014 [79 FR 30485] in response to requests from several financial services industry groups.  This initiative by the CFPB seems to have more velocity than similar efforts in Congress, where bills in the House (Eliminate Privacy Notice Confusion Act - H.R. 749) and Senate (Privacy Notice Modernization Act of 2013 - S. 635) are languishing.  Financial institutions should at least be aware of this development and evaluate whether they will benefit from the proposed revisions.

Canadian Court Certifies Facebook Class Action Over 'Sponsored Stories'

This post was written by Mark S. Melodia and Frederick Lah.

The British Columbia Supreme Court recently certified a class action against Facebook in connection with its Sponsored Stories program.  Under that program, advertisers paid Facebook for Sponsored Stories, which would in turn generate ads featuring a user’s name and profile picture based on which products and companies the user “liked.”  We previously analyzed a California privacy class action brought over the program.  Since the publication of our previous article, the California court granted final approval to a $20 million settlement that required Facebook to make small payments to class members.  That settlement is currently being challenged in the Ninth Circuit Court of Appeals by a public interest group. 

In the Canadian case, one of the main issues was whether Facebook users have the protection of BC’s Privacy Act, or instead, whether Facebook’s online Terms of Use overrode these protections.  Facebook’s Terms of Use contained a forum selection clause that bound users to adjudicate disputes in California.  Interestingly, despite the Court finding a “prima facie basis” for the “validity, clarity and enforceability” of the forum selection clause in the Terms of Use, it still rejected the clause.  Instead, the Court pointed to section 4 of B.C.’s Privacy Act, which states that an action under the Privacy Act “must be heard and determined by the Supreme Court.”  Per the Court, claims brought under the Privacy Act could not be brought in California, and held that “the Forum Selection Clause must give way to the Privacy Act.” 

After holding that it had jurisdiction, the Court then certified the class, defining it as all B.C. residents who are or have been Facebook members at any time between January 2011 and May 2014, and whose name or picture was used as part of the Sponsored Stories.  The Court rejected Facebook’s argument that the class definition was overly broad and that it had several problems, including that the class definition: (i) has no temporal limitations; (ii) does not address the fact that many users use false names or unidentifiable portraits; (iii) does not address the fact that Sponsored Stories were used for non-commercial entities as well as for businesses; (iv) does not address the necessary element of lack of consent; and (v) includes people who do not have a plausible claim, as well as people who will not be able to self-identify whether they are in the class.  Per the Court, “[h]ere, the tort [ ] of the Privacy Act seems tailor-made for class proceedings, where the alleged wrongful conduct was systemic and on a mass scale, and where proof of individual loss is not necessary or sought. Without the assistance of the [ ] class action procedure, the plaintiff and proposed class members’ claims based on [ ] the Privacy Act would be unlikely to have access to justice. Furthermore, the sheer number of individual claims, given the reach of Facebook, would overwhelm the courts unless a class proceeding was available.”

It is becoming increasingly clear that the risk of privacy class actions in Canada is growing.  This case shows us that even if a Canadian court acknowledges the enforceability of a website’s online terms and conditions, the court’s interest in protecting the privacy of its own citizens and upholding its own law will control.  While various news outlets have reported that Facebook plans to appeal the ruling, there’s no denying the fact that Facebook is now in the thick of the fight in the Canadian judicial system, whether it “likes” it or not.
 

Oklahoma Joins the Rapidly Growing Number of States with Social Media Password Laws

This post was written by Rose Plager-Unger and Frederick Lah.

On May 21, 2014, Oklahoma enacted H.B. 2372, following the trend outlined in our earlier article on the growing number of states prohibiting employers from requesting employee or applicant social media account passwords.  H.B. 2372 prohibits employers from requesting or requiring the user name and password of employees’ or applicants’ personal social media accounts, or demanding that employees or applicants access the accounts in front of the employer.  The law also prohibits employers from firing, disciplining, or denying employment to employees or applicants who refuse to provide the requested information.

Click here to read the full post on our sister blog AdLaw By Request.
 

FTC Releases Report on Data Brokers - Calls for Legislative Action

This post was written by Frederick Lah and Michael E. Strauss.

On May 27, 2014, the FTC released its report “Data Brokers: A Call for Transparency and Accountability”. In the report, the FTC advocates for more transparency from data brokers, defined in the report as “companies that collect consumers’ personal information and resell or share that information with others.” 

Expounding upon findings from its 2012 privacy report, and gleaning new information from nine data brokers, the Commission’s latest paper characterizes data brokers and their products as both beneficial and risky to consumers.  While the FTC did acknowledge that data brokers' marketing products – for example, the sale of consumer data to businesses – “improve product offerings, and deliver tailored advertisements to consumers,” its praise of the industry was short-lived.  Instead, the FTC emphasized risk, noting that data brokers store sensitive consumer information which, in the event of a security breach, could expose consumers to fraud, theft, and embarrassment.  As the report describes, “identity thieves and other unscrupulous actors may be attracted to the collection of consumer profiles that would give them a clear picture of the consumers’ habits over time, thereby enabling them to predict passwords, challenge questions, or other authentication credentials.”

Perhaps the FTC’s most significant finding – the one driving its push for legislation – is its determination that consumers have little access to, or control over, their information once data brokers obtain it.  Since consumer information is gathered, analyzed, and disseminated from and to a variety of sources, and because data brokers are not consumer-facing entities, the FTC believes that it is virtually impossible for consumers to trace their personal data back to where it originated.  For example, according to the FTC, some data brokers are creating detailed individual consumer profiles that may include sensitive inferences about consumers, such as their ethnicity, income levels, and health information.  If a consumer is denied the ability to complete a transaction based on such profiles, the consumer would have no way of knowing why he or she was denied, and would therefore not be able to take steps to prevent the problem from recurring. As a result, the Commission contends that the data broker industry is insulated from accountability, and proposes that Congress adopt the following legislative recommendations:

  • Give consumers an easy way to identify which data brokers have their information, establish a mode of contact, and provide the ability to control such data
  • Require data brokers to disclose the types of information they acquire and subsequently sell to businesses
  • Require data brokers to disclose the original source of the data
  • Require businesses that share consumer data with data brokers to give notice of that sharing, and allow consumers to prevent businesses from doing so

If Congress were to adopt the FTC’s recommendations, the data broker industry would be widely affected.  Not only would data brokers be subject to additional requirements, but businesses on both the buy and sell sides of the industry would also be subject to greater transparency requirements.  We will be following this issue closely to see if and how Congress acts.
 

Japanese data privacy developments - global transfers and privacy notices code

This post was written by Taisuke Kimoto, Kate Brimsted, Cynthia O’Donoghue, Matthew N. Peters, and Yumiko Miyauchi.

In recent weeks, Japanese data protection and privacy law has seen developments in two areas:

(1) The Ministry of Economy, Trade and Industry (METI) issuing its first code of practice on privacy notices
(2) The Asia-Pacific Economic Cooperation (APEC) approving Japan’s participation in the APEC Cross Border Privacy Rules (CBPR) system

METI Code of Practice (the Code)

The Code comes on the back of a period of activity for data protection legislation in Japan.  In December 2013, the IT Strategy HQ of the Cabinet Office published an Institutional Review Policy concerning the utilization of personal data, with a plan to publish proposed amendments to the Japanese Data Protection Act in June 2014.

The Code is non-binding and therefore there is no penalty for organisations that do not comply with it. However, it sets out what organisations should tell consumers about the collection and use of their personal data, and includes a checklist of what should appear in all consumer privacy notices, particularly:

• A description of the service
• The nature of the personal data collected, and the process of collection
• How the company intends to use the data
• Whether the data will be shared and with whom
• The extent of the consumer’s rights to object to the collection of their data, or have their personal data corrected, and the procedure
• Organization contact details
• How long the data will be retained, and how it will be destroyed

The Code also calls for standardised and clear notices to avoid confusion among consumers.  With the Australian Privacy Principles (effective since March 2014) also providing guidance on privacy policy content, Japan is not the only APEC jurisdiction where this has been given priority. 

Proposals to revise the Japanese Data Protection Act are expected to be published in June 2014.

The APEC Cross Border Privacy Rules

Beyond domestic data protection standards across the region, on 28 April, Japan became the third APEC nation (after Mexico and the United States) to have its participation in the APEC CBPR System approved.  This system is designed to develop global interoperability of organisations’ consumer data protection measures, and to complement the EU’s system of Binding Corporate Rules for international data transfers.

Using a common set of principles – adopted by all 21 APEC economies – for ensuring the protection of cross-border data transfers, Japan will now begin putting in place the measures needed to certify organisations wishing to become CBPR compliant.  This begins with a commitment to use an APEC-approved accountability agent, supported by a domestic privacy enforcement authority, in order to meet its obligations under the CBPR System.
 

Whistleblowing hotlines in France: a welcome lightening of regulation

This post was written by Daniel Kadar.

Implementing whistleblowing hotlines in France has long caused significant concern for companies rolling out such hotlines globally, as French regulation had considerably narrowed their scope, with the major threat that non-compliant hotlines would be considered null and void.

Times have changed: a couple of months ago, the French CNIL adopted an important modification of its unique authorisation policy AU-004 dedicated to whistleblowing hotlines, last revised in 2010. Initially, companies were only allowed to use a whistleblowing hotline to collect and record serious matters in the banking, accounting, financial and anti-corruption fields, as well as facts concerning compliance with applicable competition law – and only to “answer to a legislative or regulatory requirement”.

Whenever a planned hotline fell outside this very limited scope, the company had to ask the CNIL for an individual authorisation, with very limited chances of success, apart from an exception concerning harassment.

To address those increasing requests – more than 60 between 2011 and 2013 – the CNIL amended its AU-004 in two ways:

  • In addition to the areas already within its scope, the Commission extended the unique authorisation system to environmental protection, the fight against discrimination and harassment in the workplace, and health, hygiene and security at work.
  • The AU-004 now applies in those areas to “answer a legislative requirement or a legitimate interest”.

In order to empower and protect users of such hotlines, the Commission has always insisted on the principle that the author of an alert be identified, a principle that has been reaffirmed. Nevertheless, the new applicable rules open the way towards anonymous alerts in exceptional cases, when “the gravity of the facts is established and the factual elements sufficiently detailed”. The Commission specifies that the processing of such anonymous alerts must be surrounded with special precautions, such as a “preliminary examination, from its first consignee, on the opportunity of its diffusion within the scheme”.

With these amendments, the CNIL obviously seeks to ease, step by step, the use of whistleblowing hotlines in France, and to finally allow global compliance programs to be rolled out without too many exceptions.

California Attorney General Issues Recommendations for Privacy Policies and Do Not Track Disclosures

This post was written by Lisa B. Kim, Paul H. Cho, Divonne Smoyer, and Paul J. Bond.

On May 21, 2014, the California Attorney General, Kamala D. Harris, issued her long-awaited guidance for complying with the California Online Privacy Protection Act (“CalOPPA”).  “Making Your Privacy Practices Public,” which can be found here, provides specific recommendations on how businesses can comply with CalOPPA’s requirements to disclose and comply with a company-drafted privacy policy.

As we have written about in the past, CalOPPA is the California privacy statute that requires any company that collects personally identifiable information from a California resident online, whether via a commercial website or a mobile application, to draft and comply with a privacy policy that conforms to the statute’s guidelines.  More recently, CalOPPA was amended to require privacy policies to include information on how the website operator responds to Do Not Track signals or similar mechanisms.  The law also requires privacy policies to state whether third parties can collect personally identifiable information about the site’s users.

Click here to read the issued Client Alert.

To Those Calling Consumers for Marketing Purposes: LISTEN UP!!

This post was written by Judith L. Harris.

The Federal Communications Commission (FCC) yesterday announced the largest Do-Not-Call settlement it has ever reached.  Under that settlement, a major mobile wireless company will pay $7.5 million for its alleged failure to honor consumer requests to opt out of phone and text marketing communications.  In addition, the company has agreed to take a number of steps to ensure compliance with the Commission’s Do-Not-Call rules going forward. Those steps include:

  • Developing and putting into action a robust compliance plan to maintain an internal Do-Not-Call list and to honor Do-Not-Call requests from consumers
  • Developing operating procedures and policies to ensure that its operations comply with all company-specific Do-Not-Call rules
  • Designating a senior corporate manager to act as a compliance officer
  • Implementing a training program to ensure that employees and contractors are properly trained in how to record consumer Do-Not-Call requests so that the names and phone numbers of those consumers are removed from marketing lists
  • Reporting to the FCC any noncompliance with Do-Not-Call requests
  • Filing with the FCC an initial compliance report and then annual reports for an additional two years

One of the reasons that the FCC came down so hard on the company was that it was already acting under a 2011 Consent Decree resolving an earlier investigation into similar consumer complaints.  “Ah,” you might say, “this then has no relevance to our company; we have never even been investigated a first time by the Commission.”  While that may be true, this still might be a good time to compare your company’s own internal plan for honoring Do-Not-Call requests with the plan being required of the entity that settled with the FCC. 

If your company’s current plan is missing any of the first four elements listed above, you might want to consider adding them. By laying out these elements, the FCC is sending a strong signal regarding what it considers to be reasonable efforts by an entity to ensure that its agents and employees are well aware of what is expected of them when making marketing calls.

Companies would do well to consider adopting and enforcing a comprehensive compliance plan now, rather than waiting to have one imposed after disgruntled consumers complain to a regulatory agency.  At a minimum, adoption of and adherence to a comprehensive compliance program would go far in protecting against any trebling of damages in a putative class action, and would be a strong mitigating factor in any investigation down the road by an enforcement agency.
 

Online Advertising Targeted by Federal Trade Commission

On May 15, 2014, Maneesha Mithal, Associate Director of the Division of Privacy and Identity Protection at the Federal Trade Commission (“FTC” or “Commission”) testified, on behalf of the FTC, before the U.S. Senate Committee on Homeland Security and Governmental Affairs addressing the Commission’s work regarding three consumer protection issues affecting online advertising: (1) privacy, (2) malware and (3) data security. Below is a summary of the Commission’s testimony regarding these three key areas and the Commission’s advice for additional steps to protect consumers.

Click here to read the full post on our sister blog Ad Law By Request.

The ECJ Google Spain decision: watch out for the long arm of EU data protection law!

On 13 May the Court of Justice of the European Union (“ECJ”) delivered a ground-breaking ruling on the application of the Data Protection Directive 95/46/EC (the “Directive”) to internet search engine operators. In its eagerly anticipated judgment, the ECJ ruled on key issues including the circumstances in which search engines must block certain information from being returned in the results of a search made against the name of an individual (even where those data were originally lawfully published by a third party), the so-called “right to be forgotten”, and the territorial application of the Directive.

Click here for the issued Client Alert.

Two Health Care Entities Pay to Resolve HIPAA Violations Exposed by Theft of Unencrypted Laptops

As mentioned on our Life Sciences Legal Update blog, two separate HIPAA settlements resulted from investigations by the Department of Health and Human Services, Office for Civil Rights (OCR) into two self-reported instances of unencrypted laptop theft from health care entities.  In the first instance, OCR’s investigation found that the company had previously recognized a lack of encryption on its technology but had failed to fully address the issue before the breach occurred.  In the second instance, OCR determined that the company had failed to comply with multiple requirements of the HIPAA Security Rule.  Both instances resulted in settlements that included financial penalties as well as agreement to continued oversight by OCR through Corrective Action Plans.

To read the entire post, click here.

Article 29 Working Party releases opinion on Anonymisation Techniques

This post was written by Kate Brimsted, Katalina Chin, and Tom C. Evans.

In April, the EU’s Article 29 Working Party (Working Party) adopted an opinion on Anonymisation Techniques (Opinion). The Opinion is designed to provide guidance for organisations on the use of common anonymisation techniques, and the risks that can be presented by them.

When data is truly anonymised – so that the original data subject cannot be identified – it falls outside of EU data protection law. The Opinion notes that the re-use of data can be beneficial, providing “clear benefits for society, individuals and organisations”. However, achieving true anonymisation is not easy, and can diminish the usefulness of the data in some circumstances.

The EU regime does not prescribe any particular technique that should be used to anonymise personal data. To guide organisations in designing their own policy on anonymisation, the Opinion examines the two principal forms: (a) randomisation and (b) generalisation.

In particular, the Opinion looks at the relative strengths and weaknesses of each technique, and the common mistakes and failures that arise in relation to them. Each technique is analysed against three risk criteria:

1. The risk that data identifying an individual could be singled out
2. The ‘linkability’ of two records that relate to an individual
3. Inferences that can be drawn about one set of data based on a second set of data

The Working Party stated that by considering these strengths and weaknesses, organisations will be able to take a risk-based approach to the anonymisation technique used and tailor it to the dataset in question. The Opinion emphasizes that no technique will achieve anonymisation with certainty, and that since the fields of anonymisation and re-identification are actively researched, data controllers should regularly review their policies and the techniques employed.

In addition, the Opinion makes clear that pseudonymisation is not a method of anonymisation in itself. Therefore, organisations that use this technique should be aware that the data they process does not fall outside of the EU data protection regime. These comments are significant because the draft EU General Data Protection Regulation contains specific references to pseudonymisation and the circumstances in which the technique can be used.
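The distinction can be illustrated with a minimal Python sketch (hypothetical data and field names; not drawn from the Opinion): consistently hashing a direct identifier hides the name, but because the same input always produces the same token, two independently pseudonymised datasets remain joinable – the “linkability” risk in the criteria above.

```python
import hashlib

def pseudonymise(record, key_field="email"):
    """Replace a direct identifier with its SHA-256 hash.
    This is pseudonymisation, not anonymisation: the identifier is
    hidden, but the same input always yields the same token."""
    token = hashlib.sha256(record[key_field].encode()).hexdigest()
    out = {k: v for k, v in record.items() if k != key_field}
    out["token"] = token
    return out

# Two datasets pseudonymised independently, using the same technique...
health = pseudonymise({"email": "alice@example.com", "condition": "asthma"})
retail = pseudonymise({"email": "alice@example.com", "purchase": "inhaler"})

# ...remain linkable: the shared token re-joins the records, so the
# data has not left the EU data protection regime.
assert health["token"] == retail["token"]
```

A salted or keyed hash raises the bar but does not change the analysis: whoever holds the key (or can brute-force common inputs) can still re-identify, which is why the Opinion treats pseudonymised data as personal data.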

At the recent IAPP Europe Data Protection Intensive 2014 held in London, Security Engineering Professor Ross Anderson of the University of Cambridge suggested to the conference audience that anonymisation will never be a completely infallible tool for the security of personal data – a discussion set in the context of secondary uses of medical records. While many continue to pose these wider questions about anonymisation, the Working Party’s Opinion will at least provide some useful guidance for organisations that need to anonymise data.
 

FTC Settlement with Snapchat - What Happens on Snapchat Stays on Snapchat?

Last Thursday, the Federal Trade Commission (FTC) announced that messaging app Snapchat agreed to settle charges that it deceived consumers with promises about the disappearing nature of messages sent through the app. The FTC case also alleged that the company deceived consumers over the amount of personal data the app collected, and the security measures taken to protect that data from misuse and unauthorized disclosure. The case alleged that Snapchat’s failure to secure its Find Friends feature resulted in a security breach that enabled attackers to compile a database of 4.6 million Snapchat usernames and phone numbers.

Click here to read the full post on our sister blog AdLaw By Request.
 

Privacy Regulators of the World Unite to Conduct "Sweep" of Mobile Apps from 12 May

This post was written by Kate Brimsted, Mark S. Melodia, Daniel Kadar, Paul Bond and Cynthia O’Donoghue.

In the week commencing 12 May, members of the Global Privacy Enforcement Network (GPEN) will conduct an online privacy sweep, focusing on the transparency with which mobile apps collect personal data.

GPEN is an informal network of 27 Data Protection Authorities (“DPAs”) that was established in 2007. Its members include the UK’s ICO, France’s CNIL, Spain’s AEPD, Canada’s OPC and the U.S. FTC.

The network’s tasks are to:

  • Support joint enforcement initiatives and awareness campaigns
  • Work to develop shared enforcement policies
  • Share best practices in addressing cross-border challenges
  • Discuss the practical aspects of privacy law enforcement co-operation

The sweep is part of an effort to ensure that consumers are fully aware of the ways in which apps gather and use personal data. To this end, DPAs will focus on the level of permission requested by apps, the way in which the permission is requested and the purposes for which personal data are used. The DPAs will focus in particular on whether the level of permission requested by the app is what would be expected of an app of its type, or whether it appears excessive.

This is the second time that GPEN has conducted an Internet privacy sweep. In May 2013, DPAs from 19 jurisdictions carried out a sweep of websites and apps and their privacy policies.  That sweep looked at whether (1) there was a privacy policy; (2) it was easy to find; and (3) it was easy to read and understand.  This led to regulators following up with a number of organisations, including insurance companies, financial institutions, and media companies, resulting in some substantial changes being made to their privacy policies.

The results of the 2014 sweep are expected to be published later this year.
 

N.Y. court rules U.S. warrants include overseas data: Microsoft loses first round of legal challenge

This post was written by Cynthia O'Donoghue and Kate Brimsted.

At the end of April, a magistrate judge of the Southern District of New York denied a motion filed by Microsoft to quash a search warrant issued under the Stored Communications Act (the Act). Microsoft had argued that the warrant should be quashed because the data concerned was stored in Ireland, and the Act did not authorize U.S. courts to issue extraterritorial warrants.

Under the provisions of the Act, the U.S. government can require information from Internet Service Providers (ISPs) in three ways: by subpoena, court order or warrant. The method chosen determines the extent of the information that an ISP is required to provide. In this case, the warrant ordered Microsoft to disclose extensive information, including:

  • The contents of all emails stored in the account
  • Records and information regarding the identification of the account (including everything from the user’s name to their method of payment)
  • All records stored by the user of the account, including pictures and files
  • All communications between Microsoft and the user regarding the account

Denying the motion, the judge stated that although the language of the Act was ambiguous, the interpretation advanced by Microsoft would be inconsistent with the Act’s structure and legislative history. In addition, the judge pointed to the practical consequences if Microsoft’s motion were upheld, noting that the burden on the government would be “substantial”, and that it would lead to reliance on a Mutual Legal Assistance Treaty which “generally remains slow and laborious”.

Microsoft’s robust stance on this issue comes at a time when ISPs face increasing public and political scrutiny of their dealings with investigatory agencies. Following the ruling, Microsoft Corporate VP and Deputy General Counsel David Howard stated, “the US Government doesn’t have the power to search a home in another country, nor should it have the power to search the content of email stored overseas.” It appears that Microsoft intends to take this issue further, with Howard noting that the path of the legal challenge could “bring the issue to a US district court judge and probably to a federal court of appeals.”
 

EU - US Privacy Bridge Project Announced

This post was written by Cynthia O’Donoghue, Katalina Chin, and Matthew N. Peters.

On 2 May, Dutch Data Protection Authority (DPA) Chairman Jacob Kohnstamm announced a new Privacy Bridge Project between the U.S. and the EU at the IAPP Data Protection Intensive.  In his announcement, Kohnstamm highlighted the need for these two privacy regimes to find common ground, and to abandon the age-old position that ‘interoperability’ will only be achieved when one regime has made wholesale changes to its privacy laws.

This announcement follows a period of strained relations between the U.S. and EU on the subject of privacy.  With the threat of suspension hanging over Safe Harbor (should the EU Commission’s proposals to strengthen the framework fail) this announcement offers a new avenue of dialogue, which focuses on compromise and the need ‘to find practical, maybe technological solutions’ to the differences between the U.S. and EU regimes. 

The project team will be made up of around 20 experts from both sides of the Atlantic, and led by the CSAIL Decentralized Information Group at the Massachusetts Institute of Technology, together with the Institute for Information Law of the University of Amsterdam.  The project program will include four two-day meetings, with the intention of delivering a paper of recommendations by summer 2015, and a global DPA conference later that year.  The team’s first meeting was held in Amsterdam on 28 – 29 April, with Fred Cate (Indiana School of Law) and Bojana Bellamy (President of the Centre for Information Policy Leadership), amongst others,  in attendance.  The remaining three meetings are scheduled to be held in Washington DC, Brussels and Boston.      
 

VPPA Class Action Against Hulu Survives

This post was written by Frederick Lah and Lisa B. Kim.

On April 28, the Northern District of California granted in part and denied in part Hulu’s motion for summary judgment over allegations that it violated the Video Privacy Protection Act (VPPA) by sharing users’ information with comScore and Facebook.  The court granted the motion for the comScore disclosures but denied the motion for the Facebook disclosures. 

The VPPA restricts video service providers from disclosing “personally identifiable information” to third parties.  Under the statute, the term “personally identifiable information” means “information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.”  In this case, the court drew a distinction between the types of information Hulu was disclosing to comScore and the types of information it was disclosing to Facebook.  The court held that, “[t]he comScore disclosures were anonymous disclosures that hypothetically could have been linked to video watching.  That is not enough to establish a VPPA violation.  As to the Facebook disclosures, there are material issues of fact about whether the disclosure of the video name was tied to an identified Facebook user such that it was a prohibited disclosure under the VPPA.”

According to declarations from Hulu, Hulu’s main source of income is its advertising revenue.  Advertisers pay Hulu to run its commercials during breaks, and the amount they pay is tied to how often an ad is viewed.  Hulu uses analytics providers like comScore to verify those types of metrics.  In Hulu’s case, comScore performed its analytics on the Hulu site and then reported its data to Hulu in the “aggregate and generalized” form.  While the court acknowledged that “comScore doubtless collects as much evidence as it can about what webpages Hulu users visit,” the court held that “there is a VPPA violation only if that tracking necessarily reveals an identified person and his video watching.”  Since there was no evidence that comScore’s tracking did that here, the court granted the motion in Hulu’s favor with respect to the comScore disclosures.

As for the Facebook disclosures, the court’s ruling turned on its determination that “personally identifiable information” was transmitted from Hulu to Facebook via Facebook’s “Like” button.  Each hulu.com watch page has a “Like” button.  During the relevant time period, the URL of each watch page – which was sent to Facebook in order to offer the “Like” button – included the video title that the user watched.  And through Hulu’s cookies associated with the “Like” button, the name of the Facebook user was provided.  Based on these facts, the court found that plaintiff’s VPPA claims for the Facebook disclosures should survive.
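The mechanics the court relied on can be sketched abstractly (a hypothetical illustration with invented names, not Hulu’s or Facebook’s actual code): the watch-page URL carries the video title, and a cookie sent with the same “Like”-button request identifies the user, so the third party receives both pieces in one transmission.

```python
# Hypothetical sketch of the disclosure pattern the court described.

def build_like_request(watch_page_url, cookies):
    """Assemble the data a third-party social widget would receive:
    the referring page URL plus any cookies sent with the request."""
    return {"referer": watch_page_url, "cookies": cookies}

req = build_like_request(
    "https://www.hulu.com/watch/12345/some-video-title",  # title embedded in URL
    {"user_id": "fb_user_9876"},                          # cookie identifying the user
)

# Together, the two fields tie an identified person to a specific video –
# the combination the court found could amount to "personally
# identifiable information" under the VPPA.
video = req["referer"].rsplit("/", 1)[-1]
person = req["cookies"]["user_id"]
assert (person, video) == ("fb_user_9876", "some-video-title")
```

By contrast, the comScore disclosures lacked this pairing: the tracking data could only hypothetically be linked to an identified person, which the court held was not enough.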

The VPPA was amended last year and state iterations of the law have recently given rise to litigation.  With statutory damages of $2,500 per violation, the potential liability under the VPPA can be catastrophic.  Plaintiffs’ counsel has brought VPPA class actions against various content platforms, including Blockbuster, Netflix and Redbox, with mixed results.  The cases against Blockbuster and Netflix each settled.  Redbox’s motion to dismiss was denied by the district court, but on interlocutory appeal the case was dismissed by the Seventh Circuit.

There has also been a recent upsurge of VPPA cases alleging that various news and entertainment organizations violated the VPPA by sharing what consumers watched with third-party analytics companies.  See Perry v. Cable News Network, Inc., et al., Case No. 1:14-cv-01194, N.D. Ill.; Ellis v. The Cartoon Network, Inc., Case No. 1:14-cv-00484, N.D. Ga.; Locklear v. Dow Jones & Co. Inc., Case No. 1:14-cv-00744, N.D. Ga.; Eichenberger v. ESPN, Inc., Case No. 2:2014-cv-00463, W.D. Wash.  These cases are still in the pleading stage and will surely be impacted by this new ruling.
 

French data protection authority ramps up inspections for 2014 - will it be a knock on the door or a "remote audit"?

This post was written by Daniel Kadar and Kate Brimsted.

At the end of April, the French data protection authority (CNIL) released its inspection schedule for 2014 (the Schedule), promising to carry out some 550 inspections over the course of the year.

Approximately 350 inspections are expected to be on-site, a quarter of which will focus on CCTV/video surveillance, and 200 will be carried out using the CNIL’s new powers of online investigation. These powers, introduced in April 2014, enable agents to carry out “remote investigations” into compliance with the French Data Protection Act.

The Schedule sets out six priority areas for inspections in the period, including:

  • The processing of personal data by the National Database on Household Credit Repayments
  • The handling of data security breaches by electronic communications operators
  • The collection and use of personal data, including sensitive data, by social networks, online dating providers and third-party applications linked to social networks
  • The processing of personal data by the government’s system for the payment and collection of income tax
  • The processing of personal data by online payment systems
  • The processing of personal data by the National Sex Offenders Register

The CNIL will also continue to participate in the Article 29 Working Party’s effort to harmonise the approach of EU data protection authorities regarding Internet cookie compliance.

The CNIL further renewed its commitment to support international cooperation between data protection authorities, and is set to take part in the 2nd Global Privacy Enforcement Network’s Internet Sweep (Internet audits evaluating how well websites protect the data privacy of their users). 

International cooperation is a hot topic for EU data protection authorities. In anticipation of the General Data Protection Regulation and its proposed introduction of a “one-stop shop” mechanism, regulators across Europe will be looking to plan ahead for the changes to come. The CNIL has also, on behalf of the Article 29 Working Party, been leading the EU data protection enforcement against Google after the implementation of its new platform (also covered by this blog here [dated Jan. 21, 2014]).
 

Article 29 Working Party proposes clauses for data transfers from EU processors to non-EU subprocessors

This post was written by Cynthia O'Donoghue, Kate Brimsted, and Matthew N. Peters.

As is well-known, personal data is restricted from leaving the EEA. On 21 March, the EU’s Article 29 Working Party (the WP) issued draft ad hoc model clauses for data transfers from EU processors to non-EU subprocessors.  While not yet approved by the European Commission, this working document provides useful guidance on the WP’s thinking on data transfers to processors which fall outside the scope of EU Decision 2010/87/EU on standard contractual clauses for the transfer of personal data to processors established in third countries under Directive 95/46/EC (the 2010 Clauses).

The 2010 Clauses apply to situations where an EU-based controller is transferring data to a processor based outside the EEA.  In practice, the initial transfer will often be from an EU-based controller to an EU-based processor, with a subsequent transfer to a non-EU subprocessor.  As the 2010 Clauses do not strictly apply to such arrangements, the guaranteed adequacy of protection is not available and alternative means for overcoming the restriction on extra-EEA data transfer must be found, e.g. valid consent by the individuals to the transfer of their data.

The draft clauses include expected restrictions, such as preventing further subprocessing activity without the controller’s prior consent.  They will also need to be used in conjunction with a suitable Framework Contract between the EU-based controller and processor (to satisfy Article 17 of the Directive).

The working document represents an early, though welcome, step along the path towards Commission-approved supplementary clauses to solve this practical problem. 
 

Domain Dispatches: NETmundial Is Right Around The Corner

On Wednesday, April 23, 2014, Sao Paulo, Brazil will host NETmundial – the Global Multistakeholder Meeting on the Future of Internet Governance. Approximately 800 people will descend on Sao Paulo to spend two days and nights discussing, debating, arguing, cajoling, pleading, and demanding potential changes in the governance of the Internet. Hundreds more will participate remotely, at "remote hubs" and through the Internet (of course).

Click here to read more on our sister blog, AdLaw by Request.

ICANN Goes to Singapore

This post was written by Gregory S. Shatan.

The Internet Corporation for Assigned Names and Numbers (ICANN) held its 49th semi-annual meeting in Singapore in March.  Reed Smith partner Gregory Shatan provided real-time reports, Straight from Singapore, on our sister blog AdLaw By Request.

Article 29 Working Party adopts opinion on Personal Data Breach Notification

This post was written by Cynthia O'Donoghue.

At the end of March, the EU’s Article 29 Working Party adopted an opinion on Personal Data Breach Notification (the Opinion). The Opinion is designed to help data controllers decide whether they are obliged to notify data subjects when a ‘personal data breach’ has occurred.

A ‘personal data breach’ under Directive 2002/58/EC (the Directive) broadly covers the situation where personal data is compromised because of a security breach, and requires communications service providers (CSPs) to notify their competent national authority. Depending on the consequences of the personal data breach, CSPs may also be under a duty to notify the individual data subjects concerned.

The Opinion contains factual scenarios outlining the process that should be used by CSPs to determine whether, following a personal data breach, individuals affected should be notified. Each scenario is assessed using the following three “classical security criteria”:

  • Availability breach – the accidental or unlawful destruction of data
  • Integrity breach – the alteration of personal data
  • Confidentiality breach – the unauthorized access to or disclosure of personal data

The Opinion includes practical guidance for notifying individuals, including where a CSP does not have the contact details of the individuals concerned, or where the compromised data relates to children.  The Opinion also stresses the importance of taking measures to prevent personal data breaches.
 

OpenSSL reveals significant security flaw

This post was written by Cynthia O'Donoghue.

On 7 April, OpenSSL released a Security Advisory exposing a flaw which, if exploited, would allow hackers to reveal communications between servers and the computers of Internet users.

OpenSSL is the most popular open source encryption library on the Internet, and is used by a large number of commercial and private service providers, including many social media sites, email providers and instant messaging platforms.  The tool is used to encrypt information passed between Internet users and website operators, and the encrypted communication should only be capable of being decrypted by the particular service provider.

When exploited, the security flaw, dubbed “Heartbleed”, revealed the encryption keys of service providers using the system. Once in possession of those keys, hackers essentially had unrestricted access to the communications.  OpenSSL has released an update to address the security flaw; however, service providers will find it impossible to assess whether the security of their systems has already been compromised, making the situation particularly serious. In addition, the update only protects future communications, so any that may have already been intercepted remain exposed.
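For organisations triaging their exposure, the first practical question is whether a given OpenSSL build falls within the affected range (OpenSSL 1.0.1 through 1.0.1f; version 1.0.1g shipped the fix). The following minimal Python sketch illustrates that check using the standard library’s `ssl` module; the helper function is our own illustration, not part of any official advisory:

```python
import ssl

# Heartbleed affected OpenSSL 1.0.1 through 1.0.1f; 1.0.1g shipped the fix.
# OpenSSL version tuples are (major, minor, fix, patch, status), where the
# patch field encodes the trailing letter: 0 = no letter, 1 = 'a', ..., 6 = 'f'.
def in_heartbleed_range(version_info):
    """Return True if an OpenSSL version tuple falls in the affected range."""
    major, minor, fix, patch = version_info[:4]
    return (major, minor, fix) == (1, 0, 1) and 0 <= patch <= 6

# Check the OpenSSL build that this Python interpreter is linked against.
print(ssl.OPENSSL_VERSION, in_heartbleed_range(ssl.OPENSSL_VERSION_INFO))
```

Note that a clean version check is only a starting point: because exploitation left no trace, even patched systems may already have had keys exposed, which is why certificate reissuance and password changes were widely recommended.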

Internet users are being advised to change all of their passwords, and in particular those for important services such as Internet banking.

The security flaw is likely to raise data protection issues for organisations, and it may behoove users of OpenSSL to take a proactive approach to communicating with their customers about security issues.  Those organisations that have suffered a security breach may be under a duty to notify individuals, and could be subject to adverse publicity, as well as litigation and regulatory investigation. 
 

Update on Federal Trade Commission v. Wyndham Worldwide Corp.: FTC Allowed To Proceed with Data Security Suit as Court Rejects Fundamental Challenge to FTC Authority

This post was written by Paul Bond and Christine N. Czuprynski.

A New Jersey federal court is allowing the FTC’s case against Wyndham Worldwide Corporation to go forward, denying Wyndham’s Motion to Dismiss on both the unfairness and deception counts.  In this closely watched case, the court emphasized that in denying Wyndham’s request for dismissal, it was not providing the FTC with a “blank check to sustain a lawsuit against every business that has been hacked.”  The far-reaching implications of this decision, though, cannot be ignored.

The Wyndham decision may well prove to be rocket fuel for an agency already proceeding at break-neck speed to formulate and enforce (often at the same time) new data security law.  Any company that was still waiting for the FTC to go through a formal rulemaking process on data security can wait no more.  The decision by Judge Salas has arguably ratified the reams of informal guidance the FTC has provided over the past decade-plus in enforcement actions, panel discussions, white papers, and more, as though that guidance had gone through the formal notice-and-comment rulemaking process.  Unless a company is confident that it knows, has synthesized, and has applied this informal guidance to its own activities, it stands at risk of being the next target for the FTC's newly affirmed Section 5 authority.

The Federal Trade Commission sued Wyndham Worldwide in June 2012 in the District of Arizona.  The FTC alleged that Wyndham’s failure to properly safeguard the personal information in its possession led to a data security breach that exposed thousands of customers to identity theft and other fraud. The case was transferred to the District of New Jersey in March 2013.  Soon thereafter, Wyndham filed its Motion to Dismiss.

Wyndham challenged the FTC’s authority to regulate unfairness in the data security context.  Wyndham further argued that the FTC could not bring unfairness claims unless and until it had promulgated regulations on the issue.  U.S. District Judge Esther Salas rejected both of these challenges, as well as Wyndham’s third challenge, that the FTC failed to sufficiently plead both its unfairness and deception claims.

Wyndham argued that section 5 of the FTC Act does not confer unfairness authority that covers data security.  Wyndham contrasted section 5 of the FTC Act to the Fair Credit Reporting Act (FCRA), the Gramm-Leach-Bliley Act (GLBA), and the Children’s Online Privacy Protection Act (COPPA), all of which include specific authority for the FTC to regulate data security in certain contexts.  Wyndham argued that those statutes, which were enacted after the FTC Act, would be superfluous if the FTC had the general data security authority it seeks to wield in this case.  The court disagreed and ruled that the FTC’s general authority over data security can coexist with more specified authority in the FCRA, GLBA, and COPPA.

Wyndham also argued that the FTC had not provided fair notice of what data-security practices a business had to implement in order to comply with the FTC Act. In rejecting that argument, the court held that the FTC was not required to engage in rulemaking before enforcing Section 5 in data-security cases, but could instead develop the law on a case-by-case basis. The court also found that fair notice was provided through the FTC’s public complaints, consent agreements, public statements and business guidance brochure. As such, the FTC was not required to also promulgate formal regulations. In addition, the court found that the FTC had pled with enough particularity to satisfy the heightened requirements of Rule 9(b), even though it was not persuaded that this action fell under that rule.

With respect to the deception claim, the ruling also touched on the respective liability of franchisors and franchisees, issues we’ve written about recently.  Wyndham sought to exclude Wyndham-branded hotels from the case on the grounds that Wyndham Hotels and Resorts is a legally separate entity from Wyndham-branded hotels.  It therefore argued that statements in the Hotels and Resorts website privacy policy could not form the basis for deception claims where personal information was accessed from Wyndham-branded hotels.  The court reviewed the language of the privacy policy and determined that a reasonable person could conclude that the policy on the Hotels and Resorts website made statements about data security at both the Hotels and Resorts and the Wyndham-branded properties.  Despite the court’s insistence that it was not providing the FTC with carte blanche to pursue companies that fall victim to hackers, the ruling makes clear that when companies experience data breaches, they – and their franchisees – are now more, not less, likely to face the possibility of enforcement action by the FTC.
 

Spain's AEPD Publishes Draft Privacy Impact Assessment Guide

This post was written by Katalina Chin.

On 17 March, the Spanish data protection agency (la Agencia Española de Protección de Datos - AEPD) published a draft privacy impact assessment guide (Evaluación del Impacto en materia de Protección de Datos Personales). At the same time, the AEPD has initiated a public consultation, open until 25 April, to garner opinion and comments on the guide, after which they will issue a final version.

The guide sets out a framework to improve privacy and data protection in relation to an organisation’s technological developments, with the aim of helping them identify, address and minimise data protection risks prior to the implementation of a new product or service.

In this draft guide, the AEPD comments on the increasing importance for organisations of demonstrating their commitment to the rights of individuals whose personal data they process, and of meeting their legal obligations (essentially advocating the principle of accountability). In this regard, it advises that a well-developed privacy impact assessment will go a long way towards evidencing an organisation’s due diligence, as well as assisting it in developing appropriate methods and procedures for addressing privacy risks.

It is not suggested, however, that the guide will provide the only methodology for carrying out a privacy impact assessment. Indeed, the AEPD says that they would be receptive to organisations who wish to develop an assessment specifically adapted to their business or sector, and they would be open to providing such organisations with guidance to ensure that they meet the minimum regulatory requirements.

As well as providing general guidance on privacy impact assessments, the guide sets out a set of basic questions, together with an ‘evaluation’ tool developed by the AEPD, whereby organisations can ‘check off’ and determine the legal obligations that must be met in order to implement their intended product or service in compliance with data protection legislation.

While this privacy impact assessment is not obligatory in Spain, this type of compliance review could become a legal requirement across the EU if the European Regulation on Data Protection remains as currently drafted (Article 33).
 

The ICO Sets Out Agenda for 2014-2017

This post was written by Cynthia O'Donoghue.

At the end of March, the UK Information Commissioner’s Office (ICO) released its corporate plan for 2014-2017 titled “Looking ahead, staying ahead” (the Plan). Information Commissioner Graham stated that the changes proposed are “about getting better results, for both consumers and for data controllers.”

As the UK’s supervisory body for upholding information rights, the ICO has a wide range of responsibilities. These include educating citizens and organisations about their rights and responsibilities under the various pieces of legislation, and also investigating complaints and taking enforcement action when things go wrong.

In the Plan, the ICO recognises that its role will evolve in light of the proposed EU General Data Protection Regulation and in relation to press regulation stemming from the Leveson report. In order to be proactive in fulfilling its duties, the ICO has stated that there will be a “shift in focus, with cases brought to the ICO used to identify broader data protection problems and improve organisations’ current practices.”

The Plan details a number of specific changes and initiatives that organisations can expect to see over the next three years, including:

  • Closer work with organisations such as trade bodies and other regulators to improve compliance and develop privacy seals and trust marks
  • The introduction of an online, self-reporting breach tool to assist organisations in complying with the law
  • The development of new and existing codes of practice to ensure organisations have access to up-to-date advice
  • The reactive investigation of offences under the Data Protection Act 1998 and Freedom of Information Act 2000, along with initiatives for increased cooperation between the ICO and other regulators
  • The introduction of a monitoring process to check how quickly data controllers respond to subject access requests
  • A target of resolving 90% of data protection and freedom-of-information complaints within six months of being opened
  • The development of free training materials for organisations to use when training their own staff

Further reform in Australia

This post was written by Cynthia O’Donoghue.

Australia’s privacy protection reform laws came into force in mid-March, making significant changes to the regulation of data. Further reform is now on the horizon, with the Australian Law Reform Commission (the Commission) publishing a discussion paper titled ‘Serious Invasions of Privacy in the Digital Era’ (Discussion Paper).

The Commission is carrying out an inquiry at the request of the Australian government to find “innovative ways the law might prevent or redress serious invasions of privacy.”  Two of the Commission’s proposals are likely to be of particular concern to businesses.

First, the Discussion Paper proposes the introduction of a principle for the deletion of personal data. The principle would differ significantly from the ‘Right to Erasure’, one of the headline provisions contained in the proposed EU General Data Protection Regulation.

The current draft of the EU provision would allow citizens to request the deletion of any personal data held about them, where the data controller has no reason for retaining it. Data controllers would also be required to take reasonable steps to inform any third parties to whom they have passed the data of this request. In contrast, the Australian recommendation on data erasure would apply only to data that the citizen had personally provided to a data controller. The Discussion Paper calls for comments as to whether the data controller should be under a duty to inform third parties of this request.

Second, the Discussion Paper contains a proposal to introduce a new Commonwealth Statute which would apply to all territories in Australia. This statute would provide citizens with the ability to bring a cause of action against any individual or entity that seriously invades their privacy. The action would enable individuals to obtain damages independent of breach of the Australian Privacy Act.

The Commission is scheduled to deliver its final report to the Attorney-General in June 2014.

 

Safety of US-EU Safe Harbor Given Boost

This post was written by Cynthia O'Donoghue.

Following months of uncertainty about the future of the EU-U.S. Safe Harbor Framework, political leaders from the EU and the United States reiterated their commitment to the regime in a joint statement issued 26 March (the Statement).

EU-U.S. Safe Harbor is designed to essentially transpose EU data protection law into U.S. law so that organisations certified to the program are deemed to adequately protect personal data transferred from the EU to them in the United States. 

The future of the Safe Harbor regime was cast into doubt last year, following Edward Snowden’s revelations about the extent of NSA information gathering. In November 2013, the European Commission released a Strategy Paper which noted that “the current implementation of Safe Harbor cannot be maintained.” In particular, the paper pointed to shortcomings in transparency, enforcement and the use of the national security exception.

The situation worsened at the beginning of last month, when a resolution of the EU Parliament took the drastic step of calling for the “immediate suspension” of the Safe Harbor regime on the ground that it provides an insufficient level of protection to EU citizens.

The Statement is the latest development in the saga, with officials pledging to maintain the Safe Harbor framework subject to a commitment to strengthening it “in a comprehensive manner by summer 2014”. This demonstrates a slightly more diplomatic approach, which should be reassuring to businesses that currently rely on the Safe Harbor exception.

The Statement also confirms the commitment of the EU to introducing a new “umbrella agreement” for the transfer and processing of data in the context of police and judicial proceedings. The aim of this agreement is to provide citizens with the same level of protection on both sides of the Atlantic, with judicial redress mechanisms open to EU citizens who are not resident in the United States. Negotiations around this agreement commenced in March 2011, and are still on-going.
 

Brazil's Internet Bill: Latest Developments

This post was written by Cynthia O’Donoghue.

At the end of March, the Brazilian Chamber of Deputies voted in favour of the Marco Civil da Internet (Internet Bill), bringing the ground-breaking legislation one step closer to enactment. The Internet Bill will now progress to the Senate for approval.

In the wake of Edward Snowden’s revelations about global surveillance programs, the Internet Bill had included a provision that would have required organisations to store all data held on Brazilian citizens within the country’s borders. This controversial requirement has been dropped by the Brazilian government in the latest version of the Internet Bill. However, the text voted on by the Chamber of Deputies now provides that organisations will be subject to the laws and courts of Brazil in cases where the information of Brazilian citizens is involved.

The Internet Bill will introduce a variety of measures to govern the use of the Internet, providing citizens with robust rights and imposing strict compliance requirements on organisations. The legislation is the first of its kind, and has been hailed by the Brazilian Minister of Justice as a sign that Brazil is at the forefront of efforts to regulate the web democratically. The most important provisions in the legislation are:

  • A statement of the rights of web users, including freedom of expression and the confidentiality of online communications
  • The enshrinement of “net neutrality”, a principle that prohibits ISPs and governments from making a distinction between different types of data traffic. This will prevent organisations from being able to limit access to different websites based upon subscription plans.
  • Confirmation that ISPs cannot be held liable for content uploaded by third parties using their services unless they refuse to remove such content following a court order

ANA Responds to Request for Information on Big Data

This post was written by Frederick Lah.

Earlier this week, the ANA submitted its comments in response to a Federal Register Notice by the White House’s Science and Technology Policy Office seeking comments from industry participants on a variety of issues related to Big Data, including the public policy implications surrounding the use, gathering, storage and analysis of Big Data. The ANA’s views are shared and respected by many in the advertising industry: government action (if any) in this space should appropriately correlate to the type of data at issue and be coordinated with the ongoing efforts by the private sector to develop self-regulatory solutions.

Click here to read more on our sister blog, AdLaw By Request.
 

The EU Cyber Security Directive: Latest Developments

This post was written by Cynthia O'Donoghue.

The Cyber Security Directive (formally known as the Network & Information Security Directive) (the Directive) was considered by the European Parliament (the Parliament) in March. After a first reading of the Directive, MEPs voted strongly in favour of its progression to the next stage of the legislative process. This will involve negotiations between the European Commission (EC) and the Council.

Work on the Directive first began in February 2013, as part of the EU Cyber Security Strategy. In a speech to the Parliament, Vice President Kroes reiterated that the Directive’s main aims are to bring all member states to a minimum security standard, promote cooperation and ensure preparedness and transparency in important sectors.

The Directive will introduce mandatory breach notification for certain organisations and set out minimum security requirements.

The Parliament made substantial amendments to the version of the Directive that had been proposed by the EC, such as:

  1. Narrowing the scope of organisations that fall within the Directive’s requirements by limiting its application to providers of “critical infrastructure”, such as organisations in the energy, transport, banking, finance, and health sectors, thereby excluding search engines, social media platforms, internet payment gateways, cloud computing services, software developers and hardware manufacturers.
  2. Developing National Security Strategies, with the assistance of ENISA (the European Union Agency for Network and Information Security), that will allow Member States to develop minimum standards.
  3. Appointment of a single point of contact among national competent authorities (NCAs) for security and network information systems to facilitate cooperation and communication between Member States. NCAs will be responsible for ensuring compliance, including imposing sanctions where an organisation suffers a breach intentionally or through gross negligence. The amendment to the original text of the Directive permits Member States to appoint several NCAs, so long as only one acts as the “national single point of contact”, and restricts the imposition of sanctions to cases of intentional breach or gross negligence.

As the Directive progresses to the next stage of the legislative process, additional changes could be made. The Commission aims for the Directive to have completed the legislative process by the end of 2014.

 

N.D. Cal. Denies Class Cert. Motion in Gmail Wiretapping Litigation

This post was written by Mark Melodia, Paul Bond, and Frederick Lah.

Last week, the Northern District of California denied a motion for class certification in a multidistrict litigation brought against Google over its alleged practice of scanning Gmail messages in order to serve content-based advertising. In re: Google Inc. Gmail Litigation, 5:13-md-02430 (N.D. Cal.). In sum, the court found that questions relating to whether class members had consented to the practice were too highly individualized to satisfy the predominance requirement based on the myriad of disclosures available to class members.

The original complaint in this case dates back to 2010. Six class actions were eventually centralized in the Northern District of California where a consolidated complaint was filed. The complaint sought damages on behalf of individuals who either used Gmail or exchanged messages with those who used Gmail and had their messages intercepted by Google. The causes of action were brought under California’s Invasion of Privacy Act, as well as federal and state wiretapping laws (California, Maryland, Pennsylvania, and Florida).

In general, the Wiretap Act prohibits the unauthorized interception of wire, oral, or electronic communications. Under the federal Wiretap Act, there are several exceptions to this general prohibition, one of which is if the interception is done subject to “prior consent.” So, the issue of whether the class members had consented to the interception, either expressly or impliedly, was a central issue in the case.

Google filed a Motion to Dismiss on the basis that its interception fell within the ordinary course of Google’s business and was therefore exempt from the wiretapping statutes. That argument was rejected by the court. Google also argued that class members had expressly consented to the interception based on Gmail’s Terms of Service and Privacy Policy (collectively, the “Terms”), and that even if they hadn’t viewed the Terms, they impliedly consented to the interception because, per Google, all email users understand and accept the fact that email is automatically processed. In September 2013, the court granted in part and denied in part Google’s Motion to Dismiss. Only the claims based on California’s Invasion of Privacy Act and Pennsylvania’s wiretapping law (with respect to a subclass) were dismissed; the rest of the claims survived. A month later, the plaintiffs filed their motion for class certification.

What proved fatal for plaintiffs on this go-around was their inability to demonstrate that the proposed classes satisfied the predominance requirement under FRCP 23.  There were several proposed classes and subclasses.  Members in each of the classes were potentially subject to a different set of disclosures and registration processes.  For instance, one of the classes represented users who signed up for Google’s free web-based Gmail service.  These users were required to check a box indicating that they agreed to be bound by the Terms of Service.  Another class comprised users of an internet service provider (“ISP”), Cable One, which had contracted with Google for Google to provide email service under the Cable One domain name.  Another class consisted of users from educational institutions, such as the University of Hawaii; similar to Cable One, the educational institutions had contracted with Google for email services.  For businesses such as the ISP and the educational institutions, the contract required that the contracting business, not Google, ensure that end users agreed to Google’s Terms of Service.

With respect to the Terms themselves, it is interesting to note that in the court’s Order denying Google’s Motion to Dismiss, the court previously characterized the Terms as “vague at best and misleading at worst.” Per the court, the Terms of Service stated only that Google retained authority to prescreen content to prevent objectionable content, while the Privacy Policy suggested that Google would only collect user communications directed to Google, not among users. And while the contracting businesses, like Cable One and the University of Hawaii, were required to ensure end users were accepting Google’s Terms of Service, there were variations among the businesses as to how they would present the Terms of Service and obtain consent. Ironically, the fact that the court considered Google’s Terms to be vague or misleading and the fact that the Terms were not presented uniformly to end users appeared to actually help Google avoid certification -- it led to more individualized inquiries as to whether the users had given their express consent.

In addition to Google’s Terms, the court noted that there was also a “panoply of sources” through which users could have impliedly consented to Google’s practices, such as Google’s Help pages, Google’s Privacy Center, Google’s Ad Preference Manager (which included a webpage on “Ads on Search and Gmail”), Gmail’s interface itself, the Official Gmail Blog, Google’s SEC filings (which include the statement, “we serve small text ads that are relevant to the messages in Gmail”), and even media reports (e.g., New York Times, Washington Post, NBC News, PC World, etc.). The breadth of these sources helped to further convince the court that determining whether class members impliedly consented to Google’s interception was a highly individualized determination, and not one based on common questions. Whether each individual knew about or consented to the interception would depend on the sources to which he or she had been exposed. The plaintiffs contended that relying on extrinsic evidence outside of Google’s Terms would violate the parol evidence rule. The court was quick to point out that while that argument might work in a breach of contract case, the parol evidence rule was not applicable under the Wiretap Act, which requires the fact finder to consider all surrounding circumstances in relation to the issue of consent.

Putting aside the question of whether Google’s Terms were in fact vague or misleading, a key takeaway for businesses from this case should be the importance of educating customers about their data practices. Google was able to avoid certification based on the fact that they offered a variety of other opportunities for their customers to learn more about their services and products. Outside of a website or app terms of use and privacy policy, businesses need to understand that the disclosures they make elsewhere can help to educate and inform users about their practices, e.g., on the websites or apps themselves, a “Help” or “FAQ” section, on advertisements or in promotional emails, or in their subscription or license agreements. Of course, the more disclosures a business offers, the more challenging it can be to make sure that the message being delivered remains consistent. Plus, businesses should revisit their disclosures regularly to make sure that they are clear, conspicuous, current, and forthcoming.

The fact that these cases were brought under wiretapping laws adds another interesting wrinkle. The federal Wiretap Act provides for statutory damages of $100 per day of violation, which across a large class could add up to billions of dollars in exposure. Various other web companies have recently faced privacy class actions under the Wiretap Act over the alleged data mining of user communications, including Yahoo!, LinkedIn and Facebook. We’ll continue to monitor this area closely to see how the recent Google decision might affect this wave of cases.
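
The aggregate exposure math is straightforward. A minimal back-of-the-envelope sketch follows; the class size and violation period are hypothetical figures chosen purely for illustration, not numbers from any of these cases:

```python
# Illustration of how Wiretap Act statutory damages of $100 per day of
# violation scale across a class. The class size and violation period
# below are hypothetical, not figures from the Google litigation.
DAILY_STATUTORY_DAMAGES = 100  # dollars per day of violation, per member

def potential_exposure(class_members: int, days_of_violation: int) -> int:
    """Aggregate statutory-damages exposure across an entire class."""
    return class_members * DAILY_STATUTORY_DAMAGES * days_of_violation

# A hypothetical class of 1 million users over a 30-day violation period:
exposure = potential_exposure(1_000_000, 30)
print(f"${exposure:,}")  # $3,000,000,000 -- three billion dollars
```

Even modest assumptions put aggregate exposure in the billions, which is why certification is so hotly contested in these cases.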

Originally posted in the IAPP Privacy Tracker.

ICO issues updated code of practice on subject access requests

This post was written by Cynthia O'Donoghue.

The UK Information Commissioner’s Office (ICO) has issued an updated code of practice (the Code) on subject access requests, less than a year after releasing its original guidance paper on the topic. The Code is designed to help organisations fulfil their duties under the Data Protection Act 1998 (DPA) and contains guidance on recognising and responding to subject access requests.

The “right of subject access” enables individuals to request from organisations information about what personal data is held about them. The information may include the source of the personal data, how it is processed and whether it is passed on to any third parties. The DPA also permits individuals to request a copy of the personal data held. Unless an exemption applies, organisations are under a duty to provide this information when requested.

The Code is not legally binding, but it does demonstrate the steps that the ICO considers to be good practice. The ICO also points out that by dealing with subject access requests efficiently an organisation may enhance the level of customer service offered.

The main recommendations of the Code relate to the handling of a subject access request and cover the following issues:

  1. Taking a positive approach to subject access.
  2. Finding and retrieving the relevant information.
  3. Dealing with subject access requests involving other people’s information.
  4. Supplying information to the requester (not just copies).
  5. Exemptions.

The Code also contains guidance in relation to “special cases” and enforcement action by the ICO.

UK Information Commissioner's Office and U.S. Federal Trade Commission sign Memorandum of Understanding

This post was written by Cynthia O'Donoghue.

At the beginning of March, the UK Information Commissioner’s Office (ICO) signed a memorandum of understanding (MOU) with the U.S. Federal Trade Commission (FTC) at the IAPP Global Privacy Summit. The memorandum is aimed at increasing cooperation between the agencies, with UK Information Commissioner Graham stating that the arrangement would be “to the benefit of people in the United States and the United Kingdom.”

Whilst the MOU does not create legally binding obligations between the two agencies, it sets out terms for cooperation during investigations and enforcement activities. The FTC and ICO will cooperate on serious violations. The methods for cooperation include:

  • Sharing information, including complaints and personal information
  • A mutual undertaking to provide investigative assistance to the other agency through use of legal powers
  • Coordinating enforcement powers when dealing with cross-border activities arising from an investigation of a breach of either country’s law, where the matter being investigated is the same or substantially similar to practices prohibited by the other country

Measures to encourage cooperation between national regulators have been introduced by several international organisations. For example, in 2010, the Asia-Pacific Economic Cooperation (of which the United States is a member) launched a Cross-border Data Privacy Initiative, recognising that “trusted flows of information are essential to doing business in the global economy.”

The MOU is a joint acknowledgment by the FTC and ICO that consumer protection and data protection require close collaboration, and it serves as a warning to organisations that the agencies will be proactive in carrying out investigations of serious violations of consumer protection and data protection laws.

European Parliament votes in favour of new Data Protection Regulation

This post was written by Cynthia O'Donoghue.

In March, the European Parliament voted overwhelmingly in favour of implementing the draft Data Protection Regulation, making its commitment to reforming the European regime irreversible. In order to become law, the Regulation must now be negotiated and adopted by the Council of Ministers.

Discussions around reform began in January 2012, in recognition of the growing economic significance of personal data. With estimates that by 2020 the personal data of European citizens will be worth nearly €1 trillion annually, it is important that any reform ensures an adequate level of protection for citizens while not overburdening businesses. To this end, Vice-President Viviane Reding has stated that the Regulation will “make life easier for business and strengthen the protection of our citizens.” The Regulation will make four key changes to the data protection regime, which are summarised below:

  1. Equal application in all EU Member States by replacing the “current inconsistent patchwork of national laws”, making compliance easier and cheaper
  2. Creation of a “one-stop shop” allowing organisations to deal with a single data protection authority where their main EU establishment is located, rather than with authorities across various member states, reducing the administrative burden on organisations. EU residents may still bring complaints to the authority in their home country.
  3. Application of the Regulation to any organisation that operates within the single market to ensure that businesses are competing equally
  4. A right of EU residents to request that their data be removed where a data controller no longer has a legitimate reason to retain it

The draft Regulation continues to contain robust sanctioning powers with fines of up to 5% of annual worldwide turnover, a significant increase by the European Parliament on the 2% limit that had previously been recommended.

Despite the Parliament’s vote and ringing endorsement for the draft Regulation, the text is still subject to input from the Council of Ministers, who appear to be taking a more pragmatic approach aimed at promoting the EU Digital Agenda and continued growth in the digital marketplace. The next Council meeting is in June, so we may yet see further revisions to the existing draft.

Edward Snowden submits written testimony to the EU Civil Liberties Commission

This post was written by Cynthia O'Donoghue.

When Edward Snowden alerted the media to the extent of global intelligence surveillance programmes in 2013, he sparked investigations and debate into the gathering of data by intelligence agencies worldwide. He is now contributing to the debate again, submitting written testimony (the Statement) to the investigation of the EU Committee on Civil Liberties (the Committee).

The Committee’s investigation has involved a broad examination of the ways in which data on EU citizens is collected by both American agencies and agencies in its own “back yard”. In January, the Committee released a draft report on the investigation, with MEPs condemning the “vast, systematic, blanket collection of personal data of innocent people”.

In the Statement, Snowden explains the extent of the data gathered by agencies, stating that while working for the NSA, he could “read the private communications of any member of this committee, as well as any ordinary citizen”. Snowden criticises the use of resources to fund mass, suspicionless surveillance at the cost of “traditional, proven methods”, citing a number of examples of incidents that have not been prevented despite the use of mass surveillance.

The Statement also contains details of cooperation between EU Member States and the NSA’s Foreign Affairs Directorate (FAD), stating that FAD systematically attempts to influence legal reform across the EU. Snowden claims that, where these efforts are successful, FAD encourages states to perform “access operations” that allow it to gain access to the bulk communications of telecoms providers within the jurisdiction.

In relation to whistleblowing within intelligence agencies, Snowden points out that the current legal protections in the United States do not apply to the employees of private companies and therefore do not provide a satisfactory level of protection to concerned individuals employed by such organisations. In addition, the Statement indicates that raising concerns internally is ineffective as other employees are fearful of the consequences that may follow.

For businesses, Snowden’s remarks when questioned about industrial espionage are likely to be the most interesting. Snowden states that the fact that “a major goal of the US Intelligence Community is to produce economic intelligence is the worst kept secret in Washington”. In addition, the Statement points out that evidence of industrial espionage can be seen in the press, with an example being recent reports that GCHQ successfully targeted a Yahoo service to gain access to the webcams of devices within citizens’ homes.

The Statement paints a concerning picture of the way in which politics influences the level of protection given to citizens. As Snowden points out, the Statement is limited to information that has already entered the public domain, and so it is unlikely to change the Committee’s findings. However, with the European Parliament scheduled to vote on the draft Data Protection Regulation and on the Safe Harbor Program, it will intensify scrutiny of the legal reforms being implemented in Brussels.
 

Article 29 Working Party and APEC authorities release "practical tool" to map the requirements of the BCR and CBPR regimes

This post was written by Cynthia O'Donoghue.

At the beginning of March, representatives of the EU Article 29 Working Party and the Asia-Pacific Economic Cooperation (which includes, among others, the United States and the People’s Republic of China) announced the introduction of a new Referential on requirements for binding corporate rules (the Referential).

Both the EU and Asia-Pacific Economic Cooperation (APEC) regimes place restrictions on the transfer of data across borders. Under the EU regime, implementing a set of binding corporate rules (BCR) that have been approved in advance by national authorities will allow a company or group of companies to transfer data outside of the EEA without breaching the EU Data Protection Directive. Under the APEC regime, Cross-Border Privacy Rules (CBPR) serve the same purpose, allowing data to be transferred between participating economies. Both regimes require the rules to be approved in advance by regulators before they can be relied on.

The Referential does not achieve mutual recognition of both the EU and APEC systems, but it is intended to be a “pragmatic checklist for organizations applying for authorization of BCR and/or certification of CBPR”. The Referential acts as a comparison document, setting out a “common block” of elements that are shared by both systems, and “additional blocks” which list their differences. For example, while both systems require appropriate training to be given to employees, the EU regime requires only that this training is given to employees with permanent or regular access to personal data. In contrast, the APEC regime appears to extend to all employees.

Work on the Referential began early in 2013, with Lourdes Yaptinchay stating that cooperation between APEC and the EU “is an important next step towards better protecting personal data and could provide a foundation for more fruitful exchange between companies with a stake in the two regions.”

The comparative nature of the Referential highlights the challenges facing organisations that want to satisfy both the EU and APEC regimes with a single set of rules. Organisations can use the document to identify the more stringent requirement on any given point, draft rules that comply with it, and so navigate the approval process under both systems with greater ease.

Information Commissioner's Office issues updated code of practice on conducting Privacy Impact Assessments

In February, the UK Information Commissioner’s Office (ICO) issued an updated code of practice on conducting Privacy Impact Assessments (PIAs), with a six-point process for organisations to follow (the Code).

A PIA is intended to focus an organisation’s attention on the way data is held and used in any project, and to reduce the risks this creates. A PIA is not a legal requirement, but the Code states that carrying one out will help organisations make sure that they are complying with the law. Carrying out a PIA can also provide reputational benefits, as individuals gain a better understanding of why and how data about them is held.

The Code is aimed at “organisations of any size and in any sector”, and organisations are encouraged to carry out a PIA early on in the life of a project. The PIA process provided by the Code is designed for use by non-experts, making the process accessible to organisations of all sizes.

The Code recommends that organisations consult at all stages of a PIA. Consultations should be carried out both with internal colleagues and with external people who will be affected by the project. A high-level summary of the six-point process is as follows:

  1. Identifying the need for a PIA.  The Code includes screening questions which are designed to be included in an organisation’s normal project management procedure. By doing this, the ICO intends that the need for a PIA to be carried out will be considered in each project.
  2. Describing information flows.  Organisations should consider and document how and why information travels around an organisation in order to effectively map the risk.
  3. Identifying privacy and related risks.  At this stage, an organisation can understand the risks posed by the data highlighted in steps 1 and 2. The Code encourages organisations to adopt their own preferred method of categorizing the risks that are identified.
  4. Identifying and evaluating privacy solutions. Having identified the risks, organisations should consider ways of mitigating them. The Code states that “Organisations should record whether each solution results in the privacy risks being eliminated, reduced or simply accepted.”
  5. Signing off and recording the PIA outcomes. The Code stresses the importance of keeping a record of the PIA process in order to facilitate the implementation of its findings.
  6. Integrating PIA outcomes back into the project plan.  Having completed a PIA, organisations should implement the measures identified into the project management process.

The Code takes an expansive approach to the process of conducting a PIA, providing several annexes with tools to assist in the process. Organisations should be reassured that by following the provisions of the Code, they will be better placed to comply with the Data Protection Act 1998.
 

App Industry meets with the European Commission and National Regulators

This post was written by Cynthia O'Donoghue.

The value of the EU app sector has grown exponentially over the past few years, with a recent EU report estimating that spending on apps in the EU rose to €6.1 billion in 2013, and forecasts that by 2018, the industry could be worth €63 billion per year to the EU economy. However, complaints from consumers across Europe have caused concerns to national regulators about the selling techniques employed in the app industry.

Last month, the European Commission held meetings with representatives of the app industry and national regulators from several European countries. The aim of these meetings was to discuss the concerns of national regulators and draw up a plan to implement solutions within a clear timeframe.

The most pressing concern of the regulators is games being advertised as “free”, when in fact charges for gameplay later become apparent. In the Common position of national authorities within the CPC (the Common position), national authorities state that if the term “free” has the potential to mislead users, it could be incompatible with the law. The term should only be used to describe games that are “free” in their entirety, or for games that are marketed with accurate details of additional costs given upfront. In addition, regulators state that purchasers of apps should be provided with full information about payment arrangements, rather than being debited by default, as has sometimes been the case.

Also discussed at the meeting was the app industry’s interaction with children. In the Common position, national authorities state that games that either target children, or that can reasonably be foreseen to appeal to children, should not contain direct encouragements to children to buy items. Developers should also consider carefully how their apps display such messages.

The encouragement and development of the app industry is of great importance to the EU economy, and its contribution to commerce is set to grow. At this stage, it is important that regulators and lawmakers strike the right balance between allowing developers the autonomy to create products, and appropriate regulation that protects the interests of consumers. The approach of the EU Commission in holding collaborative talks with the industry is a promising sign which indicates that a balanced outcome is achievable.

Hong Kong's Office of the Privacy Commissioner for Personal Data releases Best Practice Guide on Privacy Management Programmes

This post was written by Cynthia O'Donoghue.

Last month, Hong Kong’s Office of the Privacy Commissioner for Personal Data (OPCP) released a Best Practice Guide on Privacy Management Programmes (PMPs) (the Guide). Striking a similar chord to the UK Information Commissioner’s Office in its recently released code of practice on conducting Privacy Impact Assessments, the OPCP notes that although the Personal Data (Privacy) Ordinance (the Ordinance) does not require PMPs, organisations that adopt them are likely to benefit from increased trust among their customers and employees, as well as from being better able to demonstrate compliance with the Ordinance.

The Guide does not provide a “one-size-fits-all” solution, and organisations will need to consider their size and nature when developing a PMP. To this end, the Guide addresses both the fundamental components of a PMP and its ongoing assessment and revision.

The Guide notes that implementation of PMPs will require organisations to consider their policies, staff training, and the processes that are followed when contracting with third parties. The Guide states that the key components of a PMP are:

  1. Organisational commitment: this includes buy-in from top management, designating a member of staff to manage the PMP (this could be a full-time member in a large organisation, or a business owner in a small organisation), and establishing reporting lines.
  2. Programme controls: an inventory of personal data held by the organisation should be made. Internal policies should also be put in place to address obligations under the Ordinance, with risk-assessment tools to allow new or altered projects to be assessed.

The Guide is a welcome development for Hong Kong organisations, which, by following its terms, will be able to demonstrate their compliance with the Ordinance. However, organisations should also note that the Guide indicates that the OPCP expects organisations to take positive steps towards fulfilling their obligations.

Indian Centre for Internet and Society issues call for comments on draft Privacy (Protection) Bill

This post was written by Cynthia O'Donoghue.

A nonprofit research organisation, the Indian Centre for Internet and Society (ICIS), has issued an open call for comments on its draft Privacy (Protection) Bill 2013 (the Bill). Consultations on the Bill started in April 2013, with a series of seven roundtable talks being held in partnership with the Federation of Indian Chambers of Commerce and Industry, and the Data Security Council of India.

ICIS states that it has “the intention of submitting the Bill to the Department of Personnel and Training as a citizen’s version of privacy legislation for India.” India’s current data protection regime, a product of piecemeal development, imposes very limited duties on organisations that collect, process and store data, and no national regulator is in place to oversee it.

The draft Bill contains provisions for a new Data Protection Authority of India, to be established with wide powers of investigation, review and enforcement. Penalties detailed in the Bill for infringement include terms of imprisonment and fines. In addition, the Bill proposes the introduction of comprehensive regulation, including:

  • Regulation of the collection, processing and storage of data by any person
  • Regulation of the use of voice and data interception by authorities
  • Regulation of the manner in which forms of surveillance not amounting to interceptions of communications may be conducted

If implemented, the draft Bill would be a considerable step forward for the privacy landscape in India, which has so far lacked the impetus provided by international instruments such as the European Data Protection Directive (95/46/EC).
 

The Inevitable - EMV Payments On a Fast Track to Becoming a New Standard in the United States

This post was written by Timothy J. Nagle and Angela Angelovska-Wilson.

Last week, congressional leaders in Washington continued with their focus on the safety of the U.S. payments system in the aftermath of the massive retailer breaches at Target, Neiman Marcus and others. The House Committee on Financial Services held its session March 5, while the House Committee on Science, Space and Technology hearing was held March 6. The message coming out of the hearings was that the adoption of EMV cards, payment cards utilizing smart-chip technology instead of a magnetic stripe, is just one of many steps that need to be taken to secure the U.S. payments system.

Click here to read the full Client Alert.

CFTC Issues Recommended Best Practices for Security of Financial Information

This post was written by Timothy J. Nagle, Philip G. Lookadoo, and Christopher Fatherly.

Last week, the staff of the Commodity Futures Trading Commission (CFTC) issued Staff Advisory 14-21 on the subject of “Gramm-Leach-Bliley Act Security Safeguards.” The CFTC had previously issued guidance in Part 160 of its regulations on “Privacy of Consumer Financial Information” (April 27, 2001), and Swap Dealers (SDs) and Major Swap Participants (MSPs) were added to those Part 160 regulatory obligations on July 22, 2011. The fact that the Commission “at this time…believes it important to outline recommended best practices for covered financial institutions” is noteworthy, especially in light of its heavily burdened staff, which has been focused on other issues such as electronic and automated trading. It demonstrates that cybersecurity is a significant issue in the financial industry and that the CFTC wants to be relevant to, and an active participant in, the cybersecurity discussion.

As noted in Staff Advisory 14-21, its provisions reflect similar guidance from the Federal Financial Institutions Examination Council and the Federal Trade Commission, as well as draft guidance from the Securities and Exchange Commission. The “recommended best practices” include:

  • Maintaining a written information security and privacy program
  • Designating an employee with management responsibility for security and privacy who “is part of or reports directly to senior management or the Board of Directors”
  • Identifying risks and implementing safeguards to address those risks
  • Training staff
  • Regularly testing controls such as access management, use of encryption, and incident detection and response
  • Retaining an independent party to evaluate the controls on a regular basis
  • Re-evaluating the program at regular intervals

Three additional practices that reflect increasing emphasis by other regulators include supervising third-party service providers through security-related contract requirements, establishing a breach response process, and providing an annual assessment of the program to the Board of Directors.

The CFTC’s Division of Swap Dealer and Intermediary Oversight, which issued Staff Advisory 14-21, will “enhance its audit and review standards as it continues to focus more resources on GLBA Title V compliance.” This echoes recent statements from the Financial Industry Regulatory Authority in a January 2014 Targeted Examination Letter on Cybersecurity, and the SEC’s announcement that it will conduct a round table later this month on cybersecurity issues.

The “covered entities” that are subject to Staff Advisory 14-21 (futures commission merchants (FCMs), commodity trading advisors (CTAs), commodity pool operators (CPOs), introducing brokers (IBs), retail foreign exchange dealers, SDs and MSPs) are not consumer-facing, but they are part of the financial system. For banks and other large financial institutions, Staff Advisory 14-21 will support the goal of maintaining comprehensive, consistent security and privacy standards throughout the enterprise. Other firms, such as broker-dealers, asset managers and insurance companies, which have not been subject to the same level of security and privacy regulation as national banks, should see this as one more indication that all financial institutions will eventually be expected, through regulation or industry practice, to implement and maintain the essential elements of an information security program. In this respect, it would not be surprising to see the SEC re-issue the draft Regulation S-P for public comment and implementation.

For others in the commodities world that have not yet focused on the security of personal (or proprietary) information, Staff Advisory 14-21 adds compliance obligations on top of their other new regulatory responsibilities to the CFTC. In the energy, agriculture and metals commodity trading industries, for example, major players have only recently begun to register as SDs, MSPs or other registered entities, and more registrations are expected in the near future. In addition to the CFTC’s recordkeeping, reporting and other business conduct obligations that these entities have only recently taken on, they can now add regulatory compliance obligations related to the security of personal (or proprietary) information under Part 160 and the recommended best practices in Staff Advisory 14-21.

While the focus of Staff Advisory 14-21 is on personal information, the recommended practices apply equally to sensitive proprietary information that any financial or commodities firm would want to protect. In the past, a firm may have considered it merely prudent to implement some level of information security and privacy practices; now, firms can expect to be subject to government audit in those areas.

Data Breach Class Action Settlement Gets Final Approval - Payment to Be Made to Class Members Who Did Not Experience ID Theft

This post was written by Mark Melodia, Steven Boranian and Frederick Lah.  

Last week, a judge for the Southern District of Florida gave final approval to a settlement between health insurance provider AvMed and plaintiffs in a class action stemming from a 2009 data breach of 1.2 million sensitive records from unencrypted laptops. The settlement requires AvMed to implement increased security measures, such as mandatory security awareness training and encryption protocols on company laptops. More notably, AvMed agreed to create a $3 million settlement fund from which members can make claims for $10 for each year that they bought insurance, subject to a $30 cap (class members who experienced identity theft are eligible to make additional claims to recover their monetary losses). According to Plaintiffs’ Unopposed Motion and Memorandum in Support of Preliminary Approval of Class Action Settlement  (“Motion”), this payment to class members “represents reimbursements for data security that they paid for but allegedly did not receive. The true measure of this recovery comes from comparing the actual, per-member cost of providing the missing security measures—e.g., what AvMed would have paid to provide encryption and password protection to laptop computers containing Personal Sensitive Information, and to otherwise comply with HIPAA’s security regulations—against what Class members stand to receive through the Settlement” (p. 16). It’s been reported that this settlement marks the first time that a data breach class action settlement will offer monetary reimbursement to class members who did not experience identity theft. 
In defending the fairness, reasonableness, and adequacy of the settlement, plaintiffs noted in the Motion, “[b]y making cash payments available to members of both Classes—i.e., up to $30 to members of the Premium Overpayment Settlement Class, and identity theft reimbursements to members of the Identity Theft Settlement Class members—the instant Settlement exceeds the benefits conferred by other data breach settlements that have received final approval from federal district courts throughout the country” (p. 16).
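
The premium-overpayment formula described above is simple to sketch. The $10-per-year and $30-cap figures come from the settlement as reported; the function itself is our illustration, not language from the agreement:

```python
# Sketch of the AvMed settlement's premium-overpayment claim formula as
# described above: $10 for each year the class member bought insurance,
# capped at $30. (Identity-theft class members may additionally claim
# their proven monetary losses, which this sketch does not model.)
PER_YEAR = 10   # dollars per year of coverage purchased
CAP = 30        # maximum premium-overpayment claim per class member

def overpayment_claim(years_of_coverage: int) -> int:
    """Premium-overpayment claim for one class member, in dollars."""
    return min(PER_YEAR * years_of_coverage, CAP)

for years in (1, 2, 3, 5):
    print(years, overpayment_claim(years))  # the cap binds at 3+ years
```

Note that under this structure the $3 million fund, not individual loss, is the practical limit on aggregate recovery, which is consistent with the parties framing the payment as a refund of security that was paid for but not delivered.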

The finalization of this settlement marks the end of a hard-fought battle between the parties. After AvMed obtained a dismissal with prejudice in the District Court based on plaintiffs’ failure to allege a cognizable injury, the dismissal was appealed to the Eleventh Circuit. Resnick v. AvMed, Inc., 693 F.3d 1317 (11th Cir. 2012). There, the Eleventh Circuit found that plaintiffs had established a plausible causal connection between the 2009 data breach and their instances of identity theft. The court also determined that plaintiffs’ allegations (that part of the insurance premiums plaintiffs paid to defendant were supposed to fund the cost of data security, and that defendant’s failure to implement that security barred it from retaining the full amounts received) were sufficient to state a claim for unjust enrichment. On remand, AvMed answered plaintiffs’ complaint and filed a motion to strike class allegations, which the District Court denied as premature.

We’ve been particularly interested in this case for quite some time. Last year, we blogged about the unique nature of the settlement after the agreement was reached. Class action plaintiffs’ lawyers in the data breach context have often had their cases dismissed on the basis that they are unable to prove the class suffered any sort of injury or loss. With the AvMed settlement now final, we expect plaintiffs’ lawyers to try to leverage similar payment terms into their own data breach class action settlements. As we previously noted, class action settlements are only binding upon the parties that enter into them, but their terms can serve as models for future proposed settlements.

Court of Appeal Confirms a Person's Name Constitutes Personal Data

This post was written by Cynthia O'Donoghue.

A judgment from the Court of Appeal on 7 February 2014 in the case of Edem v The Information Commissioner & Financial Services Authority [2014] EWCA Civ 92 held that “a name is personal data unless it is so common that without further information, such as its use in a work context, a person would remain unidentifiable despite its disclosure” (see paragraph 20 of the judgment).

The definition of ‘personal data’ within the meaning of the Data Protection Act 1998 (‘DPA’) is often debated. Section 1(1) of the DPA defines ‘personal data’ as “Data which relates to a living individual who can be identified from those data, or from those data and other data which is in the possession of or is likely to come into the possession of the data controller and includes any expression of opinion about the individual and any indication of the intentions of the data controller or any other person in respect of the individual.”

The Court of Appeal previously interpreted the application of this definition in Durant v Financial Services Authority [2003] EWCA Civ 1746, [2011] 1 Info LR 1 (Durant). Paragraph 28 of Auld LJ’s judgment provided two notions to be used to determine whether information is personal data: the first was whether the information is biographical in a significant sense; the second was whether the information has the data subject at its focus, and whether disclosure of such information would affect that data subject’s fundamental right to privacy derived from the European Data Protection Directive 95/46/EC.

The most recent case before the Court of Appeal has now elaborated further on this interpretation, specifically examining whether an individual’s name is automatically deemed personal data. Moses LJ, Beatson LJ and Underhill LJ questioned whether a person’s name is automatically considered personal data simply because it identifies and relates to that individual, or whether the information must arise in some context which reveals more about the individual than merely his or her name.

The facts of the case are very similar to Durant, involving an application by Mr Edem under the Freedom of Information Act 2000 (‘FOIA’) for the disclosure of information relating to complaints he had made to the Financial Services Authority (‘FSA’) regarding its regulation of a company. Specifically, Mr Edem sought information about the complaints and the names of the three individuals within the FSA who handled them. The Information Commissioner declined to order the disclosure of these names in response to Mr Edem’s information request, on the grounds of section 40(2) of the FOIA, which exempts from disclosure information that is personal data. On appeal, the First Tier Tribunal decided that the names of the officials did constitute personal data and ordered that they be disclosed. However, the Upper Tribunal (Administrative Appeals Chamber) reversed this decision, preventing the disclosure of the information, leading Mr Edem to appeal to the Court of Appeal.

The Court of Appeal sought to distinguish this case from Durant, finding that Auld LJ’s two notions outlined above were not applicable to the facts here: the issue was whether information comprising a person’s name could automatically be considered personal data, rather than whether information which did not obviously relate to, or specifically name, an individual could amount to personal data within the meaning of the DPA.

In reaching its conclusion and dismissing the application of Auld LJ’s reasoning in Durant, the Court of Appeal reiterated guidance from the Information Commissioner’s Office, which clarifies: “It is important to remember that it is not always necessary to consider ‘biographical significance’ to determine whether data is personal data. In many cases data may be personal data simply because its content is such that it is ‘obviously about’ an individual. Alternatively, data may be personal data because it is clearly ‘linked’ to an individual because it is about his activities and is processed for the purpose of determining or influencing the way in which that person is treated. You need to consider biographical significance, only where information is not obviously about an individual or clearly linked to him.”

Applying this guidance to the facts of the case, the Court of Appeal declared that the names of the individuals did amount to personal data, upholding the decision of the Upper Tribunal (Administrative Appeals Chamber) to prevent the disclosure of such information on the grounds of section 40(2) of the FOIA.

This case is significant because it adds weight to the argument that the Durant test for determining whether information is personal data within the meaning of the DPA is not definitive and is limited to certain factual scenarios. Furthermore, it reconfirms that the Durant test should not be applied in isolation, without consideration of further tests that have proliferated, such as those arising from Kelway v The Upper Tribunal, Northumbria Police and the Information Commissioner [2013] EWHC 2575 (Admin) (see our previous blog).
 

Mexican Data Protection Authority Intends to Increase Investigations and Enforcement for 2014

This post was written by Cynthia O'Donoghue.

On February 4, 2014, the Mexican data protection authority, the Institute of Access to Information and Data Protection (IFAI), issued a statement to Bloomberg BNA announcing that it anticipates issuing a large number of fines in 2014, following an unprecedented increase in violations of Mexico’s Federal Law on the Protection of Personal Data in the Possession of Private Parties (the Federal Law).

Article 64 of the Federal Law empowers the IFAI to issue fines of 100 to 320,000 times the Mexico City daily minimum wage (approximately US$480 to US$1,534,275) for violations of the Federal Law. In addition to monetary penalties, three months’ to three years’ imprisonment may be imposed on data controllers for any security breach of databases under their control. These sanctions can be doubled for violations concerning sensitive data.
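As a rough sketch of how the Article 64 fine range scales with the minimum wage, the calculation below assumes a Mexico City daily minimum wage of 64.76 pesos and an exchange rate of roughly 13.5 pesos to the US dollar; both figures are illustrative assumptions for 2013/14, not values fixed by the statute.

```python
# Illustrative only: the Federal Law expresses fines as multiples of the
# Mexico City daily minimum wage. The wage and exchange rate below are
# assumed 2013/14 figures, not values taken from the statute.
DAILY_MIN_WAGE_MXN = 64.76   # assumed Mexico City daily minimum wage (pesos)
MXN_PER_USD = 13.5           # assumed exchange rate

def article_64_fine_range_usd(low_multiple: int = 100,
                              high_multiple: int = 320_000) -> tuple[float, float]:
    """Return the (minimum, maximum) Article 64 fine in US dollars."""
    day_wage_usd = DAILY_MIN_WAGE_MXN / MXN_PER_USD
    return low_multiple * day_wage_usd, high_multiple * day_wage_usd

low, high = article_64_fine_range_usd()
print(f"US${low:,.0f} to US${high:,.0f}")          # roughly US$480 to US$1.5 million
print(f"Doubled for sensitive data: up to US${2 * high:,.0f}")
```

On these assumptions the range works out to roughly US$480 to US$1.5 million, consistent with the figures reported above, and the sensitive-data doubling lifts the ceiling to around US$3 million.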

IFAI President Gerardo Laveaga stated that a number of new investigations have been opened, following a 20% increase in the number of data-protection complaints from individuals from 2012 to 2013. The IFAI issued fines totalling 50 million pesos (US$3.7 million) in 2013, a figure that is set to increase markedly in 2014. This included the $1 million fine levied against the bank Banamex and the $500,000 fine imposed on cellular company Telcel. The IFAI reported that it intends to defend all of its 2013 fines that are under appeal, which will likely swell its coffers in 2014 even further. Organisations should take heed: the IFAI is evidently increasingly willing to show its teeth to enforce compliance with the Federal Law.

First European Cookie Fine Issued By Spanish Data Protection Authority

This post was written by Cynthia O'Donoghue.

The Spanish data protection authority, the AEPD, has issued the first European cookie fine for the violation of Article 22.2 of Spain’s Information Society Services and Electronic Communications Law 34/2002 (Spanish E-Commerce Act), as amended by Royal Decree Law 13/2012 which implements the e-Privacy Directive (Directive 2002/58).

On 29 April 2013, the AEPD issued guidelines on the use of cookies (the Cookies Guide), which clarify how to interpret Article 22.2 of the Spanish E-Commerce Act. The guidance recommends that information on the use of cookies be sufficiently visible and provided in one of the following ways:

  1. In the header or footer of the website
  2. Through the website Terms & Conditions
  3. Through a banner which offers information in a layered approach
    • First layer: highlighting essential information about the use of cookies, including the relevant purpose, also detailing the existence of any third-party cookies
    • Second layer: link to a cookies policy with more detailed information on cookie use, specifically the definition and function of each cookie, information about the types of cookies used, information about how to delete cookies and identification of all parties who place cookies

The Cookies Guide also clarifies the way in which consent to cookies must be obtained. This includes:

  • Acceptance of website terms and conditions or privacy policy
  • Configuration of browser functions
  • Use of a new feature or function offered by the website
  • Download of specific website content
  • Configuration of website functions

Implied consent can only be inferred from a user’s specific action, as opposed to inactivity: for example, using the scroll bar in the vicinity of a highly visible cookies notice, or otherwise clicking on website content.

In July 2013, four months after issuing the Cookies Guide, the AEPD began investigating Navas Joyeros S.L. and Luxury Experience S.L. and their use of cookies on their promotional websites. Article 38.4(g) of the Spanish E-Commerce Act empowered the AEPD to impose monetary penalties totalling €3,500 against the two companies. In Resolution No. R/02990/2013, the AEPD declared that the companies had failed to provide sufficiently clear and comprehensive information about their use of cookies, in violation of Article 22.2 of the Spanish E-Commerce Act. Specifically, the information on cookie use was not provided in the layered manner required by the AEPD’s Cookies Guide. Furthermore, the notices neglected to detail the cookies used or the types of cookies set, merely specifying broad purposes for the use of cookies, omitting to mention which cookies were controlled by the website and which by third parties, and failing to provide website users with information about how to deactivate cookies or revoke consent to their use.

The AEPD’s landmark decision has resulted in the first EU cookie fine being issued, and could well set the precedent for further penalties in the future for website operators with slack cookie practices.

IAB Future of the Cookie Working Group Publishes White Paper on Privacy and Tracking In a Post-Cookie World

This post was written by Cynthia O'Donoghue.

The ‘Future of the Cookie Working Group’, established by the Interactive Advertising Bureau (IAB) in 2012, has published a white paper titled ‘Privacy and Tracking in a Post-Cookie World’, which addresses the limitations of the traditional cookie.

The Future of the Cookie Working Group takes issue with the fact that the cookie often forms the crux of many privacy-related debates. Furthermore, the cookie is increasingly being regarded as a hindrance to our Internet browsing, with cookie-clogging leading to frustratingly slow page-load times. Perhaps the most significant problem rests in the fact that the cookie is becoming an outdated tool in our advancing digital environment. Steve Sullivan, Vice President, Advertising Technology, IAB, commented ‘The cookie has been central to the success of Internet advertising. However, the industry has evolved beyond the cookie’s designed capability.’ Anna Bager, Vice President and General Manager, Mobile Marketing Center of Excellence, IAB, added, ‘With the proliferation of Internet-connected devices, cookies are an increasingly unreliable tool for tracking users. Since they cannot be shared across devices, cookies lack the ability to provide users with persistent privacy and preferences as they jump from smartphone to laptop to tablet. At the same time, it leaves publishers unable to seamlessly communicate customized content to their audiences on a variety of screens. This report is the first step in correcting the problem and eliminating one of the biggest limitations impacting mobile advertising today.’

As a way forward, the white paper proposes five solutions that could potentially replace the role of the traditional cookie, including:

  • Device – Use of statistical algorithms to infer a user’s ID from information provided by the connected device, browser app or operating system.
  • Client – A user’s browser app or operating system tracks user information and manages preferences, then passes the information along to third parties.
  • Network – Third-party servers positioned between users’ devices and publishers’ servers set IDs that are used to track user information and manage preferences.
  • Server – The current approach using cookies to track user information and manage preferences.
  • Cloud – Tracks user information and manages preferences via a centralized service that all parties agree to work with.

The white paper then analyses the feasibility of each of these solutions against IAB Guiding Principles, which identify the core needs of publishers, consumers and industry as set out below:

  • Publishers
    • A single privacy dashboard
    • Comprehensive privacy controls
    • Significantly fewer third-party pixels
    • Improved user tracking and targeting
    • Reduced cost for privacy compliance
    • Ability to detect non-compliant third parties
    • Open competition
    • Minimal deployment overhead
  • Consumers
    • A single privacy dashboard
    • A universal privacy view
    • Comprehensive privacy controls
    • Persistent and universal consumer preferences
    • Ability to detect non-compliant publishers and third parties
    • Free online service
  • Industry
    • Decreased ramp-up time and cookie churn
    • Lower operating cost
    • Better cross-device tracking
    • Better consumer transparency and control
    • High-integrity frequency capping
    • Less redundant data collection and transfer
    • Reduced regulatory threats
    • Clear value to the consumer
    • A non-proprietary solution not limited to one vendor
    • Minimal deployment overhead

The white paper concludes that all of the proposed solutions would prove more effective in achieving the objectives of IAB’s Guiding Principles than the current cookie-based approach. Taking this into account, together with the fact that the proposed EU data protection regulation intends to impose more stringent rules on profiling, this could signal the demise of the traditional cookie as we know it.

ICO January Updates for 2014

This post was written by Cynthia O'Donoghue.

The ICO has had a busy January with some key updates to note for the start of 2014.

The ICO has produced a series of quarterly reports:

  • Spam text messages
    • The top three subjects of unsolicited marketing text messages were found to be debt management, payday loans and payment protection insurance.
    • Enforcement activity for 2014 will focus on culprits in breach of the Privacy and Electronic Communications Regulations (PECR) 2003.
    • The ICO has lobbied the Department for Culture, Media and Sport to lower the threshold that triggers enforcement by monetary penalty for violations of PECR, arguing that the requirement to demonstrate substantial damage or distress is too high, with too many organisations that send unsolicited marketing texts slipping through the grasp of the ICO’s enforcement powers.
  • Marketing calls
    • The top three subjects of live sales calls covered payment protection insurance, accident claims and energy.
    • The level of complaints about cold calls is at its lowest since October 2012, totalling 4,996 in December 2013.
    • The ICO attributes the decline in complaints to the fines issued in 2013, such as the £90,000 fine against DM Design Bedrooms Ltd for making 2,000 unsolicited marketing calls in breach of PECR.
  • Cookies
    • The ICO received 53 complaints during the period of October-December 2013 about cookies via the ICO website.
    • The ICO is focusing enforcement on the most-visited UK websites that have taken no steps to raise awareness about cookies or to gain user consent.
    • The ICO has now written to a total of 265 organisations about compliance with cookie rules.

The ICO has experienced mixed fortunes with enforcement action. On January 24, the ICO successfully prosecuted six investigators at ICU Investigations Ltd for conspiring to unlawfully obtain personal data on behalf of the firm’s clients; the two managers of the company were found guilty of a criminal offence under section 55 of the Data Protection Act 1998, and the investigators were fined a total of £37,107. Furthermore, back in December 2013, the ICO issued a fine of £175,000 against payday loan company First Financial UK for sending millions of unauthorized marketing text messages. By contrast, the First Tier Tribunal (Information Rights) overturned a £300,000 monetary penalty notice issued against Tetrus Telecoms for sending unsolicited text messages to consumers. In spite of this, the ICO is keen to stress that it will appeal this decision further, to demonstrate that breaches of PECR will not be tolerated.

The ICO has also issued a report analysing the strengths and weaknesses of data-processing activities in GP practices involving sensitive patient data. The report identifies a series of recommendations to improve existing practices, including ensuring that all data breaches are reported, improving the way in which patients are informed about how their data will be used, raising awareness of the risks of using fax machines to process patient data, and managing large volumes of patients’ paper records more carefully. The report is likely to be particularly potent in light of NHS England’s latest plans under its care.data scheme, scheduled to launch this March, which will create a central database of all patient records in the UK.

Finally, the ICO’s latest draft guidance, ‘Data Protection and Journalism – a guide for the media’, has also been issued for public consultation. The guide has emerged from the findings of the Leveson Inquiry into the Culture, Practices and Ethics of the Press, published in November 2012, which highlighted the need for the ICO to issue good-practice guidelines to ensure that appropriate standards of data processing are adhered to by the press and media. The deadline for public responses on the draft is 22 April 2014.

NHS Advocates Selling Confidential Patient Data For Secondary Purposes

This post was written by Cynthia O'Donoghue.

The latest plans announced by the UK’s Health and Social Care Information Centre (HSCIC) have resulted in a flurry of media controversy condemning NHS England (NHS) for advocating the sale of patient data to third parties for profit.

HSCIC, together with the NHS, has pioneered a new scheme known as ‘care.data’. From March 2014, patient data from GP practices will be extracted, anonymised and aggregated in a central database for sale to third parties such as drug and insurance companies. The data will include information about every hospital admission since 1980, family history, vaccination records, medical diagnoses, referrals, and health metrics such as BMI and blood pressure, as well as all NHS prescriptions. This information will be combined with other confidential patient data, such as date of birth, postcode, gender and NHS number, to allow the NHS to assess patient care. The NHS then intends to sell the pseudonymised information to any organisation that can meet certain questionable criteria for release, which include broad purposes such as health intelligence, health improvement, audit, health service research and service planning. Critics have condemned the move as highly controversial, considering that most patients believe any information shared with their GPs is given in the strictest confidence; yet this information will be shared automatically under the care.data scheme unless patients explicitly opt out.

The British Medical Association supports the initiative, which advocates the secondary use of patient data. Interestingly, the scheme has also received approval from the ICO, on the grounds that the Health and Social Care Act 2012 permits the NHS to extract patient data under the care.data scheme, providing a lawful basis for processing for the purposes of the Data Protection Act 1998. The NHS insists that the data will only be used for the benefit of the health and care system, to improve the quality of care delivered to patients.

In spite of these reassurances, privacy critics fear that the scheme will result in patients losing track of their data, with no information about whom their information has been shared with, and for what purposes it may be used. Mark Davies, Public Assurance Director of the HSCIC, has also raised concerns by commenting that there is a small risk that patients could be re-identified, given the potential for third parties to match the pseudonymised patient data against their own records.
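The linkage risk Davies describes follows from pseudonymisation typically being deterministic: the same identifier always produces the same token, so any party holding overlapping records can join datasets on that token without ever learning the underlying identifier. A minimal sketch of the mechanism, using keyed hashing, is below; the key, field names and records are invented for illustration and this is not the HSCIC’s actual method.

```python
import hashlib
import hmac

# Hypothetical sketch: the key, records and field names are invented for
# illustration; this is NOT the HSCIC's actual pseudonymisation method.
SECRET_KEY = b"key-held-by-the-data-controller"

def pseudonymise(nhs_number: str) -> str:
    """Replace a direct identifier with a deterministic keyed-hash token."""
    return hmac.new(SECRET_KEY, nhs_number.encode(), hashlib.sha256).hexdigest()[:16]

record = {"nhs_number": "9434765919", "postcode": "LS1 4AP", "diagnosis": "asthma"}

# The released record drops the direct identifier but keeps the token and
# quasi-identifiers (e.g. postcode) needed to assess patient care.
released = {
    "patient_id": pseudonymise(record["nhs_number"]),
    "postcode": record["postcode"],
    "diagnosis": record["diagnosis"],
}

# Linkage: a second extract about the same patient carries the same token,
# so the two records can be joined without knowing the NHS number.
second_extract = {"patient_id": pseudonymise("9434765919"), "item": "prescription"}
assert second_extract["patient_id"] == released["patient_id"]
```

Because the token is stable across extracts, and quasi-identifiers such as postcode and date of birth travel with it, a recipient holding its own records on the same individuals may be able to re-identify patients by matching those fields, which is precisely the risk acknowledged above.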

To ease anxiety, the NHS is in the process of sending out leaflets titled ‘Better Information Means Better Care’ to 26 million households as part of an awareness campaign about the scheme. Critics have similarly condemned this campaign, for failing to clearly explain the privacy risks to patients and inadequately highlighting the right to opt out of the scheme. Ultimately the scheme opens up a chasm of uncertainty about the confidentiality of patient data entrusted to the NHS in its capacity as a data controller.
 

EU Research Group Condemns EU Regulation for Restricting Growth in Life Sciences Sector

This post was written by Cynthia O'Donoghue.

The Wellcome Trust has collaborated with a number of leading medical research organisations to lobby the European Parliament and the Council of Ministers against amendments to the proposed EU Regulation, which could severely restrict the future growth of the life sciences sector in the EU.

The lobby group comprises the European Organisation for Research and Treatment of Cancer, the Federation of European Academies of Medicine, France’s Institut Pasteur, Sweden’s Vetenskapsrådet, Germany’s VolkswagenStiftung, and ZonMw, the Netherlands Organisation for Health Research and Development. The group intends to urge the European Parliament to reject the amendments to the proposed General Data Protection Regulation (the Regulation) approved by the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE) in October 2013 (see our previous blog).

The original draft proposal for the Regulation, first released in January 2012, included a requirement for specific and explicit consent to the use and storage of personal data for secondary purposes, but provided an exemption from this requirement for research purposes, subject to stringent safeguards. As a result, the initial draft was considered “measured and sensible and struck the right balance between protecting the individuals and making possible huge benefits for all our health,” as Wellcome Trust Director Jeremy Farrar commented in a statement to Bloomberg BNA on 29 January.

However, LIBE’s amendments, if approved by the European Parliament and Council of Ministers, would remove the exemption from the consent requirement, making the use of pseudonymised health data in research without specific consent illegal at worst and unworkable at best. In effect, this could make it difficult, if not nearly impossible, for research bodies such as the Wellcome Trust to use pseudonymised health data for secondary research purposes without specific consent. This could impose severe restrictions on the biotechnology industry, preventing growth in clinical trials and scientific research conducted for the benefit of the health of European citizens. The coalition aims to convince MEPs that health data is a vital resource for scientific breakthroughs, which will become impossible if the current draft of the Regulation is not challenged further.

One key argument of the lobby group is that the current position under EU Directive 95/46/EC already offers a sufficiently robust governance framework, ensuring an individual’s data is only used for research in the public interest and within the constraints of strict confidentiality measures. Furthermore, in practice the majority of participants in research studies already voluntarily provide their consent, rendering the requirement for specific consent under the Regulation superfluous. Reform of EU data protection law as currently drafted therefore represents the worst-case scenario for bodies such as the Wellcome Trust.

Reinforcing the coalition’s campaign, the UK advocacy group the Fresh Start Project released a report titled ‘EU Impact on Life Sciences’, which similarly deplores the draft EU Regulation as exemplifying a biotech-hostile regulatory framework. It condemns the Regulation for leaving member states little room for manoeuvre to determine their own data protection policies. The perception is that if the amendments to the Regulation are approved, growth in health and scientific research will be constrained, creating a ‘global slow lane for biotechnology’ and undermining Europe as a hub of biotechnology. This could effectively force Europe to take a backseat in the biotechnology revolution, inhibiting its chances of securing future investment for economic growth. Furthermore, it would put at risk significant European investments currently in place, including ‘The European Prospective Investigation Into Cancer and Nutrition Study’, involving more than half a million European citizens, not to mention plans for the €56 million European Medical Information Framework project, due to link together existing health data from sources across Europe to provide a central bank of information available to researchers for vital studies.

It remains to be seen whether the European Parliament will listen to the lobby group’s concerns and examine the provisions of the draft Regulation in more detail. The fear is that such lobbying will go unheard, in light of recent comments from Vice President Viviane Reding indicating that the European Parliament is keen to adopt the current draft of the Regulation as approved by LIBE, in order to push ahead at full speed with the much-anticipated EU data protection reform in 2014.
 

Cyber-Security in Corporate Finance

This post was written by Cynthia O'Donoghue and James Wilkinson.

The ICAEW has partnered with a task force, including the Law Society, the London Stock Exchange, the Takeover Panel and the Confederation of British Industry, to publish a guide on ‘Cyber-Security in Corporate Finance’ for 2014.

Please click here to read the issued Client Alert.

 

 

The Final NIST Cybersecurity Framework Document Is Out: Now What?

This post was written by Timothy J. Nagle.

The year-long process – led by the National Institute of Standards and Technology (NIST) and the Department of Homeland Security (DHS) – of conducting outreach to the private sector, issuing drafts, receiving and evaluating input, and facilitating interagency coordination ended with the publication last week of the “Framework for Improving Critical Infrastructure Cybersecurity” (Version 1.0). It is a comprehensive document that was initiated by Executive Order 13636 (“Improving Critical Infrastructure Cybersecurity”), and draws heavily from existing standards such as NIST 800-53, ISO 27001, COBIT and others. The Framework represents a significant effort by NIST, sector-specific agencies, industry organizations and individual companies to provide an approach for managing cybersecurity risk “for those processes, information, and systems directly involved in the delivery of critical infrastructure services.” This last quote, from the “Framework Introduction” section, states the purpose and scope of the document. What remains to be seen is the process for implementation, the extent and variety of adoption across sectors and industries, and its assertion as a “standard” outside of the critical infrastructure context.

Please click here to read the issued Client Alert.

 

Report Released on Coordinating Standards for Cloud Computing in Europe

This post was written by Cynthia O'Donoghue.

The European Commission has announced that the European Telecommunications Standards Institute (ETSI) has finally released a report titled ‘Cloud Standards Coordination’. The report marks an important step in realising the European Cloud Computing Strategy, ‘Unleashing the Potential of Cloud Computing in Europe’, first published in 2012.

The European Commission tasked ETSI to ‘cut through the jungle of standards’ that have proliferated for cloud computing services. Interestingly, the ETSI report states that cloud standardization is far more focused than originally anticipated. Furthermore, it has confirmed that while the Cloud Standards landscape is complex, ‘it is not chaotic and by no means a jungle.’

The report usefully sets out the following:

  • A definition of the key roles in cloud computing and illustrative diagram of the roles played by the Cloud Service Customer, Cloud Service Provider and Cloud Service Partner
  • An analysis and classification of more than 100 cloud computing use cases across three phases, including acquisition, operation and termination of cloud services
  • A list of more than 20 relevant organisations involved in cloud computing standardization, including, for example, the European Union Agency for Network and Information Security, the International Organisation for Standardization, and the National Institute of Standards and Technology
  • A map of core cloud computing documents including a selection of more than 150 resource documents, such as standards and specifications, as well as reports and white papers related to different activities to be undertaken by Cloud Service Customers and Cloud Service Providers over the cloud service life-cycle

The report also lists a series of recommendations, including:

  • Interoperability in the cloud requires standardization in APIs, data models and vocabularies
  • Existing security and privacy standards must keep pace with technological advances in the cloud industry, and must develop a common vocabulary
  • More standards must be developed in the area of service level agreements for cloud services, including an agreed set of terminology and service-level objectives
  • The legal environment for cloud computing is the key barrier to adoption. Given the global nature of the cloud and its potential to transcend international borders, there is a need for an international framework and governance, underpinned by agreed global standards.

Neelie Kroes, European Commissioner for the Digital Agenda, commented, “I am pleased that ETSI launched and steered the Clouds Standards Coordination (CSC) initiative in a fully transparent and open way for all stakeholders. Today’s announcement gives a lot of hope as our European Cloud Computing Strategy aims to create 2.5 million new European jobs and boost EU GDP by EUR 160 billion by 2020.”

Director General at ETSI, Luis Jorge Romero, added, “Cloud computing has gained momentum and credibility…in this perspective, standardization is seen as a strong enabler for both investors and customers, and can help increase security, ensure interoperability, data portability and reversibility.”

The report concludes by recommending that the European Commission task ETSI with producing an updated version of the report in 12 to 18 months, given that standardization is likely to mature significantly over this period.

LIBE Publishes Amendments to Draft Proposal for a Network and Information Security Directive

This post was written by Cynthia O'Donoghue.

The Committee on Civil Liberties, Justice and Home Affairs (LIBE) of the European Parliament has published the latest draft of the proposed Network and Information Security (NIS) Directive (the ‘Directive’), following a series of amendments by MEPs. The proposal for the Directive was first published by the European Commission on 7 February 2013 as part of the EU Cyber Security Strategy (see our previous client alert). Recital 30(a) of the latest draft estimates that cybercrime causes losses of €290 billion each year, while Recital 31(b) states that 1.8% of EU citizens have been victims of identity theft and 12% have been victims of online fraud. These figures only reinforce the argument that the need for a coordinated EU security strategy is greater than ever.

However, the UK’s Information Commissioner’s Office previously criticised the proposed draft Directive (see our previous blog), specifically the provisions governing data breach notifications. The ICO was particularly reluctant to take on the responsibility of becoming the UK’s national competent authority (NCA) to handle a potential abundance of notifications concerning network information security incidents, unrelated to personal data, in which it has no expertise or experience. The UK Government Department for Business, Innovation & Skills was similarly critical following an impact assessment, which revealed the extortionate costs that will be disproportionately imposed on organisations to comply with the proposed Directive (see our previous blog).

The latest draft from the European Parliament includes a series of new amendments, in particular the following:

  • The obligation for each Member State to nominate an NCA responsible for coordinating NIS issues remains, with the additional obligation to establish a cooperation network to share information and ensure harmonised implementation of the Directive
  • Each Member State must set up at least one Computer Emergency Response Team (CERT) to be responsible for handling incidents
  • Organisations must consider protection of their information systems as part of their ‘duty of care’
  • Organisations must implement appropriate levels of protection against reasonably identifiable threats and areas of vulnerability, the standard for which will differ depending on the nature of risk for each organisation
  • Member States will not be prevented from adopting provisions ensuring a higher level of security than that required under the Directive, though measures that conflict with, or fall short of, the minimum requirements enshrined in the Directive will not be permissible
  • Each Member State will be required to draft a national NIS strategy within 12 months of the adoption of the Directive
  • The threshold which triggers notification is to be defined in accordance with ENISA technical guidelines on reporting incidents for Directive 2009/140/EC
  • Each Member State will be obliged to notify the relevant competent authority of both incidents and threat information having an impact on the security of the core services they provide. Notification must be complete and made without undue delay.
  • Organisations will be obliged to report and disclose any incidents involving their organisation in their annual business report
  • The penalties under Article 17 will only be imposed in circumstances of gross negligence or an organisation’s intentional failure to fulfil any obligations under the Directive

However, perhaps the most significant amendment to note is that which states implementation of the Directive will be postponed until after the anticipated reform of the EU data protection framework, upon adoption of the General Data Protection Regulation. Judging from recent comments from the European Commission, this could be a long time coming.

Full Speed Ahead for EU Data Protection Reform

This post was written by Cynthia O'Donoghue.

Coinciding with ‘Data Protection Day’ on 27 January 2014, the European Commission released a memorandum confirming the status of the anticipated reform of the European data protection framework. The promised overhaul of the 1995 EU Data Protection Directive (95/46/EC) has certainly not been as rapid as hoped, with publication of the memorandum marking exactly two years since reform was first proposed in January 2012. Over this time, we have monitored and reported on the frustrating to-ing and fro-ing in discussions among the EU’s 28 member states, which has led reform to be significantly delayed (see our previous blog).

Eager to push forward, Vice President Viviane Reding has commented, “Europe has the highest level of data protection in the world. With the EU data protection reform, Europe has the chance to make these rules a global gold standard. The European Parliament has led the way by voting overwhelmingly in favour of these rules. I wish to see full speed on data protection in 2014.”

To finalize the reform, the European Parliament and the EU Council must separately agree their positions on the draft proposals before negotiating the final outcome. The EU Council is expected to finalize its position by mid-2014, with the aim of striking a deal with the Parliament by the end of 2014. Reform certainly seems to be a priority for the new Greek Presidency, which convened a meeting in Athens on 22 January 2014 with the European Commission, the two European Parliament Rapporteurs, Jan Philipp Albrecht and Dimitrios Droutsas, and Italy, the next EU Presidency, to agree a road map for swift data protection reform in 2014.

European Parliament spokeswoman Natalia Dasilva has commented that, as part of the April 2014 Plenary session, Parliament is expected to adopt the LIBE version of the draft regulation, which was voted through in October 2013 (see our previous blog). Key points of the LIBE draft include:

  • The right to be forgotten
  • The right to data portability
  • Explicit consent requirements
  • Notification of serious data security breaches to data subjects and supervisory authorities within 24 hours
  • One continent, one law: single pan-European law for data protection to replace current inconsistent patchwork of national laws
  • One-stop-shop: organisations will only have to deal with one single supervisory authority
  • Data protection authorities to have strong enforcement powers, including the ability to fine companies up to 2%–5% of their global annual turnover
  • Regime for notifications to supervisory authorities will be scrapped

While certain aspects of the LIBE draft remain controversial, if Parliament adopts this version it will avoid restarting the whole procedure from scratch, and will eliminate the risk that details of the text are reopened for discussion, causing even further delay.

Google To Get Grilling Before UK Courts for Covert Safari Browser Tracking

This post was written by Cynthia O'Donoghue.

High Court Judge Mr Justice Michael Tugendhat has declared in Vidal-Hall & Ors v Google Inc. [2014] EWHC 13 (QB) (16 January 2014) that U.S. corporation Google Inc. (‘Google’) will face the scrutiny of the UK courts in a privacy claim brought by three British Internet users who have formed a group known as ‘Safari Users Against Google’s Secret Tracking’ (the ‘Claimants’).

Click here to read the issued client alert.

No Harm, Big Foul: With Spokeo, Ninth Circuit Finds Willful FCRA Violation Is Sufficient for Suit, With or Without Actual Injury

This post was written by Paul Bond and Christine Czuprynski.

On February 4, the Ninth Circuit ruled that a plaintiff need not show actual harm to have standing to sue under the Fair Credit Reporting Act (FCRA); a violation of the statutory right is a sufficient injury in fact to confer standing. The case, Robins v. Spokeo, Inc., may open the door for plaintiffs to get past the motion to dismiss stage in FCRA cases, as well as potentially in other cases that involve violations of statutory rights.

Here, the plaintiff alleged that Spokeo posted inaccurate credit-related information on its website in a willful violation of the FCRA. After his original complaint was dismissed by the Central District of California for failure to allege actual or imminent harm, the plaintiff amended his complaint to include an allegation that Spokeo’s posting of false information caused harm to his prospects for employment, and caused him anxiety and stress about his lack of employment prospects. The district court denied Spokeo’s motion to dismiss the amended complaint, but reconsidered its ruling after Spokeo moved to certify an interlocutory appeal. Upon reconsideration, the district court dismissed Robins’ complaint because Robins lacked Article III standing.

In reversing the district court’s ruling, the Ninth Circuit found that the FCRA cause of action does not require a showing of actual harm when the suit alleges a willful violation. The court agreed with a 2009 Sixth Circuit case, Beaudry v. Telecheck Services, Inc., that violations of statutory rights created by FCRA are the kind of de facto injuries that Congress can elevate to legally cognizable injuries. The court found that Robins satisfied the two constitutional limitations on congressional power to confer standing – he alleged that Spokeo violated his specific statutory rights, not just the rights of other people, and his personal interests in handling his credit information were individualized rather than collective. By surviving the motion to dismiss, Robins can now focus on the merits of his claim.

Many class actions arising from the loss or theft of financial information are pleaded under the FCRA, even where the status of the defendant as a consumer reporting agency is far from clear. In addition, though the Robins v. Spokeo case – and the Beaudry case it cites – are specific to the FCRA, the decision could potentially implicate standing arguments in cases alleging other statutory violations.

Google Exposed as in Breach of Dutch Data Protection Law

This post was written by Cynthia O'Donoghue.

The Dutch data protection authority, the College Bescherming Persoonsgegevens (CBP), has released a report following a seven-month investigation examining Google’s changes to its privacy policy. CBP’s report condemns Google for violating Dutch data protection law, the Wet bescherming persoonsgegevens (Wbp).

In March 2012, Google controversially changed its privacy policy (GPP2012) to allow the combination of data collected from all of its services (including Google Search, Google Chrome, Gmail, Google DoubleClick advertising, Google Analytics, Google Maps and YouTube, as well as cookies via third-party websites). Most significantly, CBP found that Google failed to demonstrate that adequate safeguards had been put in place to ensure that the combination of data in this manner was limited to what was strictly necessary, and that Google was therefore in breach of Article 8 Wbp.

CBP also found that, in breach of Articles 33 and 34 Wbp, GPP2012 failed to provide adequate information about Google’s identity as data controller, the types and extent of data collected, or the purposes for which Google needs to combine this data. GPP2012 states that the purpose of its data-processing activities is ‘the provision of the Google service’. CBP found this statement to be ambiguous and insufficiently specific. CBP held that, without any legal ground for processing, Google had no legitimate purpose to collect data in this manner and was therefore in breach of Article 7 Wbp.

Furthermore, and specifically in relation to Google’s data-processing activities associated with tracking cookies, CBP declared Google in breach of Article 11.7a of the Dutch telecommunications act Telecommunicatiewet (Tw), which requires unambiguous consent. CBP found that Google failed to offer any prior options to consent, reject or later opt out of such data-processing activities. CBP reiterated it was insufficient for Google to claim that acceptance of its general terms of service and privacy policy amounted to consent.  

Jacob Kohnstamm, CBP Chairman, commented, “Google spins an invisible web of our personal data, without consent. That is forbidden by law.” In response, Google commented, “Our privacy policy respects European law and allows us to create simpler, more effective services…We have engaged fully with the Dutch DPA throughout this process and will continue to do so going forward.”

California Senate Passes SB 383 Expanding The Song-Beverly Credit Card Act to Online Transactions of Downloadable Content

This post was written by Lisa Kim and Jasmine Horton.

On January 30, 2014, the California Senate approved SB 383, which amends the Song-Beverly Credit Card Act (Song-Beverly Act) to apply to online credit card transactions of electronic downloadable content (e.g., music, videos). Originally crafted to apply to all online credit card transactions, the bill has been resurrected in pared-down form from its death in the Senate last May.

The revised SB 383 allows online merchants to collect personal information, such as zip codes and street addresses, in connection with online credit card transactions of electronic downloadable products, provided that the information is: (1) used only for fraud detection and prevention purposes, (2) destroyed after use, and (3) not shared unless obligated by law to do so. The bill also allows for the collection of additional personal information only if the consumer elects to provide it, and if s/he is informed of the purpose and intended use of the requested information, and has the ability to opt out before the online transaction is complete.

The Song-Beverly Act, as it currently stands, prohibits merchants from asking for any personal identification information, other than a form of personal identification (e.g., driver’s license), in order to complete a credit card transaction. While there are specific exceptions to this rule, such as allowing zip codes at gas pumps and personal information when it's incidental to the transaction (i.e., for shipping and delivery purposes), it is unclear whether such prohibitions apply to online transactions where there is no actual human interaction. Indeed, in February of last year, the California Supreme Court held that Song-Beverly did not apply to online transactions involving downloadable products. See Apple Inc. v. Superior Court, 56 Cal.4th 128 (2013).

SB 383 is in direct response to the Apple case, but given its narrow application to just downloadable products, it still does not answer the question of whether the Act applies to other online transactions, such as those where the product is mailed to the consumer or picked up in the store. Many trial courts are holding that it does not, and plaintiffs are challenging these decisions in the appellate courts. See e.g., Salmonson v. Apple, Cal. Court of Appeals, Case No. B253475 (appealing court decision that Song-Beverly Act did not apply to online transactions picked up at store); Ambers v. Buy.com, 9th Circuit Case No. 13-55953 (appealing court’s decision that Song-Beverly Act did not apply to online purchase shipped to customer). Arguably, since the Legislature had the opportunity in the original bill to apply the Act to all online transactions and yet chose not to do so, online merchants may have some additional legislative history to assist them in upholding these rulings.

We will be keeping our eyes on this bill as it moves through the Assembly. It will be interesting to see whether the pending appeals impact the development of this legislation, and vice versa.

China Drafts Rules on Administration of Personal Health Data

This post was written by Cynthia O'Donoghue and Zack Dong.

For the first time in China, draft measures for the administration of personal health data have been introduced by the National Health and Family Planning Commission (NHFPC). The NHFPC released the draft November 19, 2013, and invited public commentary on its website.

Under the measures, ‘personal health information’ is broadly defined to include:

  • Population information (including family composition and family planning)
  • Electronic health archives (health records)
  • Electronic medical records (generated by medical personnel)
  • Other information generated for management and administration of health institutions

The main requirements of the rules are:

  • Only approved health and family planning institutions may collect personal health information to the limited extent required to carry out their duties and responsibilities
  • Health data cannot be collected or used for commercial purposes
  • Individuals must be informed of the purpose for collection and their consent must be obtained
  • Amending, deleting, duplicating or disclosing health data without consent of the data subject is not permissible
  • Cross-border transfers are restricted
  • Health data shall not be used for purposes beyond those indicated at the time of collection without authorisation
  • Storing personal health information in any server located outside of China is prohibited

The rules will take immediate effect upon final publication. However, the measures fail to provide for any fine or sanction for violation of the rules; it therefore remains to be seen how effective they will be in practice.

LIBE Committee Report on U.S. Surveillance Activities Calls for an End to EU-U.S. Data Transfers

This post was written by Cynthia O'Donoghue.

The recently leaked LIBE Committee draft report on surveillance activities signals a dim future for the international free flow of data in the eyes of the European Parliament. The report laments the recent revelations by whistle-blowers about the extent of U.S. mass surveillance activities, which have profoundly shaken trust between the EU and the United States. LIBE argues that the magnitude of blanket data collection goes beyond what would reasonably be expected to counter terrorism and other security threats. LIBE condemns the deficiencies of international treaties between the EU and the United States, and the inadequate checks and balances in place to protect the rights of EU citizens and their personal data.

LIBE proposes a controversially drastic solution to the vulnerabilities exposed by NSA surveillance activities in the United States. Contrary to the ideal of achieving the international free flow of data in our digital society anticipated by European data protection reform, LIBE proposes to shut down all trans-Atlantic data flows, effectively isolating Europe.

Critics have argued that the following measures proposed by LIBE are wholly disproportionate and unrealistic:

  • EU member states and U.S. authorities should prohibit blanket mass surveillance activities and the bulk processing of personal data.
  • EU and U.S. authorities should take appropriate steps to revise legislation and existing treaties to ensure that the rights of EU citizens are protected.
  • The United States should adopt the Council of Europe’s Convention 108 with regard to the automatic processing of personal data.
  • The Commission Decision 520/2000, which declared the adequacy of Safe Harbor as a mechanism for EU-U.S. transfers, should be suspended, and all transfers currently operating under this mechanism should stop immediately.
  • The adequacy of standard contractual clauses and BCRs in the context of mass surveillance should be reconsidered, and all transfers of data currently authorised under such mechanisms should be halted.
  • The status of New Zealand and Canada as adequate protection countries for data transfers should be reassessed.
  • The adoption of the whole Data Protection Package for reform should be accelerated.
  • The establishment of the European Cloud Partnership must be fast-tracked.
  • A framework for the protection of whistle-blowers must be established.
  • An autonomous EU IT capability must be developed, including ENISA minimum security and privacy standards for IT networks.
  • The Commission must present an EU strategy for democratic governance of the Internet by January 2015.
  • EU member states should develop a coherent strategy with the UN, including support of the UN resolution on ‘the right to privacy in the digital age’.

The report concludes by highlighting a priority plan with the following action list:

  • Adopt the Data Protection Package for Reform in 2014
  • Conclude an EU-U.S. Umbrella Agreement ensuring proper redress mechanisms for EU citizens in the event of data transfers to the United States for law enforcement
  • Suspend Safe Harbor mechanism and all data transfers currently in operation
  • Suspend data flows authorised on the basis of contractual mechanism and Binding Corporate Rules
  • Develop a European Strategy for IT independence

Critics have condemned LIBE’s report as a step backwards, and suggest it should be considered as a call for action rather than a realistic solution.

A new "target" on their backs: Target's officers and directors face derivative action arising out of data breach

This post was written by David Z. Smith, Christine Z. Czuprynski, Carolyn H. Rosenberg and J. Andrew Moss.

In the wake of its massive data breach, Target now faces a shareholder derivative lawsuit, filed January 29, 2014. The suit alleges that Target’s officers and directors breached their fiduciary duties to the company by ignoring warning signs that such a breach could occur, and by misleading affected consumers about the scope of the breach after it occurred. Target already faces dozens of consumer class actions filed by those affected by the breach, putative class actions filed by banks, federal and state law enforcement investigations, and congressional inquiries.

This derivative action alleges that Target’s officers and directors failed to comply with internal processes related to data security and “participated in the maintenance of inadequate cyber-security controls.” In addition, the suit alleges that Target was likely not in compliance with the Payment Card Industry (PCI) Data Security Standards for handling payment card information. The complaint goes on to allege that Target has been damaged by having to expend significant resources to: investigate the breach, notify affected customers, provide credit monitoring to affected customers, cooperate with federal and state law enforcement agency investigations, and defend the multitude of class actions. The derivative action also alleges that Target has suffered significant reputational damage that has directly impacted the retailer’s revenue.

Target announced the breach December 18, 2013, stating that 40 million credit and debit card accounts may have been affected, and notified its customers via email shortly thereafter. Though PINs were not thought to have been part of the breach, on December 27, Target announced that encrypted PINs had also been accessed. In January, the retailer began offering credit monitoring to affected individuals. On January 10, 2014, Target announced that it uncovered a related breach of customer information – name, address, phone number, and/or email address – for up to 70 million customers. With that announcement, many news outlets are reporting that the total number of affected individuals is 110 million.

This lawsuit is part of a growing trend of derivative and securities fraud complaints based on alleged lack of internal controls over data security and privacy that have been filed against companies like Google, Heartland Payment, ChoicePoint, TJX, and Sony. We previously blogged about the Google derivative suit here.

The prevalence of these suits highlights the fact that insurance is an important protection that should not be overlooked. What follows are key Rules for the Road:

  • Derivative suits against directors and officers are typically covered under a D&O policy. However, other relevant policies to review may include cyberliability/data privacy, professional liability (E&O) coverage, and fiduciary liability (FLI) coverage (if the company’s employee benefit plans allow investment in the company’s own securities).
  • Notice should be given timely to all primary and excess insurers pursuant to the policy provisions.
  • D&O policies typically provide that the insureds must defend the claim, subject to obtaining the insurer’s consent to the defense arrangements. Accordingly, it is important to obtain the insurer’s consent to proposed defense arrangements; such consent typically may not be unreasonably withheld.
  • Potential exclusions or other terms and conditions impacting coverage should be analyzed. Some may apply, if at all, only to a portion of a claim. Others may not apply to defense costs, and others may not apply unless and until there is a “final adjudication” of the subject matter of the exclusion. It is important to carefully review the coverage defenses raised, and push back on the carriers’ coverage challenges.
  • If settlement is being considered, review the policies’ provisions regarding cooperation, association in the defense and settlement of the case, and requirements to obtain the insurer’s consent to a settlement. Carefully review coverage for all components of a settlement, including settlement amounts, plaintiffs’ attorneys’ fees, interest, and defense costs.
  • Review the policy’s dispute-resolution provisions so that in the event of a coverage challenge, the insureds understand whether there is a policy requirement or option to mediate or arbitrate. Consider the provisions in excess policies as well.

Though it is tempting to conclude that Target is being attacked from all sides – including this most recent attack from a shareholder – because of the size of the breach, these kinds of responses from consumers, banks, regulatory agencies, legislative bodies, and shareholders are becoming all too common in the aftermath of many security breaches. It is an important reminder of the need for strong data security, internal controls, insurance protection, and compliance with all relevant processes and procedures.

ENISA Publishes Report & Good Practice Guide on Government Cloud Deployment

This post was written by Cynthia O'Donoghue.

The EU Agency for Network and Information Security (ENISA) announced in a press release that it has produced a report titled ‘Good Practice Guide for Securely Deploying Governmental Clouds’, which analyses the current state of play regarding governmental Cloud deployment in 23 countries across Europe, categorising each country as an “Early adopter”, “Well-Informed”, “Innovator” or “Hesitant”.

A high-level summary of the results for each category is as follows (country-specific analysis is available in full in the report):

  • Early adopters: UK, Spain and France. These countries have a Cloud strategy in place and have taken steps to implement the governmental Cloud
  • Well-Informed: The Netherlands, Germany, Republic of Moldova, Norway, Ireland, Finland, Slovak Republic, Belgium, Greece, Sweden and Denmark. These countries have a strategy but are yet to take steps to implement the governmental Cloud
  • Innovators: Italy, Austria, Slovenia, Portugal and Turkey. These countries do not have a Cloud strategy, though they may have a digital agenda that considers adoption of Cloud computing, and already have some Cloud services running based on bottom-up initiatives. Cloud implementation is forthcoming but will need to be supported by national/EU-level regulation
  • Hesitants: Malta, Romania, Cyprus and Poland. These countries are planning to implement a governmental Cloud in the future to boost competitive business, but currently have no strategy or Cloud initiatives in place

The report also sets out 10 recommendations for the secure development of governmental Clouds. These include:

  1. Support the development of an EU strategy for governmental Clouds
  2. Develop a business model to guarantee sustainability, as well as economies of scale, for government Cloud solutions
  3. Promote the definition of a regulatory framework to address the locality problem
  4. Promote the definition of a framework to mitigate the loss-of-control problem
  5. Develop a common SLA framework
  6. Enhance compliance with EU and country-specific regulations for Cloud solutions
  7. Develop a certification framework
  8. Develop a set of security measures for all deployment models
  9. Support academic research for Cloud computing
  10. Develop provisions for privacy enhancement

The Executive Director of ENISA, Professor Udo Helmbrecht, commented, “This report provides the governments the necessary insights to successfully deploy Cloud services. This is in the interest of both the citizens, and for the economy of Europe, being a business opportunity for EU companies to better manage security, resilience, and to strengthen the national cloud strategy using governmental Clouds.”

EU Nominates Expert Group To Develop Standard Cloud Computing Contract

This post was written by Cynthia O'Donoghue.

The European Commission announced that the European Cloud Partnership facilitated a meeting of an expert group of lawyers, cloud service providers and customers on November 20, 2013, to “cut through the jungle of technical standards on cloud computing” by setting down safe and fair terms and conditions, and to develop a template contract for cloud computing in accordance with the 2012 European Cloud strategy, ‘Unleashing the Potential of Cloud Computing in Europe’.

The hot topics on the agenda:

  • Data preservation after termination of the contract
  • Data disclosure and integrity
  • Data location and transfer
  • Ownership of the data
  • Direct and indirect liability
  • Change of service by cloud providers
  • Subcontracting

Commission Vice President Viviane Reding commented, “The group's aim is to provide a balanced set of contract terms for consumers and small to medium-sized businesses to support them to use Cloud computing services with more confidence.”

The outcome of the meeting of the experts will be produced in a report due to be published by the Commission in early 2014.

European Commission Aims for Europe to be World's Leading 'Trusted Cloud Region'

This post was written by Cynthia O'Donoghue.

Building on the European Cloud strategy ‘Unleashing the Potential of Cloud Computing in Europe’, released in 2012, the European Commission has released a memo to foster greater support for cloud computing services in Europe, with the ambition for Europe to become the world’s leading trusted cloud region and a harmonious single market for cloud computing known as ‘Fortress Europe’.

The Commission calls for faster, more widespread adoption of cloud computing to improve productivity in the European economy, despite recent doubts about cloud security in the context of revelations about PRISM and other surveillance activities. To allay concerns over security, the Commission established the European Cloud Partnership Steering Board. Furthermore, to restore trust in cloud services, the Commission calls for greater transparency, specifically from government bodies.

The Commission highlights that the recent proposal for a new EU data protection regulation, scheduled to be adopted in 2015, will provide a uniform legal basis for the protection of personal data across Europe. Furthermore, the European Telecommunications Standards Institute has been working with ENISA and the Select Industry Group to develop EU-wide voluntary certification schemes to help cloud computing suppliers demonstrate to customers that they adhere to high standards of network and information security.

The Commission concludes Europe can pride itself on high standards for data protection and data security, and be reassured that this provides the strong foundation for secure cloud computing. The memo therefore implores Europe to ‘embrace the potential economies of scale of a truly functioning EU-wide single market for cloud computing where the barriers to free data-flow around Europe would be reduced providing a massive boost to competitiveness.’

Mexican Data Protection Authority Issues New Data Security Guidelines

This post was written by Cynthia O'Donoghue.

The Mexican data protection authority, the Institute of Access to Information and Data Protection (the IFAI), has issued data security guidelines for businesses to ensure measures are implemented to comply with the data security provisions of the Mexican data protection law, the Federal Law on the Protection of Personal Data in the Possession of Private Parties (the Federal Law).

Mexico’s Data Protection Secretary, Alfonso Onate-Laborde, commented, “Although the Mexican Data Protection Law required companies to implement a minimal set of security measures by 21 June 2013, many companies have not done so and stay at a low level of compliance with the rules. The Guidelines will provide useful advice for companies on how to implement security rules into their operating processes.”

To ensure compliance with Article 19 of the Federal Law in particular, the IFAI guidelines recommend that companies adopt a Safety Management System of Personal Data based on the four-step ‘Plan-Do-Check-Act’ process (the PDCA cycle), which can be summarised as follows:

  1. Plan - identify key security objectives, examine data flows within the organisation and conduct a risk analysis
  2. Do - implement the necessary policies, procedures and plans to help achieve data security objectives
  3. Check - audit and evaluate whether policies, procedures and plans are achieving security objectives
  4. Act - take corrective action and other remediation measures to continually improve security, including training relevant personnel

While adoption of the guidelines is voluntary, companies are warned that the IFAI has the power to issue fines of up to $3 million to penalise incidents involving data security breaches. The IFAI is also set to hire third-party contractors to conduct data security inspections, reinforcing the increasingly punitive enforcement stance it has taken in recent months, exemplified by the €1 million fine against Banamex, the Mexican division of Citibank.

Alfonso Onate-Laborde commented, “An increasing number of Mexican companies are taking affirmative steps to improve their data security, realising there is no more time left to postpone compliance...the IFAI will focus on enforcement and conduct data security audits of companies to determine compliance with the guidelines.”

 

Setting Higher Standards for Payment Card Data Security

This post was written by Cynthia O'Donoghue.

To enhance security standards protecting customer payment data in the context of increasing e-commerce, the Payment Card Industry (PCI) Security Standards Council has announced the release of version 3.0 of the Payment Application Data Security Standard (PA-DSS) and version 3.0 of the PCI Data Security Standard (PCI-SS), both of which become effective from 1 January 2014. The package of standards sets key requirements for the storage and processing of customer payment card data to prevent cardholder data security breaches.

Details of the changes from version PCI-SS 2.0 to 3.0 can be read here. In summary, the new key requirements are:

  • Evaluate evolving malware threats for any systems not considered to be commonly affected
  • Combine minimum password complexity and strength requirements into one, with increased flexibility for alternatives
  • For service providers with remote access to customer premises, use unique authentication credentials for each customer
  • Where other authentication mechanisms are used (e.g., physical security tokens, smart cards or certificates), these must be linked to an individual account and ensure only the intended user can gain access
  • Control physical access to sensitive areas for onsite personnel, including a process to authorize access, and revoke access immediately on termination
  • Protect devices that capture payment card data via direct physical interaction with the card, from tampering and substitution
  • Implement a methodology for penetration testing, including testing any segmentation methods used to isolate cardholder data
  • Implement a process to respond to any alerts generated by the change detection mechanism
  • Maintain information about which PCI DSS requirements are managed by each service provider
  • For service providers, provide a written acknowledgement to customers of their responsibilities for the security of cardholder data

Full details of the updates to the PA-DSS can be read here. In summary, the new requirements include:

  • Payment application developers must verify the integrity of source code during the development process
  • Payment applications must be developed according to industry best practices for secure coding techniques
  • Payment application vendors must incorporate risk assessment techniques into their software development process
  • Application vendors must provide release notes for all application updates
  • Vendors with remote access to customer premises for maintenance must use unique authentication credentials for each customer
  • Organisations must provide information security and PA-DSS training to vendor personnel with PA-DSS responsibility annually

The Payment Card Industry (PCI) Security Standards Council commented that the package of standards “will help organisations make payment security part of their business-as-usual activities by introducing more flexibility, and an increased focus on education, awareness and security as a shared responsibility.”

Organisations should be reminded that failure to adhere to the PCI standards could result in enforcement by the ICO. In August 2011, the ICO made an example of retailer LUSH following a security lapse that allowed hackers to access the payment details of 5,000 customers of the company’s website, 95 of whom became victims of card fraud. As a consequence, the ICO required LUSH to sign an undertaking to ensure that future customer credit card data is processed in accordance with the PCI-SS.
 

New UK Cyber Security Principles Released

This post was written by Cynthia O'Donoghue.

Back in 2011, the Cabinet Office launched a cyber security strategy outlining steps the UK Government would take to tackle cyber crime by 2015. The National Cyber Security Programme invested £650 million to support the strategy, ‘Protecting and Promoting the UK in a Digital World’. Measures proposed by the strategy included:

  • Reviewing existing legislation, e.g., the Computer Misuse Act 1990, to ensure it remains relevant and effective
  • Pioneering a joint public-private sector cyber security hub to allow the exchange of data on cyber threats across sectors and manage the response to cyber attacks
  • Seeking to agree a voluntary set of guiding principles with Internet Service Providers
  • Developing kite marks for approved cyber security software
  • Encouraging UK courts to enforce sanctions for online offences under Serious Crime Prevention Orders
  • Creating a new national cyber crime capability as part of the National Crime Agency
  • Creating a single reporting system for cyber crime using the Action Fraud portal run by the National Fraud Authority
  • Strengthening the role of Get Safe Online to raise awareness and education about online security

In line with the third proposal of the strategy, the Department for Business, Innovation and Skills has now issued new guiding principles developed and agreed between government and leading Internet Service Providers (ISPs), such as ISPA, BT, Sky, Talk Talk, Vodafone and Virgin Media, to promote cyber security and protect ISP customers from online threats. 

The first section of the principles proposes that ISPs must:

  • Increase customer awareness of cyber security issues (including by directing them to Get Safe Online and other national campaigns), and educate customers on basic online threats, how to practise safe online behaviour, and how to spot cyber crime and report it through Action Fraud
  • Empower customers to protect themselves from online threats by providing tools such as anti-virus software, anti-spyware, anti-spam, malware protection or firewall protection
  • Provide clear mechanisms to encourage customers to report compromises or threats to minimise the impact of cyber threats

The second section provides that the government must:

  • Continue to make businesses aware of cyber threats and educate them on how to respond through guidance, e.g., the Cyber Security Guidance for Business issued in 2012 and the Small Business Guidance for Cyber Security issued in 2013
  • Advise nationally on improving cyber security, e.g., Get Safe Online
  • Increase enforcement of online threats through the national crime capability of the National Crime Agency

The guidelines conclude by highlighting cyber security issues the government and ISPs will partner to resolve jointly going forward to achieve the aims of the UK cyber security strategy.
 

UK Government to Adopt New Cyber Security Standard

This post was written by Cynthia O'Donoghue.

On 28 November 2013, the UK Government’s Department for Business, Innovation and Skills (BIS) announced, following its report on “UK Cyber Security Standards”, that a new cyber security standard is to be created based on the ISO27000 series. The new standard will be created after BIS reviews the more than 1,000 separate cyber security standards currently in operation globally. The announcement came as a surprise, as it had previously been suggested that the government would endorse an existing standard; however, BIS concluded that no single, ‘one size fits all’ standard fully met its requirements for effective cyber risk management.

The main findings of the BIS report were:

  • 52% of organisations at least partially implement a standard relevant to cyber security, but only 25% implement it fully, and of those businesses only 25% seek external certification of compliance with those standards
  • 7/10 was the average level of importance placed on cyber security certification, with 10/10 representing the highest importance
  • Cost is the main barrier to adoption of cyber security standards and investment in external certification with no financial incentive to invest
  • Only 35% of organisations plan an increase in cyber security spending
  • 48% of organisations implemented new policies to mitigate cyber security risks
  • 43% conducted cyber security risk assessments and impact analysis
  • 25% of organisations believe standards are not important at all

BIS called for support in 2013 for a new cyber security standard, and the business groups that responded overwhelmingly supported the ISO27000 series of standards. However, BIS has rejected a straight adoption of those standards in light of flaws it has identified in that framework. To this end, BIS commented, “ISO27000-series of standards have perceived weaknesses in that implementation costs are high and that due to their complexity SME’s sometimes experience difficulties with implementation…the fact that in previous versions businesses were free to define their own scope for which area of their business should be covered by the standard can also make auditing ineffective and inconsistent.” Despite these flaws, the report proposes that a new implementation-profile security standard be based on key ISO27000-series standards, and that this will be the government’s preferred standard. So far, BIS has support for the new standard from key industry players such as BAE Systems, BT, Lockheed Martin, Ernst & Young, GlaxoSmithKline and the British Bankers’ Association.

To support the new profile standard, the government intends to create a new assurance framework whereby organisations that have passed their audit will be able to publicly state that their cyber risk management satisfies the government's preferred standard. This will act as an accreditation for businesses to promote themselves and assure others that they have achieved a certain level of cyber security.

BIS anticipates that the new standard will be launched in early 2014. BIS commented, “This will do more than fill the accessible cyber hygiene gap that industry has identified in the standards landscape…it will be a significant improvement to the standard currently available in the UK. We view the use of an organisation standard for cyber security as enabling businesses and their clients and partners to have greater confidence in their own cyber risk management, independently tested where necessary.”

ENISA Releases Reports on EU Cyber Security Measures

This post was written by Cynthia O'Donoghue.

ENISA, the European Union Agency for Network and Information Security, has released a series of reports and guidance tackling the topic of cyber security.

  • ENISA Threat Landscape (ETL) Report 2013
    The report reviews more than 250 incidents of cyber attacks that took place in 2013. A table in the report analyses fluctuations in the top 10 threat trends, including trojans, code injection, exploit kits, botnets, identity fraud, phishing, spam and data breaches. The findings show that threat agents have increased the sophistication of their attacks, with migration to the mobile ecosystem and the emergence of a digital battlefield relating to big data and the Internet. To counter these threats, ENISA highlights the successes achieved by cyber security officials and law enforcement authorities in 2013, as well as increases in reporting of attacks, which facilitates greater threat analysis. The report ultimately calls for greater sharing of security intelligence, speed in threat assessment, and elasticity in IT architectures to ensure they remain robust against innovative cyber tactics.
  • Updates to Cyber Security Strategies Map
    ENISA lists the countries around the world that have adopted National Cyber Security Strategies (NCSS). The latest countries to adopt an NCSS include Belgium, the Netherlands, Poland, Slovenia and Spain, while Montenegro, Ghana and Thailand are reported to be planning to develop strategies soon.
  • Feasibility Study on European Information Sharing and Alerting System (EISAS)
    EISAS is meant to increase awareness about IT security issues among citizens and SMEs, and foster a collaborative information-sharing network to improve the capability to respond to network security threats. In 2009, ENISA published the EISAS Roadmap with a deployment plan to implement this concept.  In 2012, ENISA also published a Basic Toolset for the large-scale deployment of EISAS across Europe by 2013. The feasibility study is the last stage in the implementation of EISAS. The study includes a three-year action plan for deployment, and examines which entities could commit to leading the EISAS network, what operational measures would need to be implemented, and the funding required to ensure the sustainable success of the infrastructure.
  • Report on supervisory control and data acquisition (SCADA) programs and Guide on Mitigating Cyber Attacks On ICS
    Much of Europe’s critical infrastructure is controlled by SCADA systems, a subgroup of Industrial Control Systems (ICS). The report recognises that in the past decade, SCADA technology has transformed from isolated systems into standard technologies that are highly interconnected with corporate networks. Simultaneously, SCADA systems have become increasingly vulnerable to attack. The report recommends the implementation of patch-management strategies, by way of software upgrades, to tackle this.
     
    Like SCADA technology, ICS are equally vulnerable to cyber attack and are seen as lucrative targets for intruders. The guide aims to provide good practices for entities that are tasked to provide ICS Computer Emergency Response Capabilities (ICS-CERC).
  • Report on National Roaming for Resilience to Cyber Attacks
    In the context of more than 79 incidents of network outage occurring across the EU in 2012, ENISA’s report discusses the potential for mobile roaming to be used as a resource to improve the resilience of mobile communications networks. The report also proposes recommendations to mitigate the impact of network outages, including:
    • Service prioritisation during outages
    • Open Wi-Fi as alternative solution for data connectivity
    • Establish an M2M inventory of all SIMs per service and provider to assess the possible impact and strategy in case of outage
    • Identify key people within Critical Infrastructure Services to be prepared for eventual mobile network outage
  • CERT Guidance and Updated Training Materials
    ENISA has published guidance for governments on mechanisms available to support CERTs via organisations such as TF-CSIRT TI, FIRST, the Internet Engineering Task Force, the CERT Coordination Center and the International Organisation for Standardisation. Complementary to this, ENISA has also expanded the breadth of training materials for CERTs to include 29 scenarios, such as recruitment of CERT staff, incident handling and cooperation with law enforcement agencies – all available with downloadable handbooks and toolsets and online training presentations.
     

UK Data Protection Watchdog Launches Public Consultation on Future Governance Strategy, 'A 2020 Vision for Information Rights'

This post was written by Cynthia O'Donoghue.

The UK data protection watchdog, the Information Commissioner’s Office (ICO), has launched a public consultation on its future governance strategy, the ’2020 Vision for Information Rights’. The ICO is being challenged by significant changes in the regulatory landscape triggered by the imminent reform of EU data protection law. Simultaneously, the UK regulator is facing cutbacks in grant-in-aid, resulting in a funding crunch with resources stretched to the maximum. Meanwhile, the public perception of the importance of information rights is growing; the ICO has therefore rightly recognised that it must find a way to ‘do better for less’.

The public consultation sets out the ICO’s mission to ‘uphold information rights in the public interest’, and a vision ‘to be recognised as the authoritative arbiter of information rights – a good model for regulation’. The ICO’s goal is to achieve a society in which organisations collect personal information responsibly, all public authorities are transparent, and people understand how to protect their personal information and feel empowered to enforce their information rights.

The ICO set out five aims for the next years:

  1. Educate –
    Issue further guidance for organisations; influence advice at EU level; work with other regulators and sectoral bodies to secure compliance; and embed information rights within the school curriculum
  2. Empower –
    Provide more guidance for citizens; develop privacy seals, kitemarks and accreditation schemes to make privacy rights more prominent; make reporting concerns easier with online mechanisms
  3. Enforce –
    Focus more intently on organisations that have significant breaches; collaborate with sectoral regulators in enforcement actions
  4. Enable –
    Demystify information rights to ensure data protection law is not seen as a roadblock to information sharing in the public interest
  5. Engage –
    Be up to date with developments in business and technology nationally and internationally to keep informed and to influence areas of reform

Overall, the ICO intends to be more outcome focused, prioritising only the highest information-rights risks, and reducing casework and responses to individual enquiries to give greater attention to wider compliance issues. Furthermore, it intends to engage in greater dialogue to coordinate with government policy makers, sectoral bodies and other international regulators. The ICO seems likely to restructure in order to increase reliance on such partnerships to help provide a more sustainable funding model for the future.

The public consultation is due to close on 7 February 2014. Responses can be made by completing this form and emailing it to consultations@ico.org.uk. The ICO anticipates publishing its final strategy, informed by the consultation responses, by March 2014, along with a three-year corporate plan for 2014-17.

Maximum administrative fine issued by the CNIL against Google: More to come?

This post was written by Daniel Kadar.

After almost two years of back and forth with Google, the French CNIL has, like the Spanish data protection authority before it (which imposed a €900,000 fine), sanctioned Google with a €150,000 fine, after Google refused to review its integrated platform and modify its privacy policy as requested by the Article 29 Working Party.

In addition to this fine, the CNIL has ordered Google to post a warning reflecting the sanction on Google’s French home page within eight days of the CNIL’s notification, and to keep it displayed for two days.

Google has decided to appeal the CNIL’s decision before the French Council of State (“Conseil d’Etat”), France’s highest administrative court, in order to obtain the cancellation or reversal of the decision.

One might wonder why Google is putting so much energy into trying to reverse a sanction that is “bearable” from a financial point of view. There are at least two reasons.

First, the warning to be posted is the “real” sanction – as it would be displayed to millions of Google users – so Google has little choice but to appeal the decision in order to avoid it. And since the appeal does not suspend the immediate enforceability of the sanction, Google has simultaneously filed a petition for suspension before the Conseil d’Etat.

The hearing is scheduled to take place February 6, 2014.

Second, and more importantly, this sanction could be the first step towards criminal penalties: the French criminal code provides that failure to comply with the French Data Protection Act is punishable, per infringement, by a fine of €300,000 and imprisonment of up to five years.

These criminal sanctions can be ordered only after the CNIL has first issued an “administrative” sanction, which it alone has the power to do. That is what happened earlier this month.

Note that, according to article 131-38 of the French Criminal Code, if a legal person is convicted, the amount of the fine is multiplied by five, and doubled again in the case of recidivism.

On that basis, Google could therefore face a fine of €1.5 million, or even €3 million in the case of recidivism, per infringement.

Here lies the real financial threat: now that the CNIL has ordered this first “administrative” fine, a criminal case could follow.

The legal proceedings against Google in France may only have commenced.

 

The implementation of the French transparency regulation: first good news?

This post was written by Daniel Kadar.

French health care companies have faced hard times over the past months with their new transparency obligations. They have been required to declare the equivalent of 18 months (!) of agreements and benefits in a very short period of time.

They were scheduled to disclose this information to the unique state portal the French government had announced, but which ultimately was not in place in time.

As a consequence, the government issued “transitory provisions,” according to which health care companies acting in France had to disclose their information:

  • To the National Medical Association (there are seven of them…)
  • On a dedicated company website that some international companies had to set up specifically for this purpose

As the cherry on the cake, the French Medical Association set up its own template, making compliance more difficult still. Unsurprisingly, as of today, only half of the declarations transmitted comply with the French Medical Association’s template.

Things should be changing now as the unique state portal is finally up and running. After a first registration, disclosure of information should become easier.

Registration and authentication

When first connecting to the unique state portal, companies will have to register. They will be required to provide details such as information on their headquarters, company registration and contact information, as well as the procedure a health care professional (HCP) will have to follow in order to modify the displayed data. (Note that because the transparency disclosure is mandatory, no right to object is granted to the HCP as data subject, contrary to the general principles of data protection.)

After this first registration, a unique user ID/password pair will be assigned to the company.

Information will remain available on the public state portal for five years, but will be securely stored by the government for 10 years.

Disclosure of information

This new state portal seems to be more “customer friendly” as three possibilities are set up for the disclosure and transfer of data:

  • An online form can be filled in directly on the portal
  • A specific formatted document can be transferred directly to the website
  • An automatic sending through a web service can also be set up

The transmission is deemed secure and is made over an HTTPS website. The portal will in addition have to comply with the provisions of the French data protection authority (the CNIL), and prevent any indexing by external search engines.

The specific format set up for the unique state portal is quite similar to the one already established by the French Medical Association, which should largely allow companies to continue working from their previous template.

This unique state portal will be officially launched April 1, 2014 at the latest. It is already accessible to companies at the following address:

https://www.entreprises-transparence.sante.gouv.fr/flow/login.xhtml

Health care companies no longer have to add information to their own dedicated transparency websites, although those sites will have to be maintained for the data disclosed during the “transitory period.”

However, this new state portal should not make companies forget their obligations under data protection regulation: they must still inform the HCPs with whom they contract, or to whom they provide benefits, that the related data is due to be disclosed.

 

Information Rights Tribunal Rules Self Reporting Breaches To ICO Does Not Provide Immunity From Fines

This post was written by Cynthia O'Donoghue.

A judgment of the Upper Tribunal of the UK Information Rights Tribunal (the Tribunal), in Central London Community Healthcare Trust v Information Commissioner [2013] UKUT 0551 (AAC), has ruled that organisations that voluntarily report data security breaches to the ICO do not gain automatic immunity from penalty fines in relation to those breaches.

The Tribunal rejected the appeal of the Central London Community Healthcare Trust (the Trust) against an ICO decision to serve a monetary penalty notice of £90,000 in 2012. The notice was issued following a data breach in which 45 separate fax messages containing lists of palliative care inpatients – including particularly sensitive and confidential data such as medical diagnoses – were sent over a period of two months to the wrong recipient, a member of the public, instead of a hospice. While the Trust did not deny the breach, it argued the ICO was wrong to issue a monetary penalty notice on the grounds that the Trust had self-reported the breach to the ICO.

Upper Tribunal Judge Nicholas Wikeley ruled, “The logical implication of the Trust’s construction of the legislative scheme is that a data controller responsible for a deliberate and very serious breach of the DPA would be able to avoid a monetary penalty notice by simply self-reporting that contravention and co-operating with the Commissioner thereafter. Such an offender would be in a better position than a data controller acting in good faith, but unaware of a breach, who could be subject of a monetary penalty notice because a third party reported the matter to the Commissioner. Such an arbitrary outcome would necessarily undermine both the effectiveness of, and public confidence in the regulatory regime.”

Commentators have been quick to point out that, in spite of this ruling, the benefits of informing the ICO about serious data breaches continue to significantly outweigh the risks associated with being served a fine. Deputy Information Commissioner David Smith commented that the UK regulator does look favourably on companies that self-report data breaches, even though the act of reporting does not give automatic immunity from fines. Furthermore, informing the ICO directly gives organisations the chance to justify their case and have some influence over the rectification measures the ICO may impose through its enforcement regime. Self-reporting should therefore be seen as a mitigating factor that the ICO considers when determining the level of any monetary penalty notice it issues.

Regulations Released to Implement Peru's Personal Data Protection Law

This post was written by Cynthia O'Donoghue.

The Peruvian Law 29733 for Personal Data Protection (the Law) was enacted in July 2011; however, it was only recently, in May 2013, nearly two years on, that the Law’s implementing regulations were approved through Supreme Decree No. 003-2013-JUS (the Regulations). This blog provides more detail on the scope of the Regulations, of which we gave a high-level summary in our previous blog when they were first released.

The Law and Regulations will apply to any processing of data by an establishment in Peru, by a holder of a database in Peru or even where the holder of the database is not located in Peru but uses means located in Peru for the purposes of processing data. Processing for personal purposes relating to family or private life will not be regulated.

The key provisions of the Law to note are as follows:

Consent

  • Processing of personal data requires prior express and unequivocal consent of the data subject that is obtained freely without bad faith or fraud.
  • Sensitive data requires consent in writing by a handwritten signature, a fingerprint, or electronic digital signature.
  • Consent must be informed, including details of the purpose, the recipients, the database, the identity of the database owner, any intended transfers or disclosures to third parties, the consequences of providing the information or of failing to do so, and the rights available to the individual under the Law. Informing by publication of privacy policies is acceptable.
  • Children over 14 and under 18 may consent to processing without parental authority, which is compulsory for children under 14.
  • Exceptions to the consent requirement include when data is
    • related to a person’s health;
    • in the public domain;
    • related to financial solvency;
    • necessary for the execution of a contractual relationship

Notification

  • All databases must be registered with the public National Registry for the Protection of Personal Data. Any subsequent amendments to notifications require cancellation of the prior registration and submission of a new registration.

Security

  • Security measures must be established to ensure the confidentiality and integrity of stored data, implementing the following Peruvian technical standards:
    • NTP-ISO/IEC 17799: 2007 EDI Technology of Information Code of Good Practice for the Good Management of the Security of Information
    • NTP-ISO/IEC 27001: 2008 EDI Technology of Information Code of Good Practice for the Good Management of the Security of Information Requisites.

Data Transfers

  • Transfers of data within an organisational group are permitted provided there is an internal code of conduct regulating the protection of personal data with the group and processing in accordance with the Law and Regulations.
  • International transfers must be notified to the DPA, require prior consent, and can only be made to countries with levels of protection for personal data similar to that offered under the Law, where the recipient guarantees to provide the same level of protection
  • Cloud computing is permitted provided the service provider guarantees compliance with the Law and Regulations, and any subcontracting must be reported.
  • Data processors may subcontract to third parties provided an agreement is entered into and the prior consent of the database holder is obtained.

Subject Access Requests

  • A holder of a database will have the following time limits to respond to data subject requests:
    • Information request - 8 days
    • Access request - 20 days
    • Requests for correction or deletion -10 days

There will be a two-year transition period for owners of existing databases to comply with the provisions of the Law implemented by the Regulations; however, the obligation to register all databases with the DPA takes immediate effect. Violation of the Law or Regulations can result in a fine ranging from US$7,150 to US$142,000.
 

ICO Enforcement Powers Challenged as Tribunal Overturns £300,000 Monetary Penalty Notice

This post was written by Cynthia O'Donoghue.

The First Tier Tribunal (Information Rights) allowed the appeal against a monetary penalty notice of £300,000 issued by the Information Commissioner in Christopher Niebel v The Information Commissioner (EA/2012/2060), ruling that the penalty notice should be cancelled.

The monetary penalty notice had been issued against Christopher Niebel, owner of Tetrus Telecoms, for sending unsolicited ‘spam’ text messages seeking potential claims for mis-sold PPI or accident compensation. The messages were sent from unregistered SIM cards that allowed Mr. Niebel to conceal his identity as the sender. The ICO found Mr. Niebel’s actions to be in breach of the Privacy and Electronic Communications (EC Directive) Regulations 2003 (the PECR Regulations). Under Regulation 22, it is unlawful to use text messages for direct marketing unless the recipient has either asked for or specifically consented to such communications. Regulation 23 also requires that the identity of the sender be clear, and that the text contain a valid address which permits the recipient to contact the sender or request that the text messages stop. The ICO found that Mr. Niebel obtained neither permission nor consent to send the text messages, and that he withheld his name and address.

The PECR Regulations incorporate s.55A of the Data Protection Act 1998 (DPA). This section gives the ICO power to impose monetary penalties of up to £500,000 for breach of the PECR Regulations, provided the contravention is serious and of a kind likely to cause a victim substantial damage or substantial distress.

In the First Tier Tribunal, Judge NJ Warren found that the monetary penalty notice issued to Mr. Niebel overstated the nature and scale of his contravention of the PECR Regulations as involving hundreds of thousands of messages, when in fact it related to only 286 texts. Furthermore, Judge Warren did not consider the minor irritation of having to delete a spam text, or the small charge to reply ‘STOP’, enough to be likely to cause a recipient ‘substantial damage’ or ‘substantial distress’. As a result, it was held that s.55A(1)(b) of the DPA had not been satisfied, and the ICO therefore had insufficient grounds to issue the monetary penalty notice. Judge Warren accordingly decided to cancel the penalty notice.

 

Big Data Is Better, Urges EU Commissioner

This post was written by Cynthia O'Donoghue.

At Europe’s biggest digital technology event, ICT 2013, Neelie Kroes, Vice President of the European Commission responsible for the Digital Agenda, gave a speech lamenting that Europe is lagging behind the rest of the world in taking advantage of the opportunities presented by big data. Kroes recognised it would be beneficial to “put the data together…the value of the whole is far more than the sum of its parts… to create a coherent data ecosystem.”

The speech highlighted that plans under the new EU data protection legislation will ensure more public data is shared, supported by the creation of a new pan-European open data portal. To support this vision, Kroes called for a big data partnership to be formed between public and private-sector organisations, adding that “a European public-private partnership in big data could unite all the players who matter.”

Kroes was quick to point out that the benefits of big data do not have to come at the cost of privacy. She commented, “for data that does concern people, we need firm and modern data protection rules that safeguard this fundamental right…we need digital tools that help people take control of their data so that they can be confident to trust this technology.”

Overall, the EU Commissioner proposed that big data can become more than a fashionable slogan – “a recipe for a competitive Europe.”

OIG Report Indicates OCR Not Overseeing and Enforcing HIPAA Security Rule

This post was written by Nancy E. Bonifant and Brad M. Rostolsky.

A November 21, 2013 report published by the Office of the Inspector General (OIG) concluded that the Department of Health & Human Services (HHS) Office for Civil Rights (OCR) is not fully enforcing the HIPAA Security Rule, and laid out recommendations for OCR to implement. The OIG’s report also separately concluded that OCR is not in full compliance with the cybersecurity requirements in the National Institute of Standards and Technology (NIST) Risk Management Framework, to which OCR responded by describing the actions it has taken since May 2011 to address the OIG’s concerns.

Click here to read more on our sister blog, Life Sciences Legal Update.

 

Theft of Unencrypted Flash Drive Causes OCR to Issue Settlement and Corrective Action Plan for Physician Practice

This post was written by Brad M. Rostolsky and John E. Wyand.

The Department of Health and Human Services’ Office for Civil Rights (OCR) opened an investigation of Adult & Pediatric Dermatology, P.C. (APDerm) after a report was made regarding the theft of an unencrypted flash drive. To settle potential violations of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy, Security, and Breach Notification Rules, APDerm will pay $150,000 to OCR and will be required to implement a corrective action plan to address shortcomings in its HIPAA compliance program.

Click here to read more on our sister blog, Life Sciences Legal Update.

Ukrainian Data Protection Authority Sets Parameters for Cross-Border Data Transfers

This post was written by Cynthia O'Donoghue.

The Ukrainian data protection authority, the State Service of Ukraine on Personal Data Protection (The Service), has issued a letter (No.10/203-13) to clarify the permitted circumstances for transfers of data outside of Ukraine.

Ukraine is not a member of the EU and therefore has not implemented the EU Data Protection Directive 95/46/EC, including its provisions governing cross-border transfers. However, the Law of Ukraine “On Personal Data Protection” (No. 2297-VI, dated 1 June 2010) (the “OPDP Law”) is similar to the EU Directive. The Service’s letter clarifies that, under the OPDP Law, transfers of personal data outside Ukraine are permitted provided the personal data is transferred to and stored only in countries which provide an adequate level of personal data protection. The Service confirms that the following countries are deemed to provide adequate protection:

  1. The European Economic Area member states
  2. States party to the Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data
  3. States on the list adopted by a resolution of the Council of Ministers of Ukraine (publication of the list is pending)

Any transfers that fail to fall within these circumstances will require the signing of an Agreement on Personal Data Transfer in accordance with a template provided by the American Chamber of Commerce in Ukraine.

Poland's Draft Data Protection Law To Alter the Rules for Data Transfer and Data Privacy Officers

This post was written by Cynthia O'Donoghue.

A draft of Poland’s new data protection law has been released, with the potential to significantly change the rules in Poland governing international data transfers and data privacy officers.

Under existing rules, Poland is an EU member state that does not currently recognise the Standard Contractual Clauses or Binding Corporate Rules (BCRs) as sufficient legal basis for international transfers of data to third countries unable to provide an adequate level of data protection. This has meant that, to date, organisations have encountered difficulties and an administrative burden each time they transfer new categories of data for a new purpose. The only way such transfers have previously been permitted is with prior written consent of every data subject (impractical for large organisations) or with the prior consent of the Polish data protection authority GIODO (such approval taking up to six months in some cases).

The draft law proposes that GIODO’s consent will no longer be required for any international data transfers where a data controller has ensured adequate safeguards for the protection of privacy, and the rights and freedoms of data subjects by the execution of data transfer agreements, incorporating Standard Contractual Clauses approved by the European Commission. Transfers to another controller or data processor within the same group in a third country will also be permitted, where the data controller has in place BCRs approved by the Inspector General.

Regarding privacy officers, the existing law only specifies that the data controller may appoint such person, with no further specifications as to functions or requirements.

The draft law expands on the right to appoint a privacy officer, adding that privacy officers may only be appointed if they have a university education and an adequate knowledge of data protection. The functions of privacy officers will be defined as:

  • Ensuring compliance with personal data protection law
  • Checking that processing personal data complies with the rules on data protection, and preparing a compliance report for the data controller to submit to GIODO
  • Overseeing the development and updating of documentation required by data privacy law
  • Providing authorized persons who process personal data with information about the rules for processing
  • Keeping a register of databases containing personal data

GIODO must be notified of each data privacy officer appointed by an organisation. This notification will exempt the organisation from registration in respect of its databases.

UK High Court Defines Tests To Determine if Data is Personal

This post was written by Cynthia O'Donoghue.

The UK High Court was forced to re-examine the concept of ‘personal data’ in the recent case of Kelway v The Upper Tribunal, Northumbria Police and the Information Commissioner [2013] EWHC 2575 (Admin). The case involved an application for judicial review by Dr Kelway against two decisions of the Upper Tribunal refusing a request for disclosure of information by Northumbria Police, who were investigating a complaint by Dr Kelway that a district judge had committed a serious criminal offence by arranging for a tape of court proceedings to be tampered with. Dr Kelway had requested information from the district judge’s statement given to police officers investigating the alleged criminal offence. Dr Kelway contended that the references to the district judge in the statement were not biographical and did not have him as their focus, and were therefore not personal data and should be disclosed to him without the consent of the district judge. In refusing the application for disclosure, HHJ Thornton was forced to examine the concept of personal data in circumstances where the answer is not clear.
 
Section 1(1) of the Data Protection Act 1998 (DPA) defines personal data as “data which relate to a living individual who can be identified from those data, or from those data and other information which is in the possession of, or is likely to come into the possession of, the data controller and includes any expression of opinion about the individual…”

The EU Article 29 Data Protection Working Party added in their 2007 ‘Opinion on the Concept of Personal Data’ that in addition to considering the content of the data, the result of the use of the data on the individual must also be considered to determine if that data is personal.

The Court of Appeal previously held in the case of Durant v Financial Services Authority [2004] FSR 28 that in order for data to be personal data within the meaning of the DPA, “the data must go beyond the mere mention of an individual’s involvement in a matter that has no personal connotations… the data should have the individual as its focus, rather than some other person with whom they may have been involved or some transaction or event in which the individual may have had an interest.”

In this case, HHJ Thornton analysed the existing ‘tests’ to determine if data is personal data, and concluded three questions must be answered affirmatively:

  1. Is the data information which is being processed or recorded or forms part of an accessible record?
  2. Is it possible to identify a living individual from the data?
  3. Does the data relate to an individual?

HHJ Thornton recognised that, to answer questions 2 and 3, the following considerations have to be made:

  • Does the data contain biographical information rather than just record the data subject's mere involvement in a matter or event? (the Durant Test, part one)
  • Does the data have the data subject at its focus? (the Durant Test, part two)
  • Does the data relate to the individual in the sense that it is about the individual because its content refers to their identity and characteristics, and is the use of that data likely to have an impact on the individual's rights and interests? (the Working Party Test)
  • Can a living individual be identified from the data, and is that data obviously about a particular individual; or is that information used in a way to influence decisions affecting that individual; and does that data focus on the individual as the central theme, with the potential to impact that individual? (the ICO Guidance Test)

HHJ Thornton concluded that the impact of the use of the data on the individual must be considered, in addition to whether the data has the individual as its focus. The judgment therefore refused to grant Dr Kelway’s application for disclosure of the information without the consent of the district judge, on the grounds that the information constituted personal data given the impact its disclosure would have on the district judge. The judgment confirms that the Durant Test is only part of the test that must be applied to determine whether data is personal data within the meaning of the DPA.
 

Spanish Court Ruling Validates Employee Monitoring

This post was written by Cynthia O'Donoghue.

Spain’s constitutional court, the Tribunal Constitucional, made a landmark ruling in the case of Pérez González v. Alcaliber S.A. in early October, finding that companies are permitted to access and monitor employee communications via company IT resources, including emails and texts, as part of investigations into employee misconduct.

Pérez González was dismissed by Alcaliber for disseminating trade secrets to competitors. Following suspicions of wrongdoing, Alcaliber accessed Pérez González’ company emails and laptop hard drive in the presence of a notary public to confirm grounds for dismissal. Emails from 2007 and 2008 confirmed suspicions that Pérez González had disclosed information about the year’s poppy crops from his company account to a competitor of Alcaliber.

Pérez González challenged the dismissal with a claim for wrongful termination. He disputed the validity of the emails as evidence for his dismissal on the basis of his fundamental right to secrecy of communications under Article 18 of the Spanish Constitution. However, the constitutional court held that Pérez González did not have a reasonable, well-founded expectation of confidentiality when using a company email account or other workplace communications where monitoring is foreseeable. Furthermore, the company’s collective bargaining agreement clearly prohibited the use of company-owned communications networks for non-work reasons. On this basis, the constitutional court upheld the decisions of the Madrid Labour Court and the High Court of Justice affirming the dismissal.

The Tribunal Constitucional held that dismissal was not disproportionate in light of the severity of sharing confidential company information. Furthermore, the court ruled that a company must be permitted to monitor employee communications to verify well-founded suspicions of transgression where such monitoring is necessary to provide evidence to justify dismissal.

This ruling recognises that employee privacy rights must be balanced against employers’ rights to investigate employee wrongdoing, and further acknowledges that employees’ rights to privacy in the EU are not absolute.
 

Hungarian DPA Decision Redefines Concept of Data Controllers and Data Processors

This post was written by Cynthia O'Donoghue.

Hungary’s data protection authority, the National Authority for Data Protection and Freedom of Information (NAIH), recently issued a decision fining PepsiCo €5,000 for a data breach. The decision has, however, had wider repercussions, reclassifying the concept of data controllers and data processors.

In October 2012, it emerged that a Turkish hacker group had hacked into PepsiCo’s Hungarian domain, resulting in a data breach in which the personal data of 50,000 data subjects, including names, birth dates, telephone numbers and email addresses, was publicised across the Internet over a period of nine months. The data had been collected by an agency, Createam, in connection with a promotional game, ‘See the World in 3D’. Reacting to complaints, PepsiCo deleted all the data and implemented notice and remediation procedures to mitigate the breach. Through the Internet Corporation for Assigned Names and Numbers (ICANN), PepsiCo was able to locate the hacker and instigate criminal proceedings against the group. Despite the proactive steps PepsiCo had taken, NAIH instigated an investigation. NAIH argued that PepsiCo was in breach of Section 7 of the Hungarian data protection law, Act No. CXII of 2011 on informational self-determination and freedom of information, which requires a data controller to carry out data processing operations in compliance with the Act, including the implementation of adequate safeguards and security measures to protect data against unauthorized access, alteration, deletion, accidental loss or public exposure.

PepsiCo attempted to argue that, based on the definitions in its contract with Createam and the fact that all processing activities were outsourced to the agency, Createam was the data controller liable for the breach. To reinforce the argument, PepsiCo relied upon the provisions of EU Directive 95/46/EC, which defines a data controller as ‘the natural or legal person, public authority, agency or any other body which alone or jointly with others determines the purposes and means of the processing of personal data.’ Following this interpretation, PepsiCo argued that Createam was the entity collecting all the data, to which PepsiCo had no access, and should be considered the data controller on this basis. Createam successfully countered that PepsiCo was the data controller, given that the agency received all its instructions in relation to the processing from PepsiCo, and the breach itself concerned a website hosted by PepsiCo’s hosting provider, which had no legal relationship with the agency.

The case therefore prompted reconsideration of the definitions of data controller and data processor. NAIH turned to the Article 29 Data Protection Working Party’s Opinion 1/2010 on the concepts of ‘controller’ and ‘processor’, and found that being a data controller is determined by the factual circumstance that an entity has chosen to process personal data for its own purposes; irrespective of the contract between the parties, PepsiCo was deemed the data controller. Similarly, NAIH confirmed that Createam was the data processor on the basis that it was a separate legal entity processing data on a controller’s behalf. Consequently, PepsiCo was found liable for the breach of Section 7 of the Hungarian law due to its lack of security measures, resulting in the €5,000 fine.

The significance of this decision lies in the fact that, when deciding where liability fell between the parties, NAIH, while adopting the recognised definitions of controller and processor under EU law, went further by reserving the right to reclassify the parties’ roles in light of the context and purposes of the data processing, regardless of how the parties defined their roles by contract.
 

ECJ Rules Exceptions To Obligation To Notify Data Subjects of Processing Are Optional

This post was written by Cynthia O'Donoghue.

The European Court of Justice (ECJ) ruled in Institut professionnel des agents immobiliers (IPI) v Englebert (Case C-473/12, 7 November 2013) that EU member states have the option, but not an obligation, to transpose the list of exceptions provided under Article 13 of the EU Data Protection Directive 95/46/EC, which allows for the collection and processing of personal data without notifying the data subject where necessary in the following limited circumstances:

  • To safeguard national security
  • For defence
  • For public security
  • For the prevention, investigation, detection and prosecution of criminal offences or breaches of ethics for regulated professions
  • For an important economic or financial interest of a Member State, including budgetary or taxation matters
  • For monitoring, inspection or regulatory functions
  • For the protection of the data subject or the rights and freedoms of others.

The case involved the use of private detectives by the Belgian Professional Institute of Estate Agents (IPI) to collect information on a real estate company that allegedly breached regulatory rules. The admissibility of the private detectives’ evidence in court was questioned on the grounds that the estate agents had not been informed that their personal data would be processed by third parties, as required by Article 11(1) of the EU Directive. IPI argued that the use of private detectives fell within the exception under Article 13(1)(d) of the EU Directive, which permits the collection of data without informing the data subject for the prevention, investigation, detection and prosecution of breaches of ethics for regulated professions.

Article 9 of the Belgian data protection Act of December 8, 1992 on Protection of Privacy in relation to the Processing of Personal Data (as amended) (the Belgian Law) corresponds to Articles 10 and 11 of the EU Directive and imposes the obligation to inform data subjects of data processing activities. Articles 3(3)-(7) of the Belgian Law provide the following exceptions to this obligation:

  • For journalistic purposes
  • For artistic or literary expression
  • By public authorities for exercising judicial police duties
  • By police services for the purpose of administering police duties
  • By the European Centre for Missing and Sexually Abused Children

Belgian law did not therefore strictly transpose comparable exceptions to those set out in Article 13 of the EU Directive, which led the Belgian Constitutional Court to refer to the ECJ for clarification on the requirement for national laws of member states to directly implement Article 13.

The ECJ ruled that if Belgium had transposed the exceptions of Article 13 of the EU Directive into national law, the use of private detectives would have fallen within the exception under Article 13(1)(d). However, the ECJ clarified that transposition of the Article 13 exceptions to the requirement that data controllers inform data subjects about the processing of their personal data is optional, not obligatory, for EU member states. Since Belgian law did not provide an exception comparable to Article 13(1)(d) in this case, the estate agents should have been informed of the data processing by the private detectives in accordance with Articles 10 and 11 of the EU Directive (and Article 9 of the Belgian Law).

The ruling highlights the importance of the proposed EU Data Protection Regulation, which will harmonise national data protection laws across all EU member states to eradicate discrepancies in interpretation and implementation of the EU Directive. 
 

Australian Data Protection Authority Issues Further Guidelines On Australian Privacy Principles

This post was written by Cynthia O'Donoghue.

The Australian data protection authority, the Office of the Australian Information Commissioner (OAIC), has issued two further sets of guidelines, following our previous blog analysing earlier guidelines issued on the Australian Privacy Principles (APPs) that will provide the framework for Australia’s Privacy Amendment (Enhancing Privacy Protection) Act 2012, scheduled to take effect on 12 March 2014. The most recent sets of guidelines relate to the rights of data subjects under APP 12 (‘access to personal information’) and APP 13 (‘correction of personal information’).

The key points to note from APP 12:

  • APP entities that hold personal information about individuals must give individuals access to that personal information on request (whether in writing or otherwise informally).
  • Applications for access requests must be free of charge, and any charges relating to providing the information must not be excessive.
  • The right to access information under APP 12 operates alongside other legal procedures, e.g., the Freedom of Information Act (FOI Act).
  • APP entities can refuse to grant access to information by providing the individual with written notice justifying the circumstances for refusal. These circumstances include the grounds for refusing access under the FOI Act, as well as the following:
    • Reasonable belief that giving access would pose a serious threat to life, health or safety of an individual
    • Access would have unreasonable impact on privacy of other individuals
    • The request is frivolous or vexatious
    • Information relates to anticipated or existing legal proceedings and would not be disclosable under discovery
    • Access would reveal intention of negotiations with the individual or would prejudice enforcement activities for misconduct
    • Access would reveal information in connection with a commercially sensitive decision-making process
    • Giving access would be unlawful
  • APP entities must respond to access requests within 30 calendar days by either providing a notice of refusal or granting access in the manner requested by the individual.

The key points to note from APP 13:

  • APP entities must take reasonable steps to correct personal information to ensure information held is accurate, up-to-date, relevant and not misleading.
  • Privacy policies must provide a mechanism for individuals to make a request to an APP entity for correction of their personal data.
  • Reasonable steps must be taken to notify other APP entities of the correction.
  • Individuals who request that their information be corrected but are refused must be provided with a complaint mechanism and written notice of the grounds for the refusal to correct the information.
  • It is not permissible to impose any charge on individuals for requesting the correction of their personal information.
  • APP entities must respond to requests for correction within 30 calendar days by either correcting the information or notifying the individual of the grounds for refusing the correction.

 

New Zealand Data Protection Authority's Powers Held Back By Government Veto

This post was written by Cynthia O'Donoghue.

The Privacy (Giving Privacy Commissioner Necessary Tools) Amendment Bill that would have given greater powers of control to the New Zealand data protection authority, the Office of the Privacy Commissioner (the DPA), has been blocked by a negative vote in New Zealand Parliament.

The draft bill proposed by the Labour opposition party states, “At the moment, enforcement of the Privacy Act 1993 is complaints-driven. People can complain to the Privacy Commissioner about breaches of their privacy rights under the Act. But the Commissioner has only limited powers to take action about breaches of the Act of its own initiative. Such a system is not well suited to addressing underlying systematic problems.”

The draft bill aimed to give the DPA broader powers to audit government authorities and issue compliance notices to ensure that personal information held by public sector bodies is not abused. The ambition was for the DPA to take a more hands-on approach to data breaches and prevent security problems, in light of a number of recent serious privacy breaches by government agencies.

The draft bill proved unsuccessful in Parliament because the ruling National Party has bigger plans for a more comprehensive reform package of New Zealand privacy law, which will address the powers of the DPA alongside the wider issues covered in the Law Commission’s 2011 review of New Zealand privacy legislation (152 PRA, 8/8/11). Therefore, in spite of the negative vote, New Zealand remains committed to privacy reform.
 

Mexican Transparency Bill To Put An End To Government Corruption

This post was written by Cynthia O'Donoghue.

In an effort to enhance transparency in government and end financial corruption, the Mexican Congress has approved a draft amendment bill that will reform the Mexican Constitution by requiring all government entities to publicly report their finances and expenditures. The lower chamber of the Mexican Congress, the Chamber of Deputies, voted 424 to 16 in favour of the bill, which has now been passed to President Enrique Pena Nieto and is expected to become law by the start of 2014.

President of the Chamber of Deputies, Julio Cesar Moreno Rivera, commented, “this reform will increase accountability and bring Mexico toward having a democratic and transparent government.”

The draft bill tasks the Mexican DPA, the Federal Institute for Access to Public Information (IFAI), with overseeing public transparency and government accounting for federal, state and municipal institutions.  The IFAI will be able to publish information held by the executive, legislative and judicial entities, as well as political parties and trade unions. The IFAI will have administrative autonomy from the federal government and is empowered to make final resolutions and binding decisions, except in the case of national security.

Institutional Revolutionary Party Deputy Lizbeth Gamboa Song commented, “This is a historic step with the transformation of the IFAI into a constitutionally autonomous agency, because this strengthens the powers of transparency agencies by expressly spelling it out in the text of the Constitution.”

Senator Cristina Diaz Salazar added, “the IFAI’s transformation into an autonomous body marks an important step in Mexico’s decision to generate higher standards to oversee transparency and financial reporting by public institutions, the public’s right to access that information and the protection of personal data.”

There are, however, concerns within the International Chamber of Commerce that the initiative will overwhelm the IFAI, and could trigger a reshuffle of the Mexican DPA, with two additional commissioners scheduled to be appointed. The bill even implies that a new entity will be appointed to take over the IFAI’s responsibility for personal data protection, leaving the IFAI free to focus on public sector freedom of information issues. Observers suspect this could be the Secretariat’s Subsecretary of the Digital Economy operating under the Mexican consumer regulator, Profeco, or even a completely new entity.

While the bill makes remarkable progress for greater transparency, it has simultaneously created a great deal of uncertainty about the future of Mexican data protection regulation.
 

Regulations Released Implementing Malaysian Data Protection Act

This post was written by Cynthia O'Donoghue.

The Minister of the Malaysian Communications and Multimedia Commission (the Minister) has announced by Gazette that Malaysia’s Personal Data Protection Act 2010 (the PDPA) will finally take effect on 15 November 2013, introducing a privacy regime in Malaysia for the first time. To accompany this announcement, a series of regulations has been issued to implement the provisions of the PDPA. Data controllers will have three months from the date the PDPA takes effect to comply and avoid enforcement.

The Regulations on Classification of Data Users highlight that the PDPA requires certain organisations to register as data users with Malaysia’s new Personal Data Protection Commissioner. These include:

  • Banking and financial institutions
  • Communications service providers
  • Tourism and hospitality providers
  • Insurers
  • Real estate firms
  • Education bodies
  • Direct marketing organisations
  • Transportation firms
  • Utility providers

The Regulations on Registration of Data Users set out the costs of registration, which is valid for a period of 24 months before renewal is required. Failure to register as a data user could result in a fine of up to 500,000 Ringgit and imprisonment of up to three years.
 

New Data Protection Law For South Africa

This post was written by Cynthia O'Donoghue.

After 10 years of debate, South Africa’s President Jacob Zuma has finally signed South Africa’s first framework privacy bill into law, the Protection of Personal Information Bill (PoPI). PoPI will reinforce the right to privacy under Article 14 of the South African Constitution. PoPI will take effect one year after the date of enactment, though there is potential for this transition period to increase to three years, depending on discussions between the Minister of Justice and Constitutional Development and the newly established data protection authority (DPA). After that date, the DPA will be empowered by PoPI to enforce fines of up to 10 million Rand ($957,171) for non-compliance.

PoPI will provide protection for both individuals and juristic persons, including corporations. The new law will also allow the DPA to file lawsuits on behalf of individuals against data controllers. Data controllers should be aware of the strict liability they will bear, and the potential for remedies sought to include aggravated damages.

PoPI is based upon a framework of conditions, including:

  • Accountability – Data controllers will bear ultimate liability and responsibility for compliance with PoPI
  • Processing Limitation – data may only be processed lawfully and in a manner that is not excessive, and with the consent of the data subject (which can be withdrawn at any time). Explicit consent of the data subject is required for processing sensitive data.
  • Purpose Specification – data may only be collected for a specific, explicitly defined and lawful purpose. Any data collected should not be retained any longer than is necessary for achieving that purpose. Explicit consent is required for direct marketing.
  • Further Processing – any further processing must be compatible with the original purpose of collection
  • Information Quality – reasonable steps must be taken to ensure data is complete, accurate, not misleading and updated when necessary
  • Openness – Data controllers must retain open records documenting all processing operations undertaken, and must take reasonable steps to ensure data subjects are informed of the purpose and extent of data collected; the identity of the data controller; whether provision of the information is mandatory or voluntary and the consequences of failure to supply that information; any subsequent disclosure to third parties; and the full extent of rights available to data subjects under PoPI
  • Security Safeguards – Data controllers must take reasonable steps to secure the integrity and confidentiality of personal data in their possession by taking appropriate technical and organisational security measures. Any security compromises must be notified to the DPA and the data subject concerned.
  • Data Subject Participation – A data subject has rights under PoPI to access or correct their personal information held by the data controller

President Jacob Zuma commented, “PoPI will give effect to the right to privacy by introducing measures to ensure that the personal information of an individual is safeguarded when it is processed by responsible parties.”

Kazakhstan Introduces New Privacy Law

This post was written by Cynthia O'Donoghue.

The government of the Republic of Kazakhstan has announced that Kazakhstan’s framework data protection law, Law No. 94-V (unofficial English version), has been enacted and will be effective as of November 25, 2013. Kazakhstan is now the second country in Central Asia to enact a data privacy law.

The new statute governs the protection of human rights in the collection and processing of personal data in Kazakhstan. Law No. 94-V will operate in conjunction with existing regulatory rules for data processing, in contrast with the old version of the law, which regulated personal data protection on a sector-specific basis. Law No. 94-V does not appoint a central data protection authority; instead, each state agency is expected to supervise data protection practices within the industry or government sector for which it is responsible.

The key provisions to note from Law No. 94-V are as follows:

  • Personal data is defined as any information that identifies an individual, including biometric data
  • A database operator (data controller) is defined to include any government agency, business or individual
  • Consent must be obtained prior to the collection of any personal data, except when in accordance with international treaties, by law enforcement agencies and courts, or for the purposes of government statistics
  • Collection, use and storage of personal data must be limited to that which is strictly necessary for the relevant purpose notified to the individual
  • Individuals must be notified prior to any transfer of personal data to any third parties
  • Cross-border transfers of personal data are permitted, provided the country to which data is transferred is deemed to have adequate protection laws in place
  • The law does not apply to personal data collected for personal or family purposes, or for government national security purposes

The State Prosecutor’s Office will supervise compliance with Law No. 94-V, including enforcement in accordance with the fines set out under Article 84-1 of the Code of Administrative Offences (the Code). The Code sets out a scale of fines measured in monthly calculation indexes (MCIs). For illegal data collection and processing, individuals and small to medium-sized businesses could face a fine of 50 MCIs ($556), whilst large businesses can be fined up to 100 MCIs ($1,130). Failure to take technical measures to secure personal data can result in fines of up to 100 MCIs ($1,130) for individuals, 200 MCIs ($2,260) for small to medium businesses, and 300 MCIs ($3,390) for large businesses. Article 142 of the Criminal Code further stipulates that any serious violation of the new privacy law can result in fines of between 400-700 MCIs ($4,520-$7,910). Any substantial harm caused to an individual as a result of a failure to implement adequate security measures can result in a fine of up to 1000 MCIs ($11,300) and prison terms of up to three years, increasing to 2000 MCIs ($22,600) and five years' imprisonment if committed by a government official or business executive.

To help database operators avoid facing these hefty penalties, a short set of guiding regulations (unofficial English version) have been drafted to implement the new privacy law. Database operators will have three months from the date of enactment (25 November 2013) to ensure they are in compliance with the new privacy law.

Target Breach Yields A Dozen Plaintiffs Suing...

This post was written by Paul Bond.

Target announced the compromise of payment card information taken in-store between November 27 and December 15, 2013. Twelve putative class actions have been filed in federal courts arising from this announcement. Attached is a word cloud formed using those complaints, excluding very common words. (Created using Wordle [http://www.wordle.net/].) Heading into 2014, plaintiffs’ counsel seem to be pushing essentially the same buttons against Target that they did in 2007 in the litigation frenzy against TJ Maxx. That litigation cost the retail giant more than $250 million.

EU Commission Back-Pedals On Data Protection Reform

This post was written by Cynthia O'Donoghue. 

After many setbacks and delays in developments for EU data protection reform (see our previous blogs on the first European Parliament vote delay and subsequent vote pushback), October was an exciting month of progress. The Ministers in the Council agreed on a centralised, one-stop-shop mechanism for data protection, and the European Parliament voted in favour of the proposed new data protection regulation and directive.  However, all this enthusiasm came to nothing, with European Council Conclusions revealing that the reformed EU Data Protection Framework is not to be adopted until 2015 – long after the next European Parliament elections in May 2014. 

Hopes for a swift reform have been further dampened by recent announcements (13/1027 and 13/1029) by Vice President of the European Commission Viviane Reding, following the EU Justice and Home Affairs Council meeting on 6 December 2013.

Reding’s most recent announcements condemn the Council and the Lithuanian Presidency for failing to seize the opportunity to deliver on EU data protection reform. Reding blames the further delay on the Council getting caught up in the legal complexities of the one-stop-shop mechanism rather than making political progress. The Council previously agreed to the concept of a one-stop-shop principle in October, subject to a few further clarifications as to how the mechanism would ensure proximity between individuals and the nominated lead supervisory authority. Concerns about facilitating this proximity have not been resolved, and new concerns have arisen about how data subjects will be granted effective redress under this mechanism. Reding despairs that the Council’s legal service has reopened questions already resolved and agreed upon in October.

When asked about the meeting on 6 December, she commented “We were almost there! Today we have moved backwards; instead of seeing the wood for the trees, Ministers have got bogged down in details, meaning that even after three months of discussions on the one-stop-shop principle there is still no workable solution on the table. I have always been vocal calling for a swift agreement on data protection reform, but today I must say not at any cost, I cannot support a reformed framework with a one-stop-shop that would become an empty shell. This Council has been a missed opportunity.”

Reding deplores the back-pedalling witnessed throughout the Lithuanian Presidency, and calls for the incoming Greek Presidency to take the reins and put reform back on track. The incoming Greek Presidency seems equally keen to conclude discussions and deliver a swift reform, with 10 days of meetings on data protection already scheduled on the agenda.
 

Shoring Up Safe Harbor: EU and U.S. Must Work Together to Rebuild Trust in EU-US Data Flows

This post was written by Cynthia O'Donoghue and Kate Brimsted.

Revelations of systematic mass surveillance of EU citizens’ data by the United States did little for transatlantic relations generally, and even less for the EU-US Safe Harbor scheme in particular. The European Commission (the ‘Commission’) conducted a review into whether Safe Harbor was still fit for the purpose of preserving EU citizens’ data protection rights when their data flowed to the United States, and in November published a strategy paper aimed at rebuilding trust in EU-US data flows.

The Commission also published an analysis of the existing operation of Safe Harbor. The Commission offers 13 recommendations to make the Safe Harbor framework ‘safer,’ focusing on greater transparency, with particular regard to the extent of U.S. government access, and more effective enforcement and application of privacy principles. The recommendations require:

  1. Self-certified companies (SCCs) must publicly disclose privacy policies.
  2. SCCs’ privacy policies should set out the extent to which U.S. law permits public authorities to collect data under the Safe Harbor.
  3. SCCs must include a link to the Department of Commerce Safe Harbor website on website privacy policies.
  4. SCCs must include a link to an ADR (alternative dispute resolution) provider or EU panel for redress.
  5. SCCs must publish the privacy conditions contained in any contracts concluded with subcontractors or third-party vendors, e.g., cloud service providers.
  6. All companies on the Department of Commerce Safe Harbor list that are not current members of the scheme should be clearly flagged.
  7. ADR bodies in the Safe Harbor scheme must make ADR readily available and affordable.
  8. The U.S. Department of Commerce should monitor ADR providers for accessibility and transparency.
  9. A percentage of certified companies should be subject to ex-officio investigations on privacy policy compliance.
  10. Complaints or findings of non-compliance should be subject to follow-up investigations.
  11. The U.S. Department of Commerce should inform competent EU data protection authorities in the event of doubts on compliance or complaints.
  12. False claims of Safe Harbor adherence should be investigated thoroughly.
  13. The national security exception should be limited to use that is strictly necessary or proportionate.

The U.S. Department of Commerce has commented that it is ‘delighted with the genuine willingness on the part of the Commission to save the mechanism and look forward to further constructive dialogue with the EU on the operational aspects of the Safe Harbor framework’. However, the Commission Vice-President, Viviane Reding, reiterated that if recommendations are not implemented by the next review in mid-2014, the Commission will have to resort to the ‘Damocles sword that the Commission has taken out and is hanging over Safe Harbor.’
 

NIST Cybersecurity Framework

This post was written by Timothy J. Nagle.

NIST published the “Preliminary Cybersecurity Framework,” comprising a Core, a Profile, and Implementation Tiers, in October. Comments were due by December 13, and many industries, sectors and organizations have provided input. There is general industry support for the purpose, content, and collaborative development of the Framework, but less certainty around its eventual effect and implementation. Also, while the Framework is the rigorous set of standards and methodologies one would expect of NIST, there is a possibility that a focus on privacy principles may frustrate development of the Framework, as it has with information-sharing legislation.

Please click here to read the issued Client Alert.

European Commission Proposes Directive To Protect Trade Secrets

This post was written by Cynthia O'Donoghue.

The European Commission has announced a proposal for a directive “on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure.” The measure responds to a recent survey in the “Study on Trade Secrets and Confidential Business Information in the Internal Market” (MARKT/2011/128/D), which found that one in five companies suffered at least one attempted trade secret theft in the past decade, and that the number of companies reporting theft of information increased from 18% in 2012 to 25% in 2013.

European Commission Vice President Antonio Tajani added, “The loss of a trade secret and disclosure of a key invention to competitors means a catastrophic drop in value and future performance for an SME…With this legislation the Commission will protect EU businesses’ livelihood and the trade secrets that form a vital part of it.”

The aim of the directive is to provide a unified level of protection across all member states to harmonise the fragmented patchwork of different national laws governing trade secrets, with some countries having no laws to protect against trade secret misappropriation whatsoever. In summary, the key provisions and commonly agreed definitions of the Directive to note are:

  • Trade secret – means any information which is secret to the extent it is not known or readily accessible, has commercial value because it is secret and has been subject to steps to keep it secret
  • Unlawful acquisition of a trade secret – means acquisition of a trade secret without the consent of the trade secret holder by unauthorised access to or copying of any documents, objects or electronic files lawfully under the control of a trade secret holder, or by theft, bribery, deception, or breach of a confidentiality agreement
  • Unlawful use or disclosure of a trade secret – means use or disclosure of a trade secret unlawfully acquired in breach of a confidentiality agreement or contractual duty without the consent of the trade secret holder
  • Lawful acquisition, use and disclosure – means information obtained through independent discovery, creation, observation, study, disassembly or test of a product or object that has been made available to the public or in conformity with honest commercial practices
  • The limitation period for claims under the Directive shall be not more than two years after the date when the applicant became aware of the unlawful acquisition, use or disclosure of a trade secret
  • Participants in legal proceedings relating to the unlawful acquisition, use or disclosure of a trade secret shall not be permitted to disclose any trade secret of which they have become aware as a result of the proceedings
  • Remedies will include cessation, declaration of infringement, destruction of infringing goods or any document related to the trade secret, as well as pecuniary compensation and damages

Commissioner for Internal Market and Services Michel Barnier said, “Cybercrime and industrial espionage are unfortunately part of the reality that businesses in Europe face every day. We have to make sure our laws move with the times and that the strategic assets and the trade secrets of our companies are adequately protected against theft and misuse… This proposal aims to boost the confidence of businesses, creators, researchers and innovators … they will no longer be dissuaded from investing in new knowledge by the threat of having their trade secrets stolen.”

The proposed Directive will now be transmitted to the Council of Ministers and the European Parliament for adoption, and could come into force by the end of 2014.

UN Urged to Create International Digital Bill of Rights

This post was written by Cynthia O'Donoghue.

Nobel Prize laureates have joined 562 authors from 81 countries around the world in signing a petition demanding that the UN create an international ‘digital bill of rights’ to put an end to government surveillance activities and the erosion of the human right to privacy in the digital age. The petition is likely to be particularly potent in the aftermath of the Edward Snowden revelations about mass surveillance programmes such as GCHQ’s Tempora and the NSA’s PRISM.

The opening to the petition states, ‘All humans have the right to remain unobserved and unmolested. This fundamental human right has been rendered null and void through the abuse of technological developments by states and corporations for mass surveillance purposes.’

The petition therefore demands the right for all people to determine to what extent their personal data may be collected, stored and processed and by whom, to obtain information on where their data is stored and used, and to obtain deletion upon request. 

The release of the petition follows a day after leading technology companies, including AOL, Apple, Facebook, Google, LinkedIn, Microsoft, Twitter and Yahoo, published a letter to President Barack Obama demanding the United States take the lead in global government surveillance reform. 

The letter calls for governments to endorse and implement five principles to reform state-sponsored surveillance, including:

  1. Codify limitations on government authority to collect user information, including avoiding bulk data collection
  2. Intelligence agencies seeking to compel production of information should do so within a clear legal framework
  3. Transparency about government demands by allowing companies to publish the number and nature of the demands
  4. Respect the free flow of information across borders
  5. Avoid conflict of laws by governments working together

The letter stated, ‘It is time for the world’s governments to address the practices and laws regulating government surveillance of individuals and access to their information.’ Facebook CEO Mark Zuckerberg added, ‘US government should take this opportunity to lead this reform effort and make things right to restore trust.’
 

UN Passes Internet Privacy Resolution Recognising Human Rights Online

This post was written by Cynthia O'Donoghue.

The UN General Assembly’s Human Rights Committee has announced that a draft resolution sponsored by Brazil and Germany, ‘The Right To Privacy in the Digital Age,’ has been unanimously approved.

The resolution recognises that rapid technological development has created new opportunities for governments and organisations to undertake surveillance and interception in violation of an individual’s right to privacy under article 12 of the Universal Declaration of Human Rights. The resolution expresses a deep concern about the negative impact that surveillance on a mass scale may have on the exercise of an individual’s human rights. The resolution therefore reaffirms the right to privacy, especially an individual’s right to be free from arbitrary or unlawful interference online. Brazil’s Ambassador Antonio de Aguiar Patriota commented, “the resolution establishes for the first time that human rights should prevail irrespective of the medium and therefore needs to be protected online as well as offline.”

In a UN press release, independent expert the Special Rapporteur on freedom of expression Frank La Rue commented, “If States are truly committed to ensuring that all the rights which apply offline continue to be valid online, they urgently need to take concrete steps to secure respect for the privacy of communications as a universal right.” Mr La Rue added, “Blanket indiscriminate surveillance should never be legal…privacy is a recognised human right and for decades there has been a solid understanding of this concept.”

The unanimous adoption of the resolution means it is likely also to pass the 193-member General Assembly in December. While the resolution will not be legally binding, it will carry significant political weight, reflecting the global consensus on Internet privacy. This symbolic resolution is welcome in the context of controversial revelations regarding U.S. surveillance activities by the NSA, in particular concerning a number of foreign leaders, including Brazilian President Dilma Rousseff and German Chancellor Angela Merkel.
 

Newsflash from Luxembourg: Data Retention Directive is incompatible with Charter of Fundamental Rights

This post was written by Katharina A. Weimer, LL.M.

According to the opinion of Advocate General Pedro Cruz Villalón, published 12 December 2013, Directive 2006/24/EC is, as a whole, incompatible with the requirement, laid down in the Charter of Fundamental Rights, that every limitation on the exercise of a fundamental right must be provided for by law. The Directive itself should already contain the principles governing the minimum guarantees for access to, retention of, and use of the data. These guarantees, and their establishment, application and review of compliance, need to be defined in the Directive itself. Further, the Advocate General deems the Directive disproportionate because it allows Member States to require data retention for up to two years; he fails to see a justification for retaining the data for longer than one year.

However, Advocate General Cruz Villalón does not recommend a finding of immediate invalidity to the European Court of Justice. Rather, the effects of such finding should be suspended pending the adoption of measures that remedy the invalidity by European legislature, within a reasonable time frame. The objectives of the Directive itself are not illegitimate, but the measures required to reach these objectives are incompatible with the fundamental rights of the citizens.

In Germany, the Federal Constitutional Court had already declared that the national implementation of the Directive did not conform to the German Constitution, rendering the implementing legislation invalid. While the potential parties to the governing coalition have already committed to re-implementing the Directive in the draft coalition agreement, it remains to be seen whether they will step back from that plan after the Advocate General’s opinion, or at least wait for the decision of the European Court of Justice, which usually follows the opinion of the Advocate General.
 

New Jersey AG Continues Privacy Enforcement Efforts

This post was written by John P. Feldman and Frederick Lah.

On November 13, 2013, California-based mobile app developer, Dokogeo, Inc., entered into a consent order with the New Jersey Attorney General to settle charges of violations of the Children's Online Privacy Protection Act ("COPPA") and New Jersey's Consumer Fraud Act. The settlement, which was announced on November 22, is the second one entered into between an app developer and the NJ AG over alleged COPPA violations. For our analysis on the previous settlement, please visit here.

According to the consent order, Dokogeo offers a geolocation scavenger hunt app for users to visit new locations and gather photos and notes from people they meet. Users have the option to create a profile, which includes an email address. The NJ AG alleged that the Dokogeo app was directed to children, although the only allegation made was that there was animation on the site. While the presence of animated characters on a site is a factor that the FTC Rule now includes for determining whether a website or online service is “directed to children” under 13 years of age, the FTC has stated that the presence of animated characters alone is not definitive evidence that the site is directed to children. So, it is somewhat unclear exactly how much analysis was done to determine whether COPPA should apply at all. The AG then alleged that Dokogeo violated COPPA because it collected "personal information" from children (including email address, photos, and geolocation), and that it did not obtain verifiable parental consent in connection with such collection. As part of the settlement, Dokogeo agreed to enhance its privacy disclosures -- both on its apps and on its website -- and to stop collecting personal information about users under 13 years old -- including geolocation data. The company was also fined $25,000, which will be vacated after ten years if the company remains in compliance with the consent order, COPPA, and the New Jersey Consumer Fraud Act.

State AGs continue to focus their efforts on consumer privacy, with states like New Jersey and California leading the charge. We will continue to monitor developments on the state level, in particular to see whether other states will follow suit.

The Future of the Internet: New gTLDs have arrived

This post was written by Gregory S. Shatan.

The number of Top Level Domains (such as .com, .net, .eu, etc.) is expanding exponentially. On October 31, the first of the new gTLDs opened registration for a trademark “Sunrise” period, which allows trademark owners to register domains matching their trademarks. The first English-language new gTLDs will begin their registration period on November 26. To learn more about domain registration in the Trademark Clearinghouse (TMCH) and protecting and maintaining your brand, visit our sister blog, AdLaw By Request.

 

 

Canada Supreme Court Declares Alberta Privacy Law Unconstitutional

This post was written by Mark S. Melodia, Cynthia O’Donoghue, and Frederick Lah.

On November 15, 2013, the Supreme Court held Alberta’s Personal Information Protection Act (“PIPA”) to be unconstitutional, ruling that an individual’s right to freedom of expression in the labor strike context outweighs the individual’s right to control his or her information in public. The ruling is suspended for 12 months to give Alberta’s legislature time to consider how best to amend PIPA.

This case, Alberta (Information and Privacy Commissioner) v. United Food and Commercial Workers, Local 401, arose after the union recorded and photographed employees of a casino crossing the picket line during lawful picketing activity. The union posted signs near the picket line saying that those who crossed the line would be photographed. Some of the photographs were eventually used on union newsletters and posters. The photographed employees filed complaints with Alberta’s privacy commissioner. An adjudicator appointed by the commissioner determined that PIPA prohibited the union from collecting, using, and disclosing such photographs and recordings without the consent of the employees. The case was reviewed by the appellate courts before eventually making its way to the Supreme Court.

The Supreme Court held in a 9-0 ruling that PIPA violates s. 2(b) of Canada’s Charter of Rights and Freedoms, which guarantees the “freedom of thought, belief, opinion and expression, including freedom of the press and other media of communication.” After determining that dissuading workers from crossing the picket line is protected activity, the court found “without difficulty” that PIPA unconstitutionally restricts such freedoms of expression:

“It goes without saying that by appearing in public, an individual does not automatically forfeit his or her interest in retaining control over the personal information which is thereby exposed. This is especially true given the developments in technology that make it possible for personal information to be recorded with ease, distributed to an almost infinite audience, and stored indefinitely. Nevertheless, PIPA’s restrictions operate in the context of a case like this one to impede the formulation and expression of views on matters of significant public interest and importance.”

The tension between personal privacy and freedom of expression in the labor context is not unique to Canada. In the United States, for example, the National Labor Relations Board has struck down a number of social media policies for making sweeping prohibitions on what employees can and cannot post on their social media accounts. Our Employment Law Watch Blog has previously written about the NLRB’s social media cases. Back in July, the NLRB also held a company’s notice directing employees not to discuss workplace investigations to be in violation of the National Labor Relations Act. The recent Canada Supreme Court case is just the latest example of how this tension can play out in the courtroom.

While it is too early to tell what specific changes to PIPA will result, the changes will likely focus on union activity, perhaps by expressly excluding union activity from the scope of the law. In the meantime, PIPA, as currently drafted, will remain in full force for the next year.
 

Compliance Warning Issued by BBB Accountability Program on Data Collection for Online Behavioural Advertising

This post was written by Douglas Wood and Frederick Lah.

The Online Interest-Based Advertising Accountability Program Council issued its first-ever compliance warning about the omission of notices of data collection for online behavioral advertising, which are required under the Self-Regulatory Principles for Online Behavioral Advertising. Enforcement actions against first parties failing to provide such notice have been delayed by the Council until January 1, 2014, but we strongly encourage companies to act now. Learn more here about what you can do to prevent enforcement action on AdLaw By Request.

 

State Attorneys General Maintain Sharp Focus on Privacy

This post was written by Mark S. Melodia and Christine E. Nielsen.

Though the National Association of Attorneys General (NAAG) Presidential Initiative “Privacy in a Digital Age” expired in June 2013 when a new NAAG president took over, the state attorneys general have maintained their sharp focus on all things privacy, with no signs that that focus will shift anytime soon. Most recent case in point: a $17 million settlement with Google related to Google’s use of tracking cookies on Safari browsers. 

On November 18, 37 states and the District of Columbia announced the settlement with Google, which resolves an investigation that began in February 2012. Default settings on Apple’s Safari browser do not allow for tracking across different websites.  The investigation centered on whether Google tricked the browser into allowing such tracking, ostensibly in contradiction to the user’s choice not to be tracked. Google faced similar scrutiny from the FTC, which entered into a $22.5 million settlement with the search engine giant late last year.

In addition to the $17 million payment, the state AG settlement prohibits Google, without the express consent of an individual user, from overriding that user’s Internet browser’s setting to block tracking cookies. Google is also prohibited from misrepresenting the extent to which a user can manage how Google serves advertisements. Google must create and maintain a page that informs users about cookies, Google’s use of cookies, and user control over cookies.  This separate “Cookie Page” must be maintained for five years.

Privacy investigations and enforcement actions are not just handled through the multistate vehicle; individual states are pursuing their own actions, scrutinizing website and mobile app privacy policies, investigating data security breaches, and paying close attention to how entities treat sensitive data like children’s information and health information. For example, California has been particularly active in this area, releasing mobile app best practices guidance earlier this year, which followed on the heels of enforcement actions filed against mobile application developers for alleged non-compliance with California’s privacy policy requirements.
Several states have also flexed their muscles in the health care arena, enforcing data breach notification requirements for the loss of protected health information under the Health Insurance Portability and Accountability Act (HIPAA). Connecticut led the charge in 2010, exercising the new enforcement authority granted to the states under the HITECH Act, with a lawsuit against Health Net. In 2012, both Massachusetts and Minnesota entered the arena with investigations of their own. With this year’s release of final rules under HITECH and a renewed national focus on health care, we wouldn’t be surprised to hear about more states jumping into that privacy arena soon.
 

China Amends Consumer Protection Law to Benefit Consumers at Expense of Businesses

This post was written by Cynthia O'Donoghue.

On 25 October 2013, China’s Standing Committee of the National People’s Congress of the People’s Republic of China passed an amendment to modernise the Law on Protection of Consumer Rights and Interests (Consumer Protection Law), the first overhaul since its adoption in 1993. The amended Consumer Protection Law will take effect from 15 March 2014 and will provide specific protection for consumer privacy in China for the first time. The amendments reinforce the Decision on Enhancing Internet Information Protection from December 2012. As a result, binding privacy protection obligations will now extend to sellers of consumer goods as well as internet service providers and telecommunication providers in the context of the increasing popularity of e-commerce.

Under the amendments, businesses will be required to:

  1. Obtain explicit consumer consent to the collection and use of their personal information
  2. Expressly inform consumers of the purposes, method and scope of the use of their personal information
  3. Keep all consumer information confidential
  4. Refrain from selling, illegally providing or otherwise disclosing consumer information to others
  5. Put in place technical measures to adequately secure consumer information
  6. Take active steps to mitigate damage in the event of actual or suspected unauthorised disclosure of consumer information
  7. Obtain explicit consent prior to contacting consumers with commercial or marketing information

It is anticipated the amended Consumer Protection Law will greatly enhance consumer privacy rights and boost commerce in China. Jia Dongming, director of the civil law working committee under China’s Standing Committee commented “Strengthening consumer confidence will benefit the whole nation's economic development and boost domestic demand."

However, critics believe any benefits are outweighed by the costly burden this will place on businesses. Organisations will have to sharpen their internal data management and appoint data protection officers to ensure they keep step with the requirements of the revised Consumer Protection Law. A failure to comply could result in companies facing the increased penalty of 500,000 yuan, a significantly greater deterrent than the previous fine of 10,000 yuan. Beyond the administrative expense, costs are likely to spiral even further with the burden of proof shifting from the consumer to businesses in the event of disputes. In such scenarios consumers are likely to win out, bolstered by the support of the China Consumers’ Association, which will now be able to initiate class-action litigation on behalf of consumers.

Department for Business, Innovation & Skills Publishes Impact Assessment for European Commission Proposed Cybersecurity Directive

This post was written by Cynthia O'Donoghue.

The UK Government Department for Business, Innovation and Skills (BIS) issued an impact assessment (IA) at the end of September on the draft Network and Information Security Directive (the Directive) proposed by the European Commission on 7 February 2013. The Directive aims to achieve a common high level of network and information security across the EU to harmonise existing discrepancies between national strategies.

To achieve this, the proposed Directive mandates:

  • All Member States must within one month establish competent authorities for network and information security and set up national Computer Emergency Response Teams (CERTs)
  • A cooperation network must be set up between competent authorities enabling secure and coordinated information exchange as well as an early warning system to allow effective detection and response in relation to network information security incidents
  • A culture of risk management and information sharing between private and public sectors must be developed
  • A system of reporting to the relevant competent authority of any incidents seriously compromising an entity’s networks and information systems must be established
  • National competent authorities must impose sanctions, initiate audits and publicise incidents

To assess the impact this Directive could have in the UK, BIS initiated a call for evidence on 22 May 2013 to create a baseline. It found that, at present, £1.98 billion is spent on security annually. Large organisations spend £1.45 billion in total, averaging £540,000 each, whilst SMEs account for £533 million, averaging £26,000 each. The potential impact of the Directive is estimated as follows:

  • 22,935 businesses in the UK will be affected
  • Additional security spending will amount to between £992.1 million - £1,984.2 million
  • Large organisations will have to increase average spending by an extra £270,000-£540,000, whilst small organisations will have to increase average spending by an additional £13,000-£26,000
  • An overall benefit of £860.6 million is estimated if 5,000-10,000 of affected UK organisations can achieve benefits of £27,000 by preventing 50% of cybersecurity incidents

Beyond the figures, the IA also highlights some key concerns:

  • Setting a minimum level across the EU could result in a tick-box approach to compliance
  • In many sectors, reporting infrastructures already exist with industry regulators. Additional reporting obligations could lead to duplication of procedure increasing the administrative burden and causing resources to be diverted to dealing with compliance.
  • The scope of businesses likely to be affected is overly expansive and could impose disproportionate obligations on small businesses
  • Imposing mandatory reporting obligations as opposed to the voluntary approach could create a compliance culture discouraging information sharing
  • Audits, sanctions and publication of breaches could penalise organisations with strong capabilities for detecting breaches and disincentivise reporting
  • Establishing a new national competent authority could be costly and unnecessary
  • A pan-European response framework is likely to inhibit and slow down effective national measures for incidents
  • Greater information sharing between national competent authorities poses significant inherent security risks

That said, BIS is hopeful that the Directive will prove flexible in approach, and deems existing voluntary measures and the UK’s existing high-level strategy sufficient. However, until the scope and thresholds under the Directive are confirmed, it is only possible to speculate how costly its impact could be.

South Korean Government To Certify Companies' Privacy Compliance

This post was written by Cynthia O'Donoghue.

On 27 October 2013, South Korea’s Ministry of Security and Public Administration (MOSPA) announced that beginning 28 November 2013, the government is set to issue certifications to companies that can demonstrate compliance with their duties under the Personal Information Protection Act (PIPA).

Companies will be able to file applications for certification to the National Information Society Agency (NISA), which will assess each applicant against a set of criteria, with the number of obligations depending on the size of the business. The self-employed will have to satisfy 35 obligations; small- to medium-sized companies will have 52 requirements; while large companies and state-run firms will have to meet 65 criteria.

The Certification Program will grant businesses with NISA endorsement status for a period of three years; however, companies will be subject to annual NISA review to monitor ongoing compliance.

It is anticipated that certification will foster greater trust in the relationship between businesses and customers in South Korea. This will undoubtedly be a welcome measure in the context of the increasing trend of people falling victim to scams as a result of unauthorised leaks of personal data, such as the illegal access to personal data of more than 200,000 mobile phone users last year by two of the largest mobile phone companies, KT and SKT.

Global Privacy Enforcement Network Conducts International Privacy Sweep

This post was written by Cynthia O'Donoghue.

Transparency is central to respecting the privacy of individuals and it is paramount that organisations develop transparent online privacy policies so that individuals understand how their personal data is handled in this virtual context. To raise awareness of online privacy rights and to encourage compliance with privacy legislation, 19 privacy enforcement authorities, during 6-12 May 2013, joined forces to participate in the first international ‘Internet Privacy Sweep’ to assess ‘Privacy Practice Transparency’. This initiative was pioneered by the Global Privacy Enforcement Network (GPEN) and coordinated by the Canadian Privacy Commissioner. The findings of the sweep were published in August 2013.

Over the week, participating authorities searched the Internet, replicating a user’s experience to assess the transparency of privacy policies and practices across 2,186 websites and 90 mobile apps. Five common indicators were adopted to define the scope of the sweep when assessing each website:

  • Availability – is there a privacy policy or information about privacy practices?
  • Findability – how difficult is this information to find?
  • Contactability – is contact information for privacy queries accessible?
  • Readability – how comprehensible is the privacy policy?
  • Relevance – how well does the information provided address common privacy questions or issues?

The Internet Privacy Sweep highlighted the following concerns:

  • 23% of 2,276 websites examined had no privacy policy at all
  • One-third of privacy policies were inadequate in terms of relevance, and had a disproportionate focus on cookies rather than an explanation of data processing as a whole
  • 33% proved weak in terms of readability, ranging from minimal information of no more than a tweet’s length to, at the other extreme, legalistic language quoted directly from statute
  • 92% of mobile apps lag behind privacy policies of websites in terms of presentation, with 54% having no privacy policy at all

The following best practices were highlighted:

  • Many organisations’ privacy policies were easily accessible, simple to read, and relevant, including information as to what data is collected for what purposes and to whom it is disclosed
  • The best policies were easily accessible and presented in plain language with clear and concise explanation, using headings, short paragraphs and frequently asked questions
  • 80% of privacy policies included contact information (with several options including mail, email or phone) for privacy queries
  • Some mobile apps policies went beyond providing a link to the organisation's website’s privacy policy

The sweep was not intended as an investigation into compliance issues or legislative breaches; however, several enforcement authorities have already taken follow-up and enforcement actions directly against organisations whose privacy policies (or lack thereof) were alarmingly exposed by the sweep. Organisations are therefore encouraged to ensure they adopt a transparent approach to their privacy practices to avoid scrutiny from GPEN’s ongoing privacy sweep efforts.

Court Ruling Reinforces The 'Right To Be Forgotten' On Social Media Sites

This post was written by Cynthia O'Donoghue.

The "right to be forgotten" is a hot topic of discussion in the context of imminent EU Data Protection Reform. Article 17 of the new EU General Data Protection Regulation will give data subjects the “right of erasure” to request that data controllers delete any personal data relating to them, and ensure there is no further dissemination of such data. Such requests will extend to third-party hosts where that data may have been sent, obliging deletion of any links, or copy or replication of that data. This is likely to be particularly onerous on Internet companies like Google, Facebook or YouTube, given the amount of personal data processed on these platforms.

A case in point was highlighted in a recent court case, McKeogh v John Doe 1 & Ors [2012]. In November 2011, a student evaded a taxi fare in Dublin. The taxi driver took a video of the culprit and posted it on YouTube, asking for help identifying him. This led to Mr McKeogh being mistakenly identified as the perpetrator. A Facebook page was created, the video went viral, and a campaign of abuse began. In the words of Mr Justice Michael Peart, this social media storm led Mr McKeogh to suffer a “miscellany of the most vile, crude obscene and obnoxious comments” wrongfully condemning him as the culprit.

In court, Mr McKeogh proved his innocence beyond all reasonable doubt: he was not even in the country at the time of the crime, having been studying in Japan. However, Mr McKeogh was refused an order prohibiting the press from publishing the defamatory material. Furthermore, YouTube, Google and Facebook failed to respond to requests to remove the video and related links. Therefore, despite clearing his name in court, the allegations and adverse comments remained online and in the media, tarnishing his reputation and future career prospects. In a judgment given in the High Court in Ireland on 16 May 2013, Mr Justice Michael Peart commented that damages could not sufficiently compensate Mr McKeogh for the distress suffered, which would continue for so long as the material remained online. As a result, a mandatory injunction was granted ordering YouTube, Google and Facebook to take down the offending material within 14 days.

This judgment has the potential to open the floodgates and set a precedent to justify further requests by others objecting to videos posted about them on social media.

FCC Creates Formal Opportunity To Be Heard on Two Aspects of Its Revised TCPA Rules Requiring Prior Express Written Consent

This post was written by Judith L. Harris.

On Friday, the Federal Communications Commission (FCC) released Public Notices seeking comment on two recently filed requests for guidance on different aspects of its February 2012 Report and Order creating enhanced compliance obligations under its Telephone Consumer Protection Act (TCPA) rules. Both requests relate to that aspect of the Order that requires prior express written consent before placing a telemarketing call/sending a telemarketing text, or leaving a pre-recorded or automated telemarketing voice message to a consumer’s mobile device using an auto-dialer.

The first of the FCC’s Notices (DA 13-2118) relates to a Petition for Declaratory Ruling filed by the Coalition of Mobile Engagement Providers seeking clarification that the revised TCPA rules that took effect October 16, 2013, do not “nullify those written express consents already provided by consumers before that date.” The second Notice (DA 13-2119) concerns a Petition for Forbearance filed by the Direct Marketing Association (DMA) asking that the FCC not enforce the portion of its revised rules requiring that, in seeking a customer’s prior express written consent to contact him or her for telemarketing purposes using an auto-dialer or with a prerecorded message, the customer be specifically informed that sales are not conditioned on the customer giving the requested consent. The DMA argues that, in releasing the new rules, the FCC explained that its primary goal was to make its rules consistent with the Federal Trade Commission’s (FTC’s) Telephone Sales Rule (TSR), and yet the FCC’s requirement that a marketer affirmatively disclose that it is not conditioning sale on the written agreement departs from what the FTC’s rules require. The DMA therefore asks that the FCC forbear from enforcing that aspect of its revised rules with regard to existing written agreements.

Comments on both requests are due December 2 and Reply Comments are due December 17. We urge those of you who are concerned about various aspects of the TCPA’s revised rules regarding written consent to make your voices heard. The FCC is a political agency at heart that is attentive to the public and to Congressional will. 

Third Circuit Says Warrants Required for GPS Tracking

This post was written by Mark S. Melodia and Frederick Lah.

Earlier this month, the Third Circuit held in U.S. v. Katzin that a search warrant is required before the government may use a GPS tracking device. Katzin marks the first time a federal appellate court has ruled on the need for a warrant with respect to GPS trackers. The Third Circuit’s ruling is the latest development in an ever-evolving body of case law on whether warrants are required for location data collected from devices like GPS trackers and cell phones.

In Katzin, the FBI, without a search warrant, attached a “slap-on” GPS tracker to the suspect's van following a series of pharmacy burglaries. Using the results yielded from the GPS tracker, the FBI was able to eventually connect the suspect's location with the latest burglarized pharmacy. The suspect, along with his accomplices, moved to suppress the evidence discovered in the van. The government opposed the motions by arguing, inter alia, that a warrant was not required to use the GPS device based on the automobile exception to the warrant requirement. In rejecting the government's argument, the Third Circuit held that:

"The key distinction in this case is the type of search at issue. While the Supreme Court has stated that the automobile exception permits a search that is 'no broader and no narrower than a magistrate could legitimately authorize by warrant' [citation omitted], the search is still limited to a discrete moment in time... Attaching and monitoring a GPS tracker is different: It creates a continuous police presence for the purpose of discovering evidence that may come into existence and/or be placed within the vehicle at some point in the future."

The Third Circuit also rejected the government's argument that the good faith exception to the exclusionary rule should apply even if the search was unconstitutional. Other federal appellate courts have held that precedent involving the warrantless use of beepers sanctioned the warrantless use of GPS tracking, but the Third Circuit expressly disagreed with these courts based on how vastly different GPS trackers are from beepers.

Katzin follows the Supreme Court's January 2012 landmark ruling, U.S. v. Jones, where the Supreme Court held that the government's use of a GPS device constitutes a search under the Fourth Amendment. The Jones holding left unresolved the issue of whether the warrantless use of a GPS would be unlawful. Over the past few years, courts across the country have weighed in on the issue of whether warrants are required for the collection of location data from GPS trackers and cell phones. For example, in July 2013, the Fifth Circuit ruled that no warrant is needed for the collection of cell phone location data. This holding was in direct conflict with the New Jersey Supreme Court's ruling that warrants are required for cell phone location data, which we previously analyzed.

Katzin has the potential to affect companies that record location data about their consumers and employees. While the holding appears to be limited to the use of GPS trackers, companies should take steps to ensure that their employees understand the warrant requirement if and when law enforcement seeks this information. Any changes to a company’s policies and procedures should be done while keeping in mind the holdings from the Fifth Circuit and the New Jersey Supreme Court regarding cell phone location data. As courts across the country and Congress continue to grapple with these issues, we will continue to monitor this situation closely.

Third-Party Relationships: The OCC Gets Serious In Its New Bulletin

This post was written by Timothy J. Nagle.

Yesterday, the Office of the Comptroller of the Currency issued OCC Bulletin 2013-29 on Third-Party Relationships. The document rescinds OCC Bulletin 2001-47 and OCC Advisory Letter 2000-9, both of which had served as the basis for supplier management practices and inspections for many years. It is much more expansive than CFPB Bulletin 2012-03 (“Service Providers”), but the two should be read as complementary. With this new Bulletin, the OCC maintains the core elements of its guidance regarding the processes and risk-management principles by which banks contract with and supervise third parties, including merchant payment processing services, joint ventures, and services provided by affiliates or subsidiaries. However, the tone, level of prescription, and escalation of responsibility to the Board of Directors suggest a more active role by regulators.

As with prior guidance, this Bulletin describes an effective risk-management process and life cycle, specifies appropriate contract terms, and allocates oversight and accountability within the financial institution. But there are new requirements and admonitions, highlighted by the statement in the discussion of supervisory review of third-party relationships that a bank’s failure to have an effective third-party risk management process “…may be an unsafe and unsound banking practice.” Compliance and audit executives, start your engines. To ensure they don’t feel left out, the Bulletin has a special note for the boards and management of Community Banks, advising them to be certain that the bank has risk-management practices in place to manage the risks presented by the use of vendors for critical activities. This focus on third-party relationships that involve critical activities continues with the requirement that the board of directors approve the plan for managing the vendor and the negotiated contract with the third party.

Other items of interest in the guidance include the note that supervised banks that provide services to other supervised banks will be held to the standards described in the Bulletin, and an expectation that a bank will conduct a due diligence examination (possibly including a site visit) before entering into a contract. This review will include a business experience and reputation evaluation, part of which is a reference check with industry organizations, the Better Business Bureau, Federal Trade Commission, state attorneys general and consumer affairs offices, SEC filings, and similar foreign authorities. The third party will be required to conduct periodic background checks on its senior management and employees, and have adequate succession plans and employee training programs to ensure compliance with policies and procedures. The Bulletin goes into great detail regarding appropriate contract provisions, such as right to audit and require remediation, compliance with a broad range of laws and regulations, and whether the contract contains fees or incentives that could present undue risk to either party. It contemplates joint exercise of disaster recovery and incident management plans “involving unauthorized intrusions or other breaches in confidentiality and integrity.” As stated previously, senior management should obtain board approval of any contract involving critical activities. Finally, the default and termination provisions of the contract with a third party must allow the bank to terminate “in the event that the OCC formally directs the bank to terminate the relationship.”

With the issuance of this Bulletin, any financial institution that is regulated by the OCC will have to review its vendor management and third-party relationship processes, standard contract provisions, and senior management and board oversight responsibilities (including the possibility of appointing a senior manager to provide oversight of a third party involving critical activities). The Bulletin reflects a renewed focus by the OCC on joint ventures and other third-party relationships outside of the standard service provider context, risks of offshoring services, and the need for closer and ongoing management of third parties that support critical functions. It also emphasizes consideration of “concentration risk,” the impact on dual employees, and assessment of the complexity of the arrangement. A bank should expect to be asked about “the robust analytical process” it uses to assess and manage third-party relationships during a supervisory review. Similarly, any third party that provides services to financial institutions regulated by the OCC, especially those involved in critical activities, should expect to be presented with more stringent and intrusive contract terms, and be prepared to undergo an audit by this regulator.

Does SB 568, California's New 'Eraser Button' law, apply to you?

This post was written by Lisa B. Kim, Joshua B. Marker, and Paul Cho.

One of the bills we have been following since May has recently cleared the governor’s desk and been signed into law. SB 568, now popularly known as the “Eraser Button” law, adds two significant, privacy-related requirements for operators of websites, online services, and mobile apps directed toward users under the age of 18. More specifically, this law applies to operators whose products and services are directed toward minors (defined as under age 18), and those who have actual knowledge that their products and services are being used by minors. Unless an exception to the new law applies, these operators will be required by law to: (1) notify minors of their right to remove posted content (whether on their own or by the operator upon request), and (2) provide instructions on how to do so.

The new law also prevents certain operators from marketing or advertising prohibited items (alcoholic beverages, firearms or handguns, dangerous fireworks, etc.) to minors. Additionally, operators who fall within the scope of the new law will be prevented from providing the personal information of minors to third parties for the purpose of marketing or advertising prohibited items. Finally, operators with products and services directed toward minors will also be required to provide notice to any third-party advertising services, who will face the same restrictions as to what they can advertise through that particular operator.

The new law applies to operators of websites, online services, and mobile apps everywhere, as long as their website, product, or service is visited or used by a California resident. It goes into effect January 1, 2015. While there is no specific private cause of action or statutory penalty set forth in the bill, violations will likely be enforced in civil lawsuits by the government and private parties under California’s unfair competition law.

Thus, anyone who operates a website or mobile application that reaches California residents should answer the questions listed in the attached flow chart to determine whether this new law applies to them, and to what extent. Also included in the attachment is a list of what needs to be done to comply with the law.

Progress for European Data Protection Reform Halted

This post was written by Cynthia O'Donoghue.

We recently reported on the excitement surrounding the breakthrough vote by European Parliament on 21 October which put the delayed overhaul of European data protection rules back on track.

Following this landmark vote, Peter Hustinx, European Data Protection Supervisor, issued a press release, stating, “It is essential that the European Union acts quickly so that political agreement is reached before the European Parliament Elections. We look to the Council to maintain the momentum with equal vigour and purpose.” Peter Schaar, the German Federal Commissioner for Data Protection and Freedom of Information, similarly commented in a press release, “I hope that the governments of the 28 EU Member States represented in the Council conceive this as an opportunity to decide rapidly the reform of data protection.... [T]he success of the reform should be a top priority!”

However, section 8 of the European Council Conclusions of 24/25 October has indicated that the impetus for reform has yet again been halted in its tracks, with the anticipated date for adoption of the EU General Data Protection Framework pushed back from Spring 2014 to 2015.

Despite this delay, commentators, including UK Prime Minister David Cameron, have indicated that ‘more haste, less speed’ is the best approach to take. This is in light of dangerous loopholes exposed in the Regulation that could render it ineffective if adopted in its current form. Concerns focus on:

  • Vague definitions, such as ‘pseudonymous data’ and ‘legitimate interest’, that could allow companies to exonerate themselves from compliance with legislation
  • Extended circumstances where organisations could process data unrestricted without obtaining consent
  • Soft-touch approach to rules surrounding data profiling and corporate tracking
  • Anti-data transfer clause set to eliminate U.S. Safe Harbor, preventing U.S.-based companies from indiscriminately passing on the personal details of EU citizens to U.S. law enforcement and intelligence agencies, and significantly restricting EU-U.S. data transfer

Controversially, plenary debate and vote on the legislative package has been bypassed in favour of secret tripartite negotiations, conducted behind closed doors between the European Commission, the European Parliament and the Council of Ministers. While avoiding public commentary and criticism may speed up the legislative process, commentators such as Miriam Artino of Quadrature du Net have described this tactic as an “obscure hijacking of the democratic debate,” adding that “the regrettable choice to enter into secret tripartite negotiation could significantly weaken the Regulation.” The absence of transparent debate raises alarm that many of the positive provisions of the Regulation achieved so far could be watered down even further in a hurried attempt to meet the promise of adopting the legislative package before the end of the current legislature.

What Constitutes an Autodialer? The Debate Continues

This post was written by Judith L. Harris and Jack Gindi.

A recent state court decision in California, and before it, a district court decision in Alabama, both found that equipment used to facilitate telephone communications must have the current capacity to randomly or sequentially generate telephone numbers in order to be considered an “automatic telephone dialing system.” Could these holdings signal a trend towards a more practical approach to TCPA enforcement or are they simply aberrations?

Click here to read the issued Client Alert.

AvMed Agrees to Pay $3 Million to Data Security Breach Class Members; Size of Payments Linked to Years as Customer

This post was written by Mark S. Melodia, Paul Bond and Frederick Lah.

Earlier this week, a data breach class action brought against health insurance provider AvMed, Inc. came one step closer to resolution when plaintiffs filed their unopposed motion for preliminary approval of the class action settlement. The parties filed a joint notice of settlement back in September, but details were not provided until now.

Plaintiffs in this case alleged that in December 2009, two unencrypted laptops containing personal information from 1.2 million customers – information including their names, addresses, Social Security numbers, and medical health information – were stolen from a conference room. After three years of litigation, which included an appeal before the 11th Circuit and multiple mediation attempts, the parties have agreed to a compromise to resolve the class action. Under the terms of the proposed settlement, AvMed agreed to create a $3 million settlement fund from which members can make claims for $10 for each year that they bought insurance (subject to a cap of $30). Class members who have experienced identity theft are eligible to make additional claims to recover their losses. In addition, AvMed agreed to implement increased security measures, such as mandatory security awareness training and installing encryption and other security protocols on its laptops.

While class action settlements are contractual in nature and only binding upon the parties that enter into them, they may still influence negotiations between other similarly situated parties, as well as the courts reviewing a settlement’s overall fairness and reasonableness. Settlements for data breach class actions have traditionally not extended payments to class members who have not experienced any fraud or identity theft. Here, though, that is exactly what the sides agreed to: payments will be made to all class members who purchased insurance, even absent any fraud or identity theft. Plaintiffs in data breach and theft cases have long sought (without success) to advance the idea that the value of the underlying good or service involved is somehow degraded by the security incident.

UK Office of Fair Trading Consults on Consumer Protection Principles for Children's Online Games and Apps

This post was written by Cynthia O'Donoghue and Kate Brimsted.

With more than six million apps currently in existence, the ‘appification’ of society is increasingly a topic of discussion, and certainly it was prominent at the 35th International Conference of Data Protection and Privacy Commissioners in Warsaw in September. Apps often collect large amounts of personal data and therefore have significant potential privacy implications. Young children are particularly vulnerable in this respect, as they can be captivated by online and app-based games and less aware of potential risks.

In September, the UK’s Office of Fair Trading (OFT) reported on its investigation into the ways in which online and app-based games encourage children to make purchases. It is now consulting on proposed principles that the online games industry will be expected to adhere to in order to achieve compliance with consumer protection laws, in particular with regard to children, who often constitute the “average consumer” in this context. The consultation closes on 21 November 2013.

The OFT investigation scrutinised the commercial practices of 38 web and app-based games popular with children and identified the following areas of concern:

  • A lack of transparent, accurate, up-front information relating to costs available prior to the decision to play or download a game
  • Misleading practices, including failing to separately identify commercial intent interspersed in game play
  • Exploitation of children’s inexperience, vulnerability and credulity, including aggressive and manipulative commercial practices
  • Direct exhortations to children to buy advertised products
  • Payments taken from account holders without their knowledge, express authorisation or informed consent

To resolve these concerns, the OFT has proposed the following principles, which are intended to apply industry-wide:

  1. Information about all costs associated with a game should be provided clearly, accurately and prominently up-front
  2. All information about the game necessary for the average consumer to make an informed decision to play, download, sign up or purchase a game should be clear, accurate, prominent and provided up-front 
  3. Information about the business should be clear, accurate, prominent and provided up-front. It should be clear to the consumer whom to contact in the case of queries or complaints and the business should be capable of being contacted rapidly and communicated with in a direct and effective manner
  4. The commercial intent of any in-game promotion of paid-for content or promotion of any other product or services should be clear and distinguishable from game play
  5. A game should not mislead consumers by giving the false impression that payments are required, or are an integral part of the way the game is played, if that is not the case
  6. Games should not include practices that are aggressive, or which otherwise have the potential to exploit a child’s inherent inexperience, vulnerability or credulity. The younger a child is, the greater the impact such practices are likely to have. 
  7. A game should not include direct exhortations to children to make a purchase or persuade others to purchase for them 
  8. Payments should not be taken from the payment account holder unless authorised. Separate informed consent should be required for in-game payments (i.e. payments additional to any one-off payment authorised at the outset to purchase, download or sign up to the game). The amount to be debited should be made clear. Consent should not be assumed, e.g., through opt-out provisions; the consumer should positively indicate his/her informed consent.

The OFT has launched a consultation on the proposed principles. All responses must be received by 5 p.m. on 21 November 2013 by email to: childrensonlinegames@oft.gsi.gov.uk or by post to: Children’s Online Games Consultation, Office of Fair Trading, Fleetbank House, 2-6 Salisbury Square, London EC4Y 8JX. The final version of the principles is due to be published by February 2014, with a grace period until 1 April 2014, after which enforcement action may be taken against businesses likely to be in breach of consumer protection law.

Breakthrough Vote by European Parliament Sets Delayed Data Protection Overhaul Back on Track

This post was written by Cynthia O'Donoghue and Kate Brimsted.

In January 2012, the European Commission proposed a legislative package to update the data protection principles enshrined in the 1995 Data Protection Directive (Directive 95/46/EC). The policy objectives of the European Commission set out an ambitious blueprint for a more cohesive EU data protection framework backed by stronger enforcement. Central to facilitating this were proposals for a General Data Protection Regulation (the Regulation) and a Data Protection Directive (the Directive) covering law enforcement.

On 21 October 2013, the Civil Liberties, Justice and Home Affairs Committee (LIBE) met in Strasbourg for the much-anticipated vote on the proposed legislative package. Voting had previously been delayed because of the overwhelming number of contested areas and 3,000+ amendments in the draft Regulation. (See our previous blogs on the first European Parliament vote delay and subsequent vote pushback.) See a video of the vote which marked the landmark moment (circa 19:00 hrs) when the negotiating mandates for the Regulation and for the Directive were adopted.

In a press release from the European Parliament rapporteur for the Regulation, Jan Philipp Albrecht commented, “The vote is a breakthrough for data protection rules in Europe, ensuring that they are up to the challenges of the digital age....The ball is now in the court of member state governments to agree a position and start negotiations so that we can respond to citizens’ interests and deliver an urgently needed update of EU data protection rules without delay.”

The key points in relation to the Regulation are:

  • Sanctions – companies in breach of data protection rules would face fines of up to €100 million ($138 million) or 5% of annual worldwide turnover, whichever is greater (compared with €1 million or 2% proposed by the Commission).
  • Consent – this will require an explicit indication (by statement or affirmative action). Withdrawing consent must be as easy as giving it. The execution of a contract or provision of a service may not be made conditional on consent to processing data that is not strictly needed for those purposes. 
  • Data transfers to non-EEA countries look set to become more difficult – such as, if a third country requests a company (e.g., a search engine, social network or cloud provider) to disclose personal information processed in the EU, the firm would have to seek authorisation from the national data protection authority before transferring any data. The company would also have to inform the person of such a request. The more restrictive transfer provisions have the potential to put Safe Harbor at risk, and appear to be a response to the revelations about mass surveillance of EU citizens published in media in June.
  • Pseudonymous data is a new concept, defined as personal data that cannot be attributed to a specific individual without the use of additional information, where such information is kept separately and is subject to additional measures. The use of pseudonymous data, e.g., for profiling, is subject to a ‘lighter’ regime. 
  • Right to Erasure – the much-vaunted “right to be forgotten” has been reined in to a strengthened (though qualified) right of deletion. 
  • Profiling – this will be limited to circumstances where the data subject has consented, where required by law or in pursuance of a contract. Data subjects have the right to object and should not be subject to discrimination as a result. 
  • Lifecycle Data Protection Management – this broader concept has been proposed, aspects of which require conducting privacy impact assessments and the appointment of Data Protection Officers based on the number of individuals whose data are processed (not organisation size).

The bulk of the amendments approved by the LIBE Committee can be found here for the Regulation: Articles 1-29 and Articles 30-91, and here for the Directive. Negotiations are now scheduled to commence between the European Parliament and national governments in Council. The aim is to reach an agreement on this major legislative reform before the May 2014 Elections.

UK Department of Health Response Supports Recommendations Following Caldicott Review

This post was written by Cynthia O'Donoghue.

The UK Department of Health has published a response supporting the Caldicott Review findings on information sharing of patient data across the health and care system.

The Department of Health supported each of the information sharing findings:

  • NHS England’s Information Strategy granting patients free access to their electronic care records and providing virtual consultations across health care offerings by 2015
  • Access to family records for a child’s social care remains in debate in light of the need to balance family privacy against concerns for child safety
  • Confidential patient data will be shared across the entire health care team (including social care workers) who have a direct relationship with the patient
  • The Information Governance Sub-Group is due to provide a standardized data sharing agreement to ensure patient data is processed fairly
  • Patients will be fully informed that their medical records could be anonymised and processed by data linking for research on health care improvement unless consent is withheld
  • NHS England will develop a standard system to manage patient consent
  • In the event of a data breach, patients will be given a full explanation of the cause and remedial action taken
  • All data breaches will be assessed against a standardized severity scale, reported and published in Annual Governance Statements
  • Caldicott Guardians should be appointed to ensure compliance
  • Information governance will be integrated into curricula and considered a core competency in undergraduate training

The Health and Social Care Information Centre has published a guide to confidentiality in health and social care with accompanying references which sets out five rules to help organisations assess how to share information consistent with a patient’s best interests:

  1. Information should be treated confidentially and respectfully
  2. Limit the sharing of information to that necessary for the effective care of the patient
  3. Information shared for the benefit of the community must be anonymised unless informed consent of the individual is given or otherwise required by law
  4. An individual’s right to object to sharing confidential information should be respected
  5. Policies, procedures and systems must be in place to ensure patient confidentiality

The Department encourages providers to audit and model their information sharing practices against the National Institute for Health and Care Excellence (NICE) clinical guidelines and use the Information Governance Toolkit to assess compliance with the Department’s governance standards and policies.

U.S. Court Permits Service by Facebook Message Against UK Defendant

This post was written by Cynthia O'Donoghue.

A U.S. District court ruled that a UK defendant may be served via Facebook. The U.S. District Court for the Eastern District of Michigan in Woodward v. Chetvertakov, E.D. Mich., No. 2:13-cv-11943-GER-MKM, ruled it permissible to serve a foreign defendant website operator based in the UK via email and Facebook message, where it was not possible to otherwise locate a physical address for service from the registered address of the defendant’s website.

In this case, the plaintiffs Orrin Woodward and Chris Brady submitted a motion for substituted service under the Federal Rules of Civil Procedure on the grounds that they were unable to serve the defendant Evgeniy Chetvertakov, the website operator of Global Gurus Inc., as the registered address listed on the Global Gurus website was invalid. The plaintiffs were able to provide evidence that the email address listed in the domain registration information for the Global Gurus website was accepting email messages. The plaintiffs were able to further prove that this same email address was listed under the contact information of Global Gurus’ Facebook account. The court therefore granted plaintiffs’ motion for substituted service on Chetvertakov via email and Facebook message. Service was deemed to be complete upon filing by the plaintiffs of a return of service indicating that the email and Facebook message had been sent.

Not only does U.S. law permit service of process via alternative means where it can be demonstrated to be a reasonable way to inform parties of a pending action and provide them with an opportunity to respond, but the Hague Service Convention also permits service through alternative means provided that the destination state does not object to those means. Service by Facebook message and email does not violate any international agreement relating to service to which the UK is a party. Therefore, based on the evidence presented by the plaintiffs and the absence of any objection by the UK to such a method of service, the court held that service by email and Facebook message was due and proper.

The ruling followed a case in the U.S. District Court for the Southern District of New York, FTC v. PCCare247 Inc., No. 12 Civ. 7189 (PAE) (S.D.N.Y. Mar. 7, 2013), where the court permitted service of documents on five India-based defendants. However, similar requests to serve via Facebook and email have been denied, including in FTC v. Pecon Software Ltd., No. 1:12-cv-07186-PAE (S.D.N.Y. Aug. 7, 2013), where the same judge who permitted service in PCCare247 denied it.

California Courts Limit Liability for Lost Medical Information

This post was written by Steven Boranian and Joshua B. Marker.

A panel of the Court of Appeal limited the private right of action under the California Confidentiality of Medical Information Act, Civil Code § 56 et seq. (“CMIA”), holding that alleging negligent maintenance and loss of possession of confidential information is insufficient to state a cause of action.

In the putative class action, Regents of the University of California v. Superior Court, an encrypted hard drive containing confidential patient information was stolen in a home invasion robbery. The plaintiff alleged that the hospital had negligently maintained information in violation of the CMIA, but did not allege that there was any unauthorized access or viewing of her confidential information.

The court held that plaintiff had failed to state a claim, ruling that there is a private right of action for negligent maintenance, “only when such negligence results in unauthorized or wrongful access to the information.” As the plaintiff did not allege such unauthorized access, she could not state a claim.

Because of the availability of administrative penalties, violations of the CMIA carry potentially significant costs. This ruling is significant as it will possibly limit the exposure for “every provider of health care, health care service plan, pharmaceutical company, or contractor” who has lost confidential patient information, such as by having a laptop or hard drive accidentally lost or stolen.

International Economic Organisation OECD Publishes Revised Guidelines on the Protection of Privacy and Transborder Flows of Personal Data

This post was written by Cynthia O'Donoghue.

The international free flow of information has become fundamental in a data-driven economy. Yet the increasingly extensive use and movement of personal data creates greater privacy risks for an individual’s digital data trail; and while some 99 countries worldwide have some form of data privacy laws, the legal disparities can hinder transborder data flow. Acknowledging the need for a unified standard, the Organisation for Economic Co-Operation and Development (OECD) has published a revised version of the 1980 Guidelines on the ‘protection of privacy and transborder flows of personal data.’

The original guidelines informed and became the basis for many countries' data protection laws, including those in Europe. The revised version leaves the original privacy principles, which are widely familiar, fundamentally unchanged:

  • Fair, lawful and limited collection of personal data obtained with the knowledge and consent of the individual
  • Data should be relevant to the purposes for which it is collected, complete, and kept up to date
  • Use of data for new purposes must be compatible with the original purpose; otherwise, new uses or disclosures require consent
  • Use of reasonable security safeguards to protect data and accountability of any data controller
  • Individual right of access to data held, and the right to have data erased, rectified or amended

Data controller accountability is reinforced in the revised guidelines, regardless of where the data is located and regardless of whether it remains within the controller’s own operations, is handled by its agents, or is transferred to another data controller. The OECD recommends the use of tailored privacy management programs and privacy impact assessments to manage the risk of data breach. The OECD also encourages contractual provisions requiring compliance with a data controller’s privacy policy, notification protocols in the event of a security breach, and response plans for data breaches and data subject inquiries.

The OECD guidelines suggest that to manage global privacy risks, there must be improved interoperability, with national strategies between states co-ordinated at government level, and cross-border co-operation between privacy enforcement authorities.

Singapore Personal Data Protection Commission Issues Advisory Guidelines on Key Concepts in the Personal Data Protection Act 2012

This post was written by Cynthia O'Donoghue.

In February this year, the Singapore Personal Data Protection Commission (PDPC) launched a public consultation on proposed regulations and two sets of guidelines on key concepts and selected topics under the Personal Data Protection Act 2012 (No. 26 of 2012) (the Act). Following that consultation, the PDPC issued in September the final advisory guidelines on the key concepts. These final guidelines address the nine obligations that form the foundation for provisions under the Act.

The essential recommendations closely follow the Act and emphasise the main data protection principles, including that data may only be processed for specified purposes and that individuals must be provided with advance notice of the purposes for collection and what data may be optional or required. The advance notification requirement underpins the need for valid consent under the Act. Data should be accurate, complete and up to date. It is possible to presume this is the case when provided directly from the individual, but a written declaration should be requested as a safeguard. Organisations must keep data secure and undertake risk assessments to prevent any data breaches, and should delete data in line with legal or industry standards to prevent further risk of loss. The PDPC also recommends that organisations appoint a Data Protection Officer and acknowledges that the DPO need not be an employee of that organisation nor based in Singapore.

The guidelines detail the methods organisations must undertake in order to obtain valid consent, in particular:

  • Consent may not be obtained through deceptive or misleading practices or be a condition of a product or service
  • Third party consent on behalf of an individual will only be valid where collection, use or disclosure is necessary in an emergency and where the data is publicly available
  • Implied consent will only be valid where notification has been provided and the individual’s failure to opt out is not due to an inability to give consent or a lack of awareness that consent is required
  • Implied consent will be valid where an individual voluntarily provides their personal data for a known purpose or where information is generally available and can be obtained by reasonably expected means at a location or an event that is open to the public
  • The consequences for withdrawing consent should be highlighted, and where consent is revoked, organisations should anonymise the data to render it irretrievable

The PDPC is yet to issue guidelines related to the topics on which it consulted earlier this year. Additional guidance is therefore anticipated which should shed further light on interpretation of the Act before it comes into force in July 2014.

Office of the Australian Information Commissioner (OAIC) Publishes Draft Guidelines Interpreting New Privacy Principles

This post was written by Cynthia O'Donoghue.

The Office of the Australian Information Commissioner (OAIC) has published initial draft guidelines which provide a good indication of how to interpret the first five of the thirteen Australian Privacy Principles (APPs) that form the foundation of the Privacy Amendment (Enhancing Privacy Protection) Act 2012, which will become effective from 12 March 2014.

  1. APP 1: Open and Transparent Management of Personal Information
    APP entities will be deemed accountable for taking proactive steps to manage risks to data at every stage from collection, use and storage to destruction. Emphasis is placed on the importance of IT security systems, privacy impact assessments for new projects and procedures for reporting breaches. Also important are easily accessible and up-to-date privacy policies.
  2. APP 2: Anonymity and Pseudonymity
    It is anticipated that individuals will have the right to deal with organisations where they cannot be identified from the data they provide, by opting not to provide personal information, or by providing a different name, term or descriptor. The aim is to give individuals greater control over their personal information and is seen as a method of assisting organisations with reducing their compliance burden. Organisations would need to prominently state when it is not necessary for an individual to provide personal information.
  3. APP 3: Collection of Solicited Personal Information
    APP entities will only be able to solicit information collected from other entities which is reasonably necessary for, or directly related to, the entity’s functions or activities. There will also be an additional obligation to seek explicit direct consent from individuals when soliciting sensitive personal data, except: (a) where permitted by law; (b) where a permitted general situation exists; (c) where a permitted health situation exists; (d) for an enforcement activity; or (e) by a non-profit organisation.
  4. APP 4: Dealing with Unsolicited Personal Information
    This principle aims to address how organisations should deal with data which they have not actively sought to collect but which falls within their control, such as information received that is surplus to their function. If the data could not have been collected under APP 3, then it must be either destroyed or de-identified.
  5. APP 5: Notification of the Collection of Personal Information
    Before or at the time of collection of any information, organisations will be expected to ensure that individuals are fully informed as to the APP entity’s identity, the purpose for collection, the consequences if that information is not collected and any intended disclosure. 

Further draft guidelines are expected to be released over the next few weeks and will cover the remaining APPs, which deal with topics including direct marketing, cross-border disclosure of personal information, and data security.

Updates from Dutch Data Protection Authority

This post was written by Cynthia O'Donoghue.

The Dutch Data Protection Authority, the College Bescherming Persoonsgegevens (CBP), has issued a report on data collection by smart TVs and issued new protocols in relation to financial fraud prevention and the use of black lists by hotels.

The CBP issued a report on unauthorized data collection by smart televisions following an investigation of TP Vision Netherlands B.V. (TP Vision). The report highlights that adding web functionality to televisions has resulted in a significant shift towards two-way data traffic, allowing manufacturers to gather information about viewers, including their online viewing habits, favourite programmes, and apps. In violation of the Data Protection Act, TP Vision failed to demonstrate that it had fully informed viewers about the extent of, and purposes for which, personal data was being collected by the smart televisions, nor had it sought any form of legal and freely given consent from viewers. The CBP condemned TP Vision’s inadequate Terms of Use and Privacy and Cookies Policy and further scrutinised the lack of a valid Data Processing Agreement with Google Analytics. As a result of the breaches outlined above, TP Vision was forced to terminate various contracts with analytics providers.

As an increasing number of Dutch financial institutions were joining together to tackle fraud, the CBP approved an amendment to a protocol followed by the financial services industry known as the Protocol Incident Warning Financial Institutions, setting the parameters for the ability of financial institutions to process criminal data of individuals relating to fraudulent activities.

In a similar vein, the CBP has approved a new protocol permitting hotel members of the Hotel Warning Network (HWN) to process guest data for the purposes of creating a blacklist. To comply with the Dutch Data Protection Act, guests should be informed in writing before their details are entered in the register and given an opportunity to appeal or object to their inclusion. The protocol addresses how long blacklist data may be retained and the security measures related to accessing the data and keeping it confidential. The CBP has approved the protocol as an alternative to the use of porters and CCTV, which it acknowledged as insufficient to prevent criminal damage estimated to cost the hotel industry €115 million a year.

French Data Protection Authority CNIL Announces New Online Notification Procedure for Reporting Data Breaches

This post was written by Cynthia O'Donoghue and Daniel Kadar.

France’s data protection authority, the Commission Nationale De L’informatique et Des Libertés (CNIL), released a new mandatory online notification procedure for French electronic communications service providers (Providers) to rapidly report data breaches to CNIL in compliance with new EC Regulation (No.611/2013) (the Regulation).

Any data breach must be reported to CNIL via a new standardized online notification form in accordance with Article 2(4) of the Regulation. The notification must include all details set out in Annex I of the Regulation and be made no later than 24 hours after the detection of the breach. Where full details cannot be provided, organisations must make an initial notification, with additional information provided no later than three days after the initial notification. Such additional notification must also be provided to the individual whose data was adversely affected by the breach.

Individuals need not be notified if the Provider can demonstrate that it has implemented security measures rendering that data unintelligible. The CNIL has two months to check the adequacy of any security measures, which may include encryption or data hashing/masking. Under existing French law, Providers must maintain a registry of data breaches which CNIL is entitled to audit. The CNIL may issue penalties of up to €300,000, and there is the potential for up to five years’ imprisonment for failing to comply with the data breach notification requirement.

UK Information Commissioner's Office issues guidance on notification procedure for data security breaches

This post was written by Cynthia O'Donoghue.

The UK Information Commissioner’s Office (ICO) published new guidance following the issuance of EC Regulation (No.611/2013) (The Notification Regulation) (see our blog), which aims to harmonise EU data breach notification procedure for ISPs and telecom providers.

The ICO’s guidance seeks to interpret the Notification Regulation in line with Privacy and Electronic Communications (EC Directive) (Amendment) Regulations 2011 (PECR). PECR requires communications service providers to notify security breaches involving personal data without undue delay.

The ICO has provided a checklist to help organisations assess whether they are communications service providers. Such providers must notify the ICO within 24 hours of detecting a breach, and in any event provide full details no later than three days after the breach. Notifications must address the details set out in Annex I of the Notification Regulation, and the ICO suggests that, if the breach involved the loss of sensitive personal data, individuals should be notified of full details as per Annex II of the Notification Regulation. The ICO also encourages service providers to submit a monthly log online, or by email, to keep the ICO well informed.

While the obligation to notify the ICO currently only applies to service providers within the definition of PECR, the ICO has hinted that in the future, the obligation may extend to all data controllers.

UK Data Protection Authority publishes new guidance and checklist on direct marketing

This post was written by Cynthia O'Donoghue.

UK data protection authority, the Information Commissioner’s Office (ICO), has published new guidance, an accompanying checklist, and an at-a-glance guide to help organisations understand the rules governing direct marketing under the Data Protection Act 1998 (DPA) and the Privacy and Electronic Communications (EC Directive) (Amendment) Regulations 2011 (PECR).

The ICO guidance clarifies what constitutes direct marketing under the DPA, interpreting it as any unsolicited commercial communication targeted at a named individual, including material promoting an organisation’s aims and ideals, market research conducted for marketing purposes, and communications aimed at generating marketing leads.

Consent is a key concept in the context of direct marketing. The ICO advises that the use of ‘opt-in’ boxes is best practice for seeking explicit consent from individuals to direct marketing from a specific organisation. The ICO also recommends keeping clear records of the date, method and purpose for which consent was given, in the event of an audit. Due diligence should be undertaken when relying on third-party indirect consent from bought-in mailing lists, to ensure adequate consent has been obtained and can be relied upon. The ICO also stressed individuals’ right to opt out at any time, with the organisation then obliged to cease further direct marketing within 28 days for electronic communications, or two months for postal communications.

The checklist addresses the various methods of direct marketing by outlining the different types of consent:

  • Postal Mail: Individuals may be contacted unless registered on the Mail Preference Service (MPS)
  • Unsolicited Calls: Individuals may be contacted unless on the Telephone Preference Service (TPS) or Corporate Telephone Preference Service (CTPS)
  • Automated Calls: Only those individuals who have given specific consent may be contacted
  • Fax: Individuals and organisations may be contacted unless registered on the Fax Preference Service (FPS)
  • Texts, emails, electronic mail or voicemail: Individuals may only be contacted if they have given specific consent, unless they have exercised subsequent right to opt out

The ICO recently stepped up its enforcement of direct marketing activities that fall foul of the DPA or PECR, with fines against organisations totalling several hundred thousand pounds. Organisations should take heed of the ICO’s guidance to avoid facing monetary penalty notices of up to £500,000.
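The ICO's record-keeping advice (date, method and purpose of consent) and the opt-out cessation windows (28 days for electronic channels, two months for post) translate naturally into a small data model. The sketch below is a hypothetical illustration, not an ICO-specified format; all names are ours:

```python
import calendar
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ConsentRecord:
    """Minimal audit record per the ICO's advice: date, method, purpose."""
    subject: str
    obtained_on: date
    method: str   # e.g. "ticked opt-in box on signup form"
    purpose: str  # e.g. "email newsletter"

def cessation_deadline(channel: str, opted_out_on: date) -> date:
    """Latest date by which marketing must stop after an opt-out:
    28 days for electronic communications, two months for post."""
    if channel == "electronic":
        return opted_out_on + timedelta(days=28)
    if channel == "postal":
        # Add two calendar months, clamping the day to the month's length
        # (e.g. 31 December + 2 months -> 28 February).
        month = opted_out_on.month + 2
        year = opted_out_on.year + (month - 1) // 12
        month = (month - 1) % 12 + 1
        day = min(opted_out_on.day, calendar.monthrange(year, month)[1])
        return date(year, month, day)
    raise ValueError(f"unknown channel: {channel!r}")
```

The day-clamping in the postal branch matters at month ends: an opt-out received on 31 December would otherwise produce an invalid 31 February deadline.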

Moving Toward a 'One Stop Shop' Approach to EU Data Protection

This post was written by Cynthia O'Donoghue.

In January 2012, the European Commission presented a legislative package to update the core data protection principles enshrined in the 1995 Data Protection Directive (Directive 95/46/EC). The policy objectives of the European Commission set out an ambition to build a more cohesive EU data protection framework supported by stronger enforcement. Central to facilitating this were proposals for a General Data Protection Regulation (the Regulation) and a directive on protecting personal data.

On 7-8 October 2013, the Justice and Home Affairs Council (the Council) debated the European Commission’s proposals. In a press release, the Council showed its support for a ‘one-stop-shop’ for data protection compliance, rather than the existing 28-member state compliance approach, as well as a consistency mechanism, both of which form the core pillars of the proposals.

The Council suggested that consistency in the application of EU data protection rules could be achieved by entrusting central power with the European Data Protection Board, which would take over from the existing Art. 29 Working Party, to ensure a harmonised approach to the Data Protection Framework.

The one-stop-shop principle would create consistency for international organisations processing personal data in multiple member states through the appointment of a single competent authority to monitor the data controller’s activities across all member states. The relevant supervisory authority would be determined by the member state in which the data controller or processor has its main establishment. That supervisory authority would also be responsible for issuing a single supervisory decision in the context of enforcement. However, the Council’s debate reached an impasse over how such a decision should be reached. While some member states support further expert work toward a single supervisory authority model, others preferred a co-decision mechanism under which the main-establishment supervisory authority would act in close cooperation with local supervisory authorities. Either way, a one-stop-shop approach would undoubtedly be beneficial: it would ensure faster and more consistent application of decisions, offer legal certainty, and reduce the administrative and cost burden of complying with data protection rules for international organisations.

Juozas Bernatonis, Justice Minister of Lithuania, commented:

"I would like to say that the Council generally supports the principle that the draft regulation should provide for a 'one-stop-shop' mechanism in important cross-border cases in order to arrive at a single decision in respect of companies operating in several member states. The aim is to develop a simple, fast mechanism that would contribute to a more consistent application of the data protection rules in the EU, to ensure legal certainty and reduce the administrative burden. This is an important factor to enhance the cost-efficiency of the data protection rules for international business, thus contributing to the growth of the digital economy.”

Progress in moving toward a new General Data Protection Regulation is proving very slow, and despite the ‘vote’ by the Council having been scheduled for 7 October, the proposals are still not finalised.

Imminent Effective Date of FCC's Prior Express Written Consent Rule Under the TCPA

This post was written by Judith L. Harris and Jack Gindi.

As we have reported in the past, the FCC, in a February 2012 Order, revised its TCPA rules to, among other things, “require prior express written consent for all autodialed or prerecorded telemarketing calls to wireless numbers and residential lines.” In the Matter of Rules & Regulations Implementing the Tel. Consumer Prot. Act of 1991, 27 F.C.C. Rcd. 1830, 1831 (2012). This post is meant as a reminder that this requirement is effective October 16, 2013.

With respect to what form of written consent will be acceptable, the FCC, in its 2012 Order, concluded – consistent with the FTC’s Telemarketing Sales Rule – that “consent obtained in compliance with the E-SIGN Act will satisfy the requirements of [the FCC’s] revised rule, including permission obtained via an email, website form, text message, telephone keypress, or voice recording.” Id. at 1844.

Congress enacted the E-SIGN Act to facilitate the use of electronic records and signatures in interstate and foreign commerce. The E-SIGN Act grants legal effect, validity, and enforceability to “electronic records” and “electronic signatures.” An “electronic signature” is defined by the E-SIGN Act as “an electronic sound, symbol, or process attached to or logically associated with a contract or other record and executed or adopted by a person with the intent to sign the record.” 15 U.S.C. § 7006. The E-SIGN Act further defines an “electronic record” as “a contract or other record created, generated, sent, communicated, received, or stored by electronic means.” Id. Any form of consent that falls within either of these definitions will satisfy the TCPA’s new written consent requirement.

In its 2012 order, the FCC also defined the scope of disclosure that must be made in order to satisfy the new prior express written consent requirement under the TCPA. According to the FCC, consent “must be signed and be sufficient to show that the consumer: (1) received ‘clear and conspicuous disclosure’ of the consequences of providing the requested consent, i.e., that the consumer will receive future calls that deliver prerecorded messages by or on behalf of a specific seller; and (2) having received this information, agrees unambiguously to receive such calls at a telephone number the consumer designates. In addition, the written agreement must be obtained ‘without requiring, directly or indirectly, that the agreement be executed as a condition of purchasing any good or service.’” In the Matter of Rules & Regulations Implementing the Tel. Consumer Prot. Act of 1991, 27 F.C.C. Rcd. 1830, 1844 (2012).

As mentioned above, this new requirement will be effective for all telemarketing calls as of October 16, 2013, and everyone should be ready NOW with their implementation plans.

What a Difference a Year Makes; or Does It?

This post was written by Judith L. Harris.

Almost a year ago, we posted a client alert saying that it looked like the FCC might be getting ready to issue some guidance on what constitutes an automatic telephone dialing system (“auto-dialer”) under the TCPA. We were wrong; confusion has continued to reign; compliance has continued to be difficult; and plaintiff class action lawyers have only stepped up their exploitation of the term’s ambiguity.

Now, we again have some reason to hope that clarification might come soon (assuming that the government shutdown is short-lived!). Last year, we based our optimism on the Commission’s release of a flurry of Public Notices putting out for comment seven pending Requests for Declaratory Rulings related to the TCPA, the majority of which sought interpretation of how the TCPA’s definition of an auto-dialer applies in the variety of circumstances that have resulted from technological developments in the decade since the TCPA was enacted.

One of the most watched of those Requests was submitted by Communication Innovators. CI had petitioned in early June 2012 for a ruling that predictive dialers without the current capacity to generate and dial random or sequential numbers could be used to place non-telemarketing calls without prior express consent, and were not auto-dialers within the TCPA’s definition. In May, we even got word that a draft order on the CI request (and the auto-dialer issue) was “circulating” among the Commissioners for their consideration. But then, silence prevailed over the summer months.

On the evening of September 27, though, the FCC released an exchange of correspondence between several Members of Congress and the FCC Chair (and the Acting Chief of the Consumer and Governmental Affairs Bureau), in which the Commission repeated that “A draft order to resolve the [CI] Petition is under consideration by the Commission, and Communication Innovators has met with the staff recently to discuss the matter.”

The Congressional correspondence was dated June 19 and the FCC’s response, not released until September 28, was dated September 10. Could it be that the FCC is finally getting ready to publish an Order and, depending on the length of the government shut-down, that we might see something before October 16, the effective date of the FCC’s rules requiring prior written express consent before using an auto-dialer to make telemarketing calls to mobile phones (or leave pre-recorded telemarketing messages on residential landlines)?

While the CI petition is limited on its face to non-telemarketing calls, one would think that any clarification of the definition of an auto-dialer would be relevant to telemarketing and non-telemarketing calls alike. On the other hand, perhaps the FCC is considering the creation of a limited exception that would narrow the definition of an auto-dialer only in the case of non-telemarketing calls made to existing customers for the purpose of debt collection, or for providing information.

Guidance on the issue before October 16 would be most welcome, of course. But with recent reports that the FCC is operating with a staff of only 38 employees deemed “essential,” hope for any timely clarification is fading fast. All entities that do any telemarketing using auto-dialers (as most broadly defined) or pre-recorded messages, therefore, should make sure that they are prepared to meet the Commission’s new requirements in the next two weeks.

California Governor Signs 'Do Not Track' Disclosure Requirement; Commercial Website and Mobile App Operators Required To Disclose Whether They Honor DNT Requests

This post was written by Lisa B. Kim, Joshua B. Marker and Katrina M. Kershner.

We previously noted that the California legislature had recently passed and sent to the governor’s desk a number of different data privacy bills this term. This past Friday, California Governor Jerry Brown signed into law one of those bills, AB 370 – legislation that imposes new disclosure requirements on commercial websites and online services that collect personally identifiable information (PII) on users. The legislation, the “Do Not Track” disclosure law, is the first law of its kind in the United States.

The California Online Privacy Protection Act (CalOPPA) already required any website operator that collects PII to conspicuously post its privacy policy, which must identify the categories of PII collected and the third parties with whom the operator shares the information. The California attorney general has made CalOPPA an enforcement priority. With the passage of AB 370, CalOPPA now requires that these commercial websites and online services also disclose in their privacy policies (1) how the site responds to a “Do Not Track” (or similar) signal from a browser, and (2) whether any third party may collect PII over time and across websites when a consumer visits the operator’s site.

As explained in our previous blog, all the major browsers offer “Do Not Track” options, which signal to sites that the individuals do not want their behavior tracked. Honoring the “Do Not Track” signal by refraining from collecting information on the individual is voluntary. The new law does not change this, but it does now require disclosure of whether and to what extent the site honors the “Do Not Track” signal.
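Mechanically, the "Do Not Track" signal is just an HTTP request header (`DNT: 1`) that a site's server-side code can inspect. The sketch below, with a function name of our own invention, shows the sort of check an operator would need in place before it could accurately disclose that it honors the signal:

```python
def should_track(headers: dict) -> bool:
    """Return False when the request carries a Do Not Track signal.

    Browsers with DNT enabled send the header "DNT: 1". Header names
    are matched case-insensitively; any other value, or the absence
    of the header, leaves tracking enabled.
    """
    normalised = {name.lower(): value for name, value in headers.items()}
    return normalised.get("dnt") != "1"
```

Because honoring the signal remains voluntary, the disclosure obligation turns on whether checks like this actually gate the site's tracking behavior.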

The impact of this legislation is significant and will require all companies operating websites or mobile apps used by California residents to reevaluate their privacy policies. The DNT bill, in particular, requires every company to have a thorough understanding of the technical aspects of its websites, and of the third parties it allows to operate on its site, so that it can properly disclose its data collection practices. Further, by forcing companies to affirmatively disclose additional specifics about their information practices, the law is likely to increase the risk of litigation for noncompliance with a privacy policy.

The Federal Trade Commission and Irish Data Protection Commissioner sign a memorandum of understanding

This post was written by Cynthia O'Donoghue.

In June 2013, the Federal Trade Commission (FTC) and Ireland's Office of the Data Protection Commissioner signed a memorandum of understanding establishing a mutual assistance and information exchange program to secure compliance with data protection and privacy laws on both sides of the Atlantic.

The privacy and data protection laws of Ireland and the United States differ significantly; however, the two agencies recognise that the global economy, and the resultant increase in cross-border flows of personal information, merit close cooperation. The U.S. privacy framework is based on a number of legislative acts that, in the main, apply to a specific sector or type of data, such as consumer data or health data, while Ireland’s Data Protection Acts of 1988 and 2003, which implement the EU Data Protection Directive (95/46/EC), apply to the processing of any personal data.

The MOU sets out broad objectives to ensure cooperation over the enforcement of privacy laws and to facilitate research and education in the area of data protection, including through the exchange of knowledge and expertise.

The FTC and the Irish data protection authority have agreed to use their best efforts to:

  • Share information, including complaints they receive
  • Provide each other with investigative assistance
  • Exchange data protection related information, including for purposes of consumer and business education
  • Explore opportunities for staff exchanges and joint training programs
  • Coordinate enforcement against cross-border violations
  • Regularly discuss continuing and prospective opportunities for cooperation

The memorandum also specifies the procedures and rules applying to requests for assistance. Such requests should be made only when they do not impose an excessive burden on the other agency. Any shared information, the existence of the investigations, and any requests made, are to be treated by the agencies as confidential.

UK Information Commissioner's Office clarifies rules for social networking and online forums

This post was written by Cynthia O'Donoghue.

In June 2013, the UK Information Commissioner’s Office (ICO) published new guidance entitled “Social networking and online forums—when does the DPA apply?” (Guidance). The document explains what must be considered by organisations that run social media sites, as well as by individuals who upload or download personal data from online forums, social networking sites, message boards, or blogs.

The DPA does not apply when personal data is processed by an individual for the purposes of their personal, family or household affairs. The Guidance makes it clear that the "household exemption", which permits an individual to process personal data for his or her own use, does not apply to organisations, even if the organisation uses an employee to undertake the processing through his or her personal networking page. Individuals and groups of individuals (including clubs or societies) may only rely on the exemption if they process the data merely for domestic purposes.

Where the exemption does not apply, organisations as well as individuals will be treated as the data controller and will have primary responsibility for compliance with the DPA. The ICO suggests that those running networking sites will be controllers in relation to any contact information or other personal data about the users or subscribers. Equally, the site operator may be responsible for third-party posts. The ICO relied on the case of The Law Society and Others v Rick Kordowski, [2011] EWHC 3185 (QB), to conclude that the DPA will apply when posts are moderated, especially if users pay a fee. Even if the content is not fully moderated, an operator will be a controller when the site’s terms and conditions only allow posts with acceptable content.

The Guidance also recommends that organisations take “reasonable steps” to check the accuracy of any personal data posted. What is “reasonable” will depend on the nature of the site and the extent of moderation of the site by the operator. For example, the watchdog would not consider it reasonable to expect a large social networking site to check all posts for accuracy, but recommends that such sites have a process in place to deal with complaints about inaccurate postings.

Most established social networking sites will already comply with the new Guidance, so this Guidance appears aimed at start-ups and fringe social networking sites, especially since it points out other UK legislation relating to preventing malicious communications, harassment and defamation.

Ibero-American Data Protection Network continues to promote data protection development in Latin America

This post was written by Cynthia O'Donoghue.

The six executive committee members of the Ibero-American Data Protection Network (Network) attended the First Latin American Congress on Data Protection. The Network brings together 22 Data Protection Authorities (DPAs) from Spain, Portugal, Mexico, and a number of countries in Central and South America and the Caribbean. During the 10 years of its existence, the organization has promoted the development of comprehensive data protection legislation and introduction of data protection authorities throughout Latin America.

The network has become the main forum for dealing with cooperation among Latin American and Ibero-American authorities. Collaboration resulted in the recent introduction of a number of new data protection regimes in the region (please see our blogs about the data protection laws entering into force in Costa Rica, Peru and Colombia).

The development of data protection among the Network members was heavily influenced by Spain, which exports data mainly to Latin America. As a result, almost all of the members have modelled their laws on Spanish legislation and European Union principles.

As well as promoting further collaboration and a general strengthening of data protection laws across Latin America, the Network plans to focus on cloud computing, the right to be forgotten, and online advertising rules. These and other matters are likely to be discussed further at the 11th annual meeting of the Network’s executive committee, to be held in October 2013 in Cartagena, Colombia.
 

The UK Information Commissioner advises on encrypting data

This post was written by Cynthia O'Donoghue.

Keeping personal data secure is a well-established obligation under the UK data protection regime. The UK data protection watchdog, Information Commissioner’s Office (ICO), has published advice on using encryption to satisfy this requirement. The ICO recommends universal use of encryption, especially when the loss or theft of personal data could have detrimental effects on individuals. The advice elaborates on how encryption works and the types of encryption that are available.

The advice discusses the difference between encryption and password or PIN protection. Password protection merely blocks access to the data and can be easy to circumvent; once circumvented, the information is fully exposed. Encryption, by contrast, uses mathematical algorithms to transform the underlying data so that it cannot be read without the encryption key, which is far harder to circumvent.

The ICO makes it clear that keeping the encryption key secret is of paramount importance: even the best encryption is pointless if the key is easily accessible or kept with the encrypted device or data. As best practice, the watchdog suggests providing the encryption key over the phone, and only once it is confirmed that the data is in the hands of the correct person.

The type of encryption method, such as symmetrical or asymmetrical, and the encryption strength, will depend on a number of factors, including the sensitivity of the information and how it is being stored or processed. The ICO advice describes some of the most common forms of encryption and suggests how they should be used. The advice covers full disk encryption, encryption of a single file or container with files, and data in transit.
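To illustrate the symmetric idea the ICO describes (one secret key both encrypts and decrypts), here is a deliberately simplified toy: a keystream is derived from the key by hash-chaining and XORed against the data. This is an illustration of the concept only, written by us, and must not be used in practice; real deployments rely on vetted ciphers such as AES via maintained cryptographic libraries:

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    """Stretch the key into a pseudo-random keystream by hash-chaining."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR the data against a key-derived stream.

    Applying the function twice with the same key restores the
    plaintext; without the key, the output is unintelligible.
    """
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))
```

The roundtrip property (encrypting twice with the same key yields the original) is what makes the scheme symmetric; asymmetric schemes instead use distinct public and private keys.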

Choosing and applying relevant encryption mechanisms may seem like a complicated and costly endeavour. However, the ICO’s blog points out that the costs of not using encryption may be even higher. Not only is there the potential for reputational damage, but the watchdog also recently issued three monetary penalty notices to organisations amounting to £700,000 for not using encryption.

UK Court of Appeal limits typical damages for a data protection breach to £751

This post was written by Cynthia O'Donoghue.

The UK Civil Division of the Court of Appeal ruled in favour of an individual data subject on the point of damages under the Data Protection Act 1998 (DPA), but limited the award to £751. The judgment in Halliday v. Creation Consumer Finance Limited, [2013] EWCA Civ 333, clarifies how compensation under the DPA should be assessed, and sets a high threshold for obtaining a substantial award of damages.

Mr. Halliday had been awarded nominal damages by a district judge absent evidence demonstrating any financial loss, which was affirmed on a prior appeal. In the present appeal, the court looked at the question of whether Mr. Halliday was entitled to an award for distress, and whether his claim for damages should have been rejected on the ground that he had to show that he was entitled to substantial damages before he could obtain damages for distress.

The appellant, Mr. Halliday, had entered into a credit agreement with Creation Consumer Finance (CCF). After a series of complex developments, CCF wrongly recorded Mr. Halliday as owing funds, and the information was then shared with a credit reference agency. Mr. Halliday sought to rely on the DPA to claim damages for the harm to his reputation and credit rating, and the distress he suffered.

The Court of Appeal affirmed that a nominal damage award of £1 was appropriate where a claimant could not prove loss, and that such an award was an effective remedy for the purposes of European Union law. The court then reviewed whether damages for distress are available where the complainant failed to receive substantial damages for the breach itself.

The court considered how damages for distress should be assessed, finding that the remedy is available only where the distress results from contravention of the data processing requirements and is suffered by the complainant himself. The court concluded by recognising a general principle that non-compliance with important European instruments will cause frustration to complainants, and that compensation for such frustration may amount to as little as £750.

While some of the points in the case were fact specific, the final decision appears to create two general rules. First, in the absence of direct evidence of specific financial loss, nominal damages may be awarded for breaches of the DPA. Second, an additional remedy for frustration or distress may be awarded.

 

Scottish court holds non-compliance with data protection law a material breach of contract

This post was written by Cynthia O'Donoghue.

The Outer House of the Scottish Court of Session held that non-compliance with data protection laws amounted to a material breach of contract. In the case of Soccer Savings (Scotland) Limited v Scottish Building Society, [2013] CSOH 51, the court decided that lack of proper registration and unlawful use of a customer database went to the heart of the contract, enabling the innocent party to rescind it.

In Soccer Savings, a Scottish Building Society (SBS) sought to defend its decision to rescind an unsatisfactory contract with a football affinity savings scheme promoter on the basis (among others) that the promoter was in breach by failing to comply with the Data Protection Act 1998 (DPA). The contract required both parties, as data controllers, to use reasonable endeavours to comply with statutory data protection requirements, and to take appropriate technical and organisational measures against unauthorised or unlawful processing of personal data. However, the promoter had failed to comply with the registration obligations under the DPA, and had unlawfully used a customer database belonging to the previous scheme provider.

The court held that while the failure to register was not necessarily a material breach, the unlawful use of the database clearly went to the heart of the agreement. The promoter’s plan to fulfil its part of the contract depended on using the database, and the constraints of the DPA, together with the promoter’s failure to obtain appropriate consents, materially impaired its ability to perform its contractual obligations. This material breach entitled the building society to terminate the contract. The decision was unaffected by the promoter’s claim that its use of the database had been approved by its legal advisers; in fact, one football club refused to provide the data after taking legal advice.

The court found that an important component of the promoter’s performance under the contract involved it in a breach of the DPA, and that such illegality materially impaired its performance, thus amounting to a material breach of contract. The fact that the use of personal data was at the heart of the football affinity savings scheme, and that such data was obtained illegally, resulted in a material breach such that SBS was able to rescind the contract.

California Legislature Hard At Work: Passes Three Data Privacy Bills Before Close Of Session

This post was written by Lisa B. Kim, Joshua B. Marker and Paul H. Cho.

Back in May, we highlighted several bills the California legislature was actively considering in the area of data privacy. Recently, three of those bills found their way to the governor’s desk, where they await signature into law. All three give businesses reason to pause and review their privacy policies and online practices. The following is a synopsis of the three bills.

  • AB 370: AB 370 amends the California Online Privacy Protection Act (“CalOPPA”), California’s Business and Professions Code section 22575 et seq., which is the original law that mandated that websites – and more recently mobile applications – have a privacy policy. The amendment requires that a privacy policy specifically disclose how the website (or mobile app) “responds to Web browser ‘do not track’ signals” or other consumer choice mechanisms regarding the collection of personally identifiable information and the tracking of consumer behavior across websites. (Currently, some Internet browsers, such as Internet Explorer, Firefox, and Safari, offer a Do Not Track (DNT) option that indicates to companies that the user has elected not to have information about their web browsing activities monitored or collected. Whether such information is actually collected or not depends on whether the companies in fact honor the DNT option.)

Further, a privacy policy would also have to affirmatively disclose whether third parties operating on the site (or in the app) can collect personally identifiable information or other information about the consumer’s online activities across websites over time. This would require that operators do a full and ongoing accounting of all third parties operating on their websites and mobile apps, and have a complete understanding of those third parties’ data collection capabilities and practices.

AB 370 could become the first law in the country to address the disclosure of online consumer tracking. Assemblyman Al Muratsuchi (D) introduced the bill, both the California Assembly and the Senate passed it unanimously, and it was sent to Gov. Brown September 3.

  • SB 568: Similar to the federal Children’s Online Privacy Protection Act (“COPPA”), SB 568 would require the operator of a website, online service, online application, or mobile application to permit anyone under the age of 18 to remove, or to request and obtain removal of, any content or information posted online. This means all websites, social media sites, and apps would be legally required to allow minors to remove pictures and other content they posted in the past, absent particular exceptions. The bill would also require operators to give minors notice of their right to remove any content or information.

Furthermore, SB 568 would prohibit operators from marketing or advertising products or services to a minor that a minor cannot legally purchase, and prohibit operators from using, disclosing, or compiling certain personal information of the minor for marketing these same products or services. Some of the products and services to which the bill applies are alcoholic beverages, firearms and ammunition, and dietary supplements, but this list is not exhaustive.

Businesses will need to assess to what extent minors access their website, mobile apps, and services, and adjust their practices accordingly, especially in regard to their marketing and advertising revenue sources.

SB 568 was introduced by Sen. Darrell Steinberg (D). It passed unanimously in both the Senate and the Assembly, and was sent to Gov. Brown September 3. If signed into law, it will become effective January 1, 2015.

  • SB 467: SB 467 would require law enforcement to obtain a warrant in order to get emails on a provider’s server, regardless of how long an email was stored and whether or not it had been opened. The bill even applies to social media messages, such as messages sent through Facebook and Twitter. Previously, a warrant was only required for unopened emails or emails stored on a provider’s server for 180 days or fewer. However, a search warrant would still not be required with the user’s consent, or if necessary to avoid death or serious injury.

Email service providers should revise their privacy policies to inform users of the change, especially in light of the fact that the new bill would provide users with greater privacy protection.

If SB 467 becomes law, California will be following in the footsteps of Texas, whose recently passed HB 2268 created a blanket warrant requirement for all electronic customer data stored both inside and outside the state by a service provider. SB 467 was introduced by Sen. Mark Leno (D). It passed in both the Senate and the Assembly, and was sent to Gov. Brown September 10.
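The “do not track” signal discussed under AB 370 above is conveyed as a simple HTTP request header. As a minimal sketch (the helper function and plain headers dictionary are hypothetical, not drawn from any statute or standard implementation), a site operator deciding how to “respond” to the signal might start by reading it like this:

```python
def dnt_signal(headers):
    """Interpret the Do Not Track preference from HTTP request headers.

    Browsers that support DNT send the header "DNT: 1" when the user has
    opted out of tracking, "DNT: 0" when the user permits tracking, and
    omit the header entirely when no preference has been expressed.
    """
    value = headers.get("DNT")
    if value == "1":
        return "opt-out"       # user does not want to be tracked
    if value == "0":
        return "tracking-ok"   # user has allowed tracking
    return "unset"             # no preference expressed
```

As the post notes, reading the signal says nothing about honoring it; AB 370 would require only that the privacy policy disclose how the operator responds.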

FTC Brings Its First Enforcement Action for 'Internet of Things'

This post was written by John P. Feldman and Frederick Lah.

Earlier this year, we wrote about the FTC’s plan to hold a November 2013 public workshop to address concerns with the “Internet of Things,” the dramatically growing capacity of smart devices to communicate information through the Internet. In advance of the workshop, the FTC has entered into a consent decree with a marketer of Internet-connected video cameras, marking the Commission’s first foray into the Internet of Things.

The marketer in this case was a provider of home security video cameras that allowed consumers to monitor their homes remotely. According to the complaint, a hacker exploited a security flaw in the marketer’s system and posted live feeds to approximately 700 home cameras, displaying babies asleep in their cribs, young children playing, and adults going about their daily lives. While the marketer did alert customers of the security flaw and offered them a security patch, the FTC alleged that the marketer had failed to use reasonable security to design and test its software, including a setting for the cameras’ password requirement. The FTC also alleged that the marketer had transmitted user login credentials in clear, readable text over the Internet, even though free software was available to secure such transmissions.

Under the terms of its settlement, the marketer is prohibited from misrepresenting the security of its cameras and the information that its cameras transmit. The marketer is also prohibited from misrepresenting the extent to which a consumer can control the security of information captured by the cameras. The FTC voted 4-0 to accept a consent agreement and the proposed order. The agreement will be subject to public comment for 30 days through October 4, 2013, after which the FTC will decide whether to make the proposed settlement final. Comments may be submitted electronically here.

This case is an important one for all companies that offer products connected to the Internet, whether they’re offering home appliances, automobiles, or even products with “smart” labels. The FTC relied upon a “reasonable” standard in bringing this action, which can always be a tricky one for companies to interpret. As a baseline, companies need to follow industry security standards and implement protections that are commensurate with the type of data they collect and transmit. As a best practice, companies should take a Privacy by Design approach and consider privacy and security as early as possible during product development. Still, no system can ever be 100 percent secure from malicious hackers, even if the company has taken extensive measures to protect its data assets; and just because a company has been the victim of a malicious hack does not by itself prove that the company was not acting reasonably.

The FTC’s workshop on the privacy and security of the Internet of Things will be held November 19, 2013. According to the FTC’s website, the workshop will address issues related to the increasingly prevalent ability of everyday devices to communicate with each other and with people.

EU Data Protection Update: Belgium, Italy and Slovakia

This post was written by Cynthia O'Donoghue.

Belgium has issued a new protocol to facilitate the approval process for transfers of personal data outside the EU, Italy has issued new regulations governing direct marketing, and the Slovak Republic has introduced a new data protection act: a busy few months!

Belgium’s new protocol on data transfers was issued on 25 June by the Ministry of Justice and the data protection authority (the “Privacy Commission”) and relates to transfers based on the EU standard contractual clauses as well as ad hoc contracts to transfer data. Prior to the new protocol, a data controller could indicate that transfers were based on the EU standard clauses when notifying the Privacy Commission. Any other contracts related to transfers of data had to be approved by royal decree, meaning the approval was signed by the King following advice from the Privacy Commission. The new protocol requires that both the EU standard clauses and other contracts covering transfers of data be submitted to the Privacy Commission for approval and confirmation that they conform to the template adopted by the European Commission. Other transfer contracts will still require a royal decree, but they will no longer require review by the Council of State or publication in the Official Journal, which should simplify the adoption process. While the aim of the protocol is to simplify the approval process, the new requirement to submit transfers based on the EU standard clauses to the Privacy Commission for verification is a disappointing addition.

The Italian Data Protection Authority (the “Garante”) issued new Guidelines on Marketing and Spam (Guidelines), which were published in the Gazette and have the force of law. The Guidelines cover marketing via email and SMS (text messaging), and address the use of social networks or other information in the public domain, what the Garante refers to as “social spam.” The key change is that email addresses in the public domain are now in scope, and the broadened definition of “spam” covers any unsolicited marketing message sent via automated means, regardless of the volume of messages sent. The number of messages only becomes relevant to the amount of any sanctions. The Guidelines still permit marketing messages to existing customers when the offering is for products or services similar to those previously acquired by the customer, so long as the customer has been informed and has not opted out. The Guidelines also cover the sending of marketing messages to business email accounts. Where there is no prior relationship, companies intending to send marketing communications must obtain the prior consent of individuals, and such consent cannot be obtained by sending a first promotional message that asks for consent. Individual prior consent is required where email addresses are gathered through publicly available sources such as registers, social media or other websites. In addition, consent cannot be obtained via a “pre-ticked” box or by leading an individual to believe that his or her ability to obtain products or services is conditioned on consenting to receive marketing communications. Companies must be able to demonstrate or document that they have obtained the necessary consent, including where the consent covers marketing by third parties. Lastly, the Guidelines also cover activities by marketing affiliates or promoters, such as asking a person with a large Twitter following to tweet about a product or service. This form of viral marketing now falls within the meaning of spam, such that under the Guidelines, the marketing promoter must obtain the consent of those individuals who receive the marketing message. The Guidelines also contain a new sanctions regime with penalties ranging from €6,000 to €300,000, or in some cases up to four times that amount where the fine would otherwise not act as a deterrent because of the economic conditions of the offender.

The Slovak Republic has enacted a new Data Protection Act No. 122/2013 Coll. (NDPA), aimed at better implementing the EU Data Protection Directive. Slovakia’s NDPA became effective 1 July 2013 and replaces Act No. 428/2002 Coll. on Protection of Personal Data (as amended). The reform brings many important changes, including on cross-border transfers and the requirements to appoint a data protection officer (DPO) or register databases. Compliance with the new rules, which must be achieved within a transitional period, will be encouraged through higher compulsory fines. Under the new Act it is no longer necessary to receive authorisation for data transfer agreements which include the EU standard contractual clauses, and controllers will not need to submit their binding corporate rules (BCRs) to the DPA if they have been authorised in another EU Member State. Transfers based on EU-U.S. Safe Harbour are also exempt from DPA approval, although the NDPA prescribes a minimum content for contracts governing the safe-harboured transfers. Under the Act, a DPO must be appointed where 20 or more “entitled persons” process personal data. DPOs must be appointed within 60 days and the DPA notified of the appointment within 30 days. All DPOs need to pass an exam, and must be appointed through a written authorisation with a minimum content dictated by the Act. Compliance with the new DPO framework must be achieved within the one-year transitional period. Lastly, penalties for non-compliance have been increased to €300,000, which can be doubled to €600,000 for a repeated offence within two years.

Eastern European Data Protection Update: Ukraine & Kazakhstan

This post was written by Cynthia O'Donoghue.

Kazakhstan has enacted a new data protection law which is due to come into force at the end of November. Ukraine has enacted amendments to its data protection law which are due to come into force in January 2014.

Kazakhstan's new data protection law, The Law of Republic of Kazakhstan No. 94-V, “On Personal Data and Its Protection” (May 21, 2013) (PDP), introduces a new regulatory framework governing how personal data is collected, used, disclosed, transferred and destroyed, and applies to the public sector, businesses and to individuals.

The PDP will require "database owners" and "database operators" to create a list of the personal data required for each purpose for which it will be processed, and use of third parties to process data will only be permitted if they are also subject to data protection requirements relevant to the processing they will perform. Individuals must provide consent for the processing of their data, except in certain limited circumstances. Transfers of personal data are permitted to countries that ensure an adequate level of protection, including to members of the Council of Europe Convention 108 for the Protection of Individuals with regard to Automatic Processing of Personal Data. Transfers to other countries will require individuals’ consent. Compliance failures and breaches are subject to administrative (civil) penalties as well as criminal penalties, including imprisonment of up to five years.

Ukraine enacted fundamental changes to its existing data protection law in July, with the changes coming into force 1 January 2014. The Law of Ukraine “On Amending Certain Legislative Acts of Ukraine Regarding Improving the System of Personal Data Protection” No. 383-VII, dated July 3, 2013 (Amendments), abolishes the Data Protection Office, the current regulatory entity, and transfers regulatory oversight to the Ombudsman. The changes aim to bring Ukrainian law into compliance with European Union standards (even though Ukraine is not a member of the European Union) by establishing an independent supervisory authority for data protection.

The Amendments introduce other changes to Ukraine's data protection regime, including that businesses will no longer have to register their databases, but instead must notify the Ombudsman only where the processing of personal data will result in a "special risk" to a "data subject's" rights and freedoms. Data controllers will be able to automatically process data where necessary to fulfil legal obligations. In addition, the definition of consent is clarified, and biometric and genetic data are recognised as sensitive personal data. The Amendments also introduce new data subjects’ rights, including the right to know the source of collected data and its location. The Amendments have not changed the provisions relating to transfers of personal data, so transfers are still subject to consent or may only be made to countries ensuring adequate protection.

The Amendments require the Ombudsman to adopt secondary regulations addressing the types of personal data that will be considered a "special risk" for "data subjects" rights and freedoms, as well as the method and manner of notifying it. The Amendments also require certain business sectors to develop and agree codes of conduct governing the processing of personal data.

European data protection watchdog proposes stricter regulation of profiling

This post was written by Cynthia O'Donoghue.

The EU data protection watchdog, the Article 29 Working Party (Art. 29 WP), has issued an Advice paper on the essential elements of a definition and a provision on profiling within the EU General Data Protection Regulation. The document underlines the significance of creating profiles based on interlinked personal data, especially given the latest developments in geolocation and Big Data. The Art. 29 WP argues that more must be done to explain and mitigate the various risks of profiling, a sentiment it expressed before in its Opinion of January 2012. The new advice paper suggests a number of amendments to the draft Data Protection Regulation (Regulation) to ensure it adequately deals with this issue.

The Working Party agreed with the Rapporteur Jan Albrecht that a comprehensive definition of profiling should be included in Article 4. Its proposed definition, based on the Council of Europe Recommendation on profiling, covers “any form of automated processing of personal data, intended to analyse or predict the personality or certain personal aspects relating to a natural person.” The Art. 29 WP wants profiling tackled at a much earlier stage than that set out in the proposed Regulation, by regulating the collection of data for the purpose of profiling and the creation of profiles as such. It advocates introducing specific requirements for lawful profiling, such as requiring information to be provided about the context and purpose of profiling and the logic used for automatic processing. The Art. 29 WP also calls for individuals to be entitled to modify or delete profile information and to refuse any measure or decision based on profiling, with special safeguards to be adopted by data controllers, such as the use of protection-friendly technologies and data minimisation.

The proposed changes would impose additional burdens on many data controllers, especially those involved in credit rating, social networking, or targeted advertising. The Art. 29 WP suggested a balanced approach, where the requirements vary depending on the actual effects of profiling; for instance, applying the additional requirements only when profiling has a significant effect on the interests, rights or freedoms of an individual. Unless comprehensive guidelines are provided, such an approach could result in significant uncertainty for data controllers.

New EU Data Breach Notification Rules for Telecoms

This post was written by Cynthia O'Donoghue.

New EU rules on personal data breach notification for telecoms and ISPs came into force recently (on 25 August 2013). European Commission Regulation (EU) 611/2013 of 24 June 2013 on the measures applicable to the notification of personal data breaches under the ePrivacy Directive (2002/58/EC) aims to ensure that telecoms operators, internet service providers and other public electronic communications service providers notify personal data security breaches consistently across the EU.

The revised ePrivacy Directive (2009/136/EC) requires telcos and ISPs to keep personal data secure and confidential and to notify relevant national data protection authorities of any breach where the affected individuals’ personal data or privacy are likely to be adversely impacted, in particular where the data is stolen, lost or accessed by unauthorised persons. The Notification Regulation requires service providers to notify the relevant national DPAs within 24 hours of detection of the breach. In addition, affected individuals must be notified without undue delay and provided with detailed information about the data breach.

Since the legislative instrument is a Regulation, it is directly applicable in each of the EU member states without the need to enact any national law. The UK Information Commissioner's Office (ICO) says it will publish updated guidance sometime in September on the new procedure.

Breach notifications are not required where the telco or ISP can demonstrate that it had implemented appropriate technological protection, such as by using various encryption measures. The Regulation also contains annexes setting the detail of what must be notified to both the national DPAs and to individuals affected.

UK ICO criticises elements of the proposed EU cybersecurity Directive

This post was written by Cynthia O'Donoghue.

Last month, the Information Commissioner’s Office (ICO) published a response to the government’s call for views and evidence on the draft EU Directive on Network and Information Security (NIS Directive). The ICO’s criticism stemmed from its experience with mandatory data breach notifications from the telecoms sector and included suggestions for modifying the proposed NIS Directive.

The Directive would require Member States to create national competent authorities (NCAs) to handle network information security risks and incidents, with the NCAs being notified of any major cybersecurity incidents affecting critical infrastructures, information society services and public administrations. The ICO generally welcomed the objectives, hoping that they will bring a greater focus on security among European businesses.

The ICO felt the proposed NIS Directive did not clearly address how NCAs were meant to deal with incident notifications, noting that while monetary penalties can act as a useful motivator, adequate improvements will not be achieved if there is no emphasis on understanding the underlying cause of an incident. In addition, the requirement on “core service” providers to notify incidents requires the setting of thresholds to prevent NCAs being flooded with trivial and inconsequential notifications.

The ICO also criticised the NIS Directive provision under which disclosures of personal data in connection with a notification are always treated as necessary and legitimate, and pointed out that by default it will be unnecessary to know whose personal data was compromised. The ICO suggested the focus should be on ensuring the removal or minimisation of unnecessary personal data.

Lastly, the ICO pointed out the flaws in the idea of introducing harmonised security standards across Europe, highlighting that the pace of technological development will outstrip and outdate any measures before they can be agreed, and that a single standard of adequate security will not suit the myriad organisations covered by the NIS Directive.

The ICO is not keen to take on the role of the UK’s NCA, stating it does not feel equipped to deal with notifications relating to security incidents unrelated to personal data, and suggested cooperation between itself and the NCA through a Memorandum of Understanding.

UK Information Commissioner fine of £250,000 overturned

This post was written by Cynthia O'Donoghue.

The UK’s First-tier Tribunal (Information Rights) has overturned a monetary penalty issued by the Information Commissioner’s Office (ICO) against the Scottish Borders Council (SBC). The £250,000 penalty related to the unsecure disposal of hard copies of council records containing personal data and had been issued by the ICO in September 2012. The Tribunal found that the breach was not "of a kind likely to cause substantial damage or substantial distress" as required under section 55A of the Data Protection Act 1998 (DPA). In addition, the Tribunal held that the contravention of the data controller's duty to uphold the data protection principles was not serious enough for liability to arise, and thus no monetary penalty notice could be served on the data controller.

The SBC had hired a third party supplier to scan hard copies of pension files containing personal data onto CDs. The supplier hired by SBC had put about 1,600 pension files into recycle bins at a couple of supermarkets, which were then found by a local member of the public and subsequently taken into police custody. No actual harm was found to have resulted.

Before a monetary penalty can be assessed, the breach must either be deliberate or something that a controller either knew or ought to have known would result in substantial damage or distress and then failed to prevent. The issuance of a monetary penalty by the ICO is discretionary.

The SBC had no formal data processing contract with its data processor and had sought little reassurance as to data security measures. The Tribunal found that although a serious contravention had occurred, based on all of the relevant circumstances, it was not of a kind likely to cause substantial damage or substantial distress. Therefore no monetary penalty could be imposed on the SBC. The Tribunal also found that a monetary penalty notice is subject to a civil standard of probability rather than a criminal standard of proof.

The personal data exposed included name, address, date of birth, national insurance number and salary, and in certain cases, bank account details, nominated beneficiaries and reason for leaving, including references to ill health. The Tribunal found that none of the data was "sensitive personal data," which under the DPA deserves more robust protection. In addition, the SBC had failed to enter into a data processing agreement with the third-party supplier as required under the DPA.

The Tribunal acknowledged that a substantial amount of personal data was exposed and that the breach of the DPA was serious and systematic. The Tribunal also found that proof of actual substantial damage or distress is not required for the ICO to be able to issue a monetary penalty notice. However, looking at the relevant circumstances, the Tribunal found in this case that it was unlikely that substantial distress or substantial damage would be caused, since such harm must be more than a mere possibility. The Tribunal also noted that "it is fundamental that a data controller cannot contract out of its responsibilities" under the DPA and that data controller duties in relation to data processing contracts lie at the heart of the data protection system.

European Union harmonizes the approach to sentencing cybercriminals

This post was written by Cynthia O'Donoghue.

In early July, the European Parliament adopted a new directive harmonizing the criminal laws relating to cyberattacks (Directive). It will replace the current nonbinding agreement between EU countries from 2005 (Framework Decision 2005/222/JHA). The Directive aims to harmonise the approach to cybercrime, by requiring all Member States to introduce maximum imprisonment sentences ranging from two to five years for various forms of cyberattack. “Cyber crime does not stop at borders, so it is vital to have a comprehensive and joint set of rules to prevent and fight it successfully,” said Monika Hohlmeier, a German lawmaker responsible for overseeing the Directive’s passage through the European Parliament.

The Directive would set a three-tier system of maximum prison sentences applicable to cybercrimes, and it will be within each Member State’s discretion to define which attacks are classified as minor. In addition, perpetrators benefitting from their cybercrime would face penalties ranging from disbarment from public benefits to being closed down.

All infringements will need to carry at least a two-year maximum prison sentence. The crimes covered include illegal interference with IT systems or data, illegal access to information systems, illegal interception of data, and producing, selling or distributing tools designed for a cyberattack. The penalty for illegal interference with systems or data should be increased to a maximum three-year sentence when the perpetrator used tools specifically designed for large-scale attacks or another person’s electronic identity. A maximum term of five years should apply to illegal interference with “critical” infrastructures (e.g., government information systems or energy networks), attacks which cause serious damage, attacks committed by criminal organisations, and attacks using "botnets" (establishing remote control over a significant number of computers by infecting them with malicious software).

The Directive also creates a system for effective exchange of information on cyberattacks. Member States will need to maintain an operational national point of contact available on a 24/7 basis, with a required response time of eight hours for urgent reports.

The Directive was adopted by the European Parliament and will be considered by the EU Council at a forthcoming meeting. Once the Directive is fully approved and published, Member States will have two years to transpose its provisions into their national laws, except Denmark, which exercised its opt-out right for legislation affecting law enforcement.

ICO adopts enforcement action plan

This post was written by Cynthia O'Donoghue.

The UK data protection watchdog, the Information Commissioner’s Office (ICO), has published a Data Protection Regulatory Action Policy, setting out factors the ICO will consider when deciding whether to initiate enforcement action and what form it should take. The policy should assist organisations with understanding the enforcement process and the risks of non-compliance with the UK Data Protection Act 1998.

Regulatory action by the ICO covers each type of enforcement power, including criminal prosecution, monetary penalties, undertakings, enforcement orders and, for the public sector, compulsory audits. The ICO will continue to publicise the non-confidential details of regulatory actions taken.

The ICO will consider a number of factors when deciding whether to undertake any enforcement, including the following key issues:

  • Deliberate or persistent non-compliance
  • Gravity of impact of non-compliance
  • Volume of individuals adversely affected
  • Enforcement is required to clarify an important point of law or principle
  • Enforcement is necessary to set an example because (i) the non-compliance relates to a representative of a particular sector or activity, or (ii) the non-compliance is novel, precedent-setting or particularly intrusive in nature
  • Whether the organisation in question had a deliberate, wilful or cavalier approach

The policy reinforces the manner in which the ICO currently undertakes enforcement action in that organisations will continue to have an opportunity to make representations before the ICO makes a final determination about whether regulatory action is warranted.

In determining which of its enforcement powers to exercise, the ICO will consider the actual or potential detriment caused by non-compliance. To assess this, the ICO will focus on:

  • Issues of general public concern (including those raised in the media)
  • Concerns that arise because of the novel or intrusive nature of particular activities
  • Concerns raised with the ICO in complaints that it receives
  • Concerns that become apparent through ICO’s other activities

An interesting aspect of the policy is the degree to which the ICO will consider market factors. For instance, strong regulatory action may be less likely where the non-compliance occurs in markets which regulate themselves. In practice this means that the ICO is likely to pay less attention to industries where compliance with data protection laws confers an important competitive advantage and where competition is fiercer. In contrast, regulatory action is more likely to continue in the public sector, which cannot be adequately regulated by market factors.

Lastly, the policy makes some effort to encourage more businesses to agree to voluntary audits. The ICO makes it clear that it views audits as a “constructive process with real benefits for data controllers.” Businesses agreeing to an audit will benefit from the help of trained and competent ICO auditors, and from the fact that the ICO will not impose any monetary penalty for contraventions discovered in the process.

UK data protection authority publishes data breach statistics

This post was written by Cynthia O'Donoghue.

The UK data protection authority, Information Commissioner’s Office (ICO), has published statistics regarding breach incidents in the first quarter of this year (1 April - 30 June 2013). In a related press release, the ICO discussed conclusions drawn from the numbers regarding the most common types of data breaches and the sectors that appear to be at greatest risk. It also described the enforcement tactics used to respond to the incidents.

As many as 175 of the 335 data breach incidents investigated by the ICO concerned data being ‘disclosed in error’. This includes situations where emails were sent to the wrong people or where information was erroneously included in freedom-of-information responses. The ICO highlighted that carelessness was often at the heart of the problem, with the same mistakes frequently repeated. The ICO treats carelessness seriously and will take enforcement action where warranted. Loss or theft of paperwork and hardware were the second- and third-most common causes, respectively.

The ICO also looked at where incidents occur most frequently, finding the health sector and local government at the top of the list. The ICO noted that reported incidents for these sectors are likely higher because of the presence of internal reporting guidelines. The third and fourth places on the list were taken by schools and the legal industry. The ICO noted that it will keep an eye on these sectors to see how they perform in the next quarter.

Recent enforcement action by the ICO includes:

ENISA Cybersecurity Annual Report

This post was written by Cynthia O'Donoghue.

ENISA, the European Union Agency for Network and Information Security, issued its Annual Incidents Report 2012. The report was issued under Article 13a of the Common Regulatory Framework Directive (2009/140/EC) for electronic communications networks and services. It highlights that 18 European Union countries reported 79 significant incidents during 2012, while 9 countries reported no significant incidents.

Nearly 50% of the reported incidents related to mobile telephony and the internet, affecting about 1.8 million users per incident. Over 75% of the incidents were reported as “system failures”, with hardware faults the most common cause, followed by software issues. Only 6% of the incidents were attributed to cyberattacks, with the internet the most affected service, followed by fixed telephony. Cyberattacks were the second-biggest cause of internet issues, behind hardware failures.

The report contains several examples of the types of incidents reported, some of which related to hardware or configuration failures. Notable incidents involved theft of fibre-optic cables, vandalism by a former employee, and distributed denial-of-service attacks.

ENISA’s report notes that the proposed cybersecurity Directive contains a reporting requirement similar to that in the existing Framework Directive. ENISA supports reporting as a method of helping the European Union and Member States improve the security and resilience of electronic communications networks.

UK Court of Appeal upholds two-year sentence for cybercriminal

This post was written by Cynthia O'Donoghue.

In R v Martin [2013] EWCA Crim 1420, the UK Court of Appeal dismissed an appeal by Lewys Martin against a two-year prison sentence for various cybercrimes. Martin had previously pleaded guilty to various breaches of the Computer Misuse Act 1990, and was convicted and sentenced to concurrent terms for unauthorised modification of computer material; securing unauthorised access to computer material, including with intent; and making, supplying or obtaining computer materials. He was also convicted of unlawful interception of communications under the Regulation of Investigatory Powers Act 2000.

The underlying convictions related to denial-of-service (DoS) attacks against various public bodies, including Oxford and Cambridge Universities and Kent Police, as well as to the hacking of several individuals' bank accounts, all taking place during 2011 and 2012. Martin was found to be linked to the cyberattack group “Anonymous.” The Universities lost 19 working days dealing with the attacks, and Kent Police lost 35 man-hours, with 30% of its security team engaged in dealing with the attacks. The individuals whose banking details were hacked and stolen were also affected, with each of them having to cancel and obtain new bank cards and close accounts, which took weeks to resolve.

The court recognised that individuals who had their privacy invaded in this way “very seldom” got over it, and the sentence had to reflect that the attacks against the individuals bordered on identity theft, even though Martin’s acts were not financially motivated. The appellate court held that the offences fell “into the highest level of culpability: they were carefully planned offences which did and were intended to cause harm both to the individuals and organisations targeted” (para. 36).

The court also held that the seriousness of the criminality could not be measured by the length of a cyberattack or by its financial consequences; rather, the wider implications for society could not be ignored, given the potential of such incidents to cause great damage and their increasing prevalence. The court considered aggravating factors, such as whether an offence was planned or persistent, the nature of the damage, the public interest, and the effect on individual privacy and on public confidence, holding that “for offending of this scale, sentences would be measured in years rather than months” (para. 43). In particular, the court acknowledged the prevalence of computer crime, its potential to cause enormous damage to IT systems, important public institutions and individuals, given the way in which society now operates, and the substantial sums organisations are compelled to spend combating this type of crime. The court concluded, therefore, that a deterrent sentence was warranted and that the sentences were “amply justified” (para. 46).

PCI Data Security Standards Changes on the Horizon

This post was written by Cynthia O'Donoghue.

This month, the PCI Security Standards Council published the highlights of version 3.0 of its data security standards (DSS), due to be finalised in November 2013. The 3.0 Change Highlights document provides a preview of the new standards, which are meant to be more flexible and to present security as a responsibility shared through education and awareness.

The Council has tried to provide as much transparency as possible about the new developments and the update process for the PCI DSS. The key drivers for the 3.0 updates are a lack of education and awareness, weak passwords and authentication challenges, third-party security challenges, slow self-detection of malware and other cybersecurity threats, and inconsistency in PCI DSS assessments.

Version 3.0 will introduce several new sub-requirements across the 12 standards, including building security policies and operational procedures into each of the 12 requirements. There will also be new point-of-sale requirements, stronger requirements for penetration testing, and other enhanced testing procedures for validating compliance with the standards. Version 3.0 will also include a requirement to perform threat modelling in relation to software development.

The updated standards remain subject to review and further comment, with the final version to be published in November 2013.

Colombia fills the gaps in its new data protection framework

This post was written by Cynthia O'Donoghue.

After its first data protection law came into force in April this year, Colombia has now introduced implementing regulations (Decree No. 1377). The legislation, released in late June, provides greater clarity on a number of areas covered by the data protection law (Statute Law No. 1581). The regulation sets out the information that must be given to data subjects, the consent requirements, and when cross-border transfers are allowed, a regime that José Alejandro Bermúdez Durana, of Colombia’s data protection authority, has characterised as being “elastic and business-friendly”.

The regulation also requires organisations acting as data controllers to provide individuals with access to their data, including a description of the way data is collected, stored and processed, as well as the reasons for collecting it. All privacy policies must be written in plain language, include information about how individuals can exercise their rights, and set out the purposes for which data is collected and the length of time it will be kept.

The legislation clarifies the methods that can be used to obtain consent, which cover automatic means and consent obtained through unequivocal conduct. Consent may not be implied from silence nor from past conduct, so controllers must obtain consent to continue using existing data. Organisations should, however, take some solace from the regulations permitting the use of alternative methods where seeking consent would be unrealistic.

Controllers were only given 30 days to institute consent mechanisms, after which they were required to give individuals 30 days to request that their data no longer be processed. Only if controllers do not hear from individuals after the 30-day period may they continue to use the data.

Consent must also be obtained for new processing, and, of course, individuals may revoke their consent at any time, and controllers must then remove the data, unless it is required to be retained under legal or contractual obligations.

The regulation also requires that data may only be kept for as long as is “reasonable and necessary”. International transfers of data are only allowed where there is ‘adequate protection’, much like in the EU, or where the Colombia authority provides that a country’s laws provide such protection. Something unusual in the regulations, however, is that collection of data relating to minors (those under the age of 18) is banned unless the data is of a “public nature”. Non-compliance could result in a maximum fine of approximately $612,000.

"Reclaim Your Name": A Campaign Against Big Data?

This post was written by Frederick Lah.

Earlier this summer, FTC Commissioner Julie Brill introduced her “Reclaim Your Name” initiative. Aimed toward the big data industry, the initiative urges data brokers to be more transparent and provide consumers with more control over their personal data. In a follow up, Commissioner Brill released an op-ed piece in the Washington Post earlier this month, which drew some pointed criticism from the Direct Marketing Association.

Click here to read more on our sister blog, AdLaw By Request.

Google refuses to submit itself to the scrutiny of English courts

This post was written by Cynthia O'Donoghue.

An action against the California-based Internet giant, Google, was recently brought in the English courts. The individuals, supported by the campaign group known as Safari Users Against Google’s Secret Tracking, claim that the search engine provider bypassed the security settings on their Apple iPhones and Mac computers in order to track their online behaviour without their knowledge. Google denied any wrongdoing and disputed English courts’ jurisdiction over the case, arguing that it should be heard in California.

The browser used by the individuals in question was set to block cookies, small text files placed on users’ computers to track their online behaviour. However, the claimants and the campaign group allege that Google exploited a loophole in the Safari browser to disable those settings without informing users. The claim against Google covers breach of confidence and privacy, non-compliance with the UK Data Protection Act 1998, computer misuse and trespass.

Google has denied any wrongdoing and claims that it collected no personal data. The Internet giant has, however, already been fined $22.5 million by the Federal Trade Commission (FTC) in connection with the same conduct in the United States. The FTC decided that bypassing Safari’s security settings breached its previous order that Google would not misrepresent “the extent to which consumers can exercise control over the collection of their information.”

Google refused to accept service of the lawsuit in the UK, claiming that the English courts have no jurisdiction on the basis that the software powering its services is located in California and its consumer services are provided by the U.S. entity, not Google UK. It therefore argues that any claims alleging breaches of privacy and data protection law should be brought in California. The claimants counter that Google has a considerable presence in the UK, earning substantial revenues in the country and constructing a $1 billion headquarters in London.

Google says its position differs from that of other tech giants who offer services through EU-based affiliates, thus enabling claimants to make claims against such affiliates. Google’s argument that it is not subject to local jurisdiction in Europe is analogous to the position taken before the European Court of Justice, where Google has argued that its Spanish affiliate is not subject to the data protection laws as a data controller in relation to the services offered by Google, Inc. Both of these cases raise interesting questions about jurisdiction, which the European Commission is hoping to resolve through the draft Data Protection Regulation. The draft Regulation clearly establishes jurisdiction for non-EU based companies that offer goods and services to individuals in the EU or who “monitor” their behaviour.

U.S.-EU Safe Harbor Under Fire

This post was written by Cynthia O'Donoghue.

As part of an on-going debate on the European data protection reform, doubts were cast over the adequacy of the Safe Harbor arrangements with the United States. Viviane Reding, the European Commissioner for Justice, Fundamental Rights and Citizenship, called the 13-year-old data-sharing agreement between the EU and the United States a potential “loophole for data transfers,” which does not provide adequate protection. “The Safe Harbour agreement may not be so safe after all,” she said when announcing a review of the cross-Atlantic agreement.

The existing Data Protection Directive prohibits cross-border transfers of personal data to countries not recognised as providing adequate protection for the processing of personal data, unless certain mechanisms are in place. The U.S.-EU Safe Harbor Framework attempts to bridge the two regimes: data transferred to companies that certify adherence to the framework is deemed to receive adequate protection. Currently, around 3,000 companies have voluntarily joined the programme by subscribing to a binding set of data transfer rules.

EU officials have raised two criticisms: first, whether Safe Harbor actually provides adequate protection; and second, whether companies certified to Safe Harbor actually observe its principles. Past studies have shown that some organisations falsely claim to have certified to Safe Harbor, and that only a fraction of certified organisations fully comply with its requirements in practice.

The U.S. Federal Trade Commission (FTC), the body responsible for enforcing the Safe Harbor, recently increased its enforcement action to ensure compliance with Safe Harbor, including by requiring annual audits of Twitter, Google, Facebook and MySpace. However, the EU officials and representatives of some European data protection authorities doubt whether this is enough to make Safe Harbor work in its current form.

Article 3 of the U.S.-EU Safe Harbor Agreement allows the European Commission to reverse or suspend the agreement. Referring to this provision, the European Parliament requested that the European Commission conduct a full review of Safe Harbor. Ms. Reding has confirmed that she plans to present a comprehensive assessment of Safe Harbor before the end of the year. Companies relying on Safe Harbor may need to audit their adherence to the framework, or even to consider implementing other mechanisms for ensuring adequate protection for data transfers, including binding corporate rules.

Data protection vs. anti-money laundering, counter terrorism and traceable money transfers

This post was written by Cynthia O'Donoghue.

Last month, the European Commission published a letter from the Article 29 Working Party (Art. 29 Working Party) to Juan Fernando López Aguilar, chair of the European Parliament's Committee on Civil Liberties, Justice and Home Affairs (LIBE). The letter addressed the proposed new directive on the prevention of money laundering and terrorist financing through financial systems, and a new regulation on information accompanying transfers of funds. After scrutinising the proposed directive, the European data protection watchdog suggested a number of changes.

In early February of this year, the European Commission published its proposal for the Anti-Money Laundering/Counter Terrorist Financing (AML/CFT) Directive (also known as MLD4) and a Regulation on information accompanying transfers of funds, COM (2013) 44/2 (Wire Transfer Regulation). The proposal aims to update and extend the scope of control over money transfers.

The Art. 29 Working Party has commended the proposal for giving greater consideration to data protection, including by preventing indefinite retention of personal data for AML/CFT purposes. It has also welcomed the recognition of the right of access under recital 34 of the MLD4.

The Art. 29 Working Party also, however, questioned a number of issues. First, it criticised the extension of MLD4’s scope to allow Suspicious Transaction Reports (STRs) to be used in tackling tax fraud and evasion on the basis that it contradicts the data protection principles relating to purpose limitation and proportionality.

Second, the Art. 29 Working Party suggested that further efforts be made to stop disproportionate application, or “goldplating,” of some AML/CFT provisions, and in particular to clarify the "tipping off" provision. Under that provision, companies would be exempt from complying with a data subject access request where the request related to personal data collected as part of STRs. The Art. 29 Working Party was concerned that the exemption could be used to withhold any data collected for AML/CFT purposes.

Last, the Art. 29 Working Party suggested that MLD4 should require member states to adopt national laws addressing the international transfer of personal data for AML/CFT purposes, especially in relation to transfers of personal data to third countries that do not offer an adequate level of data protection. The Art. 29 Working Party would like to see such laws specify the types of data, and countries, for which there are important public interests behind the transfers, as well as the guarantees available to ensure data protection.

The Art. 29 Working Party concluded that, while the proposals take into account the interaction between anti-money laundering provisions and the European data protection law, MLD4 as proposed fails to reconcile them.

UK's CESG launches a two-tier programme for cyberattack response

This post was written by Cynthia O'Donoghue.

On 13 August 2013, the UK’s CESG, the Information Security arm of GCHQ, formally launched two schemes aimed at providing access to industry expertise on effective response to cybersecurity attacks. The schemes were prepared in collaboration with the Council of Registered Ethical Security Testers (CREST), the professional body representing the technical security industry. The Minister for Cyber Security, Chloë Smith, said she was delighted to announce “a unique Government-Industry partnership to tackle the effects of cyber incidents.”

The initiative follows on from the successful pilot that began in November 2012. The pilot programme aimed to assist organizations hit by cyberattacks by connecting them with companies with established expertise in responding to such incidents. The conclusion drawn from the pilot was that the objectives of the National Cyber Security Strategy will be best met by a two-tier certification programme for Cyber Incident Response services. The two-tier approach takes account of the varying degrees of assistance required by private sector companies, government organizations and universities by tailoring incident response to industry needs, and allows GCHQ and the Centre for the Protection of National Infrastructure (CPNI) to focus on the most significant attacks.

The first tier, or industry-led certification, will focus on appropriate standards for incident response for all industry sectors, the general public sector and academia. As part of this scheme, CREST, together with industry and government, will prepare standards for ‘Cyber Security Incident Response (CSIR)’ services. CREST will use the standards to audit security incident providers and will enforce the standards via codes of conduct. GCHQ hopes that this will provide a “foundation to establish a strong UK cyber incident response industry able to tackle the vast majority of cyber-attacks.”

The second tier, known as CESG/CPNI-led certification, will focus on responding to sophisticated or targeted attacks against important national infrastructure. This scheme will be run by GCHQ and CPNI, and will seek to identify a small number of industry providers with sufficient expertise and quality standards to respond to attacks perpetrated by the most skilled “threat actors” or those aimed at “networks of national significance”.

The scheme recognises that despite best efforts, cyberattacks cannot always be avoided, but that the manner of response is often crucial to how much damage results from it. Smith said, “The best defence for organisations is to have processes and measures in place to prevent attacks getting through, but we also have to recognise that there will be times when attacks do penetrate our systems and organisations want to know who they can reliably turn to for help.”

Vote on Draft General Data Protection Regulation due Mid-October

This post was written by Cynthia O'Donoghue.

The flood of proposed amendments and the recent debates over the PRISM scandal have pushed back the Civil Liberties, Justice and Home Affairs Committee (LIBE) vote on the proposed General Data Protection Regulation (Regulation). The vote, initially planned for May 2013, has already been postponed twice (see our blog reporting on the previous postponements).

The primary reason for the delay continues to be the number of suggested changes to the draft Regulation. LIBE received more than 3,100 proposed amendments, in addition to four reports from other parliamentary committees and input from a number of EU Member States and industry groups. An anonymous European Parliament official was quoted by Bloomberg BNA as saying that everybody is overwhelmed by the number of amendments.

The more recent delay concerns the U.S. National Security Agency’s PRISM programme. The U.S. data-gathering scandal has sparked the debate on re-introducing a clause prohibiting disclosures “not authorized by Union law.” The re-introduction of that clause into the draft Regulation would prohibit any company from disclosing the personal data of EU citizens to non-EU governments other than in accordance with mutual legal assistance arrangements. The clause was dropped from the draft Regulation before the Commission published the final text, but now there is growing consensus for putting it back in.

Continued postponements may substantially impact the overall legislative timeline. Even once LIBE adopts its position on the draft, the Regulation will still need to be accepted by the EU Council and put to a vote by the full European Parliament. On 15 July, Vice President of the European Commission and Commissioner for Justice Viviane Reding appealed to Member States to place the Regulation on the autumn summit agenda. An orientation vote is now provisionally set for mid-October, as part of Reding’s push to ensure that the whole process concludes before the European Parliament elections in 2014. If this deadline is not met, the new Parliament can decide to return the dossier to the Commission, and the legislative process will need to restart.

ICO to check out websites for adequate Subject Access Request wording

This post was written by Katalina Chin.

Following a public consultation in December 2012 on a draft version, the Information Commissioner's Office (ICO) published its final Subject Access Code of Practice on 8 August 2013.

Like all other data protection laws in the EU, the Data Protection Act 1998 (DPA) includes the principle that anyone has the right to find out what information an organisation holds about them by making a ‘subject access request’ (SAR). But when faced with such a request, organisations often feel confused, daunted or even frustrated as to how to properly handle and respond to a SAR. How do we carry out a full search for all their personal data? How do we ensure that the privacy of others isn’t infringed when responding? There are on-going legal proceedings – don’t the discovery rules provide a more appropriate method of providing information?

The ICO’s code of practice aims to assist organisations in the public, private and non-profit sectors in handling SARs, and provides practical guidance on the subject – from how to recognise a SAR to how to deal with and respond to such requests. The code explains the circumstances in which organisations can refuse to provide some or all of the information requested, in line with the ‘exemptions’ from the duty to comply with a SAR set out in Schedule 7 of the DPA.

The code also includes ten simple steps to consider when responding to SARs:

  1. Identify whether a request should be considered as a SAR
  2. Make sure you have enough information to be sure of the requester’s identity
  3. If you need more information from the requester to find out what they want, then ask at an early stage
  4. If you’re charging a fee, ask for it promptly
  5. Check whether you have the information the requester wants
  6. Don’t be tempted to make changes to the records, even if they’re inaccurate or embarrassing…
  7. But do consider whether the records contain information about other people
  8. Consider whether any of the exemptions apply
  9. If the information includes complex terms or codes, then make sure you explain them
  10. Provide the response in a permanent form, where appropriate

The ICO’s code does not have the force of law, and the ICO cannot take enforcement action where organisations fail to adopt good practice or to take on the code's recommendations – unless, of course, that failure itself breaches the DPA.

But beware organisations with websites: the ICO plans to carry out what they call a ‘subject access request sweep’ of websites later in the year with the aim of publishing a report on their findings in early 2014. They’ll be looking at the information that organisations provide to anyone who may want to make a SAR. So it may be a worthwhile exercise to double check that the SAR information set out in your website privacy policy (or elsewhere) is adequate.

The NIST Cybersecurity Framework: The Only Game in Town?

This post was written by Timothy J. Nagle.

On Tuesday, the White House cybersecurity coordinator published a blog post on the White House website describing incentives that may be made available to private sector “owners and operators.” The post reviews the purpose of the Executive Order (information sharing, privacy and adoption of cybersecurity practices) issued earlier this year and the resulting NIST Cybersecurity Framework process, but it is also noteworthy for two reasons. First, it focuses on systems that run elements of the national infrastructure “…such as the electric grid, our drinking water, our trains, and other transportation,” underscoring the shift in cybersecurity focus from telecommunications and financial services to other infrastructure sectors. Second, it lists the incentives under consideration. Among the eight listed, the most promising and consequential include engaging the insurance industry to build underwriting practices (implicitly based on the Framework), limiting liability for companies that implement the Framework, and “public recognition” for program participants. Of most interest to the energy industry and other utilities is the suggested incentive that “…regulatory agencies that set utility rates should consider allowing utilities recovery for cybersecurity investments related to complying with the Framework and participation in the Program.”

The “Voluntary Program” mentioned in the blog post is described on the NIST Cybersecurity Framework website as the final stage of implementation, following a draft Framework for stakeholder review, a workshop in September, the release of a draft for public comment, and issuance of the final Framework document early next year. The most recent update from NIST described the current status of the work and included the concept of Framework Implementation Levels, which were introduced in the July outline. The elements of the Framework are consistently described as prioritized, flexible, repeatable, cost-effective, and risk-based. NIST and other government participants stress that the participation of the private sector is essential, that the resulting standards must be consistent with current industry practice, and that they must not conflict with existing regulation or create new rules.

This was underscored by the Director of NIST in testimony before the Senate Commerce Committee at a July 25, 2013, hearing entitled “The Partnership Between NIST and the Private Sector: Improving Cybersecurity.” A week after the hearing, the Committee approved the Cybersecurity Act of 2013. The bill essentially codifies the role the President assigned to NIST in the Executive Order, i.e., to facilitate and support the development of voluntary, industry-led standards and best practices on an ongoing basis to reduce cyber risks to critical infrastructure. The bill also contains provisions for research and development, and for education and workforce development, in cybersecurity. It explicitly does not confer any new regulatory authority. One element missing from the bill is cyber threat information sharing, which raises privacy and liability concerns and has frustrated prior attempts to pass cyber legislation. The bill is supported by most industry organizations, including the National Association of Manufacturers and energy and financial services industry trade associations.

It would appear the White House, the Congress and much of the private sector have come to recognize NIST as an honest broker in the cybersecurity standards development process. This is a positive development and will most likely lead to a Framework that reflects current practice but will be sufficiently flexible to accommodate future technologies and threats. The real area of contention will be around implementation, adoption and obligations that may result.

Court Rules That Technical Violations of Michigan Video Rental Privacy Act Give Rise to $5,000 Per Person in Statutory Damages, Alleged Violation Enough to Stay in Federal Court

This post was written by Paul Bond and Lisa B. Kim.

A Michigan federal judge has held that plaintiffs could proceed in federal court on their claims under the Video Rental Privacy Act (VRPA), a state law akin to the federal Video Privacy Protection Act (VPPA). The ruling came in three similar putative class actions alleging that Bauer Publishing Co., Hearst Communications, Inc., and Time, Inc., respectively, sold their customers’ personal information without permission. (The three cases were assigned to the same judge because of their similar allegations.)

To have jurisdiction over a case, a federal court must find that the plaintiffs satisfy Article III of the United States Constitution, including by alleging that they have suffered an injury-in-fact. Many privacy class actions falter because plaintiffs allege only a technical violation, but cannot point to any actual or imminent impact that this supposed violation had, or will have, on their lives. The plaintiffs’ bar has therefore tried to find federal and state privacy laws, with associated statutory or liquidated damage hooks, in an attempt to avoid dismissal for lack of harm.

In these cases, plaintiffs brought suit under the VRPA. Michigan’s VRPA provides that a person “engaged in the business of selling at retail, renting, or lending books or other written materials, sound recordings, or video recordings shall not disclose to any person, other than the customer, a record or information concerning the purchase, lease, rental, or borrowing of those materials by a customer that indicates the identity of the customer,” with certain exceptions. The statute also provides that “a person who violates this act shall be liable in a civil action for damages to the customer identified” for “[a]ctual damages, including damages for emotional distress, or $5,000.00, whichever is greater” (emphasis added) and costs and reasonable attorneys’ fees. Because of the provision for $5,000 per person in statutory damages, the VRPA threatens businesses with the prospect of catastrophic class damages.

Defendants moved to dismiss these cases. In part, the defendants argued that the plaintiffs had alleged no injury-in-fact, and thus, the court had no jurisdiction. The court rejected that argument. Analyzing the language of the VRPA, Judge George Steeh reasoned that the VRPA did not contain any language requiring the claimant to have suffered any actual injury apart from the violation of the statute. To the contrary, the statute expressly provided for statutory damages and actual damages as alternative remedies. Contrasting Michigan’s law with its federal counterpart, the court noted that “Unlike the VPPA, a close reading of the VRPA reveals that it contains absolutely no language to require that a claimant suffer any actual injury apart from a violation of the statute[.]” In so holding, Judge Steeh followed the reasoning of the Northern District of California, which addressed this same issue of Article III standing in connection with Michigan’s VRPA in Deacon v. Pandora Media, Inc., 901 F. Supp. 2d 1166 (N.D. Cal. 2012).

While the VRPA is similar to the VPPA, and was in fact enacted shortly after the VPPA, this holding is not likely to extend to the VPPA. In his ruling, Judge Steeh specifically noted differences between the language of the VRPA and the VPPA on this issue of standing, and also referenced the Northern District of Illinois’ finding that the VPPA requires that a plaintiff actually be “aggrieved,” i.e., have suffered an Article III injury-in-fact. See Sterk v. Best Buy Stores, L.P., 2012 WL 5197901 (N.D. Ill.). That being said, holdings like this are surely being watched by legislatures that are introducing new privacy bills explicitly providing that violations of the law constitute an injury. See e.g.

You can read Judge Steeh’s 17-page ruling here.

Is Your Electronic Device an Automatic Telephone Dialing System?

This post was written by Paul Bond and Henry Pietrkowski.

In recent years, there has been a heightened focus on the Telephone Consumer Protection Act and a boom in TCPA litigation. The formula for recovery may seem simple, and with no statutory cap, class action damages under the TCPA can add up quickly.

Since Congress first passed the TCPA in 1991, the statutory definition of an “automatic telephone dialing system” has remained the same. An upcoming decision by the Federal Communications Commission may finally alter that definition, potentially expanding it to encompass the wide range of electronic devices now in everyday use.

Click here to read the full length analysis.

Pushing the Limits with Analytics

This post was written by Paul Bond and Frederick Lah.

With more and more companies engaging in the field of analytics, companies continue to come up with new and innovative ways to harvest their fields of Big Data. For instance, on Tuesday, the New York Times reported on how a range of start-ups and established tech companies are focusing their analytics efforts on a technology for mobile apps referred to as “predictive search.” The technology would not require people to enter a search query; instead, relevant information is pushed forward to the users proactively based on their location, time of day, digital activity (such as their email and calendar), and other contextual factors. For example, a phone with predictive search could inform users