Oregon AG Seeks Tougher State Breach Law

This post was written by Divonne Smoyer and Christine N. Czuprynski.

State attorneys general (AGs) are regulators with varying enforcement priorities and policy agendas, even within a focused issue such as data privacy and security. Over the last year, The Privacy Advisor has interviewed a number of state AGs who are active in privacy to gain insight into their views. In this spotlight, we talk to Oregon AG Ellen Rosenblum about her work in privacy, including her focus on protecting children online and her interest in seeing her state’s data breach notification law strengthened. Click here to read the full article, published in the International Association of Privacy Professionals’ (IAPP) The Privacy Advisor.

Italy Releases Draft Declaration of Internet Rights

This post was written by Cynthia O’Donoghue.

Italy’s Chamber of Deputies has proposed a ‘Draft Declaration of Internet Rights’ (Declaration), acknowledging the way in which the internet has changed interactions and erased borders, while noting that the EU’s protection of personal data is a necessary reference for governing the operation of the internet. The Declaration is open to public consultation until 27 February 2015.

The aim of the Declaration is to establish general principles to be implemented by national legislation. It consists of a preamble and 14 articles covering topics including the fundamental right to internet access, net neutrality and the right to be forgotten.

In particular, there is strong emphasis on the protection of the individual from widespread monitoring. Article 9 of the Declaration, for example, states that restrictions on anonymous communications "may be imposed only when based on the need to safeguard the public interest and are necessary, proportionate, and grounded in law and in accordance with the basic principles of a democratic society."

This publication is not the first of its kind and follows the German Bundestag committee work on the ‘Digital Agenda’, France’s parliamentary committee report on Rights and Liberties in the Digital Age, and Brazil’s Marco Civil.

The Declaration has received a mixed response, including from Italy’s Data Protection Commissioner, who expressed some concern about the rights to be anonymous and to be forgotten (Articles 9 and 10). A particular concern about the right to be forgotten relates to expanding the scope of the right to permit court appeals of decisions on search engine de-listings where there is a public interest in preserving the information. In principle this sounds like a promotion of freedom of speech, but it could have the opposite effect by focusing undue attention on the individual requesting de-listing.

As a Declaration, it will not be binding even once finalised following the public consultation; it will, however, form a statement of principles on internet governance and the rights of individuals.

Update: Proposed Settlement in Target Data Breach Litigation

This post was written by Paul Bond, Lisa Kim, and Christine Czuprynski.

The proposed settlement agreement in the Target data breach consumer litigation that we covered on March 19, 2015, has been preliminarily approved by the judge, and a final approval hearing has been set for November 10, 2015. Under the order, class members should start to receive notice of the settlement within 45 days.

Proposed Settlement in Target Data Breach Litigation

This post was written by Paul Bond, Lisa Kim, and Christine Czuprynski.

A proposed settlement has been reached in the multi-district consumer litigation Target faces following a data breach that compromised at least 40 million credit cards during the 2013 holiday shopping season. The settlement, which requires Target to pay $10 million into a settlement fund and adopt specific data security measures, still needs court approval.

If approved, class members who used credit or debit cards at Target stores between November 27, 2013, and December 18, 2013, will be eligible to receive up to $10,000 individually upon submitting a claim form seeking reimbursement for any costs associated with identity theft, unauthorized charges, and higher interest rates that resulted from unauthorized activity on credit accounts. Those class members who submit documentation of their losses will be paid first, and those class members without documentary evidence of losses are eligible to receive an equal share of whatever is remaining in the settlement fund.

As our colleague Mark Melodia noted, the settlement is unique not only because Target agrees to adopt data security protocols, but also because of the amount of attorneys’ fees. The attorneys for the class will seek fees in an amount not to exceed $6.75 million, which is on the high end of the historical range.

In late 2014, Target sought to dismiss the claims, but the court denied that motion and allowed the case to proceed. The preliminary approval hearing on the settlement was scheduled for Thursday morning in front of Judge Magnuson.

Enforced subject access requests now a criminal offence in the UK

This post was written by Cynthia O’Donoghue and Katalina Bateman.

In September 2014, we reported on the UK’s intention to stamp out a practice commonly known as “enforced subject access requests”. This concerned the previously dormant section 56 of the UK Data Protection Act 1998 (‘DPA’), which, following an announcement from the Ministry of Justice, came into force on 10 March 2015. Under this section, it is now a criminal offence for an entity to require an individual to submit a subject access request under section 7 of the DPA in order to obtain his or her own protected personal data, which the entity would otherwise be unable to access.

This will prevent employers from requiring a candidate or current employee to use his or her subject access rights under the DPA to obtain certain records and then provide them to the employer as a condition of employment. By way of example, this will affect those organisations that had been using enforced subject access requests submitted to the police to check individuals’ criminal and other protected records, rather than using the established legal channels.

Section 56 also has a second limb, affecting the provision of goods, services and facilities to the public. Under section 56(2), a person concerned with the provision of goods, facilities or services to the public must not make that provision conditional on an individual making a subject access request and providing their records. Since the restriction applies whether or not there is payment for the goods and services, it also covers services provided on a voluntary basis.

Going forward, if an organisation is interested in accessing criminal records, it will have to request a criminal records check. Bear in mind that once this information is processed, the organisation will then be a data controller for sensitive personal data, with all the compliance responsibilities that entails under the DPA.

The ICO recommends that, if it is necessary to conduct a criminal records check, detailed standard and enhanced checks can be done through the appropriate statutory procedures – the Disclosure and Barring Service (DBS) in England and Wales, Disclosure Scotland in Scotland and Access Northern Ireland in Northern Ireland – which were formerly known as ‘CRB checks.’

In England and Wales, committing an offence under section 56 of the DPA can carry an unlimited fine and the ICO has stated that it intends to actively prosecute those who continue to enforce subject access requests, to both protect individuals and encourage the use of the DBS. Further guidance on how not to fall foul of section 56 can be found in the ICO’s guide on enforced subject access.

French Supreme Administrative Court decision significantly broadens the scope of the French Sunshine Act

This post was written by Daniel Kadar. 

A decision of the French Supreme Administrative Court (Conseil d’Etat) dated 24 February 2015 has significantly broadened the scope of the French ‘Sunshine Act’:

  • Whereas initially health care companies were only obliged to disclose the existence of agreements with health care professionals (HCPs), they will now also be required to disclose the remuneration of French HCPs.
  • Companies which manufacture or market non-corrective contact lenses, cosmetic and tattoo products will have their transparency reporting duties aligned with the ones applicable to health care companies.

Further developments from the French authorities on this matter will need to be closely monitored, since they will probably bring significant changes to reporting requirements. In particular, the date of application of this new interpretation is key, since it could have a major impact on disclosure requirements for the remuneration of French HCPs.

Read more on this matter in our client alert.

Update on State Attorneys General: Connecticut Creates a Permanent Privacy Department; NAAG Covers Big Data, Cybersecurity, and Cloud Computing; and States Amend Breach Laws

This post was written by Divonne Smoyer and Christine N. Czuprynski.

The federal government may be pushing a cybersecurity and data privacy agenda, but that doesn’t mean that the states are taking a back seat. The state attorneys general are maintaining their focus on issues relating to privacy and data security and expanding the scope of that focus to address the ever-evolving nature of those issues.

On March 11, 2015, Connecticut Attorney General George Jepsen announced the creation of a Privacy and Data Security Department in his office, tasked with privacy and data security investigations and litigation. The attorney general, who created a privacy task force four years ago, hopes that this specialized department will solidify Connecticut’s role as a leader in this space. He is making the shift from a task force to a permanent department because the need for such a focus has not let up in the last four years, and shows no signs of doing so.

Privacy and data security are on the minds of the attorneys general as they come out of their most recent National Association of Attorneys General (NAAG) meetings and head into spring. The NAAG Southern Region Meeting, which concluded March 13, 2015, covered “Big Data – Challenges and Opportunities,” and included panels on data breach, cybersecurity, cloud computing and the proposal for a national data breach notification law.

NAAG President Mississippi Attorney General Jim Hood, whose presidential initiative for the 2014-15 year is “Protecting Our Digital Lives: New Challenges for Attorneys General,” will host the presidential initiative summit in mid-April in Biloxi, Mississippi. On the summit agenda: intellectual property theft, cloud computing, and digital currency.

In addition, state attorneys general are seeking to revise and expand upon existing data breach and privacy legislation. We have previously discussed the changes being considered in New York and Oregon. The Washington Attorney General is also pushing for changes to that state’s data breach notification law. Regulated entities can expect to continue to see a lot of action from the states on these issues. 

French courts are competent to hear a French Facebook user's complaint

This post was written by Daniel Kadar.

Few of the millions of people who use Facebook every day are likely ever to have looked at the social network’s Terms & Conditions.

Only readers of the fine print may know that these Terms & Conditions provide that any claim related to Facebook must be resolved exclusively in the United States District Court for the Northern District of California or a state court located in San Mateo County, and that the law of the State of California necessarily prevails, without regard to conflict of law provisions. This provision is intended to protect Facebook against claims brought by foreign users. It has recently been challenged in the French courts.

In a decision dated March 23, 2012, the Court of Appeal of Pau set aside Facebook’s forum clause, finding it unclear and difficult for users to read. On March 5, 2015, for the second time, the Paris Court of First Instance (Tribunal de Grande Instance) rejected Facebook’s challenge to its jurisdiction.

In this case, an art-loving schoolteacher had published on his wall a link to Gustave Courbet’s famous and provocative painting, “L’origine du monde,” representing a naked woman. Like many 19th century critics, Facebook found the display of a nude body on its network unacceptable and suspended the account. In 2011, after several unsuccessful requests for reactivation, the schoolteacher and former Facebook user filed a lawsuit against the company for violation of his free speech rights. Facebook used its Terms & Conditions as a shield and challenged the French courts’ jurisdiction. The clause was, however, declared null and void by the Paris Court of First Instance. The judges will now hear the parties’ arguments on the merits.

This approach is consistent with the French data protection authority’s (Commission Nationale de l’Informatique et des Libertés – CNIL) approach on jurisdiction: as soon as means for collecting, processing and transferring data are located in France, such as a computer or a tablet, the CNIL considers it has jurisdiction.

There is now a clear trend: defences based on jurisdiction clauses are becoming less and less effective. As a result, compliance with local regulation becomes key.

Ofgem's Smart Meter Network Decision: UK gas and electricity consumer privacy gets broader protection

This post was written by Kate Brimsted and Cynthia O'Donoghue.

In February 2015, Ofgem (the UK’s Office of Gas and Electricity Markets) published its Decision on Extending the Smart Meter Framework to Remote Meters (the Decision). This confirms that, following a public consultation, the privacy requirements embedded in the supplier licence terms, which govern suppliers’ use of customer data from “smart meters”, will apply to a wider class of meters.

Ofgem is a non-ministerial government department and an independent National Regulatory Authority, recognised by EU Directives. The UK Department of Energy and Climate Change (the DECC) is leading the implementation of “smart metering”; gas and electricity suppliers are required to roll out around 53 million smart meters, affecting every home and smaller business in Great Britain. The rollout is scheduled to be completed by 2020. Smart meters are expected to bring significant benefits. Consumers will have more information about their energy consumption, which should help them manage their usage more effectively. There will be improved and more accurate billing, easier and quicker switching between different methods of payment, and a wider range of payment options, including Internet-based prepayment top-up. Smart meters should also help to reduce costs for the industry and, ultimately, consumers.

However, smart meters can store much more detailed energy consumption data than traditional meters, and are capable of being read remotely. The DECC therefore originally introduced a regulatory framework for data access and privacy specifically for smart meters, including new supplier licence obligations (the Privacy Requirements), as well as obligations in the Smart Energy Code to complement the Data Protection Act 1998 and to ensure that consumers have control over the use of consumption data from their meters.

The Privacy Requirements require suppliers:

  • For domestic consumers, to get opt-in consent to obtain and use data at greater detail than daily reads, or to use any detail of consumption data for marketing
  • For domestic consumers, to get opt-out consent for access to consumption data up to daily detail (the supplier is required to notify the consumer of the data it plans to take and must not take the data if the consumer so requests)
  • For micro business consumers, to get opt-out consent for access to consumption data at greater detail than monthly

Alongside “smart meters”, there is a range of meters with similar functionality, such as “smart-type”, “advanced domestic”, “advanced” and “AMR” meters; Ofgem refers to these collectively as Remote Access Meters – i.e., any meter that is not a smart meter but which is able to send consumption data remotely to the supplier, either on its own or with an ancillary device. Ofgem’s Decision confirms that the Privacy Requirements will also apply to Remote Access Meters in the future, regardless of when they were installed.
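
To make the consent tiers concrete, here is a minimal illustrative sketch (in Python; the function, names and simplified rules are all hypothetical, and the actual licence conditions are considerably more detailed) of how a supplier’s systems might encode the Privacy Requirements above:

```python
from enum import Enum

class Granularity(Enum):
    MONTHLY = 1
    DAILY = 2
    HALF_HOURLY = 3  # finer than daily

def consent_required(customer_type: str, granularity: Granularity,
                     for_marketing: bool = False) -> str:
    """Map the Privacy Requirements onto a consent basis (simplified)."""
    if customer_type == "domestic":
        # Finer-than-daily detail, or any use for marketing, needs opt-in consent.
        if for_marketing or granularity.value > Granularity.DAILY.value:
            return "opt-in"
        # Up to daily detail: opt-out (notify the consumer; stop if asked).
        return "opt-out"
    if customer_type == "micro-business":
        # Finer-than-monthly detail: opt-out consent.
        if granularity.value > Granularity.MONTHLY.value:
            return "opt-out"
        return "no specific consent requirement"
    raise ValueError(f"unknown customer type: {customer_type}")

print(consent_required("domestic", Granularity.HALF_HOURLY))  # opt-in
print(consent_required("domestic", Granularity.DAILY))        # opt-out
print(consent_required("micro-business", Granularity.DAILY))  # opt-out
```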

PCI Security Standards Council Announces Revisions to the use of SSL

This post was written by Cynthia O’Donoghue.

The Payment Card Industry (PCI) Security Standards Council has released a bulletin on impending revisions to version 3.0 of the Payment Application Data Security Standard (PA-DSS) and version 3.0 of the PCI Data Security Standard (PCI-DSS), which we reported on in January 2014.

To ensure the continued protection of consumers’ payment data, the PCI Security Standards Council has announced changes that align with the National Institute of Standards and Technology’s finding that Secure Sockets Layer (SSL) v3.0 is no longer adequate because of inherent weaknesses within the protocol.

The finding means that no version of SSL meets the PCI Security Standards Council’s definition of “strong cryptography”. As a result, revised standards, PCI-DSS v3.1 and PA-DSS v3.1, will be published to reflect it.

The bulletin states that these revised standards will be “effective immediately, but impacted requirements will be future dated to allow organisations to implement the changes”. In the interim, organisations are encouraged to find out whether they are using SSL and, if so, to upgrade to a “strong cryptographic protocol as soon as possible”.
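
As a practical first step, an organisation can test what its servers actually negotiate. The sketch below is a minimal example (Python 3.7+, standard library only; the hostname is a placeholder) of a client that refuses anything weaker than TLS 1.2, in line with the move away from SSL, and reports the negotiated protocol:

```python
import socket
import ssl

def negotiated_protocol(host: str, port: int = 443) -> str:
    """Report the TLS version a server negotiates with a modern, strict client."""
    context = ssl.create_default_context()  # SSLv2/SSLv3 are already disabled
    # Refuse anything weaker than TLS 1.2, consistent with "strong cryptography".
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g., "TLSv1.2" or "TLSv1.3"

if __name__ == "__main__":
    print(negotiated_protocol("example.com"))  # placeholder host
```

A server that supports only SSL will fail the handshake against such a client, which is itself a useful signal that an upgrade is needed.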

These impending revisions should help organisations protect consumers’ data when processing payment card information.

NGOs may rely on UK's Journalism Exemption

This post was written by Cynthia O’Donoghue.

The UK Information Commissioner’s Office (the “ICO”), in a letter to Global Witness (in Steinmetz and others v Global Witness) (the “Letter”), stated that non-media organisations may rely on the special-purposes exemption for journalism in s32 of the Data Protection Act 1998 (the “DPA”) to withhold personal data in response to data subject access requests. This is the first time s32 of the DPA has been extended to non-media organisations.

In 2012, Global Witness – a non-governmental organisation – reported that four individuals at BSG Resources Ltd (“BSGR”), an international diversified mining company, had been involved in a bribery scandal leading to BSGR being assigned a licence for four blocks at Simandou iron ore mine in Guinea, West Africa (the “Simandou controversy”).

Beny Steinmetz, an Israeli, and three others requested access to the data that Global Witness held about them in an attempt to have Global Witness’ sources disclosed to them. When the data was not disclosed, Steinmetz and the others requested that the ICO determine whether Global Witness was entitled to claim to be protected under the DPA’s journalism exemption.

To rely on the exemption, Global Witness had to demonstrate that: (1) the personal data was being processed only for journalism, art or literature; (2) the processing took place with a view to publication of some material; (3) Global Witness had a reasonable belief that publication was in the public interest; and (4) it reasonably believed that complying with the access requests would be incompatible with journalism.

The ICO found that Global Witness had met all the elements, including the public interest test, because the Simandou controversy purportedly involved corruption and a $2.5 billion bribe of a government in one of the world’s poorest countries.

The ICO’s decision follows its recent 'Data Protection and journalism: a guide for the media' guidance, and is the first time use of the journalistic exception has been extended to public-interest reporting by persons other than professional journalists. 

Article 29 Working Party issues its Cookie Sweep Combined Analysis - Report

This post was written by Cynthia O’Donoghue and Katalina Bateman.

On 3 February, the Article 29 Data Protection Working Party (WP29) published its ‘Cookie Sweep Combined Analysis – Report’. The sweep was undertaken by the WP29 in partnership with eight European data protection regulators, including the UK’s ICO, France’s CNIL and Spain’s AEPD, in order to assess the steps currently taken by website operators to ensure compliance with Article 5(3) of Directive 2002/58/EC, as amended by Directive 2009/136/EC. The Report details the results of their assessment of the extent of the use of cookies, the level of information provided, and the control mechanisms in place.

The Report examines 250 websites selected as among the most frequently visited by individuals within each member state taking part in the sweep. Media, e-commerce and the public sector were chosen as target sectors, being those the WP29 considered to present the ‘greatest data protection and privacy risks to EU citizens’.

Highlights of the assessment include:

  • High numbers of cookies are being placed by websites. Media websites place an average of 50 cookies during a visitor’s first visit.
  • Expiry dates of cookies are often excessive. Three cookies in the sweep had been set with an expiry date of 31 December 9999, nearly 8,000 years in the future. Excluding such outliers, the average duration was between one and two years.
  • 26% of sites examined provide no notification that cookies are being used. Of those that did provide a notification, 50% merely inform users that cookies are in use without requesting consent.
  • 16% of sites give users a granular level of control to accept a subset of cookies, with the majority of sites relying on browser settings or a link to a third-party opt-out tool.

Since publishing the Report, the WP29 has made it clear in a Press Release that the results of the sweep “will be considered at a national level for potential enforcement action”. While the UK’s ICO has already stated that it intends to write to those organisations that are still failing to provide basic cookie information on their websites before considering whether further action is required, other European regulators have yet to comment on what actions they have planned.
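
For a rough sense of how such a sweep works in practice, below is a minimal first-pass audit sketch (Python, using the third-party requests library; the URL is a placeholder). It sees only cookies set via HTTP headers on the initial response chain, not those set later by JavaScript, so a full browser-instrumented sweep such as the regulators’ would find more:

```python
from datetime import datetime, timedelta, timezone

import requests  # third-party: pip install requests

EXCESSIVE = timedelta(days=2 * 365)  # the sweep found one to two years typical

def sweep(url: str) -> None:
    """List cookies set via HTTP headers on a first visit, flagging long expiries."""
    session = requests.Session()
    session.get(url, timeout=10)
    now = datetime.now(timezone.utc)
    for cookie in session.cookies:
        if cookie.expires is None:
            note = "session cookie"
        else:
            lifetime = datetime.fromtimestamp(cookie.expires, tz=timezone.utc) - now
            note = f"expires in {lifetime.days} days"
            if lifetime > EXCESSIVE:
                note += "  <-- excessive?"
        print(f"{cookie.domain}  {cookie.name}: {note}")

sweep("https://example.com")  # placeholder URL
```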

South Korean Communications Commission Releases Guidelines on Data Protection for Big Data

This post was written by Cynthia O'Donoghue and Philip Thomas.

In December 2014, the Korea Communications Commission (KCC) released the “Big Data Guidelines for Data Protection” (Guidelines). Aimed at Information and Communications Service Providers (ICSPs), they are designed to prevent the misuse of “publicly available information” to create and exploit new information. The Guidelines expressly permit ICSPs to collect and use “publicly available information”, within certain parameters.

“Publicly available information” is defined as “code, letters, sounds and images" that are “lawfully disclosed”; however, the Guidelines also cover Internet log information and transaction records.

According to the Guidelines, where such information includes personal information, the data must be de-identified before it may be collected, retained, combined, analysed or sold. The Guidelines also include a number of specific measures for ICSPs to take in connection with their collection and use of such information.

These measures include a duty to disclose their big data processing activities and policies to users, and to inform users of their rights to opt out. Other provisions include a prohibition on the collection, analysis and exploitation of sensitive information, and an obligation to ensure that information collected and used remains de-identified.
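
The Guidelines do not prescribe a particular de-identification technique. One common building block is a keyed hash (HMAC), which replaces direct identifiers in records before they are retained, combined or analysed; the sketch below (Python, with hypothetical field names) illustrates the idea. Note that this alone is pseudonymisation rather than full de-identification: quasi-identifiers elsewhere in the record may still need to be removed or generalised.

```python
import hashlib
import hmac
import os

# The key must be stored separately from the data set; anyone holding only the
# pseudonymised records cannot reverse them or re-link them to individuals.
SECRET_KEY = os.urandom(32)

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

log_entry = {"user_id": "user@example.kr", "page": "/news", "duration_s": 42}
safe_entry = {**log_entry, "user_id": pseudonymise(log_entry["user_id"])}
print(safe_entry)
```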

Before the Guidelines were introduced, the right to collect and use such information in South Korea was widely considered to be a grey area. The KCC has therefore provided much-needed clarification in this area. The Guidelines strike a balance between protecting personal information, on the one hand, and recognising the growth of the big data industry on the other.

China's State Administration for Industry and Commerce Releases Measures Defining Consumer Personal Information

This post was written by Cynthia O'Donoghue and Zack Dong.

In January, China’s State Administration for Industry and Commerce (SAIC) released its ‘Measures on Penalties for Infringing Upon the Rights and Interests of Consumers’ (Measures) which are due to take effect March 15, 2015.

These Measures flesh out China’s Consumer Rights Protection Law (CRPL), which was amended in March 2014, and provide guidance on how companies may collect, use and protect the personal information of consumers.

The Measures helpfully define “consumer personal information” – something the amendments to the CRPL had failed to do – as “information collected by an enterprise operator during the sale of products or provision of services, that can, singly or in combination with other information, identify a consumer.”

The Measures provide additional clarity with examples of consumer personal information, such as a consumer’s name, gender, occupation, birth date, identification card number, residential address, contact information, income and financial status, health status, and consumer status. This definition is a welcome addition in the midst of China’s patchwork of privacy rules and regulations.

Violations of the Measures may result in significant penalties. The Measures state that the SAIC and its local Administrations of Industry and Commerce may impose a fine of up to RMB 500,000 where there are no illegal earnings. Where there are illegal earnings, however, they may issue fines of up to 10 times the amount of those earnings and confiscate the earnings themselves.

It is hoped that these new Measures, in combination with the CRPL, will help to repair consumer trust in Chinese companies and protect against the improper use, disclosure and sale of consumers’ personal information in the country.

EU Art. 29 Working Party Letter on Health Data and Apps

This post was written by Cynthia O'Donoghue.

The EU Article 29 Working Party (“WP29”) has published a letter to the European Commission (“EC”) on the scope of health data in relation to lifestyle and well-being apps. The letter follows the EC’s Working Document on mHealth and the outcome of its public consultation, which generated interest in strong privacy and security tools and in strengthened enforcement of data protection.

In the letter, WP29 addresses the exceptions to processing health data for historical, statistical or scientific research, and requests that the EC ensure that any secondary processing of health data only be permitted after having obtained explicit consent from individuals.

The Annex to the letter acknowledges that determining the scope of health data is particularly complex and can have a wider interpretation depending on context, and is likely to capture apps measuring blood pressure or heart rate – exactly the types of apps that are already widely available.

The Annex makes recommendations for those gray areas where it is not always clear whether personal data is medical data, and gives examples of possible indicators to consider, such as the intended use of the data and whether, if combined with other data over time, it would be possible to create a profile about the health of an individual – for example, risks related to illness, weight gain or loss and the consequential health issues that may arise, or an indication of heart disease. To be considered ‘medical data’, the WP29 states, there has to be a relationship between the raw data set collected through the app and the ability to determine a health aspect of a person, either from the raw data itself or when that raw data is combined with other data (irrespective of whether these conclusions are accurate or not).

Finally, WP29 suggests that the data protection exception relating to further processing of health data for historical, statistical and scientific purposes should be limited to research that serves high public interests, cannot otherwise be carried out or where other safeguards apply, and where individuals may opt out.

The WP29’s view is likely to capture most of the existing apps relating to well-being, which many organizations may until now have considered to be outside the scope of the additional protections afforded to sensitive data.

Google signs UK Undertaking to Improve its Privacy Policy

This post was written by Cynthia O'Donoghue.

On 30 January 2015, Google signed an Undertaking with the Information Commissioner’s Office (ICO) to improve and amend the Privacy Policy it adopted 1 March 2012.

Among other things, the modifications to the Privacy Policy allowed Google to combine personal data across all services and products. For example, personal data collected through YouTube could now be combined with personal data collected through Google Search.

The Undertaking requires Google to address three of the ICO’s particular concerns: (1) the lack of easily accessible information describing the ways in which service users’ personal data is processed by Google; (2) vague descriptions of the purposes for which the personal data is processed; and (3) insufficient explanation of technical terms to service users.

In order to address these issues, Google states in Annex 1 of the Undertaking that it will, inter alia: enhance the accessibility of its Privacy Policy to ensure that users can easily find information about its privacy practices; provide clear, unambiguous and comprehensive information regarding data processing, including an exhaustive list of the types of data processed by Google and the purposes for which data is processed; and revise its Privacy Policy to avoid indistinct language where possible.

Google has a period of two years in which to implement these changes, and it must provide a report to the ICO by August 2015, specifying the steps Google has taken in response to the commitments set out in the Undertaking.

The ICO’s measures in response to Google’s breach of national data protection laws are much lighter than those taken by other EU Member States. The data protection authorities in France (CNIL) and Spain (AEPD) have imposed fines of €150,000 and €900,000 respectively. Currently, the Dutch data protection authority is threatening Google with a €15 million fine (see our previous blog).

New Data Protection Laws in Africa

This post was written by Cynthia O'Donoghue.

In recent years, the number of African countries which have enacted privacy frameworks or are planning data protection laws has vastly increased.

Currently, 14 African countries have privacy framework laws and some form of data protection authority in place. Once the African Union Convention on Cyber Security and Personal Data Protection (Convention) is ratified across the continent, many other nations will likely enact personal data protection laws.

Currently, seven African countries have data protection bills pending: Kenya, Madagascar, Mali, Niger, Nigeria, Tanzania, and Uganda. Many analysts believe that the Convention seeks to replicate the European Union data protection model, whereby each country has its own national data protection laws and authority.

Despite these developments, the Convention still leaves many important areas without guidance. For instance, it fails to define what is meant by “consent” and “personal data”, or the “legitimate grounds” on which individuals can object to the processing of their information.

The international human rights advocacy group, Access, welcomes these changes, but stresses that “change won’t happen overnight”, and that “it will likely be a few years” before countries enact laws to implement the Convention.

FAA Takes One Small Step Toward Legalizing Commercial Use of Small Unmanned Aircraft Systems, a.k.a. Drones

This post was written by Patrick Bradley, Mark Melodia and Paul Bond.

The Federal Aviation Administration (FAA) has long been studying the promise and perils of small unmanned aircraft systems (“UAS”), a.k.a. drones. The commercial potential of UAS technology is clear. Businesses are eager to use UAS to do everything from covering traffic accidents to taking real estate and wedding photos to delivering small parcels. However, the FAA currently prohibits any commercial or business use of UAS unless the operator obtains specific permission from the FAA. Permission is granted only on a case-by-case basis, greatly restricting businesses from adopting UAS.

This framework remains in place, but the FAA has now issued a Notice of Proposed Rulemaking (NPRM). If adopted, the proposed rules would provide some rules of the sky for UAS and real regulatory relief to businesses. However, estimates are that adoption of even these first-step rules may be as far as two years out.

The FAA’s proposed rule would set forth several requirements as to operator certification, airworthiness, registration, and operation. As to operation, potentially significant restrictions include a requirement that UAS only operate in the daylight at or below 500 feet above the ground; that the operator maintain a line of sight with the UAS during operation; and that an operator only operate one UAS at a time. Drones would not be allowed to fly over any people not directly involved with the operation of the drone.

Currently, prospective commercial drone operators are required to hold at least a private pilot certificate. That would change under the new rules: commercial UAS operators would need to pass an initial FAA knowledge test, with biennial knowledge exams thereafter. Transportation Security Administration approval would also be required under the rules. Commercial drone operators would not be required to undergo an FAA medical exam.

The FAA rules do not call for the imposition of airworthiness requirements on drones, but drones would be required to be registered with the FAA and would carry N numbers like other aircraft. The pilot would need to do a preflight inspection before every flight, and accidents must be reported.

The proposed rule would not apply to:

“(1) air carrier operations; (2) external load and towing operations; (3) international operations; (4) foreign-owned aircraft that are ineligible to be registered in the United States; (5) public aircraft; (6) certain model aircraft; and (7) moored balloons, kites, amateur rockets, and unmanned free balloons.”

As to privacy, the FAA notes:

“The FAA also notes that privacy concerns have been raised about unmanned aircraft operations. Although these issues are beyond the scope of this rulemaking… the Department and FAA will participate in the multi-stakeholder engagement process led by the National Telecommunications and Information Administration (NTIA) to assist in this process regarding privacy, accountability, and transparency issues concerning commercial and private UAS use in the NAS. We also note that state law and other legal protections for individual privacy may provide recourse for a person whose privacy may be affected through another person’s use of a UAS.”

At the same time that the FAA released this NPRM, the White House issued a Presidential Memorandum to all federal agencies, setting forth administration priorities for the NTIA process and all agency rulemaking.

Comments on the FAA’s NPRM will be open for 60 days after it is published in the Federal Register.

Ofcom Publishes Plan To Support the Internet of Things

This post was written by Cynthia O'Donoghue and Angus Finnegan.

In January, Ofcom, the UK telecommunications regulator, published its Statement on ‘Promoting investment and innovation in the Internet of Things’ (Statement). The Statement acknowledges that the Internet of Things (IoT) has the potential to deliver significant benefits to citizens and consumers. In light of this, Ofcom sought views from its stakeholders on what role Ofcom might play to support the growth and innovation of the IoT.

The Statement identifies four priority areas to help support the growth of the IoT: data privacy, network security and resilience, spectrum availability, and network addresses.

Ofcom identifies data privacy as the ‘greatest single barrier to the development of the IoT’. Respondents were concerned about issues such as citizens’ and consumers’ lack of trust in sharing personal data.

To address such issues, the Statement proposes the implementation of a common framework to allow consumers to easily and transparently authorise the conditions under which data collected by their devices is used and shared with others. The Statement recommends industry-led approaches to keeping consumers in control, agreed internationally where possible.

In order to foster innovation and facilitate progress on these issues at both a national and international level, Ofcom proposes to work closely with government, the Information Commissioner’s Office, other regulators, and industry.

This Statement follows the EU Article 29 Working Party’s Opinion on ‘Recent Developments on the Internet of Things’ which we reported on in January 2015.

German Data Protection Commissioners Take Action Against Safe Harbor

This post was written by Cynthia O’Donoghue, Thomas Fischl & Katharina Weimer.

At the Data Protection Conference in Berlin, the Berlin and Hamburg Data Protection Commissioners (Commissioners) made a number of important announcements regarding the ‘inadequacy’ of the EU/U.S. Safe Harbor Program.

Both Dr. Alexander Dix and Prof. Johannes Caspar, Commissioners for Berlin and Hamburg respectively, asserted that U.S. companies do not protect data to the same level as EU companies do, even when they certify that they will adhere to the Safe Harbor provisions. In addition, the Data Protection Authorities (DPAs) stated that there may be inadequate enforcement of the Safe Harbor Program by the Federal Trade Commission. Speaking on behalf of his colleagues from 16 German states, Dr. Dix went so far as to say that:

“The Safe Harbor agreement is practically dead, unless some limits are being placed to the excessive surveillance by intelligence agencies”.

Dr. Dix further announced that the German DPAs in Berlin and Bremen have initiated administrative proceedings against two U.S. companies that base their data transfers on the EU/U.S. Safe Harbor Program. In these proceedings, the German DPAs have expressed their intention to stop the data transfers for a limited time. Some commentators have suggested that an actual suspension of data transfers may lead to a court decision, which could deny the supervisory authorities’ competence to suspend data transfers.

Other speakers, such as Paul Nemitz, Director for fundamental rights and union citizenship at the Directorate-General Justice of the European Commission, stressed that “there is an economic incentive to make Safe Harbor work”. However, in order for trans-Atlantic businesses to flourish, organisations need to be more transparent.

In light of these developments, global organisations may wish to consider alternative approaches to the Safe Harbor Program, such as EU Model Clauses, for data transfers from European jurisdictions, such as from Germany to the United States.

Senators Trying to Hit the Brakes on Smart Cars, Citing Privacy and Security Concerns

This post was written by Mark Melodia and Frederick Lah.

On February 11, Sens. Ed Markey (D-Mass.) and Richard Blumenthal (D-Conn.) announced that they would introduce legislation intended to address the data privacy and security vulnerabilities with Internet-connected cars. The legislation, if passed, would require manufacturers to adhere to a number of security and privacy standards, including the following:

  • Requirement that all wireless access points in the car are protected against hacking attacks, evaluated using penetration testing
  • Requirement that all collected information is appropriately secured and encrypted to prevent unwanted access
  • Requirement that the manufacturer or third-party feature provider be able to detect, report and respond to real-time hacking events
  • Transparency requirement that drivers are made explicitly aware of data collection, transmission, and use of driving information
  • Consumers can choose whether data is collected without having to disable navigation
  • Prohibited use of personal driving information for advertising or marketing purposes

The legislative proposal served as a follow-up to an earlier report by Sen. Markey, “Tracking & Hacking: Security & Privacy Gaps Put American Drivers at Risk.” That report was based on the responses from 16 major automobile manufacturers to questions posed by the senator about how driver information is collected and used, and the potential security risks with wireless technologies in cars. The report found that large amounts of personal driver information – including geographic location, destinations entered into a navigation system, parking history locations, and vehicle speed – are collected without the drivers being clearly informed as to how that information will be used. In most cases, the information is shared with third-party data centers, the report said. Further, the report found that nearly 100 percent of cars on the market include wireless technologies that could pose vulnerabilities to hacking intrusions, and that most manufacturers were unaware or unable to report on past hacking incidents.

In addition to Sen. Markey’s report, the FTC highlighted the potential security and privacy risks with connected cars in its recent Internet of Things Staff Report, which we previously covered here. While acknowledging the many safety and convenience benefits of smart cars, the FTC also shared Sen. Markey’s concern about their potential vulnerabilities.

In response, the industry, led by two major automobile coalitions, has adopted self-regulatory privacy principles. In November 2014, 19 U.S. car companies made a commitment to incorporate a series of self-regulatory “Consumer Privacy Protection Principles for Vehicle Services” in their vehicles no later than model year 2017. In a letter sent to the FTC, the participating manufacturers said the “privacy principles” would be applied to their vehicles’ technologies and services, such as roadside assistance and navigation services, and would provide a baseline for privacy commitments. The principles include provisions for transparency, choice, respect for context, data minimization, de-identification, data security, integrity, access, and accountability. Sen. Markey said in a statement that these self-regulatory principles were a good first step, but that they did not go far enough in terms of choice and transparency.

As more and more cars join the Internet of Things, legislators and regulators will continue to scrutinize their potential privacy and security risks. Any road forward – whether legislative or self-regulatory – must carefully balance the many benefits offered by smart cars with their potential risks. In the meantime, car manufacturers (and their third-party service and technology providers) should continue to monitor this area for legislative developments and start taking steps to implement the self-regulatory principles.

Courts Continue To Find That Unique Device Identifiers Are Not Personally Identifiable Information (PII) Under The Video Privacy Protection Act (VPPA)

This post was written by Lisa Kim and Alan Drosdick.

Two recent federal district court rulings regarding the Video Privacy Protection Act (VPPA) follow the emerging trend of decisions indicating that courts are reluctant to find violations of the VPPA for sharing anonymous identification markers with third parties (see May 5, 2014 blog post; June 20, 2014 blog post).

On January 20, 2015, a district court judge in New Jersey dismissed with prejudice a VPPA action against Viacom, Inc. (“Viacom”), holding that disclosure of anonymous user information to Google, Inc. (“Google”) was not actionable because such information did not constitute “personally identifiable information” (“PII”) as defined under the VPPA.

In In re Nickelodeon Consumer Privacy Litigation, plaintiffs alleged that Viacom operated websites for children, and encouraged users to create personal profiles on them. Viacom then assigned each user a code name, and collected certain information about each user, including gender and birthday. Viacom also placed cookies on a user’s computer, and allowed Google to place similar cookies, that collected further information, such as IP address, device and browser settings, and web traffic. On these websites, users were able to stream videos and play games, and a record was created of the name of each video each user played. Plaintiffs alleged that Viacom shared this information with Google, and both Viacom and Google used the information to target advertising at the user. Plaintiffs claimed that this practice of sharing information without users’ consent violates the VPPA.

The court found that nothing in the VPPA or its legislative history suggested that PII included anonymous user IDs, gender and age, or data about a user’s computer. Plaintiffs argued that Google, because it already had so much general information at its disposal, could use the information garnered from Viacom to ascertain personal identities. The court disagreed, confirming that PII is information which must, without more, itself link an actual person to actual video materials. Because the user information Viacom disclosed was not PII, no violation of the VPPA occurred, and the court dismissed the claim with prejudice.

Similarly, on January 23, 2015, a district court judge in Georgia dismissed with prejudice a VPPA action against Dow Jones & Company, Inc. (“Dow Jones”), holding that the disclosure of the plaintiff’s Roku device serial number was not actionable because the Roku device serial number did not qualify as PII.

In Locklear v. Dow Jones & Company, Inc., the plaintiff alleged that she downloaded and began using the Wall Street Journal Live Channel (“WSJ Channel”), offered by Dow Jones, on her Roku device. Each time the plaintiff viewed a video clip using the WSJ Channel, Dow Jones disclosed, without her consent, her anonymous Roku device serial number and video viewing history to mDialog, a third-party analytics and advertising company. mDialog, using demographic data from Roku and other such entities, was able to identify the plaintiff and attribute her video records to her. The plaintiff alleged that this practice violated the VPPA.

The court dismissed the case with prejudice, finding that the Roku device serial number did not qualify as PII. Declaring the fact pattern indistinguishable from that presented in Ellis v. Cartoon Network, Inc. (see October 13, 2014 blog post), the court again defined PII as information which must, without more, itself link an actual person to actual video materials. Because mDialog had to take further steps, by turning to other sources beyond Dow Jones, to identify the user, Dow Jones’s disclosure of plaintiff’s anonymous Roku device serial number did not constitute a violation of the VPPA.

These rulings continue to demonstrate that courts are unwilling to enlarge the scope of the VPPA to cover the sharing of anonymous identification numbers or code names alone. Nevertheless, companies utilizing unique device identifiers in connection with video materials should use caution in what information they share with others.

Finland Introduces New Information Society Code

This post was written by Cynthia O'Donoghue and Katalina Bateman.

The Information Society Code (2014/917) (Code) – a new act in Finland on electronic communications, privacy, data security, communications, and the information society in general – took effect 1 January.

The Code consolidates 10 existing acts into one, including Finland’s Communications Market Act; Act on the Protection of Privacy in Electronic Communications; Domain Name Act; Act on Radio Frequencies and Telecommunications Equipment; Act on the Measures to Prevent Distribution of Child Pornography; and Act on Television and Radio Operations.

Besides simplifying existing rules and increasing regulatory powers over the information society, the Code makes three significant changes:

  1. Extending Confidentiality Obligations

The Code extends the obligation to protect the confidentiality of communication from traditional telecom companies to ALL intermediaries of electronic communications services.

Under the changes, social media companies must now ensure that users of their messaging services get the same standards of privacy and security as other, already regulated, sectors, such as telecommunications companies.

  2. Extraterritorial Application

The Code’s scope has been increased, allowing the extraterritorial application of its rules. It now also covers companies based outside the EU that offer services in Finland. The obligation on operators to maintain the information security in connection with their services will apply where (1) an operator is based in Finland; (2) the communications network or other equipment to be used in the business operation are located or maintained in Finland; or (3) the services are offered in Finnish or are otherwise targeting Finland or Finns.

  3. Joint Liability

The Code introduces a new obligation whereby a telecom operator and a service provider can be held jointly liable for a defect in the provision of a service. In a big drive on consumer protection, the Code provides that, where consumers order and pay for products and services via their mobile phones, telecom operators and the companies selling those products or services share accountability.

With its focus on transparency and accountability, and its extraterritorial application, the Code reflects many aspects of the upcoming EU General Data Protection Regulation, and is a clear enhancement of Finland’s laws on information security.

In Nevada Court, Millions of Dollars Wasted in the Name of Macau Data Privacy Law

This post was written by J. Joan Hon.

Clark County, Nevada District Judge Elizabeth Gonzalez is considering further sanctions against Sands China Ltd. for redacting “personal information” from about 2,600 documents the company produced in 2013 as part of an ongoing wrongful termination suit first filed in 2010 by Steven Jacobs, the former president of Sands Macau. Jacobs alleges that he was wrongfully fired for refusing to engage in unlawful acts, including promoting prostitution and spying on Chinese politicians in order to find potentially embarrassing information for use in obtaining favorable treatment for the casino.

Jacobs sought the production of about 100,000 emails and other documents from Las Vegas Sands Corp. and Sands China in order to show that Las Vegas Sands controlled Sands China, and that the Nevada court therefore has jurisdiction over Sands China. In 2012, Judge Gonzalez ruled that neither defendant could raise the Macau Personal Data Protection Act (the “Macau PDPA”) – which tracks the European Union’s Data Protection Directive of 1995 more closely than any other data protection law in Asia – as an excuse to refuse disclosure. The ruling was made after it was learned that “significant amounts of data from Macau related to Jacobs was transported to the United States” and reviewed by in-house counsel for Las Vegas Sands and outside counsel. The defendants had tried to conceal the existence of the transferred data, and were subsequently ordered to make a $25,000 contribution to the Legal Aid Center for Southern Nevada, and to pay Jacobs’ legal fees for nine “needless” hearings involving issues related to the Macau PDPA.

Unable to avoid disclosure of the documents, Sands China then spent US$2.4 million to redact them, insisting that it would otherwise face civil and criminal penalties, including possible imprisonment of the company’s officers and directors. David Fleming, general counsel for Sands China, testified that Macanese officials “were furious” about the prior release of data from the region.

In 2012, the Macau Office for Personal Data Protection (“OPDP”) had begun an investigation into potential violations related to the alleged transfer of “certain data” from Sands China to the United States without permission, but to date, the government office has made no statement on any outcomes of this probe. Typically, the maximum fine per violation would be 80,000 patacas (US$10,000) and the maximum jail sentence would be two years.

Judge Gonzalez rejected arguments made by Sands China and is currently considering further sanctions against the defendants.

According to the Macau PDPA, “[t]ransfers of personal data to any destination outside the Macau SAR is prohibited unless an adequate level of protection is guaranteed by the legal system of the country where the data is transferred, and such determination is left under the discretion of the OPDP.”

Wynn Macau Fined by Macau OPDP in Relation to FCPA Investigation

In 2013, Wynn Macau Ltd. was fined 20,000 patacas (US$2,500) by the Macau OPDP for unauthorized transfers of customer information to its parent. The information was used in an investigation into whether an executive at the parent company violated the Foreign Corrupt Practices Act. The data included customer relationships and entertainment expenses, and involved officials from another country, and thus its transfer violated Macau’s Personal Data Protection Act, according to the OPDP.


Australian Data Protection Authority Issues Guidelines On Securing Personal Information

This post was written by Cynthia O’Donoghue and Katalina Bateman.

On 19 January 2015, the Australian data protection authority, the Office of the Australian Information Commissioner (OAIC), released an updated information security guide, the ‘Guide to securing personal information’ (Guide). The Guide aims to help organisations meet their data security obligations under the Australian Privacy Principles (APPs) introduced by Australia’s Privacy Amendment (Enhancing Privacy Protection) Act 2012.

The Guide provides guidance on, and practical examples of, the “reasonable steps” entities are required by law to take to protect personal information and to dispose of it when it is no longer needed. While the Guide is not legally binding, the OAIC will refer to it when conducting its compliance assessment functions.

Among the recommendations for organisations incorporating a privacy framework are to:

  • Conduct a Privacy Impact Assessment (PIA),
  • Conduct an information security risk assessment to inform any PIAs, and
  • Establish a privacy “governance body” that defines and implements information security measures.

Part A of the Guide recognises that a range of circumstances and factors may affect the assessment of what constitutes “reasonable steps.” Such circumstances include the amount and sensitivity of the personal information involved; for example, where a high volume of sensitive data is being collected, the Guide recommends the deployment of higher levels of protection.

Steps and strategies that may be reasonable for an organisation to take are outlined in Part B of the Guide. The Guide proposes that organisations should consider the following steps to protect personal information: governance, culture and training; internal practices, procedures and systems; ICT security; access security; third party providers; data breaches; physical security; destruction or de-identification of personal information; and standards.

Perhaps nothing particularly new or innovative… but together with those guidelines published by other data protection authorities worldwide, this Guide can be a useful aid to those organisations looking to assess their security risks and take steps in planning, implementing, and reviewing measures to improve their data security.

FTC Report Offers Privacy and Security Guidance for 'Internet of Things'

This post was written by Frederick Lah.

On Tuesday, January 27, the FTC issued a 71-page Staff Report on the privacy and security issues with the Internet of Things. As we’ve noted in our previous blog posts, the Internet of Things (“IoT”) refers to the growing ability of everyday devices to monitor and communicate information through the Internet. This FTC Staff Report follows up on the FTC’s public workshop over concerns with the IoT, as well as the FTC’s first enforcement action brought in September 2013.

Click here to read the full post on our sister blog AdLaw By Request.

European Banking Authority Releases Internet Payment Guidelines

This post was written by Cynthia O’Donoghue.

The European Banking Authority (EBA) released ‘Final guidelines on the security of internet payments’ (Guidelines). These Guidelines are based on the work published by the European Forum on the Security of Retail Payments (SecuRe Pay) and set the minimum security requirements that Payment Services Providers (PSPs) in the EU will be expected to implement by 1 August 2015.

Internet payment services covered by the Guidelines include the execution of card payments; the execution of credit transfers; the issuance and amendment of direct debit electronic mandates; and transfers of electronic money between two e-money accounts.

In particular, the Guidelines emphasise the importance of PSPs’ role in providing assistance and guidance to their customers on the secure use of Internet payment services. Among other things, the EBA expects PSPs to adopt formal security policies; to conduct and regularly update security risk assessments; and to strengthen customer identification, authentication and enrolment processes.

Included within the Guidelines is a list of best practice examples which PSPs are encouraged, but not required, to adopt. One best practice example for strong customer authentication includes ensuring that there are elements linking the customer authentication to a specific amount and payee. The technology used in linking the two sets of data should be tamper-resistant and could help to provide customers with increased certainty when authorising payments.
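The Guidelines do not mandate any particular technology for this linking. Purely as an illustration, the concept can be sketched with a keyed message authentication code that binds the customer’s authentication to one specific amount and payee; the key handling, message format, and function names below are our own assumptions rather than anything taken from the Guidelines:

    import hashlib
    import hmac

    def sign_transaction(shared_key: bytes, amount: str, payee: str, nonce: str) -> str:
        # The MAC is computed over the exact amount and payee, so altering
        # either value in transit invalidates the authentication code.
        message = f"{amount}|{payee}|{nonce}".encode("utf-8")
        return hmac.new(shared_key, message, hashlib.sha256).hexdigest()

    # Illustrative use: the resulting code authorises this payment and no other.
    code = sign_transaction(b"per-customer-secret", "125.00 EUR", "ACME Ltd", "000042")

Because the code is valid only for that exact amount/payee pair, a customer who verifies the displayed transaction details before confirming gains the “increased certainty” the Guidelines describe.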

These Guidelines are particularly welcome in light of the high levels of fraud in Internet payments. The latest ECB reports suggest that card fraud on Internet payments alone caused €794 million of losses in 2012 (a rise of 21.2% from 2011).

N.Y. AG Seeks To Have the 'Strongest, Most Comprehensive' Data Security Law in Nation

This post was written by Mark S. Melodia, Anthony J. Diana, and Frederick Lah.

Last week, New York Attorney General Eric Schneiderman announced that he would propose a new data security law in his state that would require companies to adopt increased safeguards for the protection of personal information. The bill, if passed, would broaden the scope of information that companies are responsible for protecting, and would require stronger technical and physical security measures. Specifically, the bill would apply to all entities doing business in New York that collect and store private information, and would require such entities to have reasonable security measures in place, including:

  • Administrative safeguards to assess risks, train employees and maintain safeguards
  • Technical safeguards to (i) identify risks in their respective network, software, and information processing, (ii) detect, prevent and respond to attacks, and (iii) regularly test and monitor systems controls and procedures
  • Physical safeguards to have special disposal procedures, detection and response to intrusions, and protect the physical areas where information is stored

Under the law, entities that obtain annual, independent third-party audits and certifications showing compliance with the state’s data security requirements would receive a rebuttable presumption, for use in litigation, of having reasonable data security measures in place. To incentivize companies to adopt tougher data security measures, the new bill would also include a safe harbor provision for companies that certify that they have implemented heightened data security standards. To qualify for the safe harbor, entities would be required to categorize their data systems based on the risk a data breach poses to the data stored, and then to implement and follow an appropriate data security plan that takes those risks and other factors into account. If this standard is met, the entity would need to obtain a certification, though it is not yet clear from whom the certification would need to be obtained. Upon obtaining the certification, the entity would be granted the benefit of a safe harbor that may eliminate its liability entirely under the law.

In addition, the proposed law would amend the state’s existing data breach notification law to include in the definition of "private information" the combination of an email address and password, the combination of an email address with a security question and answer, medical data, and health insurance information (entities are currently not required under the law to notify consumers of a breach of any of these types of information).

The attorney general shared his ambitious goal for the bill, saying that he envisions that the "new law will be the strongest, most comprehensive in the nation." Citing the high number of data breaches last year, he said that he wanted New York's law to serve as "a national model for data privacy and security." While a copy of the proposed legislation is not yet publicly available, we envision that it will bear some similarities to Massachusetts' Data Protection Regulations in that both set forth specific minimum standards that companies are required to take in connection with the safeguarding of personal information. We have previously covered some of the requirements under the Massachusetts Regulations here. With President Obama also pushing his own privacy and cybersecurity agenda, 2015 could potentially result in a drastic change in the privacy law landscape. We will be following these legislative developments closely.

Turkish Parliament Approves E-Commerce Law

This post was written by Cynthia O’Donoghue and Kate Brimsted.

Turkey’s Parliament has approved Law No. 6563 on the Regulation of Electronic Commerce (Law) aimed at creating a more secure, transparent and accessible e-commerce environment. The Law is expected to come into force 1 May 2015.

The Law covers electronic communications, liabilities of service providers, contracts concluded electronically, and the information provided to consumers, as well as unsolicited electronic messages.

One of the key provisions under the Law requires service providers to: (a) clearly identify the terms of the contract and on whose behalf it is sent; (b) state the trade associations of which it is a member, the rules of conduct for the profession, and how the recipient may access these electronically; (c) give up-to-date and easy-to-access identifier information before a contract is concluded; and (d) state whether the concluded contract will be kept by the service provider and whether it will be accessible by the recipient and, if so, for how long. This information must be clearly communicated before and after the formation of the contract if such contract is entered into electronically.

The Law also aims to put a stop to unsolicited SMS and email messages by introducing a new opt-in and opt-out regime. The opt-in system, also favoured in the EU, requires prior consent to be obtained from individual consumers before any commercial electronic messages may be sent. The consent requirement does not apply to business-to-business marketing. Failure to comply could result in a penalty ranging from TL 1,000 to TL 5,000 (and up to 10 times the original fine for repeat offenders).

In introducing the opt-out system, the Law stipulates that recipients must be able to unsubscribe from commercial electronic messages at any time, free of charge, and without having to give a reason for refusing further communications.

Turkey still lacks a comprehensive data protection law, but this new law takes the country a step closer to both providing transparency to consumers, and seeking to facilitate e-commerce.

OECD Releases Guidance for Digital Consumer Products

This post was written by Cynthia O’Donoghue.

The Organisation for Economic Cooperation and Development (OECD) released Consumer Policy Guidance on Intangible Digital Content Products (Guidance) for protecting online consumers of digital content.

With the expansion of the Internet and mobile devices, digital content has grown considerably. The OECD recognizes that this has brought consumers considerable benefits, “including ready access to a wide range of high-quality products, often at reduced costs”. It has also created issues that the OECD believes “countries and business now need to address”.

According to the Guidance, consumers acquiring and using intangible digital content products face several challenges, including, among others: inadequate information disclosure, and misleading or unfair commercial practices.

The Guidance provides recommendations to address six issues concerning:

  • Digital content product access and usage conditions
  • Privacy and security
  • Fraudulent, misleading and unfair commercial practices
  • Children
  • Dispute resolution and redress
  • Digital competence

The recommendations include provisions relating to privacy and security, and address fraudulent, misleading and unfair commercial practices. In particular, the OECD suggests that terms and conditions should be made available to consumers as early as possible in the transaction, and that consumers be provided with clear information about the collection, storage and use of their personal data, including steps consumers can take to manage their data.

In addition, the Guidance addresses children’s advertising and recommends that businesses have mechanisms in place to prevent children from making in-app or digital content purchases without parental consent.

The OECD has also called for effective dispute resolution and redress mechanisms.

Given the growth in the digital market in which businesses now operate, this Guidance calls for governments, businesses and other stakeholders to work collectively to develop education and awareness programs to facilitate consumer use of digital content. Importantly, the OECD acknowledges that protection of consumers and of their personal data should form the core of any legal framework, and should be read in conjunction with the OECD’s Privacy Principles, which tend to form the basis of the data protection and privacy laws in nearly 140 countries.

FTC Chairwoman Rings in the New Year with 'Internet of Things' Warning

This post was written by Frederick Lah and Sulina D. Gabale.

While hundreds of tech companies are racing to develop the newest in Internet-connected “smart” devices, Federal Trade Commission (“FTC”) Chairwoman Edith Ramirez is sending a reminder to those companies of their responsibilities to consumers. At the 2015 Consumer Electronics Show held in Las Vegas, January 6-9, Chairwoman Ramirez highlighted some best practices to address the vast array of consumer privacy risks posed by the “Internet of Things.”

The “Internet of Things” refers to the growing ability of everyday devices to monitor and communicate information through the Internet. For example, mobile phones are used for far more purposes than originally intended by Mr. Alexander Graham Bell. They have become integral to our daily lives: waking us up in the morning, feeding us the news on our commute to work, and tracking our sleep patterns at night via Bluetooth technology.

However, with the widespread use of innovative “smart” technology comes a swath of potential privacy concerns for consumers and companies alike. In her speech, Chairwoman Ramirez warned that the data collected from these “smart” devices “will present a deeply personal and startlingly complete picture of each of us—one that includes details about our financial circumstances, our health, our religious preferences, and our family and friends.” In response to the risk of potential misappropriation of consumer data, the FTC is calling for companies to mitigate privacy risks and embrace principles of “security by design” and “data minimization,” where companies only collect requisite information for a specified purpose and then safely and immediately dispose of it afterwards. More specifically, Ramirez stated, “companies should: (1) conduct a privacy or security risk assessment as part of the design process; (2) test security measures before products launch; (3) use smart defaults – such as requiring consumers to change default passwords in the set-up process; (4) consider encryption, particularly for the storage and transmission of sensitive information, such as health data; and (5) monitor products throughout their life cycle and, to the extent possible, patch known vulnerabilities.” In addition, Ramirez suggested companies should implement technical and administrative measures to ensure reasonable security, “including designating people responsible for security in the organization, conducting security training for employees, and taking steps to ensure service providers protect consumer data.”
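Ramirez’s “smart defaults” point lends itself to a short sketch. The following is a minimal, hypothetical illustration of a set-up flow that refuses to complete until the factory default password has been replaced; the function name, the default value, and the 12-character minimum are our own illustrative assumptions, not FTC requirements:

    DEFAULT_PASSWORD = "admin"

    def complete_setup(new_password: str) -> None:
        # Block device set-up until the factory default is replaced with a
        # reasonably long password chosen by the consumer.
        if new_password == DEFAULT_PASSWORD:
            raise ValueError("The factory default password must be changed.")
        if len(new_password) < 12:
            raise ValueError("Choose a password of at least 12 characters.")
        # ... hash and persist the password, then continue set-up ...

    complete_setup("not-the-default-123")   # passes; set-up continues

A device designed this way never enters service with a guessable factory credential, closing off one of the most commonly exploited IoT weaknesses.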

Though this isn’t the first time the FTC has taken a firm stance on the “Internet of Things,” the speech acts as an important reminder heading into the New Year. In November 2013, the FTC convened a public workshop in D.C. on the “Internet of Things” to study privacy and security concerns related to the industry, and then held a comment period lasting until January 2014. Earlier, in September 2013, the FTC had brought its first enforcement action in this area, a case we previously covered on our blog. The agency is projected to issue a report with findings and recommendations sometime this year. We will be monitoring the FTC’s movement closely in this area.

New Jersey Requires Encryption for Health Insurance Carriers; May Open Door to Class Action Suits over Violations Under State Consumer Protection Law

This post was written by Paul Bond and Brad M. Rostolsky.

Gov. Chris Christie has signed into law S. 562, which, as its title states, “Requires health insurance carriers to encrypt certain information.”

Violation of this new law constitutes a facial violation of the New Jersey Consumer Fraud Act, a powerful consumer remedies statute. The NJCFA can be enforced by the state attorney general, or by private action. For private litigants showing ascertainable loss, the NJCFA allows for recovery of treble damages and attorney’s fees. The NJCFA is a favorite of the state class action bar.

For purposes of this Act, a “health insurance carrier” is “an insurance company, health service corporation, hospital service corporation, medical service corporation, or health maintenance organization authorized to issue health benefits plans in this State,” i.e., in New Jersey.

Such health insurance carriers “shall not compile or maintain computerized records that include personal information, unless that information is secured by encryption or by any other method or technology rendering the information unreadable, undecipherable, or otherwise unusable by an unauthorized person.” A simple password will not do. Unlike the Massachusetts data security regulation, the New Jersey Act does not expressly establish a duty to pass encryption standards on to vendors.

As defined by the Act, personal information “means an individual's first name or first initial and last name linked with any one or more of the following data elements: (1) Social Security number; (2) driver's license number or State identification card number; (3) address; or (4) identifiable health information.”
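The Act does not prescribe any particular encryption algorithm or product. Purely by way of illustration, rendering such a record “unreadable, undecipherable, or otherwise unusable” without the key might look like the following minimal sketch, which uses the Fernet recipe from the Python cryptography library; the record layout is invented, and a real deployment would manage keys in a dedicated key management system rather than generating them in place:

    # pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # in practice, held in a key vault
    f = Fernet(key)

    # A name linked with a Social Security number is "personal information."
    record = b"Jane Doe|123-45-6789"
    token = f.encrypt(record)            # unreadable without the key
    assert f.decrypt(token) == record

Password-protecting a file, by contrast, leaves the underlying data readable to anyone who bypasses the access control, which is presumably why a simple password does not satisfy the Act.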

While the substantive requirements of this Act may not be onerous, the explicit link between this Act and the NJCFA should give pause to all New Jersey health carriers.

Cybersecurity Risks Are Higher than Ever and Are Proving Costly

This post was written by Cynthia O’Donoghue and Paul Bond.

Cybersecurity is an increasing concern for companies. Last April, the UK Department for Business, Innovation & Skills (BIS) published the 2014 information security breaches survey: technical report. The report comprises the findings from two online questionnaires completed by 1,125 respondents, and contains a number of important cyber-attack statistics for both large organisations and small businesses.

The results indicate that while UK businesses are paying more attention to cybersecurity, the scale and cost of security breaches have nearly doubled in the past year, with losses from the worst breaches ranging between £600,000 and £1,500,000. In the United States, the number of companies reporting concerns about cybersecurity to U.S. regulators more than doubled in the past two years, to 1,174.

The following cybersecurity concerns affect corporates, start-ups, investors and shareholders, and highlight some of the obstacles to addressing cyber-attacks:

Businesses continue to view cybersecurity as a purely technical matter. They tend to focus on technological vulnerabilities (i.e., insufficient patching of servers or routers) rather than on protecting the most critical business assets or processes (such as customer credit card information), which are what concern customers and consumers.

Businesses need to adapt to address security risks from new technologies. Businesses need to address security risks from the cloud, social media and mobile using a more holistic approach rather than protecting digital assets by targeting the data centre perimeter and managing user access, authorisation, and authentication from known locations and devices.

Businesses need to adapt to the challenge presented by the pervasive use of personal mobile devices by staff. A robust Bring Your Own Device policy ensures that employees are aware of the risks introduced when sending or receiving corporate information on a personal smartphone or tablet, and should address effective security measures to comprehensively manage user identity and access to sensitive corporate data.

Businesses need to make cybersecurity a board issue. C-suite engagement can help address cybersecurity threats effectively to protect critical information assets without placing constraints on innovation and growth.

Businesses need to monitor cybersecurity and implement a rapid response program to address breaches. Audit committees should take a risk-based approach and address cybersecurity risks with appropriate frequency. Doing so would help minimise the risk that arises from such events as stolen passwords and unauthorised access. In addition, the board or the appropriate committee should satisfy itself that management has in place the resources and processes necessary to respond to a breach in order to minimise the effects. By having well-documented information security controls, processes, or certifications in place, businesses increase their appeal to clients by directly addressing any concerns.

Businesses need to adapt to deal with more sophisticated cybercrime. Antivirus software and firewalls alone are no longer adequate. As attacks against large companies such as Target, Adobe and Sony illustrate, businesses can no longer work on a prevention-first security strategy which purely relies on protecting the perimeter. Businesses need to innovate and focus on protecting their core data through data encryption, or even shape-shifting botwalls.

Failing to address these issues may result in a cybersecurity breach leading to lost revenue and significant damage to a business’ brand as it affects both customer and investor confidence. In addition, a breach may result in remediation costs to customers or partners, litigation, compromised intellectual property, and cuts to staff.

Both large companies and start-ups need to be aware of these risks and take steps for planning, implementing, and reviewing cyber-defences. Larger organisations may consider minimising their risk by making sure that all entities they do business with adhere to these standards.

Russia sets a new deadline for data localisation, and removes Hong Kong and Switzerland from Adequate Privacy Protection List

This post was written by Cynthia O’Donoghue.

The Russian Duma recently set a new deadline for companies to localise the processing of Russian citizens’ personal data on Russian soil, while the data protection authority published an order removing Hong Kong and Switzerland from its ‘adequate privacy protection list’.

The Russian Duma has voted through, on a first reading, an accelerated effective date for the data localisation law, moving the deadline forward by a year to 1 September 2015. Federal Law No. 242-FZ, which amends Russia’s 2006 data protection statute and primary data security law (Laws 152-FZ and 149-FZ), was initially due to come into force on 1 September 2016, and an earlier proposal had sought to accelerate that date to as early as 1 January 2015.

In addition, the Russian data protection authority (Roskomnadzor) issued a new order removing Hong Kong and Switzerland from the list of countries that meet Russia’s privacy protection adequacy standards. Nothing in the order indicates a reason for the removal. The order becomes effective 25 December 2014. The list of adequate countries includes all parties to the Council of Europe Convention 108 on Data Protection, as well as Australia, Argentina, Israel, Canada, Morocco, Malaysia, Mexico, Mongolia, New Zealand, Angola, Benin, Cape Verde, South Korea, Peru, Senegal, Tunisia and Chile.

White House Previews Ambitious (if Familiar) Privacy and Cybersecurity Proposals for 2015

This post was written by Paul Bond and Divonne Smoyer.

On January 20, 2015, President Obama will address Congress with his annual State of the Union report. On Monday, the president spoke at the Federal Trade Commission, providing a “sneak peek” of the privacy and cybersecurity agenda that he intends to set.

Of the United States, the president remarked:
“We pioneered the Internet, but we also pioneered the Bill of Rights, and a sense that each of us as individuals have a sphere of privacy around us that should not be breached, whether by our government, but also by commercial interests.”

The president’s proposals were set forth in additional detail in a fact sheet.

The proposals include introduction of a “Personal Data Notification & Protection Act” to set a national, pre-emptive standard for data security notification. Aside from a change of the deadline to notify from 60 to 30 days after discovery, the proposal sounds similar to that proposed by the president in May 2011.

However, bills setting forth pre-emptive national data security breach notification requirements were put before the Senate and the House in the 113th Congress, and never got beyond committee.

While the case for a national, pre-emptive data security breach notification law is sound, we would expect state attorneys general to resist full pre-emption of their authority and to press for preservation of a significant enforcement role, as they have under HIPAA and COPPA. The attorneys general offered significant resistance to prior pre-emption efforts in other similar legislation.

The president also intends to introduce another so-called “Consumer Privacy Bill of Rights.” The administration has floated the idea of a “Consumer Privacy Bill of Rights” since the Commerce Department issued a green paper on privacy in 2012. A Consumer Privacy Bill of Rights was most recently introduced in May 2014, with S.2378 – the Commercial Privacy Bill of Rights Act of 2014. S.2378, like its predecessor S.799 in the prior Congress, was read twice and referred to committee. No further action was taken in either Congress. Notably, both S.2378 and S.799 were introduced to Senates controlled by the president’s own party, an advantage the White House no longer enjoys.

The White House also described its efforts to lead the creation of voluntary codes of conduct for privacy matters in the energy industry, and to push for stricter safeguards for information in the education sector. These smaller, less categorical initiatives may have a better chance of coming to fruition. However, the State of the Union, and Republican response to it, will provide a useful gauge of whether, as the president told Monday's audience, the privacy and security of consumer information is really an issue that “transcends politics, transcends ideology” in the Washington, D.C. of 2015.

EU Art. 29 Confirms Cookie Rules Apply to Digital Fingerprinting

This post was written by Cynthia O’Donoghue.

The Article 29 Data Protection Working Party (Working Party) released Opinion 9/2014 on ePrivacy Directive 2002/58/EC (amended in 2009), stating that the consent and transparency mechanisms apply to digital fingerprinting of devices (Opinion).

The Working Party issued the Opinion to clarify that consent is required, and to end the “surreptitious tracking” of users, in light of the increasing use of profiling technologies designed to avoid reliance on cookies.

The Opinion defines a ‘fingerprint’ as “a set of information that can be used to single out, link or infer a user, user agent or device over time”, and makes clear that the consent requirement applies to website publishers, third parties and the use of Application Programming Interfaces.
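By way of illustration only, a fingerprint of this kind can be derived server side by hashing a handful of attributes the browser already exposes; the attributes below are common examples of ours, not a list taken from the Opinion:

    import hashlib

    def device_fingerprint(headers: dict, screen: str, timezone: str) -> str:
        # Concatenate browser-exposed attributes and hash them into a
        # stable identifier; the same device tends to yield the same hash.
        parts = [
            headers.get("User-Agent", ""),
            headers.get("Accept-Language", ""),
            screen,      # e.g., "1920x1080x24"
            timezone,    # e.g., "UTC+01:00"
        ]
        return hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()

    fp = device_fingerprint(
        {"User-Agent": "Mozilla/5.0 ...", "Accept-Language": "en-GB"},
        "1920x1080x24",
        "UTC+01:00",
    )

No cookie is ever stored, yet the same device tends to produce the same hash over time, which is why the Working Party concludes that the consent and transparency rules apply all the same.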

The Opinion sets out practical guidance through six scenarios, requiring prior consent for, among others:

  • First-party website analytics – there is no exemption to obtaining consent for cookies that are strictly limited to first-party anonymised and aggregated statistical purposes
  • Tracking for online behavioural advertising
  • User access and control – where fingerprinting comprises information elements which store or gain access to information of the user’s device because such purposes are not considered “strictly necessary” to provide functionality explicitly requested by a user

As with cookies, consent is not required where fingerprinting is used solely to adapt the user interface to the device, for network management, or as a security tool to prevent unauthorised access to services the user has accessed in the past.

Companies will now have to make clear in their cookie policies any use of alternative technological processes that enable them to create profiles of users. The UK Information Commissioner’s Office welcomed the Opinion.

EU Commission Publishes Work Program for 2015

This post was written by Cynthia O’Donoghue.

The European Commission’s work program for 2015 sets out 10 priority actions, including a “connected digital single market” across the EU.

As part of the Digital Single Market Package, the Commission aims to conclude negotiations on the European data protection reform and the Regulation, and to propose changes to deal with existing challenges in the sector, such as enhancing cyber security, modernizing copyright, and simplifying rules for consumers making online and digital purchases.

Annex 3 of the Work Program sets out REFIT actions (legislative initiatives to simplify and reduce regulatory burdens and ensure that EU legislation is fit for purpose). One proposed action in the “Digital Economy & Society” section is an evaluation of the ePrivacy Directive 2002/58/EC “following agreement on the data protection proposal”. This action, however, is expected to remain ‘ongoing’ until 2016.

This move towards a connected Digital Single Market, and the economic opportunities it brings, should open up positive prospects for future innovation.

Presidency of the Council of Ministers publishes amendments to 'one stop shop' of the draft EU Data Protection Regulation

This post was written by Cynthia O’Donoghue.

In October 2013, we reported on the move towards a ‘One Stop Shop’ (OSS) approach to EU Data Protection.

The OSS principle aims to create consistency for international organisations to process personal data in multiple member states through the appointment of a single competent authority to monitor the data-controller’s activities across all EU Member States. In November, the Presidency of the EU Council of Ministers announced its latest plans to remodel the OSS mechanism in its updates to the draft General Data Protection Regulation.

These amendments seek to address some of the concerns with the OSS principle which we reported on in March. Rather than the OSS being automatic, the proposals adopt an elective system whereby a business must apply for a lead regulatory authority. The proposal also addresses the need for an effective uniform decision-making process in conjunction with an effective redress based on geographic proximity for citizens.

The Presidency also proposed a process to ensure uniform decision-making by entrusting the European Data Protection Board with binding powers, albeit in limited cases, so long as decisions are made by a two-thirds majority.

In addressing proximity, the Presidency’s proposal attempts to address concerns raised by Data Protection Authorities (DPAs), by creating a cooperation mechanism for multi-jurisdictional matters involving several Member States’ DPAs. These proposed joint decisions seek to ensure that all interests are taken into account.

The Proposal suggests a more flexible and balanced approach to the OSS under the Regulation, balancing the interests of both EU citizens and businesses operating in several EU Member States.

EU Art. 29 Working Party Announces Cooperation Procedure for EU Model Clauses

This post was written by Cynthia O’Donoghue.

The Article 29 Data Protection Working Party (Working Party) released a Working Document setting forth a co-operation procedure for issuing common opinions on “Contractual clauses” considered as compliant with the EC Model Clauses (Working Document). The aim of this Working Document is to facilitate the use of the EU model clauses across multiple jurisdictions in Europe, while ensuring a harmonised and consistent approach to the way these model clauses are approved by the national Data Protection Authorities (DPAs).

There is currently a patchwork of authorisation and registration procedures among the national DPAs. When assessing a particular set of model clauses, one DPA may reach a different conclusion from another, resulting in uncertainty and legal risk for organisations.

Under the new co-operation procedure, the Working Party hopes to streamline the approval process, with the appointment of a Lead DPA deciding whether the proposed contractual clauses conform to the Model Clauses. Reasons for selecting a particular DPA as the Lead DPA include, among others, ‘the location from which the Company’s Clauses are decided and elaborated’, and ‘the place where most decisions in terms of the purposes and the means of processing are taken’.

Once the Lead DPA is satisfied that the contract complies with the EU Model Clauses, the Lead DPA will draft a letter to the co-reviewer(s) to review within the next month (two co-reviewers must be appointed in the event of the data being transferred from more than 10 Member States).

The principal concern with this procedure is that the proposed contract is only reviewed for its compliance with the EU model clauses. Further steps may still be needed to comply with national laws, such as the supporting documentation requirements in Spain. Nonetheless, it is hoped that this co-operation procedure facilitates and speeds up the authorisation process, while also providing greater legal certainty for companies that transfer personal data outside of the EEA.

Hong Kong Privacy Commissioner Ends 2014 with Special Interest in Mobile Apps

This post was written by Joan Hon.

The Hong Kong Privacy Commissioner of Personal Data (the “Commissioner”) ended 2014 with a special interest in mobile applications (“apps”).

In a media statement published 15 December 2014, the Commissioner reported that versions 4.3 and earlier of Google’s Android operating system contained a flaw that allowed others to read shared memory in mobile devices without the proper user permission. The Commissioner had contacted Google twice to formally request it to “take corrective action and/or warn the end-users concerned that they are subject to the risk of data access by malicious apps without their knowledge and permission.”

This is not the first time the Hong Kong privacy regulator has reproached Google for its data practices. In 2010, Google undertook to investigate its Street View and WiFi data collection, to ensure practices complied with Hong Kong law. Also, earlier in 2014, the Commissioner pressed Google to apply the EU “right to be forgotten” safeguard to Hong Kong.

On the same date, the Commissioner completed two separate investigative reports on mobile travel apps by travel services companies,* finding that these apps had either inappropriately collected excessive personal information without giving customers notice as to how their data was to be used, or otherwise failed to safeguard customer personal information.

Finally, the Commissioner also issued a statement on a survey done in conjunction with the 2nd annual Global Privacy Enforcement Network Mobile Sweep. The survey reviewed 60 popular mobile apps developed by Hong Kong entities and found that “their transparency in terms of privacy policy was clearly inadequate and there was no noticeable improvement compared with the results of a similar survey conducted in 2013.”


* The Commissioner conducts investigations of suspected breaches of the Personal Data (Privacy) Ordinance (Cap. 486) based on complaints received, and publishes an investigation report when he opines that it is in the public interest to do so.

European Commission and EU Art 29 dispel the myths on the ECJ's decision in Google Spain

This post was written by Cynthia O’Donoghue.

In May 2014, we reported on the implications of the landmark decision in Google Spain which recognises the right for individuals to have links about themselves de-listed from search results. In response to the complaints received, the Article 29 Working Party (Art 29 WP) published a report on work being carried out to handle complaints, and the European Commission (Commission) published a report dispelling myths on the “rights and wrongs of the so-called right to be forgotten”.

Following the CJEU’s ruling, Data Protection Authorities (DPAs) have received numerous complaints about search engines’ refusals to de-list results. In response, the Art 29 WP developed a common ‘tool box’ to ensure a coordinated approach to handling complaints, putting together a network of dedicated contact persons with the aim of also creating a common decision-making process.

To dispel the myths on the implications of the right to be forgotten, the Commission’s report addresses concerns that have arisen, and attempts to correct misinterpretations of the judgment.

Myths the Commission seeks to dispel include:

  • The judgment requires deletion of content. In reality, the content can still be found through the same search engine based on a different query.
  • The judgment contradicts freedom of expression. The ruling does not give the ‘all clear’ for people or organisations to have search results removed from the web simply because they find them inconvenient.
  • The judgment promotes censorship. The right to be forgotten does not allow governments to decide what can and cannot be online. Search engine operators will act under the supervision of national DPAs, and the national court will have the final say on whether a fair balance between the right to personal data protection and the freedom of expression was met.

The reports illustrate how the “right to be forgotten” has been widely misinterpreted, and they seek to create widespread understanding of the judgment, including clarifying the extent of EU citizens’ rights.

While clarifying the scope of the right, the Art 29 WP also complicated matters for search engines by stating that the right applies globally, not just to EU-related domains such as .co.uk or .es.

Dutch Data Protection Authority Threatens Google with a €15 million fine

This post was written by Cynthia O’Donoghue.

The Dutch data protection authority, College Bescherming Persoonsgegevens (CBP), released a cease and desist order requiring Google to pay €60,000 per day, up to a maximum of €15 million, for violating the Dutch data protection law, Wet bescherming persoonsgegevens (Wbp). Google has until the end of February 2015 to change the way it handles personal data.

The order requires Google to carry out three measures:

  • Ask for “unambiguous consent” before it shares personal data of Internet users with its other services, such as Google Maps and YouTube, the video-sharing site
  • Make it clear to users that Google is the owner of YouTube
  • Amend its privacy policy to clarify what data is collected and how the data is used

While the CBP conceded that Google has “already taken measures in the Netherlands”, CBP Chairman Jacob Kohnstamm commented that “This has been on-going since 2012 and we hope our patience will no longer be tested.” Google responded that “We are disappointed with the Dutch regulator’s orders, especially as we have already made a number of changes to our privacy policy in response to their concerns”.

The Dutch authority has also recently announced that it will now turn its attention to Facebook’s privacy policy. The order clearly shows that the national data protection authorities are not ready to give up their national jurisdiction and enforcement powers, and that they are individually and increasingly focussing on transparency to users of social media.

EDPS publishes Guidelines on data protection in EU financial services regulation

This post was written by Cynthia O’Donoghue.

The European Data Protection Supervisor published ‘Guidelines on data protection in EU financial services regulation’ (Guidelines) to be used as a “practical toolkit for ensuring that EU data protection rules are integrated when developing EU financial policies and rules.”

The Guidelines address the processing of personal information involved in supervising financial markets, particularly through surveillance, record-keeping and reporting obligations, and information exchange. Such measures have the potential to infringe individuals’ rights to privacy and data protection.

The Guidelines include 10 steps and recommendations to assist EU policy makers responsible for financial regulation. Some of the key recommendations include:

  • Assess whether information processing interferes with the right to privacy
  • Establish a legal basis for the data processing
  • Evaluate and justify an appropriate retention period for the information
  • Establish a correct legal basis for any transfer of personal information outside the EU
  • Provide appropriate guarantees of individuals’ data protection rights
  • Consider appropriate data security measures
  • Provide specific procedures for supervision of data processing

In light of the 2008 financial crisis, the Guidelines provide a useful method of rebuilding trust in markets for financial services by ensuring that personal data is properly protected.

EU Art. 29 Assesses Cybercrime Scenarios

This post was written by Cynthia O’Donoghue.

The Article 29 Data Protection Working Party (Working Party) sent a letter to the Council of Europe discussing its first assessment of several cybercrime scenarios presented at the 2014 Cybercrime@Octopus conference (Conference). The scenarios, which sought to create “discussion on the consequences of data protection legislation and principles when obtaining such data in a criminal investigation”, cover situations such as cyberstalking, investigations into fraudulent activity, and missing persons.

In the letter, the Working Party highlights how many delegations during the Conference mentioned that “mutual legal assistance procedures in practice do not always function in a satisfactory way”, and offers to improve or insert data protection clauses in mutual legal assistance agreements to ensure compliance with data protection safeguards.

The Working Party emphasised the need to comply with the eight data protection principles for transfers of data outside the EEA. The letter sent by the Working Party to the Council of Europe provides a useful assessment of various cybercrime scenarios in relation to transborder access to personal data.

EU Art. 29 Working Party Opinion on the Internet of Things

This post was written by Cynthia O’Donoghue.

The EU Article 29 Working Party (WP29) has issued an Opinion on ‘Recent Developments on the Internet of Things’ (Opinion). The Opinion stresses the privacy and security challenges generated by the development of the Internet of Things (IoT), while acknowledging the benefits of the IoT to individuals’ lives and the prospect of significant economic growth for EU companies.

The Opinion focuses on innovations such as wearable technology, connected devices in homes, cars and work environments, Quantified Self sensors, and home automation or “domotics.”

The WP29 identifies challenges and security risks posed by these devices, and recommends the implementation of Privacy by Design/Default (PbD) techniques and other practical tools aimed at specific industries, such as device manufacturers and application developers, to ensure that their developments safeguard the data subject’s privacy.

The Opinion recommends that organisations placing IoT devices in the marketplace should complete privacy impact assessments, conduct timely deletion of raw data, respect users’ rights to self-determination of their data, and use appropriate privacy information notices to inform and obtain consent.

ISO develops the first privacy-specific cloud standard

This post was written by Cynthia O’Donoghue.

Earlier in 2014, the International Standards Organisation (ISO) developed a new voluntary standard, ISO 27018 (Standard), establishing commonly accepted control objectives and guidelines to protect personal information for a public cloud computing environment.

The need to create trust in cloud solutions led to the development of the Standard, in line with one of the key goals announced in the 2012 European Cloud Computing Strategy. By adopting an appropriate set of standards, cloud service providers that process personal data can give their customers confidence that the providers meet their regulatory obligations on data security.

The Standard sets out practical recommendations to help cloud providers meet its requirements. Examples include:

  • Confidentiality agreements and training for those with access to personal information
  • Policies for the return, transfer or disposal of personal information at termination
  • Policies that allow the processing of personal information for marketing or advertising purposes only with customer’s express consent
  • Requirements to disclose the names of sub-processors and possible locations where personal information may be processed prior to entering into a cloud services contract
  • Independent security reviews at regular intervals or after significant changes

The Standard could not come at a better time. The Ponemon Institute, which conducts independent research on privacy, data protection and information security policy, revealed the extent of mistrust in cloud, with 72% of EU respondents accusing cloud service providers of failing to comply with data protection regulations. Obtaining the ISO cloud certification could go a long way to restoring trust, and could further facilitate the adoption of cloud computing in all sectors.

UK Government releases 'Bring Your Own Device' Guidance

This post was written by Cynthia O’Donoghue and Kate Brimsted.

In early October, the UK government updated a collection of guidance notes they had issued on ‘bring your own device’ initiatives (BYOD). Given the increase in employees using their personal devices to connect to their employers’ systems, employers in both the private and public sector will welcome this guidance.

The ‘BYOD Guidance: Executive Summary' describes eight key security aspects for businesses to consider “to maximise the business benefits of BYOD whilst minimising the risks.”

In order to design the organisation’s security network effectively to minimise these risks, the ‘BYOD Guidance: Device Security Consideration’ document recommends that organisations consider authentication and protection for data in transit and for data at rest. To protect internal services from attack via personally owned devices, the ‘BYOD Guidance: Enterprise Considerations’ document provides various recommendations, including a ‘walled garden architecture’ approach which involves four steps to help protect an organisation’s network.

To help organisations decide on the most suitable architectural approach to best match their business, cost, and security requirements, the ‘BYOD Guidance: Architectural Approaches’ document explores common BYOD scenarios and associated risks that an organisation may face when using personally owned devices to access enterprise services and data. These new government guidance notes include information tailored to different operating systems.

The Communications Electronics Security Group recommends that the collection of guidance notes be read in conjunction with the ICO’s guidance on ‘bring your own device’ issued in March 2013, which we reported on at the time.

While the cost of BYOD controls can be substantial, those costs may pale in comparison with the reputational damage caused by serious data breaches, or the loss of an organisation’s proprietary and confidential information. Furthermore, in the event of a serious data breach, the Information Commissioner’s Office may use its enforcement powers to issue a monetary penalty of up to £500,000.

UK ICO to endorse privacy seal schemes

This post was written by Cynthia O’Donoghue.

The UK Information Commissioner’s Office (ICO) signalled its commitment to approving third-party “privacy seal” schemes following its recent public consultation. The first UK schemes should be operational by 2016.

The consultation comes in anticipation of the European Commission’s revised data protection framework proposals, which may include provisions intended to encourage the adoption of privacy seals, certification mechanisms and trust marks. It is hoped that privacy certification will establish confidence among consumers around personal data handling.

The ICO has issued a proposed framework and guidelines for organisations wishing to be approved as providers of a privacy seal scheme, including that providers must be an independent body accredited by the UK Accreditation Service.

Privacy seals are already available at the European level, and in France, Germany and Japan.

OWASP releases the results of its Privacy Risks Project

This article was written by Cynthia O’Donoghue and Kate Brimsted.

The Open Web Application Security Project (OWASP) published its findings on the ‘Top 10 Privacy Risks’ for 2014. The aim, according to one of the developers of OWASP, was to build a top-10 list of both technical and organisational risks to “help people with developing web applications, or a social network.”

OWASP is an organisation that provides practical information about computer and Internet application security. Its members include a variety of security experts from around the world who share knowledge of vulnerabilities, threats, attacks and countermeasures.

The study should help organisations not only to identify these risks, but also to minimise them.

According to the study, the top 10 privacy risks in 2014 are:

  • Web application vulnerabilities
  • Operator-sided data leakage
  • Insufficient data breach response
  • Insufficient deletion of personal data
  • Non-transparent policies, terms and conditions
  • Collection of data not required for the user-consented purpose
  • Sharing data with third parties
  • Outdated personal data
  • Missing or insufficient session expiration
  • Insecure data transfer

The report provides a good basis for helping organisations to assess the risk of vulnerabilities by measuring different factors of a potential attack, like an attack’s impact on the organisation or the motivation of the attacker.
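Taking just one of the listed risks, missing or insufficient session expiration, the countermeasure can be sketched in a few lines of server-side code; the 15-minute idle limit below is an arbitrary illustrative value, not an OWASP recommendation:

    import time

    SESSION_TTL_SECONDS = 15 * 60            # illustrative 15-minute idle timeout

    sessions: dict[str, float] = {}          # session id -> last-seen timestamp

    def touch(session_id: str) -> None:
        sessions[session_id] = time.monotonic()

    def is_active(session_id: str) -> bool:
        # Expire and delete sessions idle longer than the limit, so the
        # session ends on the server rather than only in the browser.
        last_seen = sessions.get(session_id)
        if last_seen is None or time.monotonic() - last_seen > SESSION_TTL_SECONDS:
            sessions.pop(session_id, None)
            return False
        return True

The design point is that expiry is enforced on the server: deleting a cookie in the browser does not end a session the server still honours.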

OECD Discusses Privacy Risk Management

This post was written by Cynthia O’Donoghue.

The OECD’s Working Party on Security and Privacy in the Digital Economy (Working Party) and the Centre for Information Policy Leadership (Centre) issued a white paper on ‘The Role of Risk Management in Data Protection.’

This paper explores the link between risk and accountability, and focuses on three key areas: (1) addressing the role of risk management in data protection as implemented into legal requirements, interpreted by regulators, and put into practice by responsible organisations; (2) the growing consensus around risk management as an essential tool for effective data protection; and (3) key considerations that affect the role of risk in data protection law and practice.

The white paper states that risk management does not alter rights or obligations, but is an essential tool for prioritising activities, raising awareness of risks, and for risk remediation and mitigation, in line with the EU Art. 29 Working Party’s “scalable and proportionate approach to compliance.” It also suggests that the OECD could play a key role in helping to develop and implement such a privacy framework.

Several data-protection authorities in the EU, particularly in France and Germany, have raised concerns that a risk-based approach to data protection could potentially weaken fundamental rights. The white paper demonstrates that a risk-based approach which focuses on the likelihood and severity of harms from the individual’s perspective could in fact strengthen data protection.

Is Your Employee-Monitoring Policy Up to the Job? UK Case Shows Importance of Having the Right Policy

This post was written by Kate Brimsted and Cynthia O’Donoghue.

The UK Employment Appeal Tribunal (the “EAT”), in the case of Atkinson v Community Gateway Association UKEAT/0457/12/BA, dismissed the employee’s claim that his right to privacy had been infringed, and confirmed, more generally, that an employer will be entitled to monitor its employees’ workplace emails and Internet use where a clear policy is in place.

The employee, Mr Atkinson, claimed constructive unfair dismissal. In the course of investigating his conduct, the employer had accessed Mr Atkinson’s emails and discovered he had been sending overtly sexual messages to a female friend and had sought to help her obtain a position with the employer. Mr Atkinson resigned before the disciplinary proceedings were completed, complaining that they were being conducted in a way that amounted to a repudiatory breach.

One point being appealed was whether the Employment Tribunal had previously erred in law in finding that the employer’s accessing of the employee’s emails was not in breach of his right to respect for private and family life under Article 8 of the European Convention on Human Rights (the “ECHR”).

The EAT found that Mr Atkinson’s right to privacy under Article 8 of ECHR had not been infringed, principally because the employer had conducted the review of the workplace email account during a disciplinary investigation in accordance with the employer’s Internet and Email Acceptable Use Policy (the “Policy”), which the employee had himself written and was responsible for enforcing. The employee therefore had no reasonable expectation of privacy on these facts.

The EAT examined the Policy in detail. It stated (among other things) that all users of the employer’s computer systems were bound by it, that emails would be monitored, including for investigations, and should not be assumed to be private. The EAT held the Employment Tribunal had been entitled to find that “[the employee] must … have known … he could not have expected the emails to his lover containing the material … described and which were not marked PERSONAL/PRIVATE to have been immune from access. On this issue, we understand why… the Tribunal made the decision that [the employee] had no expectation of privacy in relation to the relevant emails.”

The EAT’s decision is binding on the Employment Tribunal and is likely to be persuasive in future similar cases before the EAT and the High Court. This decision is a clear illustration of the importance of employers having a well-drafted, clear email policy that expressly sets out what is and what is not appropriate in relation to workplace emails and computer use. Without the Policy in question, the outcome here might have been quite different.

Amendments to Poland's Data Protection Law Ease the Rules on Data Exports and Data Protection Officers

This post was written by Kate Brimsted and Cynthia O’Donoghue.

The Polish Parliament passed the Facilitation of Business Activity Act (source in Polish), which significantly amends the existing Act on Personal Data Protection. The amendments come into force 1 January 2015.

The changes mean that the EU Commission’s approved Standard Contractual Clauses for data transfers (“SCCs”) and approved Binding Corporate Rules (“BCRs”) are automatically recognised as offering adequate protection to transfer personal data to “third countries” (non-EEA and non “white list” countries). Previously, either prior consent was needed from every data subject, or authorisation from the Polish data protection authority – the “GIODO”. The amendments dispose of this requirement where a data controller (1) uses SCCs approved by the European Commission, or (2) has implemented BCRs approved by the GIODO. The new amendments specifically refer to BCRs for controllers or processors. The new legislation also allows for the use of BCRs which have been approved by other DPAs under the mutual recognition scheme. It remains to be seen, however, how smoothly this will work in practice.

The appointment of a data protection officer (or Administrator of Information Security (“AIS”), as the role is known in Poland) is no longer mandatory under the new law. However, if an organisation appoints or continues with an AIS, it will be exempt from the data filing registration requirements with the GIODO (except in relation to sensitive personal data). The amendments also specify certain requirements for the AIS, such as qualifications, responsibilities, and his/her role within the organisation; e.g., s/he must report to the Management Board and have his/her details registered with the GIODO. The GIODO may require the AIS to conduct an audit of his/her organisation and report non-compliance to the GIODO. Even if an organisation chooses not to appoint an AIS, it will have to perform most of the stipulated functions itself.

Clearly the Polish Government intends by these measures to make doing business in Poland easier. The amendments cut a number of formal, bureaucratic requirements, but at the same time add to the internal compliance burden – at least so far as data protection officers are concerned.

EU Art. 29 Proposes Class Actions to Enforce Privacy Rights

This post was written by Cynthia O’Donoghue.

This month, the Article 29 Data Protection Working Party (Working Party) and the French Data Protection Authority (CNIL) held the European Data Governance Forum, an international conference focusing on the issues of privacy, innovation and surveillance in Europe. The conference highlighted many of the issues raised in the Joint Statement released by the Working Party in November.

The Joint Statement emphasises the need to address “both the lack of confidence in (foreign or national) governments, intelligence and surveillance services, as well as the underlying problem of how to control access to massive amounts of personal data” in this digital age.

The Working Party proposed a series of principles and actions to create a framework enabling “private companies and other relevant bodies to innovate and offer goods and services that meet consumer demand or public needs, whilst allowing national intelligence services to perform their missions within the applicable law but avoiding a surveillance society”.

Some of the key messages suggested by the Working Party include:

  • Protection of personal data as a fundamental right
  • Strengthening public awareness and individual empowerment to help individuals limit their exposure to excessive surveillance
  • No secret, massive and indiscriminate surveillance

The use of surveillance systems can be seen as privacy-intrusive, whereas establishing an effective privacy framework focused on transparency, accountability and restoring trust, can act as a counterbalance.

Privacy Authorities Urge Mobile Apps to Implement Privacy Policies

This post was written by Cynthia O’Donoghue.

In December, 23 privacy authorities – many of which are members of the Global Privacy Enforcement Network (GPEN) – signed an open letter to the operators of seven app marketplaces, urging them to improve consumers’ access to privacy information on mobile apps.

The letter states that:

  • Mobile apps that collect data in and through mobile devices within an app marketplace store must provide users with privacy practice information (for example, privacy policy links)
  • Privacy policy links must clearly inform users about the collection and use of their data before they download the app
  • Marketplace operators must implement the necessary protections to ensure the privacy practice transparency of apps offered in their stores

This letter comes in light of this year’s privacy sweep, which we reported on in September. One observation of particular concern was that 85% of the mobile apps reviewed failed to explain clearly how they were collecting, using and disclosing personal information.

With the proliferation of apps, it is clear that privacy and data protection authorities are keen to ensure that apps provide transparency to consumers, and a good privacy policy may help app developers to stand out from the competition.

Oregon Breach Notification Law Changes on the Horizon

This post was written by Divonne Smoyer and Christine Czuprynski.

On December 10, Oregon Attorney General Ellen Rosenblum testified in front of the joint Oregon Senate and House Judiciary Committee on the evolving nature not only of data collection and use, but also of cybersecurity incidents and hacking, and on the need to amend the Oregon data breach notification law to provide enforcement authority to the state Department of Justice. Extending enforcement authority to the attorney general’s office within that department will allow the attorney general to use the state’s Unlawful Trade Practices Act to enforce failures to notify and other violations of the statute.

In seeking enforcement authority, Attorney General Rosenblum is also asking that the law be amended to require breached entities to notify the state Department of Justice. The law requires notification to affected individuals, and to the consumer reporting agencies under certain circumstances, but at this time does not require notification to any state regulator. Currently, 15 states require breached entities to notify the state attorney general or other regulators, and New Jersey requires notification to be made to the state police.

For example, California requires notification to the state attorney general when a data breach affects more than 500 California residents. Once received, California posts the notifications on its website for public review. Using the information it has received in these breach notification letters, California has produced two breach reports – the most recent released in October 2014 – that highlight the most common types of breaches, the type of information stolen in breaches, and which industry sectors are victimized by breaches most often.

The attorney general is also working to expand the definition of “personal information,” the loss of which requires notification under the law. The changes contemplated in Oregon follow a current trend among the states to add biometric data, as well as medical and health information, to the types of information that, if breached, trigger the notification statute.

One Year Later: Consumers Can Proceed Against Target in Data Breach Lawsuit

This post was written by Paul Bond and Christine Czuprynski.

On the one-year anniversary of Target’s announcement that it had suffered a massive data breach, Judge Magnuson in the District of Minnesota cleared the way for a consumer class action against the retailer to move forward into discovery. Earlier this month, the court ruled that the financial institution class actions can also proceed.

In the consumer case, Target argued that the plaintiffs failed to allege injury, and thus lacked Article III standing to proceed with the suit in federal court. The court found that consumers did claim enough injury to proceed, citing their allegations that they suffered “unlawful charges, restricted or blocked access to bank accounts, inability to pay other bills, and late payment charges or new card fees.” The judge will also allow the consumers to pursue their claims for injunctive relief, by which plaintiffs seek to force Target to adopt new information security measures. The judge will allow the consumers discovery as to Target’s duty to disclose, and how well it performed that duty.

The judge analyzed the consumer protection and data breach notification laws of each state, demonstrating the complexity of this multi-district litigation. The consolidated consumer class action alone involves 114 named plaintiffs from all but five states, and asserts theories under the laws of 50 states. When Target raised the issue of standing in the five states with no resident named plaintiff, the court ruled that such an Article III standing analysis was premature at the motion-to-dismiss stage, and could be revisited at the class-certification stage.

In the course of this decision, Judge Magnuson gave Target a few concessions, dismissing certain claims under certain state laws and indicating that, on many points, Target would be able to assert its defenses later. Judge Magnuson dismissed with prejudice the consumers’ bailment claim, which alleged that consumers had entrusted Target with their personal information as property. The judge also dismissed the unjust enrichment claim based on the theory that plaintiffs were overcharged for goods at Target because the prices included a premium for adequate data security that did not exist. However, the court allowed plaintiffs to proceed with the unjust enrichment claim based on the theory that, had Target notified customers in a timely manner, plaintiffs would not have shopped at the store, and thus Target was not entitled to the money plaintiffs spent there.

EU Council Agrees on Partial General Approach to General Data Protection Regulation

This post was written by Cynthia O’Donoghue.

At the latest meeting in Brussels, Justice ministers agreed on a partial general approach to the draft General Data Protection Regulation. Andrea Orlando, Italy’s Minister for Justice and President of the Council, stressed the importance of this consensus on one of the “most politically sensitive issues on data protection reform”.

The press release states that the partial general approach includes articles which “are crucial to the question of the public sector (Article 1, Article 6 (paragraphs (1) and (2)), Article 21) as well as chapter IX … and the latest recitals”. Chapter IX examines personal data processing for statistical, scientific and medical research purposes, as well as provisions dealing with freedom of expression, employment and social protection.

Despite consensus, the press release also states that the agreement was reached on the basis that:

  • Nothing is agreed until everything is agreed
  • It is without prejudice to any horizontal questions
  • It does not mandate the Council Presidency to engage in informal trilogues with the European Parliament on the text

In light of this, it appears that adoption of the Data Protection Regulation may not happen until 2016.

UK Public Authority Forced To Identify Private Sector Consultant Under Freedom of Information Act

This post was written by Kate Brimsted and Cynthia O’Donoghue.

The First-tier Tribunal (General Regulatory Chamber, Information Rights) (the “FTT”), in the case of Alan Matthews v Information Commissioner [2014] EA/2012/0147, ruled that – despite being “personal data” – the name and qualifications of a private consultant should be released in response to a request under the Freedom of Information Act 2000 (“FOIA”). This overturned a June 2012 decision by the Information Commissioner (the “ICO”) that such information was exempt from release.

In 2011, an FOIA request was made by Alan Matthews, an individual who had been unsuccessful in a bidding process run by Business Link West Midlands Ltd (“Business Link”), a now-defunct Local Development Agency. An independent consultant had advised on the design of Business Link’s tendering process, which Mr Matthews believed to be flawed.

The ICO then ruled that the identity of the consultant was exempt information under s40(2) FOIA, as it was personal data, and disclosure would contravene the principles of the Data Protection Act 1998.

In this appeal, the FTT ruled:
“…we believe there is a significant public interest in the disclosure of the identity of a consultant whose approval of a public contract tendering process is relied upon by a public authority to provide assurance as to its effectiveness and fairness. Against that we do not think that an individual accepting a role in the design and operation of such a process should expect to remain anonymous. He or she has taken on a public role and should expect to be answerable, alongside his or her client, for their respective roles in the project.”

Public authorities (and consultants who are engaged by them) should therefore be aware that the identity and professional credentials of consultants may have to be released to the public in the event of an FOIA request where a cogent public interest can be demonstrated.

Draft Data Protection Regulation Delayed

This post was written by Cynthia O’Donoghue and Kate Brimsted.

At the latest meeting in Brussels, Justice ministers failed to come to a consensus on the “one stop shop” mechanism and the role of the proposed European Data Protection Board (EDPB). The minutes state that while a “majority of ministers endorsed the general architecture of the proposal,” “further technical work is required”.

Ahead of the meeting, Italy prepared a compromise proposal on the one-stop-shop plan. This proposal suggested the creation of a Lead Data Protection Authority (DPA) to deal with cross-border disputes, and for any disputes between DPAs to be referred to the EDPB.

Several Ministers disagreed with the proposal. In the UK, Theresa May released a statement expressing her concern that, “with legally binding powers for the EDPB to resolve disputes, the model proposed would fail to achieve the stated objectives of legal certainty, quick decisions and proximity for the data subject”.

In light of the conflict between Member States on these issues, the Data Protection Regulation likely will not be adopted until 2016.

PCI Seeks to Help Organisations Educate Staff on Information Security with New Guidance

This post was written by Cynthia O’Donoghue.

In October, the Payment Card Industry (“PCI”) Security Standards Council published the Best Practices for Implementing a Security Awareness Program Information Supplement (“Supplement”) to help organisations educate their employees on the importance of protecting sensitive information, the care needed in handling it, and the risks of mishandling it.

The PCI Special Interest Group (“PCI SIG”) developed the Supplement with input from merchants, banks and service providers, to provide guidance on PCI Data Security Standard (“PCI DSS”) Requirement 12.6, which requires organisations to implement a security awareness programme.

The Supplement provides practical advice, including:

  • Assembling a security awareness team responsible for the development, delivery and maintenance of the security awareness programme
  • Determining roles for security awareness to tailor training appropriately
  • Developing security awareness content appropriate to each organisation’s time, resources and culture
  • Creating a security awareness checklist to plan and manage a security awareness training programme effectively

The Supplement includes a ‘Sample Mapping of PCI DSS Requirements to Different Roles, Materials and Metrics’ that shows how a training programme can incorporate PCI DSS, and a ‘Security Awareness Program Record’ to evidence a security awareness programme.

The Supplement could not come at a better time, as Cisco’s 2014 Annual Security Report found a 14% increase in cyber-attacks since 2013. This guidance should help organisations protect their data, and will aid those gearing up for version 3.0 of the PCI DSS, dealing with the processing of payment card information, which we reported on in April.

EU Art. 29 Releases Guidelines on the Right to be Forgotten

This post was written by Cynthia O’Donoghue.

In November, the Article 29 Data Protection Working Party (Working Party) released guidelines as to how the Data Protection Authorities (DPAs) assembled in the Working Party intend to implement the judgment of the Court of Justice of the European Union (CJEU) in the case of Google Spain SL and Google Inc. v Agencia Española de Protección de Datos (AEPD) and Mario Costeja González (C-131/12) (Google Spain). The guidelines also contain a “list of common criteria” which the DPAs will apply when handling complaints.

The CJEU judgment set a milestone for EU data protection by granting individuals the right to request search engines to delist search results relating to them.

Part I of the guidelines attempts to interpret and answer the many questions left open by the judgment, and makes clear that the ruling applies to “generalist search engines”, not websites. Most importantly, the guidelines state that the right to be forgotten is global: de-listing only EU domains does not guarantee individuals’ rights, since the right extends wherever there is a clear link between the individual and the EU. Before search results can be de-listed, however, individuals are obliged to “sufficiently explain the reasons why they request de-listing, identify the specific URLs and indicate whether they fulfil a role in public life, or not”. The Working Party also clarified that search engines will not be obliged, as a matter of general practice, to inform the webmaster of the pages affected by de-listing, although contacting the webmaster to get a fuller understanding of the circumstances of the de-listing request may be legitimate.

Part II of the guidelines provides a list of common criteria to aid DPAs’ complaint-handling and decision-making processes. The Working Party emphasises that each criterion applies in the light of the principles established by the CJEU, particularly in view of “the interest of the general public in having access to [the] information”.

The criteria provided by the Working Party will provide a useful guide for search engines seeking to understand how DPAs will interpret the ruling, and also to individuals seeking to better understand the scope of their rights.

FCC's Notice of Opportunity To Comment on Robocalls and Call-Blocking Issues Raised by 39 Attorneys General

This post was written by Judith L. Harris and Divonne Smoyer.

On November 24, the FCC released a wide-ranging public notice seeking comment on a September 9, 2014, letter from the National Association of Attorneys General (NAAG), purportedly written “on behalf of the millions of Americans regularly receiving unwanted and harassing telemarketing calls.” The letter, signed by a bipartisan group of 39 AGs led by Chris Koster, the AG of Missouri, and Greg Zoeller, the AG of Indiana, raises issues relating to the legality and desirability of allowing telephone providers to implement call-blocking technology as a means of addressing unwanted telemarketing calls. NAAG’s letter to the FCC can be accessed here.

In its letter, NAAG references a July 2013 hearing before the Senate Subcommittee on Consumer Protection, Product Safety, and Insurance, at which witnesses from CTIA-The Wireless Association and US Telecom testified that “legal barriers prevent carriers from implementing advanced call-blocking technology to reduce the number of unwanted telemarketing calls.” Indeed, the FCC has long prohibited call blocking in particular contexts as an “unjust and unreasonable practice” under the Communications Act of 1934, as amended.

Specifically, NAAG’s letter requests the FCC’s view in three areas:

  1. What, if any, legal and/or regulatory prohibitions bar telephone carriers (and VOIP service providers) from implementing call-blocking technology? Would the answer be any different if the companies’ customers were to “opt-into” use of the technology (either as a free service or for a fee)?
  2. According to US Telecom at the July 2013 hearing, telephone carriers can and do block “harassing and annoying” telephone traffic at their end-user customers’ request, but only for a “discrete set of specific phone numbers.” Could telephone carriers, at a customer’s request, legally block certain kinds of calls (for example, telemarketing calls) if technology could identify incoming calls as “originating or probably originating” from telemarketers?
  3. US Telecom describes the FCC’s position as being one of “strict oversight in ensuring the unimpeded delivery of telecommunications traffic.” Is this characterization accurate? If so, on what basis does the FCC claim that telephone carriers may not “block, choke, reduce or restrict telecommunications traffic in any way”?

In addition to seeking comment on these particular questions, the Commission, in its Public Notice, states that it is interested in hearing about what call-blocking technologies are available or under development in the United States and internationally, how they work, how these details should inform the Commission’s analysis, and whether differences in how specific technologies work might produce different outcomes under the law.

In its Notice, the FCC acknowledges having said in the past that, “‘except in rare circumstances,’ it ‘does not allow carriers to engage in call blocking.’” However, it then goes on to state that “it has not directly held that blocking calls upon customer request is unlawful,” and that “[i]ndeed the Commission has recognized ‘the right of individual end users to choose to block incoming calls from unwanted callers’” in certain circumstances.

At this juncture, the Commission seeks comment, among other things, on whether and to what extent its prior precedent and applicable statutory provisions regarding call blocking apply to call-blocking technologies now on the market or under development. And – most importantly, perhaps – the Public Notice asks: “How should the Commission reconcile the obligation of voice providers to complete calls with protecting consumers from unwanted calls under the Telephone Consumer Protection Act (TCPA)?”

One can be quite confident that not only will the pro-consumer “public interest” lobby be out in force on this one – and probably with substantial Congressional support – but also that companies that produce and market call-blocking technologies, such as Nomorobo, Call Control and Telemarketing Guard (identified by NAAG in its letter), will be pushing hard. For this reason, among others, this issue should be of grave concern – not only to all who market via the telephone, but also to those who use the phone to reach their customers for other purposes, for example, in attempting to collect a debt.

Have you had experience with your lawful outgoing calls (debt collection, informational, even emergency, as well as telemarketing calls) being blocked by the recipient’s carrier? If so, you might want to share that experience with the FCC, or write to the Agency about the impact this issue could have on your business, the price of your products or services, or your ability to communicate important information to your customers. Comments are due December 24, 2014, and reply comments are due January 8, 2015. If you fall into a potentially affected category, you should consider getting involved.

In other, but related, news: it appears that the attorneys general have recently been a very busy bunch! Not only have 39 of them weighed in on blocking technology at the FCC, but – virtually simultaneously – 38 attorneys general, some the same, some different, again through NAAG, also recently urged the FTC, in its planned update of the Telemarketing Sales Rule, to prohibit the use of pre-acquired account information (to reflect the Restore Online Shoppers’ Confidence Act). The AGs contend that prohibiting the use of such information would help ensure that a consumer has consented to a given transaction.

NAAG’s letter to the FTC also urges the Agency to better address negative-option telemarketing because, the AGs contend, the practice often leads to confusion and “outright deception.” Finally, the AGs argue that telemarketers should be required to create and maintain records, and that the use of money transfers and certain other payment methods should be banned. NAAG’s letter to the FTC can be accessed here.

FCC Confirms that Even Solicited Fax Ads Must Contain Opt-Out Language, and Sets Six-Month Deadline for Companies to Seek a Retroactive Waiver

This post was written by Judith L. Harris, Lisa B. Kim, and Christine N. Czuprynski.

On October 30, 2014, the FCC issued a much-anticipated ruling (“FCC Order”) resolving several petitions seeking clarification of the opt-out notice requirement regarding advertisements faxed to consumers, contained in the Telephone Consumer Protection Act, section 227 of the Communications Act (“TCPA”). The FCC ruled that all such faxes, even those sent with the recipient’s prior express permission or invitation – in other words, “solicited” fax advertisements – must include both notice of the recipient’s right to opt-out of receiving future faxed ads and notice of the mechanism recipients can use to exercise such opt-out right.

However, in light of the apparent confusion that existed before the FCC’s clarification regarding the applicability and scope of the TCPA’s opt-out notice requirement, the Commission granted 24 individual petitioners limited retroactive waivers, giving them six months to come into compliance with the rule. Importantly, the FCC also announced in its ruling that it will allow similarly situated entities to seek their own retroactive waivers on the same grounds. Any business that did not include an opt-out notice on fax advertisements it sent to recipients who had previously provided permission or an invitation to send them should take advantage of this rare opportunity provided by the FCC to seek a retroactive waiver.

A little history: the TCPA prohibits sending unsolicited fax advertisements. The TCPA was amended in 2005 by the Junk Fax Prevention Act, which codified an established business relationship exemption to the prohibition and required the sender of an unsolicited fax advertisement to provide specific notice and contact information on the fax that would allow recipients to “opt out” of any future fax transmissions from the sender. In 2006, the Commission issued additional regulations (the “Junk Fax Order”), including the following: “A facsimile advertisement that is sent to a recipient that has provided prior express invitation or permission to the sender must include an opt-out notice that complies with the requirements in paragraph (a)(4)(iii) of this section.” 47 CFR § 64.1200(a)(4)(iv) (the “Challenged Regulation”). However, that Order also contained a footnote, which read in relevant part, “the opt-out notice requirement only applies to communications that constitute unsolicited advertisements.” Junk Fax Order, 21 FCC Rcd at 3810, n.154.

In 2010, Anda, Inc. filed a request for declaratory ruling on the validity of the Challenged Regulation and its applicability to solicited fax ads. Anda argued that the Commission did not have the authority to promulgate the Challenged Regulation because the TCPA applies only to unsolicited fax advertisements. Alternatively, Anda argued that the TCPA was not the statutory basis of the Challenged Regulation, and thus, there is no private right of action to enforce that provision.

The Consumer and Government Affairs Bureau (“Bureau”) denied Anda’s petition in 2012 on procedural grounds. First, the Bureau ruled that Anda had identified no controversy because the Junk Fax Order identified section 227 as the statutory basis for the Challenged Regulation. In addition, any challenge to the Commission’s authority to adopt the rule itself was a collateral challenge that should have been raised within 30 days of the date of public notice of such action, which was in May 2006. Because Anda waited until 2010 to challenge the regulation, it was untimely. Anda sought review of this ruling by filing an Application for Review of the Bureau Order on May 14, 2012.

Dozens of petitions have been filed since Anda submitted its Application for Review. Those petitions parallel the arguments Anda raised in its original petition and its request for review. In addition, several of those petitions sought from the FCC a ruling that opt-out notices that “substantially complied” with the Challenged Regulation’s requirements, but did not track the language from the regulation exactly, were sufficient under the law. Some petitioners also challenged the regulation as an unconstitutional limitation on free speech.

The recently released FCC Order denies much of the relief requested by Anda and the other petitioners. Specifically, the FCC Order:

  • Affirms the Bureau’s holding that the Anda petition was an improper and untimely collateral attack on the Challenged Regulation
  • Affirms that the Commission relied on section 227 of the Communications Act to promulgate the opt-out requirement for solicited fax ads
  • Affirms that the Commission had authority to adopt the Challenged Regulation
  • Denies one petitioner’s request to repeal the Challenged Regulation on First Amendment grounds
  • Denies petitioners’ requests to allow for “substantial compliance” with the opt-out notice requirements, instead requiring full compliance

Commissioners Ajit Pai and Michael O’Rielly concurred in part and dissented in part from the order. Their statements offer a roadmap of sorts to petitioners who want to appeal this FCC ruling to the federal court of appeals, which has jurisdiction to review FCC orders. Commissioner Pai stated that “to the extent our rules require solicited fax advertisements to contain a detailed opt-out notice, our regulations are unlawful. And to the extent that they purport to expose businesses to billions of dollars in liability for failing to provide detailed opt-out notices on messages that their customers have specifically asked to receive, they depart from common sense.” Commissioner O’Rielly concurred with the relief granted, but dissented, like Commissioner Pai, from the ruling that the Commission has statutory authority to require opt-out notices on solicited faxes. He said that though the agency has the right to fill gaps in a statute, “it is not entitled to invent gaps in order to fill them with the agency’s own policy goals, no matter how well intentioned.”

The FCC Order acknowledges that petitioners and other entities may not have complied with the opt-out notice requirements for solicited faxes as the result of “reasonable confusion or misplaced confidence” that the opt-out notice did not apply to those fax ads. This confusion could have been the result of two things: (1) the contradictory footnote in the Junk Fax Order, referenced above; and/or (2) the lack of explicit notice, at the time the Challenged Regulation was adopted, that the Commission was contemplating an opt-out requirement on solicited fax ads. The FCC thus concluded that this reasonable confusion and misplaced confidence provided good cause for it to grant individual retroactive waivers to the petitioners, and to open up that opportunity to other similarly situated businesses.

On a practical level, this means that a business that sent fax ads with the recipient’s permission that did not include an opt-out notice, or included an opt-out notice that was not in full compliance with the language in the regulation, should lose no time seeking a retroactive waiver from the FCC. Such a waiver would protect that business from lawsuits based on past behavior, but would not apply to conduct that occurs after April 30, 2015, which is six months from the release date of the FCC Order.

All businesses sending out fax advertisements should reach out to experienced counsel to review their marketing practices and, if necessary, petition the FCC for a retroactive waiver regarding the inclusion of opt-out language and an opt-out mechanism in their solicited fax ads. All requests for waivers are to be submitted by April 30, 2015. For more information, join the authors of this alert for a webinar Wednesday, November 19, 2014, or contact the authors or any member of Reed Smith’s TCPA team for assistance in filing a waiver request.

Applying a Plain Meaning Interpretation to 'Automatic Telephone Dialing System,' the Southern District of California Dismisses TCPA Class Action Lawsuit

This post was written by Raymond Y. Kim and Jack J. Gindi.

On October 23, 2014, the U.S. District Court for the Southern District of California further clarified the federal Telephone Consumer Protection Act’s (“TCPA”) definition of “automatic telephone dialing system” (“ATDS”) and granted summary judgment for the defendant on the grounds that it did not use an ATDS to send promotional text messages. Marks v. Crunch San Diego, LLC, -- F.Supp.3d --, 2014 WL 5422976 (S.D. Cal. October 23, 2014).

Here are the major takeaways from the case:

  • “Capacity” means current, not potential, capacity.
  • The FCC lacks the authority to define an ATDS as anything but “a random or sequential number generator.”
  • The defendant’s dialing equipment was not an ATDS because it “lacks a random or sequential number generator.”
  • Meyer v. Portfolio Recovery Assocs. LLC, 696 F.3d 943 (9th Cir. 2012), is not controlling because the Ninth Circuit Court of Appeals did not consider the issue of whether the FCC had authority to define an ATDS.
  • Any dialing equipment (including a predictive dialer) that does not have the capacity to generate random or sequential phone numbers is not an ATDS.

In Marks, Crunch, a gym operator, used a third-party, web-based platform to send promotional text messages to its members’ and prospective customers’ cellular telephones. The phone numbers were inputted into the platform by one of three methods: (1) when Crunch or another authorized person manually uploaded a phone number onto the platform; (2) when an individual responded to a Crunch marketing campaign via text message; and (3) when an individual manually inputted the phone number on a consent form through Crunch’s website that interfaced with the platform. Crunch selected the desired phone numbers, generated a message to be sent, and selected the date the message was to be sent, and then the platform sent the text messages to those phone numbers on that date. The plaintiff filed a class action alleging that Crunch violated the TCPA by sending promotional text messages using an ATDS without the recipients’ prior express consent. Crunch filed a summary judgment motion arguing that its dialing platform was not an ATDS because “it lacks the capacity to store or produce telephone numbers to be called using a random or sequential number generator.”

The TCPA defines an ATDS as “equipment which has the capacity--(A) to store or produce telephone numbers to be called, using a random or sequential number generator; and (B) to dial such numbers.” 47 U.S.C. § 227(a)(1). Based on the TCPA’s “clear and unambiguous” definition of an ATDS, the court held that Crunch’s equipment was not an ATDS because it “lacks a random or sequential number generator.” In so holding, the court rejected the FCC’s 2003 interpretation of an ATDS, finding that it “is not binding on the Court.”

In 2003, the FCC broadly interpreted the definition of ATDS to include “any equipment that has the specified capacity to generate numbers and dial them without human intervention regardless of whether the numbers called are randomly or sequentially generated or come from calling lists.” In the Matter of Rules and Regulations Implementing the Tel. Consumer Prot. Act of 1991, 27 F.C.C.R. 15391, 15392 n. 5 (2012) (emphasis added). The court found that the FCC’s 2003 commentary “change[d]” and “modif[ied]” the TCPA’s definition of an ATDS, which the FCC did not have authority to do because 47 U.S.C. section 227(a) – unlike section 227(b) and section 227(c) – “does not include a provision giving the FCC rulemaking authority.”

Turning to the interpretation of “capacity,” the court applied the current capacity interpretation and found that Crunch’s equipment was not a “random or sequential number generator” because “[n]umbers only enter the system through one of the three methods listed above, and all three methods require human curation and intervention.” The court was clear that calling a list of stored phone numbers is not “random or sequential number generation”:

“Random or sequential number generator” cannot reasonably refer broadly to any list of numbers dialed in random or sequential order, as this would effectively nullify the entire clause. If the statute meant to only require that an ATDS include any list or database of numbers, it would simply define an ATDS as a system with “the capacity to store or produce numbers to be called”; “random or sequential number generator” would be rendered superfluous. . . It therefore naturally follows that “random or sequential number generator” refers to the genesis of the list of numbers, not to an interpretation that renders “number generator” synonymous with “order to be called.”
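
The distinction is easy to see in code. The following minimal Python sketch is purely illustrative – the function names and phone numbers are invented, not drawn from the opinion – but it captures the line the court drew: equipment that generates numbers randomly or sequentially fits the statutory definition, while a platform that merely dials a stored, human-curated list does not.

    import random

    def sequential_numbers(start, count):
        # Sequential generation - squarely within the statutory ATDS definition.
        return [str(start + i) for i in range(count)]

    def random_numbers(count):
        # Random generation - likewise within the definition.
        return [str(random.randint(2_000_000_000, 9_999_999_999)) for _ in range(count)]

    def dial(numbers):
        for n in numbers:
            print("dialing", n)

    # By contrast, the platform in Marks dialed only numbers that a human had
    # uploaded, texted in, or entered on a consent form - nothing was "generated."
    human_curated_list = ["3105550100", "3105550101"]  # hypothetical entries
    dial(human_curated_list)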

Going a step further, the court also found that the dialing platform did not have the “potential capacity” to become an ATDS because Crunch used a third-party text delivery service that controlled the technology, and its “access to the platform [wa]s limited.”

The court also addressed Meyer v. Portfolio Recovery Assocs. LLC, where the Ninth Circuit deferred to the FCC and found a predictive dialer to be an ATDS. 696 F.3d 943, 950 (9th Cir. 2012). The court concluded that Meyer was not controlling because there, the defendant had waived its right to challenge the FCC’s authority to define ATDS by failing to raise the argument at the district court level. Meyer, 707 F.3d at 1044. The court also distinguished Crunch’s platform from the “predictive dialer” at issue in Meyer. However, the court noted that, based on a current capacity interpretation of ATDS, a predictive dialer is not itself an ATDS because it is “neither the database storing the numbers nor a number generator creating an ephemeral queue of numbers.”

This opinion is particularly favorable for defendants because it holds that in order to be an ATDS, dialing equipment must have “a random or sequential number generator,” and that merely having the capacity to call a list of stored phone numbers without human intervention is not “random or sequential number generation.” Although this ruling is not binding, it adds to the body of law applying the current capacity interpretation, and further signals the judiciary’s desire to reasonably interpret and apply the TCPA according to its plain and unambiguous statutory language.

Data Security Threats Are on the Rise in the Golden State, According to California Attorney General Kamala Harris

This post was written by Maytak Chin, Lisa Kim and Divonne Smoyer.

A California attorney general’s report released this month shows that data security threats are on the rise in the Golden State. Against a backdrop of increasing security breaches, the report recommends best practices for companies to adopt as a way to reduce their vulnerabilities and to better protect consumers.

The report highlights trends in security breaches that have occurred in California over the past two years. Last year alone, personal data from more than 18.5 million California residents was compromised, a 600 percent increase over the 2.6 million records breached the year before. Moreover, the leading industries targeted for hacks and malware attacks were retail, financial services, and health care. In 2013, the retail industry accounted for 26 percent of total breaches, followed by financial services at 20 percent, and health care at 15 percent. These industries are most at risk for security breaches because they possess and transact sensitive consumer data as an integral part of their business models.

Large retailers are particularly at risk of cyber attacks. For instance, Target had a security breach that compromised 41 million individual records, and Living Social had 50 million of its consumer records hacked in 2013, affecting consumers nationwide. The magnitude of the security compromises at Target and Living Social illustrates how large retail companies have become prime targets for cyber attacks. However, updating company practices and technological processes can reduce system vulnerabilities.

The California attorney general’s report recommends that companies take four steps to improve data security and reduce breaches:

  • First, companies could update point-of-sale terminals and systems, e.g., cash registers and other payment card technologies, to accept chip-embedded cards. Chip cards interact with physical sale terminals to authenticate payment cards and have the ability to send a one-time message, which changes with each transaction. Since 1994, more than 80 countries have moved toward using chip cards, including Canada, Mexico, and Brazil, and several countries in Europe and Asia.
  • Second, encrypting sensitive information could reduce unauthorized access to the data. Once encrypted, the data is transformed into an unreadable format that becomes readable again only with the matching cryptographic key, preventing access by unauthorized users who lack that key.
  • Third, companies could employ tokenization solutions to make sensitive information less accessible. Tokenization is similar to encryption, except that the key or token is generated at random at the point of use, rather than through a set mathematical algorithm; the short sketch after this list illustrates the contrast.
  • Fourth, companies should implement security breach policies to ensure prompt notifications to consumers and responses to address the breach, as measures to prevent further systemic harm.
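
To make the second and third recommendations concrete, here is a minimal Python sketch of the difference between the two techniques. It is not taken from the report: the card number is a standard test value, and the snippet assumes the third-party cryptography package is installed (pip install cryptography). An encrypted value can be recovered mathematically with the matching key; a token is a random surrogate with no mathematical relationship to the underlying data.

    import secrets
    from cryptography.fernet import Fernet  # third-party package

    # Encryption: reversible, but only with the matching cryptographic key.
    key = Fernet.generate_key()
    pan = b"4111111111111111"  # standard test card number, not a real account
    ciphertext = Fernet(key).encrypt(pan)
    assert Fernet(key).decrypt(ciphertext) == pan

    # Tokenization: a random surrogate held in a lookup "vault".
    vault = {}
    token = secrets.token_hex(8)  # generated at random at the point of use
    vault[token] = pan
    # A stolen token reveals nothing on its own; an attacker would also need the vault.
    assert vault[token] == pan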

The California attorney general’s report comes on the eve of new personal information privacy rules taking effect next year. In the past month, the California Legislature passed, and the governor approved, Assembly Bill No. 1710, which amends Civil Code section 1798 et seq. The newly enacted provisions will restrict the sale of Social Security numbers, including in advertising and offers to sell, and expand the law to reach any company that owns, licenses, or maintains specified personal information of any California resident. The new laws will also require that security breach notifications include an offer from the breached business to provide appropriate identity theft prevention and mitigation services to affected consumers. For more information on Assembly Bill No. 1710, check our blog post on it here.

The California attorney general’s office is one of the most active in the area of state privacy enforcement. In the past several years, state attorneys general across all 50 states have become increasingly active in privacy enforcement because of the lack of comprehensive privacy rules at the federal level in the United States.

Reed Smith attorneys conduct Q&A with Idaho AG

This post was written by Divonne Smoyer and Frederick Lah.

Attorney General (AG) Lawrence Wasden is Idaho’s longest-serving AG, having served since his election in 2002. Wasden has been a strong advocate on consumer protection issues related to privacy, such as marketing scams and Internet safety, particularly with respect to teens and children. He also has served as president of the National Association of Attorneys General (NAAG), the nonpartisan professional association for state AGs, and as chair of the Conference of Western Attorneys General (CWAG), an educational association focusing on legal and policy issues of importance to states in the western U.S.

Reed Smith Data Privacy attorneys Divonne Smoyer and Frederick Lah conducted a Q&A series with AG Wasden. Click here to read the entire interview on The Privacy Advisor.

Also see our previous Q&As with Connecticut AG George Jepsen and Indiana AG Greg Zoeller.

TCPA: The Muddled Madness Continues!

This post was written by Judith L. Harris.

Tuesday evening, the Federal Communications Bar Association held a seminar in Washington designed to help practitioners make some sense of the ever-expanding number of class actions that have been brought under the Telephone Consumer Protection Act (“TCPA”) by often over-zealous plaintiffs’ attorneys; the inconsistent decisions that have been rendered by the courts; and the scores of requests for declaratory rulings currently pending before the Federal Communications Commission (“FCC,” “Agency,” or “Commission”). While the participants on the seminar’s two panels (the first designed as a litigation update, and the second intended to provide a look down the road) quibbled over substance throughout the evening, they did seem to share one common perspective: the TCPA is a mess!

Not surprisingly, the panelists – especially the FCC’s representative – were much more adept at identifying open issues than at providing answers. Nonetheless, we were able to gain some insight into what are generally considered to be the most difficult TCPA-related issues and how some of the current confusion might eventually sort itself out.

  • There seems to be universal agreement that the FCC will issue an order “any day now” dealing with opt-out requirements in situations involving solicited faxes. We got the sense that an order has already been signed by at least the necessary three Commissioners, and that the Agency will cut a little bit of slack, in limited circumstances, to telemarketers responding to consumer requests or sending faxes to existing customers who have consented to receiving them. We’ll see.
  • It also seems that Commission staff is currently grappling with the definition of “called party” in the case of reassigned mobile phone numbers. The courts have recently reached differing conclusions regarding that definition for purposes of ascertaining consent, some holding that the called party is the intended recipient of the call and others concluding that it’s the current subscriber. We’re guessing that this will be the subject of the next important TCPA order issued by the FCC.
  • The good money is betting that the other big questions (in particular, the many pending requests for declaratory rulings relating to the definition of an ATDS, the capacity debate, etc.) will be wrapped into the omnibus rulemaking currently pending before the Agency. It appears that the Commission would be very interested in arriving at a compromise position that could be embraced by both businesses and consumers. Panelist Jason Goldman, Counsel at the U.S. Chamber of Commerce, offered that the Chamber is very focused on trying to proactively develop solutions to some of these issues as, not surprisingly, this whole area of the law is of grave concern to the Chamber’s members.
  • Interestingly, in the first panel, two different answers were given by private practitioners to the question of how many petitions for declaratory rulings are currently pending before the FCC (41 and 52). During the second panel, which included Kristi Lemoine – an attorney with the FCC’s Office of Consumer and Governmental Affairs who described herself as spending more than 90 percent of her time on TCPA issues – Kristi confessed that she herself doesn’t know which of those two numbers was accurate, as petitions keep coming on a regular basis, and even she is having a hard time keeping track of them. As expected, Kristi gave the usual caveats before she spoke: (1) that she was only speaking for herself and not on behalf of the Commission; and (2) that she wasn’t going to have a lot to say because virtually all the issues that the audience might be interested in were currently the subject of pending petitions for declaratory rulings, which she was not at liberty to discuss. Then she proceeded to say almost nothing and made no predictions. She did advise that the FCC was attempting to group the petitions by issue, but even just doing that was tough because of the frequency with which petitions were being filed, and the fact that many posed more than a single issue.
  • There seemed to be some consensus that, currently, one of the most interesting open questions relates to the scope of third-party liability for mobile marketing TCPA violations. Several panelists referred to the recent decision of the Ninth Circuit holding that companies that hire third parties to send unsolicited text messages on behalf of yet another entity can be held liable for TCPA violations. See Gomez v. Campbell-Ewald Co., __ F.3d __, 2014 WL 4654479. The Gomez case reversed and remanded an order granting summary judgment in favor of defendants, holding that a marketing company, hired by the U.S. Navy to run a recruitment campaign, could be held liable for violations by a third party with which the marketing company had subcontracted to send text messages in furtherance of the Navy’s recruitment campaign. While the FCC has previously opined that third-party liability should be based on common law principles of agency (actual/apparent authority/ratification), everyone agreed that this Ninth Circuit decision – holding that a middleman that hired a vendor on behalf of a contracting entity could be held liable for the acts of that vendor – is really pushing the envelope, and may or may not end up accurately reflecting the law.
  • Finally, there were several references during the seminar to the Federal Trade Commission’s (“FTC”) announcement in August of the winners of its “Zapping Rachel” robocall contest as evidence that the relevant federal enforcement agencies remain laser-focused. According to the description on the FTC’s website: “Zapping Rachel marks the latest step in the FTC’s ongoing campaign to combat illegal, pre-recorded telemarketing calls known as robocalls. The contest challenged participants to design a robocall honeypot which is an information system designed to attract robocallers and help law enforcement authorities, researchers, and others gain enhanced insights into robocallers’ tactics.” Beware! The award winners came up with some pretty innovative ideas!

In other news, the FCC also released an Enforcement Alert. The Alert contains a warning (in this election season) that the TCPA’s prohibitions on auto-dialed calls and pre-recorded messages also apply to political calls, and that the Commission intends to enforce the law and its regulations in this regard. For you beleaguered defendants out there: turnabout is fair play!

Court Finds, Again, That Device ID Is Not Personally Identifiable Information (PII) Under The Video Privacy Protection Act (VPPA)

This post was written by Lisa B. Kim.

On October 8, 2014, a district court judge in Georgia dismissed with prejudice a Video Privacy Protection Act (VPPA) action against The Cartoon Network (CN), holding that the disclosure of the plaintiff’s Android ID was not actionable because the Android ID did not qualify as “personally identifiable information” (PII). The full order is attached.

In Ellis v. The Cartoon Network, Inc., the plaintiff alleged that he downloaded the Cartoon Network App (“CN App”) and began using it to watch video clips on his Android device. Plaintiff alleged that each time he used the CN App, a complete record of his video history, along with his Android ID number, was transmitted to Bango. Bango, as a third-party analytics company that collects a wide variety of information about consumers from other sources, would then allegedly reverse-engineer the consumers’ identities by using the Android ID.

Plaintiff claimed that CN’s practice of sharing his Android ID and viewing history with Bango without his consent was a violation of the VPPA.

The court dismissed the case with prejudice, finding that the Android ID did not qualify as PII, and thus, CN’s practice of sharing device IDs with Bango did not fall within the purview of the VPPA. Citing the In re Hulu and In re Nickelodeon cases, the court explained that in order to be considered PII, the information had to link an actual person to actual video materials. Where an anonymous ID was disclosed to a third party but that third party had to take further steps to match that ID to a specific person, no VPPA violation occurred. The court likened this case to the disclosure of cable box codes, which could not identify consumers without corresponding billing records. Here, too, Bango needed to go through the additional step of matching PII gathered from other sources to identify the user. This was not a situation where video viewing habits were linked to a Facebook account, where the specific person could be identified without any additional steps. Accordingly, the court found that the disclosure of an Android ID alone, as happened here, does not qualify as PII under the VPPA.
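
The court’s reasoning turns on that extra matching step, which a toy Python sketch can make concrete (all IDs and names below are hypothetical, not drawn from the case). A bare device ID ties videos only to an opaque string; identifying a person requires joining the log against side information obtained from elsewhere.

    # Hypothetical data, purely illustrative of the court's reasoning.
    viewing_log = [
        {"android_id": "a1b2c3", "video": "Clip 42"},  # the kind of record allegedly shared
    ]

    # On its own, the log links each video to an opaque ID, not to a person.
    # Identification requires a separate dataset gathered from other sources:
    identity_db = {"a1b2c3": "Jane Doe"}  # side information, Bango-style

    for entry in viewing_log:
        person = identity_db.get(entry["android_id"])  # the extra matching step
        print(person, "watched", entry["video"])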

The court also considered and rejected arguments by CN that plaintiff had no standing to bring the case because he did not suffer an injury in fact, and that plaintiff was not a “subscriber” to any of CN’s services, and thus, not a “consumer” under the VPPA. The court found that an invasion of a statutorily created right established standing even if no injury would have existed without the statute. Since plaintiff alleged a violation of the VPPA, the court found that plaintiff alleged an injury. The court also found that plaintiff was arguably a subscriber because he downloaded the CN App and used it to watch video clips. However, given that the court ultimately dismissed the case, these rulings would be considered dicta.

With this ruling, courts appear to be drawing a line with regard to applying the VPPA to the sharing of information with analytics companies. Plaintiffs have certainly been testing the waters with VPPA cases against various news and entertainment organizations (see our May 5, 2014 blog post). This ruling demonstrates that the courts are hesitant to push the bounds of the VPPA to include the simple sharing of device IDs without more. Time will tell if other courts follow suit.

ICO Publishes its Report on Big Data and Data Protection

This post was written by Cynthia O’Donoghue.

On 28 July, the ICO released its report ‘Big data and data protection’ (the ‘Report’).

The Report defines ‘Big Data’ and sets out the data protection and privacy issues raised by Big Data, as well as compliance with the UK Data Protection Act 1998 (‘DPA’) in the context of Big Data.

The ICO defines Big Data by reference to the Gartner IT glossary definition, explaining further that Big Data involves the processing of personal data of significant volume, variety or velocity.

When announcing publication of the Report, Steve Wood, the ICO’s Head of Policy Delivery, stated that “Big Data can work within the established data protection principles…. The principles are still fit for purpose but organisations need to innovate when applying them”.

Under the DPA 1st Principle (fair and lawful processing), the Report emphasises that the complexity of Big Data analytics should not become an excuse for failing to seek consent where required, and that organisations must process data fairly, particularly where Big Data is used to make decisions affecting individuals. A study by Barocas and Selbst entitled ‘Big Data’s Disparate Impact’ found that Big Data has the “potential to exacerbate inequality”, and use of Big Data that resulted in discrimination would violate the fairness principle.

The Report addresses the significant issue of data collection when using Big Data analytics, and stresses that an organisation must have a clear understanding from the outset of what it intends to do with, or learn from, the data to ensure that the data is relevant and not excessive for the purpose. The Report seeks to address the growing concern that Big Data analytics tends to involve collecting as much data as possible, but that under the DPA, data minimisation remains an essential element of Big Data.

The Report also cautions that organisations seeking to use analytics must guard against purpose creep by following the purpose limitation principle, so that data collected for one purpose is not then used for another purpose incompatible with the original one. With this in mind, the ICO suggests that organisations employ a risk-based approach to identify and mitigate the risks presented by Big Data.

The Report also addresses whether the growth of Big Data leads to an increased data security threat, and highlights how the European Union Agency for Network and Information Security (‘ENISA’) has identified a number of emerging threats arising from the potential misuse of Big Data by so-called ‘adversaries’. In contrast, the Report also notes evidence that Big Data can be used to improve information security.

To address these concerns, the ICO recommends several ‘tools for compliance’, including:

  • Privacy Impact Assessments (PIAs)
  • Privacy by Design
  • Promoting transparency through Privacy Notices

Big Data is a fast-growing area that offers many opportunities and commercial advantages. It also presents many challenges. As the Report argues, the benefits of Big Data can only be realised by adhering to current DPA Principles and safeguards. Only through compliance will individuals trust organisations and become more open to the use of their data for Big Data analytics.

Did California Just Impose a First-in-the-Nation Requirement for Breaching Companies To Offer Identity Theft Prevention and Mitigation Services?

This post was written by Paul Bond, Lisa B. Kim, and Leslie Chen.

Spurred by the security breaches at Target, Neiman Marcus, and The Home Depot, California Gov. Jerry Brown signed Assembly Bill No. 1710 into law on September 30, 2014. The bill expands requirements on persons or businesses that own, license, or maintain personal information about a California resident. Specifically, the new law amends sections 1798.81.5, 1798.82, and 1798.85 of the California Civil Code to reflect the following changes:

  • Expands the provisions that require businesses to provide security measures involving personal information to include businesses that “maintain” information about a California resident, not just those who “own” or “license” that information.
  • Requires that if the person or business providing a security breach notification was the source of a breach that involved the exposure or possible exposure of social security numbers (SSNs) or driver’s license numbers, then “an offer to provide appropriate identity theft prevention and mitigation services, if any, shall be provided at no cost to the affected person for not less than 12 months.”
  • Prohibits the sale of, advertisement for sale of, or offer to sell an individual’s social security number, except in specific circumstances.

Previously, only businesses that owned or licensed personal information about a California resident were required to implement and maintain reasonable security procedures and practices to protect the personal information from unauthorized access, destruction, use, modification, or disclosure. Owned or licensed personal information includes “information that a business retains as part of the business’ internal customer account or for the purpose of using that information in transactions with the person to whom the information relates.” For example, financial institutions have long been deemed “owners” of personal information under the existing law, and frequently have to issue notices of breach in situations where the actual incident did not even occur at a bank or credit union. Under the new bill, however, as long as a business maintains personal information, it will be responsible for disclosing that a breach occurred. This expands the data breach laws to include retailers that have personal information about their customers, but do not use it in the manner defined above.

In addition, AB 1710 requires businesses that are the source of a security breach involving SSNs or driver’s license numbers to provide identity theft prevention and mitigation services, if any are offered, at no cost to the affected person for a minimum of 12 months. The plain text of the statute makes the requirements regarding cost and length of services conditional on the company offering services at all. By saying that “an offer…if any” must meet certain requirements, the statute precludes very short-term “offers” that really function as teasers to get people to subscribe for services at their own expense. However, many commenting on the bill before and after passage have essentially read the “if any” language out of the text by construing the provision to make credit monitoring or a like service mandatory. Regardless of the interpretation, the new provision reflects the legislature’s interest in offering security breach victims a means to ameliorate the situation.

Finally, the new bill also provides that a person or entity may not sell, advertise for sale, or offer to sell an individual’s SSN except in specific circumstances allowed by the law. For example, businesses are not prohibited from incidentally releasing social security numbers when it is necessary to do so to accomplish a legitimate business purpose. Note, however, that it is not permissible to release an individual’s social security number for marketing purposes.

The new amendments take effect January 1, 2015. Beginning then, businesses that violate the law may be subject to civil actions by customers seeking damages or injunctive relief. Cal. Civ. Code § 1798.84(b) and (e).

It's a Bird...it's a Plane...it's a Drone; FAA Approves Limited Use of Drones as Camera Platforms for Film and TV Production

This post was written by Hilary St. Jean.

Unmanned aerial cameras have been legal in other parts of the world, but were prohibited for commercial use in the United States until last week, with the limited exception of two commercial-drone operations that the FAA had previously approved for Alaskan oil operations. On September 25, 2014, the FAA announced that it had approved certain uses of drones, or unmanned aircraft systems (“UAS”), in the National Airspace System for film and TV productions. This is a breakthrough for the entertainment industry because drones give filmmakers Superman-like abilities to capture images at angles never before possible. Drones can cover altitudes lower than helicopters but higher than cranes, and can navigate indoor areas that are otherwise difficult or impossible to reach. However, the FAA’s approval is not without restriction.

The FAA must grant permission for all non-recreational (commercial) drone flights. Thus far, FAA permission has been granted to only six aerial photo companies for film and TV production. Various safety requirements are also associated with the approval process. The FAA stated that these six applicants submitted UAS flight manuals with detailed safety procedures, which were a key factor in their approval. The approval process nevertheless leaves open the opportunity for operating requests from companies in other fields. In fact, the FAA stated it is currently evaluating requests from 40 companies (reportedly including Amazon.com Inc., which wants to test prototype delivery drones at its Seattle headquarters). Meanwhile, abroad – at DHL headquarters in Germany – drones have begun delivering medications and other urgent goods to the island of Juist, after securing approval from state and federal transport ministries and air traffic control authorities to operate in restricted flight areas. These “parcelcopters” illustrate the widespread potential future use and capability of UAS, both domestically and abroad.

Click here to read the full Client Alert.

PCI Addresses Payment Security Risks with New Guidance

This post was written by Cynthia O’Donoghue and Kate Brimsted.

In August, the Payment Card Industry (“PCI”) Security Standards Council published the Third Party Security Assurance Information Supplement (“Supplement”) to help organisations reduce their risk by better understanding their respective roles in securing card data.

The Supplement was developed by the PCI Special Interest Group (“PCI SIG”) consisting of merchants, banks and third-party service providers, to help meet PCI Data Security Standard (“PCI DSS”) Requirement 12.8.

Under PCI DSS Requirement 12.8, an entity must maintain policies and procedures to ensure that service providers are securing cardholder data. In addition, under PCI DSS 3.0, effective from 1 January 2015, entities will be required to obtain a written acknowledgement of responsibility for the security of cardholder data from their service providers.

The Supplement focuses on practical recommendations to help meet the Requirements. Examples include:

  • Conducting due diligence of Third-Party Service Providers (“TPSP”)
  • Implementing a process to help organisations understand how the services provided by TPSPs meet the PCI DSS Requirements
  • Developing written agreements and policies and procedures
  • Monitoring TPSP compliance status

The Supplement could not come at a better time. Worldpay, a payment processor, reported in August that at least 6.57 million cards in the UK have been put at risk over the past three years as a result of security breaches. UK consumers are now becoming increasingly wary, and a survey commissioned by payments-provider PayPoint in May found that 55 percent of UK consumers view payment security as the most important factor in deciding how to pay.

UK High Court considers implications of the Google Spain case for the first time

This post was written by Cynthia O’Donoghue and Kate Brimsted.

In July 2014, the High Court (the ‘Court’) considered for the first time the implications of the landmark decision in Google Spain, when delivering an interim judgment in the case of Hegglin v Persons Unknown [2014] EWHC 2808 (the ‘Judgment’).

Mr Hegglin (the ‘Claimant’), a businessman who lived in London but now resides in Hong Kong, sought the removal of a number of abusive and defamatory allegations about him that had been posted on various websites by unknown persons. Google was a defendant in the case because portions of the offensive material appeared in search results, and because Mr Hegglin asked the court to order that the identities of the anonymous posters be disclosed to him.

While the substantive claims remain to be decided, the Court considered certain interim matters, including an interim injunction and permission to serve the claim on Google, Inc., incorporated and located in the United States.

The Claimant sought an interim injunction against Google based on sections 10 and/or 14 of the Data Protection Act 1998 (‘DPA’), which give individuals the right to prevent the processing of their personal data where it is likely to cause damage or distress, or where the data are inaccurate. The Court rejected Mr Hegglin’s application for an injunction on the grounds that there was insufficient notice (less than two clear working days) and that the application was too extensive, as it would have required Google to take “all reasonable and proportionate technical steps as might be necessary in order to ensure that [the] material does not appear as snippets in Google search results.” However, the Court did issue an order requiring Google to disclose information in its possession that could assist the Claimant in identifying the individuals responsible for the posts.

When considering whether the Claimant should be granted permission to serve the proceedings out of the jurisdiction, the Court considered the Google Spain case and noted that the Court of Justice of the European Union (‘CJEU’) had concluded that Google was a data controller for the purposes of Data Protection Directive 95/46/EC. As a result, there was “at least a good arguable case” that Google was required to comply with the DPA when processing the Claimant’s personal data. On this basis, permission was granted.

While this case is not concerned with the “right to be forgotten”, which has been the subject of extensive press and political attention, it highlights that the CJEU’s decision is in fact much broader. The full consequences remain to be seen, and the case is set to come to full trial in November 2014.

Direct Marketing Association releases New Privacy Code of Practice

This post was written by Cynthia O’Donoghue and Kate Brimsted.

On 18 August, the Direct Marketing Association (‘DMA’) issued its new Privacy Code of Practice (‘Code’) to address customer concerns about data privacy. The Code is a result of an 18-month consultation with the Information Commissioner’s Office, the Department for Culture, Media & Sport and Ofcom.

The Code focuses on five key principles:

  • Put your customer first
  • Respect privacy
  • Be honest and fair
  • Be diligent with data
  • Take responsibility

The Code contains desirable outcomes for each principle. For example, a customer receiving a ‘positive and transparent experience throughout their association with the company’ is a specified outcome against the ‘put your customer first’ principle.

The principles form a useful tool that encourages self-regulation and seeks to cultivate a relationship of trust with customers. Rather than issue a rule-based system, the DMA’s new Code provides flexibility to members to determine the way they will comply with both the principles and the law.

The Code will be enforced by the DM Commission, the industry’s independent watchdog. Breaching the Code can result in DMA members being expelled from the association, a move likely to cause reputational damage.

President of the EC calls for a finalisation of Europe's data protection rules and review of safe harbor

This post was written by Cynthia O'Donoghue and Kate Brimsted.

Incoming president of the European Commission, Jean-Claude Juncker, has radically transformed the EU executive to help him pursue his vision for the next five years.

Juncker seeks to make the EU “an area of justice and fundamental rights based on mutual trust,” a goal that has led him to call for the “conclusion of negotiations on the reform of Europe’s data protection rules,” and for the review of the Safe Harbor agreement to be completed within six months, particularly in light of recent mass surveillance revelations.

In his mission letter, Juncker asks Andrus Ansip, former Prime Minister of Estonia, to serve as Vice President for the ‘Digital Single Market’ team, with the aim of bringing the Safe Harbor saga to an end and concluding the reform of Europe’s data protection rules.

What conclusions will be reached, if any, remains to be seen. Moreover, it will be interesting to see how Ansip’s attitude toward data protection regulation will differ from that of Vice President Viviane Reding, who threatened to suspend the EU/U.S. Safe Harbor Agreement in January 2014.

Update on Data Retention

This post was written by Cynthia O'Donoghue and Kate Brimsted.

In July, we reported on the controversial move by the UK Government to pass the Data Retention and Investigatory Powers Act 2014 (‘DRIP Act’). The DRIP Act was intended to give telecommunications operators certainty about their retention obligations, following the CJEU’s declaration back in April that Directive 2006/24/EC on the Retention of Data was invalid.

In response to the court’s decision, the Swiss Federal Council issued a statement that the ruling has no effect on Swiss law. A legislative proposal to expand state powers of telecommunications surveillance has already been approved by the Council of States. Currently, metadata must be stored for six months in Switzerland for possible law-enforcement purposes. The revisions would expand both the scope of the surveillance and the retention period, to 12 months. These proposals have caused a furor among the Swiss, with hundreds demonstrating against the retention of communications metadata in May 2014.

The controversy is not confined to Switzerland: the legality of the new UK DRIP Act is being challenged by a Member of Parliament, who is seeking judicial review of whether the new legislation is compatible with Article 7 of the EU Charter of Fundamental Rights and Article 8 of the European Convention on Human Rights, each of which covers the right to respect for private and family life. In addition, the UK published draft Data Retention Regulations, setting out what information must be included in retention notices.

The debate over data and record retention or destruction remains a key and difficult issue for many companies, and the controversy about mandatory data retention periods remains in flux following the CJEU’s decision. It remains important for companies to stay abreast of legal developments on document and data retention, and to ensure that they have an appropriate methodology for deleting data that is no longer useful or necessary to retain – a practice that can reduce businesses’ legal exposure.

European Commission releases technical standards on Radio Frequency Identification

This post was written by Cynthia O'Donoghue and Kate Brimsted.

In July, the EU introduced new technical standards (‘Standards’) to assist users of Radio Frequency Identification (‘RFID’) technology to comply with the EU Data Protection regime and the Commission’s 2009 recommendation on RFID. The Standards are the result of a long-term EU project which began with a public consultation in 2006.

When RFID technology is used to gather and distribute personal data, it falls within the EU Data Protection regime. The Standards are being introduced at a critical time, as the use of RFID becomes more widespread, particularly in the health care and banking industries.

The key features of the Standards include:

  • The introduction of a new, EU-wide RFID sign which will allow people to identify products that use smart chips
  • New Standards for Privacy Impact Assessments (‘PIA’) to help ensure data protection by design
  • Guidance on the structure and content of RFID privacy policies

The Standards will be a useful tool for organisations that already use RFID technology, or are looking to do so. In particular, the Standards on PIAs will assist organisations in planning how they will comply with the forthcoming Data Protection Regulation, which requires PIAs to be carried out in various circumstances.

Article 29 Working Party supports recognition of Processor BCRs in the Data Protection Regulation

This post was written by Cynthia O'Donoghue and Kate Brimsted.

In June, the Article 29 Working Party (‘Working Party’) wrote to the President of the European Commission, setting out its case for including a reference to Binding Corporate Rules for data processors (‘BCR-P’) in the forthcoming Data Protection Regulation.

Binding Corporate Rules are one way in which data controllers or data processors in Europe can lawfully make international transfers of data. They are an alternative to using the EU Model Clauses or gaining Safe Harbor certification. To date, however, BCRs have been used much less than either of these methods, since they are costly and time-consuming to implement.

In the proposal for a Regulation published in January 2012, the European Commission had introduced an express reference to BCR-Ps. This reference was dropped, however, in the draft version of the Regulation that was voted on by the European Parliament in March 2014.

In its letter, the Working Party notes that it has officially allowed data processors to apply for BCRs since January 2013. In this connection, “denying the possibility for BCR-P will limit the choice of organisations to use model clauses or apply the Safe Harbor if possible, which do not contain such accountability mechanisms to ensure compliance as it is provided for in BCR-P.”

The letter makes clear that the Working Party is strongly in favour of BCR-Ps, which “offer a high level of protection for the international transfer of personal data to processors” and are “an optimal solution to promote the European principles of personal data abroad.” It notes that three multi-nationals have already had BCR-Ps approved, and that approximately 10 applications are currently pending; if the Regulation does not provide for BCR-Ps, these companies will be put at a disadvantage.

The Regulation, which is currently being negotiated between the European Parliament and the European Council, is widely expected to come into force in 2017. It will implement substantial changes to the current regime, including the introduction of significant new duties for data processors.

Ireland and the UK ban forced subject access requests

This post was written by Cynthia O'Donoghue and Kate Brimsted.

The practice of employers forcing employees or applicants to exercise subject access rights has been described by the UK’s Information Commissioner’s Office (‘ICO’) as a “clear perversion of an individual’s own rights”. It is now set to become a thing of the past in the UK and Ireland, as both jurisdictions bring laws into effect to make the practice a criminal offence.

In Ireland, provisions of the Data Protection Act 1988 and Data Protection (Amendment) Act 2003 that outlaw the practice were triggered in July 2014. In addition to the employer-employee relationship, these provisions apply to any person who engages another person to provide a service. The provisions have always been included in the Acts, but have not been brought into force until now.

In June 2014, the UK’s Ministry of Justice released guidance stating that similar provisions will come into force on 1 December 2014. Employers that attempt to force people to use their subject access rights will be committing a criminal offence, punishable by a potentially unlimited fine. In one of its blogs, the ICO indicated that it clearly intends to enforce the provision, stating that “the war is not yet won but a significant new weapon is entering the battlefield. We intend to use it to stamp out this practice once and for all.”

These developments come against a backdrop of similar regulatory changes in the United States, where the long-standing “Ban the Box” movement continues to challenge the use of criminal conviction questions on job application forms. In addition, Maryland became the first state in 2012 to ban employers from asking employees and job applicants to disclose their social media passwords. Similar legislation has now been introduced or is pending in 28 states nationwide.

In light of these developments, employers should review their procedures to ensure that they do not fall foul of these new provisions.

FTC Workshop on Big Data: Focus on Data Brokers

This post was written by Divonne Smoyer and Christine N. Czuprynski.

On September 15, the Federal Trade Commission held a workshop entitled “Big Data: A Tool for Inclusion or Exclusion?” FTC Commissioner Julie Brill took the opportunity to discuss an industry that she has consistently maintained requires more regulation and scrutiny: data brokers.

Commissioner Brill stressed first that the FTC is very focused on entities regulated by the Fair Credit Reporting Act (FCRA), and reminded the audience that those entities will be held to the law by the agency. Those entities that are not subject to the FCRA are not off the hook: companies that engage in profiling, or “alternative scoring,” should take a very critical look at what they are doing, since alternative scoring has the potential to limit an individual’s access to credit, insurance, and job opportunities. Brill noted that the FTC’s May 2014 report focused on transparency, and called for legislation to make data brokers accountable – thoughts she echoed during Monday’s workshop.

Finally, Commissioner Brill stressed that all companies would be well-advised to see if their own big data systems cause problems that ultimately exacerbate existing socioeconomic conditions. She reiterated that companies should use their systems for good, and have a role in spotting and rooting out discrimination and differential impact. You can find the text of her full speech here.

85% of Mobile Apps Marked Down on Transparency: 'Must Try Harder' Say Global Privacy Regulators

This post was written by Cynthia O’Donoghue and Kate Brimsted.

In May this year, members of the Global Privacy Enforcement Network (GPEN) conducted a privacy sweep of 1,200+ mobile apps. The findings are now available (here).

GPEN is an informal network of 27 Data Protection Authorities (“DPAs”) established in 2007. Its members include the UK’s ICO, Australia’s OAIC, and Canada’s OPC.

DPAs from 26 jurisdictions carried out this year’s sweep (an increase of seven jurisdictions compared with the last sweep, which we reported on in May 2014). The recent sweep focused on (1) the types of permissions an app was seeking; (2) whether those permissions exceeded what would be expected for an app of its type; and (3) the level of explanation an app provided as to why the personal information was needed and how it proposed to use it.

The results showed that:

  • 85% of the mobile apps failed to explain clearly how they were collecting, using and disclosing personal information.
  • 59% left users struggling to find basic privacy information.
  • One in three apps appeared to request an excessive number of permissions to access additional personal information.
  • 43% failed to tailor privacy policies to the small screen, e.g., providing them in tiny type or requiring users to scroll or click through multiple pages.

In announcing the results, GPEN made it clear that the sweep was not in itself an investigation. However, the sweep is likely to result in follow-up work, such as outreach to organisations, deeper analysis of app privacy provisions, or enforcement actions.

Privacy shortcomings are not just a regulatory matter; research by the ICO last year suggested that 49% of app users have decided not to download an app because of privacy concerns. In an increasingly crowded app marketplace, good privacy policies may be a valuable way to stand out from the competition.

Reed Smith attorneys conduct Q&A with State AGs

This post was written by Divonne Smoyer.

The office of Connecticut Attorney General (AG) George Jepsen has been at the forefront of state-led privacy enforcement for years, and Connecticut is widely considered one of the most active states in privacy policy and legal enforcement. The Connecticut AG’s office was one of the first to create a special privacy unit, in 2011. And in January 2010, by suing Health Net, it became the first to exercise jurisdiction under the 2009 federal HITECH Act, which extends to state AGs the enforcement of federal privacy and security requirements governing protected health information.

Reed Smith Data Privacy attorneys Divonne Smoyer and Christine Czuprynski produced a Q&A series with AG Jepsen. Click here to read the entire piece in The Privacy Advisor.

Click here to also read a Q&A with Indiana AG Greg Zoeller written by Smoyer and Reed Smith Privacy attorney Paul Bond.

Ninth Circuit Refuses To Enforce Arbitration Clause Contained in Barnes & Noble's 'Browsewrap' Terms of Use Agreement

This post was written by Mark S. Melodia and Lisa B. Kim.

During recent terms, the U.S. Supreme Court has repeatedly embraced mandatory arbitration and class action waivers contained in a wide variety of consumer contracts.  The Court has sided with corporate defendants and elevated the requirements of the Federal Arbitration Act above other legal and policy interests advanced by would-be class representatives and their class action counsel.  And yet, all of this case law takes as a starting point that a valid, enforceable contract has been formed under state contract law.  Given the increasingly online nature of consumer transactions, this means that companies offering their goods and services via website or app need to ensure that their terms and conditions will later be recognized by a reviewing court as a binding contract in order to get the benefit of this pro-arbitration case law.  Those counseling companies must, therefore, closely watch court decisions – particularly federal appellate authority – that do or do not enforce online terms of use.  One such decision issued earlier this week.

On August 18, the Ninth Circuit affirmed the district court’s denial of Barnes & Noble, Inc.’s motion to compel arbitration, finding that the plaintiff did not have sufficient notice of Barnes & Noble’s Terms of Use agreement and thus could not have unambiguously manifested assent to the arbitration provision contained in it.  See Nguyen v. Barnes & Noble, Inc., Case No. 12-56628, 2014 WL 4056549, *1 (9th Cir. Aug. 18, 2014).  In Nguyen, the plaintiff brought a putative class action against Barnes & Noble after the company cancelled his purchase of two heavily discounted tablet computers during an online “fire sale.”  The plaintiff alleged that Barnes & Noble engaged in deceptive business practices and false advertising in violation of California and New York law.

In affirming the district court’s ruling, the Ninth Circuit found that the plaintiff did not have constructive notice of the arbitration clause, despite the fact that Barnes & Noble’s Terms of Use was available through a hyperlink at the bottom left of every page of its website (i.e., as a “browsewrap” agreement) and was in proximity to relevant buttons the website user would have clicked on.  Id. at *5-6.  The Ninth Circuit held that the onus is on website owners to put users on notice of the terms to which they wish to bind consumers, and that this could have been done through a “click-wrap” agreement in which the user affirmatively acknowledges the agreement by clicking on a button or checking a box.  Id. at *5-6.  Indeed, the decision expressly states that had there been evidence of this, the outcome of the case might have been different.  Id. at *4.

In light of this decision, website owners using a “browsewrap” terms of use agreement should consider incorporating some type of “click-wrap” method for garnering the affirmative consent of their users. Otherwise, they run the risk that courts, like the Ninth Circuit, will refuse to enforce those terms.
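
By way of illustration only – the court did not prescribe any particular implementation – a minimal click-wrap flow in the browser might look like the following sketch. The element IDs, terms version, and assent record are invented for this example.

```typescript
// Minimal click-wrap sketch: the sign-up button stays disabled until the
// user affirmatively checks a box acknowledging the Terms of Use, and the
// assent (terms version and timestamp) is recorded on submission.
// Element IDs and the record shape are invented for this example.

const tosCheckbox = document.getElementById("tos-checkbox") as HTMLInputElement;
const signupButton = document.getElementById("signup-button") as HTMLButtonElement;

signupButton.disabled = true; // no assent yet

tosCheckbox.addEventListener("change", () => {
  signupButton.disabled = !tosCheckbox.checked;
});

signupButton.addEventListener("click", () => {
  // Preserve evidence of assent: which version of the terms was accepted, and when.
  const assentRecord = {
    termsVersion: "2014-08-18",           // version of the Terms presented
    acceptedAt: new Date().toISOString(),
  };
  console.log("Affirmative assent recorded:", assentRecord);
  // A real flow would persist this record server-side with the account.
});
```

The point of the affirmative checkbox is evidentiary: it gives the website owner something concrete to show a reviewing court that the user unambiguously manifested assent, which is precisely what the Ninth Circuit found lacking in Nguyen.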

TCPA Plaintiffs Secure Victories in Recent Rulings on Class Certification and Prior Express Consent

This post was written by Albert E. Hartmann and Henry Pietrkowski.

In separate cases, one Illinois federal judge issued several rulings favorable to Telephone Consumer Protection Act (TCPA) plaintiffs on key issues.  One ruling certified classes of almost 1 million consumers who received automated phone calls, even though the defendants’ records alone were not sufficient to identify the class members.  In a series of rulings in another case also involving automated calls, the judge refused to dismiss the case, even though the plaintiff admitted that he gave his cellular phone number to the defendant.

In the first case, Birchmeier v. Caribbean Cruise Line, Inc., et al., # 1:12-cv-04069 (U.S. District Court for the Northern District of Illinois), United States District Judge Matthew F. Kennelly certified two classes – with a combined total membership of almost 1 million consumers – who had received automated calls in alleged violation of the TCPA.  Plaintiffs initially indicated that they had received from defendants a list of almost 175,000 phone numbers to which automated calls had “unquestionably” been made.  At oral argument on class certification, defendants’ counsel conceded that the class members associated with those numbers were ascertainable. 

Ongoing discovery expanded that number to approximately 930,000.  Plaintiffs defined the putative classes as people whose numbers were on the list of 930,000 numbers from defendants, or whose own records could prove that they received a call at issue.  Judge Kennelly rejected defendants’ arguments opposing certification of classes based on this larger number.  The judge rejected the argument that the class was not ascertainable because defendants’ records could not establish the identity of the subscribers to the called numbers at the times of the calls.  The defendants’ earlier admission that the identities of the smaller number of class members were ascertainable, combined with plaintiffs’ contention that they could (albeit with difficulty) identify the class members, rendered the putative classes sufficiently ascertainable under Rule 23.  Judge Kennelly also ruled that class members could be identified using their own records; for example, copies of phone bills showing they received a call from one of defendants’ numbers, or potentially sworn statements providing sufficient details.  In reaching this ruling, Judge Kennelly noted that it would be “fundamentally unfair” to restrict class membership to people identified only in defendants’ records because that could result in “an incentive for a person to violate the TCPA on a mass scale and keep no records of its activity, knowing it could avoid legal responsibility for the full scope of its illegal conduct.”  After determining that the putative classes were ascertainable, the judge held that plaintiffs had carried their burden on the remaining Rule 23 elements and certified the two classes.  Thus, even when a defendant’s records cannot identify the putative class members, a class may still be certified if plaintiffs can establish a viable method to ascertain class membership.

In the second case, Kolinek v. Walgreen Co., # 1:13-cv-04806 (U.S. District Court for the Northern District of Illinois), the plaintiff alleged a TCPA violation because he received automated calls to his cellular phone prompting him to refill a prescription.  Judge Kennelly initially dismissed the case because the plaintiff had provided his cellular phone number to the defendant, which the defendant argued constituted “prior express consent.”  On July 7, 2014, however, Judge Kennelly reconsidered that decision in light of a March 2014 ruling from the Federal Communications Commission (FCC) that “made it clear that turning over one’s wireless number for the purposes of joining one particular private messaging group did not amount to consent for communications relating to something other than that particular group.”  Thus, while providing a cellular number may constitute “prior express consent” under the TCPA, “the scope of a consumer’s consent depends on its context and the purpose for which it is given.  Consent for one purpose does not equate to consent for all purposes.”  Because the plaintiff alleged that he had provided his number only for “‘verification purposes,’” the court reasoned that “[i]f that is what happened, it does not amount to consent to automated calls reminding him to refill his prescription.”  Accordingly, Judge Kennelly ruled that dismissal of the case under the TCPA’s “prior express consent” exception was not warranted.

In a second opinion, issued August 11, 2014, Judge Kennelly ruled that dismissal was not warranted under the TCPA’s “emergency purposes” exception either.  While FCC regulations define “emergency purposes” to mean “calls made necessary in any situation affecting the health and safety of consumers,” 47 C.F.R. § 64.1200(f)(4), the FCC has not read that exception to cover calls to consumers about prescriptions or refills.  Noting the absence of such FCC guidance (which the judge observed would “bind the Court”), as well as the paucity of the complaint’s allegations “about the nature or contents of the call,” the judge ruled that he could not dismiss the case without “further factual development.”  Taken together, Judge Kennelly’s rulings in the Kolinek case may allow plaintiffs to survive motions to dismiss even when they admit providing their cellular phone numbers to the defendant.

In many respects, both of these opinions are outliers.  For example, other courts have concluded that providing a cellular number to a company constitutes consent to receive calls on that number.  Moreover, the rulings are fact-specific and thus may not extend beyond the cases at issue.  TCPA plaintiffs, however, will likely seize on these rulings and read them expansively to prolong cases and pressure defendants.  Defendants, therefore, must be aware of these issues and take them into account when defending TCPA cases, especially in the Northern District of Illinois.

Wearable Device Privacy - A Legislative Priority?

This post was written by Frederick Lah and Khurram N. Gore.
 
Seemingly every day, new types of wearable devices are popping up on the market.  Google Glass, Samsung’s Gear, Fitbit (a fitness and activity tracker), Pulse (a fitness tracker that measures heart rate and blood oxygen), and Narrative (a wearable, automatic camera) are just a few of the more popular “wearables” currently on the market, not to mention Apple’s “iWatch,” rumored to be released later this year.  In addition, medical devices are becoming increasingly advanced in their ability to collect and track patient behavior. 
 
As wearables become more sophisticated and prevalent, they’re beginning to attract the attention of senators and regulators.  Earlier this week, U.S. Senator Chuck Schumer (D-N.Y.) issued a press release calling on the Federal Trade Commission (“FTC”) to push fitness device and app companies to provide users with a clear opportunity to “opt-out” before any personal health data is provided to third parties.  Schumer’s concern is that the data collected through the devices and apps – which may include sensitive and private health information – could be sold to third parties, such as employers, insurance providers, and other companies, without the users’ knowledge or consent.  Schumer called this possibility a “privacy nightmare,” given that these fitness trackers gather a wide range of health information, such as medical conditions, sleep patterns, calories burned, GPS locations, blood pressure, weight, and more. This press release comes on the heels of an FTC workshop held in May that analyzed how some health and fitness apps and devices may be collecting and transmitting health data to third parties.
 
Schumer’s comments were of particular interest to us.  We’ve been beta-testing Google Glass for the past several months as we try to get a better understanding of the types of data privacy and security risks that wearables pose in the corporate environment.  As the devices continue to gain popularity, we expect regulators, legislators, and companies to start paying closer attention to the data security and privacy risks associated with their use.

House of Lords' report on Google 'right to be forgotten' case concludes that it's 'bad law'

This post was written by Cynthia O’Donoghue and Kate Brimsted.

Back in May, we covered the European Union Court of Justice’s landmark ruling in the Google Spain case (‘CJEU Judgment’). Since then, much has been made in the media about the so-called “right to be forgotten”, and the various characters that have requested the removal of links relating to them. Now, the House of Lords Home Affairs, Health and Education EU Sub-Committee (‘Committee’) has released its own report (‘Report’) on the CJEU Judgment, calling it “unworkable, unreasonable and wrong in principle”.

One of the main concerns of the Report is that the practical implementation of the CJEU Judgment imposes a “massive burden” on search engines, and that while Google may have the resources to comply with the ruling, other smaller search engines may not. In addition, the Report makes much of the argument that classifying search engines as data controllers leads to the logical conclusion that users of search engines are also data controllers.

In relation to the “right to be forgotten” – both as implemented by the Judgment and as proposed by the Data Protection Regulation – the Committee notes a particular concern that requiring privacy by design may lead to many SMEs not progressing beyond the start-up stage. Labeling the Judgment “bad law”, the Committee calls for the EU legislature to “replace it with better law”, in particular by removing the provision that would establish a right to be forgotten. In the Committee’s view, the provision is unworkable in practice, since it requires the application of vast resources and leaves to individual companies the task of deciding whether a request to remove data complies with the conditions laid down in the Judgment.

The Committee’s Report is just one of a host of criticisms – albeit one of the most high-profile – that have been made of the Google Spain decision. Implementing the Judgment has also caused Google PR headaches, with individual link removals attracting widespread coverage in the media.

Microsoft loses third round of battle against extra-territorial warrants

This post was written by Cynthia O’Donoghue, Mark S. Melodia, Paul Bond, and Kate Brimsted.

On 31 July, the chief judge of the Southern District of New York delivered the latest in a series of controversial judgments stemming from a test case brought by Microsoft challenging an extra-territorial warrant issued under the U.S. Stored Communications Act. In the third ruling on the matter, the court found in favour of the U.S. government, upholding the warrant and ordering Microsoft to turn over customer emails stored in a data centre in Ireland. The District Court agreed to stay the order while the decision is appealed further.  If Microsoft’s final appeal is dismissed, the case will have significant implications for all U.S. businesses that store customer data overseas.  The implications also extend to non-U.S. customers, including companies located within the EEA, that have entered into agreements with U.S.-based companies to store their data outside the U.S. In particular, there is concern that foreign companies and consumers will lose trust in the ability of American companies to protect the privacy of their data.

Click here to read the full Client Alert.

Brazilian Data Protection Authority fines Internet Provider $1.59m

This post was written by Cynthia O’Donoghue and Kate Brimsted.

In July, the Brazilian Department of Consumer Protection and Defence (‘DPDC’) fined the telecom provider Oi 3.5 million reais ($1.59 million) for recording and selling its subscribers’ browsing data, in a case based on Brazilian consumer law dating back to 1990.

The DPDC investigated allegations that Oi had entered into an agreement with British online advertising firm Phorm Inc. to develop an Internet activity monitoring program called ‘Navegador’. The investigation confirmed that this program was in use and actively collected the browsing data of Oi’s broadband customers.

The browsing data was collected and stored in a database of user profiles, with the stated purpose of improving the browsing experience. Oi then sold this data to behavioural advertising companies without having obtained the consent of its customers.

The amount of the fine imposed took into account several factors, including the economic benefit to Oi, its financial condition, and the serious nature of the offence. The fine was issued after Oi suspended its use of the Internet activity monitoring software.

Oi denied violating customer privacy and claimed that use of the Internet monitoring program was overseen by government regulators. Phorm Inc. denied that any of the data collected from Oi’s customers was sold, and said that all relevant privacy regulations had been adhered to strictly.

The fine serves as a warning that Brazil will take strong action to protect Internet users’ privacy, a stance its new Internet law is likely to reinforce.

EU Regulation on electronic identification and trust certificates

This post was written by Cynthia O’Donoghue and Kate Brimsted.

In July, the Council of the European Union adopted a Regulation on electronic identification and trust services for electronic transactions (‘Regulation’). The Regulation is part of the Commission’s Digital Agenda for Europe, which promotes the benefits of a digital single market and cross-border digital services.

The Regulation will replace the Directive on electronic signatures (1999/93/EC) and address its shortcomings. In particular, it aims to increase trust in electronic transactions by creating a common foundation for secure electronic interaction between citizens, businesses and authorities. Essential to this development is that Member States build the necessary trust in each other’s electronic identification schemes and in the level of data security those schemes provide.

One shortcoming of the current system is that citizens are unable to use their electronic identification to authenticate themselves in another Member State, because national electronic identification schemes are not recognised across borders.

The Regulation will implement several measures, including:

  • Mutual recognition of electronic identification and authentication systems, where they comply with the conditions of notification and have been notified to the Commission.
  • Rules concerning trust services, including the creation and verification of electronic time stamps and electronic registered delivery services. This is a substantial enhancement of the previous position, under which EU provisions only existed for electronic signatures.

Under the Regulation, trust service providers will be under a duty to apply security practices that are appropriate for the level of risk presented by their activities. In addition, these services will be subject to a regulatory regime and liability in the event that damage is caused to any company or person through a failure to comply with this regime.

The Regulation will come into full force in July 2016.

New Russian legislation requires local storage of citizens' personal data

This post was written by Cynthia O’Donoghue and Kate Brimsted.

President Putin recently signed Federal Law No. 242-FZ (the “Law”), which amends Russia’s 2006 data protection statute and primary data security law (Laws 152-FZ and 149-FZ) to require domestic storage of Russian citizens’ personal data. The Law will allow websites that do not comply to be blocked from operating in Russia and recorded on a register of organisations in breach.

The requirement to relocate database operations could place a significant burden on both international and domestic online businesses. All retail, tourism, and social networking sites, along with those that rely on foreign cloud service providers, could have their access to the Russian market heavily restricted by the Law. The Law takes effect 1 September 2016, which may not provide some organisations with enough of a transition period to make the necessary changes.

Earlier this year, the Brazilian government decided not to include a similar provision in its Internet bill, in recognition of the provision’s draconian nature, its potential economic impact, and the practical difficulties involved.  Russia has not taken this more pragmatic approach.

FTC Commissioner Brill Urges State AGs to Up the Ante

This post was written by Divonne Smoyer and Christine Czuprynski.

Businesses that think they know what privacy issues are on the minds of the state attorneys general (AGs) should be aware that AGs are being urged to take action, either on their own, or in concert with the FTC, on key cutting edge privacy issues. At a major meeting of state AGs this week at the Conference of Western Attorneys General, FTC Commissioner Julie Brill, one of the highlighted speakers at the event, emphasized the importance of the AGs’ role in privacy regulation, and encouraged AGs to collaborate and cooperate on privacy investigations consistent with FTC efforts.

Commissioner Brill, a former assistant AG in two influential state attorney general offices, Vermont and North Carolina, outlined for the AGs several high-level privacy priorities for the FTC, including: (1) user-generated health information; (2) the Internet of Things; and, (3) mobile payments and mobile security. She invited the states to follow these and other privacy issues, and to complement the FTC’s actions in these areas in appropriate ways.

Also a focus: the Commission’s “Big Data” data broker report. Commissioner Brill emphasized her concerns about data broker practices, including their use of terms such as “Urban Scramble,” “Mobile Mixers,” “Rural Everlasting,” and “Married Sophisticates” to describe and categorize individuals. She stressed that the information gathered by data brokers about these groups may allow businesses to make inferences about people, which in turn could affect their access to credit, among other things. She pointed out that the FTC unanimously called for legislation to increase transparency and provide consumers with meaningful choices about how their data is used.

Building on her comments about data brokers, Commissioner Brill voiced concerns about the United States’ sectoral approach to privacy law, stressing the need to fill gaps in areas that sector-specific laws do not reach. With Congress focused elsewhere on privacy issues, state action may be the best option for filling those gaps. This is not the first time Commissioner Brill has called on the states to take decisive action, and it won’t be the last.

Finally, Commissioner Brill addressed the FTC’s case against Wyndham in particular, noting that the FTC is aggressively fighting challenges to its Section 5 authority. She reminded the states that they have an interest in this fight given that state UDAP statutes share a common blueprint as so-called “mini-FTC Acts,” and invited collaboration on future challenges.

It is likely that many of the states will take action consistent with Commissioner Brill's urging.

UK set to implement emergency Data Retention and Investigatory Powers Bill

This post was written by Cynthia O'Donoghue, Angus Finnegan and Kate Brimsted.

In April, the Court of Justice of the European Union (‘Court’) declared Directive 2006/24/EC on the Retention of Data to be invalid, creating uncertainty for telecommunications operators across the region. In a controversial move by the UK Government, the Data Retention and Investigatory Powers Act 2014 (‘Act’) has been passed using emergency procedures.

Formulated in 2006, the Directive aimed to harmonise the laws of Member States in relation to the retention of data. It introduced an obligation on telecommunications operators to retain a wide range of traffic and location data, which could then be accessed by national authorities for the purpose of detecting and investigating serious crime. The Directive was implemented in the UK through the Data Retention (EC Directive) Regulations 2009.

In its judgment, the Court stated that the obligation to retain communications data, and the ability of national authorities to access them, constituted an interference with both Articles 7 and 8 of the Charter of Fundamental Rights. Whilst the Directive satisfied an objective of general interest, it was not proportionate or limited to what was strictly necessary. There was concern that the data collected “may allow very precise conclusions to be drawn concerning the private lives of the persons whose data has been retained.”

The Act seeks to maintain the status quo by preempting any legal challenge to the Regulations, and allows the Secretary of State to issue a notice requiring the retention of all data, or specific categories of data, for a period of 12 months. Whilst the effect of the Act is largely similar to its predecessor, the language used is more expansive and appears to be capable of encompassing a broader range of data.

The Act also amends certain provisions of the Regulation of Investigatory Powers Act 2000, allowing for the extra-territoriality of warrants in certain circumstances. This is a major step, not only for UK interception powers but for interception powers globally. Last month, we reported that Microsoft would continue to challenge a U.S. court ruling that effectively allowed an extra-territorial warrant to be issued; it appears that the legal basis for similar powers is being introduced through the back door in the UK.

It is unclear whether the Act will be a temporary piece of legislation, staying in place until a more permanent solution is implemented at EU level, or whether it will be permanent. However, one positive effect will be that telecommunications operators will know what their retention obligations are. That is not the case in almost all other Member States at present.

Has Facebook been evil? It's down to the regulators to decide

This post was written by Cynthia O'Donoghue and Kate Brimsted.

In June, Facebook came under public scrutiny after it was revealed that the company carried out research in 2012 that manipulated the News Feeds of 689,000 users. Several regulators are now poised to investigate Facebook’s conduct.

The study exposed users to increased amounts of either positive or negative content in order to observe the effect on the way they used the site. It found that “emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness.”

Facebook’s behavior will now be scrutinized by data protection regulators, with the UK’s Information Commissioner’s Office indicating on 1 July that it will work with the Irish Data Protection Commissioner to learn more about the circumstances surrounding the research. The regulators are likely to be particularly interested in the terms of use and privacy policy that applied at the time of the research, and whether they contained adequate notices.

Meanwhile, on 3 July, the Electronic Privacy Information Centre (‘EPIC’) filed a formal complaint with the U.S. Federal Trade Commission, requesting that the regulatory body undertake an investigation of Facebook’s practices. The FTC has not yet responded to this request.

Although perhaps an extreme example, this issue highlights the challenges that organisations can face when using data for a purpose that goes beyond what users would expect. Given the mysterious algorithms that underlie what any Facebook user sees (contrary to common belief, it is not simply a chronological list of activities), it is arguable that the issue here arises out of functionality that is not far removed from Facebook’s everyday operations. It will be interesting therefore to see whether the regulators take any robust action.

Italian Data Protection Authority issues new EU guidelines

This post was written by Cynthia O’Donoghue, Kate Brimsted, and Matthew N. Peters.

In early May the Italian data protection authority (“Garante”) issued “Simplified Arrangements to Provide Information and Obtain Consent Regarding Cookies” (“Guidelines”).  These are intended to provide clarity on the application of Legislative Decree No. 69/2012 (the “2012 Act”), which implemented the EU Cookie Directive in Italy.

The Guidelines synthesize the findings of a public consultation and set out simple methods for informing website users about the use of cookies and procuring their consent.

Key topics include:

i) Distinguishing technical cookies from profiling cookies: technical cookies – which include browsing/session cookies, first-party analytics cookies and functional cookies – require only that users be clearly informed; profiling cookies – which are used to create user profiles and to allow the website operator and third parties to carry out marketing and promotional activities – require users’ consent.

ii) A ‘double decker’ approach to informing users and obtaining consent: summary cookie information is provided by means of a ‘banner’ on the website landing page, with more detailed information included in a full privacy notice linked from the banner (see the sketch after this list).

iii) Linking through to the consent mechanisms and privacy notices of any third parties that also place cookies on a user’s device, so that users remain fully informed and retain their ability to consent.

iv) Implementation and sanctions: Garante has given data controllers one year from the date of publication of the Guidelines to meet these requirements. Failure to do so carries a range of sanctions, including a maximum fine of €300,000 and ‘naming and shaming’.
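
Purely as an illustrative sketch – the Guidelines describe requirements, not code – a ‘double decker’ banner could be wired up along the following lines. The cookie name, URL, element IDs and function names are all invented for this example; a real implementation would need to follow the Garante’s detailed requirements.

```typescript
// Illustrative "double decker" consent flow: a short banner on the landing
// page links to the full privacy notice; profiling cookies are set only
// after the user actively consents. All names and URLs here are invented.

function hasProfilingConsent(): boolean {
  return document.cookie.includes("profiling_consent=yes");
}

function loadProfilingCookies(): void {
  // Placeholder for the third-party advertising/analytics tags that set
  // profiling cookies; these must not run before consent is given.
}

function showCookieBanner(): void {
  const banner = document.createElement("div");
  banner.innerHTML =
    "This site uses profiling cookies to send you targeted advertising. " +
    '<a href="/privacy-notice">Read the full notice</a> ' +
    '<button id="accept-cookies">OK</button>';
  document.body.appendChild(banner);

  document.getElementById("accept-cookies")!.addEventListener("click", () => {
    // Record the consent itself in a (technical) cookie, then load the
    // profiling tags and remove the banner.
    document.cookie = "profiling_consent=yes; max-age=31536000; path=/";
    banner.remove();
    loadProfilingCookies();
  });
}

if (hasProfilingConsent()) {
  loadProfilingCookies();
} else {
  showCookieBanner();
}
```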

European Commission Releases Cloud Computing Service Level Agreements

This post was written by Cynthia O’Donoghue and Kate Brimsted.

Back in 2012, the European Commission (‘Commission’) adopted the Cloud Computing Strategy to promote the adoption of cloud computing and ultimately boost productivity. In June 2014, the Cloud Select Industry Group – Subgroup on Service Level Agreements published Standardisation Guidelines for Cloud Service Level Agreements (‘Guidelines’) as part of this strategy.

To achieve standardisation of Service Level Agreements (‘SLAs’), the Guidelines call for action “at an international level, rather than at national or regional level”, and cite three main concerns. Firstly, SLAs are usually applied over multiple jurisdictions, and this can result in the application of differing legal requirements. Secondly, the variety of cloud services and potential deployment models necessitate different approaches to SLAs. Finally, the terminology used is highly variable between different service providers, presenting a difficulty for cloud customers when trying to compare products.

A number of principles are put forward to assist organisations through the development of standard agreements, including technical neutrality, business model neutrality, world-wide applicability, the use of unambiguous definitions and comparable service level objectives, standards and guidelines that span customer types, and the use of proof points to ensure the viability of concepts.

The Guidelines also cover the common categories of service level objectives (‘SLOs’) typically addressed by SLAs, relating to performance, security, data management and data protection.  In particular, SLOs cover availability, response time, capacity, support, and end-of-service data migration, as well as authentication and authorization, cryptography, security incident management and reporting, monitoring, and vulnerability management.  Some of the important data-management SLOs cover data classification, business continuity and disaster recovery, as well as data portability. The personal data protection SLOs address codes of conduct, standards and certification, purpose specification, data minimization, use, retention and disclosure, transparency and accountability, the location of the personal data, and the customer’s ability to intervene.
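
By way of illustration only – the Guidelines are technology-neutral and prescribe no particular format – the SLO categories above could be captured in a machine-comparable structure along these lines. All field names and figures below are invented for the example.

```typescript
// Machine-comparable sketch of the SLO categories the Guidelines describe.
// All field names and figures are invented; the Guidelines themselves
// prescribe no particular data format.

interface CloudServiceLevelObjectives {
  performance: {
    availabilityPercent: number;      // e.g., 99.95
    maxResponseTimeMs: number;
    supportHours: string;             // e.g., "24x7"
  };
  dataManagement: {
    backupFrequencyHours: number;
    dataPortabilityFormat: string;    // e.g., "CSV export"
    endOfServiceMigrationDays: number;
  };
  dataProtection: {
    permittedDataLocations: string[]; // storage jurisdictions
    retentionDays: number;
    breachNotificationHours: number;
  };
}

// With a shared structure and shared definitions, two providers' offers
// become directly comparable, addressing the terminology problem the
// Guidelines identify:
const providerA: CloudServiceLevelObjectives = {
  performance: { availabilityPercent: 99.95, maxResponseTimeMs: 200, supportHours: "24x7" },
  dataManagement: { backupFrequencyHours: 24, dataPortabilityFormat: "CSV export", endOfServiceMigrationDays: 30 },
  dataProtection: { permittedDataLocations: ["EU"], retentionDays: 90, breachNotificationHours: 24 },
};
```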

The Commission hopes the Guidelines will facilitate relationships between service providers and customers, and encourage the adoption of cloud computing and related technologies.

European Commission releases communication on building a data-driven economy, calling for a rapid conclusion to data-protection reform

This post was written by Cynthia O'Donoghue and Kate Brimsted.

In July, the European Commission (‘Commission’) published a communication titled “Towards a thriving data-driven economy” (‘Communication’), setting out the conditions that it believes are needed to establish a single market for big data and cloud computing. The Communication recognizes that the current legal environment is overly complex, creating “entry barriers to SMEs and [stifling] innovation.” In a press statement, the Commission also called for governments to “embrace the potential of Big Data.”

The Communication follows the European Council’s conclusions of 2013, which identified the digital economy, innovation and services as potential growth areas. The Commission recognizes that for “a new industrial revolution driven by digital data, computation and automation,” the EU needs a data-friendly legal framework and improved infrastructure.

Citing statistics about the amount of data being generated worldwide, the Commission believes that reform of EU data-protection laws and the adoption of the Network and Information Security Directive will ensure a “high level of trust fundamental for a thriving data-driven economy.” To this end, the Commission seeks a rapid conclusion to the legislative process.

The Commission’s vision of a data-driven economy is founded on the availability of reliable and interoperable datasets and enabling infrastructure, facilitating value creation and the use of Big Data across a range of applications.

Achieving a data-driven economy will require coordination between Member States and the EU. The key framework conditions are digital entrepreneurship; open data incubators; developing a skills base; a data market monitoring tool and the identification of sectorial priorities; ensuring the availability of infrastructure for a data-driven economy; and addressing regulatory issues relating to consumer and data protection, including data mining and security.

In an atmosphere of increasingly complex regulation anticipated by the Draft Data Protection Regulation and rulings of Europe’s senior courts, a positive slant on the use of data should be refreshing to organisations that depend on it in their operations. The test for the recommendations will be in how the Commission and the EU seek to implement them.

Apps and Data Privacy - New Guidelines from the German DPAs

This post was written by Dr. Thomas Fischl and Dr. Alin Seegel.

Under the auspices of the Bavarian state data protection authority, the so-called Düsseldorfer Kreis (an association of all German data privacy regulators for the private sector) published guidelines for developers and providers of mobile apps on June 23.  As mobile applications increasingly become a focus for regulators, the guide sets out the data privacy and technical requirements that apply to app development and operation, and provides practical examples.

In spring, the Bavarian data privacy regulatory agency randomly selected 60 apps for closer examination. In the process, the agency looked at privacy notices and compared them with the type of data that, at first glance, was transmitted.  The agency concluded that “every app provides some data privacy information, but that this information cannot be adequately reviewed.”  Based on this finding, the agency examined 10 apps more closely, and subsequently created an orientation guide for app developers and app providers.

Among other things, the 33-page guide addresses the applicability of German data privacy laws, the legal grounds that permit the collection and processing of personal data in the context of operating a mobile application, technical data privacy, and the notification obligations the app provider must observe. In addition to the legal notice, the latter include an app-specific privacy statement and other legal obligations.

With regard to app development, the German DPAs’ guide recommends building in privacy-friendly default settings (“privacy by default”) to ensure that the app can later be offered without deficiencies in data privacy.

Regarding technical data privacy, the guide elaborates on secure data transmission, as well as the application’s access to the location data of the respective device.

In addition to the above aspects, the guide addresses specific issues arising during the development of mobile applications, such as the integration of functions for payments or apps for young people and children.

Going forward, regulators can be expected to concern themselves even more with app-related infringements, and to initiate procedures to impose fines. The guidelines are a must-read for every app developer making apps available in Germany and throughout Europe.

U.S. extraterritorial data warrants: yet another reason for swift Data Protection reform, says EU Commission

This post was written by Kate Brimsted.

In May, we reported that a U.S. magistrate judge had upheld a warrant requiring Microsoft to disclose emails held on servers in Ireland to the U.S. authorities. The ruling has now attracted the attention of Brussels, with the Vice-President of the European Commission, Viviane Reding, voicing her concern.

Microsoft had argued before the court that the warrant, which was issued under the Stored Communications Act, should be quashed. This was because it amounted to an extraterritorial warrant, which U.S. courts were not authorised to issue under the Act. In summary, the court ruled that the warrant should be upheld, noting that otherwise the U.S. government would have to rely on the “slow and laborious” procedure under the Mutual Legal Assistance Treaty, which would place a “substantial” burden on the government.

In a letter to Sophie in’t Veld, a Dutch MEP, Ms Reding noted that the U.S. decision “bypasses existing formal procedures”, and that the Commission is concerned that the extraterritorial application of foreign laws may “be in breach of international law”. In light of this, Ms Reding states that requests should not be directly addressed to companies, and that existing formal channels such as the Mutual Legal Assistance Treaty should be used in order to avoid companies being “caught in the middle” of a conflict of laws. She also advocates that the EU institutions should work towards the swift adoption of the EU data protection reform.  Ms Reding further reported that the Council of Ministers has agreed with the principle reflected by the proposed Regulation – and consistent with the recent Google Spain decision – that “EU rules should apply to all companies, even those not established in the EU (territorial scope), whenever they handle personal data of individuals in the EU”.

Florida Strengthens Data Breach Notification Law

This post was written by Divonne Smoyer and Christine N. Czuprynski.

Florida’s new data breach notification law, effective July 1, 2014, follows a recent trend of expanding the definition of personal information and requiring entities to notify state attorney general offices or other regulators. The Florida Information Protection Act, signed into law June 20, repeals the existing data breach notification law and imposes new requirements on covered entities.

First, the definition of personal information has been expanded. Personal information includes those data points that are present in most data breach notification laws – an individual’s name in combination with Social Security number, driver’s license number, or financial account number with its corresponding security code or password – but also includes medical history and health insurance policy number. In addition, the definition now includes a user name or email address in combination with a password or some other information that allows access to an online account.

The Florida law requires notification to be made to the affected individuals, the state Department of Legal Affairs with the attorney general’s office, and credit reporting agencies, under certain circumstances. Notification to individuals and to the attorney general must occur within 30 days after determination of the breach or reason to believe a breach occurred. Florida already allows an entity to conduct a risk-of-harm analysis to determine if notification is required, and the new law retains that right. An entity is not required to notify individuals if it “reasonably determines that the breach has not and will not likely result in identity theft or any other financial harm to the individuals whose personal information has been accessed.” That determination must be documented in writing and maintained for five years, and must be provided to the attorney general within 30 days. If an entity determines that notification to individuals is required, such notification should include the date of the breach, a description of the information compromised, and contact information for the entity.

Notification to the attorney general must include a description of the breach, the number of Floridians affected, information regarding any services being offered, a copy of the notice, and contact information for an individual who can provide additional information. Upon request, an entity must also provide a copy of any police report or incident report, as well as a computer forensic report and internal policies relating to breaches. Notably, no other state requires disclosure of these sensitive documents – forensic reports and internal policies.

The new law also requires entities to take reasonable measures to protect and secure data in electronic form containing personal information.

Plaintiffs Take Another Blow In Video Privacy Protection Act (VPPA) Class Action Against Hulu and Facebook

This post was written by Lisa B. Kim and Paul Bond.

On June 17, 2014, Magistrate Judge Laurel Beeler of the Northern District of California denied class certification for the proposed class of Hulu and Facebook users alleging that their personal information was transmitted to Facebook in violation of the Video Privacy Protection Act (VPPA).  We’ve written about this VPPA case before. At the end of April, the court granted Hulu’s motion for summary judgment as to disclosures Hulu made to comScore (a third-party analytics provider), but denied it as to disclosures made to Facebook.

In denying class certification, Judge Beeler found that the class was not ascertainable because the only way to identify class members would be self-reporting via affidavit.  The court reasoned that this would be inappropriate here because the higher dollar amount involved with VPPA violations (i.e., $2,500 per violation) required some form of verification using objective criteria.  The court further noted that the claims alleged here could not be easily verified.  Based on the record before it, the court could not tell how a potential class member could reliably establish whether s/he logged into Facebook and Hulu from the same browser, logged out of Facebook, cleared cookie settings, or used software to block cookies.  These details matter because plaintiffs’ disclosure theory rests on the transmission of a certain cookie to Facebook, which would potentially happen only for Hulu users who watched a video on hulu.com after using the same computer and web browser to log into Facebook in the previous four weeks under default settings.

Relatedly, the court also found that while there were common questions of law or fact that pertained to the class, those common questions did not “predominate,” as required by FRCP 23(b)(3).  The court held that substantial issues about whether class members remained logged into Facebook and whether they would clear or block cookies indicated that common issues did not predominate over individual ones. 

The court denied class certification without prejudice, but noted that it was unaware of how plaintiffs could overcome these issues given the current factual record.  It will be interesting to see whether the plaintiffs make another attempt at certifying the class, and how this ruling impacts other VPPA cases pending around the nation.

CFPB Proposes Changes to the Annual Privacy Notice: There is Still Time To Comment

This post was written by Timothy J. Nagle and Christopher J. Fatherley.

In December 2011, the Consumer Financial Protection Bureau (CFPB) published a Federal Register (FR) notice [76 FR 75825] on “Streamlining Inherited Regulations.”  These regulations consist of federal consumer financial laws that were transferred to CFPB authority under the Dodd-Frank Wall Street Reform and Consumer Protection Act from seven other federal agencies. Among the regulations identified as opportunities for “streamlining” was the annual privacy notice required by Regulation P (“Privacy of Consumer Financial Information”) issued by the Federal Reserve [12 CFR Part 216].  In its fall 2013 “Statement of Regulatory Priorities,” the Bureau continued the process by stating its intent to publish a notice of proposed rulemaking “to explore whether to modify certain requirements under the Gramm-Leach-Bliley Act’s implementing Regulation P under which financial institutions provide annual notices regarding their data sharing practices.”

The CFPB issued its proposed rule (“Amendment to the Annual Privacy Notice Requirement Under the Gramm-Leach-Bliley Act (Regulation P)”) on May 13, 2014 [79 FR 27214].  The amendment describes an “alternate delivery method” for the annual disclosure that financial institutions could use in specified situations.  These circumstances are consistent with the purpose of section 503 of the Gramm-Leach-Bliley Act (GLBA), which requires financial institutions to provide initial notice upon entering into a relationship with a customer, and then annually thereafter.

A financial institution may (but is not required to) use the alternate delivery method if its practices satisfy five criteria:

  • It does not share customer nonpublic personal information with nonaffiliated third parties in a manner that would trigger opt-out rights under GLBA.  Financial institutions are not required to provide opt-out rights to customers when sharing information with third-party service providers, pursuant to joint marketing agreements or in response to a formal law enforcement request.  However, using an example mentioned in the notice, a bank would be required to provide such rights to its mortgage customers whose personal information it intends to sell to an unaffiliated home insurance company.  In this latter situation, the new alternative notice process would not be available.
  • It does not include in its annual notice the separate opt-out notice required under section 603(d)(2)(A)(iii) of the Fair Credit Reporting Act (FCRA) if a financial institution shares information about a consumer with its affiliates.  Such activity is excluded from the definition of a “consumer report” in FCRA, but notice to the consumer and an opportunity to opt out are required.  Financial institutions are required to include this disclosure in the annual privacy notice.  Therefore, if a financial institution does share such information internally, and does not provide a separate disclosure, it may not take advantage of the “alternate delivery method.”
  • The annual notice is not the only notice used to satisfy the Affiliate Marketing Rule in section 624 of FCRA.  Financial institutions are not required to include this opt-out notice in the annual privacy notice, but many do.  If a financial institution shares information about a consumer with an affiliate for marketing purposes, it may use the new delivery process only if it independently satisfies the section 624 disclosure requirement.
  • The information contained in the prior year’s notice (e.g., information sharing practices) has not changed.
  • The institution uses the Model Privacy Form Under the Gramm-Leach-Bliley Act published in 2009 [74 FR 62890] for its annual privacy notice.

Financial institutions that satisfy the above criteria may discontinue mailing the annual privacy notice if they provide notice by other means described in the proposed rule.  Institutions using the alternate delivery method will be required to post the privacy notice continuously and conspicuously on their website, include on another required notice or disclosure an annual reminder of the availability and location of the privacy notice, and provide a toll-free telephone number for customers to request that a paper copy of the notice be mailed to them.  While GLBA and Regulation P provide for notice in written or electronic form, most financial institutions mail the notices at substantial cost.  This action by the CFPB is intended to balance the cost considerations with the benefit to consumers of the annual notice and the potential for confusion where an institution’s practices have not changed.  And small financial institutions, which are less likely to share customer information in a way that triggers customer opt-out rights, would benefit from the cost savings with no harm to the customer.
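
Taken together, the five criteria operate as a simple conjunctive test: failing any one of them takes the alternate delivery method off the table. The sketch below expresses that logic purely for illustration – the field and function names are invented, and the rule text, not this code, governs.

```python
# Illustrative only: the five criteria of the proposed rule as a
# conjunctive test. Names are invented; the rule text governs.
from dataclasses import dataclass

@dataclass
class NoticePractices:
    shares_triggering_glba_opt_out: bool        # criterion 1
    fcra_603d_opt_out_in_annual_notice: bool    # criterion 2
    annual_notice_sole_fcra_624_notice: bool    # criterion 3
    practices_changed_since_last_notice: bool   # criterion 4
    uses_2009_model_privacy_form: bool          # criterion 5

def may_use_alternate_delivery(p: NoticePractices) -> bool:
    """True only if all five criteria are satisfied."""
    return (
        not p.shares_triggering_glba_opt_out
        and not p.fcra_603d_opt_out_in_annual_notice
        and not p.annual_notice_sole_fcra_624_notice
        and not p.practices_changed_since_last_notice
        and p.uses_2009_model_privacy_form
    )

# Example: unchanged practices, model form in use, no opt-out-triggering
# sharing -- the institution qualifies.
print(may_use_alternate_delivery(
    NoticePractices(False, False, False, False, True)))  # True
```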

In the proposed rule, the CFPB requested comment and information regarding the practical aspects of the changes, such as the number of financial institutions that change their policies, deliver notices electronically, or combine the FCRA and privacy notices. The initial rule provided only 30 days to comment, but this has been extended to July 14, 2014 [79 FR 30485] in response to requests from several financial services industry groups.  This initiative by the CFPB seems to have more velocity than similar efforts in Congress, where bills in the House (Eliminate Privacy Notice Confusion Act - H.R. 749) and Senate (Privacy Notice Modernization Act of 2013 - S. 635) are languishing.  Financial institutions should at least be aware of this development and evaluate whether they will benefit from the proposed revisions.

Canadian Court Certifies Facebook Class Action Over 'Sponsored Stories'

This post was written by Mark S. Melodia and Frederick Lah.

The British Columbia Supreme Court recently certified a class action against Facebook in connection with its Sponsored Stories program.  Under that program, advertisers paid Facebook for Sponsored Stories, which would in turn generate ads featuring a user’s name and profile picture based on which products and companies the user “liked.”  We previously analyzed a California privacy class action brought over the program.  Since the publication of our previous article, the California court granted final approval to a $20 million settlement that required Facebook to make small payments to class members.  That settlement is currently being challenged in the Ninth Circuit Court of Appeals by a public interest group. 

In the Canadian case, one of the main issues was whether Facebook users have the protection of BC’s Privacy Act, or instead, whether Facebook’s online Terms of Use overrode these protections.  Facebook’s Terms of Use contained a forum selection clause that bound users to adjudicate disputes in California.  Interestingly, despite the Court finding a “prima facie basis” for the “validity, clarity and enforceability” of the forum selection clause in the Terms of Use, it still rejected the clause.  Instead, the Court pointed to section 4 of B.C.’s Privacy Act, which states that an action under the Privacy Act “must be heard and determined by the Supreme Court.”  Per the Court, claims brought under the Privacy Act could not be brought in California, and held that “the Forum Selection Clause must give way to the Privacy Act.” 

After holding that it had jurisdiction, the Court then certified the class, defining it as all B.C. residents who are or have been Facebook members at any time between January 2011 and May 2014, and whose name or picture was used as part of the Sponsored Stories.  The Court rejected Facebook’s argument that the class definition was overly broad and suffered from several problems, including that it: (i) has no temporal limitations; (ii) does not address the fact that many users use false names or unidentifiable portraits; (iii) does not address the fact that Sponsored Stories were used for non-commercial entities as well as for businesses; (iv) does not address the necessary element of lack of consent; and (v) includes people who do not have a plausible claim, as well as people who will not be able to self-identify whether they are in the class.  Per the Court, “[h]ere, the tort [ ] of the Privacy Act seems tailor-made for class proceedings, where the alleged wrongful conduct was systemic and on a mass scale, and where proof of individual loss is not necessary or sought. Without the assistance of the [ ] class action procedure, the plaintiff and proposed class members’ claims based on [ ] the Privacy Act would be unlikely to have access to justice. Furthermore, the sheer number of individual claims, given the reach of Facebook, would overwhelm the courts unless a class proceeding was available.”

It is becoming increasingly clear that the risk of privacy class actions in Canada is growing.  This case shows us that even if a Canadian court acknowledges the enforceability of a website’s online terms and conditions, the court’s interest in protecting the privacy of its own citizens and upholding its own law will control.  While various news outlets have reported that Facebook plans to appeal the ruling, there’s no denying the fact that Facebook is now in the thick of the fight in the Canadian judicial system, whether it “likes” it or not.
 

Oklahoma Joins the Rapidly Growing Number of States with Social Media Password Laws

This post was written by Rose Plager-Unger and Frederick Lah.

On May 21, 2014, Oklahoma enacted H.B. 2372, following the trend outlined in our earlier article on the growing number of states prohibiting employers from requesting employee or applicant social media account passwords.  H.B. 2372 prohibits employers from requesting or requiring the user name and password of employees’ or applicants’ personal social media accounts, or from demanding that employees or applicants access the accounts in the employer’s presence.  The law also prohibits employers from firing, disciplining, or denying employment to employees or applicants who refuse to provide the requested information.

Click here to read the full post on our sister blog AdLaw By Request.
 

FTC Releases Report on Data Brokers - Calls for Legislative Action

This post was written by Frederick Lah and Michael E. Strauss.

On May 27, 2014, the FTC released its report “Data Brokers: A Call for Transparency and Accountability”. In the report, the FTC advocates for more transparency from data brokers, defined in the report as “companies that collect consumers’ personal information and resell or share that information with others.” 

Expounding upon findings from its 2012 privacy report, and gleaning new information from nine data brokers, the Commission’s latest paper characterizes data brokers and their products as both beneficial and risky to consumers.  While the FTC did acknowledge that data brokers' marketing products – for example, the sale of consumer data to businesses – “improve product offerings, and deliver tailored advertisements to consumers,” its praise of the industry was short-lived.  Instead, the FTC emphasized risk, noting that data brokers are storing sensitive consumer information, which in the event of a security breach could expose consumers to fraud, theft, and embarrassment.  As the report describes, “identity thieves and other unscrupulous actors may be attracted to the collection of consumer profiles that would give them a clear picture of the consumers’ habits over time, thereby enabling them to predict passwords, challenge questions, or other authentication credentials.”

Perhaps the FTC’s most significant finding – the one driving its push for legislation – is its determination that consumers have little access or control over their information once data brokers obtain it.  Since consumer information is gathered, analyzed, and disseminated from and to a variety of sources, and because data brokers are not consumer-facing entities, the FTC believes that it is virtually impossible for consumers to trace their personal data back to where it originated.  For example, according to the FTC, some data brokers are creating detailed individual consumer profiles that may include sensitive inferences about consumers, such as their ethnicity, income levels, and health information.  If a consumer is denied the ability to complete a transaction based on such profiles, the consumer would have no way of knowing why he or she was denied, and would therefore not be able to take steps to prevent the problem from recurring. As a result, and in light of the foregoing, the Commission contends that the data broker industry is insulated from accountability, and proposes that Congress adopt the following legislative recommendations: 

  • Give consumers an easy way to identify which data brokers have their information, and establish a mode of contact and ability to control such data
  • Force data brokers to disclose the type of information they acquire and subsequently sell to businesses
  • Require that data brokers disclose the original source of the data
  • Require businesses that share consumer data with data brokers to give notice of such, and allow consumers to prevent businesses from doing so

If Congress were to adopt the FTC’s recommendations, the data broker industry would be widely affected.  Not only would data brokers be subject to additional requirements, but businesses on both the sell and buy sides of the industry would also be subject to greater transparency requirements.  We will be following this issue closely to see if and how Congress acts.
 

Japanese data privacy developments - global transfers and privacy notices code

This post was written by Taisuke Kimoto, Kate Brimsted, Cynthia O’Donoghue, Matthew N. Peters, and Yumiko Miyauchi.

In recent weeks, Japanese data protection and privacy law has seen developments in two areas:

(1) The Ministry of Economy, Trade and Industry (METI) issuing its first code of practice on privacy notices
(2) The Asia-Pacific Economic Cooperation (APEC) approving Japan’s participation in the APEC Cross Border Privacy Rules (CBPR) system

METI Code of Practice (the Code)

This comes on the back of a period of activity for data protection legislation in Japan.  In December 2013, the IT Strategy HQ of the Cabinet Office published an Institutional Review Policy concerning utilization of personal data, with a plan to publish proposed amendments to the Japanese Data Protection Act in June 2014.

The Code is non-binding, and therefore there is no penalty for organisations that do not comply with it. However, it sets out how organisations should notify consumers about the collection and use of their personal data, and includes a checklist of what should appear in all consumer privacy notices, particularly:

• A description of the service
• The nature of the personal data collected, and the process of collection
• How the company intends to use the data
• Whether the data will be shared and with whom
• The extent of the consumer’s rights to object to the collection of their data, or have their personal data corrected, and the procedure
• Organization contact details
• How long the data will be retained, and how it will be destroyed

The Code also calls for standardised and clear notices to avoid confusion among consumers.  With the Australian Privacy Principles (effective since March 2014) also providing guidance on privacy policy content, Japan is not the only APEC jurisdiction where this has been given priority. 

Proposals to revise the Japanese Data Protection Act are expected to be published in June 2014.

The APEC Cross Border Privacy Rules

Beyond domestic data protection standards across the region, on 28 April, Japan became the third APEC nation (after Mexico and the United States) to have its participation in the APEC CBPR System approved.  This system is designed to develop global interoperability of organisations’ consumer data protection measures, and to complement the EU’s system of Binding Corporate Rules for international data transfers.

Using a common set of principles – adopted by all 21 APEC economies – for ensuring the protection of cross-border data transfers, Japan will now begin undertaking measures to ensure it can provide certification to any organisation wishing to become CBPR compliant.  This begins with a commitment to use an APEC-approved accountability agent, supported by a domestic privacy enforcement authority, in order to meet obligations under the CBPR System.
 

Whistleblowing hotlines in France: a welcome lightening of regulation

This post was written by Daniel Kadar.

Implementing whistleblowing hotlines in France has long caused significant concern for companies rolling out such hotlines globally, as French regulation considerably narrowed their scope, with the major threat that non-compliant hotlines would be considered null and void.

Times have changed: a couple of months ago, the French CNIL adopted an important modification of its single authorisation policy AU-004 dedicated to whistleblowing hotlines, last revised in 2010. Initially, companies were only allowed to use a whistleblowing hotline to collect and record reports of serious issues in the banking, accounting, financial and anti-corruption areas, as well as facts concerning compliance with applicable competition law – and only to “answer to a legislative or regulatory requirement”.

Whenever a planned hotline fell outside this very limited scope, the company had to apply to the CNIL for an individual authorisation, with very limited chances of success, apart from an exception concerning harassment.

To deal with those increasing requests – more than 60 between 2011 and 2013 – the CNIL amended its AU-004 in two ways:

  • In addition to the areas already within its scope, the Commission extended the single authorisation system to environmental protection, the fight against discrimination and harassment in the workplace, and health, hygiene and security at work.
  • The AU-004 now applies in those areas to “answer a legislative requirement or a legitimate interest”.

In order to empower and protect users of such hotlines, the Commission has always insisted on the principle that the author of an alert be identified, a principle which has been reaffirmed. Nevertheless, the new rules open the way towards anonymous alerts in exceptional cases, when “the gravity of the facts is established and the factual elements sufficiently detailed”. The Commission specifies that the processing of anonymous alerts must be subject to special precautions, such as a “preliminary examination, from its first consignee, on the opportunity of its diffusion within the scheme”.

With these amendments, the CNIL obviously seeks to ease, step by step, the use of whistleblowing hotlines in France, and to finally allow global compliance programs to be rolled out without too many exceptions.

California Attorney General Issues Recommendations for Privacy Policies and Do Not Track Disclosures

This post was written by Lisa B. Kim, Paul H. Cho, Divonne Smoyer, and Paul J. Bond.

On May 21, 2014, the California Attorney General, Kamala D. Harris, issued her long-awaited guidance for complying with the California Online Privacy Protection Act (“CalOPPA”).  “Making Your Privacy Practices Public,” which can be found here, provides specific recommendations on how businesses are to comply with CalOPPA’s requirements to disclose and comply with a company-drafted privacy policy. 

As we have written about in the past, CalOPPA is the California privacy statute that requires any company that collects personally identifiable information from California residents online, whether via a commercial website or a mobile application, to draft and comply with a privacy policy that conforms to the statute’s requirements.  More recently, CalOPPA was amended to require privacy policies to include information on how the website operator responds to Do Not Track signals or similar mechanisms.  The law also requires privacy policies to state whether third parties can collect personally identifiable information about the site’s users.

Click here to read the issued Client Alert.

To Those Calling Consumers for Marketing Purposes: LISTEN UP!!

This post was written by Judith L. Harris.

The Federal Communications Commission (FCC) yesterday announced the largest Do-Not-Call settlement it has ever reached.  Under that settlement, a major telecommunications company will pay $7.5 million for its mobile wireless business’s alleged failure to honor consumer requests to opt out of phone and text marketing communications.  In addition, the company has agreed to take a number of steps to ensure compliance with the Commission’s Do-Not-Call rules going forward. Those steps include:

  • Developing and putting into action a robust compliance plan to maintain an internal Do-Not-Call list and to honor Do-Not-Call requests from consumers
  • Developing operating procedures and policies to ensure that its operations comply with all company-specific Do-Not-Call rules
  • Designating a senior corporate manager to act as a compliance officer
  • Implementing a training program to ensure that employees and contractors are properly trained in how to record consumer Do-Not-Call requests so that the names and phone numbers of those consumers are removed from marketing lists
  • Reporting to the FCC any noncompliance with Do-Not-Call requests
  • Filing with the FCC an initial compliance report and then annual reports for an additional two years

One of the reasons that the FCC came down so hard on the company was that it was already acting under a 2011 Consent Decree resolving an earlier investigation into similar consumer complaints.  “Ah,” you might say, “this then has no relevance to our company; we have never even been investigated a first time by the Commission.”  While that may be true, this still might be a good time to compare your company’s own internal plan for honoring Do-Not-Call requests with the plan being required of the entity that settled with the FCC. 

If your company’s current plan is missing any of the first four elements listed above, you might want to consider adding them. By laying out these elements, the FCC is sending a strong signal regarding what it considers to be reasonable efforts by an entity to ensure that its agents and employees are well aware of what is expected of them when making marketing calls.

Companies would do well to consider adopting and enforcing a comprehensive compliance plan now, and not wait to have one imposed if some disgruntled consumers complain to a regulatory agency.  At a minimum, adoption and adherence to a comprehensive compliance program would go far in protecting against any trebling of damages in a putative class action, and be a strong mitigating factor in any investigation down the road by an enforcement agency.
 

Online Advertising Targeted by Federal Trade Commission

On May 15, 2014, Maneesha Mithal, Associate Director of the Division of Privacy and Identity Protection at the Federal Trade Commission (“FTC” or “Commission”) testified, on behalf of the FTC, before the U.S. Senate Committee on Homeland Security and Governmental Affairs addressing the Commission’s work regarding three consumer protection issues affecting online advertising: (1) privacy, (2) malware and (3) data security. Below is a summary of the Commission’s testimony regarding these three key areas and the Commission’s advice for additional steps to protect consumers.

Click here to read the full post on our sister blog Ad Law By Request.

The ECJ Google Spain decision: watch out for the long arm of EU data protection law!

On 13 May the Court of Justice of the European Union (“ECJ”) delivered a ground-breaking ruling on the application of the Data Protection Directive 95/46/EC (the “Directive”) to internet search engine operators. In its eagerly anticipated judgment, the ECJ ruled on key issues including the circumstances in which search engines must block certain information from being returned in the results of a search made against the name of an individual (even where those data were originally lawfully published by a third party), the so-called “right to be forgotten”, and the territorial application of the Directive.

Click here for the issued Client Alert.

Two Health Care Entities Pay to Resolve HIPAA Violations Exposed by Theft of Unencrypted Laptops

As mentioned on our Life Sciences Legal Update blog, two separate HIPAA settlements resulted from investigations by the Department of Health and Human Services, Office for Civil Rights (OCR) into two self-reported instances of unencrypted laptop theft from health care entities.  In the first instance, OCR’s investigation found that the company had previously recognized a lack of encryption on its technology but had failed to fully address the issue before the breach occurred.  In the second instance, OCR determined that the company had failed to comply with multiple requirements of the HIPAA Security Rule.  Both instances resulted in settlements that included financial penalties as well as agreement to continued oversight by OCR through Corrective Action Plans.

To read the entire post, click here.

Article 29 Working Party releases opinion on Anonymisation Techniques

This post was written by Kate Brimsted, Katalina Chin, and Tom C. Evans.

In April, the EU’s Article 29 Working Party (Working Party) adopted an opinion on Anonymisation Techniques (Opinion). The Opinion is designed to provide guidance for organisations on the use of common anonymisation techniques, and the risks that can be presented by them.

When data is truly anonymised – so that the original data subject cannot be identified – it falls outside of EU data protection law. The Opinion notes that the re-use of data can be beneficial, providing “clear benefits for society, individuals and organisations”. However, achieving true anonymisation is not easy, and can diminish the usefulness of the data in some circumstances.

The EU regime does not prescribe any particular technique that should be used to anonymise personal data. To guide organisations in designing their own policy on anonymisation, the Opinion examines the two principal forms: (a) randomization and (b) generalization.

In particular, the Opinion looks at the relative strengths and weaknesses of each technique, and the common mistakes and failures that arise in relation to them. Each technique is analysed against three risk criteria:

1. The risk that data identifying an individual could be singled out
2. The ‘linkability’ of two records that relate to an individual
3. Inferences that can be drawn about one set of data based on a second set of data

The Working Party stated that by considering these strengths and weaknesses, organisations will be able to take a risk-based approach to the anonymisation technique used and tailor it to the dataset in question. The Opinion emphasizes that no technique will achieve anonymisation with certainty, and that since the fields of anonymisation and re-identification are actively researched, data controllers should regularly review their policies and the techniques employed.

In addition, the Opinion makes clear that pseudonymisation is not a method of anonymisation in itself. Therefore, organisations that use this technique should be aware that the data they process does not fall outside of the EU data protection regime. These comments are significant because the draft EU General Data Protection Regulation contains specific references to pseudonymisation and the circumstances in which the technique can be used.
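
To illustrate the point, the minimal sketch below (hypothetical; the function, salt and email address are invented) shows why replacing identifiers with hashed tokens is pseudonymisation rather than anonymisation: the tokens are stable, so records about the same person remain linkable – precisely the ‘linkability’ risk criterion listed above.

```python
import hashlib

def pseudonymise(email: str, salt: str = "static-salt") -> str:
    # Swap a direct identifier for a stable token. The same input always
    # yields the same token, so records stay linkable: this is
    # pseudonymisation, not anonymisation, and the data remains
    # within the EU data protection regime.
    return hashlib.sha256((salt + email).encode()).hexdigest()

# Two datasets pseudonymised the same way can still be joined on the token.
assert pseudonymise("jane@example.com") == pseudonymise("jane@example.com")
```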

At the recent IAPP Europe Data Protection Intensive 2014 held in London, Security Engineering Professor Ross Anderson of the University of Cambridge put to the conference audience that anonymisation will never be a completely infallible tool for the security of personal data – a discussion set in the context of secondary uses of medical records. Despite these wider questions on anonymisation being posed by many, the Working Party’s Opinion will at least provide some useful guidance for organisations that have a need to anonymise data.
 

 

FTC Settlement with Snapchat - What Happens on Snapchat Stays on Snapchat?

Last Thursday, the Federal Trade Commission (FTC) announced that messaging app Snapchat agreed to settle charges that it deceived consumers with promises about the disappearing nature of messages sent through the app. The FTC case also alleged that the company deceived consumers over the amount of personal data the app collected, and the security measures taken to protect that data from misuse and unauthorized disclosure. The case alleged that Snapchat’s failure to secure its Find Friends feature resulted in a security breach that enabled attackers to compile a database of 4.6 million Snapchat usernames and phone numbers.

Click here to read the full post on our sister blog AdLaw By Request.
 

Privacy Regulators of the World Unite to Conduct "Sweep" of Mobile Apps from 12 May

This post was written by Kate Brimsted, Mark S. Melodia, Daniel Kadar, Paul Bond and Cynthia O’Donoghue.

In the week commencing 12 May, members of the Global Privacy Enforcement Network (GPEN) will conduct an online privacy sweep, focusing on the transparency with which mobile apps collect personal data.

GPEN is an informal network of 27 Data Protection Authorities (“DPAs”) that was established in 2007. Its members include the UK’s ICO, France’s CNIL, Spain’s AEPD, Canada’s OPC and the U.S. FTC.

The network’s tasks are to:

  • Support joint enforcement initiatives and awareness campaigns
  • Work to develop shared enforcement policies
  • Share best practices in addressing cross-border challenges
  • Discuss the practical aspects of privacy law enforcement co-operation

The sweep is part of an effort to ensure that consumers are fully aware of the ways in which apps gather and use personal data. To this end, DPAs will focus on the level of permission requested by apps, the way in which the permission is requested and the purposes for which personal data are used. The DPAs will focus in particular on whether the level of permission requested by the app is what would be expected of an app of its type, or whether it appears excessive.

This is the second time that GPEN has conducted an Internet privacy sweep. In May 2013, DPAs from 19 jurisdictions carried out a sweep of websites and apps and their privacy policies.  That sweep looked at: (1) was there a privacy policy? (2) was it easy to find? and (3) was it easy to read and understand?  It led to regulators following up with a number of organisations, including insurance companies, financial institutions, and media companies, resulting in some substantial changes being made to their privacy policies.

The results of the 2014 sweep are expected to be published later this year.
 

N.Y. court rules U.S. warrants include overseas data: Microsoft loses first round of legal challenge

This post was written by Cynthia O'Donoghue and Kate Brimsted.

At the end of April, a magistrate judge of the Southern District of New York denied a motion filed by Microsoft for the quashing of a search warrant issued under the Stored Communications Act (the Act). Microsoft had argued that the warrant should be quashed because the data concerned was stored in Ireland, and the Act did not authorize U.S. courts to issue extraterritorial warrants. 

Under the provisions of the Act, the U.S. government can require information from Internet Service Providers (ISPs) in three ways: by subpoena, court order or warrant. The method chosen determines the extent of the information that an ISP is required to provide. In this case, the warrant ordered Microsoft to disclose extensive information, including:

  • The contents of all emails stored in the account
  • Records and information regarding the identification of the account (including everything from the user’s name to their method of payment)
  • All records stored by the user of the account, including pictures and files
  • All communications between Microsoft and the user regarding the account

Denying the motion, the judge stated that although the language of the Act was ambiguous, the interpretation advanced by Microsoft would be inconsistent with the Act’s structure and legislative history. In addition, the judge pointed to the practical consequences if Microsoft’s motion were upheld, noting that the burden on the government would be “substantial”, and that it would lead to reliance on a Mutual Legal Assistance Treaty which “generally remains slow and laborious”.

Microsoft’s robust stance on this issue comes at a time when ISPs face increasing public and political scrutiny of their dealings with investigatory agencies. Following the ruling, Microsoft Corporate VP and Deputy General Counsel David Howard stated, “the US Government doesn’t have the power to search a home in another country, nor should it have the power to search the content of email stored overseas.” It appears that Microsoft intends to take this issue further, with Howard noting that the path of the legal challenge could “bring the issue to a US district court judge and probably to a federal court of appeals.”
 

EU - US Privacy Bridge Project Announced

This post was written by Cynthia O’Donoghue, Katalina Chin, and Matthew N. Peters.

On 2 May, Dutch Data Protection Authority (DPA) Chairman Jacob Kohnstamm announced a new Privacy Bridge Project between the U.S. and the EU at the IAPP Data Protection Intensive.  In his announcement, Kohnstamm highlighted the need for these two privacy regimes to find common ground, and to abandon the age-old position that ‘interoperability’ will only be achieved when one regime has made wholesale changes to its privacy laws.

This announcement follows a period of strained relations between the U.S. and EU on the subject of privacy.  With the threat of suspension hanging over Safe Harbor (should the EU Commission’s proposals to strengthen the framework fail) this announcement offers a new avenue of dialogue, which focuses on compromise and the need ‘to find practical, maybe technological solutions’ to the differences between the U.S. and EU regimes. 

The project team will be made up of around 20 experts from both sides of the Atlantic, and led by the CSAIL Decentralized Information Group at the Massachusetts Institute of Technology, together with the Institute for Information Law of the University of Amsterdam.  The project program will include four two-day meetings, with the intention of delivering a paper of recommendations by summer 2015, and a global DPA conference later that year.  The team’s first meeting was held in Amsterdam on 28 – 29 April, with Fred Cate (Indiana School of Law) and Bojana Bellamy (President of the Centre for Information Policy Leadership), amongst others,  in attendance.  The remaining three meetings are scheduled to be held in Washington DC, Brussels and Boston.      
 

VPPA Class Action Against Hulu Survives

This post was written by Frederick Lah and Lisa B. Kim.

On April 28, the Northern District of California granted in part and denied in part Hulu’s motion for summary judgment over allegations that it violated the Video Privacy Protection Act (VPPA) by sharing users’ information with comScore and Facebook.  The court granted the motion for the comScore disclosures but denied the motion for the Facebook disclosures. 

The VPPA restricts video service providers from disclosing “personally identifiable information” to third parties.  Under the statute, the term “personally identifiable information” means “information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.”  In this case, the court drew a distinction between the types of information Hulu was disclosing to comScore and the types of information it was disclosing to Facebook.  The court held that, “[t]he comScore disclosures were anonymous disclosures that hypothetically could have been linked to video watching.  That is not enough to establish a VPPA violation.  As to the Facebook disclosures, there are material issues of fact about whether the disclosure of the video name was tied to an identified Facebook user such that it was a prohibited disclosure under the VPPA.”

According to declarations from Hulu, Hulu’s main source of income is its advertising revenue.  Advertisers pay Hulu to run its commercials during breaks, and the amount they pay is tied to how often an ad is viewed.  Hulu uses analytics providers like comScore to verify those types of metrics.  In Hulu’s case, comScore performed its analytics on the Hulu site and then reported its data to Hulu in the “aggregate and generalized” form.  While the court acknowledged that “comScore doubtless collects as much evidence as it can about what webpages Hulu users visit,” the court held that “there is a VPPA violation only if that tracking necessarily reveals an identified person and his video watching.”  Since there was no evidence that comScore’s tracking did that here, the court granted the motion in Hulu’s favor with respect to the comScore disclosures.

As for the Facebook disclosures, the court’s ruling turned on its determination that “personally identifiable information” was transmitted from Hulu to Facebook via Facebook’s “Like” button.  Each hulu.com watch page has a “Like” button.  During the relevant time period, the URL of each watch page – which was sent to Facebook in order to offer the “Like” button – included the video title that the user watched.  And through Hulu’s cookies associated with the “Like” button, the name of the Facebook user was provided.  Based on these facts, the court found that plaintiff’s VPPA claims for the Facebook disclosures should survive.
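
As a rough illustration of the mechanism the court described – all URLs, identifiers and values below are invented or simplified, and this is not the litigation record – the browser request that loads a “Like” button can carry both the watch-page URL naming the video and a cookie identifying the logged-in Facebook user:

```python
# Hypothetical reconstruction for illustration only.
from urllib.parse import urlencode

# The watch-page URL itself names the video being watched.
watch_page = "https://www.hulu.com/watch/example-show-s01e01"

# Loading the "Like" button sends that URL to Facebook as a parameter...
like_button_request = (
    "https://www.facebook.com/plugins/like.php?" + urlencode({"href": watch_page})
)

# ...while the browser attaches Facebook's own login cookie (placeholder
# name and value), identifying the user. Together: who watched what.
browser_cookies = {"facebook_user_id_cookie": "1234567890"}

print(like_button_request)
print(browser_cookies)
```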

The VPPA was amended last year and state iterations of the law have recently given rise to litigation.  With statutory damages of $2,500 per violation, the potential liability under the VPPA can be catastrophic.  Plaintiffs’ counsel has brought VPPA class actions against various content platforms, including Blockbuster, Netflix and Redbox, with mixed results.  The cases against Blockbuster and Netflix each settled.  Redbox’s motion to dismiss was denied by the district court, but on interlocutory appeal the case was dismissed by the Seventh Circuit.

There has also been a recent upsurge of VPPA cases alleging that various news and entertainment organizations violated the VPPA by sharing what consumers watched with third-party analytics companies.  See Perry v. Cable News Network, Inc., et al., Case No. 1:14-cv-01194, N.D. Ill.; Ellis v. The Cartoon Network, Inc., Case No. 1:14-cv-00484, N.D. Ga.; Locklear v. Dow Jones & Co. Inc., Case No. 1:14-cv-00744, N.D. Ga.; Eichenberger v. ESPN, Inc., Case No. 2:2014-cv-00463, W.D. Wash.  These cases are still in the pleading stage and will surely be impacted by this new ruling.
 

French data protection authority ramps up inspections for 2014 - will it be a knock on the door or a "remote audit"?

This post was written by Daniel Kadar and Kate Brimsted.

At the end of April, the French data protection authority (CNIL) released its inspection schedule for 2014 (the Schedule), promising to carry out some 550 inspections over the course of the year.

Approximately 350 inspections are expected to be on-site, a quarter of which will focus on CCTV/video surveillance, and 200 will be carried out using the CNIL’s new powers of online investigation. These powers, introduced in April 2014, enable agents to carry out “remote investigations” into compliance with the French Data Protection Act.

The Schedule sets out six priority areas for inspections in the period, including:

  • Processing personal data by the National Database on Household Credit Repayments
  • Handling data security breaches by electronic communications operators
  • Collecting and using personal data, including sensitive data, by social networks, online dating providers and third-party applications linked to social networks
  • Processing personal data by the government’s system for the payment and collection of income tax
  • Processing personal data by online payment systems
  • Processing personal data by the National Sex Offenders Register

The CNIL will also continue to participate in the Article 29 Working Party’s effort to harmonise the approach of EU data protection authorities regarding Internet cookie compliance.

The CNIL further renewed its commitment to support international cooperation between data protection authorities, and is set to take part in the 2nd Global Privacy Enforcement Network’s Internet Sweep (Internet audits evaluating how well websites protect the data privacy of their users). 

International cooperation is a hot topic for EU data protection authorities. In anticipation of the General Data Protection Regulation and its proposed introduction of a “one-stop shop” mechanism, regulators across Europe will be looking to plan ahead for the changes to come. The CNIL has also, on behalf of the Article 29 Working Party, been leading the EU data protection enforcement against Google after the implementation of its new platform (also covered by this blog here [dated Jan. 21, 2014]).
 

Article 29 Working Party proposes clauses for data transfers from EU processors to non-EU subprocessors

This post was written by Cynthia O'Donoghue, Kate Brimsted, and Matthew N. Peters.

As is well known, personal data is restricted from leaving the EEA. On 21 March, the EU’s Article 29 Working Party (the WP) issued draft ad hoc model clauses for data transfers from EU processors to non-EU subprocessors.  While not yet approved by the European Commission, this working document provides useful guidance on the WP’s thinking on data transfers to processors which fall outside the scope of EU Decision 2010/87/EU on standard contractual clauses for the transfer of personal data to processors established in third countries under Directive 95/46/EC (the 2010 Clauses).

The 2010 Clauses apply to situations where an EU-based controller is transferring data to a processor based outside the EEA.  In practice, the initial transfer will often be from an EU-based controller to an EU-based processor, with a subsequent transfer to a non-EU subprocessor.  As the 2010 Clauses do not strictly apply to such arrangements, the guaranteed adequacy of protection is not available and alternative means for overcoming the restriction on extra-EEA data transfer must be found, e.g. valid consent by the individuals to the transfer of their data.

The draft clauses include expected restrictions, such as preventing further subprocessing activity without the controller’s prior consent.  They will also need to be used in conjunction with a suitable Framework Contract between the EU-based controller and processor (to satisfy Article 17 of the Directive).

The working document represents an early, though welcome, step along the path towards Commission-approved supplementary clauses to solve this practical problem. 
 

Domain Dispatches: NETmundial Is Right Around The Corner

On Wednesday, April 23, 2014, Sao Paulo, Brazil will host NETmundial – the Global Multistakeholder Meeting on the Future of Internet Governance. Approximately 800 people will descend on Sao Paulo to spend two days and nights discussing, debating, arguing, cajoling, pleading, and demanding potential changes in the governance of the Internet. Hundreds more will participate remotely, at "remote hubs" and through the Internet (of course).

Click here to read more on our sister blog, AdLaw by Request.

ICANN Goes to Singapore

This post was written by Gregory S. Shatan.

The Internet Corporation for Assigned Names and Numbers (ICANN) held its 49th semi-annual meeting in Singapore in March.  Reed Smith partner, Gregory Shatan, provided real time reports Straight from Singapore on our sister blog AdLaw By Request.

Article 29 Working Party adopts opinion on Personal Data Breach Notification

This post was written by Cynthia O'Donoghue.

At the end of March, the EU’s Article 29 Working Party adopted an opinion on Personal Data Breach Notification (the Opinion). The Opinion is designed to help data controllers decide whether they are obliged to notify data subjects when a ‘personal data breach’ has occurred.

A ‘personal data breach’ under Directive 2002/58/EC (the Directive) broadly covers the situation where personal data is compromised because of a security breach, and requires communications service providers (CSPs) to notify their competent national authority. Depending on the consequences of the personal data breach, CSPs may also be under a duty to notify the individual data subjects concerned.

The Opinion contains factual scenarios outlining the process that should be used by CSPs to determine whether, following a personal data breach, individuals affected should be notified. Each scenario is assessed using the following three “classical security criteria”:

  • Availability breach – the accidental or unlawful destruction of data
  • Integrity breach – the alteration of personal data
  • Confidentiality breach – the unauthorized access to or disclosure of personal data

The Opinion includes practical guidance for notifying individuals, including where a CSP does not have the contact details of the individuals concerned, or where the compromised data relates to children.  The Opinion also stresses the importance of taking measures to prevent personal data breaches.
 

OpenSSL reveals significant security flaw

This post was written by Cynthia O'Donoghue.

On 7 April, OpenSSL released a Security Advisory exposing a flaw which, if exploited, would allow hackers to reveal communications between servers and the computers of Internet users.

OpenSSL is the most popular open source encryption library on the Internet, and is used by a large number of commercial and private service providers, including many social media sites, email providers and instant messaging platforms.  The tool is used to encrypt information passed between Internet users and website operators, and the encrypted communications should only have been capable of being decrypted by the particular service provider.

When exploited, the security flaw, dubbed “Heartbleed”, revealed the encryption keys of service providers using the system. Once decrypted, the hackers essentially had unrestricted access to the communications.  OpenSSL has released an update to address the security flaw; however, service providers will find it impossible to assess whether the security of their systems has been compromised, making the situation particularly serious. In addition, the update will only protect future communications, and therefore any that may have already been intercepted will remain vulnerable.
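
For those assessing exposure, the affected versions are well documented: Heartbleed was present in OpenSSL 1.0.1 through 1.0.1f, and the fix shipped in 1.0.1g. A minimal Python sketch for checking which OpenSSL version a given runtime is linked against:

```python
import ssl

# The OpenSSL version this Python runtime links against.
# Heartbleed affected OpenSSL 1.0.1 through 1.0.1f (fixed in 1.0.1g).
print(ssl.OPENSSL_VERSION)       # e.g., "OpenSSL 1.0.1f 6 Jan 2014"
print(ssl.OPENSSL_VERSION_INFO)  # the same version as a tuple
```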

Internet users are being advised to change all of their passwords, and in particular those for important services such as Internet banking.

The security flaw is likely to raise data protection issues for organisations, and it may behoove users of OpenSSL to take a proactive approach to communicating with their customers about security issues.  Those organisations that have suffered a security breach may be under a duty to notify individuals, and could be subject to adverse publicity, as well as litigation and regulatory investigation. 
 

Update on Federal Trade Commission v. Wyndham Worldwide Corp.: FTC Allowed To Proceed with Data Security Suit as Court Rejects Fundamental Challenge to FTC Authority

This post was written by Paul Bond and Christine N. Czuprynski.

A New Jersey federal court is allowing the FTC’s case against Wyndham Worldwide Corporation to go forward, denying Wyndham’s Motion to Dismiss on both the unfairness and deception counts.  In this closely watched case, the court emphasized that in denying Wyndham’s request for dismissal, it was not providing the FTC with a “blank check to sustain a lawsuit against every business that has been hacked.”  The far-reaching implications of this decision, though, cannot be ignored.

The Wyndham decision may well prove rocket fuel for an agency already proceeding at break-neck speed to formulate and enforce (often at the same time) new data security law.  Any company that was still waiting for the FTC to go through a formal rulemaking process on data security can wait no more.  The decision by Judge Salas has arguably ratified the reams of informal guidance the FTC has provided over the past decade-plus in enforcement actions, panel discussions, white papers, and more, as though that guidance had gone through the formal notice-and-comment rulemaking process.  Unless a company is confident that it knows, has synthesized, and has applied this informal guidance to its own activities, it stands at risk of being the next target of the FTC's newly affirmed Section 5 authority.

The Federal Trade Commission sued Wyndham Worldwide in June 2012 in the District of Arizona.  The FTC alleged that Wyndham’s failure to properly safeguard the personal information in its possession led to a data security breach that exposed thousands of customers to identity theft and other fraud. The case was transferred to the District of New Jersey in March 2013.  Soon thereafter, Wyndham filed its Motion to Dismiss.

Wyndham challenged the FTC’s authority to regulate unfairness in the data security context.  Wyndham further argued that the FTC could not bring unfairness claims unless and until it had promulgated regulations on the issue.  U.S. District Judge Esther Salas rejected both of these challenges, as well as Wyndham’s third challenge, that the FTC failed to sufficiently plead both its unfairness and deception claims.

Wyndham argued that section 5 of the FTC Act does not confer unfairness authority that covers data security.  Wyndham contrasted section 5 of the FTC Act to the Fair Credit Reporting Act (FCRA), the Gramm-Leach-Bliley Act (GLBA), and the Children’s Online Privacy Protection Act (COPPA), all of which include specific authority for the FTC to regulate data security in certain contexts.  Wyndham argued that those statutes, which were enacted after the FTC Act, would be superfluous if the FTC had the general data security authority it seeks to wield in this case.  The court disagreed and ruled that the FTC’s general authority over data security can coexist with more specified authority in the FCRA, GLBA, and COPPA.

Wyndham also argued that the FTC had not provided fair notice of what data-security practices a business had to implement in order to comply with the FTC Act. In rejecting that argument, the court held that the FTC was not required to engage in rulemaking before enforcing Section 5 in data-security cases, but could instead develop the law on a case-by-case basis. The court also found that fair notice was provided through the FTC's public complaints, consent agreements, public statements and business guidance brochures. As such, the FTC was not required to also promulgate formal regulations. In addition, the court found that the FTC had pled with enough particularity to satisfy the heightened requirements of Rule 9(b), even though it was not persuaded that this action fell under that rule.

With respect to the deception claim, the ruling also touched on the respective liability of franchisors and franchisees, an issue we’ve written about recently.  Wyndham sought to exclude Wyndham-branded hotels from the case on the grounds that Wyndham Hotels and Resorts is a legally separate entity from Wyndham-branded hotels.  It therefore argued that statements in the Hotels and Resorts website privacy policy could not form the basis for deception claims where personal information was accessed from Wyndham-branded hotels.  The court reviewed the language of the privacy policy and determined that a reasonable person could conclude that it made statements about data security at both the Hotels and Resorts and the Wyndham-branded properties.  Despite the court’s insistence that it was not providing the FTC with carte blanche to pursue companies that fall victim to hackers, the ruling makes clear that when companies experience data breaches, they – and their franchisees – are now more, not less, likely to face the possibility of enforcement action by the FTC.
 

Spain's AEPD Publishes Draft Privacy Impact Assessment Guide

This post was written by Katalina Chin.

On 17 March, the Spanish data protection agency (la Agencia Española de Protección de Datos - AEPD) published a draft privacy impact assessment guide (Evaluación del Impacto en materia de Protección de Datos Personales). At the same time, the AEPD has initiated a public consultation, open until 25 April, to garner opinion and comments on the guide, after which they will issue a final version.

The guide sets out a framework to improve privacy and data protection in relation to an organisation’s technological developments, with the aim of helping them identify, address and minimise data protection risks prior to the implementation of a new product or service.

In this draft guide, the AEPD comments on the increasing importance for organisations of demonstrating their commitment to the rights of the individuals whose personal data they process, and of meeting their legal obligations (essentially advocating the principle of accountability). In this regard, it advises that a well-developed privacy impact assessment will go a long way towards evidencing an organisation’s due diligence, as well as assisting it to develop appropriate methods and procedures for addressing privacy risks.

It is not suggested, however, that the guide will provide the only methodology for carrying out a privacy impact assessment. Indeed, the AEPD says that it would be receptive to organisations that wish to develop an assessment specifically adapted to their business or sector, and that it would be open to providing such organisations with guidance to ensure that they meet the minimum regulatory requirements.

As well as providing general guidance on privacy impact assessments, the guide provides a set of basic questions, together with an ‘evaluation’ tool developed by the AEPD, with which organisations can ‘check off’ and determine the legal obligations that must be met in order to implement their intended product or service in compliance with data protection legislation.

While this privacy impact assessment is not obligatory in Spain, this type of compliance review could become a legal requirement across the EU if the European Regulation on Data Protection remains as currently drafted (Article 33).
 

The ICO Sets Out Agenda for 2014-2017

This post was written by Cynthia O'Donoghue.

At the end of March, the UK Information Commissioner’s Office (ICO) released its corporate plan for 2014-2017 titled “Looking ahead, staying ahead” (the Plan). Information Commissioner Graham stated that the changes proposed are “about getting better results, for both consumers and for data controllers.”

As the UK’s supervisory body for upholding information rights, the ICO has a wide range of responsibilities. These include educating citizens and organisations about their rights and responsibilities under the various pieces of legislation, and also investigating complaints and taking enforcement action when things go wrong.

In the Plan, the ICO recognises that its role will evolve in light of the proposed EU General Data Protection Regulation and in relation to press regulation stemming from the Leveson report. In order to be proactive in fulfilling its duties, the ICO has stated that there will be a “shift in focus, with cases brought to the ICO used to identify broader data protection problems and improve organisations’ current practices.”

The Plan details a number of specific changes and initiatives that organisations can expect to see over the next three years, including:

  • Closer work with organisations such as trade bodies and other regulators to improve compliance and develop privacy seals and trust marks
  • The introduction of an online self-reporting breach tool to assist organisations in complying with the law
  • The development of new and existing codes of practice to ensure organisations have access to up-to-date advice
  • The reactive investigation of offences under the Data Protection Act 1998 and Freedom of Information Act 2000, along with initiatives for increased cooperation between the ICO and other regulators
  • The introduction of a monitoring process to check how quickly data controllers respond to subject access requests
  • A target of resolving 90% of data protection and freedom-of-information complaints within six months of being opened
  • The development of free training materials for organisations to use when training their own staff

Further reform in Australia

This post was written by Cynthia O’Donoghue.

Australia’s privacy protection reform laws came into force in mid-March, making significant changes to the regulation of data. Further reform is now on the horizon, with the Australian Law Reform Commission (the Commission) publishing a discussion paper titled, ‘Serious Invasions of Privacy in the Digital Era’ (Discussion Paper).

The Commission is carrying out an inquiry at the request of the Australian government to find “innovative ways the law might prevent or redress serious invasions of privacy.”  Two of the Commission’s proposals are likely to be of particular concern to businesses.

First, the Discussion Paper proposes the introduction of a principle for the deletion of personal data. The principle would differ significantly from the ‘Right to Erasure’, one of the headline provisions contained in the proposed EU General Data Protection Regulation.

The current draft of the EU provision would allow citizens to request the deletion of any personal data held about them, where the data controller has no reason for retaining it. Data controllers would also be required to take reasonable steps to inform any third parties to whom they have passed the data of this request. In contrast, the Australian recommendation on data erasure would apply only to data that the citizen had personally provided to a data controller. The Discussion Paper calls for comments as to whether the data controller should be under a duty to inform third parties of this request.

Second, the Discussion Paper contains a proposal to introduce a new Commonwealth statute which would apply throughout Australia. This statute would provide citizens with the ability to bring a cause of action against any individual or entity that seriously invades their privacy. The action would enable individuals to obtain damages independently of any breach of the Australian Privacy Act.

The Commission is scheduled to deliver its final report to the Attorney-General in June 2014.

 

Safety of US-EU Safe Harbor Given Boost

This post was written by Cynthia O'Donoghue.

Following months of uncertainty about the future of the EU-U.S. Safe Harbor Framework, political leaders from the EU and the United States reiterated their commitment to the regime in a joint statement issued 26 March (the Statement).

EU-U.S. Safe Harbor is designed to bridge the gap between the EU and U.S. data protection regimes, so that organisations certified to the program are deemed to adequately protect personal data transferred from the EU to them in the United States.

The future of the Safe Harbor regime was cast into doubt last year, following Edward Snowden’s revelations about the extent of NSA information gathering. In November 2013, the European Commission released a Strategy Paper which noted that “the current implementation of Safe Harbor cannot be maintained.” In particular, the paper pointed to shortcomings in transparency, enforcement and the use of the national security exception.

The situation worsened at the beginning of last month, when a resolution of the EU Parliament went so far as to call for the “immediate suspension” of the Safe Harbor regime on the ground that it provides an insufficient level of protection to EU citizens.

The Statement is the latest development in the saga, with officials pledging to maintain the Safe Harbor framework subject to a commitment to strengthening it “in a comprehensive manner by summer 2014”. This demonstrates a slightly more diplomatic approach, which should be reassuring to businesses that currently rely on the Safe Harbor exception.

The Statement also confirms the commitment of the EU to introducing a new “umbrella agreement” for the transfer and processing of data in the context of police and judicial proceedings. The aim of this agreement is to provide citizens with the same level of protection on both sides of the Atlantic, with judicial redress mechanisms open to EU citizens who are not resident in the United States. Negotiations around this agreement commenced in March 2011 and are still ongoing.
 

Brazil's Internet Bill: Latest Developments

This post was written by Cynthia O’Donoghue.

At the end of March, the Brazilian Chamber of Deputies voted in favour of the Marco Civil da Internet (Internet Bill), bringing the ground-breaking legislation one step closer to enactment. The Internet Bill will now progress to the Senate for approval.

In the wake of Edward Snowden’s revelations about global surveillance programs, the Internet Bill had included a provision that would have required organisations to store all data held on Brazilian citizens within the country’s borders. This controversial requirement has been dropped by the Brazilian government in the latest version of the Internet Bill. However, the text voted on by the Chamber of Deputies now provides that organisations will be subject to the laws and courts of Brazil in cases where the information of Brazilian citizens is involved.

The Internet Bill will introduce a variety of measures to govern the use of the Internet, providing citizens with robust rights and implementing strict requirements for organisations to comply with. The legislation is the first of its kind, and has been hailed by the Brazilian Minister of Justice as a sign that Brazil is at the forefront of efforts to regulate the web democratically. The most important provisions in the legislation are:

  • A statement of the rights of web users, including freedom of expression and the confidentiality of online communications
  • The enshrinement of “net neutrality”, a principle that prohibits ISPs and governments from making a distinction between different types of data traffic. This will prevent organisations from being able to limit access to different websites based upon subscription plans.
  • Confirmation that ISPs cannot be held liable for content uploaded by third parties using their services unless they refuse to remove such content following a court order

ANA Responds to Request for Information on Big Data

This post was written by Frederick Lah.

Earlier this week, the ANA submitted its comments in response to a Federal Register Notice by the White House’s Office of Science and Technology Policy seeking comments from industry participants on a variety of issues related to Big Data, including the public policy implications surrounding its use, gathering, storage and analysis. The ANA’s position, shared by many in the advertising industry, is that any government action in this space should appropriately correlate to the type of data at issue and be coordinated with the ongoing efforts by the private sector to develop self-regulatory solutions.

Click here to read more on our sister blog, AdLaw By Request.
 

The EU Cyber Security Directive: Latest Developments

This post was written by Cynthia O'Donoghue.

The Cyber Security Directive (formally known as the Network & Information Security Directive) (the Directive) was considered by the European Parliament (the Parliament) in March. After a first reading of the Directive, MEPs voted strongly in favour of its progression to the next stage of the legislative process. This will involve negotiations between the European Commission (EC) and the Council.

Work on the Directive first began in February 2013, as part of the EU Cyber Security Strategy. In a speech to the Parliament, Vice President Kroes reiterated that the Directive’s main aims are to bring all member states to a minimum security standard, promote cooperation and ensure preparedness and transparency in important sectors.

The Directive will introduce mandatory breach notification for certain organisations and set out minimum security requirements.

The Parliament made substantial amendments to the version of the Directive that had been proposed by the EC, such as:

  1. Narrowing the scope of organisations that fall within the Directive’s requirements, so that it no longer applies to search engines, social media platforms, internet payment gateways, cloud computing services, software developers or hardware manufacturers, by limiting its application to providers of “critical infrastructure”, such as organisations in the energy, transport, banking, finance, and health sectors.
  2. Developing national security strategies, with the assistance of ENISA (the European Union Agency for Network and Information Security), to allow Member States to establish minimum standards.
  3. Appointing a single point of contact among national competent authorities (NCAs) for security and network information systems, to facilitate cooperation and communication between Member States. NCAs will be responsible for ensuring compliance, including imposing sanctions where an organisation suffers a breach intentionally or through gross negligence. The amendment to the original text of the Directive permits Member States to appoint several NCAs, so long as only one acts as the “national single point of contact”, and restricts the imposition of sanctions to those cases.

As the Directive progresses to the next stage of the legislative process, additional changes could be made. The Commission aims for the Directive to have completed the legislative process by the end of 2014.

 

N.D. Cal. Denies Class Cert. Motion in Gmail Wiretapping Litigation

This post was written by Mark Melodia, Paul Bond, and Frederick Lah.

Last week, the Northern District of California denied a motion for class certification in a multidistrict litigation brought against Google over its alleged practice of scanning Gmail messages in order to serve content-based advertising. In re: Google Inc. Gmail Litigation, 5:13-md-02430 (N.D. Cal.). In sum, the court found that questions relating to whether class members had consented to the practice were too highly individualized to satisfy the predominance requirement, given the myriad disclosures available to class members.

The original complaint in this case dates back to 2010. Six class actions were eventually centralized in the Northern District of California where a consolidated complaint was filed. The complaint sought damages on behalf of individuals who either used Gmail or exchanged messages with those who used Gmail and had their messages intercepted by Google. The causes of action were brought under California’s Invasion of Privacy Act, as well as federal and state wiretapping laws (California, Maryland, Pennsylvania, and Florida).

In general, the Wiretap Act prohibits the unauthorized interception of wire, oral, or electronic communications. Under the federal Wiretap Act, there are several exceptions to this general prohibition, one of which is if the interception is done subject to “prior consent.” So, the issue of whether the class members had consented to the interception, either expressly or impliedly, was a central issue in the case.

Google filed a Motion to Dismiss on the basis that its interception fell within the ordinary course of Google’s business and was therefore exempt from the wiretapping statutes. That argument was rejected by the court. Google also argued that class members had expressly consented to the interception based on Gmail’s Terms of Service and Privacy Policy (collectively, the “Terms”), and that even if they hadn’t viewed the Terms, they impliedly consented to the interception because, per Google, all email users understand and accept the fact that email is automatically processed. In September 2013, the court granted in part and rejected in part Google’s Motion to Dismiss. Only the claims based on California’s Invasion of Privacy Act and Pennsylvania’s wiretapping law (with respect to a subclass) were dismissed; the rest of the claims survived. A month later, the plaintiffs filed their motion for class certification.

What proved fatal for plaintiffs on this go-around was their inability to demonstrate that the proposed classes satisfied the predominance requirement under FRCP 23. There were several proposed classes and subclasses, and the members of each were potentially subject to a different set of disclosures and registration processes. For instance, one of the classes represented users who signed up for Google’s free web-based Gmail service. These users were required to check a box indicating that they agreed to be bound by the Terms of Service. Another class was composed of users of an internet service provider (“ISP”), Cable One, which had contracted with Google for Google to provide email service under the Cable One domain name. Another class consisted of users from educational institutions, such as the University of Hawaii; similar to Cable One, the educational institutions had contracted with Google for email services. For businesses such as the ISP and the educational institutions, the contract required that the contracting business, not Google, ensure that end users agreed to Google’s Terms of Service.

With respect to the Terms themselves, it is interesting to note that in the court’s Order denying Google’s Motion to Dismiss, the court previously characterized the Terms as “vague at best and misleading at worst.” Per the court, the Terms of Service stated only that Google retained authority to prescreen content to prevent objectionable content, while the Privacy Policy suggested that Google would only collect user communications directed to Google, not among users. And while the contracting businesses, like Cable One and the University of Hawaii, were required to ensure end users were accepting Google’s Terms of Service, there were variations among the businesses as to how they would present the Terms of Service and obtain consent. Ironically, the fact that the court considered Google’s Terms to be vague or misleading and the fact that the Terms were not presented uniformly to end users appeared to actually help Google avoid certification -- it led to more individualized inquiries as to whether the users had given their express consent.

In addition to Google’s Terms, the court noted that there was also a “panoply of sources” through which users could have impliedly consented to Google’s practices, such as Google’s Help pages, Google’s Privacy Center, Google’s Ad Preference Manager (which included a webpage on “Ads on Search and Gmail”), Gmail’s interface itself, the Official Gmail Blog, Google’s SEC filings (which include the statement, “we serve small text ads that are relevant to the messages in Gmail”), and even media reports (e.g., New York Times, Washington Post, NBC News, PC World, etc.). The breadth of these sources helped to further convince the court that determining whether class members impliedly consented to Google’s interception was a highly individualized determination, and not one based on common questions. Whether each individual knew about or consented to the interception would depend on the sources to which he or she had been exposed. The plaintiffs contended that relying on extrinsic evidence outside of Google’s Terms would violate the parol evidence rule. The court was quick to point out that while that argument might work in a breach of contract case, the parol evidence rule was not applicable under the Wiretap Act, which requires the fact finder to consider all surrounding circumstances on the issue of consent.

Putting aside the question of whether Google’s Terms were in fact vague or misleading, a key takeaway for businesses from this case should be the importance of educating customers about their data practices. Google was able to avoid certification based on the fact that it offered a variety of other opportunities for its customers to learn more about its services and products. Outside of a website or app terms of use and privacy policy, businesses need to understand that the disclosures they make elsewhere can help to educate and inform users about their practices, e.g., on the websites or apps themselves, in a “Help” or “FAQ” section, in advertisements or promotional emails, or in subscription or license agreements. Of course, the more disclosures a business offers, the more challenging it can be to make sure that the message being delivered remains consistent. Businesses should also revisit their disclosures regularly to make sure that they are clear, conspicuous, current, and forthcoming.

The fact that these cases were brought under wiretapping laws adds another interesting wrinkle. The federal Wiretap Act comes with $100 in statutory damages per day, which could lead to billions of dollars in penalties. Various other web companies have recently faced privacy class actions pursuant to the Wiretap Act over the alleged data mining of user communications, including Yahoo!, LinkedIn and Facebook. We’ll continue to monitor this area closely to see how the recent Google decision might affect this wave of cases.
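
The scale of that exposure is simple arithmetic; the Python sketch below uses purely hypothetical figures to show how per-day statutory damages compound across a class:

    # Hypothetical figures, for illustration only: per-day statutory damages
    # under the federal Wiretap Act compound across time and class size.
    damages_per_day = 100        # statutory damages per day of violation
    days_of_violation = 365      # one year of alleged interception
    class_members = 100_000      # a modest class by Gmail standards

    total_exposure = damages_per_day * days_of_violation * class_members
    print(f"${total_exposure:,}")  # $3,650,000,000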

Originally posted in the IAPP Privacy Tracker.

ICO issues updated code of practice on subject access requests

This post was written by Cynthia O'Donoghue.

The UK Information Commissioner’s Office (ICO) has issued an updated code of practice (the Code) on subject access requests, less than a year after releasing its original guidance paper on the topic. The Code is designed to help organisations fulfill their duties under the Data Protection Act 1998 (DPA) and contains guidance in relation to recognising and responding to subject access requests.

The “right of subject access” enables individuals to request from organisations information about the personal data held about them. The information may include the source of the personal data, how it is processed, and whether it is passed on to any third parties. The DPA also permits individuals to request a copy of the personal data held. Unless an exemption applies, organisations are under a duty to provide this information when requested.

The Code is not legally binding, but it does demonstrate the steps that the ICO considers to be good practice. The ICO also points out that by dealing with subject access requests efficiently an organisation may enhance the level of customer service offered.

The main recommendations of the Code relate to the handling of a subject access request and cover the following issues:

  1. Taking a positive approach to subject access.
  2. Finding and retrieving the relevant information.
  3. Dealing with subject access requests involving other people’s information.
  4. Supplying information to the requester (not just copies).
  5. Exemptions.

The Code also contains guidance in relation to “special cases” and enforcement action by the ICO.

UK Information Commissioner's Office and U.S. Federal Trade Commission sign Memorandum of Understanding

This post was written by Cynthia O'Donoghue.

At the beginning of March, the UK Information Commissioner’s Office (ICO) signed a memorandum of understanding (MOU) with the U.S. Federal Trade Commission (FTC) at the IAPP Global Privacy Summit. The memorandum is aimed at increasing cooperation between the agencies, with UK Information Commissioner Graham stating that the arrangement would be “to the benefit of people in the United States and the United Kingdom.”

Whilst the MOU does not create legally binding obligations between the two agencies, it sets out terms for cooperation during investigations and enforcement activities. The FTC and ICO will cooperate on serious violations. The methods for cooperation include:

  • Sharing information, including complaints and personal information
  • A mutual undertaking to provide investigative assistance to the other agency through use of legal powers
  • Coordinating enforcement powers when dealing with cross-border activities arising from an investigation of a breach of either country’s law, where the matter being investigated is the same or substantially similar to practices prohibited by the other country

Measures to encourage cooperation between national regulators have been introduced by several international organisations. For example, in 2010, the Asia-Pacific Economic Cooperation (of which the United States is a member) launched a Cross-border Data Privacy Initiative, recognising that “trusted flows of information are essential to doing business in the global economy.”

The MOU is a joint acknowledgment by the FTC and ICO that consumer protection and data protection require close collaboration, and it serves as a warning to organisations that the agencies will be proactive in carrying out investigations of serious violations of consumer protection and data protection laws.

European Parliament votes in favour of new Data Protection Regulation

This post was written by Cynthia O'Donoghue.

In March, the European Parliament voted overwhelmingly in favour of implementing the draft Data Protection Regulation, making its commitment to reforming the European regime irreversible. In order to become law, the Regulation must now be negotiated and adopted by the Council of Ministers.

Discussions around reform began in January 2012, in recognition of the growing economic significance of personal data. With estimates that by 2020 the personal data of European citizens will be worth nearly €1 trillion annually, it is important that any reform ensures an adequate level of protection for citizens while not overburdening businesses. To this end, Vice-President Viviane Reding has stated that the Regulation will “make life easier for business and strengthen the protection of our citizens.” The Regulation will make four key changes to the data protection regime, which are summarised below:

  1. Equal application in all EU Member States by replacing the “current inconsistent patchwork of national laws”, making compliance easier and cheaper
  2. Creation of a “one-stop shop” allowing organisations to deal with the single data protection authority where their EU headquarters is located, rather than with authorities across various member states, reducing the administrative burden on organisations. EU residents may still bring complaints to the authority in their home country.
  3. Application of the Regulation to any organisation that operates within the single market to ensure that businesses are competing equally
  4. A right of EU residents to request that their data be removed where a data controller no longer has a legitimate reason to retain it

The draft Regulation continues to contain robust sanctioning powers with fines of up to 5% of annual worldwide turnover, a significant increase by the European Parliament on the 2% limit that had previously been recommended.

Despite the Parliament’s vote and ringing endorsement for the draft Regulation, the text is still subject to input from the Council of Ministers, who appear to be taking a more pragmatic approach aimed at promoting the EU Digital Agenda and continued growth in the digital marketplace. The next Council meeting is in June, so we may yet see further revisions to the existing draft.

Edward Snowden submits written testimony to the EU Civil Liberties Commission

This post was written by Cynthia O'Donoghue.

When Edward Snowden alerted the media to the extent of global intelligence surveillance programmes in 2013, he sparked investigations and debate into the gathering of data by intelligence agencies worldwide. He is now contributing to the debate again, submitting written testimony (the Statement) to the investigation of the EU Committee on Civil Liberties (the Committee).

The Committee’s investigation has involved a broad examination of the ways in which data on EU citizens is collected by both American agencies and agencies in its own “back yard”. In January, the Committee released a draft report on the investigation, with MEPs condemning the “vast, systematic, blanket collection of personal data of innocent people”.

In the Statement, Snowden explains the extent of the data gathered by agencies, stating that while working for the NSA, he could “read the private communications of any member of this committee, as well as any ordinary citizen”. Snowden criticises the use of resources to fund mass, suspicionless surveillance at the cost of “traditional, proven methods”, citing a number of examples of incidents that have not been prevented despite the use of mass surveillance.

The Statement also contains details of cooperation between EU Member States and the NSA’s Foreign Affairs Directorate (FAD), stating that FAD systematically attempts to influence legal reform across the EU. Where these attempts are successful, Snowden claims, FAD encourages states to perform “access operations”, which allow it to gain access to the bulk communications of telecoms providers within the jurisdiction.

In relation to whistleblowing within intelligence agencies, Snowden points out that the current legal protections in the United States do not apply to the employees of private companies and therefore do not provide a satisfactory level of protection to concerned individuals employed by such organisations. In addition, the Statement indicates that raising concerns internally is ineffective as other employees are fearful of the consequences that may follow.

For businesses, Snowden’s remarks when questioned about industrial espionage are likely to be the most interesting. Snowden states that the fact that “a major goal of the US Intelligence Community is to produce economic intelligence is the worst kept secret in Washington”. In addition, the Statement points out that evidence of industrial espionage can be seen in the press, with an example being recent reports that GCHQ successfully targeted a Yahoo service to gain access to the webcams of devices within citizens’ homes.

The Statement paints a concerning picture of the way in which politics influences the level of protection given to citizens. As Snowden points out, the Statement is limited to information that has already entered the public domain, and so it is unlikely to alter the Committee’s findings. However, with the European Parliament scheduled to vote on the draft data protection regulation and the Safe Harbor Program, the Statement will intensify scrutiny of the legal reforms being implemented in Brussels.
 

Article 29 Working Party and APEC authorities release "practical tool" to map the requirements of the BCR and CBPR regimes

This post was written by Cynthia O'Donoghue.

At the beginning of March, representatives of the EU Article 29 Working Party and the Asia-Pacific Economic Cooperation (which includes, among others, the United States and the People’s Republic of China) announced the introduction of a new Referential on requirements for binding corporate rules (the Referential).

Both the EU and Asia-Pacific Economic Cooperation (APEC) regimes place restrictions on the transfer of data across borders. Under the EU regime, implementing a set of binding corporate rules (BCR) that have been approved in advance by national authorities will allow a company or group of companies to transfer data outside of the EEA without breaching the EU Data Protection Directive. Under the APEC regime, Cross-Border Privacy Rules (CBPR) serve the same purpose, allowing data to be transferred between participating economies. Both regimes require the rules to be approved in advance by regulators before they can be relied on.

The Referential does not achieve mutual recognition of both the EU and APEC systems, but it is intended to be a “pragmatic checklist for organizations applying for authorization of BCR and/or certification of CBPR”. The Referential acts as a comparison document, setting out a “common block” of elements that are shared by both systems, and “additional blocks” which list their differences. For example, while both systems require appropriate training to be given to employees, the EU regime requires only that this training is given to employees with permanent or regular access to personal data. In contrast, the APEC regime appears to extend to all employees.

Work on the Referential began early in 2013, with Lourdes Yaptinchay stating that cooperation between APEC and the EU “is an important next step towards better protecting personal data and could provide a foundation for more fruitful exchange between companies with a stake in the two regions.”

The comparative nature of the Referential highlights the challenges that face organisations that want to satisfy both the EU and APEC regimes in a single set of rules. By drafting a set of rules that complies with the most stringent regime on any one point, organisations can use the document to navigate the approval process with more ease.

Information Commissioner's Office issues updated code of practice on conducting Privacy Impact Assessments

In February, the UK Information Commissioner’s Office (ICO) issued an updated code of practice on conducting Privacy Impact Assessments (PIA), with a six-point process for organisations to follow (the Code).

A PIA is intended to focus the attention of an organisation on the way that data is held and used in any project, and reduce the risk that this creates. A PIA is not a legal requirement, but the Code states that carrying one out will help organisations to make sure that they are complying with the law. Carrying out a PIA can also provide reputational benefits as individuals gain a better understanding of why and how data about them is held. 

The Code is aimed at “organisations of any size and in any sector”, and organisations are encouraged to carry out a PIA early on in the life of a project. The PIA process provided by the Code is designed for use by non-experts, making the process accessible to organisations of all sizes.

The Code recommends that organisations consult at all stages of a PIA. Consultations should be carried out both with internal colleagues and with external people who will be affected by the project. A high-level summary of the six-point process is as follows:

  1. Identifying the need for a PIA.  The Code includes screening questions which are designed to be included in an organisation’s normal project management procedure. By doing this, the ICO intends that the need for a PIA to be carried out will be considered in each project.
  2. Describing information flows.  Organisations should consider and document how and why information travels around an organisation in order to effectively map the risk.
  3. Identifying privacy and related risks.  At this stage, an organisation can understand the risks posed by the data highlighted in steps 1 and 2. The Code encourages organisations to adopt their own preferred method of categorizing the risks that are identified.
  4. Identifying and evaluating privacy solutions. Having identified the risks, organisations should consider ways of mitigating them. The Code states that “Organisations should record whether each solution results in the privacy risks being eliminated, reduced or simply accepted.” A minimal sketch of such a record follows this list.
  5. Signing off and recording the PIA outcomes. The Code stresses the importance of keeping a record of the PIA process in order to facilitate the implementation of its findings.
  6. Integrating PIA outcomes back into the project plan.  Having completed a PIA, organisations should implement the measures identified into the project management process.
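
For illustration only, the Python sketch below (our own structure, not one prescribed by the Code) records the outputs of steps 3 to 5 as a simple risk register, noting for each risk whether the chosen solution eliminates, reduces or simply accepts it:

    # An illustrative risk register, not prescribed by the Code: one record
    # per identified privacy risk, noting the proposed solution and its
    # outcome so the PIA can be signed off and fed back into the project.
    from dataclasses import dataclass

    @dataclass
    class PrivacyRisk:
        description: str
        solution: str
        outcome: str  # "eliminated", "reduced" or "accepted", per the Code

    register = [
        PrivacyRisk("Raw location data retained indefinitely",
                    "Aggregate and delete after 30 days", "reduced"),
        PrivacyRisk("All staff can view full customer records",
                    "Role-based access controls", "reduced"),
    ]

    for risk in register:
        print(f"{risk.description}: {risk.solution} -> {risk.outcome}")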

The Code takes an expansive approach to the process of conducting a PIA, providing several annexes with tools to assist in the process. Organisations should be reassured that, by following the provisions of the Code, they are improving their compliance with the Data Protection Act 1998.
 

App Industry meets with the European Commission and National Regulators

This post was written by Cynthia O'Donoghue.

The value of the EU app sector has grown exponentially over the past few years, with a recent EU report estimating that spending on apps in the EU rose to €6.1 billion in 2013, and forecasts that by 2018, the industry could be worth €63 billion per year to the EU economy. However, complaints from consumers across Europe have caused concerns to national regulators about the selling techniques employed in the app industry.

Last month, the European Commission held meetings with representatives of the app industry and national regulators from several European countries. The aim of these meetings was to discuss the concerns of national regulators and draw up a plan to implement solutions within a clear timeframe.

The most pressing concern of the regulators is games being advertised as “free” when charges for gameplay later become apparent. In the Common position of national authorities within the Consumer Protection Cooperation (CPC) network (the Common position), national authorities state that if the term “free” has the potential to mislead users, it could be incompatible with the law. The term should only be used to describe games that are “free” in their entirety, or games that are marketed with accurate details of additional costs given upfront. In addition, regulators state that purchasers of apps should be provided with full information about payment arrangements, rather than being debited by default, as has sometimes been the case.

Also discussed at the meeting was the interaction of the app industry with children. In the Common position, national authorities state that games that either target children or can reasonably be foreseen to appeal to children should not contain direct encouragements to children to buy items. Developers should therefore pay close attention to the way their apps present such messages.

The encouragement and development of the app industry is of great importance to the EU economy, and its contribution to commerce is set to grow. At this stage, it is important that regulators and lawmakers strike the right balance between allowing developers the autonomy to create products, and appropriate regulation that protects the interests of consumers. The approach of the EU Commission in holding collaborative talks with the industry is a promising sign which indicates that a balanced outcome is achievable.

Hong Kong's Office of the Privacy Commissioner for Personal Data releases Best Practice Guide on Privacy Management Programmes

This post was written by Cynthia O'Donoghue.

Last month, Hong Kong’s Office of the Privacy Commissioner for Personal Data (OPCP) released a Best Practice Guide on Privacy Management Programmes (PMP) (the Guide). Striking a similar chord to the UK Information Commissioner’s Office in its recently released code of practice on conducting Privacy Impact Assessments, the OPCP notes that although the Personal Data (Privacy) Ordinance (the Ordinance) contains no requirement for PMPs, organisations that do adopt them are likely to benefit from increased levels of trust among their customers and employees, as well as from being able to demonstrate compliance with the Ordinance.

The Guide does not provide a “one-size-fits-all” solution, and organisations will need to consider their size and nature when developing a PMP. To this end, the Guide addresses both the fundamental components of a PMP and their ongoing assessment and revision.

The Guide notes that implementation of PMPs will require organisations to consider their policies, staff training, and the processes that are followed when contracting with third parties. The Guide states that the key components of a PMP are:

  1. Organisational commitment: this includes buy-in from top management, designating a member of staff to manage the PMP (this could be a full-time member in a large organisation, or a business owner in a small organisation), and establishing reporting lines.
  2. Program controls: an inventory of personal data held by the organisation should be made. Internal policies should also be put in place to address obligations under the Ordinance, with risk-assessment tools to allow new or altered projects to be assessed.

The Guide is a welcome development for Hong Kong organisations, which, by following its terms, will be able to demonstrate their compliance with the Ordinance. However, organisations should also note that the Guide indicates that the OPCP expects organisations to take positive steps towards fulfilling their obligations.

Indian Centre for Internet and Society issues call for comments on draft Privacy (Protection) Bill

This post was written by Cynthia O'Donoghue.

A nonprofit research organisation, the Indian Centre for Internet and Society (ICIS), has issued an open call for comments on its draft Privacy (Protection) Bill 2013 (the Bill). Consultations on the Bill started in April 2013, with a series of seven roundtable talks being held in partnership with the Federation of Indian Chambers of Commerce and Industry, and the Data Security Council of India.

ICIS states that it has “the intention of submitting the Bill to the Department of Personnel and Training as a citizen’s version of privacy legislation for India.” India’s current data protection regime, a product of piecemeal development, imposes very limited duties on organisations that collect, process and store data. No national regulator is in place to oversee the data protection regime.

The draft Bill contains provisions for a new Data Protection Authority of India, to be established with wide powers of investigation, review and enforcement. Penalties detailed by the Bill for infringement include terms of imprisonment and fines. In addition, the Bill proposes the introduction of comprehensive regulation, including:

  • Regulation of the collection, processing and storage of data by any person
  • Regulation of the use of voice and data interception by authorities
  • Regulation of the manner in which forms of surveillance not amounting to interceptions of communications may be conducted

If implemented, the draft Bill would be a considerable step forward for the privacy landscape in India, which has so far lacked the impetus provided by international instruments such as the European Data Protection Directive (95/46/EC).
 

The Inevitable - EMV Payments On a Fast Track to Becoming a New Standard in the United States

This post was written by Timothy J. Nagle and Angela Angelovska-Wilson.

Last week, congressional leaders in Washington continued their focus on the safety of the U.S. payments system in the aftermath of the massive retailer breaches at Target, Neiman Marcus and others. The House Committee on Financial Services held its hearing March 5, while the House Committee on Science, Space and Technology held its hearing March 6. The message coming out of the hearings was that the adoption of EMV cards, payment cards utilizing smart-chip technology instead of a magnetic stripe, is just one of many steps that need to be taken to secure the U.S. payments system.

Click here to read the full Client Alert.

CFTC Issues Recommended Best Practices for Security of Financial Information

This post was written by Timothy J. Nagle, Philip G. Lookadoo, and Christopher Fatherly.

Last week, the Staff of the Commodity Futures Trading Commission (CFTC) issued Staff Advisory 14-21 on the subject of “Gramm-Leach-Bliley Act Security Safeguards.” The CFTC had previously issued guidance in Part 160 of its regulations on “Privacy of Consumer Financial Information” (April 27, 2001), and Swap Dealers (SDs) and Major Swap Participants (MSPs) were added to those Part 160 regulatory obligations on July 22, 2011. The fact that the Commission “at this time…believes it important to outline recommended best practices for covered financial institutions” is noteworthy, especially in light of its overburdened staff, which has been focused on other issues such as electronic and automated trading. It demonstrates that cybersecurity is a significant issue in the financial industry and that the CFTC wants to be relevant to, and actively participate in, the discussion of cybersecurity.

As noted in Staff Advisory 14-21, its provisions reflect similar guidance from the Federal Financial Institutions Examination Council and the Federal Trade Commission, as well as draft guidance from the Securities and Exchange Commission. The “recommended best practices” include:

  • Maintaining a written information security and privacy program
  • Designating an employee with management responsibility for security and privacy who “is part of or reports directly to senior management or the Board of Directors”
  • Identifying risks and implementing safeguards to address those risks
  • Training staff
  • Regularly testing controls such as access management, use of encryption, and incident detection and response
  • Retaining an independent party to evaluate the controls on a regular basis
  • Re-evaluating the program at regular intervals

Three additional practices that reflect increasing emphasis by other regulators are supervising third-party service providers through security-related contract requirements, establishing a breach response process, and providing an annual assessment of the program to the Board of Directors.

The CFTC’s Division of Swap Dealer and Intermediary Oversight, which issued Staff Advisory 14-21, will “enhance its audit and review standards as it continues to focus more resources on GLBA Title V compliance.” This echoes recent statements from the Financial Industry Regulatory Authority in a January 2014 Targeted Examination Letter on Cybersecurity, and the SEC’s announcement that it will conduct a round table later this month on cybersecurity issues.

The “covered entities” subject to Staff Advisory 14-21 (futures commission merchants (FCMs), commodity trading advisors (CTAs), commodity pool operators (CPOs), introducing brokers (IBs), retail foreign exchange dealers, SDs and MSPs) are not consumer-facing, but they are part of the financial system. For banks and other large financial institutions, Staff Advisory 14-21 will support the goal of maintaining comprehensive, consistent security and privacy standards throughout the enterprise. Other firms, such as broker-dealers, asset managers and insurance companies, which have not been subject to the same level of regulation on security and privacy matters as national banks, should see this as one more indication that all financial institutions will eventually be expected, through regulation or industry practice, to implement and maintain the essential elements of an information security program. In this respect, it would not be surprising to see the SEC re-issue the draft Regulation S-P for public comment and implementation.

For others in the commodities world that have not yet focused on the security of personal (or proprietary) information, Staff Advisory 14-21 adds compliance obligations on top of their other new regulatory responsibilities to the CFTC. In the energy, agriculture and metals commodity trading industries, for example, major players have only recently begun to register as SDs, MSPs or other registered entities, and more registered entities are expected in the near future. In addition to the CFTC’s recordkeeping, reporting and other business conduct obligations these entities have only recently begun to embrace, they can now add compliance obligations related to the security of personal (or proprietary) information under Part 160 and the recommended best practices in Staff Advisory 14-21.

While the focus of Staff Advisory 14-21 is on personal information, the recommended practices apply equally to sensitive proprietary information that any financial or commodities firm would want to protect. In the past, a firm may have considered it merely prudent to implement some level of information security and privacy practices. Now, firms can expect to be subject to government audit in those areas.

Data Breach Class Action Settlement Gets Final Approval - Payment to Be Made to Class Members Who Did Not Experience ID Theft

This post was written by Mark Melodia, Steven Boranian and Frederick Lah.  

Last week, a judge for the Southern District of Florida gave final approval to a settlement between health insurance provider AvMed and plaintiffs in a class action stemming from a 2009 data breach of 1.2 million sensitive records from unencrypted laptops. The settlement requires AvMed to implement increased security measures, such as mandatory security awareness training and encryption protocols on company laptops. More notably, AvMed agreed to create a $3 million settlement fund from which members can make claims for $10 for each year that they bought insurance, subject to a $30 cap (class members who experienced identity theft are eligible to make additional claims to recover their monetary losses).

According to Plaintiffs’ Unopposed Motion and Memorandum in Support of Preliminary Approval of Class Action Settlement (“Motion”), this payment to class members “represents reimbursements for data security that they paid for but allegedly did not receive. The true measure of this recovery comes from comparing the actual, per-member cost of providing the missing security measures—e.g., what AvMed would have paid to provide encryption and password protection to laptop computers containing Personal Sensitive Information, and to otherwise comply with HIPAA’s security regulations—against what Class members stand to receive through the Settlement” (p. 16).

It’s been reported that this settlement marks the first time that a data breach class action settlement will offer monetary reimbursement to class members who did not experience identity theft. In defending the fairness, reasonableness, and adequacy of the settlement, plaintiffs noted in the Motion, “[b]y making cash payments available to members of both Classes—i.e., up to $30 to members of the Premium Overpayment Settlement Class, and identity theft reimbursements to members of the Identity Theft Settlement Class members—the instant Settlement exceeds the benefits conferred by other data breach settlements that have received final approval from federal district courts throughout the country” (p. 16).

The finalization of this settlement marks the end of a hard-fought battle between the parties. After AvMed obtained a dismissal with prejudice in the District Court based on plaintiffs’ failure to allege a cognizable injury, the dismissal was appealed to the Eleventh Circuit. Resnick v. AvMed, Inc., 693 F.3d 1317 (11th Cir. 2012). There, the Eleventh Circuit found that plaintiffs had established a plausible causal connection between the 2009 data breach and their instances of identity theft. The court also determined that plaintiffs’ allegations — that part of the insurance premiums plaintiffs paid to defendant were supposed to fund the cost of data security, and that defendant’s failure to implement that security barred it from retaining the full amounts received — were sufficient to state a claim for unjust enrichment. On remand, AvMed answered plaintiffs’ complaint and filed a motion to strike class allegations, which was denied by the District Court as premature.

We’ve been particularly interested in this case for quite some time. Last year, we blogged about the unique nature of the settlement after the agreement was reached. Class action plaintiffs’ lawyers in the data breach context have often had their cases dismissed on the basis that they are unable to prove the class suffered any sort of injury or loss. With the AvMed settlement now final, we expect plaintiffs’ lawyers to try to leverage similar payment terms into their own data breach class action settlements. As we previously noted, class action settlements are only binding upon the parties that enter into them, but their terms can serve as models for future proposed settlements.

Court of Appeal Confirms a Person's Name Constitutes Personal Data

This post was written by Cynthia O'Donoghue.

A judgment from the Court of Appeal on 7 February 2014 in the case of Edem v The Information Commissioner & Financial Services Authority [2014] EWCA Civ 92 held that “a name is personal data unless it is so common that without further information, such as its use in a work context, a person would remain unidentifiable despite its disclosure” (see paragraph 20 of the judgment).

The definition of ‘personal data’ within the meaning of the Data Protection Act 1998 (‘DPA’) is often debated. Section 1(1) of the DPA defines ‘personal data’ as “Data which relates to a living individual who can be identified from those data, or from those data and other data which is in the possession of or is likely to come into the possession of the data controller and includes any expression of opinion about the individual and any indication of the intentions of the data controller or any other person in respect of the individual.”

The Court of Appeal previously interpreted the application of this definition in the case of Durant v Financial Services Authority [2003] EWCA Civ 1746, [2011] 1 Info LR 1 (Durant). Paragraph 28 of Auld LJ’s judgment provided two notions to be used to determine whether information is personal data: first, whether the information is biographical in a significant sense; and second, whether the information has the data subject as its focus, such that its disclosure would affect that data subject’s fundamental right to privacy derived from the European Data Protection Directive 95/46/EC.

The most recent case before the Court of Appeal has now elaborated further on this interpretation, specifically examining whether an individual’s name is automatically deemed personal data. Moses LJ, Beatson LJ and Underhill LJ considered whether a person’s name is automatically personal data simply because it identifies and relates to that individual, or whether the information must arise in some form of context that reveals more about the individual than merely his or her name.

The facts of the case are very similar to Durant, involving an application by Mr Edem under the Freedom of Information Act 2000 (‘FOIA’) for the disclosure of information relating to complaints he had made to the Financial Services Authority (‘FSA’) regarding its regulation of a company. Specifically, Mr Edem sought information about the complaints and the names of the three individuals within the FSA who handled them. The Information Commissioner declined to order the disclosure of the names in response to Mr Edem’s request, relying on section 40(2) of the FOIA, which exempts from disclosure information that is personal data. On appeal, the First Tier Tribunal decided that the names of the officials did not constitute personal data and ordered that they be disclosed. However, the Upper Tribunal (Administrative Appeals Chamber) reversed this decision, preventing the disclosure of the information, leading Mr Edem to appeal to the Court of Appeal.

The Court of Appeal sought to distinguish this case from that of Durant, finding that Auld LJ’s two notions outlined above were not applicable to the facts of this case, where the issue was whether information comprising a person’s name could be automatically considered personal data, rather than the issue of whether information which did not obviously relate to or specifically name an individual could amount to personal data within the meaning of the DPA.

In reaching its conclusion and dismissing the application of Auld LJ’s reasoning in Durant, the Court of Appeal reiterated guidance from the Information Commissioner’s Office, which clarifies: “It is important to remember that it is not always necessary to consider ‘biographical significance’ to determine whether data is personal data. In many cases data may be personal data simply because its content is such that it is ‘obviously about’ an individual. Alternatively, data may be personal data because it is clearly ‘linked’ to an individual because it is about his activities and is processed for the purpose of determining or influencing the way in which that person is treated. You need to consider biographical significance, only where information is not obviously about an individual or clearly linked to him.”

Applying this guidance to the facts of the case, the Court of Appeal declared that the names of the individuals did amount to personal data, upholding the decision of the Upper Tribunal (Administrative Appeals Chamber) to prevent the disclosure of such information on the grounds of section 40(2) of the FOIA.

This case is significant because it adds weight to the argument that the Durant test for determining whether information is personal data within the meaning of the DPA is not definitive and is limited to certain factual scenarios. Furthermore, it reconfirms that the Durant test should not be applied in isolation, without consideration of further tests that have proliferated, such as those arising from the case of Kelway v The Upper Tribunal, Northumbria Police and the Information Commissioner [2013] EWHC 2575 (Admin) (see our previous blog).

Mexican Data Protection Authority Intends to Increase Investigations and Enforcement for 2014

This post was written by Cynthia O'Donoghue.

On February 4, 2014, the Mexican data protection authority, the Institute of Access to Information and Data Protection (IFAI), issued a statement to Bloomberg BNA announcing that it anticipates levying an abundance of fines in 2014, following an unprecedented increase in violations of Mexico’s Federal Law on the Protection of Personal Data in the Possession of Private Parties (the Federal Law).

Article 64 of the Federal Law empowers the IFAI to issue fines of 100 to 320,000 times the Mexico City daily minimum wage (approximately US$480 to US$1,534,275) for violations of the Federal Law. In addition to monetary penalties, imprisonment of three months to three years may be imposed on data controllers for any security breach of databases under their control. These sanctions can be doubled for violations concerning sensitive data.
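
To put those numbers in context, the sketch below reconstructs the fine range from the article’s own dollar conversions. The implied daily-wage figure is an assumption for illustration only; the statutory amounts are set in pesos, not dollars.

```typescript
// Hypothetical arithmetic only: the daily-wage figure is inferred from the
// article's US$480 lower bound (US$480 / 100 = US$4.80), not an official rate.
const impliedDailyWageUSD = 480 / 100;

const minFineUSD = 100 * impliedDailyWageUSD;     // US$480
const maxFineUSD = 320_000 * impliedDailyWageUSD; // US$1,536,000 (≈ the US$1,534,275 quoted)

// Violations involving sensitive data can double these amounts.
console.log({ minFineUSD, maxFineUSD, sensitiveMaxUSD: 2 * maxFineUSD });
```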

IFAI President Gerardo Laveaga stated that a number of new investigations have been opened, following a 20% increase in the number of data-protection complaints from individuals between 2012 and 2013. The IFAI issued fines totalling 50 million pesos ($3.7 million) in 2013, a figure that is set to increase markedly in 2014. This included the $1 million fine levied against the bank Banamex and the $500,000 fine imposed on cellular company Telcel. The IFAI also reported that it intends to contest all appeals of its 2013 fines, which would swell its coffers in 2014 even further. Organisations should take heed; evidently the IFAI is increasingly willing to show its teeth to enforce compliance with the Federal Law.

First European Cookie Fine Issued By Spanish Data Protection Authority

This post was written by Cynthia O'Donoghue.

The Spanish data protection authority, the AEPD, has issued the first European cookie fine for the violation of Article 22.2 of Spain’s Information Society Services and Electronic Communications Law 34/2002 (Spanish E-Commerce Act), as amended by Royal Decree Law 13/2012 which implements the e-Privacy Directive (Directive 2002/58).

On 29 April 2013, the AEPD issued guidelines on the use of cookies (Cookies Guide), clarifying how to interpret Article 22.2 of the Spanish E-Commerce Act. The guidance recommends that information on the use of cookies be sufficiently visible and provided in one of the following ways:

  1. In the header or footer of the website
  2. Through the website Terms & Conditions
  3. Through a banner which offers information in a layered approach
    • First layer: highlighting essential information about the use of cookies, including the relevant purpose, also detailing the existence of any third-party cookies
    • Second layer: link to a cookies policy with more detailed information on cookie use, specifically the definition and function of each cookie, information about the types of cookies used, information about how to delete cookies and identification of all parties who place cookies

The Cookies Guide also clarifies the way in which consent to cookies must be obtained. This includes:

  • Acceptance of website terms and conditions or privacy policy
  • Configuration of browser functions
  • Feature-led consent, given when a user makes use of a new function offered by the website
  • Download of specific website content
  • Configuration of website functions

Implied consent can be deemed only from a user’s specific action, as opposed to inactivity, such as using a scroll bar where information about cookies is highly visible, or otherwise clicking on website content.
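
To make the layered approach concrete, here is a minimal sketch of a notice that presents first-layer information, links to a second-layer policy, and records consent only on a specific user action. The element names, wording and helper functions are hypothetical, not taken from the AEPD guide.

```typescript
// Minimal layered cookie notice: a first-layer banner with essential
// information and a link to the detailed second-layer policy.
// All names and copy here are illustrative only.

function showCookieBanner(): void {
  const banner = document.createElement("div");
  banner.innerHTML =
    'We use our own and third-party cookies for analytics and advertising. ' +
    '<a href="/cookie-policy">Cookie policy</a> ' + // second layer: full details
    '<button id="accept-cookies">Accept</button>';
  document.body.appendChild(banner);

  // Consent is recorded only on a specific user action (a click),
  // never inferred from inactivity.
  document.getElementById("accept-cookies")?.addEventListener("click", () => {
    document.cookie = "cookie_consent=accepted; max-age=31536000; path=/";
    banner.remove();
    loadNonEssentialCookies();
  });
}

function loadNonEssentialCookies(): void {
  // Placeholder: analytics/advertising tags would be initialised here,
  // only after consent has been given.
}

if (!document.cookie.includes("cookie_consent=accepted")) {
  showCookieBanner();
}
```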

In July 2013, four months after issuing the Cookies Guide, the AEPD began investigations into Navas Joyeros S.L and Luxury Experience S.L and their use of cookies on their promotional websites. Article 38.4(g) of the Spanish E-Commerce Act empowered the AEPD to impose monetary penalties totalling €3,500 against the two companies. In Resolution No. R/02990/2013, the AEPD declared the companies had failed to provide sufficiently clear and comprehensive information about their use of cookies, in violation of Article 22.2 of the Spanish E-Commerce Act. Specifically, the information on cookie use was not provided in the layered manner required by the AEPD’s Cookies Guide. Furthermore, the notices neglected to detail the cookies used or the types of cookies set, merely specifying broad purposes for the use of cookies, omitting to mention which cookies were controlled by the website and which by third parties, and failing to provide website users with information about how to deactivate cookies or revoke consent to their use.

The AEPD’s landmark decision has resulted in the first EU cookie fine, and could well set a precedent for further penalties against website operators with slack cookie practices.

IAB Future of the Cookie Working Group Publishes White Paper on Privacy and Tracking In a Post-Cookie World

This post was written by Cynthia O'Donoghue.

The ‘Future of the Cookie Working Group’, established by the Interactive Advertising Bureau (IAB) in 2012, has published a white paper titled ‘Privacy and Tracking in a Post-Cookie World’, which addresses the limitations of the traditional cookie.

The Future of the Cookie Working Group takes issue with the fact that the cookie often forms the crux of many privacy-related debates. Furthermore, the cookie is increasingly regarded as a hindrance to Internet browsing, with cookie-clogging leading to frustratingly slow page-load times. Perhaps the most significant problem is that the cookie is becoming an outdated tool in our advancing digital environment. Steve Sullivan, Vice President, Advertising Technology, IAB, commented, ‘The cookie has been central to the success of Internet advertising. However, the industry has evolved beyond the cookie’s designed capability.’ Anna Bager, Vice President and General Manager, Mobile Marketing Center of Excellence, IAB, added, ‘With the proliferation of Internet-connected devices, cookies are an increasingly unreliable tool for tracking users. Since they cannot be shared across devices, cookies lack the ability to provide users with persistent privacy and preferences as they jump from smartphone to laptop to tablet. At the same time, it leaves publishers unable to seamlessly communicate customized content to their audiences on a variety of screens. This report is the first step in correcting the problem and eliminating one of the biggest limitations impacting mobile advertising today.’

As a way forward, the white paper proposes five solutions that could potentially fill the role of the traditional cookie (an illustrative sketch of the first, the ‘Device’ approach, follows the list):

  • Device – Use of statistical algorithms to infer a user’s ID from information provided by the connected device, browser app or operating system.
  • Client – A user’s browser app or operating system tracks user information and manages preferences, then passes the information along to third parties.
  • Network – Third-party servers positioned between the user’s device and publishers’ servers set IDs that are used to track user information and manage preferences.
  • Server – The current approach using cookies to track user information and manage preferences.
  • Cloud – Tracks user information and manages preferences via a centralized service that all parties agree to work with.
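
To make the ‘Device’ approach more concrete, here is a minimal sketch of inferring a probabilistic ID from signals a browser exposes. The signal set and hashing scheme are assumptions for illustration, not the white paper’s specification.

```typescript
// Infer a probabilistic device ID by hashing browser-exposed signals.
// Purely illustrative: real systems weigh many more signals statistically.

async function inferDeviceId(): Promise<string> {
  const signals = [
    navigator.userAgent,
    navigator.language,
    String(screen.width),
    String(screen.height),
    String(new Date().getTimezoneOffset()),
  ].join("|");

  // Hash the combined signals so raw attributes are not stored directly.
  const bytes = new TextEncoder().encode(signals);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Two devices sharing every signal collide, so the result is probabilistic
// rather than a guaranteed unique identifier.
inferDeviceId().then((id) => console.log("inferred device ID:", id));
```

Unlike a cookie, such an ID needs no client-side storage, which is precisely why the white paper weighs each approach against consumer transparency and control.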

The white paper then analyses the feasibility of each of these solutions against IAB Guiding Principles, which identify the core needs of publishers, consumers and industry as set out below:

  • Publishers
    • A single privacy dashboard
    • Comprehensive privacy controls
    • Significantly fewer third-party pixels
    • Improved user tracking and targeting
    • Reduced cost for privacy compliance
    • Ability to detect non-compliant third parties
    • Open competition
    • Minimal deployment overhead
  • Consumers
    • A single privacy dashboard
    • A universal privacy view
    • Comprehensive privacy controls
    • Persistent and universal consumer preferences
    • Ability to detect non-compliant publishers and third parties
    • Free online service
  • Industry
    • Decreased ramp-up time and cookie churn
    • Lower operating cost
    • Better cross-device tracking
    • Better consumer transparency and control
    • High-integrity frequency capping
    • Less redundant data collection and transfer
    • Reduced regulatory threats
    • Clear value to the consumer
    • A non-proprietary solution not limited to one vendor
    • Minimal deployment overhead

The white paper concludes that the proposed alternatives would prove more effective in achieving the objectives of IAB’s Guiding Principles than the current cookie-based approach. Taking this into account, along with the fact that the proposed EU data protection regulation intends to impose more stringent rules on profiling, this could signal the demise of the traditional cookie as we know it.

ICO January Updates for 2014

This post was written by Cynthia O'Donoghue.

The ICO has had a busy January with some key updates to note for the start of 2014.

The ICO has produced a series of quarterly reports:

  • Spam text messages
    • The main three topics for the subject of unsolicited marketing text messages were found to be debt management, payday loans and payment protection insurance.
    • Enforcement activity for 2014 will focus on culprits in breach of the Privacy and Electronic Communications Regulations (PECR) 2003.
    • The ICO has lobbied the Department for Culture, Media and Sport to lower the threshold for imposing monetary penalties for violations of the Privacy and Electronic Communications Regulations, arguing that the requirement to demonstrate substantial damage or distress is too high and allows too many organisations sending unsolicited marketing texts to slip through the grasp of the ICO’s enforcement powers.
  • Marketing calls
    • The top three subjects of live sales calls covered payment protection insurance, accident claims and energy.
    • The level of complaints about cold calls is at its lowest since October 2012, totalling 4,996 in December 2013.
    • The ICO attributes the decline in complaints to fines issued in 2013, such as the £90,000 penalty against DM Design Bedrooms Ltd for making 2,000 unsolicited marketing calls in breach of PECR.
  • Cookies
    • The ICO received 53 complaints during the period of October-December 2013 about cookies via the ICO website.
    • The ICO is focusing enforcement on the most visited UK websites, which have taken no steps to raise awareness about cookies or sought to gain user consent.
    • The ICO has now written to a total of 265 organisations about compliance with cookie rules.

The ICO has experienced mixed fortunes with enforcement action. On January 24, the ICO successfully prosecuted six investigators at ICU Investigations Ltd for conspiring to unlawfully obtain personal data on behalf of the company’s clients; the company’s two managers were convicted of a criminal offence under section 55 of the Data Protection Act 1998, and the investigators were fined a total of £37,107. Furthermore, back in December 2013, the ICO issued a fine of £175,000 against payday loan company First Financial UK for sending millions of unauthorized marketing text messages. By contrast, the First Tier Tribunal (Information Rights) overturned a £300,000 monetary penalty notice issued against Tetrus Telecoms for sending unsolicited text messages to consumers. In spite of this, the ICO is keen to stress that it will appeal the decision to demonstrate that breaches of PECR will not be tolerated.

The ICO has also issued a report analysing the strengths and weaknesses of data-processing activities involving sensitive patient data in GP practices. The report makes a series of recommendations to improve existing practices, including ensuring all data breaches are reported, improving the way patients are informed about how their data will be used, raising awareness about the risks of using fax machines to process patient data, and managing large volumes of patients’ paper records more carefully. The report is likely to be particularly potent in light of NHS England’s plans for its care.data scheme, scheduled to launch this March, which will create a central database for all patient records in the UK.

Finally, the latest draft guidance from the ICO, ‘Data Protection and Journalism – a guide for media’, has also been issued for public consultation. The guide emerged from the findings of the Leveson Inquiry into the Culture, Practices and Ethics of the Press, published in November 2012, which highlighted the need for the ICO to issue good-practice guidelines to ensure appropriate standards of data processing are adhered to by the press and media. The deadline for public responses on the draft is 22 April 2014.

NHS Advocates Selling Confidential Patient Data For Secondary Purposes

This post was written by Cynthia O'Donoghue.

Latest plans announced by the UK’s Health and Social Care Information Centre (HSCIC) have resulted in a flurry of media controversy condemning NHS England (NHS) for advocating the sale of patient data to third parties for profitable gain.

HSCIC, together with the NHS, has pioneered a new scheme known as ‘care.data’. From March 2014, patient data from GP practices will be extracted, anonymised and aggregated in a central database for sale to third parties such as drug and insurance companies. Such data will include information about every hospital admission since 1980, family history, vaccination records, medical diagnoses, referrals, health metrics such as BMI and blood pressure, as well as all NHS prescriptions. This information will be combined with other confidential patient data, such as date of birth, postcode, gender and NHS number, to allow the NHS to assess patient care. The NHS then intends to sell such pseudonymised information to any organisation that can meet certain questionable criteria for conditions of release. These include broad circumstances such as health intelligence, health improvement, audit, health service research and service planning. Critics have condemned such moves as highly controversial, considering that most patients believe any information shared with their GPs is given in the strictest confidence; yet this information will be shared automatically under the care.data scheme unless patients explicitly opt out.

The British Medical Association supports the initiative, which advocates the secondary use of patient data. Interestingly, the scheme has also received approval from the ICO, on the grounds that the Health and Social Care Act 2012 permits the NHS to extract patient data under the care.data scheme, providing a lawful basis for processing for the purposes of the Data Protection Act 1998. The NHS insists that the data will only be used for the benefit of the health and care system, to improve the quality of care delivered to patients.

In spite of these reassurances, privacy critics fear that the scheme will result in patients losing track of their data, with no information about whom their information has been shared with, and for what purposes it may be used. Mark Davies, Public Assurance Director of the HSCIC, has also raised concerns by commenting that there is a small risk that patients could be re-identified, given the potential for third parties to match the pseudonymised patient data against their own records.
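
The re-identification risk Davies describes is essentially a linkage attack: matching pseudonymised records to an identified dataset on shared quasi-identifiers. The sketch below illustrates the idea with hypothetical fields and data; none of it reflects the actual care.data schema.

```typescript
// Hypothetical linkage attack: joining pseudonymised records to an
// identified auxiliary dataset on quasi-identifiers (DOB, postcode, gender).

interface ReleasedRecord {
  pseudonym: string; // stands in for the NHS number
  dateOfBirth: string;
  postcode: string;
  gender: string;
  diagnosis: string;
}

interface AuxiliaryRecord {
  name: string; // an identified record a third party already holds
  dateOfBirth: string;
  postcode: string;
  gender: string;
}

const quasiId = (r: { dateOfBirth: string; postcode: string; gender: string }) =>
  `${r.dateOfBirth}|${r.postcode}|${r.gender}`;

function linkRecords(
  released: ReleasedRecord[],
  auxiliary: AuxiliaryRecord[],
): Array<{ name: string; diagnosis: string }> {
  const byQuasiId = new Map<string, AuxiliaryRecord>();
  for (const a of auxiliary) byQuasiId.set(quasiId(a), a);

  const reidentified: Array<{ name: string; diagnosis: string }> = [];
  for (const r of released) {
    const match = byQuasiId.get(quasiId(r));
    if (match) {
      // Quasi-identifiers matched: the 'anonymous' record now carries a name.
      reidentified.push({ name: match.name, diagnosis: r.diagnosis });
    }
  }
  return reidentified;
}
```

A combination as coarse as full date of birth, postcode and gender is often enough to single out an individual, which is why critics treat pseudonymisation alone as a weak safeguard.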

To ease anxiety, the NHS is in the process of sending leaflets titled ‘Better Information Means Better Care’ to 26 million households as part of an awareness campaign about the scheme. Critics have similarly condemned this campaign for failing to clearly explain the privacy risks to patients and for inadequately highlighting the right to opt out. Ultimately, the scheme opens up a chasm of uncertainty about the confidentiality of patient data entrusted to the NHS in its capacity as a data controller.

EU Research Group Condemns EU Regulation for Restricting Growth in Life Sciences Sector

This post was written by Cynthia O'Donoghue.

The Wellcome Trust has collaborated with a number of leading medical research organisations to lobby the European Parliament and the Council of Ministers against amendments to the proposed EU Regulation, which could severely restrict the future growth of the life sciences sector in the EU.

The lobby group comprises the European Organisation for Research and Treatment of Cancer, the Federation of European Academies of Medicine, France’s Institut Pasteur, Sweden’s Vetenskapsrådet, Germany’s VolkswagenStiftung, and ZonMw, the Netherlands Organisation for Health Research and Development. The group intends to urge the European Parliament to reject the amendments to the proposed General Data Protection Regulation (the Regulation) voted through by the European Parliamentary Committee on Civil Liberties, Justice and Home Affairs (LIBE) in October 2013 (see our previous blog).

The original draft proposal for the Regulation, first released in January 2012, included a requirement for specific and explicit consent for the use and storage of personal data for secondary purposes, but provided an exemption from this requirement for research purposes, subject to stringent safeguards. As a result, the initial draft was considered “measured and sensible and struck the right balance between protecting the individuals and making possible huge benefits for all our health,” commented Wellcome Trust Director Jeremy Farrar in a statement to Bloomberg BNA on 29 January.

However, LIBE’s amendments, if approved by the European Parliament and Council of Ministers, would remove the exemption from the consent requirement, making the use of pseudonymised health data in research without specific consent illegal at worst and unworkable at best. In effect, this could make it difficult, if not impossible, for research bodies such as the Wellcome Trust to use pseudonymised health data for secondary research purposes without specific consent. This could impose severe restrictions on the biotechnology industry, stifling growth in clinical trials and scientific research that benefit the health of European citizens. The coalition aims to convince MEPs that health data is a vital resource for scientific breakthroughs, which will become impossible if the current draft of the Regulation is not challenged further.

One key argument of the lobby group is that the current position under EU Directive 95/46/EC already offers a sufficiently robust governance framework, ensuring an individual’s data is used for research only in the public interest and within the constraints of strict confidentiality measures. Furthermore, in reality the majority of participants in research studies already voluntarily provide their consent, rendering the requirement for specific consent under the Regulation superfluous. Reform of EU data protection law as currently drafted therefore represents the worst-case scenario for bodies such as the Wellcome Trust.

Reinforcing the coalition’s campaign, the UK advocacy group Fresh Start Project released a report titled ‘EU Impact on Life Sciences’, which similarly deplores the draft EU Regulation as exemplifying a biotech-hostile regulatory framework. It condemns the Regulation for leaving member states little room for manoeuvre to determine their own data protection policies. The report argues that if the amendments to the Regulation are approved, they will constrain growth in health and scientific research, creating a ‘global slow lane for biotechnology’ and undermining Europe as a hub of biotechnology. This could effectively force Europe to take a back seat in a biotechnology revolution, reducing the chances of securing future investment for economic growth. Furthermore, it will put at risk significant European investments already in place, including ‘The European Prospective Investigation Into Cancer and Nutrition Study’ involving more than half a million European citizens, not to mention plans for the €56 million European Medical Information Framework project, due to link together existing health data from sources across Europe to provide a central bank of information available to researchers for vital studies.

It remains to be seen whether the European Parliament will listen to the concerns of the lobby group and examine the provisions of the draft Regulation in more detail. The fear is that such lobbying will go unheard, in light of recent comments from Vice President Viviane Reding indicating that the European Parliament is keen to adopt the current draft of the Regulation as approved by LIBE, in order to push forward at full speed with the much-anticipated EU data protection reform in 2014.

Cyber-Security in Corporate Finance

This post was written by Cynthia O'Donoghue and James Wilkinson.

The ICAEW has partnered with a task force, including the Law Society, the London Stock Exchange, the Takeover Panel and the Confederation of British Industry, to publish a guide on ‘Cyber-Security in Corporate Finance’ for 2014.

Please click here to read the issued Client Alert.

The Final NIST Cybersecurity Framework Document Is Out: Now What?

This post was written by Timothy J. Nagle.

The year-long process – led by the National Institute of Standards and Technology (NIST) and the Department of Homeland Security (DHS) – of conducting outreach to the private sector, issuing drafts, receiving and evaluating input, and facilitating interagency coordination ended last week with the publication of the “Framework for Improving Critical Infrastructure Cybersecurity” (Version 1.0). It is a comprehensive document, initiated by Executive Order 13636 (“Improving Critical Infrastructure Cybersecurity”), that draws heavily from existing standards such as NIST 800-53, ISO 27001 and COBIT. The Framework represents a significant effort by NIST, sector-specific agencies, industry organizations and individual companies to provide an approach for managing cybersecurity risk “for those processes, information, and systems directly involved in the delivery of critical infrastructure services.” This quote from the “Framework Introduction” section states the purpose and scope of the document. What remains to be seen is the process for implementation, the extent and variety of adoption across sectors and industries, and its assertion as a “standard” outside of the critical infrastructure context.

Please click here to read the issued Client Alert.


Report Released on Coordinating Standards for Cloud Computing in Europe

This post was written by Cynthia O'Donoghue.

The European Commission has announced that the European Telecommunications Standards Institute (ETSI) has finally released a report titled ‘Cloud Standards Coordination’. This report marks an important step in realising the European Cloud Computing Strategy, ‘Unleashing the Potential of Cloud Computing in Europe’, first published in 2012.

The European Commission tasked ETSI to ‘cut through the jungle of standards’ that have proliferated for cloud computing services. Interestingly, the ETSI report states that cloud standardization is far more focused than originally anticipated. Furthermore, it has confirmed that while the Cloud Standards landscape is complex, ‘it is not chaotic and by no means a jungle.’

The report usefully sets out the following:

  • A definition of the key roles in cloud computing and illustrative diagram of the roles played by the Cloud Service Customer, Cloud Service Provider and Cloud Service Partner
  • An analysis and classification of more than 100 cloud computing use cases across three phases, including acquisition, operation and termination of cloud services
  • A list of more than 20 relevant organisations involved in cloud computing standardization, including, for example, the European Union Agency for Network and Information Security, the International Organisation for Standardization, and the National Institute of Standards and Technology
  • A map of core cloud computing documents including a selection of more than 150 resource documents, such as standards and specifications, as well as reports and white papers related to different activities to be undertaken by Cloud Service Customers and Cloud Service Providers over the cloud service life-cycle

The report also lists a series of recommendations, including:

  • Interoperability in the cloud requires standardization in APIs, data models and vocabularies
  • Existing security and privacy standards must keep pace with technological advances in the cloud industry, and must develop a common vocabulary
  • More standards must be developed in the area of service level agreements for cloud services, including an agreed set of terminology and service-level objectives
  • The legal environment for cloud computing is the key barrier to adoption. Given the global nature of the cloud and its potential to transcend international borders, there is a need for an international framework and governance, underpinned by agreed global standards.

Neelie Kroes, European Commissioner for the Digital Agenda, commented, “I am pleased that ETSI launched and steered the Clouds Standards Coordination (CSC) initiative in a fully transparent and open way for all stakeholders. Today’s announcement gives a lot of hope as our European Cloud Computing Strategy aims to create 2.5 million new European jobs and boost EU GDP by EUR 160 billion by 2020.”

Director General at ETSI, Luis Jorge Romero, added, “Cloud computing has gained momentum and credibility…in this perspective, standardization is seen as a strong enabler for both investors and customers, and can help increase security, ensure interoperability, data portability and reversibility.”

The report concludes by recommending that the European Commission should task ETSI to work on an updated version of the report in 12 to 18 months, considering that the rapid maturation of standardization will likely be significant over this period.

LIBE Publishes Amendments to Draft Proposal for a Network and Information Security Directive

This post was written by Cynthia O'Donoghue.

The Committee on Civil Liberties, Justice and Home Affairs (LIBE) of the European Parliament has published the latest draft of the proposed Network and Information Security (NIS) Directive (the ‘Directive’), following a series of amendments by MEPs. The proposal for the Directive was first published by the European Commission on 7 February 2013 as part of the EU Cyber Security Strategy (see our previous client alert). Recital 30(a) of the latest draft estimates that cybercrime causes losses of €290 billion each year, while Recital 31(b) states that 1.8% of EU citizens have been victims of identity theft and 12% have been victims of online fraud. These figures only reinforce the argument that the need for a coordinated EU security strategy is more pressing than ever.

However, the UK’s Information Commissioner’s Office previously criticised the proposed draft Directive (see our previous blog), specifically the provisions governing data breach notifications. The ICO was particularly reluctant to take on the responsibility of becoming the UK’s national competent authority (NCA) to handle a potential abundance of notifications concerning network information security incidents, unrelated to personal data, in which it has no expertise or experience. The UK Government Department for Business, Innovation & Skills was similarly critical following an impact assessment, which revealed the extortionate costs that will be disproportionately imposed on organisations to comply with the proposed Directive (see our previous blog).

The latest draft from the European Parliament includes a series of new amendments, in particular the following:

  • The obligation for each Member State to nominate an NCA responsible for coordinating NIS issues remains, with the additional obligation to establish a cooperation network to share information and ensure a harmonious implementation of the Directive
  • Each Member State must set up at least one Computer Emergency Response Team (CERT) to be responsible for handling incidents
  • Organisations must consider protection of their information systems as part of their ‘duty of care’
  • Organisations must implement appropriate levels of protection against reasonably identifiable threats and areas of vulnerability, the standard for which will differ depending on the nature of risk for each organisation
  • Member States will not be prevented from adopting provisions to ensure a higher level of security than that offered under the Directive, though maintaining measures that conflict or diverge from the minimum expectations enshrined in the Directive will not be permissible
  • Each Member State will be required to draft a national NIS strategy within 12 months of the adoption of the Directive
  • The threshold which triggers notification is to be defined in accordance with ENISA technical guidelines on reporting incidents for Directive 2009/140/EC
  • Each Member State will be obliged to notify the relevant competent authority of both incidents and threat information having an impact on the security of the core services they provide. Notification must be complete and made without undue delay.
  • Organisations will be obliged to report and disclose any incidents affecting them in their annual business reports
  • The penalties under Article 17 will only be imposed in circumstances of gross negligence or an organisation’s intentional failure to fulfil any obligations under the Directive

However, perhaps the most significant amendment to note is that which states implementation of the Directive will be postponed until after the anticipated reform of the EU data protection framework, upon adoption of the General Data Protection Regulation. Judging from recent comments from the European Commission, this could be a long time coming.

Full Speed Ahead for EU Data Protection Reform

This post was written by Cynthia O'Donoghue.

Coinciding with ‘Data Protection Day’ on 27 January 2014, the European Commission released a memorandum confirming the status of the anticipated reform of the European data protection framework. The promised overhaul of the 1995 EU Data Protection Directive (95/46/EC) has certainly not been as rapid as hoped, with publication of the memorandum marking exactly two years since reform was first proposed in January 2012. Over this time, we have monitored and reported on the frustrating to-ing and fro-ing in discussions among the EU’s 28 member states, which has led reform to be significantly delayed (see our previous blog).

Eager to push forward, Vice President Viviane Reding has commented, “Europe has the highest level of data protection in the world. With the EU data protection reform, Europe has the chance to make these rules a global gold standard. The European Parliament has led the way by voting overwhelmingly in favour of these rules. I wish to see full speed on data protection in 2014.”

To finalize the reform, the European Parliament and the EU Council must separately agree their positions on the draft proposals before negotiating the final outcome. The EU Council is expected to finalize its position by mid-2014, with the aim of striking a deal with the Parliament by the end of 2014. Reform certainly seems to be a priority for the new Greek Presidency, which convened a meeting in Athens on 22 January 2014 with the European Commission, the two European Parliament Rapporteurs, Jan-Philipp Albrecht and Dimitrios Droutsas, and Italy, the next EU Presidency, to agree a road map for swift data protection reform in 2014.

European Parliament spokeswoman Natalia Dasilva has commented that, as part of the April 2014 Plenary session, Parliament is expected to adopt the LIBE version of the draft regulation, which was voted through back in October 2013 (see our previous blog). Key points of the LIBE draft include:

  • The right to be forgotten
  • The right to data portability
  • Explicit consent requirements
  • Notification of serious data security breaches to data subjects and supervisory authorities within 24 hours
  • One continent, one law: single pan-European law for data protection to replace current inconsistent patchwork of national laws
  • One-stop-shop: organisations will only have to deal with one single supervisory authority
  • Data protection authorities to have strong enforcement powers, including ability to fine companies up to 2%-5% of their global annual turnover
  • Regime for notifications to supervisory authorities will be scrapped

While certain aspects of the LIBE draft remain controversial, if Parliament should proceed to adopt this version, it will avoid starting the whole procedure from scratch, and will eliminate the fear that details of the text could be reopened for discussion causing even further delay.

Google To Get Grilling Before UK Courts for Covert Safari Browser Tracking

This post was written by Cynthia O'Donoghue.

High Court Judge Mr Justice Michael Tugendhat has declared in Vidal-Hall & Ors v Google Inc. [2014] EWHC 13 (QB) (16 January 2014) that U.S. corporation Google Inc. (‘Google’) will face the scrutiny of the UK courts in a privacy claim brought by three British Internet users who have formed a group known as ‘Safari Users Against Google’s Secret Tracking’ (the ‘Claimants’).

Click here to read the issued client alert.

No Harm, Big Foul: With Spokeo, Ninth Circuit Finds Willful FCRA Violation Is Sufficient for Suit, With or Without Actual Injury

This post was written by Paul Bond and Christine Czuprynski.

On February 4, the Ninth Circuit ruled that a plaintiff need not show actual harm to have standing to sue under the Fair Credit Reporting Act (FCRA); a violation of the statutory right is a sufficient injury in fact to confer standing. The case, Robins v. Spokeo, Inc., may open the door for plaintiffs to get past the motion to dismiss stage in FCRA cases, as well as potentially in other cases that involve violations of statutory rights.

Here, the plaintiff alleged that Spokeo posted inaccurate credit-related information on its website in a willful violation of the FCRA. After his original complaint was dismissed by the Central District of California for failure to allege actual or imminent harm, the plaintiff amended his complaint to include an allegation that Spokeo’s posting of false information caused harm to his prospects for employment, and caused him anxiety and stress about his lack of employment prospects. The district court denied Spokeo’s motion to dismiss the amended complaint, but reconsidered its ruling after Spokeo moved to certify an interlocutory appeal. Upon reconsideration, the district court dismissed Robins’ complaint because Robins lacked Article III standing.

In reversing the district court’s ruling, the Ninth Circuit found that the FCRA cause of action does not require a showing of actual harm when the suit alleges a willful violation. The court agreed with a 2009 Sixth Circuit case, Beaudry v. Telecheck Services, Inc., that violations of statutory rights created by FCRA are the kind of de facto injuries that Congress can elevate to legally cognizable injuries. The court found that Robins satisfied the two constitutional limitations on congressional power to confer standing – he alleged that Spokeo violated his specific statutory rights, not just the rights of other people, and his personal interests in handling his credit information were individualized rather than collective. By surviving the motion to dismiss, Robins can now focus on the merits of his claim.

Many class actions arising from the loss or theft of financial information are pleaded under the FCRA, even where the status of the defendant as a consumer reporting agency is far from clear. In addition, though the Robins v. Spokeo case – and the Beaudry case it cites – are specific to the FCRA, the decision could potentially implicate standing arguments in cases alleging other statutory violations.

Google Exposed as in Breach of Dutch Data Protection Law

This post was written by Cynthia O'Donoghue.

The Dutch data protection authority, the College Bescherming Persoonsgegevens (CBP), has released a report following a seven-month investigation examining Google’s changes to its privacy policy. CBP’s report condemns Google for violating Dutch data protection law, the Wet bescherming persoonsgegevens (Wbp).

Controversially, in March 2012, Google changed its privacy policy (GPP2012) to allow the combination of data collected from all of its services (including Google Search, Google Chrome, Gmail, Google DoubleClick advertising, Google Analytics, Google Maps and YouTube, as well as cookies via third-party websites). Most significantly, CBP found that Google failed to demonstrate that adequate safeguards had been put in place to ensure the combination of data in this manner was limited to what was strictly necessary, and Google was therefore in breach of Article 8 Wbp.

CBP also found that, in breach of Articles 33 and 34 Wbp, GPP2012 failed to provide adequate information about Google’s identity as data controller, the types and extent of data collected, and the purposes for which Google needs to combine this data. GPP2012 states that the purpose of its data-processing activities is ‘the provision of the Google service’, a statement CBP found ambiguous and insufficiently specific. CBP held that, without any legal grounds for the processing, Google had no legitimate purpose to collect data in this manner and was therefore in breach of Article 7 Wbp.

Furthermore, specifically in relation to Google’s data-processing activities associated with tracking cookies, CBP declared Google in breach of Article 11.7a of the Dutch telecommunications act, the Telecommunicatiewet (Tw), which requires unambiguous consent. CBP found that Google failed to offer any prior options to consent to, reject, or later opt out of such data-processing activities. CBP reiterated that it was insufficient for Google to claim that acceptance of its general terms of service and privacy policy amounted to consent.

Jacob Kohnstamm, CBP Chairman, commented, “Google spins an invisible web of our personal data, without consent. That is forbidden by law.” In response, Google commented, “Our privacy policy respects European law and allows us to create simpler, more effective services…We have engaged fully with the Dutch DPA throughout this process and will continue to do so going forward.”

California Senate Passes SB 383 Expanding The Song-Beverly Credit Card Act to Online Transactions of Downloadable Content

This post was written by Lisa Kim and Jasmine Horton.

On January 30, 2014, the California Senate approved SB 383, which amends the Song-Beverly Credit Card Act (Song-Beverly Act) to apply to online credit card transactions of electronic downloadable content (e.g., music, videos). Originally crafted to apply to all online credit card transactions, the bill has been resurrected in pared-down form from its death in the Senate last May.

The revised SB 383 allows online merchants to collect personal information, such as zip codes and street addresses, in connection with online credit card transactions of electronic downloadable products, provided that the information is: (1) used only for fraud detection and prevention purposes, (2) destroyed after use, and (3) not shared unless obligated by law to do so. The bill also allows for the collection of additional personal information only if the consumer elects to provide it, and if s/he is informed of the purpose and intended use of the requested information, and has the ability to opt out before the online transaction is complete.

The Song-Beverly Act, as it currently stands, prohibits merchants from asking for any personal identification information, other than a form of personal identification (e.g., driver’s license), in order to complete a credit card transaction. While there are specific exceptions to this rule, such as allowing zip codes at gas pumps and personal information when it's incidental to the transaction (i.e., for shipping and delivery purposes), it is unclear whether such prohibitions apply to online transactions where there is no actual human interaction. Indeed, in February of last year, the California Supreme Court held that Song-Beverly did not apply to online transactions involving downloadable products. See Apple Inc. v. Superior Court, 56 Cal.4th 128 (2013).

SB 383 is in direct response to the Apple case, but given its narrow application to just downloadable products, it still does not answer the question of whether the Act applies to other online transactions, such as those where the product is mailed to the consumer or picked up in the store. Many trial courts are holding that it does not, and plaintiffs are challenging these decisions in the appellate courts. See e.g., Salmonson v. Apple, Cal. Court of Appeals, Case No. B253475 (appealing court decision that Song-Beverly Act did not apply to online transactions picked up at store); Ambers v. Buy.com, 9th Circuit Case No. 13-55953 (appealing court’s decision that Song-Beverly Act did not apply to online purchase shipped to customer). Arguably, since the Legislature had the opportunity in the original bill to apply the Act to all online transactions and yet chose not to do so, online merchants may have some additional legislative history to assist them in upholding these rulings.

We will be keeping our eyes on this bill as it moves through the Assembly. It will be interesting to see whether the pending appeals impact the development of this legislation, and vice versa.

China Drafts Rules on Administration of Personal Health Data

This post was written by Cynthia O'Donoghue and Zack Dong.

For the first time in China, draft measures for the administration of personal health data have been introduced by the National Health and Family Planning Commission (NHFPC). The NHFPC released the draft on November 19, 2013, and invited public comment on its website.

Under the measures, ‘personal health information’ is broadly defined to include:

  • Population information (including family composition and family planning)
  • Electronic health archives (health records)
  • Electronic medical records (generated by medical personnel)
  • Other information generated for management and administration of health institutions

The main requirements of the rules are:

  • Only approved health and family planning institutions may collect personal health information to the limited extent required to carry out their duties and responsibilities
  • Health data cannot be collected or used for commercial purposes
  • Individuals must be informed of the purpose for collection and their consent must be obtained
  • Amending, deleting, duplicating or disclosing health data without consent of the data subject is not permissible
  • Cross-border transfers are restricted
  • Health data shall not be used for purposes beyond those indicated at the time of collection without authorisation
  • Storing personal health information in any server located outside of China is prohibited

The rules will take immediate effect upon final publication. However, the measures fail to provide for any fine or sanction for violations; it therefore remains to be seen how effective they will be in practice.

LIBE Committee Report on U.S. Surveillance Activities Calls for an End to EU-U.S. Data Transfers

This post was written by Cynthia O'Donoghue.

Recently leaked, the LIBE Committee’s draft report on surveillance activities signals a dim future for the international free flow of data in the eyes of the European Parliament. The report despairs at recent revelations by whistle-blowers about the extent of U.S. mass surveillance activities, which have profoundly shaken trust between the EU and the United States. LIBE argues that the magnitude of blanket data collection goes beyond what would reasonably be expected to counter terrorism and other security threats. LIBE condemns the deficiencies of international treaties between the EU and the United States, and the inadequate checks and balances in place to protect the rights of EU citizens and their personal data.

LIBE proposes a controversially drastic solution to the vulnerabilities exposed by NSA surveillance activities in the United States. Contrary to the ideal of achieving the international free flow of data in our digital society anticipated by European data protection reform, LIBE proposes to shut down all trans-Atlantic data flows, effectively isolating Europe.

Critics have argued that the following measures proposed by LIBE are wholly disproportionate and unrealistic:

  • EU member states and U.S. authorities should prohibit blanket mass surveillance activities and the bulk processing of personal data.
  • EU and U.S. authorities should take appropriate steps to revise legislation and existing treaties to ensure that the rights of EU citizens are protected.
  • The United States should adopt the Council of Europe’s Convention 108 with regard to the automatic processing of personal data.
  • The Commission Decision 520/2000, which declared the adequacy of Safe Harbor as a mechanism for EU-U.S. transfers, should be suspended, and all transfers currently operating under this mechanism should stop immediately.
  • The adequacy of standard contractual clauses and BCRs in the context of mass surveillance should be reconsidered, and all transfers of data currently authorised under such mechanism should be halted.
  • The status of New Zealand and Canada as adequate protection countries for data transfers should be reassessed.
  • The adoption of the whole Data Protection Package for reform should be accelerated.
  • The establishment of the European Cloud Partnership must be fast-tracked.
  • A framework for the protection of whistle-blowers must be established.
  • An autonomous EU IT capability must be developed, including ENISA minimum security and privacy standards for IT networks.
  • The Commission must present an EU strategy for democratic governance of the Internet by January 2015.
  • EU member states should develop a coherent strategy with the UN, including support of the UN resolution on ‘the right to privacy in the digital age’.

The report concludes by highlighting a priority plan with the following action list:

  • Adopt the Data Protection Package for Reform in 2014
  • Conclude an EU-U.S. Umbrella Agreement ensuring proper redress mechanisms for EU citizens in the event of data transfers to the United States for law enforcement
  • Suspend Safe Harbor mechanism and all data transfers currently in operation
  • Suspend data flows authorised on the basis of contractual mechanism and Binding Corporate Rules
  • Develop a European Strategy for IT independence

Critics have condemned LIBE’s report as a step backwards, and suggest it should be considered as a call for action rather than a realistic solution.

A New "Target" on Their Backs: Target's Officers and Directors Face Derivative Action Arising Out of Data Breach

This post was written by David Z. Smith, Christine N. Czuprynski, Carolyn H. Rosenberg and J. Andrew Moss.

In the wake of its massive data breach, Target now faces a shareholder derivative lawsuit, filed January 29, 2014. The suit alleges that Target’s officers and directors breached their fiduciary duties to the company by ignoring warning signs that such a breach could occur, and by misleading affected consumers about the scope of the breach after it occurred. Target already faces dozens of consumer class actions filed by those affected by the breach, putative class actions filed by banks, federal and state law enforcement investigations, and congressional inquiries.

This derivative action alleges that Target’s officers and directors failed to comply with internal processes related to data security and “participated in the maintenance of inadequate cyber-security controls.” In addition, the suit alleges that Target was likely not in compliance with the Payment Card Industry (PCI) Data Security Standards for handling payment card information. The complaint goes on to allege that Target has been damaged by having to expend significant resources to investigate the breach, notify affected customers, provide credit monitoring to affected customers, cooperate with federal and state law enforcement agency investigations, and defend the multitude of class actions. The derivative action also alleges that Target has suffered significant reputational damage that has directly impacted the retailer’s revenue.

Target announced the breach December 18, 2013, stating that 40 million credit and debit card accounts may have been affected, and notified its customers via email shortly thereafter. Though PINs were not thought to have been part of the breach, on December 27, Target announced that encrypted PINs had also been accessed. In January, the retailer began offering credit monitoring to affected individuals. On January 10, 2014, Target announced that it uncovered a related breach of customer information – name, address, phone number, and/or email address – for up to 70 million customers. With that announcement, many news outlets are reporting that the total number of affected individuals is 110 million.

This lawsuit is part of a growing trend of derivative and securities fraud complaints based on alleged lack of internal controls over data security and privacy that have been filed against companies like Google, Heartland Payment, ChoicePoint, TJX, and Sony. We previously blogged about the Google derivative suit here.

The prevalence of these suits highlights the fact that insurance is an important protection that should not be overlooked. What follows are key Rules for the Road:

  • Derivative suits against directors and officers are typically covered under a D&O policy. However, other relevant policies to review may include cyberliability/data privacy, professional liability (E&O) coverage, and fiduciary liability (FLI) coverage (if the company’s employee benefit plans allow investment in the company’s own securities).
  • Notice should be given timely to all primary and excess insurers pursuant to the policy provisions.
  • D&O policies typically provide that the insureds must defend the claim, subject to obtaining the insurer’s consent to the defense arrangements. Accordingly, it is important to obtain the insurer’s consent to proposed defense arrangements; such consent should not be unreasonably withheld.
  • Potential exclusions or other terms and conditions impacting coverage should be analyzed. Some may apply, if at all, only to a portion of a claim. Others may not apply to defense costs, and others may not apply unless and until there is a “final adjudication” of the subject matter of the exclusion. It is important to carefully review the coverage defenses raised, and push back on the carriers’ coverage challenges.
  • If settlement is being considered, review the policies’ provisions regarding cooperation, association in the defense and settlement of the case, and requirements to obtain the insurer’s consent to a settlement. Carefully review coverage for all components of a settlement, including settlement amounts, plaintiffs’ attorneys’ fees, interest, and defense costs.
  • Review the policy’s dispute-resolution provisions so that in the event of a coverage challenge, the insureds understand whether there is a policy requirement or option to mediate or arbitrate. Consider the provisions in excess policies as well.

Though it is tempting to conclude that Target is being attacked from all sides – including this most recent attack from a shareholder – because of the size of the breach, these kinds of responses from consumers, banks, regulatory agencies, legislative bodies, and shareholders are becoming all too common in the aftermath of many security breaches. It is an important reminder of the need for strong data security, internal controls, insurance protection, and compliance with all relevant processes and procedures.

ENISA Publishes Report & Good Practice Guide on Government Cloud Deployment

This post was written by Cynthia O'Donoghue.

The EU Agency for Network and Information Security (ENISA) announced in a press release that it has produced a report titled ‘Good Practice Guide for Securely Deploying Governmental Clouds’, which analyses the current state of governmental Cloud deployment in 23 countries across Europe, classifying each country as an “Early adopter”, “Well-Informed”, “Innovator” or “Hesitant”.

A high-level summary of the results for each category is as follows (country-specific analysis is available in full in the report):

  • Early adopters: UK, Spain and France
    • These countries have a Cloud strategy in place and have taken steps to implement the governmental Cloud
  • Well-Informed: The Netherlands, Germany, Republic of Moldova, Norway, Ireland, Finland, Slovak Republic, Belgium, Greece, Sweden and Denmark
    • These countries have a strategy but have yet to take steps to implement the governmental Cloud
  • Innovators: Italy, Austria, Slovenia, Portugal and Turkey
    • These countries do not have a Cloud strategy, though they may have a digital agenda that contemplates the adoption of Cloud computing, and already have some Cloud services running based on bottom-up initiatives. Cloud implementation is forthcoming but will need to be supported by national/EU-level regulation.
  • Hesitants: Malta, Romania, Cyprus and Poland
    • These countries are planning to implement governmental Cloud in the future to boost competitive business, but currently have no strategy or Cloud initiatives in place

The report also sets out 10 recommendations for the secure development of governmental Clouds. These include:

  1. Support the development of an EU strategy for governmental Clouds
  2. Develop a business model to guarantee sustainability, as well as economies of scale for government Cloud solutions
  3. Promote the definition of a regulatory framework to address the locality problem
  4. Promote the definition of a framework to mitigate the loss-of-control problem
  5. Develop a common SLA framework
  6. Enhance compliance with EU and country-specific regulations for Cloud solutions
  7. Develop a certification framework
  8. Develop a set of security measures for all deployment models
  9. Support academic research for Cloud computing
  10. Develop provisions for privacy enhancement

The Executive Director of ENISA, Professor Udo Helmbrecht, commented, “This report provides the governments the necessary insights to successfully deploy Cloud services. This is in the interest of both the citizens, and for the economy of Europe, being a business opportunity for EU companies to better manage security, resilience, and to strengthen the national cloud strategy using governmental Clouds.”

EU Nominates Expert Group To Develop Standard Cloud Computing Contract

This post was written by Cynthia O'Donoghue.

The European Commission announced that the European Cloud Partnership facilitated a meeting of an expert group of lawyers, cloud service providers and customers on November 20, 2013 to “cut through the jungle of technical standards on cloud computing” by setting down safe and fair terms and conditions, and to develop a template contract for cloud computing in accordance with the 2012 European Cloud strategy ‘Unleashing the Potential of Cloud Computing in Europe’.

The hot topics on the agenda:

  • Data preservation after termination of the contract
  • Data disclosure and integrity
  • Data location and transfer
  • Ownership of the data
  • Direct and indirect liability
  • Change of service by cloud providers
  • Subcontracting

Commission Vice President Viviane Reding commented “The group's aim is to provide a balanced set of contract terms for consumers and small to medium-sized businesses to support them to use Cloud computing services with more confidence.”

The outcome of the meeting of the experts will be produced in a report due to be published by the Commission in early 2014.
 

European Commission Aims for Europe to be World's Leading 'Trusted Cloud Region'

This post was written by Cynthia O'Donoghue.

Building on the European Cloud strategy ‘Unleashing the Potential of Cloud Computing in Europe’, released in 2012, the European Commission has released a memo to foster greater support for cloud computing services in Europe, with the ambition for Europe to become the world’s leading trusted cloud region and a harmonised single market for cloud computing, dubbed ‘Fortress Europe’.

The Commission calls for faster and more widespread adoption of cloud computing to improve productivity levels in the European economy, despite recent doubts about cloud security in the context of revelations about PRISM and other surveillance activities. To allay security concerns, the Commission established the European Cloud Partnership Steering Board. Furthermore, to restore trust in cloud services, the Commission calls for greater transparency, specifically by government bodies.

The Commission highlights that the recent proposal for a new EU data protection regulation, scheduled to be adopted in 2015, will provide a uniform legal basis for the protection of personal data across Europe. Furthermore, the European Telecommunications Standards Institute has been working with ENISA and the Select Industry Group to develop EU-wide voluntary certification schemes to help cloud computing suppliers demonstrate to customers that they adhere to high standards of network and information security.

The Commission concludes that Europe can pride itself on high standards for data protection and data security, and that these provide a strong foundation for secure cloud computing. The memo therefore implores Europe to ‘embrace the potential economies of scale of a truly functioning EU-wide single market for cloud computing where the barriers to free data-flow around Europe would be reduced providing a massive boost to competitiveness.’

Mexican Data Protection Authority Issues New Data Security Guidelines

This post was written by Cynthia O'Donoghue.

The Mexican data protection authority, the Institute of Access to Information and Data Protection (the IFAI), has issued data security guidelines for businesses to ensure measures are implemented to comply with the data security provisions of the Mexican data protection law, the Federal Law on the Protection of Personal Data in the Possession of Private Parties (the Federal Law).

Mexico’s Data Protection Secretary, Alfonso Onate-Laborde, commented, “Although the Mexican Data Protection Law required companies to implement a minimal set of security measures by 21 June 2013, many companies have not done so and stay at a low level of compliance with the rules. The Guidelines will provide useful advice for companies on how to implement security rules into their operating processes.”

To ensure compliance with Article 19 of the Federal Law in particular, the IFAI guidelines recommend that companies adopt a Personal Data Security Management System based on the four-step ‘Plan-Do-Check-Act’ process (the PDCA cycle), which can be summarised as follows (a schematic sketch follows the list):

  1. Plan - identify key security objectives, examine data flows within the organisation and conduct a risk analysis
  2. Do - implement the necessary policies, procedures and plans to help achieve data security objectives
  3. Check - audit and evaluate whether policies, procedures and plans are achieving security objectives
  4. Act - take corrective action and other remediation measures to continually improve security, including training relevant personnel
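Purely as an illustration of the cycle’s iterative character – the step names track the IFAI guidelines, but the function names and types below are our own shorthand, not anything the guidelines prescribe – the four steps can be modelled as a loop that repeats until the Check step passes:

```python
from typing import Callable

def pdca_cycle(
    plan: Callable[[], dict],
    do: Callable[[dict], None],
    check: Callable[[dict], bool],
    act: Callable[[dict], None],
    max_rounds: int = 3,
) -> bool:
    """Iterate Plan-Do-Check-Act until the Check step passes."""
    for _ in range(max_rounds):
        objectives = plan()    # identify objectives, map data flows, assess risk
        do(objectives)         # implement policies, procedures and plans
        if check(objectives):  # audit: are the security objectives being met?
            return True
        act(objectives)        # corrective action, remediation, training
    return False
```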

While adoption of the guidelines is voluntary, companies are warned that the IFAI has the power to issue fines of up to $3 million for incidents involving data security breaches. The IFAI is also set to hire third-party contractors to conduct data security inspections, reinforcing the increasingly aggressive enforcement posture it has shown in recent months, such as the €1 million fine against Banamex, the Mexican division of Citibank.

Alfonso Onate-Laborde commented, “An increasing number of Mexican companies are taking affirmative steps to improve their data security, realising there is no more time left to postpone compliance...the IFAI will focus on enforcement and conduct data security audits of companies to determine compliance with the guidelines.”

 

Setting Higher Standards for Payment Card Data Security

This post was written by Cynthia O'Donoghue.

To enhance security standards protecting customer payment data in the context of increasing e-commerce, the Payment Card Industry (PCI) Security Standards Council has announced the release of version 3.0 of the Payment Application Data Security Standard (PA-DSS) and version 3.0 of the PCI Data Security Standard (PCI-SS), both effective from 1 January 2014. The package of standards sets key requirements for the storage and processing of customer payment card data to prevent cardholder data security breaches.

Details of the changes from version PCI-SS 2.0 to 3.0 can be read here. In summary, the new key requirements are:

  • Evaluate evolving malware threats for any systems not considered to be commonly affected
  • Combine minimum password complexity and strength requirements into one requirement, with increased flexibility for alternatives (see the sketch after this list)
  • For service providers with remote access to customer premises, use unique authentication credentials for each customer
  • Where other authentication mechanisms are used (e.g., physical security tokens, smart cards or certificates), these must be linked to an individual account, ensuring only the intended user can gain access
  • Control physical access to sensitive areas for onsite personnel, including a process to authorise access and to revoke access immediately upon termination
  • Protect devices that capture payment card data via direct physical interaction with the card from tampering and substitution
  • Implement a methodology for penetration testing, including validation of any segmentation methods used to isolate cardholder data
  • Implement a process to respond to any alerts generated by the change-detection mechanism
  • Maintain information about which PCI DSS requirements are managed by each service provider
  • Service providers must acknowledge in writing to their customers their responsibility for the security of the cardholder data they handle
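To make the combined password requirement concrete, here is a minimal sketch assuming the baseline PCI DSS has historically set (at least seven characters, containing both letters and digits); the function name and policy constant are our own, and any stricter policy, or an alternative of equivalent strength, would satisfy the standard equally well:

```python
import re

# Illustrative baseline along the lines of PCI DSS requirement 8.2.3:
# at least seven characters, containing both letters and digits.
MIN_LENGTH = 7

def password_meets_policy(password: str) -> bool:
    """Check the combined complexity/strength baseline."""
    if len(password) < MIN_LENGTH:
        return False
    has_letter = re.search(r"[A-Za-z]", password) is not None
    has_digit = re.search(r"\d", password) is not None
    return has_letter and has_digit

assert password_meets_policy("s3curePwd")
assert not password_meets_policy("short1")       # too short
assert not password_meets_policy("lettersonly")  # no digit
```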

Full details of the updates to the PA-DSS can be read here. The new requirements include:

  • Payment application developers must verify the integrity of source code during the development process (a hash-manifest sketch follows this list)
  • Payment applications must be developed according to industry best practices for secure coding techniques
  • Payment application vendors must incorporate risk assessment techniques into their software development process
  • Application vendors must provide release notes for all application updates
  • Vendors with remote access to customer premises for maintenance must use unique authentication credentials for each customer
  • Organisations must provide information security and PA-DSS training to vendor personnel with PA-DSS responsibility annually
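One common way to verify source-code integrity – offered here purely as a hedged illustration, since PA-DSS does not mandate any particular mechanism – is to compare each file against a manifest of known-good SHA-256 digests recorded when the code was last reviewed:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def find_tampered(manifest_file: str) -> list:
    """List files whose current digest no longer matches the manifest.

    The manifest format ({"relative/path.py": "<hex digest>", ...})
    is an assumption made for this sketch, not a PA-DSS artefact.
    """
    manifest = json.loads(Path(manifest_file).read_text())
    return [
        rel_path
        for rel_path, expected in manifest.items()
        if sha256_of(Path(rel_path)) != expected
    ]

if __name__ == "__main__":
    tampered = find_tampered("source-manifest.json")
    if tampered:
        raise SystemExit("Integrity check failed: " + ", ".join(tampered))
    print("All source files match the manifest.")
```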

The PCI Security Standards Council commented that the package of standards “will help organisations make payment security part of their business-as-usual activities by introducing more flexibility, and an increased focus on education, awareness and security as a shared responsibility.”

Organisations should be reminded that failure to adhere to the PCI standards could result in enforcement by the ICO. In August 2011, the ICO made an example of the retailer LUSH following a security lapse that allowed hackers to access the payment details of 5,000 customers of the company’s website, 95 of whom fell victim to card fraud. As a consequence, the ICO required LUSH to sign an undertaking that future customer credit card data would be processed in accordance with the PCI-SS.
 

New UK Cyber Security Principles Released

This post was written by Cynthia O'Donoghue.

Back in 2011, the Cabinet Office launched a cyber security strategy outlining steps the UK Government would take to tackle cyber crime by 2015. The National Cyber Security Programme invested £650 million of funding to support the strategy, ‘Protecting and Promoting the UK in a Digital World’. Measures proposed by the strategy included:

  • Reviewing existing legislation, e.g., the Computer Misuse Act 1990, to ensure it remains relevant and effective
  • Pioneering a joint public/private-sector cyber security hub to allow the exchange of data on cyber threats across sectors and to manage the response to cyber attacks
  • Seeking to agree a voluntary set of guiding principles with Internet Service Providers
  • Developing kite marks for approved cyber security software
  • Encouraging UK courts to enforce sanctions for online offences under the Serious Crime Prevention Order
  • Creating a new national cyber crime capability as part of the National Crime Agency
  • Creating a single reporting system for cyber crime using the Action Fraud portal run by the National Fraud Authority
  • Strengthening the role of Get Safe Online to raise awareness and education about online security

In line with the third proposal of the strategy, the Department for Business, Innovation and Skills has now issued new guiding principles, developed and agreed between government and leading Internet Service Providers (ISPs) such as BT, Sky, TalkTalk, Vodafone and Virgin Media, together with the trade association ISPA, to promote cyber security and protect ISP customers from online threats.

The first section of the principles provides that ISPs must:

  • Increase customer awareness of cyber security issues (including by directing to Get Safe Online and other national campaigns), and educate customers on the basic online threats, how to practise safe online behaviour, and how to spot cyber crime and report through Action Fraud
  • Empower customers to protect themselves from online threats by providing tools such as anti-virus software, anti-spyware, anti-spam, malware protection or firewall protection
  • Provide clear mechanisms to encourage customers to report compromises or threats to minimise the impact of cyber threats

The second section provides that the government must:

  • Continue to make businesses aware of cyber threats and educate them on how to respond through guidance, e.g., the Cyber Security Guidance for Business issued in 2012 and the Small Business Guidance for Cyber Security issued in 2013
  • Advise nationally on improving cyber security, e.g., Get Safe Online
  • Increase enforcement of online threats through the national crime capability of the National Crime Agency

The guidelines conclude by highlighting cyber security issues the government and ISPs will partner to resolve jointly going forward to achieve the aims of the UK cyber security strategy.
 

UK Government to Adopt New Cyber Security Standard

This post was written by Cynthia O'Donoghue.

On 28 November 2013, the UK Government Department for Business, Innovation and Skills (BIS) announced, following a report on “UK Cyber Security Standards”, that a new cyber security standard is to be created based on the ISO27000-series. The new standard will be created after BIS reviews the more than 1,000 separate cyber security standards currently in operation globally. The announcement came as a surprise, as it had previously been suggested that the government would endorse an existing standard; however, BIS concluded that no single, ‘one size fits all’ standard fully met its requirements for effective cyber risk management.

The main findings of the BIS report were:

  • 52% of organisations at least partially implement a standard relevant to cyber security, but only 25% implement it fully, and of those businesses only 25% seek external certification of compliance with those standards.
  • 7/10 was the average level of importance placed on cyber security certification, with 10/10 representing the highest importance
  • Cost is the main barrier to adopting cyber security standards and investing in external certification, as there is no financial incentive to invest
  • Only 35% of organisations plan an increase in cyber security spending
  • 48% of organisations implemented new policies to mitigate cyber security risks
  • 43% conducted cyber security risk assessments and impact analysis
  • 25% of organisations believe standards are not important at all

BIS called for support in 2013 for a new cyber security standard, and the business groups that responded overwhelmingly supported the ISO27000-series of standards. However, BIS rejected a straight adoption of those standards in light of flaws it identified in that framework, commenting, “ISO27000-series of standards have perceived weaknesses in that implementation costs are high and that due to their complexity SME’s sometimes experience difficulties with implementation…the fact that in previous versions businesses were free to define their own scope for which area of their business should be covered by the standard can also make auditing ineffective and inconsistent.” Despite these flaws, the report proposes that a new implementation-profile security standard be based on key ISO27000-series standards, and that this will be the government’s preferred standard. So far, the new standard has the support of key industry players such as BAE Systems, BT, Lockheed Martin, Ernst & Young, GlaxoSmithKline and the British Bankers’ Association.

To support the new profile standard, the government intends to create a new assurance framework whereby organisations that have passed their audit will be able to publicly state that their cyber risk management satisfies the government's preferred standard. This will act as an accreditation for businesses to promote themselves and assure others that they have achieved a certain level of cyber security.

BIS anticipates that the new standard will be launched in early 2014. BIS commented, “This will do more than fill the accessible cyber hygiene gap that industry has identified in the standards landscape…it will be a significant improvement to the standard currently available in the UK. We view the use of an organisation standard for cyber security as enabling businesses and their clients and partners to have greater confidence in their own cyber risk management, independently tested where necessary.”

ENISA Releases Reports on EU Cyber Security Measures

This post was written by Cynthia O'Donoghue.

ENISA, the European Union Agency for Network and Information Security, has released a series of reports and guidance tackling the topic of cyber security.

  • ENISA Threat Landscape (ETL) Report 2013
    The report reviews more than 250 incidents of cyber attack that took place in 2013. A table in the report analyses fluctuations in the top 10 threat trends, including trojans, code injection, exploit kits, botnets, identity fraud, phishing, spam and data breaches, to name a few. The findings show that threat agents have increased the sophistication of their attacks, with migration to the mobile ecosystem and the emergence of a digital battlefield relating to big data and the Internet. To counter these threats, ENISA highlights the successes achieved by cyber security officials and law enforcement authorities in 2013, as well as increases in the reporting of attacks, which facilitates greater threat analysis. The report ultimately calls for greater sharing of security intelligence, speed in threat assessment, and elasticity in IT architectures to ensure they remain robust against innovative cyber tactics.
  • Updates to Cyber Security Strategies Map
    ENISA lists the countries across the world that have adopted National Cyber Security Strategies (NCSS). The latest countries to adopt an NCSS include Belgium, the Netherlands, Poland, Slovenia and Spain, while Montenegro, Ghana and Thailand are reported to be planning to develop one soon.
  • Feasibility Study on European Information Sharing and Alerting System (EISAS)
    EISAS is meant to increase awareness of IT security issues among citizens and SMEs, and to foster a collaborative information-sharing network that improves the capability to respond to network security threats. In 2009, ENISA published the EISAS Roadmap with a deployment plan to implement this concept. In 2012, ENISA also published a Basic Toolset for the large-scale deployment of EISAS across Europe by 2013. The feasibility study is the last stage in the implementation of EISAS. It includes a three-year action plan for deployment, and examines which entities could commit to leading the EISAS network, what operational measures would need to be implemented, and the funding required to ensure the sustainable success of the infrastructure.
  • Report on supervisory control and data acquisition (SCADA) programs and Guide on Mitigating Cyber Attacks On ICS
    Much of Europe’s critical infrastructure is controlled by SCADA systems, a subgroup of Industrial Control Systems (ICS). The report recognises that in the past decade, SCADA technology has transformed from isolated systems into standard technologies highly interconnected with corporate networks, and that SCADA systems have simultaneously become increasingly vulnerable to attack. The report recommends the implementation of patch management strategies, by way of software upgrades, to tackle this.
     
    Like SCADA technology, ICS are equally vulnerable to cyber attack and are seen as lucrative targets for intruders. The guide aims to provide good practices for entities that are tasked to provide ICS Computer Emergency Response Capabilities (ICS-CERC).
  • Report on National Roaming for Resilience to Cyber Attacks
    In the context of more than 79 incidents of network outage occurring across the EU in 2012, ENISA’s report discusses the potential for mobile roaming to be used as a resource to improve the resilience of mobile communications networks. The report also proposes recommendations to mitigate the impact of network outages, including:
    • Service prioritizations in outages
    • Open Wi-Fi as alternative solution for data connectivity
    • Establish an M2M inventory of all SIMs per service and provider to assess the possible impact and strategy in case of outage
    • Identify key people within Critical Infrastructure Services to be prepared for eventual mobile network outage
  • CERT Guidance and Updated Training Materials
    ENISA has published guidance for governments on the mechanisms available to support CERTs via organisations such as TF-CSIRT TI, FIRST, the Internet Engineering Task Force, the CERT Coordination Center and the International Organisation for Standardisation. Complementary to this, ENISA has also expanded the breadth of its training materials for CERTs to include 29 scenarios, such as recruitment of CERT staff, incident handling and cooperation with law enforcement agencies – all available with downloadable handbooks, toolsets and online training presentations.
     

UK Data Protection Watchdog Launches Public Consultation on Future Governance Strategy, 'A 2020 Vision for Information Rights'

This post was written by Cynthia O'Donoghue.

The UK data protection watchdog, the Information Commissioner’s Office (ICO), has launched a public consultation on its future governance strategy, the ’2020 Vision for Information Rights’. The ICO is being challenged by significant changes in the regulatory landscape triggered by imminent reform of EU data protection law. Simultaneously, the UK regulator faces cutbacks in grant-in-aid, resulting in a funding crunch with resources stretched to the maximum. Meanwhile, the public perception of the importance of information rights is growing; the ICO has therefore rightly recognised that it must find a way to ‘do better for less’.

The public consultation sets out the ICO’s mission to ‘uphold information rights in the public interest’, and a vision ‘to be recognised as the authoritative arbiter of information rights – a good model for regulation’. The ICO’s goal is to achieve a society in which organisations collect personal information responsibly, all public authorities are transparent, and people understand how to protect their personal information and feel empowered to enforce their information rights.

The ICO set out five aims for the coming years:

  1. Educate –
    Issue further guidance for organisations; influence advice at EU level; work with other regulators and sectoral bodies to secure compliance; and embed information rights within the school curriculum
  2. Empower –
    Provide more guidance for citizens; develop privacy seals, kitemarks and accreditation schemes to make privacy rights more prominent; make reporting concerns easier with online mechanisms
  3. Enforce –
    Focus more intently on organisations that have significant breaches; collaborate with sectoral regulators in enforcement actions
  4. Enable –
    Demystify information rights to ensure data protection law is not seen as a roadblock to information sharing in the public interest
  5. Engage –
    Be up to date with developments in business and technology nationally and internationally to keep informed and to influence areas of reform

Overall, the ICO intends to be more outcome-focused, prioritising only the highest information-rights risks, and reducing casework and responses to individual enquiries to give greater attention to wider compliance issues. Furthermore, it intends to engage in greater dialogue with government policy makers, sectoral bodies and other international regulators. The ICO seems likely to restructure in order to increase reliance on such partnerships, helping to provide a more sustainable funding model for the future.

The public consultation is due to close on 7 February 2014. Responses can be made by completing this form and emailing it to consultations@ico.org.uk. The ICO anticipates publishing its final strategy, informed by the consultation responses, by March 2014, along with a three-year corporate plan covering 2014-17.

Maximum administrative fine issued by the CNIL against Google: More to come?

This post was written by Daniel Kadar.

After almost two years of back and forth with Google, the French CNIL has – like the Spanish data protection authority before it (€900,000 fine) – sanctioned Google with a €150,000 fine, after Google refused to review its integrated platform and to modify its privacy policy as requested by the Article 29 Working Party.

In addition to this fine, the CNIL has ordered Google to post a warning reflecting this sanction on Google’s French home page, within eight days of the CNIL’s notification, and to keep it displayed for two days.

Google has decided to appeal the CNIL’s decision before the French Council of State (“Conseil d’Etat”), France’s highest administrative court, in order to obtain the cancellation or reversal of the decision.

One could wonder why Google is putting so much energy into trying to reverse a sanction that is “bearable” from a financial point of view: there are at least two reasons.

First, the warning requirement is the “real” sanction – it would be displayed to millions of Google users – so Google has no choice but to appeal the decision in order to avoid it. And because the appeal does not suspend the immediate enforceability of the sanction, Google has simultaneously filed a petition for suspension before the Conseil d'Etat.

The hearing is scheduled to take place February 6, 2014.

Second, and more importantly, this sanction could be the first step toward criminal penalties: the French criminal code provides that failure to comply with the French Data Protection Act is punishable, per infringement, by a fine of €300,000 and imprisonment of up to five years.

Those criminal sanctions can be ordered only after the CNIL has first issued the “administrative” sanction that it alone has the power to impose – which is what happened earlier this month.

Note that according to article 131-38 of the French Criminal Code, if a legal person is convicted, the amount of the fine is multiplied by five, and doubled again in case of recidivism.

Therefore, on that basis, Google could face a fine of €1.5 million – or even €3 million in case of recidivism – per infringement.

There lies the real financial threat: now that this first “administrative” fine has been ordered by the CNIL, a criminal case could follow.

The legal proceedings against Google in France may only just have begun.

 

The implementation of the French transparency regulation: first good news?

This post was written by Daniel Kadar.

French health care companies have faced hard times over the past months with their new transparency obligations. They have been required to declare the equivalent of 18 months (!) of agreements and benefits in a very short period of time.

They were scheduled to disclose this information through the single state portal the French government had announced, but which ultimately was not in place in time.

As a consequence, the government issued “transitory provisions,” under which health care companies operating in France had to disclose their information:

  • To the National Medical Associations (there are seven of them…)
  • On dedicated company websites that some international companies had to set up specifically for this purpose

To top it off, the French Medical Association set up its own template, making compliance more difficult still. Unsurprisingly, as of today, only half of the declarations transmitted comply with the French Medical Association’s template.

Things should change now that the single state portal is finally up and running. After a first registration, disclosure of information should become easier.

Registration and authentication

When first connecting to the single state portal, companies will have to register. They will be required to provide details such as information on their headquarters, company registration and contact information, as well as the procedure a health care professional (HCP) will have to follow in order to modify the displayed data. (Note that because the transparency disclosure is mandatory, no right to object is granted to the HCP as data subject, contrary to the general principles of data protection.)

After this first registration, a unique user ID/password pair will be assigned to the company.

Information will remain available on the public state portal for five years, but will be securely stored by the government for 10 years.

Disclosure of information

The new state portal seems more “customer friendly”, as three options are available for the disclosure and transfer of data:

  • An online form can be completed directly on the portal
  • A document in the specified format can be uploaded directly to the website
  • Automated submission through a web service can also be set up (a hedged sketch follows this list)
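By way of illustration only, an automated web-service submission might look like the sketch below; the endpoint path, payload fields and authentication scheme are assumptions made for the example, not the portal’s documented interface:

```python
import requests

# Hypothetical endpoint path on the real portal host; the actual
# web-service interface is defined by the French authorities.
PORTAL_ENDPOINT = (
    "https://www.entreprises-transparence.sante.gouv.fr/api/declarations"
)

declaration = {
    "company_id": "FR-123456789",  # hypothetical ID assigned at registration
    "hcp_id": "10001234567",       # hypothetical HCP identifier
    "benefit_type": "hospitality",
    "amount_eur": 150.00,
    "date": "2014-03-01",
}

response = requests.post(
    PORTAL_ENDPOINT,
    json=declaration,
    auth=("company_user_id", "password"),  # credentials from first registration
    timeout=30,
)
response.raise_for_status()  # fail loudly if the portal rejects the submission
```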

Transmission is deemed secure and takes place over an HTTPS website. In addition, the website will have to comply with the requirements of the French Data Protection Authority (the CNIL) and avoid any indexing by external search engines.
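For illustration, the sketch below shows two generic mechanisms a site operator can use to keep pages out of search indexes – a restrictive robots.txt and a “noindex” response header – using an assumed Flask application; this is not the portal’s actual configuration:

```python
from flask import Flask, Response

app = Flask(__name__)

@app.route("/robots.txt")
def robots() -> Response:
    # Ask well-behaved crawlers to stay away from the whole site.
    return Response("User-agent: *\nDisallow: /\n", mimetype="text/plain")

@app.after_request
def add_noindex_header(response: Response) -> Response:
    # Tell search engines not to index or follow any page served.
    response.headers["X-Robots-Tag"] = "noindex, nofollow"
    return response
```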

The specific format set up for the single state portal is quite similar to the one already established by the French Medical Association, which should largely allow companies to continue working from their previous template.

The single state portal will be officially launched by April 1, 2014 at the latest. It is already accessible to companies at the following address:

https://www.entreprises-transparence.sante.gouv.fr/flow/login.xhtml

Health care companies no longer have to add information to their own dedicated transparency websites, although those websites will have to be maintained for the data disclosed during the “transitory period.”

However, this new state portal should not make companies forget their data protection obligations: they must still inform the HCPs with whom they contract, or to whom they provide benefits, that the related data will be disclosed.

 

Information Rights Tribunal Rules Self Reporting Breaches To ICO Does Not Provide Immunity From Fines

This post was written by Cynthia O'Donoghue.

A judgment of the UK Upper Tribunal (the Tribunal) in Central London Community Healthcare Trust v Information Commissioner [2013] UKUT 0551 (AAC) has ruled that organisations that voluntarily report data security breaches to the ICO do not gain automatic immunity from monetary penalties in relation to those breaches.

The Tribunal rejected the appeal of the Central London Community Healthcare Trust (the Trust) against an ICO decision to serve a £90,000 monetary penalty notice in 2012. The notice was issued following a data breach in which 45 separate fax messages containing lists of palliative care inpatients – including particularly sensitive and confidential data such as medical diagnoses – were sent over a period of two months to the wrong recipient, a member of the public, instead of a hospice. While the Trust did not deny the breach, it argued that the ICO was wrong to issue a monetary penalty notice because the Trust had self-reported the breach to the ICO.

Upper Tribunal Judge Nicholas Wikeley ruled, “The logical implication of the Trust’s construction of the legislative scheme is that a data controller responsible for a deliberate and very serious breach of the DPA would be able to avoid a monetary penalty notice by simply self-reporting that contravention and co-operating with the Commissioner thereafter. Such an offender would be in a better position than a data controller acting in good faith, but unaware of a breach, who could be subject of a monetary penalty notice because a third party reported the matter to the Commissioner. Such an arbitrary outcome would necessarily undermine both the effectiveness of, and public confidence in the regulatory regime.”

Commentators have been quick to point out that, in spite of this ruling, the benefits of informing the ICO about serious data breaches continue to significantly outweigh the risk of being served with a fine. Deputy Information Commissioner David Smith commented that the UK regulator does look favourably on companies that self-report data breaches, even though the act of reporting does not confer automatic immunity from fines. Furthermore, informing the ICO directly gives organisations the chance to make their case and to have some influence over the rectification measures the ICO may impose through its enforcement regime. Self-reporting is thus best seen as a mitigating factor that the ICO considers when determining the level of monetary penalty notices it issues.

Regulations Released to Implement Peru's Personal Data Protection Law

This post was written by Cynthia O'Donoghue.

The Peruvian Law 29733 for Personal Data Protection (the Law) was enacted in July 2011; however, it was only recently, in May 2013, two years on, that the Law’s implementing regulations were approved through Supreme Decree No. 003-2013-JUS (the Regulations). This post provides more detail on the scope of the Regulations, of which we gave a high-level summary in our previous blog when they were first released.

The Law and Regulations apply to any processing of data by an establishment in Peru, by a holder of a database in Peru, or even where the holder of the database is not located in Peru but uses means located in Peru to process data. Processing for personal purposes relating to family or private life is not regulated.

The key provisions of the Law to note are as follows:

Consent

  • Processing of personal data requires prior express and unequivocal consent of the data subject that is obtained freely without bad faith or fraud.
  • Sensitive data requires consent in writing by a handwritten signature, a fingerprint, or electronic digital signature.
  • Consent must be informed, including details of the purpose, the recipients, the database, the identity of the database owner, any intended transfers or disclosures to third parties, the consequences of providing the information or of failing to do so, and the rights available to the individual under the Law. Informing by publication of privacy policies is acceptable.
  • Children over 14 and under 18 may consent to processing without parental authorisation, which is compulsory for children under 14.
  • Exceptions to the consent requirement include when data is
    • related to a person’s health;
    • in the public domain;
    • related to financial solvency;
    • necessary for the execution of a contractual relationship

Notification

  • All databases must be registered with the public National Registry for the Protection of Personal Data. Any subsequent amendments to notifications require cancellation of the prior registration and submission of a new registration.

Security

  • Security measures must be established to ensure the confidentiality and integrity of data stored implementing the following Peruvian technical standards:
    • NTP-ISO/IEC 17799:2007 EDI Information Technology. Code of Good Practice for the Management of Information Security
    • NTP-ISO/IEC 27001:2008 EDI Information Technology. Information Security Management Systems. Requirements

Data Transfers

  • Transfers of data within an organisational group are permitted, provided there is an internal code of conduct regulating the protection of personal data within the group, and processing complies with the Law and Regulations.
  • International transfers must be notified to the DPA and require prior consent; they can only be made to countries offering levels of protection for personal data similar to those under the Law, or where the recipient guarantees the same level of protection
  • Cloud computing is permitted provided the service provider guarantees compliance with the Law and Regulations; any subcontracting must be reported.
  • Data processors may subcontract to third parties, provided an agreement is entered into and the prior consent of the database holder is obtained.

Subject Access Requests

  • A holder of a database will have the following time limits to respond to data subject requests:
    • Information request - 8 days
    • Access request - 20 days
    • Requests for correction or deletion -10 days

There will be a two-year transition period for owners of existing databases to comply with the provisions of the Law implemented by the Regulations; however, the obligation to register all databases with the DPA takes immediate effect. Violation of the Law or Regulations can result in fines ranging from US$7,150 to US$142,000.
 

ICO Enforcement Powers Challenged as Tribunal Overturns £300,000 Monetary Penalty Notice

This post was written by Cynthia O'Donoghue.

The First Tier Tribunal (Information Rights) has allowed the appeal against a monetary penalty notice of £300,000 issued by the Information Commissioner in Christopher Niebel v The Information Commissioner (EA/2012/2060), ruling that the penalty notice should be cancelled.

The monetary penalty notice had been issued against Christopher Niebel, owner of Tetrus Telecoms, for sending unsolicited ‘spam’ text messages touting potential claims for mis-sold PPI or accident compensation. The messages were sent from unregistered SIM cards, allowing Mr. Niebel to conceal his identity as the sender. The ICO found Mr. Niebel’s actions to be in breach of the Privacy and Electronic Communications (EC Directive) Regulations 2003 (the PECR Regulations). Under Regulation 22, it is unlawful to use text messages for direct marketing unless the recipient has either asked for or specifically consented to such communications. Regulation 23 also requires that the identity of the sender be clear and that the text contain a valid address allowing the recipient to contact the sender or ask for the messages to stop. The ICO found that Mr. Niebel had obtained neither permission nor consent to send the text messages, and that he had withheld his name and address.

The PECR Regulations incorporate s.55A of the Data Protection Act 1998 (DPA). This section gives the ICO the power to impose monetary penalties of up to £500,000 for breaches of the PECR Regulations, provided the contravention is serious and of a kind likely to cause a victim substantial damage or substantial distress.

In the First Tier Tribunal, Judge NJ Warren found that the ICO monetary penalty issued to Mr. Niebel overstated the nature and scale of his contravention of the PECR Regulations as involving hundreds of thousands of messages, when in fact it related to only 286 texts. Furthermore, Judge Warren did not consider the minor irritation of having to delete a spam text, or the small charge to reply ‘STOP’, likely to cause a recipient ‘substantial damage’ or ‘substantial distress’. As a result, it was held that s.55A(1)(b) of the DPA had not been satisfied, and the ICO therefore had insufficient grounds to issue the monetary penalty notice. The Tribunal accordingly cancelled the penalty notice.

 

Big Data Is Better, Urges EU Commissioner

This post was written by Cynthia O'Donoghue.

At Europe’s biggest digital technology event, ICT 2013, the Vice President of the European Commission responsible for the Digital Agenda, Neelie Kroes, gave a speech lamenting that Europe is lagging behind the rest of the world in taking advantage of the opportunities presented by big data. Kroes recognised it would be beneficial to “put the data together…the value of the whole is far more than the sum of its parts… to create a coherent data ecosystem.”

The speech highlights plans under new EU data protection legislation to ensure more public data is shared, supported by the creation of a new pan-European open data portal. To support this vision, Kroes called for a big data partnership between public and private-sector organisations, adding that “a European public-private partnership in big data could unite all the players who matter.”

Kroes is quick to point out that the benefits of big data do not have to come at the cost of privacy. She comments, “for data that does concern people, we need firm and modern data protection rules that safeguard this fundamental right…we need digital tools that help people take control of their data so that they can be confident to trust this technology.”

Overall, the EU Commissioner proposes that big data can be more than a fashionable slogan – “a recipe for a competitive Europe.”

OIG Report Indicates OCR Not Overseeing and Enforcing HIPAA Security Rule

This post was written by Nancy E. Bonifant and Brad M. Rostolsky.

A November 21, 2013 report published by the Office of the Inspector General (OIG) concluded that the Department of Health & Human Services (HHS) Office for Civil Rights (OCR) is not fully enforcing the HIPAA Security Rule, and laid out recommendations for OCR to implement. The OIG’s report also separately concluded that OCR is not in full compliance with the cybersecurity requirements of the National Institute of Standards and Technology (NIST) Risk Management Framework, to which OCR responded by describing the actions it has taken since May 2011 to address the OIG’s concerns.

Click here to read more on our sister blog, Life Sciences Legal Update.

 

Theft of Unencrypted Flash Drive Causes OCR to Issue Settlement and Corrective Action Plan for Physician Practice

This post was written by Brad M. Rostolsky and John E. Wyand.

The Department of Health and Human Services’ Office for Civil Rights (OCR) opened an investigation of Adult & Pediatric Dermatology, P.C. (APDerm) after a report of the theft of an unencrypted flash drive. To settle potential violations of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy, Security, and Breach Notification Rules, APDerm will pay $150,000 to OCR and will be required to implement a corrective action plan to address shortcomings in its HIPAA compliance program.

Click here to read more on our sister blog, Life Sciences Legal Update.