Public health surveillance in India: concerns for individual liberty and privacy amid a pandemic

(This article borrows extensively from an article that the authors first wrote for and published on The Leaflet.)

The world is grappling with a situation of a kind it has never seen before. The rapid spread of COVID-19 has made it necessary for governments around the world to use extreme measures that would otherwise be considered Orwellian. These emergency measures are attempts to effectively enforce lockdowns and strictly restrict the movement of citizens in a bid to break the chain of infection.

As governments attempt to contain the contagious virus, the use of technology to monitor people undergoing quarantine has redoubled in a bid to combat the spread of the virus. Ordinarily, such a developing Orwellian state of affairs would stir commotion among civil liberty activists and privacy advocates; considering the scale of the crisis, however, they seem to have tacitly embraced these measures. This pandemic is clearly reshaping our relationship with surveillance technology, though some fear that such surveillance could become the norm.

World under surveillance

Across the globe, countries are expansively deploying tech-enabled surveillance infrastructure such as Face Recognition Technology (FRT) based CCTVs, drones and cell phone tracking for contact tracing and enforcing quarantine. A growing number of countries, such as Israel and South Korea, are 'contact tracing' using mobile applications or cell phone records: mapping the travel history of an infected person by analyzing the location records of their cell phone, and then pinpointing the other persons who might have come in contact with them so that they can be quarantined. Taiwan has gone a step further in quarantining traced contacts by deploying an 'electronic fence': if a quarantined user's SIM card is tracked beyond the reach of its designated network station or found to be switched off, law enforcement authorities quickly approach the suspect.

In India, law enforcement authorities across the nation are increasingly using technology to monitor people and restrict the spread of the virus. In several states, such as Rajasthan, Punjab and Delhi, local authorities have published lists of personal details of persons suspected or confirmed to be infected with COVID-19 in online media and newspapers. The Karnataka government has taken this to an inordinate level by mandating all quarantined persons to send a geo-tagged selfie every hour through an official app named 'CoronaWatch', except between 10 PM and 7 AM. Now, the Ministry of Electronics and Information Technology (MeitY) has also launched an app, 'Aarogya Setu', which uses Bluetooth and GPS to alert an individual if they come within six feet of a COVID-19 infected person.
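To make the mechanism concrete, the following is a minimal, hypothetical sketch of how a Bluetooth-based proximity alert of this kind could work: phones record the anonymous identifiers and signal strength of nearby devices, and a contact is flagged once a device has been observed at close range for long enough. The threshold, duration and data structures are illustrative assumptions, not Aarogya Setu's actual logic.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical parameters: an assumed signal-strength cutoff for "roughly six
# feet" and an assumed minimum exposure window. Neither is taken from the app.
RSSI_THRESHOLD_DBM = -65
MIN_CONTACT_DURATION = timedelta(minutes=5)

@dataclass
class Sighting:
    anonymous_id: str   # rotating identifier broadcast by a nearby phone
    rssi_dbm: int       # received Bluetooth signal strength
    seen_at: datetime

def close_contacts(sightings: list[Sighting]) -> set[str]:
    """Return anonymous IDs observed at close range over at least the minimum window."""
    first_strong_sighting: dict[str, datetime] = {}
    flagged: set[str] = set()
    for s in sorted(sightings, key=lambda s: s.seen_at):
        if s.rssi_dbm < RSSI_THRESHOLD_DBM:
            continue  # signal too weak to count as a close contact
        start = first_strong_sighting.setdefault(s.anonymous_id, s.seen_at)
        if s.seen_at - start >= MIN_CONTACT_DURATION:
            flagged.add(s.anonymous_id)
    return flagged
```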

The case of “Public Health Surveillance”

Law enforcement agencies in different countries are carrying out tech-enabled surveillance of their citizens to ensure compliance with the rules of social distancing and lockdown. In normal times, such measures are targeted at terrorists or criminals, and are scrutinized on privacy and civil liberty grounds.

However, even the World Health Organisation (WHO) has sought to play down privacy concerns in these unprecedented times by terming the measures "public health surveillance". The WHO has effectively legitimized the governments' argument that the extraordinary situation of the COVID-19 pandemic necessitates the extraordinary measure of mass surveillance. A public health emergency of this magnitude is being touted as a valid justification for deploying tech-enabled mass surveillance and subverting individual rights.

Is surveillance a matter of concern for India?

There are certain reasons why the implementation of these emergency measures in India is particularly worrisome.

No clarity on the legal basis for surveillance measures

Firstly, in India, neither the central government nor the state governments have provided any legal basis for directing such tech-enabled surveillance measures. For instance, neither the official press release for the Aarogya Setu app nor Karnataka's 'mandatory selfie' direction mentions any legal grounds for the measure, and neither is accompanied by a privacy policy. The wholesale abandonment of civil liberties and privacy in the interest of public health, without even a bare minimum legal foundation, portends negative consequences.

The government has invoked the Epidemic Diseases Act, 1897 and the Disaster Management Act (DMA), 2005 to deal with the COVID-19 outbreak. Neither the colonial-era Epidemic Diseases Act nor the DMA covers surveillance within its scope. There is, however, an argument that the basic residuary power under these laws to take 'necessary' steps to curb the spread of the virus accords the government legitimate authority for surveillance.

It is unclear why the government has not used these very residuary powers to also notify standing rules on privacy or on the lawful manner of deploying tech-enabled surveillance measures. As a natural consequence, in the absence of standing rules, government directives infringing an individual's right to privacy cannot be tested for legality, arbitrariness or accountability. This is particularly dangerous in a country like India, where a data protection statute does not exist.

The use of unregulated novel technologies for surveillance provides no legal checks and oversight

Secondly, the details of the government's technological capabilities for surveillance are largely secret. The sudden outbreak of the pandemic has prompted the government to openly introduce a deluge of unregulated, contemporary and emerging technologies for mass surveillance. There is a growing concern among privacy advocates that tech-enabled surveillance could persist beyond the pandemic once it is accepted and normalized in the present emergency. History shows that many of the world's dictatorships and authoritarian regimes emerged amid crises.

There is no information available about the extent and scope of the government's surveillance capabilities and techniques. This secrecy impedes legislative checks and institutional audits. If the public is unaware of how a technology works (due to non-disclosure by the executive), that manner of surveillance cannot even be challenged in a court of law. Such secrecy therefore nullifies the system of checks and balances in favor of ever-augmenting executive power.

Several surveillance techniques are disproportionate and unnecessary

Thirdly, because the technologies in use vary widely in their level of invasiveness, there are doubts about the necessity and proportionality of such measures in relation to the right to privacy and individual liberty.

The Puttaswamy (I) judgment, which explicitly referred to public health, held that a measure restricting fundamental rights such as privacy and liberty must be proportionate. The SC held that a government measure is proportionate if it satisfies four criteria: 1) the measure pursues a legitimate purpose; 2) the measure is rationally connected to that purpose; 3) no less intrusive alternative measure is available; and 4) the public benefit of the measure outweighs the extent of the infringement of a constitutional right.

More than half of the country's population does not have access to internet services. In such a scenario, how is surveillance through a mobile application a necessary measure? Further, several state governments are taking the extreme step of disclosing the home addresses and other personal details of infected and suspected persons, which grossly falls afoul of three prongs of the constitutional test upheld in the Puttaswamy (I) judgment. A clearly less intrusive measure, such as informing a locality of the presence of infected cases in its area, could have sufficed. The Allahabad HC has also held a similar practice of the UP government, publishing the personal details of anti-CAA protestors in public, to be an "arbitrary invasion of privacy".

Karnataka has rolled out a mobile application that comprehensively discloses the location history and home addresses of persons infected and quarantined. Some states are also publicly listing such details widely on social media channels. Such indiscriminate disclosure of the private information of infected and suspected persons has prompted concerns about social intimidation.

There have already been reports from across the nation of infected and suspected patients facing stigmatisation and various forms of discrimination, with further negative social consequences. For instance, in Maharashtra, the public listing of coronavirus suspects on social media led to several cases of quarantined people being forcefully evicted by their landlords.

Such events call into question the proportionality and necessity of these measures; a less intrusive alternative would have been satisfactory.

Ways to resolve the concerns

There is no denying that certain limitations can be imposed on civil liberties given the urgency of the COVID-19 crisis. However, in a democratic set-up like India, the government is expected to act transparently and give the public a window through which to assess its accountability. The worrisome aspects of public health surveillance measures can be addressed through concerted efforts to give these actions legal backing, to establish institutional oversight and to use the least intrusive means.

To provide the legal basis, the government can issue standing rules laying down the legal and accountability measures for the local authorities responsible for public health surveillance. The governments should use the residuary powers under the DMA and the Epidemic Diseases Act to issue ad-hoc rules and guidelines alongside the emergency surveillance measures. These rules and guidelines would provide the mechanism under which surveillance can be carried out without unduly impairing an individual's privacy and liberty.

The government can, for the present, frame such ad-hoc rules for privacy protection on the same principles as delineated in the Personal Data Protection Bill, 2019 ("PDPB 2019") for data collection during health emergencies. Clause 12 of the PDPB 2019 exempts data fiduciaries from obtaining consent during urgencies such as a pandemic, but strictly imposes requirements of data minimization, purpose limitation, lawful processing, transparency and accountability. Introducing such principles would ensure that the information collected through surveillance is handled under constitutional checks so that privacy is maintained as far as possible.

Such ad-hoc rules would obligate the government, as a data fiduciary, to follow the principle of purpose limitation, so that the authorities collect only the minimum data sufficient for contact tracing, quarantine enforcement and any other lawful and specific purpose. The government should use only anonymised data and adopt all security measures to prevent leaks and maintain the confidentiality of data subjects' personal data. The rules would also mandate the government to delete the collected data at the earliest once it has served the specified purpose. This would allay the emerging concern that the effects of surveillance could persist beyond the pandemic, and it would inhibit the misuse of personal data and the abuse of surveillance measures.
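As an illustration of what these principles could look like in practice, here is a minimal sketch, assuming a hypothetical record format: purpose limitation (dropping fields beyond an assumed minimum set), anonymisation of a direct identifier, and flagging data for deletion once an assumed retention period lapses. None of this reflects any actual government system.

```python
import hashlib
from datetime import datetime, timedelta

# Assumed minimum field set and retention period; purely illustrative.
ALLOWED_FIELDS = {"location", "quarantine_status", "test_result"}
RETENTION_PERIOD = timedelta(days=30)

def minimise(record: dict) -> dict:
    """Purpose limitation: keep only fields needed for tracing and quarantine enforcement."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def anonymise_id(phone_number: str, salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((salt + phone_number).encode()).hexdigest()

def is_due_for_deletion(collected_at: datetime, now: datetime) -> bool:
    """Flag records for deletion once the assumed retention period lapses."""
    return now - collected_at > RETENTION_PERIOD
```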

Since the surveillance measures aim to keep people in quarantine and check the spread of infection for their own benefit, the government should hold no secrets about its surveillance techniques and methods. It should adopt a "public notice" system under which the local district administration has to notify the public of the model of surveillance before conducting it.

At the very least, this notice should disclose the legal rules governing the tech-enabled surveillance measure and its purpose. It should be clear about the authorization required for the retention, access and use of information collected through such novel technology. Such a notice would provide transparency in the imposition of surveillance and allow the legislature and the public to exercise meaningful control and oversight over the deployment of unregulated surveillance technologies.

Parting note

Unarguably, the present situation calls for governments to take substantial measures to protect the lives and health of the public at large, but this should not happen in utter disregard of the constitutionally recognized rights to privacy and individual liberty. The government's policies and techniques should be legitimate and proportionate in order to maintain the democratic principles of public trust and transparency. There is no hard choice between public health and the individual's rights to privacy and liberty: both can co-exist under a legal framework that allows unnecessary expansion of the surveillance regime to be challenged.

As pointed out by Deborah Brown, senior digital-rights researcher at Human Rights Watch, “surveillance measures should come with a legal basis, be narrowly tailored to meet a legitimate public health goal, and contain safeguards against abuse”.

Therefore, while the government should certainly focus on the urgent needs of the many rather than investing all its efforts in securing the rights of a few, it must not entirely ignore its accountability towards any section of the community. These fundamental rights are the lungs of our entire constitutional edifice, and the government should make every effort to avoid injuring them.

Simplifying FinTech and FinTech Laws: Key Takeaways for the Indian FinTech Industry

The significant advancements in FinTech are directly impacting the traditional financial sector. Regulators have had to be careful not to miss the train, and instead to promote financial innovation and stiff competition in the sector. For the sake of promoting competition in the market, newcomers should be given a degree of leniency in the form of exemptions from some of the strict compliances used to curb the malpractices of big corporations. This post deals with key takeaways from the reports of different regulators' committees in India. It is the last post in the series 'Simplifying FinTech and FinTech Laws'.

FinTech-driven firms and businesses must work in tandem with regulated entities, e.g. banks and regulated finance providers. The businesses that a bank can undertake are listed in Section 6 of the Banking Regulation Act, 1949, and a bank cannot operate any business outside Section 6. Such provisions incentivize banking companies to pursue fintech innovations only within the narrow scope relevant to their permitted operations. These archaic laws make it difficult for banks to undertake fintech innovations that could be of significant utility but fall beyond the scope of financial regulation.

The Watal Committee Report noted:

“The current law does not impose any obligation on authorised payment systems to provide open access to all PSPs. This has led to a situation where access to payment systems by new non-bank payments service providers, including FinTech firms, is restricted. Most of them can access payment systems only through the banks, which are also their competitors in the payments service industry. This, according to the Committee, has restricted the fast-paced expansion of digital payments in India by hindering competition from technology firms.”

Forming a comprehensive and non-discriminatory regulatory approach

Regulators and legislators need to realign their legal approach to FinTech services. A deeper understanding is required of the various FinTech services and of how they interact with other fintech services in the financial environment. For the fintech space to work to its full potential, it needs a level playing field with traditional banking and non-banking players. The practice of restricting non-bank institutions' access to payment infrastructure, such as AEPS, has to be re-evaluated and proper steps taken. The government and regulatory bodies should adopt the measures necessary to give all fintech firms access to national payment infrastructure and facilities without discrimination.

Providing Standards for Data Protection and Privacy

Fintech companies need to invest significantly in self-regulatory policies to prevent privacy risks. The government and regulators should provide fintech companies with data protection standards as soon as possible. The provisions of the Personal Data Protection Bill, 2019 can evidently affect the growth of fintech companies significantly, so the standards regulators adopt for fintech companies should be reviewed against data protection and privacy concerns. The government and the country's financial regulators should also start focusing on the valuation of the data processed by banking companies and recommend practices to safeguard consumer interests.

Open Data principles should govern the financial sector in order to enhance competition

Regulators should pay heed to open data policy among participants in the fintech sector. They should begin with mandatory norms directing financial service companies, and encouraging banking institutions, to enable participants to access databases of their rejected credit applications on a specific platform on a consensual basis. The UK's practice with respect to Open Banking regulations can be adopted, under which banking institutions, on the basis of a consent framework, make data available to banking partners in order to foster competition. The RBI Steering Committee on FinTech itself recommended:

“It also recommends that all financial sector regulators study the potential of open data access among their respective regulated entities, for enhancing competition in the provision of financial services.”
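A minimal, hypothetical sketch of such a consent-gated data-sharing check is given below; the consent registry, function names and fields are assumptions for illustration, not any regulator's specification.

```python
from datetime import datetime
from typing import Optional

# (customer_id, partner_id) -> consent expiry timestamp (illustrative registry)
consent_registry: dict[tuple[str, str], datetime] = {}

def grant_consent(customer_id: str, partner_id: str, expires: datetime) -> None:
    """Record a customer's consent for a named partner, valid until `expires`."""
    consent_registry[(customer_id, partner_id)] = expires

def share_rejected_application(customer_id: str, partner_id: str,
                               application: dict, now: datetime) -> Optional[dict]:
    """Release a rejected credit application to a partner only if live consent exists."""
    expiry = consent_registry.get((customer_id, partner_id))
    if expiry is None or now > expiry:
        return None  # no consent on record, or consent has lapsed
    return application
```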

The KYC process should be reformed with respect to the Supreme Court’s Judgment on Aadhaar’s validity

Fintech businesses were among the entities most affected by the striking down of Section 57 of the Aadhaar Act, as it invalidated the online KYC process. Online KYC and authentication gave fintech firms the efficiency and convenience they needed to onboard as many consumers as possible onto their digital platforms. It is recommended that alternatives to mandatory Aadhaar linking be adopted, such as video-based KYC, with the verified documents protected and processed only with the prior consent of the consumer.

Other key recommendations

1. It is recommended that adequate cybersecurity, anti-money laundering and fraud control measures be adopted, by investing in technologies and guidelines that can prevent fraud.

2. Technical innovations should be monitored for the potential risks they carry when operating under the country's current legal landscape.

3. A self-regulatory body to serve the needs of fintech is much needed, as the RBI is still finding it difficult to replace the existing regulatory structure. A regulatory mechanism allowing a broader, participative consultation approach should be adopted.

4. Regulators should invest in Reg-Tech (“Reg Tech is a sub-set of FinTech that focuses on technologies that facilitate the delivery of regulatory requirements more efficiently and effectively than existing capabilities. In July 2015 the FCA issued a call for input entitled ‘Supporting the development and adoption of Reg Tech’.”)

5. Most economies have adopted the practice of setting up regulatory sandboxes to catalyze fintech innovation. It is recommended that the RBI continue introducing mechanisms such as regulatory sandboxes, which enable the adaptation of regulatory initiatives and will play a key role in maintaining India's competitive edge.

Delhi HC has expanded the scope of injunction orders in Internet jurisdiction: Geo-blocking to Global-blocking in IT law

This post has borrowed extensively from an earlier blog-publication by Aryan Babele on Tech Law Forum @ NALSAR.

On 23 October 2019, the Delhi HC delivered an impactful judgment authorizing Indian courts to issue "global takedown" orders to internet intermediary platforms like Facebook, Google and Twitter against illegal content uploaded, published and shared by their users. The Delhi HC delivered the judgment on a plea filed by Baba Ramdev and Patanjali Ayurved Ltd. requesting the global takedown of certain allegedly defamatory videos.

The Court passed the order in the context of its observation that there is a 'hare and tortoise race' between technology and law, such that 'technology gallops, the law tries to keep pace'. This observation reflects the Court's intention to interpret IT law in a manner that ensures the effective implementation of judicial orders throughout the internet and mitigates the circumvention of such orders through advanced technology.

However, the Court's order has attracted criticism from internet-freedom activists globally. It seems the Court has made a hasty attempt to win the 'hare and tortoise race' and has overlooked its far-reaching implications for IT law jurisprudence and conflict of laws. This article analyzes the significant points that the Court failed to consider while relying on the unsettled jurisprudence of global injunction orders.

Background- The case of Swami Ramdev v. Facebook

In Swami Ramdev v. Facebook [CS (OS) 27/2019 – Delhi HC], Swami Ramdev (a prominent yoga guru and public figure) filed a case before the Court against Facebook, Google, YouTube and Twitter, inter alia praying for the global takedown of defamatory content (videos) uploaded, published and shared by users of these intermediary platforms.

The case stems from the publication on the defendants' platforms of videos based on those offending portions of the book 'Godman to Tycoon: The Untold Story of Baba Ramdev' by Priyanka Pathak-Narain that were already subject to an ad-interim injunction granted by the Court in Swami Ramdev v. Juggernaut Books [CM (M) 556/2018] in May 2018.

Subsequently, in January 2019, the Court passed an interim injunction directing the defendants' platforms to disable access to the offending URLs and weblinks for the Indian domain under Section 79 of the Information Technology Act, 2000 [hereinafter referred to as the IT Act 2000], i.e. it ordered geo-blocking.

However, the plaintiff argued that geo-blocking is an ineffective solution, as the objectionable content remains widely available on the global internet and internet users in India can still access it using VPNs and similar mechanisms. Therefore, according to the plaintiff's submission, the only effective remedy was a global blocking order.
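For readers unfamiliar with the mechanics, the sketch below shows, under simplifying assumptions, how IP-based geo-blocking typically works and why routing traffic through a foreign VPN exit node defeats it; the lookup table and URL are illustrative stand-ins, not how any platform actually implements blocking.

```python
# Hypothetical blocked URL and a toy IP-to-country table standing in for a real
# geolocation database.
BLOCKED_URLS_FOR_INDIA = {"https://example.com/offending-video"}

IP_TO_COUNTRY = {
    "49.205.0.1": "IN",    # an Indian ISP address (illustrative)
    "185.220.0.1": "DE",   # a foreign VPN exit node (illustrative)
}

def is_accessible(url: str, client_ip: str) -> bool:
    """Geo-blocking only sees the apparent IP: the same Indian user routed
    through a foreign VPN exit node no longer appears to be in India."""
    country = IP_TO_COUNTRY.get(client_ip, "UNKNOWN")
    return not (country == "IN" and url in BLOCKED_URLS_FOR_INDIA)

assert is_accessible("https://example.com/offending-video", "49.205.0.1") is False
assert is_accessible("https://example.com/offending-video", "185.220.0.1") is True
```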

Internet intermediaries have contended against such a global takedown mechanism, as it poses a number of technical and legal difficulties for them. Firstly, standards for determining defamation vary across jurisdictions, so disabling access globally would breach the principle of international comity. Secondly, in order to disable access to content globally, intermediary platforms would have to monitor every upload on their platforms, which is technically difficult and legally impermissible.

The Delhi HC’s Judgment

Agreeing with the plaintiffs' submission, the Court went on to hold that online intermediary platforms can be ordered by a competent court in India to take down content globally, since the content is published on their global services. It observed that complete removal is needed because easy-to-use technology applications are widely available that help local users circumvent geo-blocking and render a takedown order useless. Therefore, as per the Court's observations, an absolute global removal is an absolute remedy.[1]

Further, the Court put forth the following directions, in brief, to support its order:

  • The Court broadened the interpretation of Shreya Singhal v. Union of India: As per the Court, Section 79 of the IT Act 2000 provides that in order to avail the safe-harbor immunity, “intermediaries have to take down and disable access to the offending material residing in or connected to a computer resource in India”. It interpreted the definition of ‘Computer Resource’ as given in the IT Act, such that the “Computer Resource” as per the judgment “encompasses within itself a computer network, which would include a maze or a network of computers. Such a computer network could be a global computer network”.[2]
  • Global take downs are technologically possible: The Court held that whenever any content violates the community standards of the internet intermediary platforms, such content is taken down globally by the platform on its own. Therefore, it observed that it is technologically possible for the platforms to take down content globally on the orders of the competent courts as well.
  • Application of IT Act in extra-territorial jurisdiction: In order to justify the global take down, the Court explained that, “a perusal of Section 75 of the Act shows that the IT Act does have extra territorial application to offences or contraventions committed outside India, so long as the computer system or network is located in India”.[3] Therefore, the Court held that as long as the content has been uploaded from the Computer Resource located in India, Indian courts will be competent to pass the global injunction/ take down orders.
  • Allowing a direct 'notice-and-takedown' mechanism for future uploads of the objectionable content: The Court held that the plaintiffs can approach the intermediaries directly if they find the questionable content published again on these online platforms in the future. However, the Court has provided intermediaries with a counter-notice option, by which they can refute claims of illegality and shift the onus of proof back onto the plaintiffs, who will then have to approach the courts for an appropriate remedy.

Observations: the Loopholes, Unsettled Jurisprudence and the Comment

The Loopholes

It is entirely understandable that the Court favours a global takedown order to make its injunctions against global services more effective. Unfortunately, in its broad evaluation of the legal feasibility of a global injunction and the technological capability of intermediaries to obey it, the Court failed to consider certain very significant arguments[4]:

  • Use of VPNs the other way around: The Court accepted the plaintiffs' argument that geo-blocking is circumvented through widely available, easy-to-use applications like VPNs. However, it did not consider circumvention in the other direction: a user can upload content through a VPN or other web proxy service and easily fake the IP address so that the content appears to have been uploaded from outside India, negating the Court's jurisdiction. A global takedown order therefore does not, even prima facie, appear to be the appropriate remedy.
  • In denial of the principle of international comity and the right to information: Defamation laws vary widely across jurisdictions. If global takedowns were mandated, platforms would be wary of falling foul of the law in other countries. For example, if Indian courts mandated the global takedown of content that is not at all objectionable under the laws of certain countries, the takedown order would contravene the right to information of the citizens of those countries. Not respecting the laws of another country amounts to a breach of the principle of international comity and raises conflict-of-laws problems.[5]
  • Without due consideration of the rights to free speech and privacy: The Court failed to appreciate the technicalities involved in operating global takedown orders: intermediary platforms would have to monitor each and every piece of content being uploaded in order to stop its dissemination globally. This imposes a risk of private censorship on the internet and affects users' rights to free speech and privacy. Constant and close monitoring has been held to be unwarranted by law in various precedents of Indian courts.[6]
  • Shifting away from the law established by the Manila Principles on Intermediary Liability and the Shreya Singhal case: The Court has allowed plaintiffs to approach the intermediary platforms directly if the objectionable content is re-uploaded in the future. This is a significant departure from the existing process under Section 79 of the IT Act, 2000 as established by the Supreme Court's landmark judgment in the Shreya Singhal case, which requires intermediaries to take down or disable access to content only upon receiving an order from either the government or a court. The same is considered global best practice under the Manila Principles on Intermediary Liability.
  • The question of extraterritorial application of the IT Act in the present case: As per Section 75 of the IT Act 2000, the Act applies extra-territorially to certain offences or contraventions committed outside India if they are committed using "a computer, computer system or computer network located in India, the contraventions as contemplated under the Act are provided for in Sections 43, 43A, 66A, 66B, 66, 66E and Section 66F." Defamation is not covered by any of these provisions.[7]

Heavy reliance on the unsettled jurisprudence

The Court has relied heavily on certain foreign judgments in reaching its conclusion. The problem is that the jurisprudence around geo-blocking and global injunctions is unsettled and still developing, with the Delhi HC's order only adding to the confusion.

The Court relied on Google Inc. v. Equustek Solutions Inc., itself living proof of the unsettled jurisprudence.[8] The Supreme Court of Canada ordered Google to de-index listings from its search results globally in order to protect a party's trade secrets. While the Supreme Court of Canada upheld the global injunction against Google, a US court sided with Google, ruling that the Canadian order "threatens free speech on the global internet".

The Court also relied on Eva Glawischnig-Piesczek v. Facebook Ireland Limited, in which the CJEU ordered Facebook and other platforms to remove questionable content and copies of it, and to block access to it, globally. While relying on that case, the Delhi HC did not consider the CJEU's decision in Google v. CNIL[9], which held that Google is not required to de-reference listings from its global service merely because the content has been declared illegal in an EU member state.

Comment

It is clear that the Delhi HC left a lot unconsidered before delivering the judgment, from the complexities of territorial jurisdiction to the differences in cross-jurisdictional laws. In the present case, the Court mainly failed to account for the varying nature of defamation law across jurisdictions: in the UK, the burden of proof is on the defendant to show that the content is not defamatory, while in the US a heavy onus of proof is placed on the plaintiff.

The Court also failed to consider certain important foreign judgments that have specifically highlighted these differences in law. In Google v. CNIL, the CJEU held that the 'right to be forgotten' (the main issue in that case) is subject to differing standards of application and interpretation around the world, and accordingly that it is enough for Google to block access to the questionable content within the EU domain only. Further, in Bachchan v. India Abroad Publications Inc.[10], the Supreme Court of New York County refused to enforce a defamation judgment awarded by the High Court of Justice in London, England, ruling that enforcement would threaten the free speech protections offered by the First Amendment to the US Constitution.

Unarguably, internet jurisdiction has always been a challenge for courts and governments. Courts have always trailed technology in this race, unable to assert absolute jurisdiction, which risks turning the internet into a proverbial 'wild west' with no single comprehensive applicable law. The fact that an injunction against an intermediary operates on a global scale does not necessarily make it invalid or aggressive. After all, limited denial of access in the local domain does not protect the underlying rights at stake; a global takedown seems the right method to ensure effectiveness. But all of this must be done while mediating the conflicting interests and recognizing the protection owed to certain forms of speech.

As Gautam Bhatia said in the context of Swami Ramdev v. Juggernaut Books last year, "Indian courts seem to increasingly view freedom of speech as a mere annoyance to be brushed aside when confronted with competing claims". If global takedown orders become mainstream, regressive regulation of freedom of speech and expression online will become the norm. Courts and governments, in their bid to win this 'hare and tortoise race', should not ignore the countervailing arguments relating to freedom of speech and the right to privacy. These rights should not be treated as outweighed by values like national integrity and security interests; rather, an effort should be made to strike a balance between the two.

The judgment is now under challenge by Facebook before a Division Bench, and the matter is listed for final hearing on January 31, 2020. The Court has an opportunity to set a precedent in this unsettled jurisprudence that takes due account of free speech and privacy rights on the internet, at the intersection of technology and laws such as defamation.

References:

[1] Para. 87, Swami Ramdev v. Facebook [CS (OS) 27/2019 – Delhi HC]

[2] Para. 78, Swami Ramdev v. Facebook [CS (OS) 27/2019 – Delhi HC]

[3] Para. 86, Swami Ramdev v. Facebook [CS (OS) 27/2019 – Delhi HC]

[4] Apoorva Mandhani, Why Baba Ramdev’s win against Facebook, Google in Delhi HC only adds to judicial confusion, The Print, https://theprint.in/india/governance/judiciary/why-baba-ramdevs-win-against-facebook-google-in-delhi-hc-only-adds-to-judicial-confusion/312403/.

[5] Balu Nair, Delhi HC Gives Expansive Interpretation to Section 79 of IT Act: Issues Global Blocking Order Against Intermediaries, SpicyIP, https://spicyip.com/2019/11/delhi-hc-gives-expansive-interpretation-to-section-79-of-it-act-issues-global-blocking-order.html.

[6] Delhi High Court Approves Take Down of Content Globally, SFLC, https://sflc.in/del-hc-orders-global-take-down-content.

[7] Para 16, Swami Ramdev v. Facebook [CS (OS) 27/2019 – Delhi HC]

[8] Google Inc. v. Equustek Solutions Inc., Cambridge Core, https://www.cambridge.org/core/journals/american-journal-of-international-law/article/google-inc-v-equustek-solutions-inc/E667668ED944EBE52233E17320478448/core-reader.

[9] Google v. CNIL, CJEU Case C-507/17.

[10] Bachchan v. India Abroad Publications Inc., 154 Misc 2d. 228, 585 N.Y.S.2d 661.

South Asia gearing up for Data Protection: Sri Lankan Framework on Data Protection Legislation

The Sri Lankan Ministry of Digital Infrastructure and Information Technology introduced the framework for the proposed Personal Data Protection Bill on June 12, 2019. ‘Data Protection Legislation’ is an important public policy consideration for the Sri Lankan government in the context of “digital transformation taking place in Sri Lanka with government agencies, Banks, Telco’s, ISPs and private sector collecting personal data via the Internet,” according to the official press release. It is also important as “the Right to Information Act (2016) is currently being implemented in Sri Lanka, pursuant to Article 14A of the Constitution, where the right to privacy is an exception”.

To draft the legislation, the Drafting Committee looked at international best practices, such as the EU General Data Protection Regulation as well as the laws enacted in other jurisdictions, “such as Australia, Singapore and the Indian Draft Legislation”.

The Framework has been introduced for stakeholder comments and will now be placed before an Independent Review Committee.

The objective of the Framework

As per the Preamble, the Framework aims to:

  1. Protect the personal information while ensuring the rights of natural persons with regard to the processing of such information
  2. Improve consumer confidence and ensure the growth of digital democracy and innovation and promote both the protection of personal data and its use in Sri Lanka while respecting domestic laws and regulations and international standards
  3. Enable the Government to regulate the processing of personal data and to ensure confidence in the privacy and security of online transactions and information networks and actively participate in an information-driven global economy
  4. Improve interoperability among privacy frameworks as well as strengthen cross-border co-operation among enforcement authorities and provide clear guidance and direction to entities located or operational in Sri Lanka on generic data protection issues and their impact.

What is ‘Personal Data’?

'Personal Data' means any information, whether true or not, relating to an identified or identifiable natural person, that is, the data subject.

‘Personal Data Breach’ means any act or omission that consequently results in accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to personal data of the data subject.

What are ‘Special Categories of Data’?

Any personal data that reveals “racial or ethnic origin, political opinions, religious or philosophical beliefs, financial data, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning natural person’s sex life or sexual orientation, personal data relating to offence, criminal proceedings and convictions, personal data relating to a child” and any other personal data that the Minister may determine upon the recommendation of the Data Protection Authority (DPA) as established  from time to time by Regulation in accordance with the proposed Framework.

What is the ‘Data Protection Authority’ (DPA)?

Part VII of the Framework provides for the establishment of the Data Protection Authority (the “Authority”)  of Sri Lanka. It will be the apex body for all matters related to data protection and for implementation of the proposed Act. It will be responsible for maintaining the Register of controllers, and giving directions, issuing guidelines and undertaking training for controllers.

Following are certain significant powers vested with the Authority, inter alia:

  1. To enforce its orders or determinations made under this Act against a controller or processor through prosecution;
  2. The Data Protection Authority has the power and the duty to prosecute offences under this Act;
  3. The Authority may carry out periodic audits of any processing activity carried out by a controller or processor to ensure compliance with this Act.

“For the purpose of investigating into a complaint received by the Authority, holding an inquiry in relation to an appeal or making an order under section 38:

  1. require any person to appear before it;
  2. examine such person under oath or affirmation and require such person where necessary to produce any information related to processing;
  3. to inspect any information strictly related to the processing in question that is held or controlled by a controller or processor by an officer authorized on that behalf by the Authority. In any event, such officer shall be a senior staff member of the Authority having relevant expertise to conduct such inspection;
  4. make a determination in accordance with the provisions of this act with due consideration of the information available to it.”

Application of the proposed legislation

Part I says that the proposed legislation applies to the processing of data that will take place:

  1. wholly or partly within Sri Lanka; or
  2. by a controller or processor which is resident, incorporated or otherwise subject to Sri Lankan law, or a controller or processor which is offering "goods/services to data subjects in Sri Lanka", or "who monitors the behaviour of data subjects in Sri Lanka including profiling in so far as such behaviour takes place in Sri Lanka".

However, the provisions will not apply to the processing of data that is for “purely personal or household purposes” or when the data is anonymised. Also, it will not apply to the processing of data which is done by any government department, provincial council or any other regulatory body for lawful purposes.

Data Protection Principles

Part II of the proposed legislation provides that processing and controlling of data will be lawful only when it is done in accordance with the following principles:

  1. Personal data shall be processed lawfully, fairly and in a transparent manner;
  2. Personal data shall be collected only for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with the said purposes;
  3. Processing shall be adequate, relevant, necessary, proportionate to the purposes for which the personal data is processed;
  4. The controller shall ensure that personal data that is processed is accurate and, where necessary, kept up to date with every reasonable step being taken to ensure that any inaccurate personal data are rectified or erased without delay;
  5. Personal data may be kept in a form which permits the identification of data subjects for such period as may be necessary for the purposes for which the personal data is processed; and
  6. Personal data shall be processed in a manner that ensures appropriate security of personal data using appropriate technical or organisational measures.

Rights of Data Subjects

Part III lays out the following rights of Data Subject, inter alia:

  1. Data Subject shall have the right to withdraw its consent for the processing of its personal data. Data Subject can request the controller for the withdrawal of consent in writing.
  2. The Framework entitles Data Subjects to obtain access to their personal data and information at any time they request. Data subjects shall also have the right to request for rectification of any inaccurate personal data that has been processed.
  3. The Data Subject can also request from the controller for erasure/deletion of the personal data which has been unlawfully processed, or processed pursuant to a legal obligation, or processed when such processing is no longer necessary or processed when such processing is no longer legitimate.
  4. The Framework enables Data Subjects to claim these rights by directly approaching the controller of the personal data and, where the controller refuses the request, by appealing to the Authority.

Scope of Controllers and Processors of the Data

Registration requirements

Part IV of the Framework obligates controllers and processors to register themselves with the Authority. They have to apply for registration in the prescribed form, within the prescribed time period, providing complete details of the processing of personal data and the safeguards adopted to protect it. The Authority shall keep and maintain a Register of registered controllers in such form and manner as may be prescribed.

The Framework also requires controllers and processors to designate a Data Protection Officer; a holding company may appoint a single data protection officer for all its subsidiaries. The Officer will advise on applicable data processing requirements and data protection impact assessments, ensure compliance with the applicable law, and cooperate with the Authority on behalf of controllers and processors.

Duties and obligations

The Framework imposes certain duties and obligations on the controller such that, inter alia:

  1. The controller shall implement appropriate technical and organisational measures such as encryption, pseudonymisation, anonymisation, data minimisation techniques, privacy-by-design techniques, adopt privacy enhancing technologies as applicable, to ensure and to be able to demonstrate that processing is done in accordance with the provisions of this Act;
  2. Conduct privacy impact assessments when required by this Act and in accordance with the provisions of this Act;
  3. Implement internal oversight mechanisms and integrate such mechanisms into its governance structure;

“Where processing is to be carried out by a processor on behalf of a controller:

  1. the controller shall use only processors providing sufficient guarantees to implement appropriate technical and organisational measures in such a manner that processing will meet the requirements of this Act and ensure the protection of the rights of the data subject as guaranteed by this Act;
  2. Any processing by a processor on behalf of the controller shall be governed by a contract or any other written law that is binding on the processor that sets out the subject-matter and duration of the processing, the nature and purpose of the processing, the type of personal data and categories of data subjects and the obligations and rights of the controller.”

The Framework further provides the duties and obligations of the processor, which may process personal data only in accordance with documented instructions from the controller.

The Framework obligates the processor, inter alia:

  1. to ensure that its personnel are bound by contractual obligations of confidentiality and secrecy (personnel means any employee, consultant, agent, affiliate or any person who is contracted by the processor to process personal data);
  2. to assist the controller, through appropriate technical and organisational measures, in fulfilling the controller's obligation to respond to requests for exercising the data subject's rights laid down in this Act;
  3. to assist the controller in ensuring compliance with the obligations under this Act; and
  4. to allow for and contribute to audits, including inspections, upon the controller's request.

The processor shall remain liable to the controller for performance at all times, even when it appoints a 'sub-processor'.

Data breach notifications

In the event of a personal data breach, the controller shall inform the Authority without undue delay after becoming aware of the breach, within the prescribed time and in such manner and form as the Authority prescribes.

Data protection impact assessments

The Framework makes it mandatory for the controller to carry out a privacy impact assessment whenever a type of processing is likely to result in a high risk to the rights of the Data Subject. The controller shall seek the advice of the data protection officer, where one has been designated, when carrying out a data protection impact assessment. Such an impact assessment is mandatory in cases where there is:

  1. a systematic and extensive evaluation of personal data such as profiling;
  2. processing on a large scale of special categories of data;
  3. monitoring of publicly accessible areas or telecommunication networks or any other processing activity as prescribed under the proposed Act.

The Authority will provide guidelines through the official gazette regarding the form and manner in which privacy impact assessments are to be carried out by the controller.
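One way a controller might operationalise these triggers is sketched below; the boolean flags are an assumed modelling of the Framework's conditions, not text from the proposed Act.

```python
from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    systematic_extensive_evaluation: bool   # e.g. profiling of personal data
    large_scale_special_categories: bool    # e.g. health or biometric data at scale
    monitors_public_areas_or_networks: bool # publicly accessible areas or telecom networks

def requires_dpia(activity: ProcessingActivity) -> bool:
    """An impact assessment is mandatory if any of the listed triggers applies."""
    return (activity.systematic_extensive_evaluation
            or activity.large_scale_special_categories
            or activity.monitors_public_areas_or_networks)
```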

Certain exceptions

Part V provides certain exceptions to the protection of personal data as provided by law for “the protection of national security, defence, public safety, economic and financial wellbeing [sic] of Sri Lanka, the impartiality and independence of the judiciary or the prevention, investigation and prosecution of criminal offences and the execution of criminal penalties, and other essential objectives of general public interest”, and for the protection of “rights and fundamental freedom” of Data Subject and others, “notably freedom of expression and right to information”.

Cross-border flow of personal data

Part VI lays out the rules for the cross-border flow of personal data:

  1. A controller and processor can only process the data at a location outside Sri Lanka if the location has been prescribed by the Minister as a place which ensures an adequate level of protection for personal data in accordance with the provisions of this proposed Act.
  2. Otherwise, the controller and processor have to provide safeguards and ensure the effective remedies for Data Subjects in order to process the data at a location outside Sri Lanka.
  3. DPA will by rules prescribe the conditions under which a controller or processor has to take the prior authorization of the Authority in order to process data outside Sri Lanka.

Use of personal data for direct marketing

Part VIII defines how personal data may be used for direct marketing.


‘Direct marketing communications’ means any form of advertising, directly or indirectly, whether written or oral, sent to one or more identified or identifiable end-users via electronic or digital communication or telecommunication services or any other means including the use of automated calling and communication systems with or without human interaction, electronic mail, SMS, etc.


Any natural or legal person who wants to use electronic or digital communication or other services to send direct marketing communications to end-users of such services has to obtain the "unambiguous consent" of such end-users. In addition, with each such direct communication the end-user must be given the right to object; if an end-user exercises that right, the natural or legal person has to comply with the request.
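The rule can be pictured with a minimal sketch, assuming a simple in-memory record of consents and objections; the Framework itself does not prescribe any particular implementation.

```python
# Illustrative in-memory records of consents and objections.
consented_users: set[str] = set()   # end-users who gave unambiguous consent
objected_users: set[str] = set()    # end-users who exercised the right to object

def may_send_marketing(end_user_id: str) -> bool:
    """A direct marketing message may be sent only with prior consent and no objection."""
    return end_user_id in consented_users and end_user_id not in objected_users

def record_objection(end_user_id: str) -> None:
    """Once an end-user objects, further direct marketing to them must stop."""
    objected_users.add(end_user_id)
```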


Imposition of penalty

In Part IX, the Framework prescribes the penalty that may be imposed on a person who fails to comply with the proposed Act, taking into account the nature and gravity of the non-compliance.

It provides that the penalty will not exceed 2% of the person's total worldwide turnover or 25 million rupees, whichever is higher. If a person fails to conform to the provisions of the proposed Act even after being penalized once, he/she will "be liable to the payment of an additional penalty in a sum consisting of double the amount imposed as a penalty on the first occasion".

Such imposition of a penalty will not preclude a supervisory authority from taking regulatory or disciplinary measures (cancellation of license, suspension, etc.) against such a controller or processor.
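To illustrate the arithmetic, the sketch below encodes one plausible reading of the ceiling and the doubling rule described above; it is an illustration of the Framework's text, not official guidance.

```python
RUPEES_25_MILLION = 25_000_000

def penalty_ceiling(worldwide_turnover: float) -> float:
    """First-occasion ceiling: the higher of 2% of worldwide turnover and Rs 25 million."""
    return max(0.02 * worldwide_turnover, RUPEES_25_MILLION)

def repeat_penalty(first_penalty: float) -> float:
    """A subsequent non-compliance attracts an additional penalty of double the
    amount imposed on the first occasion."""
    return 2 * first_penalty

# Example: with Rs 5 billion worldwide turnover, the first-occasion ceiling is
# Rs 100 million (2% of turnover, which exceeds Rs 25 million); a repeat offence
# after a Rs 100 million penalty attracts an additional Rs 200 million.
assert penalty_ceiling(5_000_000_000) == 100_000_000
assert repeat_penalty(100_000_000) == 200_000_000
```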

Facebook’s Clampdown on the business of generating fake likes and followers: ‘Inauthentic Behavior’ on Instagram

Facebook has announced, in a blog post titled "Preventing Inauthentic Behavior on Instagram", that Facebook and Instagram have sued a company and three individuals based in New Zealand for making a business of selling fake likes, views and followers on Instagram. It has filed a lawsuit in US federal court alleging that "the company and individuals used different companies and websites to sell fake engagement services to Instagram users".

Facebook said it issued warnings to the company and suspended the company's associated accounts for violating Facebook's Terms of Use, but the activities persisted. By filing the lawsuit, Facebook wants to send the message that fraudulent activity is not tolerated and that it will protect the integrity of its platform.

The lawsuit

The lawsuit asks the Court to prevent the defendant company from “engaging and profiting in the sale of fake likes, views and followers on Instagram”. It also seeks to prevent a “violation of its Terms of Use and Community Guidelines”. Further, it aims to prevent a “violation of the Computer Fraud and Abuse Act and other California laws for distributing fake likes on Instagram in spite of Facebook suspending their accounts and revoking access”.

The lawsuit details that a company called Social Media Series operated various websites and services to generate fake likes and followers for Instagram users who wanted to inflate their follower counts. Customers paid between $10 and $99 per week, depending on the number of likes they wanted to purchase for their accounts, which were then generated within seconds of a new photo being posted.

The lawsuit says that “through their business, Defendants [Social Media Series Limited and its directors] interfered and continue to interfere with Instagram’s service, create an inauthentic experience for Instagram users, and attempt to fraudulently influence Instagram users for their own enrichment”.

The lawsuit further claims that the company and its directors have "unjustly enriched themselves at the expense of Facebook and Instagram in the amount of approximately $9,430,000" since July 2018.

Inauthentic experience

Facebook said in the blog post that "Inauthentic activity has no place on our platform". The social media giant claims it "devote[s] significant resources" to detecting and stopping inauthentic behavior, including "blocking the creation and use of fake accounts, and using machine learning technology to proactively find and remove inauthentic activity from Instagram".

It further said that "today's lawsuit is one more step in our ongoing efforts to protect people and prevent inauthentic behavior on Facebook and Instagram". Facebook is seeking unspecified damages for the manipulation of Instagram's platform.

Clamping down on “Inauthentic Behavior”

Facebook now has multiple lawsuits in the works against individuals or companies that sell fake engagement on its social media platforms. It recently removed or unpublished over 1,000 Facebook pages and Instagram accounts from India and Pakistan for 'inauthentic behavior'. In March 2019, it filed a lawsuit against several companies and individuals based in China, claiming they were engaged in selling fake accounts, likes and followers on Facebook and Instagram. In November 2018, Instagram warned users to avoid inauthentic follows and likes generated by third-party apps and services, as reported by Cult of Mac.

A Step Ahead: Analysing Indian Arbitration Law in the Context of International Technology Disputes

[This article was first published on the Mapping ADR Blog, authored by Aryan Babele; you can read it at http://mappingadr.in/a-step-ahead-analysing-indian-arbitration-law-in-the-context-of-international-technology-disputes/]

Technology-based enterprises are becoming the leaders of the global market in every respect. No industry has experienced such explosive growth as technology-based enterprise, especially in the context of the globalization of the economy and the complementary expansion in international trade in recent years. The technology industry is inherently international: its international supply and distribution networks enable manufacturers to provide technology products and services to consumers at a global scale. Biotechnology, for instance, is in high demand globally because of its influence in multiple spheres (medical, environmental, industrial, etc.), facilitated by processes such as manufacturing, licensing and distribution. The global economy has significantly boosted demand for flexible dispute resolution, including international arbitration, as a means of resolving technology business disputes. This is one of the main reasons the technology industry is progressively adopting arbitration as a dispute resolution method for international transactions, where its base of customers, suppliers and resources extends across multiple jurisdictions. As competition among nations to become the preferred seat of technology arbitration grows stiffer, it is worth noting why arbitration is better suited than litigation to technology disputes. Further, considering India's huge information technology industry, it is important to analyse how prepared Indian arbitration law is to handle international technology arbitrations.

In the technology industry, contracts between parties most often aim to provide services such as acquiring, selling or financing a high-tech business or project; manufacturing, distributing and/or delivering; licensing patents or other intellectual property rights (IPRs); and purchasing insurance policies covering risks associated with the production or operation of high-tech assets.[1] The difficulty with litigating a technology dispute arising out of such a contract is that it involves multi-faceted issues concerning different rights: acquisition, patents, know-how, trade secrets and so on. Enforcing the liabilities created by these rights requires certain assurances from national law regarding neutrality, speedy and flexible procedures, respect for the intentions and needs of the parties, confidentiality protection, decisions by experts, and so forth.

DISADVANTAGES OF LITIGATION IN INTERNATIONAL TECHNOLOGY DISPUTES

In litigation, the major disadvantage to the parties in a technology dispute is that the decision is made by a non-expert who may be unable to appreciate the technicalities of scientific testimony and may have little or no knowledge of the relevant legislation and regulations. Players in fast-paced technology markets cannot afford to have progress stalled by lengthy and expensive litigation, unexpected adjournments and appeals to higher courts.[2] The public nature of judicial proceedings also makes preserving confidentiality, which is extremely significant for technology-based enterprises, a problematic task. Moreover, legal actions over the same dispute may be filed in multiple jurisdictions simultaneously, leading to uncertain and risky results; even litigators find it uncomfortable to litigate abroad amid unfamiliar foreign laws, regulations, customs and languages. Given these risks, it is not reasonable for technology-based enterprises to always opt for litigation to resolve international business disputes.

ARBITRATION: A SUITABLE DISPUTE RESOLUTION MECHANISM

As litigation is not always the best resolution method for disputes involving technology-based enterprises, there is a need to explore alternative dispute resolution mechanisms. In an international dispute, the greatest concern for both parties is that the substantive and procedural laws of a particular jurisdiction may favour one party over the other. International arbitration gives the parties the autonomy to decide the law and the forum that will govern the procedural and substantive aspects of the dispute. Since the pace of settling a dispute is a major consideration in commercial matters, especially in the technology business, the comparatively cheaper and quicker nature of arbitration proceedings makes it an attractive way to resolve disputes. The absence of an appeal on the merits is another reason arbitration is swifter than litigation.[3] Even where the courts are approached in relation to the final award, that too is a streamlined process with time limits on the decision.[4] Arbitration, in contrast to litigation, assures confidentiality pursuant to the agreement, as it is a private procedure. Further, the availability of uniform rules for international commercial arbitration better meets the requirements of parties to international technology disputes: the success of the New York Convention, 1958, ratified by some 145 nations, boosts the parties' confidence in international arbitration as a dispute resolution mechanism. It is therefore amply clear that all the usual advantages of arbitration in commercial disputes apply to technology disputes, making it a more efficient and effective alternative to litigation.

ANALYSING THE ARBITRATION AND CONCILIATION ACT, 1996 IN THE CONTEXT OF INTERNATIONAL TECHNOLOGY DISPUTES

After serious bureaucratic deliberations and political and intellectual debate, a comprehensive overhaul of the Arbitration Act of 1940 resulted in the enactment of the Arbitration and Conciliation Act, 1996. With the 2015 amendments, which took into account the judgments of the Supreme Court of India in Bhatia International v. Bulk Trading[5] and BALCO v. Kaiser[6], the Act consolidated the domestic as well as the international law of arbitration, making it better suited to international commercial disputes than ever before. It also reflects the standards of the UNCITRAL Model Law on International Commercial Arbitration, promoting neutral and independent arbitral proceedings in India. Almost every provision of the 1996 Act takes the intent of the parties into consideration in one form or another, and it stresses the significance of the arbitration agreement as the instrument through which parties choose their expert decision-makers and define their powers as arbitrators.[7]

Privacy is a major concern in technology disputes, and constant interference by national courts in arbitral proceedings subjects them to broader public scrutiny, fragmenting not only confidentiality but also the flexibility, procedural predictability and informality of the proceedings. The 1996 Act provides a very limited number of circumstances in which national courts can intervene, allowing arbitrations to follow their natural course. The Act treats the principle of 'party autonomy' as the essence of arbitration: it empowers the arbitral tribunal to rule on its own jurisdiction and to determine the rules of the proceedings,[8] ensuring the proper and expeditious conduct of the arbitration while preserving party autonomy. It further excludes intervention by national courts by allowing arbitral proceedings to continue and final awards to be made, thereby eliminating prospects for delay.[9] To keep parties engaged with the arbitral proceedings, the tribunal can impose further procedural duties on each party, with consequences such as security for costs and dismissal of the claim, in order to avoid inordinate delay.[10]

In technology disputes, one of the most sought-after remedies is interim relief, as most international technology disputes arise from contracts or licence agreements; in a deadlock, the licensor's aim is to halt the exploitation of its technology and trade secrets. The Act empowers the tribunal to order such relief on a provisional basis, but such orders remain subject to the scrutiny of national courts.[11] It also empowers the tribunal to make an interim arbitral award at any time during the proceedings.[12] For technology arbitrations, more specific and clearer provisions are required: the Act should confer this power on the arbitral tribunal through more substantial provisions elaborating the situations and conditions in which injunctive relief can be granted.

Further, the Act also requires broader provisions on protecting the privacy and confidentiality of the parties' details and of the subject matter of the arbitral proceedings. For technology disputes, a provision offering blanket cover for confidentiality is a significant requirement. Particularly in relation to international technology disputes, the 1996 Act therefore needs to give parties greater assurance and discretion in terms of the choice of arbitral institution, the choice of expert arbitrators (especially those with expertise in technology, media and telecommunications law), and the treatment of IP infringement disputes.[13]

CONCLUSION

Technology-based enterprises drive major transformations of the world by providing solutions to some of the greatest conventional problems. As more technologies are researched and developed, they will give rise to more commercial contracts and, treading the same path, a separate pile-up of technology disputes will soon form in national courts. Hence, it is the need of the hour for parties to consider dispute resolution mechanisms that are more expeditious and flexible.

Technology companies themselves are starting to anticipate this and are increasingly choosing arbitration to resolve international disputes, as the SVAMC has noted.[14] Western nations and developed nations of the Pacific Rim have also understood the significance of resolving technology disputes more speedily and have already taken specific steps to address these concerns.

For India, there is stiff competition ahead to establish itself as a centre for arbitration of international technology disputes. There is no doubt that the Arbitration and Conciliation Act, 1996 provides an attractive framework for the resolution of international commercial disputes; but for technology disputes, there is a need to take a step ahead and to incorporate into the Act broader provisions on interim relief, more flexible confidentiality clauses, e-case management, efficient e-disclosure review and the like. Given India's established arbitration law and booming IT industry, there is a great opportunity for India to play a leading role in resolving international technology disputes.

[1] Raymond G. Bender, Arbitration- An Ideal Way to Resolve High-Tech Industry Disputes, Dispute Resolution Journal Vol. 65 (4), https://svamc.org/wp-content/uploads/2015/08/Arbitration-An-Ideal-Way-to-Resolve-High-Tech-Industry-Disputes.pdf.

[2] Sandra J. Franklin, “Arbitrating Technology Cases—Why Arbitration May Be More Effective than Litigation When Dealing with Technology Issues,” Mich. Bar J. 31, 32 (July 2001).

[3] Supra note 1.

[4] § 34, The Arbitration and Conciliation Act, 1996, Act No. 26 of 1996, Acts of Parliament (India), https://indiankanoon.org/doc/536284/.

[5] (2002) 4 SCC 105. https://indiankanoon.org/doc/110552/

[6] (2012) 9 SCC 552, https://indiankanoon.org/doc/173015163/.

[7] §§ 7, 11, supra note 4.

[8] §§ 16, 19, id.

[9] § 16(5), id.

[10] §§ 9(ii)(b), 4(b), id.

[11] §17, id.

[12] § 31(6), id.

[13] Norton Rose Fulbright, Arbitration in technology disputes, International Law Office, (Nov. 9, 2017), https://www.internationallawoffice.com/Newsletters/Arbitration-ADR/International/Norton-Rose-Fulbright-US-LLP/Arbitration-in-technology-disputes

[14] Gary L. Benton, Technology Dispute Resolution Survey Highlights US and International Arbitration Perceptions, Misperceptions and Opportunities, Kluwer-Arbitration Blog, http://arbitrationblog.kluwerarbitration.com/2017/10/28/technology-dispute-resolution-survey-highlights-us-international-arbitration-perceptions-misperceptions-opportunities/

The Road to GDPR: Historical Context behind the European Data-Protection Laws

Over the last few months, internet users have been receiving hundreds of emails and pop-ups from different websites about updates to their privacy policies. This is a formal exercise that most Europe-based firms and service providers are completing in order to comply with the much-debated General Data Protection Regulation (GDPR). The European Union's GDPR came into force on 25th May 2018, providing significant upgrades to the EU data protection regulatory framework. It is a regulatory enhancement over EU Directive 95/46/EC on Data Protection, adopted more than 20 years earlier, which centred on the protection of individuals' personal data and the processing and free movement of such data in the era of the Internet's early users. The Directive laid down the limitations and procedures that internet service providers had to follow before processing users' personal information. Twenty years on, the Internet is ubiquitous in our lives and its applications surround us everywhere; the GDPR's requirements are therefore going to have a massive impact on the data-usage practices of both consumers and companies.

[Figure: timeline of GDPR history]

The GDPR is a much-talked-about topic these days, and there is a lot of confusion about what is and is not covered by it. The debate over the GDPR became more heated as a string of small and medium enterprises withdrew from the EU market or shut down operations entirely to avoid the hefty costs of compliance; such events themselves indicate how strict a law the GDPR is. It is a far-reaching and multifaceted regulation that requires companies to give consumers significant control over their personal data, including new rights for the individual (the right to data portability, the right to be forgotten, data localisation, etc.). Another stringent check on companies is the much-debated introduction of fines of up to €20 million or 4 per cent of a company's global turnover, whichever is higher, for breaches of data privacy. This unarguably makes the EU a regulatory superpower, leading the pack on stricter data protection rules. Why is the EU so adamant on regulations strict enough to risk breaking the global internet into regional or national chunks? The seriousness of the penalties reflects a European approach to privacy that can be traced back, in large part, to the history of its members' experiences with personal data being used for profoundly wrong purposes. To understand the GDPR and the European approach to data protection clearly, it is important to explore Europe's dark past in relation to personal data.

The roots of this very strict approach can be traced back to the Europe of the World War II era, during which the Nazis in Germany systematically abused private data and personal information to build profiles of citizens and identify Jews and other minority groups. Under the Nazi regime, the state's control of the market brought with it control of information technology. Access to such data opened the door to census information indicating residents' nationalities, native languages, religion and profession. The punch cards used to record this information were fed into early data processors known as Hollerith machines, allegedly manufactured by IBM's German subsidiary of the time, Deutsche Hollerith Maschinen GmbH (Dehomag), as recounted in the book IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America's Most Powerful Corporation. The use of census data to create a database of personal profiles on which broad discriminatory policies could be imposed is a disturbing chapter in the dark past of the free movement of data.

The exploitation of private data did not end in Germany with the close of World War II; it continued in the East German state, first to keep track of remaining pro-Nazi elements and later, in the Cold War era, of spies from West Germany. This was state mass surveillance on an unprecedented scale, carried out through the screening of private communications, periodic searches of houses and the like. The state kept every kind of personal detail in its databases, from people's friendships to their sexual habits; the Stasi, the East German secret police, became notorious for these practices. As the Stasi extended its surveillance across the border, West Germany responded: in 1970 the West German state of Hesse approved what is considered the country's first modern data privacy law, covering public sector data. This was followed by the 1977 Federal Data Protection Act, designed to protect residents' data "against abuse in their storage, transmission, modification, and deletion." Western Europe's push on privacy-related matters culminated in the Council of Europe's Data Protection Convention (Treaty 108), which rendered the right to privacy a legal imperative.

Such concerns over the exploitation of census data led to a landmark judgment of the German Federal Constitutional Court holding that "self-determination over personal data" is a fundamental right; this later became a cornerstone of the EU's present view. As European countries debated the importance of citizens' personal data, Ireland introduced its first data protection legislation, the Data Protection Act of 1988, and many Commonwealth countries adopted comprehensive legislation of their own. The end of the Cold War coincided with a rise in data transfers throughout Europe in the 1990s, and the market migrating across the European continent became a threat to the personal data of citizens of individual European states. In order to establish a single market, the EU therefore adopted the 1995 Data Protection Directive, and cautious attitudes about privacy became a European norm. The Directive reflected technological advances and introduced new terms including processing, sensitive personal data and consent, among others.

Building on the 1995 Directive, the EU adopted the Directive on Privacy and Electronic Communications in 2002. In 2006 it adopted the Directive on the retention of data generated or processed in connection with publicly available electronic communications services or public communications networks, although the Court of Justice declared it invalid in 2014 for violating fundamental rights. By 2009, the EU had introduced the Electronic Communications Regulations, responding to the fact that email addresses and mobile numbers had become prime currency in marketing and sales campaigns. Perhaps most famously, in 2014 Europe's top court, the Court of Justice of the European Union, affirmed the so-called right to be forgotten and ruled that Google must abide by user requests to take down "data that appear to be inadequate, irrelevant or no longer relevant"; since then, Google has received 655,000 requests to remove about 2.5 million links and has complied with 43.3% of those requests. (Google Spain SL, Google Inc. v Agencia Española de Protección de Datos, Mario Costeja González, ECLI:EU:C:2014:317)

Against such a complex historical backdrop, European data protection legislation is intuitively more appealing and faces less resistance. Europe has always been the most active regime in enacting privacy protections that apply across all sectors of the economy; within this legacy, the GDPR is a significant upgrade to the 1995 law. In the light of the Cambridge Analytica scandal involving Facebook data and the Equifax hack, the upgrade is seen as a step that will reinforce consumer confidence by assuring the protection of personal data. Other instruments will need updating in alignment with the GDPR, such as the ePrivacy Directive and Regulation 45/2001, which applies to EU institutions when they process personal data. Member states may provide specific rules or derogations from the GDPR where freedom of expression and information is concerned, or in the context of employment law or the preservation of scientific or historical research.

Balancing Regulation and Innovation: GDPR and AI

The Artificial Intelligence (AI) sector promises an altogether new generation of technological advancement, highly disruptive and productive for Industry 4.0. AI is a constellation of technologies performing different cognitive functions, from data analysis to language learning, that help a machine make sense of thoughts, experiences and senses. The core function of AI is to analyse data and provide responses based on the intelligence gathered; in essence, AI offers a sui generis ability to analyse big data applications in their various dimensions. AI, then, is mostly about computer-generated behaviour that would be considered intelligent in human beings. The concept of AI has existed for some time, and its contemporary rise owes much to rapidly increasing computational power in industry (a phenomenon associated with Moore's law),[i] to the point where the AI market is projected to surpass $100 billion by 2025.[ii] AI is significant because it will transform how humans interact with technology, yielding overall societal advantages such as inventiveness, innovation and confidence.

For all the advances AI will bring to industry, it also raises serious concerns for regulators across jurisdictions. One of the major concerns with the application of AI is its appetite for large amounts of data and hence its impact on data privacy, which makes regulators hesitant to allow AI start-ups to undertake large-scale activities built on AI technology. AI start-ups are about to hit a major impediment now that the European Union's General Data Protection Regulation (GDPR) is in effect. The GDPR, adopted in April 2016, reflects the EU's intention to build a strengthened, integrated and unified data privacy regime within the Union. It aims primarily to give EU citizens more control over their personal data and its protection, and it provides a framework in which individuals are free to ask how companies or institutions are processing and storing their personal data. The strict accountability to consumers demanded by the GDPR makes data collection more difficult, affecting AI start-ups that depend entirely on varieties of personal data for their machine-learning initiatives.

The specific limits that the GDPR places on AI start-ups and services can be understood as a two-fold impact. First, where the processing of data produces direct legal effects for the customer, as in credit applications, e-recruiting or workplace monitoring, the GDPR sharply limits the usefulness of AI for these purposes: Article 22 and Recital 71[iii] require safeguards such as explicit consent before an individual is subjected to a decision based solely on automated processing, which slows the functioning of the market. Second, the algorithms that AI developers deploy evolve on their own and can later become opaque even to their creators, and this combination of models and data becomes very complex to regulate.[iv]

The way out for AI start-ups appears to lie in organisational procedures that standardise the obtaining of consent and the governance of data within a well-structured data management framework. To comply with the GDPR while processing large amounts of data, AI developers should adopt a fixed policy of offering consumers an automated appeal: if a consumer is denied a service by an AI application, the developer should give that consumer the chance to learn the reason and to contest the decision. It is worth remembering that humans created, modified and implemented AI technology, and humans can also make it compliant and moderate it according to the reasonable considerations of regulators. The GDPR is not an evil for AI applications; it is a regulatory initiative alongside which, if AI technology develops, it will earn greater confidence from potential consumers.
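As a rough illustration of the compliance pattern described above, here is a minimal sketch in Python of a consent gate plus an "explain and appeal" path around an automated decision. Everything in it is hypothetical: the class names, the consent purpose string, the toy credit rule and the appeal contact are invented for illustration, and this shows one possible organisational pattern rather than a definitive statement of what the GDPR requires.

```python
# A minimal, hypothetical sketch: consent gate + "explain and appeal" path around
# an automated decision. Names and thresholds are invented for illustration only;
# this is one possible pattern, not a definitive GDPR compliance implementation.
from dataclasses import dataclass, field

@dataclass
class DataSubject:
    user_id: str
    consented_purposes: set = field(default_factory=set)  # e.g. {"credit_scoring"}

@dataclass
class Decision:
    approved: bool
    reasons: list                                   # human-readable reasons, shown on appeal
    appeal_contact: str = "appeals@example.com"     # hypothetical route to human review

def automated_credit_decision(subject: DataSubject, income: float, debts: float) -> Decision:
    # 1. Consent gate: do not run a solely automated decision without a recorded lawful basis.
    if "credit_scoring" not in subject.consented_purposes:
        raise PermissionError("No consent recorded for automated credit scoring")

    # 2. A deliberately simple, interpretable rule so reasons can be returned to the data subject.
    reasons = []
    ratio = debts / max(income, 1.0)
    if ratio > 0.5:
        reasons.append(f"Debt-to-income ratio {ratio:.2f} exceeds 0.50")
    approved = not reasons
    if approved:
        reasons.append("Debt-to-income ratio within accepted range")
    return Decision(approved=approved, reasons=reasons)

if __name__ == "__main__":
    alice = DataSubject("alice", consented_purposes={"credit_scoring"})
    decision = automated_credit_decision(alice, income=40_000, debts=30_000)
    # A denial here exercises the explanation path: the reasons and appeal contact
    # are what would be surfaced to the consumer for the automated appeal.
    print(decision.approved, decision.reasons, decision.appeal_contact)
```

In a real deployment, the reasons returned on denial would feed the automated appeal and the route to human review that the paragraph above envisages; the consent record would likewise come from a proper consent management system rather than an in-memory set.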

[i] ICO, Big Data, Artificial Intelligence, Machine Learning and Data Protection, Information Commissioner’s Office, https://ico.org.uk/media/for-organisations/documents/2013559/big-data-ai-ml-and-data-protection.pdf.

[ii] Todd Wright and Mary Beth, The GDPR: An Artificial Intelligence Killer?, Datanami, https://www.datanami.com/2018/02/27/gdpr-artificial-intelligence-killer/.

[iii] David Roe, Understanding GDPR and Its impact on the Development of AI, CMS wire, https://www.cmswire.com/information-management/understanding-gdpr-and-its-impact-on-the-development-of-ai/.

[iv] David Meyer, AI Has a Big Privacy Problem and Europe’s New Data Protection Law Is About to Expose It, Fortune, http://fortune.com/2018/05/25/ai-machine-learning-privacy-gdpr/.

Understanding the ‘Technology of Regulation’: Regulating the Scientific Advancements

Regulations are most often seen as adversaries of technological change. Technology's role is to stimulate the growth of enterprises, markets and industries, while the periodic regulations issued by governments represent the limits imposed on that growth. This has been the general conception of technology regulation since the 1970s, when the debate began over controlling nation-states' pursuit of nuclear energy, supersonic transport and food additives. The debate continues today, as fears of technologies such as the dark web and genetically modified foods prompt calls for regulation as a precautionary measure. And to an extent, the conflict is unavoidable.

The dynamics induced by the technology revolution are credited with half or more of productivity growth. As Schumpeter pointed out, the process of 'creative destruction' by entrepreneurs who devise new ways of producing goods and services is potentially a far more potent source of progress than short-term price competition. However, regulation can retard all three of Schumpeter's stages of technological change: invention, innovation and diffusion.

Not everything negative in this story is about regulation. When anxiety mounts over driverless cars, artificial intelligence and social media, regulation is seen as the only way to ease the uncertainties these technological changes will bring into people's lives. These are the views not only of legislators, but also of the people driving these technologies and the people affected by them.

Is there a way to balance regulation and technology? The answer seems to lie in accepting a change in the technology of regulation itself. Regulations are still imposed in traditional ways, as though they were of one type and took effect in only one way. But there is more to explore here: just as there are many different types of technology, there are many different types of regulation. Different regulatory instruments, such as technical requirements, performance standards, taxes, allowances and information disclosure, can have very different effects on technological change and other important consequences.

One of the main reasons the present technology of regulation is not delivering the desired results is that state regulators do not dedicate the time, energy or funding to regulation in the way that technology itself is developed. The key to bringing the same creativity and inspiration into regulation, so that an incentive-based approach is followed, is to allow private regulators to build the regulatory systems of the digital age.

The drivers of this shift are often the regulated companies themselves, looking to define a reasonably reliable playing field on which they and their competitors meet. Private actors already regulate to a certain extent through their autonomy over the terms and conditions of the 'agreement', which is the main instrument of corporate control. Another compelling reason to bring private regulators into the game is that private entities are closer to what is happening, at increasingly high speed, on the ground and in the cloud, and that advantage will not disappear so long as they are the ones responsible for developing new technologies.

It is very important to create a supervised cohort of private regulators. This gets the best of both worlds: regulation that follows an incentive-based approach while remaining accountable to the government, and rules that market players can understand in very clear terms. Arbitrariness on the part of these regulators cannot creep in, because the licence to regulate always remains in the hands of the government. Further, they have to keep their regulatory clients satisfied by developing easier, less costly and more flexible ways of implementing regulatory controls.

The sooner we adopt this new technology of regulation and move beyond the idea that conventional regulation can handle the challenges of our powerful new technologies, the better. Regulating innovative and disruptive technologies will remain a futile exercise unless we figure out how to harness the power of markets, and new approaches to government accountability, to that task.

(This blog series will explore the areas of regulation that exist, and those that are still required, to adjust the balance with scientific advancements. Suggestions and improvements from readers are welcome.)