Comments on the NITI Aayog’s draft ‘Guiding Principles’ for the ‘Regulation of Online Fantasy Sports Platforms in India’

On 5th December 2020, NITI Aayog released a draft for discussion titled ‘Guiding Principles for the Uniform National-Level Regulation of Online Fantasy Sports Platforms in India’ (“Draft Report”), seeking comments from different stakeholders of the fantasy sports industry. The Draft Report kills two birds with one stone: firstly, it proposes to establish a single Self-Regulatory Organization (SRO) for Online Fantasy Sports Platforms (OFSPs) so as to enable a ‘light touch’ regulatory framework; secondly, the guidelines also act as a ‘regulatory sandbox’ for OFSPs.

A brief summary of our submission to NITI Aayog, with comments, concerns and recommendations in relation to the Draft Report, is as follows:

Recognition for all categories of “pay-to-play” online games

Apart from online fantasy sports, there are many other pay-to-play formats of online games, such as rummy and cricket simulation, that are offered through the same digital interface as online fantasy sports contests; Paytm First Games and Mobile Premier League, to name a few, do this. We have raised the concern that governing only OFSPs could create a complex situation for the online gaming industry in general and for such all-in-one online gaming platforms in particular. We recommend that these guidelines recognise all “pay-to-play” formats of online games.

Specify definition and extent of the term ‘fantasy sports’

The Draft Report neither defines the term ‘fantasy sports’ nor lists the activities that might constitute it under the proposed framework. The framework proposes that “all formats” of fantasy sports offered by OFSPs must be skill-predominant. There is no clarity on whether ‘free to play’ formats, which do not involve any stakes and are risk-free, are also required to be games of skill. In our comments, we have formulated an element-wise definition of ‘fantasy sports’ and have specifically urged that free-to-play formats be excluded from that definition.

The proposed framework requires a platform to obtain approval from the SRO if it offers a fantasy format different from a judicially determined game of skill. Three High Courts have analysed Dream11’s format as a game of skill, but none of them has laid down definitive criteria for determining whether a fantasy format is a game of skill. Therefore, we believe that the ‘judicially determined’ format of fantasy sports is subjective, and the Draft Report should itself provide an objective test.

Uniform and diverse representation in the SRO

The Draft Report prescribes that only a fantasy sports industry body whose member OFSPs have a registered user base equivalent, in aggregate, to at least 66 percent of the registered users of online fantasy sports in India can be recognised as the SRO by the Government. This is an absurd eligibility criterion, as the concentration of users is not uniform across OFSPs. In such a scenario, there is a risk that the interests of OFSPs with small user bases will be disadvantaged.

The proposed model of SRO membership leaves out many other participants in the fantasy sports industry, such as advertisers, payment service providers and consumer bodies. We recommend that the eligibility criterion for recognising an industry body as the SRO be based on the diversity and number of its members rather than the strength of its members’ user bases. This will lead to a holistic and broad-based regulatory framework.

Requirement of minimum safeguards in the organizational framework of SRO

Three internal bodies have been envisaged within the proposed SRO: an independent oversight board, a grievance redressal mechanism and an evaluation committee. We recommend that a governing body be constituted in addition to these internal bodies. Further, basic principles and minimum safeguards must be incorporated in the framework to ensure the independence of the oversight board and transparency in the working of the grievance redressal body and the evaluation committee.

Clarity on how safe-harbour exemption will be implemented

The guiding principles proposed in the Draft Report grant a safe-harbour exemption, or criminal immunity, to all member OFSPs of the SRO. As “gambling and betting” is a subject in the State List, it is recommended that NITI Aayog release a clarificatory note stating that fantasy sports are to be construed as a class apart from gambling rather than an exception to it. In short, fantasy sports should be governed by the Union using its residuary powers under Entry 97 of List I.

(Authored by Eukti Garg, Volunteer-Researcher at LawforIT, with inputs from Aryan Babele)

Public health surveillance in India: concerns for individual liberty and privacy amid a pandemic

(This article extensively borrows from another article that the authors wrote for and first published on The Leaflet)

The world is grappling with a situation unlike any it has seen before. The rapid spread of COVID-19 has made it necessary for governments around the world to use extreme means and measures that would otherwise be considered Orwellian. These emergency measures are attempts to effectively enforce lockdowns and strictly restrict the movement of citizens in a bid to break the chain of infection.

As governments attempt to contain the contagious virus, the use of technology to monitor people undergoing quarantine has surged. Ordinarily, such a developing Orwellian state of affairs would stir commotion among civil liberty activists and privacy advocates; considering the scale of the crisis, however, they seem to have tacitly embraced these measures. This pandemic is clearly reshaping our relationship with surveillance technology, albeit amid fears among some that such surveillance could become the norm.

World under surveillance

Across the globe, countries are expansively deploying tech-enabled surveillance infrastructure, including Face Recognition Technology (FRT) based CCTVs, drones and cell phone tracking, for contact tracing and enforcing quarantine. A growing number of countries, such as Israel and South Korea, are ‘contact tracing’ using mobile applications or cell phone records: the travel history of an infected person is mapped by analysing the location records of their cell phone, and the other people who might have come in contact with that person are then pinpointed for quarantine. Meanwhile, Taiwan has gone a step further in quarantining traced contacts by deploying an ‘electronic fence’: if a mobile user’s SIM card is tracked beyond the reach of a network station or found to be switched off, law enforcement authorities quickly approach the suspect.
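To make the mechanics of such an ‘electronic fence’ concrete, here is a minimal, purely illustrative sketch in Python. The tower identifiers, the 30-minute offline threshold and the alert messages are assumptions made for this example; they do not describe Taiwan’s actual system.

```python
from datetime import datetime, timedelta

# Hypothetical illustration of an "electronic fence" check based on coarse
# cell-tower data. Tower IDs, thresholds and alert text are assumptions for
# illustration only, not a description of any real deployment.

ALLOWED_TOWERS = {"tower_home_1", "tower_home_2"}   # towers covering the quarantine address
OFFLINE_THRESHOLD = timedelta(minutes=30)           # how long a silent SIM is tolerated


def check_quarantined_sim(last_ping_tower: str, last_ping_time: datetime,
                          now: datetime) -> str:
    """Return an action for a quarantined SIM based on its latest network ping."""
    if now - last_ping_time > OFFLINE_THRESHOLD:
        return "ALERT: phone appears switched off or unreachable"
    if last_ping_tower not in ALLOWED_TOWERS:
        return "ALERT: SIM detected outside the designated fence"
    return "OK: SIM within fence"


if __name__ == "__main__":
    now = datetime(2020, 4, 1, 12, 0)
    print(check_quarantined_sim("tower_home_1", now - timedelta(minutes=5), now))
    print(check_quarantined_sim("tower_far_9", now - timedelta(minutes=5), now))
```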

In India, law enforcement authorities across the nation are increasingly using technology to monitor and restrict the spread of the virus. In several states, such as Rajasthan, Punjab and Delhi, local authorities have published lists of personal details of those suspected of or infected with COVID-19 in online media and newspapers. The Karnataka government has taken this to an inordinate level by mandating that all quarantined persons send a geo-tagged selfie through an official app named ‘CoronaWatch’ every hour, except during sleeping hours (10 PM to 7 AM). The Ministry of Electronics and Information Technology (MeitY) has now also launched an app, ‘Aarogya Setu’, which uses Bluetooth and GPS to alert an individual if they come within six feet of a COVID-19 infected person.
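As a rough illustration of how a Bluetooth-proximity alert of the kind described above could work in principle, the sketch below estimates distance from received signal strength (RSSI) and flags close contacts with known infected IDs. The calibration constants, the distance threshold and the anonymised ID scheme are hypothetical assumptions; they are not drawn from Aarogya Setu’s actual implementation.

```python
# Hypothetical sketch of Bluetooth-proximity contact detection.
# The RSSI model, thresholds and the infected-ID set are illustrative
# assumptions, not the actual logic of Aarogya Setu or any other app.

RSSI_AT_ONE_METER = -59       # assumed calibration constant (dBm at 1 m)
PATH_LOSS_EXPONENT = 2.0      # assumed free-space propagation


def estimate_distance(rssi_dbm: float) -> float:
    """Crude distance estimate (metres) from a BLE RSSI reading."""
    return 10 ** ((RSSI_AT_ONE_METER - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))


def flag_risky_contacts(sightings, infected_ids, max_distance_m=1.8):
    """Return anonymised IDs seen closer than roughly six feet that later tested positive."""
    risky = set()
    for device_id, rssi in sightings:          # sightings: [(anon_id, rssi_dbm), ...]
        if device_id in infected_ids and estimate_distance(rssi) <= max_distance_m:
            risky.add(device_id)
    return risky


if __name__ == "__main__":
    sightings = [("anon-42", -55), ("anon-77", -80)]
    print(flag_risky_contacts(sightings, infected_ids={"anon-42"}))  # {'anon-42'}
```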

The case of “Public Health Surveillance”

Law enforcement agencies in different countries are carrying out tech-enabled surveillance of their citizens to ensure compliance with the rules of social distancing and lockdown. In normal times, such measures are targeted at terrorists or criminals, and even then are scrutinised on privacy and civil liberty grounds.

However, even the World Health Organisation (WHO) has sought to play down privacy concerns in these unprecedented times by terming such measures “public health surveillance”. The WHO has effectively legitimised the governments’ argument that the extraordinary situation of the COVID-19 pandemic necessitates the extraordinary measure of mass surveillance. A public health emergency of this magnitude is being touted as a valid justification for deploying tech-enabled mass surveillance and subverting individual rights.

Is surveillance a matter of concern for India?

There are certain reasons why the implementation of these emergency measures in India is particularly worrisome.

No clarity on the legal basis for surveillance measures

Firstly, in India, neither the central government nor the state governments have provided any legal basis for directing such tech-enabled surveillance measures. For instance, neither the official press release for the Aarogya Setu app nor Karnataka’s ‘mandatory selfie’ direction mentions any legal grounds for the measure, and neither is accompanied by a privacy policy. The wholesale abandonment of civil liberties and privacy in the interest of public health, without even a bare minimum legal foundation, portends negative consequences.

The government has invoked the Epidemic Diseases Act, 1897 and the Disaster Management Act (DMA), 2005 to deal with the COVID-19 outbreak. Neither the colonial-era Epidemic Diseases Act nor the DMA covers surveillance within its scope, although there is an argument that the basic residuary power under these laws to take ‘necessary’ steps to curb the spread of the virus accords the government legitimate authority to conduct surveillance.

It is unclear why the government has not used these very residuary powers to also notify standing rules on privacy or on the lawful manner of deploying tech-enabled surveillance measures. As a natural consequence, in the absence of standing rules, government directives infringing an individual’s right to privacy cannot be tested for legality, arbitrariness or lack of accountability. This is particularly dangerous in a country like India, where no data protection statute exists.

The use of unregulated novel technologies for surveillance escapes legal checks and oversight

Secondly, the details of the government’s technological capabilities for surveillance are largely secret. It is the sudden outbreak of the pandemic that has forced the government to openly introduce a deluge of unregulated, contemporary and emerging technologies for mass surveillance. There is growing concern among privacy advocates that tech-enabled surveillance could persist beyond the pandemic once it is accepted and normalised in the present emergency. History is witness that many of the world’s dictatorships and authoritarian regimes have emerged amid crises.

There is no information available about the extent and scope of the government’s surveillance capabilities and techniques. This secrecy impedes legislative checks and institutional audits. If the public is unaware of how a technology works (due to non-disclosure by the Executive), such surveillance cannot even be challenged in a court of law. Such secrecy is therefore nullifying the system of checks and balances in favour of ever-expanding executive power.

Several surveillance techniques are disproportionate and unnecessary

Thirdly, because the technologies used vary in their level of invasiveness, there are doubts regarding the necessity and proportionality of such measures in relation to the right to privacy and individual liberty.

The Puttaswamy (I) judgment, which explicitly referred to public health, held that a measure restricting fundamental rights such as privacy and liberty must be proportionate in order to be legitimate. In that case, the SC held that a government measure is proportionate if it satisfies the following four criteria: 1) the measure pursues a legitimate purpose; 2) the measure is rationally connected to that purpose; 3) no less intrusive alternative measure is available; and 4) the public benefit accruing from the measure is greater than the extent of the infringement of a constitutional right.

More than half of the country’s population does not have access to internet services. In such a scenario, how is surveillance through a mobile application a necessary measure? Further, several state governments are taking the extreme step of disclosing the home addresses and other personal details of infected and suspected persons, which falls grossly afoul of three prongs of the constitutional test upheld in the Puttaswamy I judgment. A less intrusive measure, such as informing localities of the presence of infected cases in their area, could have sufficed. The Allahabad HC also held a similar practice by the UP government, the public display of personal details of anti-CAA protestors, to be an “arbitrary invasion of privacy”.

Karnataka has rolled out a mobile application which comprehensively discloses the location history and home addresses of infected and quarantined persons. Some states are also circulating such details widely on social media channels. Such indiscriminate disclosure of the private information of infected and suspected persons has raised concerns about the possibility of social intimidation.

There have already been reports from across the nation of infected and suspected patients facing stigmatisation and various forms of discrimination, with damaging social consequences. For instance, in Maharashtra, the public listing of coronavirus suspects on social media led to several cases of quarantined people being forcibly evicted by their landlords.

Such events call into question the proportionality and necessity of these measures; a less intrusive alternative would have served the purpose.

Ways to resolve the concerns

There is no denying that certain limitations can be imposed on civil liberties given the urgency of the COVID-19 crisis. However, in a democratic set-up like India, the government is expected to act transparently and give the public a window through which to assess its accountability. The worrisome aspects of public health surveillance can be addressed through concerted efforts to provide legal backing for such actions, establish institutional oversight and use the least intrusive means available.

To provide a legal basis, the government can issue standing rules laying down the legal and accountability requirements for the local authorities responsible for undertaking public health surveillance. The governments should use the residuary powers under the DMA and the Epidemic Diseases Act to issue such ad-hoc rules and guidelines alongside the emergency surveillance measures. These rules and guidelines would provide a mechanism under which surveillance can be carried out without unduly impairing an individual’s privacy and liberty.

The government can presently frame such ad-hoc rules for privacy protection based on the principles delineated in the Personal Data Protection Bill 2019 (“PDPB 2019”) for data collection during health emergencies. Clause 12 of the PDPB 2019 exempts data fiduciaries from obtaining consent in urgent situations such as a pandemic, but strictly imposes requirements of data minimisation, purpose limitation, lawful processing, transparency and accountability. Introducing such principles would ensure that the information collected through surveillance is handled under constitutional checks so that privacy is maintained as far as possible.

Such ad-hoc rules would obligate the government, as a data fiduciary, to follow the principles of purpose limitation and data minimisation: the authorities should collect only the minimum data sufficient for contact tracing, quarantine enforcement and other lawful, specific purposes. The government should use only anonymised data and adopt all security measures needed to prevent leaks and maintain the confidentiality of data subjects’ personal data. The rules should also mandate that the government delete the collected data as soon as it has served the specified purpose. This would allay the emerging concern that the effects of surveillance could persist beyond the pandemic, and would inhibit the misuse of personal data and the abuse of surveillance measures.
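As a purely illustrative sketch of how the purpose-limitation, data-minimisation and time-bound deletion principles described above could be enforced in practice, the Python example below uses hypothetical field names, purposes and a 30-day retention period; none of these are taken from the PDPB 2019 or any government system.

```python
from datetime import datetime, timedelta

# Illustrative enforcement of purpose limitation, data minimisation and
# time-bound deletion. Field names, purposes and the retention period are
# hypothetical assumptions, not requirements of the PDPB 2019.

ALLOWED_FIELDS_BY_PURPOSE = {
    "contact_tracing": {"anon_id", "location_trail"},
    "quarantine_enforcement": {"anon_id", "quarantine_address"},
}
RETENTION_PERIOD = timedelta(days=30)


def collect(record: dict, purpose: str, collected_at: datetime) -> dict:
    """Keep only the fields permitted for the stated purpose (data minimisation)."""
    allowed = ALLOWED_FIELDS_BY_PURPOSE.get(purpose)
    if allowed is None:
        raise ValueError(f"'{purpose}' is not a lawful, specified purpose")
    minimised = {k: v for k, v in record.items() if k in allowed}
    minimised["_purpose"] = purpose
    minimised["_collected_at"] = collected_at
    return minimised


def purge_expired(store: list, now: datetime) -> list:
    """Delete records once the retention period has lapsed."""
    return [r for r in store if now - r["_collected_at"] <= RETENTION_PERIOD]


if __name__ == "__main__":
    now = datetime(2020, 5, 1)
    rec = {"anon_id": "anon-7", "name": "X", "location_trail": ["cell-1", "cell-2"]}
    store = [collect(rec, "contact_tracing", now - timedelta(days=45))]
    print(purge_expired(store, now))   # [] -- record deleted after retention lapses
```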

Since the surveillance measures aim to keep people in quarantine and check the spread of infection for their benefit, it is suggested that the government should hold no secrets about its surveillance techniques and methods. It should adopt a “Public Notice” system, under which the local district administration must notify the public of the model of surveillance before conducting it.

At the very least, this notice should disclose the legal rules governing the tech-enabled surveillance measure and its purpose. It should be clear on the authorisation required for the retention, access and use of information collected through such novel technology. Such a notice would bring transparency to the imposition of surveillance and allow the legislature and the public to exercise meaningful control and oversight over the deployment of unregulated technologies for surveillance.

Parting note

Unarguably, the present situation calls for governments to take substantial measures to protect the lives and health of the public at large, but this should not happen in utter disregard of the constitutionally recognised rights to privacy and individual liberty. The government’s policies and techniques should be legitimate and proportionate in order to maintain the democratic principles of public trust and transparency. There is no hard choice between public health and an individual’s rights to privacy and liberty: both can co-exist under a legal framework that allows unnecessary expansion of the surveillance regime to be challenged.

As pointed out by Deborah Brown, senior digital-rights researcher at Human Rights Watch, “surveillance measures should come with a legal basis, be narrowly tailored to meet a legitimate public health goal, and contain safeguards against abuse”.

Therefore, while the government should certainly focus on the urgent situation facing the many rather than investing all its efforts in securing the rights of a few, it should not wholly ignore its accountability towards any section of the community. These fundamental rights are the lungs of our entire constitutional edifice, and the government should make every effort to prevent injury to them.

Simplifying FinTech and FinTech Laws: Key Takeaways for Indian FinTech Industry

The significant advancements in fintech are directly impacting the traditional financial sector. Regulators have had to be careful not to miss the train: they should promote financial innovation and healthy competition in the sector. To promote competition in the market, newcomers should be granted a degree of leniency in the form of exemptions from some of the strict compliance requirements that are used to curb the malpractices of big corporations. This post deals with the key takeaways from the reports of different regulators’ committees in India. It is the last post in the ‘Simplifying FinTech and FinTech Laws’ series.

Fintech-driven firms and businesses must work in tandem with regulated entities, e.g. banks and regulated finance providers. The businesses that a bank can undertake are set out in Section 6 of the Banking Regulation Act, 1949, and a bank cannot carry on any business outside Section 6. Such provisions therefore confine banking companies’ fintech innovations to a narrow scope relevant to their existing operations. These archaic laws make it difficult for banks to undertake fintech innovations that could be of significant utility but lie beyond the scope of financial regulation.

The Watal Committee Report noted that:

“The current law does not impose any obligation on authorised payment systems to provide open access to all PSPs. This has led to a situation where access to payment systems by new non-bank payments service providers, including FinTech firms, is restricted. Most of them can access payment systems only through the banks, which are also their competitors in the payments service industry. This, according to the Committee, has restricted the fast-paced expansion of digital payments in India by hindering competition from technology firms.”

Forming a comprehensive and non-discriminatory regulatory approach

Regulators and legislators need to realign their legal approach to fintech services. This calls for a deeper understanding of the various fintech services and how they interact with one another in the financial environment. For the fintech space to work to its full potential, it needs a level playing field vis-à-vis traditional banking and non-banking players. The practice of restricting non-bank institutions’ access to payment infrastructure, such as AEPS, has to be re-evaluated and corrected. The government and regulatory bodies should adopt the measures necessary to give all fintech firms non-discriminatory access to national payment infrastructure and facilities.

Providing Standards for Data Protection and Privacy

Fintech companies need to invest significantly in self-regulatory policies to mitigate privacy risks. The government and regulators should provide them with data protection standards as soon as possible. The provisions of the Personal Data Protection Bill, 2019 can significantly affect the growth of fintech companies; the standards that regulators adopt for fintech companies should therefore be reviewed against data protection and privacy concerns. The government and the country’s financial regulators should also start focusing on the valuation of the data processed by banking companies and recommend practices to safeguard consumer interests.

Open data principles should govern the financial sector in order to enhance competition

Regulators should pay heed to open data policies among participants in the fintech sector. They could begin with mandatory norms directing financial service companies to encourage banking institutions to make their databases of rejected credit applications accessible to participants on a designated platform, on a consensual basis. The UK’s approach to open data regulation in banking can be adopted, under which banking institutions, within a consent framework, make data available to banking partners in order to foster competition. The RBI Steering Committee on FinTech also recommended:

“It also recommends that all financial sector regulators study the potential of open data access among their respective regulated entities, for enhancing competition in the provision of financial services.”

The KYC process should be reformed with respect to the Supreme Court’s Judgment on Aadhaar’s validity

Fintech businesses were the entities most affected by the striking down of Section 57 of the Aadhaar Act, as it invalidated the online KYC process. Online KYC and authentication gave fintech firms the efficiency and convenience needed to on-board as many consumers as possible onto their digital platforms. It is recommended that alternatives to mandatory Aadhaar linking, such as video-based KYC, be adopted, with the verified documents protected and processed only with the consumer’s prior consent.

Other key recommendations

1. Adequate cybersecurity, anti-money laundering and fraud control measures should be adopted by investing in technologies and guidelines that can prevent fraud.

2. Technical innovations should be monitored for the potential risks they carry when operating within the country’s current legal landscape.

3. A self-regulatory body to serve the needs of fintech is much needed, as the RBI is still finding it difficult to replace the existing regulatory structure. A regulatory mechanism allowing a broader, participative consultation approach should be adopted.

4. Regulators should invest in Reg-Tech (“Reg Tech is a sub-set of FinTech that focuses on technologies that facilitate the delivery of regulatory requirements more efficiently and effectively than existing capabilities. In July 2015 the FCA issued a call for input entitled ‘Supporting the development and adoption of Reg Tech’.”)

5. Most major economies have adopted the practice of setting up regulatory sandboxes to catalyse fintech innovation. It is recommended that the RBI continue introducing mechanisms such as regulatory sandboxes, which enable the adaptation of regulatory initiatives and will play a key role in maintaining India’s competitive edge.

Summary: Philippines Senator introduces the ‘Anti-False Content Act’ to fight fake news

This article was authored by Aryan Babele and first published on MediaNama. Read it at https://www.medianama.com/2019/08/223-the-lowdown-the-anti-false-content-act-to-address-fake-news-that-was-introduced-in-the-philippines/

The Senate of the Philippines announced the introduction of the ‘Anti-False Content Act’ on 1st July 2019. The newly proposed anti-fake news bill, filed by Senate President Vicente Sotto III, aims to prohibit “the publication and proliferation of false content on the Philippine internet, providing measures to counteract its effects and prescribing penalties therefor.” The Senator, in the explanatory note to the Bill, said that

“In the Philippines, widespread are headlines that are mere click-baits; made up quotes attributed to prominent figures; and digitally altered photos. Philipinos have fallen prey to believing that most of them are credible news…. In this regard, this bill seeks to protect the public from the deleterious effects of false and deceiving content online.”

However, media groups are warning that the proposed Bill could lead to censorship. On 25th July 2019, the international group Human Rights Watch (HRW) opposed the proposed law in a news release, saying that the Bill is “sweepingly broad and threatens to stifle discussion on websites worldwide” and “would excessively restrict online freedom of speech”. Linda Lakhdir, Asia Legal Adviser at HRW, further said that:

“The proposed ‘false content’ law poses real risks for activists, journalists, academics, and ordinary people expressing their views on the internet”

Declaration of Policy

The proposed Act declares that the policy of the State is “to protect people from any misleading or false information that is being published and has become prevalent on the internet”. In this regard, the State shall commit to:

  1. Be proactive in preventing further exploitation of online media platforms for such purpose;
  2. Counteract its concomitant prejudicial effects to public interest while remaining cognizant of the people’s fundamental rights to freedom of speech and freedom of the press.

What is an ‘online intermediary’?

It refers to “a provider of service which displays an index of search results that leads the internet users to a specific online location”, giving them access to “contents originating from third parties”, and “allows them to upload and download content”. It includes, but is not limited to, social-networking sites, search engine services, internet-based messaging services, and video-sharing sites.

What constitutes ‘publication’?

It refers to the “act of uploading content on an online intermediary with an intent to circulate particular information to the public”.

What is a ‘fictitious online account or website’?

It refers to those accounts and websites “that has an anonymous author or uses an assumed name in pursuing activities” in order to avoid punishment or legal consequences of such activities.

Counter-active measures

According to Section 5 of the proposed Act, the Department of Justice (DOJ) Office of Cybercrime shall have the authority to issue a rectification order, a takedown order and/or a block access order to restrain the creation and/or publication of online content that contains false information or tends to mislead the public.

Rectification order refers to an order directing the administrator of the online account or website to issue a notice indicating the necessary corrections to published content.

Takedown order refers to an order directing the owner or administrator of the online account or website to take down the published content.

Block Access order refers to an order directing the online intermediary to disable access by users to the published content.

These orders can be issued by the DOJ Office of Cybercrime in the following two cases:

  1. When a complaint filed with the DOJ Office of Cybercrime by an aggrieved party is valid and has sufficient basis;
  2. In matters affecting the public interest, where the Office can issue the appropriate order of its own volition (motu proprio).

“Public interest shall refer to anything that affects national security, public health, public safety, public order, public confidence in the Government, and international relations of the Philippines.”

Appeal to cancel the order

According to Section 6 of the Bill, an online publisher or online intermediary that has been issued an order under Section 5 can appeal against it to the Office of the Secretary of the DOJ.

Punishable Acts under the proposed law

According to Section 4 of the Bill, the following acts shall be punishable offences:

  1. Creating and/or publishing content on one’s personal online account or website knowing or having a reasonable belief that the content contains false information or tends to mislead the public;
  2. Using a fictitious online account or website to create and/or publish content that contains false information or misleads the public;
  3. Offering or providing one’s services to create and publish content online with the intent to deceive the public, regardless of whether it is done for profit;
  4. Financing an activity which has for its purpose the creation and/or publication of content online containing false information or that would tend to mislead the public;
  5. Non-compliance with any of the government’s Takedown orders, Rectification orders or Block Access orders issued under Section 5 of the proposed law, whether deliberate or through negligence.

Penalties

Section 8 of the Bill proposes the following stringent penalties for the aforementioned punishable offences:

  1. If an individual is found guilty of creating and/or publishing false information online and misleading the public as provided under Section 4(a) of the proposed law, he/she will be punished with imprisonment of up to six years, or a fine of not more than PHP 300,000, or both.
  2. If an individual is found guilty of using a fictitious online account or website to create and/or publish false information online and mislead the public as provided under Section 4(b), he/she will be punished with imprisonment of up to six years, or a fine of not more than PHP 500,000, or both.
  3. If an individual is found guilty of offering or providing his/her services to create and/or publish false information online with the intent to deceive the public as provided under Section 4(c), he/she will be punished with imprisonment of up to six years, or a fine of not more than PHP 200,000, or both.
  4. If an individual is found guilty of financing an activity as provided under Section 4(d), he/she will be punished with imprisonment of up to twenty years, or a fine of not more than PHP 100,000, or both.
  5. If an individual is found guilty of not complying with the government’s orders issued under Section 5 of the proposed law, whether deliberately or through negligence, he/she will be punished with imprisonment of up to twenty years, or a fine of not more than PHP 200,000, or both.

Jurisdiction of the regional trial courts

Section 9 provides that the regional trial courts will have jurisdiction over Philippine nationals who commit the acts punishable under the proposed law, whether or not they were in the Philippines when the offense was committed.

Law Enforcement Authorities

The Cybercrime Division of the Philippine National Police (PNP) and the National Bureau of Investigation (NBI) will be responsible for the enforcement of the provisions of the Act.

Facebook’s Clampdown on the business of generating fake likes and followers: ‘Inauthentic Behavior’ on Instagram

Facebook has announced in a blog post titled “Preventing Inauthentic Behavior on Instagram” that Facebook and Instagram have sued a company and three individuals based in New Zealand for running a business that sells fake likes, views and followers on Instagram. It has filed a lawsuit in a US federal court alleging that “the company and individuals used different companies and websites to sell fake engagement services to Instagram users”.

It said it issued warnings to the company and suspended the company’s associated accounts for violating Facebook’s Terms of Use, but the activities persisted. By filing the lawsuit, Facebook wants to send the message that fraudulent activity is not tolerated and that it will protect the integrity of its platform.

The lawsuit

The lawsuit asks the Court to prevent the defendant company from “engaging and profiting in the sale of fake likes, views and followers on Instagram”. It also seeks to prevent a “violation of its Terms of Use and Community Guidelines”. Further, it aims to prevent a “violation of the Computer Fraud and Abuse Act and other California laws for distributing fake likes on Instagram in spite of Facebook suspending their accounts and revoking access”.

The lawsuit details that a company called Social Media Series operated various websites and services to generate fake likes and followers for Instagram users who wanted to inflate their follower counts. Customers paid between $10 and $99 per week depending on the number of likes they wanted to purchase for their accounts, which were then generated within seconds of a new photo being posted.

The lawsuit says that “through their business, Defendants [Social Media Series Limited and its directors] interfered and continue to interfere with Instagram’s service, create an inauthentic experience for Instagram users, and attempt to fraudulently influence Instagram users for their own enrichment”.

As the lawsuit further claims, the company and its directors have “unjustly enriched themselves at the expense of Facebook and Instagram in the amount of approximately $9,430,000” since July 2018.

Inauthentic experience

Facebook said in the blog post that “Inauthentic activity has no place on our platform”. The social media giant claims to “devote significant resources” to detecting and stopping inauthentic behavior. This includes “blocking the creation and use of fake accounts, and using machine learning technology to proactively find and remove inauthentic activity from Instagram”.

It further said that “today’s lawsuit is one more step in our ongoing efforts to protect people and prevent inauthentic behavior on Facebook and Instagram”. Facebook is seeking unspecified damages for the manipulation of Instagram’s platform.

Clamping down on “Inauthentic Behavior”

Facebook now has multiple lawsuits in the works relating to individuals or companies that sell fake engagement on its social media platforms. Facebook recently removed or unpublished over 1,000 Facebook pages and Instagram accounts from India and Pakistan for ‘inauthentic behavior’. It filed a lawsuit in March 2019 against several companies and individuals based in China, claiming that they were engaged in selling fake accounts, likes, and followers on Facebook and Instagram. In November 2018, Instagram warned users to avoid inauthentic follows and likes generated by third-party apps and services, as reported by Cult of Mac.

Key Points from Mark Zuckerberg’s call for regulation of the Internet: harmful content, data portability, election interference, privacy

This article, authored by Aryan Babele, was first published on MediaNama.

In his article in the Washington Post, Facebook founder Mark Zuckerberg suggested the need for new rules from lawmakers to balance the interests and responsibilities of all the different stakeholders, i.e. people, companies and governments. He called for regulation in four areas that require an active role of governments and regulators: harmful content, election integrity, privacy and data portability.

Key Legal Improvements that Mark Zuckerberg suggested (Read)

1. Harmful Content

  • Content takedowns subject to appeals: In the absence of any legal standards, most social media platforms adopt self-regulation, but struggle because of their large user bases. Zuckerberg says that people should understand the difficulty internet companies face in “deciding what counts as terrorist propaganda, hate speech and more”, and that Facebook realises it has “too much power over speech”; to reduce that power, decisions regarding speech should be subject to appeal before independent bodies. This seems to be how Facebook is looking to limit the move away from self-regulation.
  • Define standards for harmful content: Third-party bodies should define standards for harmful content, against which the distribution of such content would be governed and measured. “Internet companies should be accountable for enforcing standards on harmful content”. Zuckerberg proposes that “regulation could set baselines for what’s prohibited and require companies to build systems for keeping harmful content to a bare minimum”.
  • Quarterly compliance reports: He also suggested mandating that every major internet service company publish a transparency report every quarter, as Facebook already does. He says this “is just as important as financial reporting.”

Indian scenario on harmful content:

  • The government released a draft of The Information Technology [Intermediaries Guidelines (Amendment) Rules] 2018 on 24th December 2018, which is intended to curb the misuse of social media and stop the spread of ‘unlawful content’. However, no clarity on the definition of “unlawful” content has been provided, leaving it open to abuse.
  • As no standard for filtering “unlawful” content has been adopted in the draft, companies are forced to take judgment calls regarding content on a “take down first, think later” basis. Moreover, the draft promotes the deployment of “automated tools to filter content”.

2. In terms of Election Interference: It is important to highlight the weight Zuckerberg gives to legislation creating common standards for the regulations that govern political information campaigns and the verification of political actors. “Facebook has already made significant changes around political ads: Advertisers in many countries must verify their identities before purchasing political ads”, he says, while adding that “deciding whether an ad is political isn’t always straightforward”.

  • Updating online political advertising laws: “Online political advertising laws primarily focus on candidates and elections, rather than divisive political issues where we’ve seen more attempted interference.” Laws related to elections are time-bound, even though political campaigns are non-stop and may involve controversial uses of data and targeting. Therefore, he said that “legislation should be updated to reflect the reality of the threats and set standards for the whole industry”.

Indian scenario on online Election Interference:

  • Election laws in India are ill-equipped to deal with online political advertisements. The Election Commission, the constitutional authority that regulates state and national elections, is itself relying on online platforms to self-regulate and prevent ‘illegal’ content. In the absence of comprehensive legislation giving the Election Commission the authority to make rules and standards for monitoring online political advertisements, these platforms are free to censor or amplify information without transparency.
  • In January, the committee led by senior deputy election commissioner Umesh Sinha submitted to the commission a report recommending modification of Section 126 (which prohibits displaying any election matter by means, inter alia, of television or similar apparatus during the 48 hours before the hour fixed for the conclusion of the poll in a constituency) and certain other provisions of the Representation of the People Act, 1951, as well as provisions of the Model Code of Conduct, to bring social media platforms under their purview.
  • Chief election commissioner Sunil Arora said all major social media platforms — Facebook, Twitter, Google, WhatsApp and Share Chat — are taking measures such as verifying political advertisers’ credentials, sharing advertising expenditure with the Election Commission (EC) through public databases, and adhering to the “silence period” that comes into effect 48 hours before the polls.

3. In terms of Data Protection and Privacy:

  • Adopting GDPR as a globally harmonized framework: Reiterating the common demand of entrepreneurs for a globally harmonised framework of data protection regulations, Zuckerberg agrees that there is a need to develop privacy regulations in line with the European Union’s General Data Protection Regulation (“GDPR”). He further insists that “New privacy regulation in the United States and around the world should build on the protections GDPR provides”. The GDPR’s approach to privacy regulation serves as the best example of a common global framework, as it provides certain standard protections: it protects the right to choose how information should be used and moves away from data localisation, which subjects data to unwarranted access. Together, such protections will establish a framework under which companies like Facebook can be held accountable when they make mistakes.
  • The data protection framework must not be ambiguous: Lawmakers should adopt new privacy regulations that are clear on the points that even the GDPR failed to clarify. “We need clear rules on when information can be used to serve the public interest and how it should apply to new technologies such as artificial intelligence”.

Indian Scenario on Privacy Regulations:

  • Till now, the only legal protection for personal information in India comes from Section 43A of the Information Technology Act and the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011 framed under that section. The provision requires a body corporate that ‘receives, possesses, stores, deals, or handles’ any ‘sensitive personal data’ to implement and maintain ‘reasonable security practices’, and makes it liable to compensate those affected when it fails to do so. Given the maturity of privacy jurisprudence in most countries around the world, these rules are a half-hearted approach that cuts a sorry figure.
  • In its landmark judgment in the Justice KS Puttaswamy case in August 2017, the Apex Court ruled that privacy is a fundamental right under Article 21 of the Constitution of India, though not an absolute one. Since then, the government has taken significant steps to modify privacy regulations along the lines of the EU’s GDPR.
  • The Personal Data Protection Bill, 2018, as recommended by the Justice Srikrishna Committee, is all set to be introduced in the next session of Parliament. It covers basic protections and even recommends data localisation, which has raised concerns among various internet services.

4. Data Portability: “Regulation should guarantee the principle of data portability. If you share data with one service, you should be able to move it to another”. Data portability gives people the choice to move between competing internet services, which can help balance the interests of people and innovators. However, applying data portability requires clear rules about who is liable for protecting information when data is transferred from one service to another. According to Zuckerberg, “this also needs common standards”, and the open source Data Transfer Project is a suggested standard data transfer format. (A minimal illustrative sketch of such a portable export appears at the end of this section.)

Indian Scenario on Data Portability

Data portability may also be considered an upgraded version of the right to access and the right to erasure of personal data, both of which are present in the current Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011.
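As a rough, hypothetical illustration of the portability idea discussed above, the sketch below exports a user’s data into a neutral JSON structure that another service could import. The schema, field names and version string are assumptions made for this example; they are not the Data Transfer Project’s actual data model.

```python
import json
from datetime import date

# Hypothetical illustration of data portability: exporting a user's data in a
# neutral, service-independent JSON structure that another provider could
# import. The schema below is an assumption, not any real export format.


def export_user_data(user_id: str, profile: dict, posts: list) -> str:
    """Serialise a user's profile and posts into a portable JSON document."""
    portable = {
        "format_version": "1.0",
        "exported_on": date.today().isoformat(),
        "user_id": user_id,
        "profile": profile,
        "posts": [{"created": p["created"], "text": p["text"]} for p in posts],
    }
    return json.dumps(portable, indent=2)


def import_user_data(document: str) -> dict:
    """Parse a portable export produced by another service."""
    data = json.loads(document)
    assert data["format_version"] == "1.0", "unsupported export version"
    return data


if __name__ == "__main__":
    doc = export_user_data("u123", {"name": "A. User"},
                           [{"created": "2019-03-30", "text": "Hello"}])
    print(import_user_data(doc)["posts"][0]["text"])   # Hello
```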

A Step Ahead: Analysing Indian Arbitration Law in the Context of International Technology Disputes

[This article, authored by Aryan Babele, was first published on the Mapping ADR Blog; you can read it at http://mappingadr.in/a-step-ahead-analysing-indian-arbitration-law-in-the-context-of-international-technology-disputes/]

Technology-based enterprises are becoming the leaders of every aspect of the global market. No industry has experienced growth as explosive as that of technology-based enterprises, especially in the context of the globalisation of the economy and the complementary expansion of international trade in recent years. The technology industry is inherently international: its supply and distribution networks enable manufacturers to provide technology products and services to consumers on a global scale. For instance, biotechnology is in high demand globally due to its influence in multiple spheres (medical, environmental, industrial, etc.), facilitated by processes such as manufacturing, licensing and distribution. The global economy has given a significant boost to the demand for flexible dispute resolution, including international arbitration, as a means of resolving technology business disputes. This characteristic has become one of the main reasons why the technology industry is progressively adopting arbitration as the dispute resolution method for international transactions in which its customers, suppliers and resources are spread across multiple jurisdictions. As the competition among nations to become the preferred seat of technology arbitration grows stiffer, it is worth examining why arbitration is better suited than litigation for technology disputes. Further, considering India’s huge Information Technology industry, it is important to analyse how prepared Indian arbitration law is to handle international technology arbitrations.

In the technology industry, contracts between parties are most often aimed at acquiring, selling or financing a high-tech business or project; manufacturing, distributing and/or delivering; licensing patents or other intellectual property rights (IPRs); and purchasing insurance policies covering risks associated with the production or operation of high-tech assets.[1] The difficulty with litigating a technology dispute that arises out of such a contract is that it involves multi-faceted issues relating to different rights: acquisition, patents, know-how, trade secrets, etc. Fulfilling the liabilities these rights create requires certain assurances from the national law regarding neutrality, speedy and flexible procedures, the fulfilment of the parties’ intentions and needs, confidentiality protection, decisions by experts, etc.

DISADVANTAGES OF LITIGATION IN INTERNATIONAL TECHNOLOGY DISPUTES

In litigation, the major disadvantage for the parties to a technology dispute is that the decision is made by a non-expert who is unable to appreciate the technicalities of scientific testimony and has little or no knowledge of the relevant legislation and regulations. Players in fast-paced technology markets cannot afford to have progress stalled by lengthy and expensive litigation, with its unexpected adjournments and avenues of appeal to higher courts.[2] The public nature of judicial proceedings makes preserving confidentiality, which is extremely significant for technology-based enterprises, a problematic task in litigation. Litigation can also lead to legal actions over the same dispute being filed in multiple jurisdictions simultaneously, with uncertain and risky results. In such a scenario, even litigators find it very uncomfortable to litigate abroad, surrounded by unfamiliar foreign laws, regulations, customs or language. Given these risks, it is not reasonable for technology-based enterprises to always opt for litigation to resolve international business disputes.

ARBITRATION: A SUITABLE DISPUTE RESOLUTION MECHANISM

As litigation is not always the best method of resolving disputes involving technology-based enterprises, there is a need to explore alternative dispute resolution mechanisms. In an international dispute, the greatest concern for both parties is whether the substantive and procedural laws of a particular jurisdiction favour one party over the other. International arbitration gives the parties the autonomy to decide the law and the forum that will govern the procedural and substantive aspects of the dispute resolution. Since the speed of settling a dispute is a major consideration in commercial disputes, especially in the technology business, arbitration’s inherent characteristics of being cheaper and quicker make it an attractive approach. The absence of an appeal on the merits is another reason arbitration is swifter than litigation.[3] Even where the courts are approached for enforcement of the final award, that too is a streamlined process with time limits on the decision.[4] Arbitration, in contrast to litigation, assures confidentiality pursuant to the agreement, as it is a private procedure. Further, the availability of uniform rules for international commercial arbitration better meets the requirements of parties in international technology disputes. For example, the success of the New York Convention, 1958, which has been ratified by 145 nations, boosts parties’ confidence in international arbitration as a mechanism for dispute resolution. It is therefore amply clear that all the usual advantages of arbitration in commercial disputes apply to technology disputes, making it a more efficient and effective alternative to litigation.

ANALYSING THE ARBITRATION AND CONCILIATION ACT, 1996 IN THE CONTEXT OF INTERNATIONAL TECHNOLOGY DISPUTES

After much bureaucratic deliberation and political-intellectual debate, a comprehensive overhaul of the Arbitration Act of 1940 resulted in the enactment of the Arbitration and Conciliation Act, 1996. With the 2015 amendments, which took into account the judgments of the Apex Court of India in Bhatia International v. Bulk Trading[5] and BALCO v. Kaiser,[6] it consolidated the domestic as well as international law of arbitration, making it better suited to international commercial disputes than ever before. It also reflects the standards of the UNCITRAL Model Law on International Commercial Arbitration so as to promote neutral and independent arbitral proceedings in India. Almost every provision of the 1996 Act takes the intent of the parties into consideration in one form or another. It also stresses the significance of the arbitration agreement as the instrument through which parties choose their expert decision-makers and define their powers as arbitrators in the proceedings.[7]

Privacy is a major concern in technology disputes, and constant interference by national courts in arbitral proceedings subjects them to broader public scrutiny, fragmenting not only confidentiality but also the flexibility, procedural predictability and informality of the proceedings. The 1996 Act provides a very limited number of circumstances in which national courts can intervene in arbitral proceedings, allowing arbitrations to follow their natural course. The Act treats the principle of ‘party autonomy’ as the essence of arbitration. It empowers the arbitral tribunal to rule on its own jurisdiction and determine the rules of the proceedings,[8] in order to ensure the proper and expeditious conduct of the arbitration while preserving party autonomy. It further excludes intervention by national courts by allowing arbitral proceedings to continue and final awards to be made, thereby eliminating prospects of delay.[9] To keep parties committed to the proceedings, the tribunal has powers to impose procedural duties on each party, backed by measures such as security for costs and dismissal of the claim, in order to avoid inordinate delay.[10]

In technology disputes, interim relief is one of the most sought-after remedies, as most international technology disputes arise from contracts or licence agreements. In such disputes, at the point of deadlock, the licensor’s aim is to halt the exploitation of its technology and trade secrets. The Act gives the tribunal the power to order such relief on a provisional basis, but this remains highly subject to the scrutiny of national courts.[11] It also empowers the tribunal to make an interim arbitral award at any time during the proceedings.[12] Technology arbitrations require more specific and clearer provisions: the Act should confer this power on the arbitral tribunal through more substantive provisions elaborating on the situations and conditions in which the tribunal can grant such injunctive relief.

Further, the Act needs broader provisions on protecting the privacy and confidentiality of the parties’ details and of the subject matter of the arbitral proceedings. For technology disputes, a provision offering blanket cover for confidentiality is a significant requirement. Particularly in relation to international technology disputes, the 1996 Act therefore needs to provide greater assurance and discretion to parties in terms of the choice of arbitration institution, the choice of an expert arbitrator (especially one with expertise in telecommunications, media and technology laws), and IP infringement disputes.[13]

CONCLUSION

Technology-based enterprises drive major global transformations by providing solutions to some of the most entrenched problems. As research and development of technologies grows, so will the number of commercial contracts, and with them a distinct pile-up of technology disputes in national courts. Hence, it is the need of the hour for parties to consider dispute resolution mechanisms that are more expeditious and flexible.

Technology companies are themselves starting to anticipate this and are increasingly choosing arbitration to resolve international disputes, as noted by the SVAMC.[14] Western nations and developed nations of the Pacific Rim have also understood the significance of resolving technology disputes more speedily and have already taken specific steps to address these concerns.

India faces stiff competition in establishing itself as a centre for arbitration of international technology disputes. There is no doubt that the Arbitration and Conciliation Act, 1996 provides an attractive framework for resolving international commercial disputes; but for technology disputes, a step further is needed: the Act should incorporate broader provisions on interim relief, more flexible confidentiality clauses, e-case management, efficient e-disclosure review, etc. With an established arbitration law in place and a booming IT industry, India has a great opportunity to play a leading role in resolving international technology disputes.


[1] Raymond G. Bender, Arbitration: An Ideal Way to Resolve High-Tech Industry Disputes, Dispute Resolution Journal Vol. 65(4), https://svamc.org/wp-content/uploads/2015/08/Arbitration-An-Ideal-Way-to-Resolve-High-Tech-Industry-Disputes.pdf.

[2] Sandra J. Franklin, “Arbitrating Technology Cases—Why Arbitration May Be More Effective than Litigation When Dealing with Technology Issues,” Mich. Bar J. 31, 32 (July 2001).

[3] Supra note 1.

[4] § 34, The Arbitration and Conciliation Act, 1996, Act No. 26 of 1996, Acts of Parliament (India), https://indiankanoon.org/doc/536284/.

[5] (2002) 4 SCC 105, https://indiankanoon.org/doc/110552/.

[6] (2012) 9 SCC 552, https://indiankanoon.org/doc/173015163/.

[7] §§ 7, 11, supra note 4.

[8] §§ 16, 19, id.

[9] § 16(5), id.

[10] §§ 9(ii)(b), 4(b), id.

[11] § 17, id.

[12] § 31(6), id.

[13] Norton Rose Fulbright, Arbitration in technology disputes, International Law Office, (Nov. 9, 2017), https://www.internationallawoffice.com/Newsletters/Arbitration-ADR/International/Norton-Rose-Fulbright-US-LLP/Arbitration-in-technology-disputes

[14] Gary L. Benton, Technology Dispute Resolution Survey Highlights US and International Arbitration Perceptions, Misperceptions and Opportunities, Kluwer-Arbitration Blog, http://arbitrationblog.kluwerarbitration.com/2017/10/28/technology-dispute-resolution-survey-highlights-us-international-arbitration-perceptions-misperceptions-opportunities/

The Road to GDPR: Historical Context behind the European Data-Protection Laws

For the last few months, internet users have been receiving hundreds of emails and pop-ups from different websites about updates to their privacy policies. This is the formal process that most Europe-based firms and service providers have been completing in order to become compliant with the much-debated General Data Protection Regulation (GDPR). The European Union’s GDPR came into force on 25th May 2018, significantly upgrading the EU’s data protection regulatory framework. It is a regulatory enhancement of Directive 95/46/EC on Data Protection, adopted more than twenty years ago, which was centred on protecting individuals’ personal data in the era of the Internet’s early users and on governing the processing and free movement of such data. The Directive became the in-hand set of limitations directing internet service providers on the procedure to be followed before processing users’ personal information. Two decades on, the Internet is ubiquitous in our lives and its applications surround us everywhere. The GDPR’s requirements will therefore have a massive impact on the data-usage practices of both consumers and companies.

[Figure: timeline of the history of European data-protection law leading to the GDPR]

The GDPR is a much-discussed topic these days, with considerable confusion about what it does and does not cover. The debate over its acceptance grew more heated as a string of small and medium enterprises withdrew from the EU market or shut down operations entirely in order to avoid the hefty costs of compliance. Such events themselves show that the GDPR is a strict law. It is a far-reaching and multifaceted regulation, requiring companies to give consumers significant control over their personal data, including new individual rights such as the right to data portability and the right to be forgotten, alongside data-localisation requirements. Another stringent check on companies is the much-debated introduction of fines of up to €20 million or 4 percent of a company’s turnover for breaches of data privacy. This unarguably makes the EU a regulatory superpower, leading the pack on stricter data-protection regulation. Why is the EU so determined to impose rules strict enough to risk breaking the global internet into regional or national chunks? The seriousness of the penalties reflects a European approach to privacy that can be traced back, in large part, to the history of its members’ experiences with personal data being used for deeply wrong purposes. To understand the GDPR and the European approach to data protection, it is important to explore Europe’s dark past with respect to personal data.

The causes of this strict approach can be traced back to World War II-era Europe, when the Nazis in Germany systematically abused private data and personal information to build profiles of citizens and to identify Jews and other minority groups. Under the Nazi regime, the state’s control of the market brought with it control of information technology. Access to such data opened the door to census information indicating residents’ nationalities, native languages, religion, and profession. The punch cards used to record this information were fed into early data processors known as Hollerith machines, allegedly manufactured by IBM’s German subsidiary of the time, Deutsche Hollerith Maschinen GmbH (Dehomag), as recounted in the book IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America’s Most Powerful Corporation. The use of census data to create a database of personal profiles, on the basis of which sweeping discriminatory policies could be imposed, is a disturbing chapter in the dark history of the free movement of data.

The exploitation of private data did not end in Germany with the close of the Second World War; it continued in the East German state, first to keep track of those with Nazi sympathies and later, during the Cold War, suspected spies of the West German state. This was one of the first instances of mass surveillance by a state, carried out through the screening of private communications, periodic searches of houses, and similar measures. The state kept records on every aspect of people’s personal lives, from their friendships to their sexual habits. The Stasi, East Germany’s secret police force, became notorious for such practices. As the Stasi extended its surveillance across the border, West Germany responded: in 1970 the West German state of Hesse approved what is considered the country’s first modern data-privacy law, covering public-sector data. This was followed by the Federal Data Protection Act of 1977, designed to protect residents “against abuse in their storage, transmission, modification, and deletion” of personal data. Western Europe’s push on privacy matters eventually rendered the right to privacy a legal imperative in the Data Protection Convention (Treaty 108), adopted by the Council of Europe.

Such concerns about the exploitation of census data led to a landmark judgment of the German Federal Constitutional Court holding that the right of “self-determination over personal data” is a fundamental right; this later became a cornerstone of the EU’s present-day view. As European countries debated the importance of citizens’ personal data, Ireland introduced its first data protection legislation, the Data Protection Act of 1988, and many Commonwealth countries adopted comprehensive legislation of their own. The end of the Cold War coincided with a surge in data transfers across Europe in the 1990s, and the market migrating across the European continent became a threat to the personal data of citizens of individual European states. Therefore, as part of establishing the single market, the EU adopted the 1995 Data Protection Directive, and a cautious attitude towards privacy became a European norm. The European Data Protection Directive reflected the technological advances of the time and introduced new terms such as processing, sensitive personal data, and consent.

The 1995 Directive was supplemented when the EU adopted the Directive on Privacy and Electronic Communications in 2002. In 2006, the EU adopted the Directive on the retention of data generated or processed in connection with publicly available electronic communications services or public communications networks, although it was declared invalid by a Court of Justice ruling in 2014 for violating fundamental rights. By 2009, the EU’s electronic communications rules had been updated in response to email addresses and mobile numbers becoming prime currency for marketing and sales campaigns. Perhaps most famously, in 2014 Europe’s top court, the Court of Justice of the European Union, affirmed the so-called right to be forgotten and ruled that Google must abide by user requests to take down “data that appear to be inadequate, irrelevant or no longer relevant”; since then, Google has received 655,000 requests to remove about 2.5 million links and has complied with 43.3% of those requests (Google Spain SL, Google Inc. v Agencia Española de Protección de Datos, Mario Costeja González, ECLI:EU:C:2014:317).

Given this complex historical backdrop, European data-protection legislation is intuitively more appealing and faces less resistance there. Europe has always been the most active regime in enacting privacy protections that apply across all sectors of the economy, and within that legacy the GDPR is essentially a significant upgrade of the 1995 law. In the light of the Cambridge Analytica Facebook data breach and the Equifax hack, the upgrade is seen as a step that will reinforce consumer confidence by assuring the protection of personal data. Other instruments will need to be updated in alignment with the GDPR, such as the ePrivacy Directive and Regulation 45/2001, which applies to EU institutions when they process personal data. Member states may provide specific rules or derogations from the GDPR where freedom of expression and information is concerned, or in the context of employment law or the preservation of scientific or historical research.

Balancing Regulation and Innovation: GDPR and AI

The Artificial Intelligence (AI) sector promises an altogether new generation of technological advancement, highly disruptive and productive for Industry 4.0. AI is a constellation of technologies performing different cognitive functions, from data analysis to language learning, that help a machine approximate thought, experience and sense-making. Its core function is to analyse data and respond in accordance with the collected intelligence; in essence, AI provides a sui generis ability to analyse big-data applications in their various dimensions. AI is thus largely about computer-generated behaviour that would be considered intelligent in human beings. The concept of AI has existed for some time; what has changed is the rapid increase in computational power in industry (a phenomenon captured by Moore’s law),[i] which has brought the field to the point where the AI market is projected to surpass $100 billion by 2025.[ii] AI matters because it will transform how humans interact with technology, yielding broad societal benefits such as inventiveness, innovation and confidence.

For all the advancement AI will bring to industry, it also raises serious concerns for regulators across jurisdictions. One of the major concerns is AI’s appetite for large amounts of data and, consequently, its impact on data privacy. This makes regulators hesitant to allow AI start-ups to undertake any kind of large-scale activity built on AI technology. AI start-ups now face a major hurdle: the European Union’s General Data Protection Regulation (GDPR) is in effect. The GDPR, adopted in April 2016, reflects the EU’s intention to create a strengthened, integrated and unified data-privacy regime within the EU. Its primary aim is to give EU citizens more control over their personal data and its protection, providing a framework in which individuals are free to ask how companies or institutions are processing and storing their personal data. The full accountability to consumers that the GDPR demands makes data collection more difficult, directly affecting AI start-ups that depend on varied personal data for their machine-learning initiatives.

The specific limits the GDPR places on AI start-ups and services can be explained as a two-fold impact. First, where the processing of data has direct legal effects on the customer, such as credit applications, e-recruiting, or workplace monitoring, the GDPR will sharply limit the usefulness of AI for these purposes: Article 22 and Recital 71[iii] restrict solely automated decision-making and require an explicit legal basis, such as the customer’s explicit consent, before such processing can take place, slowing the functioning of the market. Second, the algorithms that AI developers use evolve on their own, eventually becoming difficult even for their creators to explain, and this combination of data and opaque logic becomes very complex to regulate.[iv]
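
To make the first limb concrete, the sketch below is a minimal, hypothetical illustration in Python of how a service might gate a solely automated decision with legal effects (here, a toy credit application) behind a recorded lawful basis, routing the case to human review when none exists. It is not a statement of what the GDPR requires in code; the names (`Customer`, `automated_credit_decision`, the scoring rule) are assumptions invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: an Article 22-style gate in front of a solely
# automated decision. Names, thresholds and logic are illustrative only.

@dataclass
class Customer:
    customer_id: str
    # Whether the customer recorded explicit consent to solely automated
    # decision-making for this specific purpose.
    explicit_consent_automated_decisions: bool = False

@dataclass
class Decision:
    approved: Optional[bool]   # None means "deferred to a human reviewer"
    made_by: str               # "model" or "human_review"
    reason: str                # explanation kept for the customer

def score_credit_application(income: float, debt: float) -> bool:
    """Toy scoring rule standing in for a real ML pipeline."""
    return income > 3 * debt

def automated_credit_decision(customer: Customer, income: float, debt: float) -> Decision:
    # Without a recorded lawful basis, do not decide automatically;
    # escalate the application to a human instead.
    if not customer.explicit_consent_automated_decisions:
        return Decision(approved=None, made_by="human_review",
                        reason="No explicit consent recorded; escalated to a human reviewer.")
    approved = score_credit_application(income, debt)
    return Decision(approved=approved, made_by="model",
                    reason="Income-to-debt ratio check on the application data.")

if __name__ == "__main__":
    alice = Customer("alice", explicit_consent_automated_decisions=True)
    bob = Customer("bob")  # no consent recorded
    print(automated_credit_decision(alice, income=60000, debt=10000))
    print(automated_credit_decision(bob, income=60000, debt=10000))
```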

The way out for AI start-ups appears to lie in organisational procedures that standardise the obtaining of consent and the governance of data within a well-structured data-management framework. To remain compliant with the GDPR while processing large volumes of data, AI developers should adopt a fixed policy of offering consumers an automated appeal: if a consumer is denied a service by an AI application, the developer should give that consumer a way to learn the reason for the decision and to contest it, as sketched below. It is worth remembering that humans created, modified and implemented AI technology, and humans can equally make it compliant with, and responsive to, the reasonable expectations of regulators. The GDPR is not an evil for AI applications; it is a regulatory framework within which AI, if it develops, will earn greater confidence from potential consumers.
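
As a companion to the earlier sketch, the following hypothetical Python extension shows one way a platform might record the reason for each adverse automated decision and expose a simple appeal mechanism so a human can re-examine it. The `AppealRegister` class and its methods are assumptions made for illustration, not an existing library or a prescribed compliance design.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List, Optional

# Hypothetical sketch of an appeal/explanation register for adverse
# automated decisions. Class and method names are illustrative only.

@dataclass
class DecisionRecord:
    decision_id: str
    customer_id: str
    outcome: str                         # e.g. "denied"
    reason: str                          # explanation given to the customer
    decided_at: datetime = field(default_factory=datetime.utcnow)
    appealed: bool = False
    appeal_resolution: Optional[str] = None

class AppealRegister:
    """Keeps adverse decisions reviewable and lets customers contest them."""

    def __init__(self) -> None:
        self._records: Dict[str, DecisionRecord] = {}

    def log_decision(self, record: DecisionRecord) -> None:
        self._records[record.decision_id] = record

    def explain(self, decision_id: str) -> str:
        # The explanation returned to the customer on request.
        return self._records[decision_id].reason

    def file_appeal(self, decision_id: str) -> None:
        # Marks the decision for re-examination by a human reviewer.
        self._records[decision_id].appealed = True

    def pending_appeals(self) -> List[DecisionRecord]:
        return [r for r in self._records.values()
                if r.appealed and r.appeal_resolution is None]

if __name__ == "__main__":
    register = AppealRegister()
    register.log_decision(DecisionRecord("d-1", "bob", "denied",
                                         "Income-to-debt ratio below the approval threshold."))
    print(register.explain("d-1"))
    register.file_appeal("d-1")
    print(len(register.pending_appeals()), "appeal(s) awaiting human review")
```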

[i] ICO, Big Data, Artificial Intelligence, Machine Learning and Data Protection, Information Commissioner’s Office, https://ico.org.uk/media/for-organisations/documents/2013559/big-data-ai-ml-and-data-protection.pdf.

[ii] Todd Wright and Mary Beth, The GDPR: An Artificial Intelligence Killer?, Datanami, https://www.datanami.com/2018/02/27/gdpr-artificial-intelligence-killer/.

[iii] David Roe, Understanding GDPR and Its impact on the Development of AI, CMS wire, https://www.cmswire.com/information-management/understanding-gdpr-and-its-impact-on-the-development-of-ai/.

[iv] David Meyer, AI Has a Big Privacy Problem and Europe’s New Data Protection Law Is About to Expose It, Fortune, http://fortune.com/2018/05/25/ai-machine-learning-privacy-gdpr/.

Understanding the ‘Technology of Regulation’: Regulating Scientific Advancements

Regulations are most often seen as adversaries of technological change. Technology’s role is to stimulate the growth of enterprises, markets, and industries, while the periodic regulations issued by government represent limits imposed on that growth. This has been the general conception of the regulation of technology since the 1970s, when the debate began over controlling nation-states’ pursuit of nuclear energy, supersonic transport, and food additives. Today the debate continues, as fears about technologies such as the dark web and genetically modified foods prompt calls for precautionary regulation. And to an extent, the conflict is unavoidable.

The dynamics induced by the technology revolution are credited with half or more of productivity growth. As Schumpeter pointed out, the process of ‘creative destruction’ by entrepreneurs who devise new ways of producing goods and services is potentially a far more potent source of progress than short-term price competition. Yet regulation can retard all three of Schumpeter’s stages of technological change: invention, innovation, and diffusion.

Not every negative in this story is about regulation, however. Anxiety mounts whenever there is talk of driverless cars, artificial intelligence, or social media, and regulation is seen as the only way to ease the uncertainties these technological changes will bring to people’s lives. These are not the views of legislators alone, but also of the people who are driving these technologies and of the people affected by them.

Is there a way to balance regulation and technology? The answer seems to lie in accepting a change in the technology of regulation itself. Regulations are still imposed in traditional ways, as though they were all of one type and operated in only one way. But just as there are many different types of technologies, there are many different types of regulations. Different regulatory instruments, such as technical requirements, performance standards, taxes, allowances, and information disclosure, can have very different effects on technological change and very different broader consequences.

One of the main reasons the present technology of regulation is not delivering the desired results is that state regulators do not dedicate the time, energy, or funding to regulation in the way that developers dedicate them to technology. The key to bringing the same creativity and inspiration into regulation, so that an incentivised approach is followed, is to allow private regulators to build the regulatory systems of the digital age.

The drivers of this shift are often the regulated companies themselves, looking to define a reasonably reliable playing field on which they and their competitors can meet. Private actors already regulate to a certain extent through the autonomy they exercise over their own terms and conditions, the ‘agreement’ that is the main source of corporate control. Another compelling reason for bringing private regulators into the game is that private entities are closer to what is happening, at ever-increasing speed, on the ground and in the cloud, and that will remain true for as long as they are the ones developing new technologies.

It is, however, very important that this cohort of private regulators be supervised. That gets the best of both worlds: regulation that follows an incentivised approach while remaining accountable to the government, and rules that market players can understand in very clear terms. The question of arbitrariness on the part of these regulators cannot creep in, because the licences to regulate will always remain in the hands of the government. At the same time, private regulators have to keep their regulatory clients satisfied by developing easier, less costly and more flexible ways of implementing regulatory controls.

The sooner we adopt this new technology of regulation and move beyond the idea that conventional regulation can handle the challenges of our powerful new technologies, the better. Regulating innovative and disruptive technologies is a futile exercise unless we figure out how to harness the power of markets, and new approaches to government accountability, to that task.

(This blog series will explore the areas of regulation that currently exist, and those that are still required, to strike a balance with particular scientific advancements. Suggestions and improvements from readers are welcome.)