Technology: Taming – and unleashing – technology together

Innovative approaches will require regulatory processes to include all stakeholders.

Technology has long shaped the contours of geopolitical relations – parties competed to out-innovate their opponents in order to build more competitive economies, societies and militaries. Today is different. With breakthroughs in frontier technologies manifesting at rapid rates, the question is not who will capture their benefits first but how parties can work together to promote their beneficial use and limit their risks.

The challenge: benefits of frontier technologies may be compromised by inequities and risks

The prolific pace of advancement of frontier technologies – artificial intelligence (AI), quantum science, blockchain, 3D printing, gene editing and nanotechnology, to name a few – and their pursuit by a multitude of state and non-state actors, with varied motivations, have opened a new chapter in contemporary geopolitics. For state actors, these technologies offer a chance to gain strategic and competitive advantage, while for malicious non-state actors, they present another avenue to persist with their destabilizing activities.

Emerging technologies have therefore added another layer to a fragmented and contested global political landscape. Besides shaping geopolitical dynamics, they are also transforming commonly held notions of power – going beyond the traditional parameters of military and economic heft to make a state's ability to control data and information, or to achieve a technological breakthrough, a primary determinant of its geopolitical influence.

These technologies also have significant socioeconomic implications. By some estimates, generative AI could add the equivalent of $2.6 trillion to $4.4 trillion to the global economy and boost labour productivity by 0.6% annually through 2040.[14] Yet, simultaneously, the rapid deployment of these technologies has sparked concerns about job displacement and social disruption. These dynamics are triggering new geopolitical alignments as states seek to cooperate or compete in developing and using new technologies.

As frontier technologies take centre stage in global politics, they present a new challenge for international diplomacy. What can states do to stem the proliferation of frontier dual-use technologies into the hands of malicious actors who intend to cause harm? Can states look beyond their rivalries to conceive out-of-the-box solutions, or will they always be playing catch-up with tech advancements? What role should the United Nations-led multilateral frameworks play in the global governance of these technologies, or will plurilateralism and club-lateralism trump them?

A new approach for governing frontier technologies

The historical evolution of global tech regimes offers important lessons for the challenges posed by frontier technologies today. During the Cold War, industrialized nations established export control regimes, such as the Nuclear Suppliers Group and the Missile Technology Control Regime, that sought to exclude certain countries by denying them access to sensitive dual-use technologies. Those control regimes proved successful in curbing tech proliferation. As geopolitical realities changed, however, the same regimes began extending membership to previously excluded countries. This offers a vital lesson: shedding the initial exclusivist approach in favour of extending membership helped the regimes retain their legitimacy.

Secondly, while the multilateral export control regimes succeeded, the nuclear non-proliferation regimes performed sub-optimally as they amplified the gap between nuclear haves and have-nots. This triggered resentment from the nuclear have-nots, who sought to chip away at the legitimacy of the regimes.

The key lesson for today is that the success of any tech-related proliferation control efforts is contingent on not accentuating existing technology divisions between the Global North and South.

The UN-led multilateral framework has focused on enhancing global tech cooperation through initiatives like the Secretary-General's High-level Panel on Digital Cooperation. Yet while there has been little substantive progress at the global, multilateral level, bilateral and minilateral tech cooperation has thrived. Groupings such as the Quad, AUKUS and I2U2 that focus on niche tech cooperation present a possible model for the pathway forward.[15] They have demonstrated the value of like-minded partners coming together to realise a common vision and ambition. These arrangements also suggest that even as the UN-led multilateral frameworks attempt to grapple with frontier technologies, minilaterals may provide the starting point for collaboration on governing them.

To ensure that efforts at tech regulation succeed, countries will need to innovate in policy-making, with governments taking on board all stakeholders – tech corporations, civil society, academia and the research community. The challenge posed in recent months by generative AI, through tools like deepfakes and natural language processing models like ChatGPT, has shown that unless these stakeholders are integrated into policy design, regulations will always be afterthoughts.

How to strengthen tech cooperation

The following are four proposals for strengthening global cooperation on frontier technologies:

– Develop the Responsibility to Protect (R2P) framework for emerging technologies: Similar to the R2P framework developed by the UN for protecting civilians from genocide, war crimes, ethnic cleansing and crimes against humanity, the international community must create a regulatory R2P obligation for states to protect civilians from the harms of emerging technologies. This obligation would rest on three pillars: 1) the responsibility of each state to protect its population from the misuse of emerging technologies, 2) the responsibility of the international community to assist states in protecting their populations from such misuse, and 3) the responsibility of the international community to take collective action when a state is manifestly failing to protect its own people from such misuse. The specific measures needed will vary with the technologies involved and the risks they pose.

– Design a three-tier “innovation to market” roadmap: States must ensure responsible commercial application and dispersion of new technologies. One critical step is for states to design a three-tiered tech absorption framework comprising a regulatory sandbox (pilots tested in a controlled regulatory environment to assess collateral impact), city-scale testing and commercial application.

– Convene a standing Conference of the Parties for future tech: The Global South must convene a standing Conference of the Parties (COP) for future technologies along the lines of the COP for climate change negotiations. This body would meet annually, bringing together the multistakeholder community – national governments, international organizations and the tech community – to deliberate on new tech developments, present new innovations and reflect on the dynamic tech ecosystem and its engagement with society and communities.

– Link domestic innovation ecosystems: Interconnected national innovation ecosystems will ensure that like-minded countries can pool their finite financial, scientific and technological human resources to develop technologies. For instance, in the field of quantum science, the European Commission's research initiative, the Quantum Flagship, has partnered with the United States, Canada and Japan through the InCoQFlag project. Likewise, the Quad has the Quad Center of Excellence in Quantum Information Sciences. This underlines the value of prioritizing one frontier technology and networking domestic innovation ecosystems around its development, as no country alone can harness the deep potential of frontier technologies and mitigate the associated risks.

Technology as a tool of trust

Throughout history, technology has been the currency of geopolitics. Innovations have bolstered economies and armies, strengthening power and influence. Yet technology has also served as an opportunity to bind parties closer together. Today, at a time of heightened geopolitical risks, it is incumbent on leaders to pursue frameworks and ecosystems that foster trust and cooperation rather than division.


This essay is a part of the report Shaping Cooperation in a Fragmenting World.


AI, Democracy, and the Global Order

Future historians may well mark the second half of March 2023 as the moment when the era of artificial intelligence truly began. In the space of just two weeks, the world witnessed the launch of GPT-4, Bard, Claude, Midjourney V5, Security Copilot, and many other AI tools that have surpassed almost everyone's expectations. The apparent sophistication of these new AI models has outpaced most experts' predictions by a decade.

For centuries, breakthrough innovations – from the invention of the printing press and the steam engine to the rise of air travel and the internet – have propelled economic development, expanded access to information, and vastly improved health care and other essential services. But such transformative developments have also had negative implications, and the rapid deployment of AI tools will be no different.

AI can perform tasks that individuals are loath to do. It can also deliver education and health care to millions of people who are neglected under existing frameworks. And it can greatly enhance research and development, potentially ushering in a new golden age of innovation. But it can also supercharge the production and dissemination of fake news; displace human labor on a large scale; and create dangerous, disruptive tools that are potentially inimical to our very existence.

Specifically, many believe that the arrival of artificial general intelligence (AGI) – an AI that can teach itself to perform any cognitive task that humans can do – will pose an existential threat to humanity. A carelessly designed AGI (or one governed by unknown “black box” processes) could carry out its tasks in ways that compromise fundamental elements of our humanity. After that, what it means to be human could come to be mediated by AGI.

Clearly, AI and other emerging technologies call for better governance, especially at the global level. But diplomats and international policymakers have historically treated technology as a “sectoral” matter best left to energy, finance, or defense ministries – a myopic perspective that is reminiscent of how, until recently, climate governance was viewed as the exclusive preserve of scientific and technical experts. Now, with climate debates commanding center stage, climate governance is seen as a superordinate domain that comprises many others, including foreign policy. Accordingly, today’s governance architecture aims to reflect the global nature of the issue, with all its nuances and complexities.

As discussions at the G7’s recent summit in Hiroshima suggest, technological governance will require a similar approach. After all, AI and other emerging technologies will dramatically change the sources, distribution, and projection of power around the world. They will allow for novel offensive and defensive capabilities, and create entirely new domains for collision, contest, and conflict – including in cyberspace and outer space. And they will determine what we consume, inevitably concentrating the returns from economic growth in some regions, industries, and firms, while depriving others of similar opportunities and capabilities.

Importantly, technologies such as AI will have a substantial impact on fundamental rights and freedoms, our relationships, the issues we care about, and even our most dearly held beliefs. With their feedback loops and reliance on our own data, AI models will exacerbate existing biases and strain many countries' already tenuous social contracts.

That means our response must include numerous international accords. For example, ideally we would forge new agreements (at the level of the United Nations) to limit the use of certain technologies on the battlefield. A treaty banning lethal autonomous weapons outright would be a good start; agreements to regulate cyberspace – especially offensive actions conducted by autonomous bots – will also be necessary.

New trade regulations are also imperative. Unfettered exports of certain technologies can give governments powerful tools to suppress dissent and radically augment their military capabilities. Moreover, we still need to do a much better job of ensuring a level playing field in the digital economy, including through appropriate taxation of such activities.

As G7 leaders already seem to recognize, with the stability of open societies possibly at stake, it is in democratic countries’ interest to develop a common approach to AI regulation. Governments are now acquiring unprecedented abilities to manufacture consent and manipulate opinion. When combined with massive surveillance systems, the analytical power of advanced AI tools can create technological leviathans: all-knowing states and corporations with the power to shape citizen behavior and repress it, if necessary, within and across borders. It is important not only to support UNESCO’s efforts to create a global framework for AI ethics, but also to push for a global Charter of Digital Rights.

The thematic focus of tech diplomacy implies the need for new strategies of engagement with emerging powers. For example, how Western economies approach their partnerships with the world’s largest democracy, India, could make or break the success of such diplomacy. India’s economy will probably be the world’s third largest (after the United States and China) by 2028. Its growth has been extraordinary, much of it reflecting prowess in information technology and the digital economy. More to the point, India’s views on emerging technologies matter immensely. How it regulates and supports advances in AI will determine how billions of people use it.

Engaging with India is a priority for both the US and the European Union, as evidenced by the recent US-India Initiative on Critical and Emerging Technology (iCET) and the EU-India Trade and Technology Council, which met in Brussels this month. But ensuring that these efforts succeed will require a reasonable accommodation of cultural and economic contexts and interests. Appreciating such nuances will help us achieve a prosperous and secure digital future. The alternative is an AI-generated free-for-all.


Digital Public Infrastructure – lessons from India

Public infrastructure has been a cornerstone of human progress. From the transcontinental railways of the nineteenth century to the telecommunications networks of the twentieth, infrastructure has been vital to facilitating the flow of people, money and information. Building on top of public infrastructure, democratic countries with largely free markets have fostered public and private innovation and, in turn, generated considerable value for their societies.

In the twenty-first century, technological innovation has created a tempest of ideological, geographical and economic implications that pose new challenges. The monopolisation of public infrastructure, which plagued previous generations, has manifested itself in the centralised nature of today's digital infrastructure. It is increasingly evident that the world needs a third type of public infrastructure, following modes of transport such as ports and roads, and lines of communication such as the telegraph or telecom – but with open, democratic principles built in.

Digital Public Infrastructure (DPI) can fulfil this need, though it faces several challenges. There is a disturbing trend of the weaponisation of data and technology – or “Digital Colonisation” (Hicks, 2019) – resulting in a loss of agency, sovereignty and privacy. Therefore, proactively deliberating on how to build good DPI is key to avoiding such challenges.

The monopolisation of public infrastructure, which plagued previous generations, has manifested itself in the centralised nature of today’s digital infrastructure.

To begin with, it is important to crystallise what DPI is and what it does. Put simply, foundational DPIs mediate the flow of people, money and information. First, the flow of people through a digital ID system. Second, the flow of money through a real-time fast payment system. And third, the flow of personal information through a consent-based data-sharing system that actualises the benefits of DPIs and empowers citizens with a real ability to control their data. These three systems become the foundation for developing an effective DPI ecosystem.

India, through India Stack[1], became the first country to develop all three foundational DPIs: digital identity (Aadhaar[2]), real-time fast payments (UPI[3]) and a platform to safely share personal data without compromising privacy (the Account Aggregator framework, built on the Data Empowerment Protection Architecture, or DEPA) (Roy, 2020). Each DPI layer fills a clear need and generates considerable value across sectors.
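
To make the layering concrete, here is a minimal sketch, in illustrative Python, of how a service might compose the three foundational DPIs: verifying an identity, fetching consented data and settling a payment. Every class, function and field name below is hypothetical – this is not India Stack's actual API, only a toy model of the division of labour described above.

```python
# Toy model of the three foundational DPI layers. All names are hypothetical;
# this is not the actual Aadhaar / UPI / Account Aggregator interface.

from dataclasses import dataclass


@dataclass
class IdentityLayer:
    """Stands in for a digital ID system (the flow of people)."""
    registry: dict  # id_number -> registered name

    def verify(self, id_number: str, name: str) -> bool:
        # A real system would rely on biometric or OTP-based verification.
        return self.registry.get(id_number) == name


@dataclass
class PaymentsLayer:
    """Stands in for a real-time fast payment system (the flow of money)."""
    balances: dict  # account -> balance

    def transfer(self, payer: str, payee: str, amount: float) -> bool:
        if self.balances.get(payer, 0) < amount:
            return False
        self.balances[payer] -= amount
        self.balances[payee] = self.balances.get(payee, 0) + amount
        return True


@dataclass
class DataSharingLayer:
    """Stands in for a consent-based data-sharing system (the flow of information)."""
    records: dict   # id_number -> personal data
    consents: set   # (id_number, requester, purpose) tuples granted by the citizen

    def fetch(self, id_number: str, requester: str, purpose: str) -> dict:
        # Data moves only if the citizen has granted consent for this purpose.
        if (id_number, requester, purpose) not in self.consents:
            raise PermissionError("no consent on record for this purpose")
        return self.records[id_number]


def open_account(identity, payments, data_sharing, id_number, name, bank):
    """A hypothetical service composed on top of the three layers."""
    if not identity.verify(id_number, name):
        return "identity check failed"
    try:
        profile = data_sharing.fetch(id_number, requester=bank, purpose="KYC")
    except PermissionError:
        return "customer has not consented to share data"
    # For simplicity, the citizen's payment account is keyed by the same ID.
    if not payments.transfer(payer=id_number, payee=bank, amount=10.0):
        return "initial deposit failed"
    return f"account opened for {profile['name']}"
```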

However, as with physical infrastructure, it is important that DPIs not succumb to monopolisation, authoritarianism and digital colonisation. This can only happen through a jugalbandi (partnership) of public policy and public technology, i.e., through a techno-legal framework. Techno-legal regulatory frameworks achieve policy objectives through public-technology design. For example, India's DEPA offers technological tools for people to invoke the rights made available to them under applicable privacy laws. Framed differently, this techno-legal governance regime embeds data protection principles into a public-technology stack.
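
One way to picture the techno-legal idea – legal rights enforced by the design of the stack itself – is a consent artefact whose structure encodes purpose limitation, expiry and revocability. The sketch below uses a deliberately simplified, assumed schema for illustration; it is not the actual DEPA specification.

```python
# Illustrative, simplified consent artefact in the spirit of the techno-legal
# approach described above: purpose limitation, expiry and revocability are
# enforced by the data structure itself. This is not the actual DEPA spec.

from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class ConsentArtefact:
    data_principal: str   # the citizen whose data is being shared
    data_consumer: str    # the entity requesting the data
    purpose: str          # declared purpose, e.g. "loan underwriting"
    data_types: list      # categories of data covered, e.g. ["bank_statement"]
    expires_at: datetime
    revoked: bool = False

    def revoke(self) -> None:
        """The citizen can withdraw consent at any time."""
        self.revoked = True

    def permits(self, consumer: str, purpose: str, data_type: str) -> bool:
        """Data flows only within the consented purpose, scope and time window."""
        return (
            not self.revoked
            and consumer == self.data_consumer
            and purpose == self.purpose
            and data_type in self.data_types
            and datetime.now() < self.expires_at
        )


# Usage: a lender may read a bank statement for underwriting, but nothing else.
consent = ConsentArtefact(
    data_principal="citizen-001",
    data_consumer="lender-042",
    purpose="loan underwriting",
    data_types=["bank_statement"],
    expires_at=datetime.now() + timedelta(days=30),
)
assert consent.permits("lender-042", "loan underwriting", "bank_statement")
assert not consent.permits("lender-042", "marketing", "bank_statement")
consent.revoke()
assert not consent.permits("lender-042", "loan underwriting", "bank_statement")
```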

When aggregated, foundational DPIs constitute the backbone of a country's digital infrastructure. These layers interface with each other to create an ecosystem that facilitates seamless public service delivery and allows businesses to design novel solutions on top of the DPI layers. In turn, this enables the creation of open networks on a scale not seen before. India is now developing such open networks for credit (the Open Credit Enablement Network[4]), commerce (the Open Network for Digital Commerce[5]), health services (UHI[6]) and many more. When DPIs are integrated, they can generate network effects to create these open networks across sectors.

Following India’s successful experiment, there is a desire across the world to replicate it (Kulkarni, 2022)[7]. Countries can choose from three potential models to mediate the flow of people, money and information: the DPI model, Web3 and the Big Tech model. Of these three, DPI has emerged as the most feasible model due to its low cost, interoperability and scalable design, and because of its safeguards against monopolies and digital colonisation.

For India's DPI success to become a worldwide revolution, three types of institutions must be built. First, we need independent DPI steward institutions. It is important to have a governance structure that is agile and responsive. A multiparty governance process through independent DPI institutions will be accountable to a broad range of stakeholders rather than be controlled by a single entity or group. This can build trust and confidence in DPI. India has created the Modular Open Source Identity Platform (MOSIP[8]), adopted by nine nations and with more than 76 million active users already. MOSIP is housed at the International Institute of Information Technology, Bangalore (IIITB), an independent public university. IIITB's stewardship has been critical to MOSIP's success.

Secondly, we need to develop global standards through a multilateral dialogue led by India. If standards originating in developed nations were transplanted to emerging economies without deferring to their developmental concerns, smaller countries would simply be captive to dominant technology players. Additionally, without such standards, Big Tech would likely engage in regulatory arbitrage to concentrate power.

Finally, we need to develop sustainable financing models for developing DPI for the world. Currently backed by philanthropic funding, such models are at risk of becoming a tool of philanthropic competition and positioning.

The world needs a new playbook for digital infrastructure that mediates the flow of people, money and information. This will help countries looking to digitally empower their citizens. They can then rapidly build platforms that address the specific needs of their people, while ensuring people are able to trust and use the platforms – without fear of exclusion or exploitation.

References

Hicks, J. (2019) ‘Digital Colonialism’: Why countries like India want to take control of data from Big Tech. ThePrint. (Accessed: January 25, 2023).

Kulkarni, S. (2022) Emerging economies keen to replicate India’s Digital Transformation: Kant. ETCIO, The Economic Times. (Accessed: February 3, 2023).

Roy, A. (2020) Data Empowerment and Protection Architecture: Draft for Discussion. NITI Aayog. (Accessed: January 25, 2023).

Notes:


[1] See https://indiastack.org

[2] See https://uidai.gov.in/en/

[3] See https://www.npci.org.in/what-we-do/upi/product-overview

[4] See https://indiastack.org/open-networks.html

[5] See https://ondc.org/

[6] See https://uhi.abdm.gov.in/

[7] See https://mosip.io/

[8] See https://mosip.io/


Accountable Tech: Will the US take a leaf out of the Indian Playbook?

2024 is a decisive year for democracy and the liberal order. 1.8 billion citizens in India and the United States, who together constitute nearly a quarter of the world's population, are going to elect their governments in the very same year. This will be the first such instance in a world increasingly mediated and intermediated by platforms, which will be crucial actors shaping individual choices, voter preferences and, indeed, outcomes at these hustings. It is, therefore, important to recognise these platforms as actors and not just benign intermediaries.

Prime Minister Modi's government, especially in its second term, has approached digital regulation with the objective of establishing openness, trust and safety, and accountability. In June this year, the Union Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, invited public inputs on the draft amendments to the IT Rules 2021, with an 'open, safe and trusted, accountable internet' as the central area of focus.

This Indian aspiration for Accountable Tech must be an imperative for all liberal and open societies if we are to enrich the public sphere, promote innovation and inclusive participation, and indeed, defend democracy itself. If we fail to act now and act in unison, we could end up perverting the outcomes in 2024. Runaway platforms and cowboy capitalism are the big dangers to the sanctity of our elections and to citizens' acceptance of political outcomes. India has clearly seen the need to act and is striving to make large tech companies accountable to the geographies they serve. The latest to appear to have understood the importance of this is the United States of America.

On 8 September 2022, the White House convened a Listening Session on Tech Platform Accountability ‘with experts and practitioners on the harms that tech platforms cause and the need for greater accountability’. The session ‘identified concerns in six key areas: competition; privacy; youth mental health; misinformation and disinformation; illegal and abusive conduct, including sexual exploitation; and algorithmic discrimination and lack of transparency.’ Hopefully, this session will lead to a more contemporary regulatory and accountability framework that aligns with what is underway in India.

From private censorship and unaccountable conversations hosted by intermediaries to the propagation of polarised views, these trends constitute a clear and present danger to democracies, and certainly to India and the US, which are among the most plural, open and loud digital societies. Digital India is indeed going to be ground zero for how heterogenous, diverse and open societies co-exist online and in the real world. The efforts of the Indian government to put together sensible regulation may actually benefit many more geographies and communities. If India can create a model that works in the complex human terrain of India, variants of it would be applicable across the world.

It must also be understood that there is no single approach to manage platforms, even though there could be a wider and shared urge to promote openness, trust and safety, and accountability. The regulations that flow from this ambition are necessarily going to be contextual and country specific.

Hence, it is important that India, the US, and other large digital hubs coordinate and collaborate with each other to defend these universal principles even as they institute their own and region-specific regulations. For instance, policy architecture in the US will focus on managing platforms and technology companies operating under American law and consistent with their constitutional ethos. India, on the other hand, has the onerous task of ensuring that these same corporations adhere to Indian law and India’s own constitutional ethic.

India and the US lead the free world in terms of social media users. As of January 2022, India had 329.65 million Facebook, 23.6 million Twitter and 487.5 million WhatsApp (June 2021) users, while the US had 179.65 million Facebook, 76.9 million Twitter and 79.6 million WhatsApp (June 2021) users. The online world is no longer a negligible part of society. Most people online see the medium as additive to their agency and are keen to use it to further their views and influence others' thinking. Many of them are also influencers in their own localities. What transpires online now has a population-scale impact. The mainstream media reads from it; social media trends define the next day's headlines and the debates on primetime television.

Thus, a casual approach to managing content on these platforms is no longer tenable and, as recent developments have shown, will have deleterious consequences. Intermediary liability, which sought to insulate platforms from societal expectations, needs to be transformed into a notion of intermediary responsibility. It must now become a positive and proactive accountability agenda, where platforms become part of responsible governance and responsible citizenship.

Predictable regulation is also good for business, and policy arbitrage harms corporate planning; so platforms, too, have a stake in making their boardrooms and leadership accountable. They must make their codes and designs contextual and stop hiding behind algorithmic decision-making that threatens to harm everyone, including their own future growth prospects. And this must be the ambition as we head into 2024 – the year when technology could decide the fate of the free world.


Digital democracies and virtual frontiers: How do we safeguard democracy in the 4IR?

Thank you, Tom, and thank you, Amb. Power, for inviting me to speak on this important issue. Let me congratulate her and the US government for really beginning to respond to the challenges of intrusive tech. That has to be one of the issues that global civic actors, thinkers and even political leaders must devote time and energy to. I think intrusive tech will take away the gains of the past.

Let me start with something Amb. Power mentioned. The Arab Spring, powered by social media, levelled the divide – even if briefly – between the palace and the street. The 'Age of Digital Democracy' possibly began then. Technology has become the mainstay of civic activism since. Not only are more voices heard in our parts of the world; elected governments are also more responsive to them. Intrusive tech can undo these gains, and I think Amb. Power was absolutely right in stressing some of the issues that need to be addressed.

The past year has also made us aware of the threats to, and weaknesses of, digital democracies. First, the very platforms that have fuelled calls for accountability often see themselves as above scrutiny, bound not by democratic norms but by bottom lines. However, acquisition metrics and market valuations don't sustain democracy well. The contradiction between short-term returns on investment and the long-term health of a digital society is stark. When hate, violence and falsehoods drive engagement – and therefore profits – for companies and platforms, our societies are indeed on shaky ground, and this is one area where we must intervene.

We must also ask: are digital infrastructure and services proprietary products or are they public goods? The answer is obvious. To make technology serve democracy, tech regulations must be rethought. I think this debate needs to be joined. Big Tech boardrooms must be held to standards of responsible behaviour that match their power to persuade and influence. The framework must be geography neutral. Rules that govern Big Tech in the North can't be dismissed with a wink and a nudge in the South, and this is again something civic actors must emphasise.

Second – and this is important – much of Big Tech is designed and anchored in the US. Understandably, it pushes American – or perhaps Californian – free speech absolutism. This is in conflict with laws in most democracies, including in the US after the 6 January Capitol attack. While protecting free speech, societies seek safeguards to prevent undesirable consequences, especially violence. If American Big Tech wishes to emerge as global tech, it must adhere to democratic norms globally. Its normative culture must assimilate and reconcile, not prescribe and mandate. Absent such understanding, a clash of norms is visible and already upon us. It is going to erode our gains.

Finally, a key threat emanates from authoritarian regimes with technological capabilities. They seek to perversely influence open societies by weaponising the very freedoms they deny their own people. In their virtual world, Peng Shuai is free and happy; in their real world, she is under house arrest—a new meaning to the term ‘virtual reality’. Confronted by wolf warriors, the rest of us can’t be lambs to the slaughter.

Open societies have always defended their borders stoutly, and they must also safeguard the new digital frontlines. In 2024, the two most vibrant democracies will go to elections in the same year for the first time in our digital age; we must not allow authoritarian states or their agents to manipulate public participation at the hustings.

As I conclude let me leave you with a thought: It’s not darkness alone that kills democracy; runaway technology, steeped in nihilism, could strangulate it. Just scroll down your social media feed this evening…


Just deserts? Western reportage of the second wave in India exposes deep schisms in relations with the East

Co-authored with Mr. Jaibal Naduvath

This article is a continuation of a previous article written by the authors, Revisiting Orientalism: Pandemic, politics, and the perceptions industry

In Lord Byron's poem Childe Harold's Pilgrimage (1812), the protagonist Harold, contemplating the grandness of the Colosseum, imagines the condemned gladiator, dignified yet forlorn, butchered for the entertainment of a boisterous, blood-lusty Roman crowd out on a holiday.

Public spectacles of suffering are integral to the discourse of power. The perverse imagery and messaging surrounding the suffering seek to intimidate and suppress the subaltern's agency in order to perpetuate ethnic dominance and social control. It pivots around an elevated moral sense of the 'self'. In his seminal work, When Bad Things Happen to Other People, John Portmann argues that it is not unusual to derive gratification from the suffering of the 'other', particularly when the native feels that the suffering or humiliation of the 'other' is deserved. The suffering then becomes fair recompense for transgressions real and imagined, and the accompanying sense of justice and closure brings forth feelings of gratification.

India is reeling in the aftermath of the second wave of COVID-19. As death reaps rich dividends cutting across class and covenant, the country is engaged in a determined fightback. The developments have made global headlines and, in equal measure, triggered global concern. Apocalyptic images of mass pyres and victims in their death throes, replete with tales of ineptitude, profiteering and callous attitudes, have made front-page news and primetime television in much of the trans-Atlantic press, conforming to reductive stereotypes that have informed three centuries of relations with the Orient. The 'self-inflicted' suffering is then 'fair recompense'.


Amid changing post-pandemic realities, India needs to be swift in identifying partners whom it can trust

We will see the rise of a New World Order driven by national interest, reliability of partners, and of course, economic factors. India has to use a “Gated Globalisation” framework to negotiate this change.

The COVID-19 vaccines are coming. And along with this sanjivani comes a new age of geopolitics. The vaccines are varied, with different price points and affordability. Nations have secured their vaccine supplies from countries and companies they trust, often by forging new alliances. The scepticism over the vaccines from China and Russia shows that trust is the operative word in the post-pandemic era – and it is not limited to the choice of vaccines.

As we enter the third decade of the 21st century, a multipolar world awaits us. The US and China — rivals for the top slot till the pandemic hit the world — will now have to contend with traditional and rising powers like the UK, France, India and Brazil. Each country will engage with others selectively, not in every arena. We will see the rise of a New World Order driven by national interest, reliability of partners, and of course, economic factors. India has to use a “Gated Globalisation” framework to negotiate this change.

The security landscape will continue to drive partnerships, but these will no longer be omnibus alliances. India is locked in a confrontation with China in the Himalayas. The US and its traditional allies are ramping up their presence in the Western Pacific. A new Great Game is underway in the Indo-Pacific where the Quad is emerging. The Middle East is in a deep churn as Israel and Arabs discover Abrahamic commonalities. Europe is caught in a struggle to retain its values amid the diversity it has acquired over the years.

The Gated Globalisation framework requires that India should protect its interests in these unsettled times. Strong fences are necessary, but so is the creation of new partnerships (like the Quad) based on trust and common interests. Gated Globalisation has no place for parlour games like “non-alignment”; it will test the tensile strength of “strategic autonomy”. The need for a new coalition was felt after Doklam and has become a necessity post-Ladakh: India needs friends in deed.

Beyond security, like everybody else, India has to make partnership choices on trade, capital flows and the movement of labour. The WTO’s multilateral trading arrangements have frayed beyond repair. Whether it is RCEP in the Indo-Pacific, the new version of NAFTA called USMCA, or the reconfigured EU, countries will have to decide whether they belong inside these gated trading arrangements. India has chosen to stay out of RCEP and the UK has left the EU.

India's restrictions on trade with China have been accompanied by restrictions on capital flows from it. But this does not preclude enhanced capital flows from new partners. To prevent the inflow of illicit funds, India has barred capital from poorly regulated jurisdictions. India's capital account for investment is largely open while its current account is carefully managed. Similarly, while restricting debt flows, India is open to equity flows from friendly countries. Other nations have similar policies. By managing capital flows in this manner, countries have enabled tighter financial relationships within their gated communities while shutting out those who are inimical to their interests.

India's global diaspora now numbers over 30 million and sends home more in remittances (about $80 billion a year) than the country receives in foreign capital inflows. The pandemic-enforced work-from-home may see the creation of new pools of skilled workers, living in virtual gated communities, further enhancing income from jobs physically located elsewhere. Moreover, the Indian diaspora is increasingly shaping policy in countries like the US, UK and Australia, where it has contributed politicians and technocrats, innovators and influencers, billionaires and cricket captains.

Finally, technology flows and standards will also define gated communities. The internet is already split between China and everyone else. The Great Firewall of China has shut out many of the big tech players like Google, Facebook and Netflix. Instead, China has its Baidu, Alibaba and Tencent. With the advent of 5G technology, this split will get deeper and wider over issues of trust and integrity.

The EU-crafted General Data Protection Regulation (GDPR) is an excellent example of Gated Globalisation. The EU has set the terms of engagement; those who do not comply will be kept out. The Indian law on data protection that is currently being discussed follows a similar sovereign route. However, it remains to be seen whether these norms can checkmate China’s massive digital surveillance apparatus.

There are other emerging arenas that will likely become the focus of big power competition. China has moved in all possible directions to develop its global strategies. It has linked its national security interests with its Belt-and-Road Initiative and its debt programmes. It is offering a package deal of 5G technology with new telecom networks. China has learnt well from the US.

Amid these rapidly changing post-pandemic realities, India has to be swift in identifying partners whom it can trust and who will help protect and further its national interests. Ambiguity, lethargy and posturing will not do.


On the Cusp of Digital History: Nine Lessons for the Future

As we step into a year of uncertainties after a disruptive year of the pandemic, there is only one universal certitude: 2021 will witness the increasing adoption of technology as innovation gathers extraordinary speed. Clearly, our digital future is exciting, but it is hazy too. There are galactic black holes; and even that which is visible is overwhelming.

Despite acknowledging the need for critical discourse, our pace of enquiry, examination and action has been lethargic and out of step with the motivations of coders hardwiring our future through soft interventions. They are changing economies, societies, politics and, indeed, the very nature of humanity at an astonishing speed and with far-reaching consequences.

Nations that effectively respond to the advent of the Digital Era will be in the vanguard of the Fourth Industrial Revolution and will emerge stronger as the 21st century approaches high noon. Others will suffer the adverse consequences of the coming digital disruptions.

At the turn of the decade, Delhi hosted a stellar set of thinkers and speakers at the annual CyFy conference organised by the Observer Research Foundation, which focused on technology, security and society. Here are nine takeaways from the debates and discussions that threw up a kaleidoscope of scintillating ideas.

1. CHINA’S DIGITAL VICTORY PARADE

What the US accomplished in the 20th century, China has set out to achieve in the 21st. The first takeaway from the CyFy debates is that China's surge will continue, and it will profoundly change the world order. The US and its partners are witnessing the inexorable rise of an authoritarian digital power, with the COVID-19 pandemic emboldening Beijing to tighten its surveillance and suppression networks – bolstered by big data, facial recognition, et al.

The China Electronics Technology Group Corporation (CETC), a defence contractor, for instance, pitches such future applications as detecting 'abnormal behaviour' on surveillance cameras or among online streamers, and reporting such detections to law enforcement agencies.[i] Several regimes around the world are attracted to these Chinese offerings, which enable them to control their citizens.

Meanwhile, the old Atlantic Consensus is in total disarray. Europe is intent upon carving out its niche in emerging technologies while promoting new technology champions to challenge American tech dominance. After taking over the European Union presidency, Germany called for the expansion of digital sovereignty as the leitmotif of the EU's digital policy.[ii] A new Digital Services Act may fundamentally alter intermediary liability and mark a new milestone in digital rights and freedoms.[iii]

Across the Atlantic, the US has made its fear of China Tech apparent but is yet to initiate a coherent effort to build an influential digital alliance as a sustained response to China’s relentless digital expansionism. Which brings us to the central geopolitical question: Can the US and Atlantic nations, currently marred by divisions and domestic disquiet, get their act together to respond to this emergence? Authoritarian tech is at the gates: Does the West have the resolve to respond? Will a new Administration in Washington, DC herald a new and meaningful approach? Or will America continue to turn inwards?

China will not offer any negotiated space. Beijing's offer will be binary, and so will be the outcomes. It is, therefore, imperative for a club of technology-savvy countries to come together if liberalism is to be preserved in our digital century.

2. END OF MULTILATERALISM AND THE RISE OF CLUBS OF STATES

To say that the international order is failing and floundering is not to state anything startlingly new; it's only to underscore the bleakness of the global reality. However, like the proverbial silver lining, there is a degree of optimism around the role and centrality of smaller groupings. Regional partnerships, alliances of democracies, and plurilateral arrangements between nations with focused engagements and specific purpose platforms are seen to be important in these turbulent times. This is best exemplified by Australia, India and Japan – who, with an eye towards China and propelled by their shared interest in a free, fair, inclusive, non-discriminatory and transparent trade regime, are banding together for a Supply Chain Resilience Initiative.[iv]

These small groupings, built around shared but limited objectives, are a lifeline for dying multilateralism. The Year of the Pandemic and its resultant disruptions have left the world with few options. One of them is to begin rebuilding multilateralism with smaller groups of countries with aligned interests. Hopefully, over time, this will lead to an efficient, inclusive international order.

India, Japan and Australia have taken on the responsibility abdicated by the US of building a resilient, vibrant, secure technology network in the Indo-Pacific. The role of the EU, ASEAN (more difficult due to deep divisions) and democracies in the Indo-Pacific in defending and strengthening norms and laws associated with technology and politics was elaborated loudly and clearly at CyFy.

States matter and the leadership of individual nations will have to drive the global arrangements that will best serve this century. While dialogue with geopolitical adversaries remains critical, meaningless consensus-driven multilateral approaches are not viable in a world fundamentally fractured along political, economic and ideological fault lines. We need action, not pious declarations. Given the pace at which emerging technologies are evolving, organisations like the UN are too slow, unwieldy and politically compromised to have any significant impact.

3. GLITCH IN GLOBALISATION

In the post-COVID-19 era, globalisation as we have known it will be in tatters, yet decoupling will be more difficult than before. There is a simplistic assumption that you can decouple your digital world from the real world. This is not so. If you exclude entities from your digital platforms, it will be difficult to sustain traditional trade in goods and services with them. Commerce and connectivity of the future will have a different texture.

As economic growth, national identities and digital technologies collide, “Gated Globalisation” will be the new mantra. With interdependence no longer fashionable, supply chains will be shaped by rising national security concerns. Increasingly, cross-border flows of data, human capital and emerging technologies are viewed as vulnerabilities. A focus on autonomy and indigenous capabilities has accompanied growing incidents of cross-border cyber operations and cyberattacks.

Commerce may be conditioned on norms along the lines of what the General Data Protection Regulation (GDPR) seeks to do with the digital economy. The Blue Dot Network and supply chain initiatives may all end up creating layers of permissions and permits that will create toll plazas on digital freeways. The digital domain was built on the assumption of hyper interconnectedness. Will it be able to grow with mushrooming policy barriers?

4. UNCHARTED TERRITORY: BETWEEN TECHNOLOGY AND STATE

A new and fascinating dynamic is rapidly emerging between democracies and technologies, raising an interesting question: If a democratic state tames technologies, can democracy survive? This question has been posed by Marietje Schaake, the International Policy Director at the Stanford Cyber Policy Center.[v] Technology is being co-opted into a 'techno-nationalist' narrative: the melding of a country's national interest with its technological capabilities while excluding 'others'. This techno-nationalist narrative often emanates from tech giants who are increasingly speaking in the state's protectionist language. Mark Zuckerberg's written statement ahead of US Congressional anti-trust hearings was couched in the language of protecting the core American values of openness and fairness, as opposed to China's (authoritarian) vision.[vi]

The corollary to that is equally true and prompts another question: Can democracies survive if they do not regulate technologies? The isolating and polarising effects of social media, for instance, have already resulted in a slew of analysts chanting the dirge for democracy.[vii] The answers to these questions are unclear, but it is certainly true that the protection of the public sphere, the integrity of political regimes, and the robustness of conversations must be common aspirations should we want democracy to survive and strengthen.

Be it regulations, education, incentives, ethics or norms, we will have to dig deeper into our toolbox to come up with answers that would allow this to happen. Currently, the negative impacts of technology on our evolving and fractured societies are threatening to overwhelm its promise and potential. Can a new regulatory compact emerge that negotiates the digital ethics for corporations, communities and governments? This decade will witness an unspoken contest over writing this new code of ethics. It remains to be seen whose code will prevail and, more importantly, for what purpose.

5. UNACCOUNTABLE TECH AND CORPORATE BOARDROOMS

We cannot overlook the changes that the relationship between big companies, technology and societies has undergone. If successive anti-trust actions in the US, EU, Australia, India and elsewhere are any indication,[viii] accountable boardrooms are now an expectation and will soon be a reality; the shape they take will be defined by the debates taking place around the world. We can be certain that in the coming years, corporate governance is not going to be the same.

Large companies, having dominance and influence, will need to be more responsive to the communities they serve. The blueprint of new corporate governance cannot but be influenced by the needs of the locality; the nature of the framework will have to be contextual and culturally sensitive. Since mammoth corporations determine our very agency and choice, it is part of their fiduciary duty to ensure that the interests of the company and the community are ethically aligned.

Outside corporate boardrooms, we cannot ignore the role of coders and programmers in Bengaluru, Silicon Valley, Tel Aviv and other tech hubs. As we become increasingly reliant on software, can we let coders be the new cowboys of the Wild West, without any accountability? As we get further entangled in the intricate web of algorithms, it has become clear that we need to demystify them. No more black-box responses, no more unaccountable algorithms. What we need are programmers who are held responsible for the impact their code makes on people's lives. We need algorithms that are not only transparent but also seen to be so.

6. THE PANDEMIC & DIGITAL SOCIETIES

The pandemic has made us reassess our approach to life and behaviour. We consume, we communicate, and we integrate using technology. Nearly a year on, COVID-19 has not only furthered technology's invasion of our lives but also brought to the fore new realities, especially regarding privacy. The deepening concern over privacy is intertwined with the change in the ownership of data. The pandemic provided the pretext to alter the role played by big corporations and the control of the state over technological devices, products and services.[ix]

The digitalisation of our day-to-day lives may enable an unprecedented level of personalised oversight over individual behaviour. In its mildest form, this can be ‘libertarian paternalism’, a nudging predicated on the belief that individual choices are rarely made on the basis of complete information and are instead a product of psychological biases. At the other end of the spectrum, the ‘gamification’ of citizenship under this new paradigm would be the ultimate realisation of the Hobbesian social contract, whereby the Leviathan would be entrenched in every aspect of citizens’ lives.

In order to retain the ownership of data and individual autonomy, all these changes must be accompanied by the strengthening of our resolve to defend individual choice, freedom and rights by formulating adequate laws that would ensure that the values we create serve us, the people.

7. REWRITING THE SOCIAL CONTRACT

The world needs a new social contract – a digital social contract. The pandemic has thrown the old workplace order into a state of flux, thereby reopening the debate surrounding the provision of the three Ps – paycheck, protection and purpose – to individuals. The equivalent of 475 million full-time jobs vanished in the second quarter of 2020,[x] and many others found themselves without health insurance and other benefits typically linked to work contracts at their time of greatest need. To 'build back better', the new order is being shaped by new terms of contract and employment, concepts of social protection and a minimum wage for all, and the altered role of the state, big tech and individuals. The global shift towards virtual workspaces also provides an opportunity to induct a more diverse workforce, especially individuals from historically marginalised communities. However, we need to take note of the challenges that might accompany these changes – such as ensuring safe, inclusive digital workspaces, keeping pace with ever-changing technology, meeting the demand for human skills, and coping with the displacement of jobs. As we move to a more 'virtual first' work environment, we need to make sure that nobody is left behind.

Meaningful engagement with vulnerable communities necessarily involves outreach by governments as well as large technology firms, both of whom have benefited from the data of these communities. It is, therefore, the responsibility of both to build bridges with the communities that would be most vulnerable to the disruptive impact of the technologies they build and benefit from. We must take advantage of this moment to forge technology that will be in service of humanity—taking ‘people-centered innovation’ from a buzzword to actual practice.

8. OUTLASTING THE VIRUS: INFODEMIC AND I

An intense battle is being waged against the Infodemic, which is running parallel to the battle against the Pandemic. Misinformation, the darkest shade of grey in the Chrome Age, is now being used to destabilise businesses and political systems, and dissolve the social cohesion shared by individuals. "Misinformation costs lives",[xi] and the Infodemic has led to countless preventable deaths.

No amount of digital distancing is helping curb the spread of fake news. This emergence of a highly polarised information system should be effectively countered by a new guarantor of the public domain. No single agent can ever ensure the integrity of the global information system. The answer, therefore, lies in the coming together of all the three important actors—the state, big tech and the public.

The state should denounce disinformation and simultaneously promote high-quality content. Big tech can devise algorithms to filter out such misinformation, curtail the financial incentives it generates, and display a higher sense of responsibility. Indeed, if platforms can display in other jurisdictions the same energy and responsiveness they showed during the US elections, we may have some hope for a tenable solution.

Finally, the public should broaden their information base by consulting a variety of sources, reading before posting on social media, and exposing and reporting fake news. It is only through the realisation of collective responsibility that we can hope to find a ‘vaccine’ for the Infodemic.

9. #TECHFORGOOD – A RAY OF HOPE

In a gloomy landscape of various shades of grey, we are at last beginning to see some light and some white. The emergence of a technology moment in which communities are beginning to find their voices and change the course of their future provides a glimmer of hope. Across the world, especially in Asia and Africa, people are discovering, nurturing and shaping new aspirations and goals for themselves by using technology. The African Union has highlighted the need to diversify, develop and assert ownership over its digital society and economy.[xii] Community data has transformed from a fringe idea to a mainstream policy debate, receiving a nod, for instance, in India’s Non-Personal Data Governance Framework.[xiii]

Even as the pandemic upended our lives, we saw governments deploying technology for the greater social good; we saw businesses respond to it with extreme ingenuity; and we also saw women seizing this moment and retaining agency.

The post-pandemic era offers us an opportunity to build a more diverse and inclusive digital order. We can, and must, redefine diversity and support minorities and women to play a key role as a new world emerges from the debris of the war on COVID-19. The world today once again stands on the cusp of history. It cannot afford to fail in laying a new foundation that is free of the frailties of the past.

Content Modernisation, Cyber and Technology, Free Speech, Freedom and Expression, India, media and internet, tech and media

Social media platforms can’t be a law unto themselves

The US elections are witnessing heated, contested, loud and aggressive debates, with media (new and old) donning visibly partisan robes. One such media report has sought to pull India and Indians into the middle of the Trump vs Biden campaign battle.

First, there is a simple fact that often eludes social media platforms like Twitter and Facebook as well as the easily outraged proponents of the newly-minted ‘cancel culture’. Social media platforms may have their own terms and rules of engagement and content monitoring; it is their prerogative. But those terms and rules cannot—and, more importantly, must not—be allowed to supersede the law as framed by sovereign states, especially democracies, where these platforms operate.

For example, the law as it exists in India, based on its constitutional and penal provisions, overrides terms and rules framed by social media companies based anywhere in the world, operating within its territory. After all, if these companies ensure compliance with Indian law in order to conduct business, equal compliance would be in order for content monitoring and management. There cannot be two separate arrangements.

It is necessary to underscore this point to unclutter the debate triggered by an article in the Wall Street Journal, which imputed that Facebook had shown extraordinary preference for the BJP on account of the political bias of some of its employees. Facts do not bear out the newspaper’s contention, which is of a piece with what is termed ‘cancel culture’, where only those views endorsed by a ‘select few’ may be allowed a public platform. The ensuing shouting match needs to be countered with a tempered view of the more substantial issue: a platform’s self-assumed supreme right and absolute authority to decide what can and cannot be said.

It is quite obvious that the upcoming US presidential election and the attempt to coerce platforms into becoming an extension of the campaign for or against the incumbent Administration—which is no secret within and outside social media corporate offices—instigated the outrage against Facebook’s alleged political bias in India. It is a proxy that prepares the ground for what lies ahead.

But since it involves India, and Indian users of Facebook and all other platforms, let us locate this debate within the Indian context and focus on three crucial questions surrounding platform accountability and compliance.

First, what is the Indian consensus on what constitutes freedom of expression online? It certainly must not be whatever Facebook, Twitter, et al deem fit. The Constitution of India guarantees its citizens certain fundamental rights that cannot be encroached upon except under special circumstances, and then only by the state, not by an external agency. Article 19 (2) qualifies freedom of speech and expression with “reasonable restrictions … in the interests of the sovereignty and integrity of India, the security of the state, friendly relations with foreign states, public order, decency or morality or in relation to contempt of court, defamation or incitement to an offence.” More often than not, these restrictions go unenforced unless there is a specific complaint.

India’s judiciary has intervened to either uphold or strike down these restrictions depending on their application. In other words, there are no specific, watertight definitions guiding Article 19 (2). By and large, all speech, unless it violates India’s penal code, is held to be free. Perceptions and legal positions have changed over time. For instance, the writings of DH Lawrence, once held to be ‘obscene’ by the courts, are now part of university curricula. Barring the early years of the Republic, the restriction on speech that may harm relations with ‘friendly countries’ has never been imposed. The Supreme Court’s Hindutva judgement has elasticised political speech. ‘Hate speech’ is broadly defined as speech that promotes enmity between communities, while ‘violent speech’ is understood to be speech that calls for, or poses an imminent threat of, violence. There are separate laws to deal with both, and judgements to guide their application.

Now, for some recent examples of free speech, social media responses and their consequences. Last week, P Naveen, a relative of Congress MLA Akhanda Srinivasa Murthy in Karnataka, posted content—in response to another post—which was deemed ‘anti-Islam’ by a mob that ran riot, burning down the legislator’s house and ransacking two police stations. The police opened fire; four people lost their lives. It could be argued that had Facebook been alert and ‘unbiased’ in its content monitoring, it would have pulled down the provocative post that prompted Naveen’s response, thus nipping all mischief in the bud. But since the first post remained untouched, could that be imputed to Facebook’s bias towards a specific community and against another? At which point does accountability come in?

Also last week, Twitter locked the account of JNU professor and well-known public figure Anand Ranganathan for posting a verse from the Quran that called for “punishment” of those who “abuse Allah and his messenger”, in the context of the Bengaluru riots. Twitter said that Ranganathan’s account had been locked because his tweet violated the platform’s rules, which explicitly state: “You may not promote violence against, threaten, or harass other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.” How can something that can be freely stated, in writing or verbally, in Indian media—because there is no legal bar on it—be out of bounds on Twitter? Do Twitter’s rules supersede Indian laws?

Which brings us to the second question: What should be the operating law for social media platforms? Is it the law of India, or is it the law of the country where the parent company is based? Or, more ominously, is it the ‘law’ as framed by the company, regardless of the laws of either the parent or the host country? With respect to the brouhaha over Facebook and its alleged bias, were company officials pushing back against Indian law, or were they pushing back against a presumptive ‘law’ to (selectively) govern speech globally? It is this grey area in which social media platforms operate that creates fertile ground for mischief, and indeed for political capture and gaming. It is imperative that there be no ambiguity in this regard, especially because it concerns the rights of citizens.

The third big question that arises is: What recourse do we have when someone – an individual, an institution, or the state – appears to have crossed the line into hate speech? Will the errant entity or individual be held to account by law or by dissent? As of now, it is unclear exactly which authority we are appealing to. Without a framework to approach and remedy such cases, Indian social media users will remain subject to the whims and fancies of content platforms, which will arbitrarily decide for themselves what the ‘common good’ is, often with scant regard for the law of the land. For instance, if Twitter were to ban the account of an Opposition leader, it would prompt cries of censorship. On the other hand, if a ruling party leader were banned, would the platform be seen as acting under pressure?

It is for the state, a sovereign entity, to respond (in keeping with its extant laws) if there is a speech violation, intentionally or otherwise. Platforms like Facebook and Twitter cannot be allowed to arrogate to themselves this role and render the sovereign power of the state meaningless. As mentioned earlier, the Constitution allows only for the State to restrict citizens’ rights in extraordinary situations, and even that is open to judicial scrutiny. If we were to allow Facebook, Twitter and other social media platforms to have the executive power to infringe upon our freedoms, it would create space and scope for them to be used as political tools by those in power or out of power. That would be against all canons of national sovereignty and fly in the face of freedom.

Therefore, social media platforms cannot, and must not, interfere in the domestic debates and political processes of a sovereign country. If the cause and effect of a social media post are limited to a local jurisdiction, content platforms cannot, and must not, get involved, except to provide evidence to aid law enforcement if called upon by the courts. If the effect of social media content is transnational, then international agreements between the countries concerned may be used for law-enforcement purposes.

The noise over Facebook’s alleged bias aside, we are talking about the sanctity, integrity and safety of societies, communities and countries. Sovereign states, more so in democracies, are responsible for these areas of public life and are accountable to national institutions as well as citizens. Social media and content platforms, however, are accountable to nobody but their boards. They are far removed from the concerns of the user whose time and patronage they actively solicit.

Corporate governance cannot be restricted to the non-digital world; it must now extend to digital platforms. Platforms should be held answerable for their decisions, and those decisions should be actionable; they cannot be unfettered from the law of the land where they operate. Everything else is superfluous.
