Category Archives: Cyber and Technology
Swords and Shields: Navigating the Modern Intelligence Landscape
SAMIR SARAN | ARCHISHMAN RAY GOSWAMI
As key custodians of a nation’s strategic intent, national intelligence services must account for and adapt to the wider socio-cultural and political factors shaping their operational environment. Today, shifting geopolitical tides in the form of accelerated multipolarity, scientific progress, and the erosion of accountability in global technological governance have converged to reshape national intelligence strategies. This paper seeks to make sense of these changes by discussing key features of the shifting global intelligence landscape. These include factors such as the role of ‘geotechnography’ in blurring distinctions between offline and online experiences, the consequences of growing inter-state competition over rare-earth elements and supply chains, the evolving character of human intelligence (HUMINT) amid ubiquitous technical surveillance (UTS), and the role of private sector intelligence and Big Tech in a data-infused geostrategic terrain. The aim is to foster a discussion on how nations think about and use intelligence in changing times. It closes with an exploration of the implications of these changes for India’s national security.
Technology: Taming – and unleashing – technology together
Innovative approaches will require regulatory processes to include all stakeholders.
Technology has long shaped the contours of geopolitical relations – parties competed to out-innovate their opponents in order to build more competitive economies, societies and militaries. Today is different. With breakthroughs in frontier technologies manifesting at rapid rates, the question is not who will capture their benefits first but how parties can work together to promote their beneficial use and limit their risks.
The challenge: benefits of frontier technologies may be compromised by inequities and risks
The prolific pace of advancement of frontier technologies – artificial intelligence (AI), quantum science, blockchain, 3D printing, gene editing and nanotechnology, to name a few – and their pursuit by a multitude of state and non-state actors with varied motivations have opened a new chapter in contemporary geopolitics. For state actors, these technologies offer a chance to gain strategic and competitive advantage, while for malicious non-state actors, they present another avenue for pursuing destabilizing activities.
Emerging technologies have therefore added another layer to a fragmented and contested global political landscape. Besides shaping geopolitical dynamics, they are also transforming commonly held notions of power: beyond the traditional parameters of military and economic heft, a state's ability to control data and information, or to achieve a tech breakthrough, is becoming a primary determinant of its geopolitical influence.
These technologies also have significant socioeconomic implications. By some estimates, generative AI could add the equivalent of $2.6 trillion to $4.4 trillion to the global economy and boost labour productivity by 0.6% annually through 2040. Yet, simultaneously, the rapid deployment of these technologies has sparked concerns about job displacement and social disruption. These dynamics are triggering new geopolitical alignments as states seek to cooperate or compete in developing and using new technologies.
As frontier technologies take centre stage in global politics, they present a new challenge for international diplomacy. What can states do to stem the proliferation of frontier dual-use technologies into the hands of malicious actors who intend to cause harm? Can states look beyond their rivalries to conceive out-of-the-box solutions, or will they always be playing catch-up with tech advancements? What role should United Nations-led multilateral frameworks play in the global governance of these technologies, or will plurilateralism and club-lateralism trump them?
A new approach for governing frontier technologies
The historical evolution of global tech regimes offers important lessons for the challenges posed by frontier technologies today. During the Cold War, industrialized nations established export control regimes, such as the Nuclear Suppliers Group and the Missile Technology Control Regime, that sought to exclude certain countries by denying them various dual-use technologies. Those control regimes proved successful in curbing tech proliferation. However, with changing geopolitical realities, the same regimes began extending membership to previously excluded countries. This evolution offers a vital lesson: shedding the initial exclusivist approach in favour of broader membership helped the regimes retain their legitimacy.
Secondly, while the multilateral export control regimes succeeded, the nuclear non-proliferation regimes performed sub-optimally as they amplified the gap between nuclear haves and have-nots. This triggered resentment from the nuclear have-nots, who sought to chip away at the legitimacy of the regimes.
The key lesson for today is that the success of any tech-related proliferation control efforts is contingent on not accentuating existing technology divisions between the Global North and South.
The UN-led multilateral framework has focused on enhancing global tech cooperation through initiatives like the Secretary-General’s High-level Panel on Digital Cooperation. Yet while there has been little substantive progress at the global, multilateral level, bilateral and minilateral tech cooperation has thrived. Groupings such as the Quad, AUKUS and I2U2 that focus on niche tech cooperation present a possible pathway forward. They have demonstrated the value of like-minded partners coming together to realise a common vision and ambition. These arrangements also suggest that even as UN-led multilateral frameworks attempt to grapple with frontier technologies, minilaterals may provide the starting point for collaboration on governing their advancement.
To ensure that efforts at tech regulation succeed, countries will need to innovate in policy-making, with governments bringing all stakeholders on board – tech corporations, civil society, academia and the research community. The challenge posed in recent months by generative AI, through tools like deepfakes and natural language processing models like ChatGPT, has shown that unless these stakeholders are integrated into policy design, regulations will always be afterthoughts.
How to strengthen tech cooperation
The following are four proposals for strengthening global cooperation on frontier technologies:
– Develop the Responsibility to Protect (R2P) framework for emerging technologies: Similar to the R2P framework developed by the UN for protecting civilians from genocide, war crimes, ethnic cleansing and crimes against humanity, the international community must create a regulatory R2P obligation for states to protect civilians from the harms of emerging technologies. This obligation would entail three pillars: 1) the responsibility of each state to protect its populations from the misuse of emerging technologies, 2) the responsibility of the international community to assist states in protecting their populations from such misuse, and 3) the responsibility of the international community to take collective action to protect populations when a state is manifestly failing to protect its own people from such misuse. The specific measures needed will vary depending on the technologies involved and the risks they pose.
– Design a three-tier “innovation to market” roadmap: States must ensure responsible commercial application and dispersion of new technologies. One critical step is for states to design a three-tiered tech absorption framework comprising a regulatory sandbox (pilot testing in a controlled regulatory environment to assess collateral impact), city-scale testing and commercial application.
– Convene a standing Conference of the Parties for future tech: The Global South must convene a standing Conference of the Parties (COP) for future technologies along the lines of the COP for climate change negotiations. This body would meet annually, bringing together the multistakeholder community – national governments, international organizations and the tech community – to deliberate on new tech developments, present new innovations and reflect on related aspects of the dynamic tech ecosystem and its engagement with society and communities.
– Link domestic innovation ecosystems: Inter-connected national innovation ecosystems will ensure that like-minded countries can pool their finite financial, scientific and technological human resources to develop technologies. For instance, in the field of quantum science, the European Commission’s research initiative, the Quantum Flagship, has partnered with the United States, Canada and Japan through the InCoQFlag project. Likewise, the Quad has the Quad Center of Excellence in Quantum Information Sciences. This underlines the importance of prioritizing one of the frontier technologies and networking domestic innovation ecosystems to focus on its development, as no country alone can harness the deep potential of frontier technologies and mitigate the associated risks.
Technology as a tool of trust
Throughout history, technology has been the currency of geopolitics. Innovations have bolstered economies and armies, strengthening power and influence. Yet technology has also offered an opportunity to bind parties closer together. Today, at a time of heightened geopolitical risks, it is incumbent on leaders to pursue frameworks and ecosystems that foster trust and cooperation rather than division.
This essay is a part of the report Shaping Cooperation in a Fragmenting World.
AI, Democracy, and the Global Order
Future historians may well mark the second half of March 2023 as the moment when the era of artificial intelligence truly began. In the space of just two weeks, the world witnessed the launch of GPT-4, Bard, Claude, Midjourney V5, Security Copilot, and many other AI tools that have surpassed almost everyone’s expectations. These new AI models’ apparent sophistication has beaten most experts’ predictions by a decade.
For centuries, breakthrough innovations – from the invention of the printing press and the steam engine to the rise of air travel and the internet – have propelled economic development, expanded access to information, and vastly improved health care and other essential services. But such transformative developments have also had negative implications, and the rapid deployment of AI tools will be no different.
AI can perform tasks that individuals are loath to do. It can also deliver education and health care to millions of people who are neglected under existing frameworks. And it can greatly enhance research and development, potentially ushering in a new golden age of innovation. But it can also supercharge the production and dissemination of fake news; displace human labor on a large scale; and create dangerous, disruptive tools that are potentially inimical to our very existence.
Specifically, many believe that the arrival of artificial general intelligence (AGI) – an AI that can teach itself to perform any cognitive task that humans can do – will pose an existential threat to humanity. A carelessly designed AGI (or one governed by unknown “black box” processes) could carry out its tasks in ways that compromise fundamental elements of our humanity. After that, what it means to be human could come to be mediated by AGI.
Clearly, AI and other emerging technologies call for better governance, especially at the global level. But diplomats and international policymakers have historically treated technology as a “sectoral” matter best left to energy, finance, or defense ministries – a myopic perspective that is reminiscent of how, until recently, climate governance was viewed as the exclusive preserve of scientific and technical experts. Now, with climate debates commanding center stage, climate governance is seen as a superordinate domain that comprises many others, including foreign policy. Accordingly, today’s governance architecture aims to reflect the global nature of the issue, with all its nuances and complexities.
As discussions at the G7’s recent summit in Hiroshima suggest, technological governance will require a similar approach. After all, AI and other emerging technologies will dramatically change the sources, distribution, and projection of power around the world. They will allow for novel offensive and defensive capabilities, and create entirely new domains for collision, contest, and conflict – including in cyberspace and outer space. And they will determine what we consume, inevitably concentrating the returns from economic growth in some regions, industries, and firms, while depriving others of similar opportunities and capabilities.
Importantly, technologies such as AI will have a substantial impact on fundamental rights and freedoms, our relationships, the issues we care about, and even our most dearly held beliefs. With their feedback loops and reliance on our own data, AI models will exacerbate existing biases and strain many countries’ already tenuous social contracts.
That means our response must include numerous international accords. For example, ideally we would forge new agreements (at the level of the United Nations) to limit the use of certain technologies on the battlefield. A treaty banning lethal autonomous weapons outright would be a good start; agreements to regulate cyberspace – especially offensive actions conducted by autonomous bots – will also be necessary.
New trade regulations are also imperative. Unfettered exports of certain technologies can give governments powerful tools to suppress dissent and radically augment their military capabilities. Moreover, we still need to do a much better job of ensuring a level playing field in the digital economy, including through appropriate taxation of such activities.
As G7 leaders already seem to recognize, with the stability of open societies possibly at stake, it is in democratic countries’ interest to develop a common approach to AI regulation. Governments are now acquiring unprecedented abilities to manufacture consent and manipulate opinion. When combined with massive surveillance systems, the analytical power of advanced AI tools can create technological leviathans: all-knowing states and corporations with the power to shape citizen behavior and repress it, if necessary, within and across borders. It is important not only to support UNESCO’s efforts to create a global framework for AI ethics, but also to push for a global Charter of Digital Rights.
The thematic focus of tech diplomacy implies the need for new strategies of engagement with emerging powers. For example, how Western economies approach their partnerships with the world’s largest democracy, India, could make or break the success of such diplomacy. India’s economy will probably be the world’s third largest (after the United States and China) by 2028. Its growth has been extraordinary, much of it reflecting prowess in information technology and the digital economy. More to the point, India’s views on emerging technologies matter immensely. How it regulates and supports advances in AI will determine how billions of people use it.
Engaging with India is a priority for both the US and the European Union, as evidenced by the recent US-India Initiative on Critical and Emerging Technology (iCET) and the EU-India Trade and Technology Council, which met in Brussels this month. But ensuring that these efforts succeed will require a reasonable accommodation of cultural and economic contexts and interests. Appreciating such nuances will help us achieve a prosperous and secure digital future. The alternative is an AI-generated free-for-all.
Digital Public Infrastructure – lessons from India
Public infrastructure has been a cornerstone of human progress. From the transcontinental railways of the nineteenth century to telecommunication in the twentieth century, infrastructure has been vital to facilitating the flow of people, money and information. Building on top of public infrastructure, democratic countries with largely free markets have fostered public and private innovation and, in turn, created considerable value in their societies.
In the twenty-first century, technological innovation has created a tempest of ideological, geographical and economic implications that pose new challenges. The monopolisation of public infrastructure, which plagued previous generations, has manifested itself in the centralised nature of today’s digital infrastructure. It is increasingly evident that the world needs a third type of public infrastructure, following modes of transport such as ports and roads, and lines of communication such as the telegraph or telecom – but with open, democratic principles built in.
Digital Public Infrastructure (DPI) can fulfil this need, though it faces several challenges. There is a disturbing trend of the weaponisation of data and technology – or “Digital Colonisation” (Hicks, 2019) – resulting in a loss of agency, sovereignty and privacy. Therefore, proactively deliberating on how to build good DPI is key to avoiding such challenges.
To begin with, it is important to crystallise what DPI is and what it does. Put simply, foundational DPIs mediate the flow of people, money and information. First, the flow of people through a digital ID system. Second, the flow of money through a real-time fast payment system. And third, the flow of personal information through a consent-based data sharing system that actualises the benefits of DPIs and empowers the citizen with a real ability to control data. These three systems form the foundation for developing an effective DPI ecosystem.
India, through India Stack[1], became the first country to develop all three foundational DPIs: digital identity (Aadhaar[2]), real-time fast payments (UPI[3]) and a platform to safely share personal data without compromising privacy (the Account Aggregator framework built on the Data Empowerment and Protection Architecture, or DEPA) (Roy, 2020). Each DPI layer fills a clear need and generates considerable value across sectors.
However, like in the case of physical infrastructure, it is important that DPIs not succumb to monopolisation, authoritarianism and digital colonisation. This can only happen through a jugalbandi (partnership) of public policy and public technology, i.e., through a techno-legal framework. Techno-legal regulatory frameworks are used to achieve policy objectives through public-technology design. For example, India’s DEPA offers technological tools for people to invoke the rights made available to them under applicable privacy laws. Framed differently, this techno-legal governance regime embeds data protection principles into a public-technology stack.
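To make the idea of a techno-legal framework more concrete, below is a minimal, hypothetical sketch in Python of a consent artifact of the kind a DEPA-style data-sharing system might rely on: the data principal's permission is captured as a structured object that a data fiduciary's request must fall within before any personal data flows. The field names, checks and the example are illustrative assumptions for this essay, not DEPA's actual specification.

```python
# Minimal, hypothetical sketch of a DEPA-style consent artifact.
# Field names and checks are illustrative assumptions, not the actual DEPA spec.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentArtifact:
    user_id: str                 # the data principal granting consent
    requester_id: str            # the data fiduciary requesting access
    purpose: str                 # the stated purpose consent was given for
    data_categories: frozenset   # categories of personal data covered
    expires_at: datetime         # consent lapses after this time
    revoked: bool = False        # the user can withdraw consent at any time

    def permits(self, requester_id: str, purpose: str, categories: set) -> bool:
        """Return True only if this request falls within the granted consent."""
        return (
            not self.revoked
            and datetime.now(timezone.utc) < self.expires_at
            and requester_id == self.requester_id
            and purpose == self.purpose
            and categories <= self.data_categories
        )

# Example: a lender asks for bank-statement data to underwrite a loan.
if __name__ == "__main__":
    consent = ConsentArtifact(
        user_id="user-123",
        requester_id="lender-xyz",
        purpose="loan-underwriting",
        data_categories=frozenset({"bank-statements"}),
        expires_at=datetime(2030, 1, 1, tzinfo=timezone.utc),
    )
    print(consent.permits("lender-xyz", "loan-underwriting", {"bank-statements"}))  # True
    print(consent.permits("lender-xyz", "marketing", {"bank-statements"}))          # False
```

The point of the sketch is the design choice it illustrates: the legal principles of purpose limitation, time-bound consent and revocability are enforced in code at the moment of data exchange, rather than left to after-the-fact compliance.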
When aggregated, foundational DPIs constitute the backbone of a country’s digital infrastructure. These layers interface with each other to create an ecosystem that facilitates seamless public service delivery and allows businesses to design novel solutions on top of the DPI layers. In turn, this enables the creation of open networks on a scale not seen before. India is now developing such open networks for credit (the Open Credit Enablement Network[4]), commerce (the Open Network for Digital Commerce[5]), health services (the Unified Health Interface, or UHI[6]) and many more. When DPIs are integrated, they can generate network effects to create these open networks for various sectors.
Following India’s successful experiment, there is a desire across the world to replicate it (Kulkarni, 2022)[7]. Countries can choose from three potential models to mediate the flow of people, money and information: the DPI model, Web3 and the Big Tech model. Of these three, DPI has emerged as the most feasible model due to its low cost, interoperability and scalable design, and because of its safeguards against monopolies and digital colonisation.
For India’s DPI success to become a worldwide revolution, three types of institutions must be built. First, we need independent DPI steward institutions. It is important to have a governance structure that is agile and responsive. A multiparty governance process through independent DPI institutions will be accountable to a broad range of stakeholders rather than be controlled by a single entity or group. This can build trust and confidence in DPI. India has created the Modular Open Source Identity Platform (MOSIP[8]), adopted by nine nations and already serving more than 76 million active users. MOSIP is housed at the International Institute of Information Technology, Bangalore (IIITB), an independent public university. IIITB’s stewardship has been critical to MOSIP’s success.
Second, we need to develop global standards through a multilateral dialogue led by India. If standards originating from developed nations were transplanted into emerging economies’ contexts without deferring to their developmental concerns, smaller countries would simply be captive to dominant technology players. Additionally, without these standards, Big Tech would likely engage in regulatory arbitrage to concentrate power.
Finally, we need to develop sustainable financing models for developing DPI for the world. Currently backed by philanthropic funding, such models are at risk of becoming a tool of philanthropic competition and positioning.
The world needs a new playbook for digital infrastructure that mediates the flow of people, money and information. This will facilitate countries looking to digitally empower their citizens. They can then rapidly build platforms that address the specific needs of people, while ensuring people are able to trust and use the platform – without fear of exclusion or exploitation.
References
Hicks, J. (2019) ‘Digital Colonialism’: Why countries like India want to take control of data from Big Tech, ThePrint. (Accessed: January 25, 2023).
Kulkarni, S. (2022) Emerging economies keen to replicate India’s Digital Transformation: Kant, ETCIO. The Economic Times. (Accessed: February 3, 2023).
Roy, A. (2020) Data Empowerment and Protection Architecture: Draft for Discussion. NITI Aayog. (Accessed: January 25, 2023).
Notes:
[1] See https://indiastack.org
[2] See https://uidai.gov.in/en/
[3] See https://www.npci.org.in/what-we-do/upi/product-overview
[4] See https://indiastack.org/open-networks.html
[5] See https://ondc.org/
[6] See https://uhi.abdm.gov.in/
[7] See https://mosip.io/
[8] See https://mosip.io/
Accountable Tech: Will the US take a leaf out of the Indian Playbook?
2024 is a decisive year for democracy and the liberal order. 1.8 billion citizens in India and the United States, who together constitute nearly a quarter of the world’s population, are going to elect their governments in the very same year. This will be the first such instance in a world increasingly mediated and intermediated by platforms, which will be crucial actors shaping individual choices, voter preferences and, indeed, outcomes at these hustings. It is, therefore, important to recognise these platforms as actors and not just benign intermediaries.
Prime Minister Modi’s government, especially in its second term, has approached digital regulation with the objective of establishing openness, trust and safety, and accountability. In June this year, Union Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, invited public inputs on the draft amendments to the IT Rules 2021 with an ‘open, safe and trusted, accountable internet’ as the central area of focus.
This Indian aspiration for Accountable Tech must be an imperative for all liberal and open societies if we are to enrich the public sphere, promote innovation and inclusive participation, and indeed, defend democracy itself. If we fail to act now and act in unison, we could end up perverting the outcomes in 2024. Runaway platforms and cowboy capitalism are the big dangers to the sanctity of our elections and to citizens’ acceptance of political outcomes. India has clearly seen the need for accountability and is striving to make large tech companies answerable to the geographies they serve. The latest entrant that seems to have understood the importance of this is the United States of America.
On 8 September 2022, the White House convened a Listening Session on Tech Platform Accountability ‘with experts and practitioners on the harms that tech platforms cause and the need for greater accountability’. The session ‘identified concerns in six key areas: competition; privacy; youth mental health; misinformation and disinformation; illegal and abusive conduct, including sexual exploitation; and algorithmic discrimination and lack of transparency.’ Hopefully, this session will lead to a more contemporary regulatory and accountability framework that aligns with what is underway in India.
From private censorship and unaccountable conversations hosted by intermediaries to the propagation of polarised views, these trends constitute a clear and present danger to democracies, and certainly to India and the US, who are among the most plural, open, and loud digital societies. Digital India is indeed going to be ground zero for how heterogenous, diverse, and open societies co-exist online and in the real world. The efforts of the Indian government to put together sensible regulation may actually benefit many more geographies and communities. If India can create a model that works in the complex human terrain of India, variants of it would be applicable across the world.
It must also be understood that there is no single approach to manage platforms, even though there could be a wider and shared urge to promote openness, trust and safety, and accountability. The regulations that flow from this ambition are necessarily going to be contextual and country specific.
Hence, it is important that India, the US, and other large digital hubs coordinate and collaborate with each other to defend these universal principles even as they institute their own and region-specific regulations. For instance, policy architecture in the US will focus on managing platforms and technology companies operating under American law and consistent with their constitutional ethos. India, on the other hand, has the onerous task of ensuring that these same corporations adhere to Indian law and India’s own constitutional ethic.
India and the US lead the free world in terms of social media users. As of January 2022, India had 329.65 million Facebook, 23.6 million Twitter, and 487.5 million WhatsApp (June 2021) users, while the US had 179.65 million Facebook, 76.9 million Twitter, and 79.6 million WhatsApp (June 2021) users. The online world is no longer a negligible part of society. Most people online see the medium as additive to their agency and are keen to use it to further their views and influence others’ thinking. Many of them are also influencers in their own localities. What transpires online now has a population-scale impact: the mainstream media reads from it, and social media trends define the next day’s headlines and the debates on primetime television.
Thus, the notion that one can be casual in managing content on these platforms is no longer tenable and, as recent developments have shown, will have deleterious consequences. Intermediary liability, which sought to insulate platforms from societal expectations, needs to be transformed into a notion of intermediary responsibility. It must now become a positive and proactive accountability agenda in which platforms become a part of responsible governance and responsible citizenship.
Predictable regulation is also good for business, and policy arbitrage harms corporate planning; so platforms, too, have a stake in making their boardrooms and leadership accountable. They must make their codes and designs contextual and stop hiding behind algorithmic decision-making that threatens to harm everyone, including their own future growth prospects. And this must be the ambition as we head into 2024, the year when technology could decide the fate of the free world.
Digital democracies and virtual frontiers: How do we safeguard democracy in the 4IR?
Thank you Tom, and thank you Amb. Power for inviting me to speak on this important issue. Let me congratulate her and the US government for really beginning to respond to the challenges of intrusive tech. That has to be one of the issues that global civic actors, thinkers, and even political leaders must devote time and energy to. I think intrusive tech will take away the gains of the past.
Let me start with something Amb. Power mentioned. The Arab Spring powered by social media levelled the divide—even if briefly—between the palace and the street. The ‘Age of Digital Democracy’ possibly began then. Technology has become the mainstay of civic activism since. Not only are more voices heard in our parts of the world, even elected governments are also more responsive to them. Intrusive tech can undo the gains and I think Amb. Power was absolutely right in stressing on some of the issues that need to be addressed.
The past year has also made us aware of the threats to and weaknesses of digital democracies. First, the very platforms that have fuelled calls for accountability often see themselves as above scrutiny, bound not by democratic norms but by bottom lines. However, acquisition metrics and market valuations don’t sustain democracy too well. The contradiction between short-term returns on investment and the long-term health of a digital society is stark. When hate, violence, and falsehoods drive engagement, and therefore profits, for companies and platforms, our societies are indeed on shaky ground, and this is one area where we must intervene.
We must also ask: Are digital infrastructure and services proprietary products or are they public goods? The answer is obvious. To make technology serve democracy, tech regulations must be rethought. I think this debate needs to be joined. Big Tech boardrooms must be held to standards of responsible behaviour that match their power to persuade and influence. The framework must be geography-neutral. Rules that govern Big Tech in the North can’t be dismissed with a wink and a nudge in the South, and this is again something civic actors must emphasise.
Second – and this is important – much of Big Tech is designed and anchored in the US. Understandably, it pushes American—or perhaps Californian—free speech absolutism. This is in conflict with laws in most democracies—including in the US after the 6 January Capitol attack. While protecting free speech, societies seek safeguards to prevent undesirable consequences, especially violence. If American Big Tech wishes to emerge as global tech, it must adhere to democratic norms globally. Its normative culture must assimilate and reconcile, not prescribe and mandate. Absent such understanding, a clash of norms is visible and already upon us. It is going to erode our gains.
Finally, a key threat emanates from authoritarian regimes with technological capabilities. They seek to perversely influence open societies by weaponising the very freedoms they deny their own people. In their virtual world, Peng Shuai is free and happy; in their real world, she is under house arrest—a new meaning to the term ‘virtual reality’. Confronted by wolf warriors, the rest of us can’t be lambs to the slaughter.
Open societies have always defended their borders stoutly, and they must also safeguard the new digital frontlines. In 2024, the two most vibrant democracies will go to the polls in the same year for the first time in our digital age; we must not allow authoritarian states or their agents to manipulate public participation at the hustings.
As I conclude let me leave you with a thought: It’s not darkness alone that kills democracy; runaway technology, steeped in nihilism, could strangulate it. Just scroll down your social media feed this evening…
Just deserts? Western reportage of the second wave in India exposes deep schisms in relations with the East
Co-authored with Mr. Jaibal Naduvath
This article is a continuation of a previous article written by the authors, Revisiting Orientalism: Pandemic, politics, and the perceptions industry
In Lord Byron’s poem, Childe Harold’s Pilgrimage (1812), the protagonist Harold, contemplating the grandness of the Colosseum, imagines the condemned gladiator, dignified yet forlorn, butchered for the entertainment of a boisterous, bloodthirsty Roman crowd out on a holiday.
Public spectacles of suffering are integral to the discourse of power. The perverse imagery and messaging surrounding the suffering seek to intimidate and suppress the subaltern’s agency, perpetuating ethnic dominance and social control. They pivot around an elevated moral sense of the ‘self’. In his seminal work, When Bad Things Happen to Other People, John Portmann argues that it is not unusual to derive gratification from the suffering of the ‘other’, particularly when the native feels that the suffering or humiliation of the ‘other’ is deserved. The suffering then becomes fair recompense for transgressions real and imagined, and the accompanying sense of justice and closure brings forth feelings of gratification.
India is reeling in the aftermath of the second wave of COVID-19. As death reaps rich dividends cutting across class and covenant, the country is engaged in a determined fightback. The developments have made global headlines and, in equal measure, triggered global concern. Apocalyptic images of mass pyres and victims in their death throes, replete with tales of ineptitude, profiteering and callous attitudes, have made front-page news and dominated primetime television in much of the trans-Atlantic press, conforming to reductive stereotypes that have informed three centuries of relations with the Orient. The ‘self-inflicted’ suffering is then ‘fair recompense’.