
Editorial 1: Research security should be a national priority

Context

Policymakers must focus on strengthening research security as a part of the broader science and technology strategy in India.

 

Introduction

As India aims to achieve its development objectives by 2047, the government has placed emphasis on the role of science and technology in strategic and emerging sectors. Investment in cutting-edge technologies is essential to stay globally competitive, address societal challenges and unlock economic opportunities. Like many other nations, India is building an innovation ecosystem to harness the transformative power of these technologies. However, along with this intensification of research and development (R&D) arises a new challenge — research security.

 

Risks in the Evolving Geopolitical Landscape

  • Collaboration and knowledge exchange: While collaboration and the free exchange of knowledge are fundamental to scientific progress, there are new risks in the rapidly evolving geopolitical landscape.
  • Emerging threats: Foreign interference, intellectual property theft, insider threats, cyberattacks, and unauthorized access to sensitive information are concerns for countries investing in advanced technologies.
  • Impact on India’s strategic sectors: If left unaddressed, these threats could undermine India’s progress in strategic sectors.

 

Importance of Research Security

  • Research security: Research security, in this context, refers to safeguarding scientific research from threats to confidentiality, economic value, or national interest.
  • India’s strategic technologies: India is ramping up investments in strategic technologies, which include space, defence, semiconductors, nuclear technology, cybersecurity, biotechnology, clean energy, artificial intelligence, and quantum technology.
  • Protection of research outputs: So, ensuring strategic research outputs remain protected is critical.
  • Consequences of security breaches: Any breach of security could compromise national interests, delay technological advancements, and expose sensitive data to exploitation by foreign actors.

 

Strengthening Research Security

  • Focus on research security: Policymakers must focus on strengthening research security as a part of India’s broader science and technology strategy.
  • Comprehensive protection: This involves a concerted effort to protect sensitive data, intellectual property, research infrastructure, and personnel.
  • Preventing espionage, sabotage, and foreign influence: Preventing espionage, sabotage, and adversarial foreign influence is essential to safeguard India’s R&D investment.

 

The Global Landscape and the China Factor

  • Research security breaches: The issue of research security is not far-fetched, as there have been several cases of research security breaches around the world with serious consequences.
  • Harvard University case: In a well-known case, a senior professor at Harvard University was arrested, along with two Chinese nationals, for failing to disclose links to Chinese funding while also receiving funding from the U.S. Department of Defense.
  • COVID-19 vaccine research: In another case, COVID-19 vaccine research facilities were subject to cyberattacks in 2020 to steal sensitive vaccine research and development data.
  • European Space Agency (ESA): The European Space Agency (ESA) has also suffered several cyberattacks to sabotage or steal sensitive information, prompting ESA to develop a partnership with the European Defence Agency on cybersecurity.

 

Global Responses to Strengthen Research Security

  • U.S. CHIPS and Science Act: Such incidents have prompted several countries to develop policies and guidelines to strengthen research security. The U.S. CHIPS and Science Act has several provisions on research security, which are complemented by other guidelines, including the research security framework of the National Institute of Standards and Technology.
  • Canada’s National Security Guidelines: Canada has come up with National Security Guidelines for Research Partnerships and a Policy on Sensitive Technology Research and Affiliations of Concern, along with a list of sensitive technologies.
  • Collaboration restrictions: Canada has identified research institutions — primarily from China, Iran, and Russia — with which collaborations should be avoided.
  • European Council’s approach: The European Council’s recommendation takes a different approach, based on the principles of sectoral self-governance, a risk-based and proportionate response, and country-agnostic regulations. It underlines the need to establish a centre of expertise on research security and highlights research security guidelines for Horizon Europe, the EU’s primary research funding programme.

 

Military-Civil Fusion and Strategic Implications

  • Military-civil fusion strategy: Several of these initiatives are partly responses to the military-civil fusion strategy of the Chinese Communist Party, which promotes dual-use technology, technology transfer, funding, and foreign collaborations.
  • Nexus between defence and research: There is a close nexus between China’s defence industry, universities, and research institutions to develop and share strategic research and technologies between the civilian and military sectors.

 

Promoting research security in India

Limited attention in academia and government: Unfortunately, the concept of research security has received little attention in academic circles and government policymaking, leading to vulnerabilities that adversarial actors could exploit.

Mapping Security Vulnerabilities

  • Mapping vulnerabilities: The first step would be to systematically map the security vulnerabilities in our research ecosystem.
  • Understanding foreign influence: This would involve understanding the nature of foreign influence in our universities.
  • Assessing vulnerabilities of research labs and infrastructure: It would also mean assessing the vulnerabilities of key research labs and sensitive research infrastructure.
  • Analysing foreign collaborations and funding: Foreign collaborations and funding in strategic technologies would need to be analysed.
  • Reviewing personnel hiring and access control: Personnel hiring and access control practices would have to be reviewed to identify possible insider threats in crucial research facilities.

 

Role of Government Agencies and Research Institutions

  • Deliberation on securing research: For this, government agencies and research institutions need to deliberate on possible steps to make strategic research more secure while avoiding over-regulation.
  • Engagement with trusted international partners: Further, engagement with trusted international partners could be explored for the initial capacity building and awareness-raising in this area.

 

Concrete Steps for Research Security

  • Engagement of security and intelligence agencies: Concrete steps would require security and intelligence agencies to engage with researchers and develop an understanding of the sensitive research areas.
  • Classification of research: This would also necessitate classifying research into different categories based on strategic value, possible economic impact, and national security implications.
  • Development of a research security framework: On this basis, a research security framework could be developed to provide institutions and researchers with clear guidelines.

 

Risk-Based and Proportionate Response

  • European Council approach: A risk-based and proportionate response approach similar to the one recommended by the European Council could be considered as it seeks to avoid over-regulation while reducing security risks.
  • Research security surveillance mechanism: A research security surveillance mechanism would also need to be developed to keep tabs on emerging risks.

 

Challenges for Research Security

  • In-principle and practical challenges: There are several in-principle and practical challenges for research security.
  • International collaboration in science: Science is inherently international and collaborative in nature, and international collaborations are crucial drivers of scientific progress.
  • Opposition from researchers: Research security seeks to restrict certain funding and collaborations, which researchers may oppose as infringing on academic freedom and hindering scientific progress.
  • Balancing with open science: Research security would also have to find a balance with open science, which includes sharing of research infrastructure, open data, and involving the general public in scientific research via citizen science.
  • Promotion of open science: Rightfully, open science is promoted by governments, funding agencies, science academies, and individual researchers.

 

Administrative and Regulatory Challenges

  • Administrative and regulatory burden: Another major challenge would be the additional administrative and regulatory burden that research security would bring to research institutions and individual researchers, already strained by the overly bureaucratic nature of our institutions and funding agencies.
  • Collaboration with technical experts: It is crucial that research security is implemented in close collaboration with technical experts, rather than security and intelligence agencies making decisions without a full understanding of the matter.
  • Avoiding political interference: It is important that research security should not become an instrument of political interference in academic institutions.

 

Funding and Capacity Building for Research Security

  • Significant funding and engagement: Research security would require significant funding, effective communication, engagement, and capacity building to create a cadre of professionals who could design, develop, implement, and lead research security efforts in India.
  • Creation of a dedicated office: A dedicated office, similar to the one at the U.S. National Science Foundation, could be created for research security within the newly established Anusandhan National Research Foundation (ANRF).
  • Focal point for coordination: Such an office could become a focal point for coordinating and synergizing efforts for research security among security agencies and academic institutions.

 

Conclusion

As India ramps up investment in strategic technologies, safeguarding its research ecosystem must become a national priority. Researchers should be engaged at all levels of decision-making to find the right balance between security concerns, open science, regulatory burden, and scientific progress. Here, the spirit of ‘as open as possible and as closed as necessary’ could help guide decision-making.


Editorial 2: What India’s AI Safety Institute could do

Context

India’s AI Safety Institute should tap into parallel international initiatives.

 

Introduction

In October, the Ministry of Electronics and Information Technology (MeitY) convened meetings with industry and experts to discuss setting up an AI Safety Institute under the IndiaAI Mission. Notably, this came on the heels of Prime Minister Narendra Modi’s visit to the U.S., the Quad Leaders’ Summit, and the United Nations Summit of the Future. AI was high on the agenda in the run-up to the Summit of the Future, with a high-level UN advisory panel producing a report on Governing AI for Humanity.

Policymakers should build on India’s recent leadership at the G20 and the GPAI, and position it as a unifying voice for the global majority in AI governance. The design of the Safety Institute should prioritise raising domestic capacity, capitalising on India’s comparative advantages, and plugging into international initiatives.

  • The Summit of the Future yielded the Global Digital Compact that identifies multi-stakeholder collaboration, human-centric oversight, and inclusive participation of developing countries as essential pillars of AI governance and safety.
  • As a follow-up, the UN will now commence a Global Dialogue on AI. It would be timely for India to establish an AI Safety Institute that engages with the Bletchley Process on AI Safety.
  • If executed correctly, such an institute could deepen the global dialogue on AI safety and bring global-majority perspectives on human-centric safety to the forefront of discussions.

 

Institutional reform

Concerns from MeitY’s AI Advisory (March 2024)

  • Government approvals for AI systems: The advisory proposed that government approvals be required before the public roll-out of experimental AI systems.
  • Institutional capability: Some raised questions about the Indian government's capacity to suitably determine the safety of novel AI deployments.
  • One-size-fits-all provisions: Other provisions on bias and discrimination, and the uniform treatment of all AI deployments, suggested the advisory was not grounded in technical evidence.

 

Avoiding Prescriptive Regulatory Controls

  • Caution against prescriptive controls: India should be cautious and avoid the prescriptive regulatory controls proposed in the European Union (EU) and China.
  • Impact on information sharing: The threat of regulatory sanctions in a rapidly evolving technological ecosystem stifles proactive information sharing between businesses, governments, and the broader ecosystem.
  • Minimal compliance: This environment nudges labs to only take the minimum steps required for compliance.
  • Specialized agencies: Both jurisdictions nonetheless recognize the need to establish specialized agencies, such as China’s Algorithm Registry and the EU’s AI Office.
  • Decoupling institution building from regulation: India should separate institution building from regulation-making to maximize the benefits of institutional reform.

 

International Efforts in AI Safety

The Bletchley Process

  • Key summits: The Bletchley process is anchored by the U.K. Safety Summit in November 2023 and the South Korea Safety Summit in May 2024.
    • The next summit is set for France.
  • International network: This process is creating an international network of AI Safety Institutes.

 

U.S. and U.K. AI Safety Institutes

  • MoUs and collaboration: The U.S. and the U.K. were the first to set up these institutes and have already signed MoUs to exchange knowledge, resources, and expertise.
  • Partnerships with AI labs: Both institutions are signing MoUs with AI labs and receiving early access to large foundation models.
  • Sharing technical inputs: Mechanisms have been established to share technical inputs with AI labs before their public rollouts.
  • Role of safety institutes: These Safety Institutes facilitate proactive information sharing without being regulators. They are positioned as technical government institutions that leverage multi-stakeholder consortiums and partnerships to assess the risk of frontier AI models to public safety.
  • Focus on national security: They largely view AI safety through the lens of cybersecurity, infrastructure security, biosecurity, and other national security threats.

 

Government-Led AI Safety Institutes

  • Improving government capacity: These safety institutes aim to improve government capacity and mainstream the concept of external third-party testing, risk mitigations, and assessments.
  • Transforming AI governance: Government-led AI safety institutes aim to deliver insights that can turn AI governance into an evidence-based discipline.
  • Global collaboration opportunity: The Bletchley process offers India an opportunity to collaborate with governments and stakeholders from across the world.
  • Need for shared expertise: Shared expertise will be crucial to keep up with the rapid innovation trajectories of AI.

 

Way Forward: Charting India’s approach

  • India should establish an AI Safety Institute which integrates into the Bletchley network of safety institutes.
  • For now, it should be independent from rulemaking and enforcement authorities and, instead, operate exclusively as a technical research, testing, and standardisation agency.
  • It would allow India’s domestic institutions to tap into the expertise of other governments, local multi-stakeholder communities, and international businesses.
  • While upscaling its AI oversight capabilities, India can also use the Bletchley network to advance the global majority’s concerns about AI’s individual-centric risks.

 

Conclusion

The institute could champion perspectives on risks relating to bias, discrimination, social exclusion, gendered risks, labour markets, data collection, and individual privacy. Consequently, it could deepen the global dialogue around harm identification, big-picture AI risks, mitigations, red-teaming, and standardisation. If done right, India could become a global steward of forward-thinking AI governance that embraces multi-stakeholder and intergovernmental collaboration. The AI Safety Institute can demonstrate India’s scientific temper and its willingness to implement globally compatible, evidence-based, and proportionate policy solutions.