
Editorial 1: The Indian Himalayan Region needs its own EIA

Context:

The Teesta dam breach in Sikkim in early October and the recent floods and landslides in Himachal Pradesh are stark reminders of the havoc our development model is wreaking on the environment and ecology, especially in the mountains. It is imperative to assess the worthiness of any significant human endeavour in terms of its impact on the environment.

Environment Impact Assessment (EIA)

The EIA is one such process, defined by the United Nations Environment Programme (UNEP) as a tool to identify the environmental, social, and economic impacts of a project before it is implemented. The tool compares various alternatives for the proposed project and predicts and analyses all possible environmental repercussions under various scenarios. The EIA also helps decide appropriate mitigation strategies.

The EIA process needs comprehensive, reliable data and delivers results only if it is designed to seek the most appropriate, relevant and reliable information about the project. Hence, the baseline data on the basis of which likely future impacts are predicted are crucial.

History of EIA in India:

In 1994, the Union Ministry of Environment, Forest and Climate Change (MoEFCC), acting under the Environment (Protection) Act, 1986 (EPA), promulgated the first EIA notification, making Environmental Clearance (EC) mandatory for setting up certain specified new projects and also for the expansion or modernisation of certain specified activities.

The EIA notification of 2006 lays down the procedure as well as the institutional setup for granting environmental clearance to projects that require it. Only projects enumerated in the schedule attached to the notification require prior EC; an EIA is not required for many projects. The notification categorises projects under various heads such as mining, extraction of natural resources, power generation, and physical infrastructure.

The case of the Indian Himalayan Region (IHR)

Unfortunately, the threshold limits beyond which an EIA is warranted for all these projects are the same across the country. Despite all levels of government being acutely aware of the special needs of the IHR (it serves as a water tower and a provider of ecosystem services), the region’s vulnerabilities and fragility have not been considered separately. Even the draft 2020 notification, which was floated for public discussion, does not treat the IHR differently from the rest of the country.

Flaws in the graded approach

The Indian regulatory system uses a graded approach, a differentiated risk management approach depending on whether a project is coming up within a protected forest, a reserved forest, a national park, or a critical tiger habitat.

The stringency of the environmental conditions proposed in the terms of reference at the scoping stage of the EIA process is proportionate to the value and sensitivity of the habitat being impacted by the project.

We have enough systemic understanding that the Himalayas are inherently vulnerable to extreme weather conditions such as heavy rains, flash floods, and landslides and are seismically active. Climate change has added another layer of vulnerability to this ecosystem.

The increasing frequency with which the Himalayan States witness devastation after extreme weather events shows that the region is already paying a heavy price for this indifference.

The needs of these mountains could be addressed at all four stages of the EIA — screening, scoping, public consultation, and appraisal — if the yardstick for projects and activities requiring EC in mountainous regions is made commensurate with the ecological needs of this region.

Regulation and implementation of EIA in India:

There is no regulator at the national level, as suggested by the Supreme Court of India in 2011 in the Lafarge Umiam Mining case, to carry out an independent, objective and transparent appraisal and approval of projects for ECs and to monitor the implementation of the conditions laid down in the EC.

The EIA process now reacts to development proposals rather than anticipating them. Because EIAs are financed by the project proponent, there is a tendency to veer in favour of the project.

The process does not adequately consider cumulative impacts, i.e., the impacts caused by several projects in the same area, though it does to some extent cover a project’s subcomponents or ancillary developments.

In many cases, the EIA is done in a box-ticking manner, as a mere formality to obtain EC before a project can be started.

Conclusion:

Policymakers would do well to explore other tools, such as the strategic environmental assessment, which takes into account the cumulative impact of development in an area, to address the needs of the IHR as a matter of fundamental policy.


Editorial 2: Confronting the long-term risks of Artificial Intelligence

Context:

Risk is a dynamic and ever-evolving concept, susceptible to shifts in societal values, technological advancements and scientific discoveries. For instance, before the digital age, sharing one’s personal details openly was relatively risk-free. Yet, in the age of cyberattacks and data breaches, the same act is fraught with danger.

Risks associated with AI:

Our understanding of Artificial Intelligence (AI)-related risk can change drastically as the technology’s capabilities become clearer. This underscores the importance of identifying both the short-term and long-term risks.

Artificial intelligence (AI) is the intelligence of machines or software, as opposed to the intelligence of humans or animals. It is also the field of study in computer science that develops and studies intelligent machines. "AI" may also refer to the machines themselves.


The immediate risks might be more tangible, such as ensuring that an AI system does not malfunction in its day-to-day tasks. Long-term risks might grapple with broader existential questions about AI’s role in society and its implications for humanity.

Addressing both types of risks requires a multifaceted approach, weighing current challenges against potential future ramifications.

Over the long term

Yuval Noah Harari has expressed concerns about the amalgamation of AI and biotechnology, highlighting the potential to fundamentally alter human existence by manipulating human emotions, thoughts, and desires.

One should be a bit worried about the intermediate and existential risks of the more evolved AI systems of the future — for instance, if essential infrastructure such as water and electricity increasingly relies on AI.

Any malfunction or manipulation of such AI systems could disrupt these pivotal services, potentially hampering societal functions and public well-being.

Similarly, although it seems improbable, a ‘runaway AI’ could cause even greater harm, for example by manipulating crucial systems such as water distribution or altering the chemical balance of water supplies, with catastrophic repercussions, even if such probabilities appear distant.

AI sceptics fear these potential existential risks, viewing AI as more than just a tool: a possible catalyst for dire outcomes, perhaps even extinction.

The evolution to human level

AI that is capable of outperforming humans at cognitive tasks will mark a pivotal shift in these risks. Such AIs might undergo rapid self-improvement, culminating in a super-intelligence that far outpaces human intellect. The potential of this super-intelligence acting on misaligned, corrupted or malicious goals presents dire scenarios.

Ethics and AI:

The challenge lies in aligning AI with universally accepted human values. The rapid pace of AI advancement, spurred by market pressures, often eclipses safety considerations, raising concerns about unchecked AI development.

The lack of a unified global approach to AI regulation can be detrimental to the foundational objective of AI governance — to ensure the long-term safety and ethical deployment of AI technologies.

Stanford University’s AI Index reveals that, across the legislative records of 127 countries, legislative bodies passed 37 laws containing the words “artificial intelligence” in 2022.

International collaboration:

There is also a conspicuous absence of collaboration and cohesive action at the international level, without which the long-term risks associated with AI cannot be mitigated. If a country such as China does not enact regulations on AI while others do, it would likely gain a competitive edge in AI advancement and deployment. Such unregulated progress can lead to the development of AI systems that are misaligned with global ethical standards, creating a risk of unforeseen and potentially irreversible consequences. This could result in destabilisation and conflict, undermining international peace and security.

The dangers of military AI

Furthermore, the confluence of technology with warfare amplifies long-term risks. The international community has formed treaties such as the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) to manage such potent technologies, demonstrating that establishing global norms for AI in warfare is a pressing but attainable goal. Treaties such as the Chemical Weapons Convention are further examples of international accord in restricting hazardous technologies.

Conclusion:

Nations must delineate where AI deployment is unacceptable and enforce clear norms for its role in warfare. In this evolving landscape of AI risks, the world must remember that our choices today will shape the world we inherit tomorrow.