Mains Daily Question
June 13, 2023

“Artificial Intelligence (AI) is turning out to be a double-edged sword”. Comment. Also, bring out the practices that we need to adopt in order to promote responsible AI.

Model Answer

Approach:

Introduction: Define Artificial intelligence.

Body: Mention the beneficial uses of artificial intelligence and the emerging issues that we are currently witnessing on account of the use of AI.

Conclusion: Suggest the practices that we need to adopt in order to promote the responsible use of AI.

Answer:

Artificial intelligence (AI) is the ability of a computer or a robot to perform tasks that typically require human intelligence and discernment, such as reasoning, discovering meaning, generalising, or learning from experience. AI algorithms are trained on large datasets so that they can identify patterns, make predictions and recommend actions, much like a human would, only faster and at greater scale.

AI is proving to be a double-edged sword: alongside its many beneficial uses, we are also witnessing the misuse of this emerging technology.

Beneficial use of AI:

  1. Healthcare: AI has been applied with some success to diagnosing COVID-19 from lung scans and imagery, and to distinguishing the COVID cough from other types of coughs.
  2. Education: In many countries, AI is being used to develop personalized testing tools, identify areas of weakness and help students improve.
  3. Governance and auditing: As artificial intelligence makes greater inroads into governance, the Comptroller and Auditor General (CAG) of India has said that Supreme Audit Institutions (SAIs) must prepare themselves to audit AI-based governance systems.
  4. Manufacturing, industry and sustainable economic growth: AI is enabling more efficient and effective manufacturing, production and distribution across Asia, Europe and the Americas.
  5. Media: Online translation and publishing software have transformed online publishing, media, and the distribution of text and other material, including books and websites. For example, GPT-3 wrote an op-ed for The Guardian in 2020.
  6. Agriculture: AI can help detect pests and diseases by analyzing images of plants and data on the behaviour of livestock. Agricultural robots and automation are saving labour in many resource-consuming tasks.

 

Misuse of AI:

  1. Mass surveillance: China tracks its citizens using AI. Beijing can inspect mobile phones, texts, conversations, banking transactions and so on to ensure that Chinese citizens do not stray too far ideologically.
  2. Propagation of harmful stereotypes: In February 2022, Russian online trolls used AI to paint Ukrainian President Zelensky as, among other things, a Nazi, even though he is Jewish. In this way, artificial intelligence can fuel conflict, social and political division, and the propagation of harmful stereotypes.
  3. Spreading falsehoods: In the lead-up to the Brexit vote, Leave campaigners hired Cambridge Analytica to gin up public support for leaving the European Union.
  4. Deep fakes: Researchers at the University of Washington created a deep-fake video of former U.S. President Barack Obama, lip-syncing him to an audio track of a speech he had never given.
  5. Loss of jobs: The McKinsey Global Institute estimates that by 2030, AI-driven automation could displace up to 800 million jobs worldwide. Given AI's multitasking potential, automation could one day replace many human workers, as is already visible in countries such as Japan, South Korea and Taiwan, which use AI extensively.

 

A strong Responsible AI framework entails mitigating the risks of AI with imperatives that address four key areas:

  1. Governance - Establishing strong governance with clear ethical standards and accountability frameworks will allow AI to flourish. Good governance of AI rests on fairness, accountability, transparency and explainability. For example, regulatory thresholds could be set based on how much compute goes into a model.
  2. Design - Create and implement solutions that comply with ethical AI design standards and make the process transparent; apply a framework of explainable AI; design collaborative user interfaces; and build trust in AI from the outset by accounting for privacy, transparency and security at the earliest stage.
  3. Monitoring - Audit the performance of AI against a set of key metrics, making sure that algorithmic accountability, bias and security metrics are included. For example, whether a model can persuade, manipulate or influence a person's beliefs could be one such threshold.
  4. Reskilling - Democratise the understanding of AI across organisations to break down barriers for individuals affected by the technology; revisit organisational structures with an AI mindset; and recruit and retain the talent needed for long-term AI impact.

The Asilomar AI Principles (divided into three categories: research issues, ethics and values, and longer-term issues), developed by the Future of Life Institute, and multilateral initiatives such as the Global Partnership on AI (GPAI) are steps in the right direction. But because AI is a transnational issue, effective international cooperation on matters such as AI licensing and auditing is the need of the hour.

Subjects: Current Affairs