Responsible AI — The Need For Ethical Guard Rails

The Hindu     17th March 2021

Context: Highlighting the need for a value-based global AI governance framework to realise the true potential of the technology.

AI for good: Google has identified over 2,600 use cases of “AI for good” worldwide.

  • Unprecedented growth: From beating human champions at Jeopardy in 2011 to vanquishing the world’s number one player of Go, to decoding proteins.
    • Current usage: Embedded in recommendations on streaming and shopping sites, in GPS mapping technology, in the predictive text that completes our sentences, etc.
  • The unlimited scope: AI can leapfrog us toward eradicating hunger, poverty and disease, opening up new and hitherto unimaginable pathways for climate change mitigation, etc.
    • Direct benefits: AI has helped increase crop yields, raised business productivity, improved access to credit and made cancer detection faster and more precise.
    • Towards SDGs: A study published in Nature reviewing the impact of AI on the Sustainable Development Goals (SDGs) finds that AI may act as an enabler on 134 (79%) of all SDG targets.
  • Economic potential: AI could contribute over $15 trillion to the world economy by 2030, boosting global GDP by 14%.

Concerns of AI

  • Hindering SDGs: The same Nature study finds that AI can actively hinder 59 (35%) of the SDG targets.
    • This is partly because AI requires massive computational capacity, resulting in a large carbon footprint.
  • Compounds digital exclusion: AI is taking over the jobs of low- and middle-income workers, e.g. self-service kiosks replacing cashiers and fruit-picking robots replacing field workers.
  • New inequalities: Without clear policies on reskilling workers, AI will create new inequalities:
    • AI-related investments will shift to countries where AI is established, widening gaps among and within countries.
    • E.g. Big Tech’s big four (Alphabet/Google, Amazon, Apple and Facebook) added $2 trillion to their combined value in 2020, even as the world was reeling under the impact of the pandemic.
  • Inherited biases: E.g. AI facial recognition and surveillance technologies discriminating against people of colour and minorities, and an AI-enhanced recruitment engine found to be biased against women.
  • Data privacy concerns: Algorithms’ never-ending quest for data has led to our digital footprints being harvested and sold without our knowledge or informed consent.
    • E.g. Case of Cambridge Analytica, where such algorithms and big data were used to alter voting decisions.

Way forward

  • Establish ethical guard rails: Develop broad-based ethical principles, cultures and codes of conduct to inculcate transparency, accountability, inclusion and societal trust for AI.
  • “Whole of society” to “whole of world” approach: Need for wider platforms and collaborations.
    • UN Secretary-General’s Roadmap on Digital Cooperation: so that AI is used in a manner that is “trustworthy, human rights-based, safe and sustainable, and promotes peace”.
    • UNESCO: Developed a global, comprehensive standard-setting draft Recommendation on the Ethics of Artificial Intelligence.
  • Striking the right balance: Between AI promotion and AI governance.
    • NITI Aayog’s Responsible AI for All strategy: Recognises the importance of multi-stakeholder governance structures that ensure dividends are fair, inclusive, and just.
  • Agreeing on and implementing common guiding principles: The real challenge lies in practically implementing a framework that upholds the right values.