Amid fears that the unregulated growth of cutting-edge technology could spell the end of civilization, there are also concerns that excessive rule-making could stifle innovation
The buzz around artificial intelligence (AI) is becoming louder by the day. The prospect that this technology may take over our lives in the coming decades is very real. In many ways, it already has. As I write this article on a tablet, the software repeatedly prompts me with suggestions for the next word in the sentence. Reminders to leave for the airport at a particular time pop up on the phone simply because an air ticket arrived by email. Advertisements for a particular product — say, a handbag — surface on one’s electronic devices after the purchase of such an item. And the whole world knows the various disembodied voices that do our work via audio commands. The names vary — Siri, Alexa or Google — but these are all AI-driven applications. And much more lies ahead. As they say in this country, picture abhi baaki hai (the picture is not over yet).
It was the launch of ChatGPT by OpenAI, backed by Microsoft, earlier this year that made the world recognize the potential of AI. It heralded the availability of generative AI for use by everyone. The difference between conventional and generative AI is that the former can analyze data, while the latter can use data to create something new. Much has been discussed about the way this frontier technology could hit jobs in numerous economic sectors, but the ethical aspects of its manifold applications are of even greater concern. Leaders in the information technology industry hold varying views on the issue. Tesla and X owner Elon Musk has been warning for quite some time about the dangers of allowing the unregulated growth of AI. Others, such as Meta chief Mark Zuckerberg and Google founders Larry Page and Sergey Brin, are much more sanguine. They agree that regulation is needed, but do not fear that AI could be the end of civilization, as Musk once declared.
It is against this backdrop that Prime Minister Narendra Modi’s call for a global framework to ensure the ethical use of AI needs to be considered carefully. He is not the first to suggest that countries must converge on AI regulation. But so far, this has only been talked about, while different regulatory routes are simultaneously being taken. The European Union (EU) has raced ahead and is already drafting AI legislation. The EU has also taken the toughest stance against Big Tech’s tendency to improve profitability at any cost. Its new Digital Services Act will come into force shortly, imposing rules on Internet giants regarding content moderation, user privacy and transparency. Other measures are in the pipeline, including stricter enforcement of the new Digital Markets Act and the formulation of an AI Act.
The US, for its part, is treading carefully. So far, it has proposed an AI Bill of Rights aimed at protecting citizens from the use of discriminatory algorithms by corporations. It is worried about the disruptive effect of ‘algorithmic bias’. For instance, companies can use algorithms with inherent biases while carrying out recruitment. The pitfalls of using facial recognition algorithms are similarly well known. India, however, clearly does not consider non-mandatory guidelines the right way forward, despite the two countries’ avowed interest in cooperating on AI regulation.
The Telecom Regulatory Authority of India (TRAI) recently presented a paper recommending the establishment of a statutory regulatory authority for AI. It has outlined a risk-based framework with legally binding obligations in cases that directly impact humans.
From this cursory outline of the approaches taken by various countries, it is clear that global convergence on AI regulation will not be an easy task. Amid fears over the unregulated growth of such cutting-edge technology, there are concerns that innovation could be stifled by excessive rule-making. On the one hand, there is the Musk-led ‘doomsday group’, which in March submitted a petition signed by over 1,000 tech leaders and researchers to the US Government, seeking a pause on AI development. The petition, also signed by Apple co-founder Steve Wozniak, says the pause would enable the creation of ‘shared safety protocols’ and should be lifted only when there is confidence that the effects of AI systems will be positive and their risks manageable.
On the other hand, another segment of the tech fraternity dismisses such concerns as overblown. This includes Meta, which feels the industry does not need a licence-based regime. Google, which has already launched Bard as a competitor to ChatGPT, is more amenable to a risk-based approach to regulation. None of the Big Tech firms, however, agrees with the concept of a pause on development. Seven of these giants in the race to provide ever more powerful generative AI recently agreed to the US administration’s proposal for voluntary curbs on technology development, pledging to manage the risks of the new tools.
In other words, there are sharp divisions in the tech fraternity over the pace of AI regulation. Bridging them should become the role of a global regulator that takes into account the profound possibilities of misuse by individuals, corporations or countries.
For the layman, this may sound like a high-tech issue. It is no longer at that esoteric level. It is this technology that is helping students write their academic essays. It is drafting mundane emails for executives at the workplace. And for journalists, it is actually churning out articles for potential publication. Imagine my dismay when ChatGPT zipped out a 1,000-word article on the state of the economy within seconds. Luckily, such essays tend to contain fake data and misleading comments, termed hallucinations. These are the flaws that still make the human element indispensable. But this is a rapidly advancing technology, and these flaws could be ironed out soon. It is thus time to quickly place some guardrails on AI to control unrestricted growth and potential malpractices.
(The author is a Senior Financial Journalist)