‘We are watching as AI renders people irrelevant’

‘Social media has trained us to think in binaries, and the world does not exist in binaries. It has made people who know and understand little act like they know a great deal,’ says Usha Ramanathan

September 20, 2023 09:00 am | Updated September 21, 2023 10:51 am IST - Bengaluru

Human rights activist Usha Ramanathan. | Photo Credit: MOORTHY RV

Usha Ramanathan says that she is a tech skeptic. “As a skeptic, I will question everything that comes,” says the human rights activist, who works on the jurisprudence of law, poverty, and rights. “It doesn’t mean that I will reject it, but I will not accept it until I question,” says Ms. Ramanathan, who has extensively engaged with issues around technology and the human conditions of freedom and liberty over the years. 

People have become the subject of technology, today, she adds. “It has become the new faith, and you need to worship it without question,” she says. “If you don’t follow it out of adoration, then you follow it out of fear,” believes Ms. Ramanathan, who does not own a mobile phone even today, claiming that she has never seen a need for one. “I like a certain mental space and this takes away from this,” she says.

‘The idea that inadequacies of the human mind must be made up by interfacing with the machine is suddenly so commonplace,’ says Usha Ramanathan. | Photo Credit: Aathira Haridas@Chennai

Ms. Ramanathan, who recently delivered a talk titled A Philosophy for Technology at the National Centre for Biological Sciences (NCBS) in Bengaluru, offers insights into why the ubiquitousness of technology is problematic, the issues associated with the Indian national ID project, and why artificial intelligence is one of our most dangerous innovations to date.

A recent study by IBM claimed that data breach costs in India skyrocketed to ₹179 million in 2023, up 28% from 2020. You have extensively written and spoken about how linked, digital databases are deeply problematic. Do you believe that having better data protection standards is the best way forward? Or does the problem lie in the digitisation of this data itself?

Years ago, we were told that you can’t use electronic and secret in the same breath, that they don’t sit well together. Once you create digital databases, to expect that you can actually secure them while eliminating the possibility of their being breached can only be myth-making. Security standards are being set and re-set as the state of knowledge of technology develops, and the standard prescribed is just what is known today. There is no guarantee against a breach. 

I believe that indiscriminate digitisation, including of the many legacy systems, is one of the larger security risks we have created for ourselves. The potential for convergence of what used to be discrete databases enhances the risk. It is important to acknowledge that this thinking about the possibilities of technology is not imagination that has come from the State. It is imagination that has been fed to the State by business interests. It hardly needs saying that, for businesses, national security is not a priority. This explains the kind of digital products being created by business interests, and then, to use their language, ‘evangelised’ to the State.

The state seems oblivious to the national security risk that such breaches pose, despite repeated attempts by persons like Commodore Batra to raise the issue, and despite the numerous times that the vulnerability of the databases has been exposed. Business interest is deeply engaged in the pursuit of profit; national security is not in its zone of concern.

So, the problem lies in digitisation?

It isn’t just about digitisation. There are things that have worked well on being digitised, say the railway reservation system. Technology then held such promise. But it changes when every system gets digitised, people get coerced onto databases, and something like the UID number becomes ubiquitous, cutting across silos. Just listen, and you will hear the word ‘seamless’ much too often. I have never been able to understand how a State can fail to see the risk. Experience from across the world tells us that data breaches may go undetected for a long time because, unlike physical resources, if data is stolen, it doesn’t go ‘missing’; it has just been copied, or manipulated, by some other player. And many breaches have become known only when there is a demand raised by those holding the system to ransom, for instance.

The term ‘non-rivalrous’ is used in describing data as a resource. This means that more than one entity can access and use the same data at the same time, because data is different from physical resources like land, water or minerals. That also means, of course, that its ‘loss’ will not be noticed till something alerts us to it, and that could be a long, long time. A few years ago, in Sweden, it took them a year to realise that the transportation data that they had sent out to be processed had been breached, and it took another six months for the government to admit that it had happened.

We fret that there are countries around us that bear animosity towards India. It doesn’t seem to bother us that these countries have the ability to access the data in our databases. They may have done it already, and we may simply not know it. I am really not sure where this kind of sanguineness comes from. You create a world with risks and pretend that the risk doesn’t exist. There is a deep reluctance among technology providers to let rules be framed within which they must operate; they argue that rules stifle innovation. So the push is for no responsibility, no liability.

You’ve spoken a lot against the UID project. What worries you so much about it? 

The UID project is one of the most lawless things that we have ever done. The project was started in 2009 without a law. It was said to be voluntary but became coercive very soon. The Supreme Court said, many times, that it should not be mandatory, and the project carried on in flagrant violation of these orders. It was then made ubiquitous.

Even in the early years, when we were being pushed into enrolling, we had to leave the number with just about everyone or face steep penalties, and we were threatened with denial of access to bank accounts and mobile phones.

Now we find the UID has crept into every crevice of civic life: birth, death, school enrolment, college entry, scholarships, work, unemployment registers, pensions, registering crime, transacting in mutual funds, payment systems, DigiLocker where our certificates are being sent, the ration card, job card, PAN card, GST, disability ID, voter ID. We are in the stranglehold of an ID that is not verified, that has been demonstrated, again and again, to be insecure.

Yet, the state now sees us through this system and makes up its mind about who we are, and how it should treat us. We have been converted into data and served up for companies to use in making their profits. Surveillance is no longer only about the state, its security establishment, and us. We are being told that we will not be recognised if we do not leave ‘digital footprints’ at every turn, so that the business of credit can be carried on with the least risk: to the business, not the citizen. This is the routinisation of surveillance. And to think that this has happened on the heels of a nine-judge bench that was unequivocal that privacy is an inherent, innate, fundamental right that cannot be waived!

But haven’t innovative new technologies always been a driver of change, both positive and negative, whether it be the printing press, the wheel, the steam engine or the satellite? What makes IT so different, really?  Isn’t it more about how we use it than the technology itself? Take, for instance, something like social media, which has both helped mobilise people’s movements as well as negatively impacted mental health and been a tool for State propaganda.   

Technologies of different sorts have always created change, sometimes good and sometimes damaging. But I think we have made far too many mistakes in being starry-eyed about technology and not seeing the risks. 

The unthinking adoption of what the Industrial Revolution brought, what it allowed for and encouraged, in combination with colonialism and corporatisation, has produced some of the worst atrocities against humanity. It created a world where one moved from use to exploitation, where certain parts of humanity were deemed dispensable, and so they were kept at the risky tasks. In a very few centuries, we find our water sources are gone, our air is unbreathable, climate change is a pressing issue, and there is a deepening of poverty. We lead terrible lives because of what we thought was progress. Maybe we need to learn the lessons that hindsight is offering us.

In the case of information technology, artefacts have been placed with every single person. Every field of learning seems to have been deployed to get technology to be the master and people its slaves. Choice vanishes. It becomes ubiquitous. And, yet, as we speak, no one, including its creators, really knows where it is going, and what the many tomorrows may bring.  

It wasn’t so long ago that technology brought excitement and anticipation. When the internet came to us, we were all agog. Tim Berners-Lee and the rest of his ilk had really opened up the world for us. That is how I too felt about it, until, one day, I read something that said that DARPA, the US defence research agency, had a special interest in making it a part of life everywhere. It made me wonder… And then there was news that Tim Berners-Lee, the creator of the World Wide Web, was ‘devastated’ because, as it was reported, he had ‘seen his creation debased by everything from fake news to mass surveillance’.

As for social media, I think there are some benefits, of course. Take, for instance, crimes against Dalits like the Una flogging incident, or the spread and reach of Black Lives Matter.  More than being a forum for debate, it helps in awareness that certain things are happening, things that ordinarily may have been suppressed. But it is a double-edged sword. For instance, social media helped drive the Arab Spring, helping create collectivity. But we know what happened soon after when the State took control. They used the same platforms to persecute the protestors.  

Then there is this Twitter thing of 140 characters, a classic statement on what technology providers think about human beings and their minds. They expect matters of great importance to be discussed in 140 characters. Again, social media has trained us to think in binaries, and the world doesn’t exist in binaries. It has made people who know and understand little act like they know a great deal, enough for prejudice to pass off as judgment anyway. I am often inclined to think that social media is one of those things that we should perhaps boycott, or refuse to be part of. It often seems like it doesn’t deserve a place in our midst.

Since that seems to be one of the most exciting technological developments in the recent past, what are your thoughts about generative AI? Is there something especially pernicious about it?

I have often wondered about why AI, and why now. It isn’t a new idea, but it is catching on like wildfire; the idea that inadequacies of the human mind must be made up by interfacing with the machine is suddenly so commonplace. Plainly, the generation of huge amounts of data is making it possible to think that the human mind stands revealed in all its weaknesses, and fosters the belief that that data can be used to conjure up other intelligences that are more than the sum of their parts. That all this data is now within the reach and control of technology creators and controllers seems to have produced this moment of redundancy of the human.

The digital world is allowing us to reconstitute every person as data, and AI needs data. Identities are getting shrunk because of this, as anything that cannot be captured digitally, with a certain amount of ease, is left out of the reckoning. We are therefore creating AI systems that depict humanity imperfectly. The consequences of this could be massive. I think it is a system that should fail.

It is a race to the bottom. I mean, what is the point of someone like Sam Altman going from government to government calling for AI regulation? You create something hugely problematic and then say regulate it? When you know regulation has provided few, if any, answers. Why create it in the first place? And what is meant when it is said, well, if I hadn’t made it, someone else would have anyway?  

We are watching as AI renders people irrelevant by using their personal data, and it seems to thrive on assumptions about human beings that it sees emerging from a mass of data. We see the many imperfections of generative AI and are asked to be patient with it as it tries to get things right, patience that technology creators and controllers do not have with human beings. There are also assumptions about human beings: that the data gathered about us produces predictability.

But is lived experience such a straight line? That is not how people live, think and act. Life is plural, after all.
