
Pervasive personalisation. Two words that describe how algorithms shape our views and behaviour, determining what we see, buy and read online. There’s so much value in personalisation that tech companies are giving away algorithmic tools for free; the real money lies in the application, in manipulating large population groups. And as this happens, many societal biases are reinforced online.

Elisa Celis and Nisheeth Vishnoi, professors at the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, propose a framework to de-bias algorithms. They’ve built a prototype, ‘Balanced Search’, to demonstrate how novel algorithms can mitigate extreme views and deliver diverse content. Businesses today use algorithms to select content for each user in order to maximise the positive feedback (or revenue) received. The duo argues that this practice leads to extreme personalisation, skewing the content consumed towards a single type.
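
To make the trade-off concrete, here is a minimal sketch. It is not the authors’ actual Balanced Search algorithm, only a hypothetical version of the idea: ranking purely by a predicted engagement score, versus the same ranking under a cap on how many of the top slots any one content type may occupy. The item types, scores and the cap parameter are all invented for illustration.

```python
from collections import Counter

def top_k_by_score(items, k):
    """Pure engagement maximisation: take the k highest-scoring items."""
    return sorted(items, key=lambda it: it["score"], reverse=True)[:k]

def top_k_with_type_cap(items, k, max_per_type):
    """Greedy re-ranking: same scores, but no single content type may
    occupy more than max_per_type of the k slots."""
    picked, counts = [], Counter()
    for it in sorted(items, key=lambda it: it["score"], reverse=True):
        if counts[it["type"]] < max_per_type:
            picked.append(it)
            counts[it["type"]] += 1
        if len(picked) == k:
            break
    return picked

# Hypothetical feed in which one viewpoint dominates the engagement scores.
feed = [
    {"id": 1, "type": "viewpoint_A", "score": 0.96},
    {"id": 2, "type": "viewpoint_A", "score": 0.94},
    {"id": 3, "type": "viewpoint_A", "score": 0.93},
    {"id": 4, "type": "viewpoint_B", "score": 0.71},
    {"id": 5, "type": "neutral", "score": 0.65},
]

print([it["id"] for it in top_k_by_score(feed, 3)])          # [1, 2, 3]: one viewpoint only
print([it["id"] for it in top_k_with_type_cap(feed, 3, 2)])  # [1, 2, 4]: a second viewpoint breaks in
```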

Google-owned YouTube investing $5 million in programmes that “counter hate and promote tolerance” is one thing; the search giant and other social platforms introducing algorithmic features that reduce bias and inequality is quite another, because the financial repercussions for the companies would be enormous. In places like Germany, with the Holocaust in its history, policymakers are cautious and worried about extreme views, but the rest of the world is still figuring out the ever-changing dynamics of algorithms.

As The Ken reported earlier, the community of computer scientists working on machine learning (the core mathematics that drives artificial intelligence, its applied counterpart) is very small in India: say, 40, versus 400 at Google’s DeepMind alone. In this context, Vishnoi organised a workshop at the International Centre for Theoretical Sciences in Bengaluru earlier this month. The objective was to bring together and groom a ‘young’ community of computer scientists and nudge them to think about relevant problems, says one of the speakers, Prateek Jain, a senior researcher at Microsoft Research India. We caught up with Celis and Vishnoi in separate conversations to understand why they want an algorithm-literacy movement across the world. Edited excerpts below:

Elisa Celis, EPFL, Lausanne

The Ken: What do you mean by ‘fair personalisation’? Personalisation is everywhere, and, to an extent, aren’t people themselves driving it? Most of us want to see content (or products when buying online) more directly related to our interests.

Celis: We are all shown personalised lists. Content selection algorithms take data and other information as inputs, including the user’s past behaviour, and produce such a list. To cite a personal example: because I buy baby clothes online, every time I log in I get ads for baby clothes, even though I do many other things in life. The fact that it doesn’t get me right bothers me at a personal level. At a broader level, more and more evidence is emerging that algorithms have a bias, particularly against women and minorities.
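
Her baby-clothes example reflects a simple feedback loop. The toy simulation below is an invented sketch, not any platform’s actual system: it assumes each click multiplicatively boosts the clicked category, which is enough for one category to crowd out a personalised feed even when the user’s real interests are broad.

```python
import random

random.seed(0)

# Toy feedback loop (illustrative only, not a real recommender system):
# every category starts out equally interesting to the user.
scores = {"baby_clothes": 1.0, "books": 1.0, "news": 1.0, "music": 1.0}

for _ in range(20):
    shown = max(scores, key=scores.get)  # recommend the current top category
    # Once a category matches past behaviour, the user clicks it most of
    # the time, even though her interests are much broader than that.
    clicked = shown if random.random() < 0.8 else random.choice(list(scores))
    scores[clicked] *= 1.5  # positive feedback reinforces the clicked category

print(max(scores, key=scores.get))  # the single category that now fills the feed
```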

