We are biased. Artificial Intelligence is not.
Saibal Mukherjee 17 May 2019

We are all biased. From the time we are born, we are conditioned through various lenses. Upbringing is often shaped by traditions and customs, and entrenched in biases of race, gender, ideology, caste, colour, creed, community, class, wealth, culture, work, religion, human anatomy and human intelligence.

Machines make products; people sell them at various prices. Human decision-making is biased, whether at home or in the workplace. The utopian vision of seamless, autonomous AI-driven technology may soon turn into a dystopian future in which mankind is enslaved by the bias built into our AI systems.

Psychologists have identified more than 180 types of human bias, any one of which can affect the consequential decisions of anyone using AI across governments and businesses. This could dismantle the very foundation of trust between humans and machines. Disparate investment in AI across countries could create an imbalance: huge sums spent on AI defence research (rather than on education or the eradication of poverty), combined with the lack of consensus-based regulation, may amplify the biases already entrenched in modern society. We have failed to reach consensus on climate change. Will we agree on keeping AI clean?

We can over-regulate AI, or we can let innovation fly and learn from our mistakes (recall Zuckerberg’s confession on internet regulation). What we should not allow is human bias sneaking into AI through prejudiced data and tainted algorithms.

Identification of Bias

AI is no longer a nascent technology, but the technology for recognising the footprint of bias in AI still is. Bias buzzes beneath the surface and generates results that seem uncoloured by prejudice, which compounds the problem. As AI sets foot in domains that require greater sophistication, such as assessing loan-worthiness, landlord-tenant screening, medical diagnosis, HR, the judiciary and examination evaluation, rooting out bias becomes even harder. The biggest problem is that AI projects a false sense of a secure, holistic and fair process while validating and deepening the bias in its algorithms.

According to a report by ProPublica, a recidivism-prediction tool developed by Northpointe rated African-Americans as twice as likely to commit future crimes as whites, indicating traces of racial bias in the tool. Another study, by the University of Massachusetts, revealed that African-American English is analysed poorly by Natural Language Processing (NLP) tools, a commonly used application of machine learning. This raises serious questions of fairness and equity in AI’s treatment of text authored by people of colour.

Similar prejudice against a race in NLP systems can skew results when evaluating loan-worthiness, job-worthiness, pay and more, stripping inclusivity from critical services entirely. Failure to process an African-American’s call for help in an emergency could collapse the entire AI system underpinning the emergency services. AI also gives hate groups and biased individuals a more advanced channel for advocating their ideology. Imagine an AI-enabled driverless car that fails to identify dark-skinned pedestrians. Poisoned algorithms can be catastrophic.

Another example is the polarisation of our social fabric through filter bubbles. Mark Zuckerberg apologised for Facebook being used to divide people and changed its algorithms to ensure that all perspectives come through. Microsoft’s chatbot “Tay.ai” turned into a racist and misogynist conversationalist within 24 hours of its launch, after being fed a live Twitter feed. In another instance, Microsoft researchers found gender bias in the NLP system “word2vec”, which yielded the analogy, “Man is to Woman as Programmer is to Homemaker,” along with other statistically significant offensive associations. Such stereotypical associations may generate biased results for jobs and taint social diversity.
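To see how such an association surfaces, here is a minimal sketch using the gensim library to run the classic analogy arithmetic on a pretrained word2vec model. The model name below is gensim’s hosted copy of the GoogleNews vectors; the exact neighbours returned depend on the vectors used, so the published “homemaker” result may not reproduce verbatim.

```python
# Minimal sketch: probing word-embedding analogies with gensim.
# First run downloads the pretrained GoogleNews vectors (~1.6 GB).
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")

# Analogy arithmetic: vec("programmer") - vec("man") + vec("woman").
# The nearest neighbours of the resulting point expose associations
# the embedding absorbed from its training corpus.
for word, score in vectors.most_similar(
    positive=["woman", "programmer"], negative=["man"], topn=5
):
    print(f"{word:25s} {score:.3f}")
```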

The primary causes of bias in any AI system are bad data and the algorithmic model itself. Algorithms trained on biased data continue to surf the waves of bias, replicating it to produce ever more biased results. As governments, corporations and individuals reuse such data, structural flaws are sown into society’s infrastructure, data-collection practices are corrupted, and bias is introduced, intentionally or unintentionally. The sketch below makes this replication concrete.
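A minimal, purely synthetic sketch of the mechanism: a classifier trained on “historical” decisions that favoured one group at equal skill learns to reproduce exactly that disparity. All features and numbers here are illustrative, not drawn from any real dataset.

```python
# Minimal sketch of bias replication: a model trained on skewed
# historical decisions reproduces the skew. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # protected attribute (0 or 1)
skill = rng.normal(0.0, 1.0, n)      # genuinely relevant feature

# Historical labels favoured group 0 at equal skill: a biased record.
y = (skill + 0.8 * (group == 0) + rng.normal(0.0, 0.5, n) > 0.5).astype(int)

# The model sees the protected attribute (or any proxy for it)...
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# ...and faithfully replicates the historical disparity.
for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2f}")
```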

Reducing Bias in AI

With such a long list of examples in hand, it is imperative that all types of bias in AI be identified and reduced to ensure a fair and inclusive interface between people and the technology they need. The biggest challenge is identifying the source of bias, as businesses refuse to disclose their algorithms, citing proprietary business value. This may call for legislative oversight.

As private and public sectors experiment with AI, they are also wrestling with new ethical and legal questions. Think tanks, research organizations, and other groups are crafting recommendations for policymakers about how to ensure responsible and ethical use of AI.

Governments, in turn, are swinging into action. The European Union’s landmark General Data Protection Regulation (GDPR), which went into effect in 2018, was just the start. Some countries have been developing principles for AI while others are drafting laws and regulations.

The intersection of AI and public policy has reached an inflection point, says Jessica Cussins Newman, AI policy specialist at The Future of Life Institute. “There are specific AI applications that might be so potentially harmful to groups of people that industry self-regulation might not be enough,” says Cussins Newman. “Governments are waking up to the fact that they need to be thinking about this in a pretty comprehensive way.”

Countries around the world are at different stages of AI governance, but the concept is clearly gaining momentum. While the U.S. has yet to pass federal legislation on AI governance, federal agencies are issuing sector-specific guidance as AI permeates different industries, and state governments are also taking steps toward regulating AI. The UK, the EU, Singapore, India, China, Australia, France, New Zealand, South Korea and Japan have all attempted regulations and recommendations related to AI governance. While some countries have policy measures in play, most are still in the exploratory stage. As the technology becomes more pervasive, so too will the efforts to put enforceable regulations on the books around the world.

In the Renaissance splendour of the Vatican, thousands of miles from Silicon Valley, scientists, ethicists and theologians gathered to discuss the future of robotics and AI at a workshop, “Roboethics: Humans, Machines and Health”, hosted by the Pontifical Academy for Life. Pope Francis presented a letter to the human community in which he outlined the paradox of “progress” and cautioned against developing technologies without first considering their possible costs to society. In it, the Pope emphasises the need to study new technologies: “There is a pressing need, then, to understand these epochal changes and new frontiers in order to determine how to place them at the service of the human person, while respecting and promoting the intrinsic dignity of all,” he writes.

The Road Ahead

But the future remains bright, despite what recent events might suggest. While holistic solutions are still needed to fix unintentional biases and restore the integrity of data so that AI-enabled technology reflects a richer social fabric, a lot of progress is already being made. Many argue that seeing our biases reflected in our AI tools is an opportunity to confront those biases in our society and reflect on them. The result? A more mature, inclusive society, perhaps?

It can also be argued that bias isn’t always negative. Take self-driving cars: when creating the AI that powers these machines, it is important to bias the training data deliberately toward difficult weather conditions and other challenging scenarios, in order to prepare them to operate safely.

As for mitigating bias in AI more broadly, one approach is computational cognitive modelling, for instance using contractualist approaches to ethics. We need to build algorithms that can identify inconsistencies in decision making and nudge us toward more egalitarian views; one simple rebalancing technique is sketched below. In the words of IBM CEO Ginni Rometty, the future of AI should be one of “man and machine, not man versus machine”. As AI marches into more industries, eliminating bias will only get harder. It is therefore critical to neutralise biases now, diversify data sets and seize the value-aligned opportunities that AI provides.
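As one concrete illustration (a standard technique, not something prescribed in this article), the reweighing method of Kamiran and Calders rebalances training examples so that group membership and outcome become statistically independent before retraining. A minimal sketch on the same kind of synthetic data as the earlier example:

```python
# Minimal sketch of bias mitigation via reweighing
# (Kamiran & Calders, 2012). Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)
skill = rng.normal(0.0, 1.0, n)
y = (skill + 0.8 * (group == 0) + rng.normal(0.0, 0.5, n) > 0.5).astype(int)
X = np.column_stack([skill, group])

def parity_gap(pred, group):
    """Difference in positive-prediction rates between the two groups."""
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

base = LogisticRegression().fit(X, y)
print(f"gap before reweighing: {parity_gap(base.predict(X), group):.2f}")

# Weight each (group, label) cell so group and outcome are independent:
# w = P(group) * P(label) / P(group, label).
w = np.empty(n)
for g in (0, 1):
    for label in (0, 1):
        cell = (group == g) & (y == label)
        w[cell] = ((group == g).mean() * (y == label).mean()) / cell.mean()

fair = LogisticRegression().fit(X, y, sample_weight=w)
print(f"gap after reweighing:  {parity_gap(fair.predict(X), group):.2f}")
```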
