Improving fairness in AI

The development of artificial intelligence and machine learning models has advanced rapidly in recent years, and these systems are now used to make decisions in many areas of our lives.


However, an ongoing concern is whether these systems can be trusted, particularly when it comes to ensuring fairness and accuracy in their results.


In response, the Massachusetts Institute of Technology (MIT) recently published an article on new methods that aim to increase accuracy while also improving fairness for minority groups and reducing bias in these models.


Currently, selective regression is a commonly used technique for improving accuracy in machine learning models. It works by allowing the model to abstain from making a prediction when its confidence is low, deferring those cases to a human for review. However, researchers at MIT and the MIT-IBM Watson AI Lab have found that this approach can fail for minority groups that are underrepresented in the data, making the models more likely to produce inaccurate predictions for them.
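The abstention mechanism behind selective regression can be sketched in a few lines. This is a minimal illustration of the general idea, not the MIT team's implementation: the model answers only when its estimated uncertainty falls below a threshold, and all names and values here are hypothetical.

```python
import numpy as np

def selective_predict(preds, uncertainty, threshold):
    """Return predictions only where the model's uncertainty is below
    the threshold; abstain (None) elsewhere, deferring to human review."""
    accepted = uncertainty < threshold
    selected = [p if a else None for p, a in zip(preds, accepted)]
    return selected, accepted

# Toy example: five predictions with varying uncertainty estimates.
preds = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
unc = np.array([0.1, 0.9, 0.2, 0.8, 0.3])

selected, accepted = selective_predict(preds, unc, threshold=0.5)
coverage = accepted.mean()  # fraction of cases the model answers itself
```

Raising the threshold increases coverage but admits less certain predictions; the issue the researchers identified is that the uncertainty estimates driving this choice can be poorly calibrated for underrepresented groups.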


After identifying this issue, the MIT researchers developed two algorithms designed to address it. Their results show reduced gaps in prediction performance for marginalized groups.


Both algorithms are aimed at making predictions fairer. The first ensures that the model takes into account sensitive attributes in the dataset, such as race, gender and sex. The second is designed to keep the model's predictions as accurate as possible across groups, regardless of attributes like sex and race.


The researchers plan to test the method on other kinds of data, such as housing prices, interest rates on loans and student outcomes. During these experiments, they aim to try various techniques with the algorithms to determine how much information the models can make use of.


According to Greg Wornell, a senior author of the study at MIT, “Rather than just minimizing some broad error rate for the model, we want to make sure the error rate across groups is taken into account in a smart way.” In other words, their approach considers which samples from the datasets the model should answer, so that error rates are balanced across groups rather than hidden in one overall average.
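The idea in Wornell's quote can be made concrete by measuring error per group instead of one overall figure. The sketch below is a hypothetical illustration of that diagnostic, not the algorithm from the paper: it computes mean squared error separately for each group so that a gap between groups is visible even when the aggregate error looks acceptable.

```python
import numpy as np

def groupwise_mse(y_true, y_pred, groups):
    """Mean squared error computed separately per group, so disparities
    between groups are visible rather than averaged away."""
    return {
        g: float(np.mean((y_true[groups == g] - y_pred[groups == g]) ** 2))
        for g in np.unique(groups)
    }

# Toy example: the model fits group "a" well but not group "b".
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.0, 2.0, 2.0, 2.0])
groups = np.array(["a", "a", "b", "b"])

errs = groupwise_mse(y_true, y_pred, groups)
# group "a": error 0.0; group "b": (1.0 + 4.0) / 2 = 2.5
```

The overall MSE here is 1.25, which masks the fact that all the error falls on group "b"; monitoring the per-group breakdown is what lets a fairness-aware method react to that gap.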


The study will be presented in depth at the International Conference on Machine Learning (ICML) this month.

