Improving fairness in AI

Artificial intelligence and machine learning have advanced rapidly in the last few years, and these models are now used to make decisions in many areas of our lives.

However, an ongoing concern is whether these machines can be trusted, particularly whether their results are fair and accurate.

In response, researchers at the Massachusetts Institute of Technology (MIT) recently published work on methods that aim to increase accuracy while improving fairness for minority groups and reducing bias in these models.

Selective regression is a commonly used technique for improving accuracy in machine learning models. It works by letting the model abstain from predicting on inputs it deems too uncertain, with a human reviewing those cases instead. However, studies at MIT and the MIT-IBM Watson AI Lab have found that this approach is not effective for minority groups that are underrepresented in the data, and that for those groups there remains a high chance the models will make inaccurate predictions.
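To make the idea concrete, here is a minimal sketch of the reject-option mechanism behind selective regression. It is illustrative only: the ensemble-variance uncertainty score, the 80% acceptance threshold, and the synthetic data are assumptions made for this example, not the setup used in the MIT study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.3, size=500)

# Fit a small ensemble on bootstrap resamples; disagreement between the
# members serves as a per-sample uncertainty estimate.
models = []
for seed in range(5):
    idx = rng.integers(0, len(X), size=len(X))           # bootstrap resample
    models.append(GradientBoostingRegressor(random_state=seed).fit(X[idx], y[idx]))

preds = np.stack([m.predict(X) for m in models])          # shape (5, 500)
uncertainty = preds.std(axis=0)                           # spread across the ensemble

# Reject option: answer only the most confident 80% of inputs and defer the
# rest to a human reviewer (threshold chosen arbitrarily for illustration).
threshold = np.quantile(uncertainty, 0.8)
accepted = uncertainty <= threshold
print(f"model answers {accepted.sum()} inputs, defers {(~accepted).sum()} to a human")
```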

To address this, the MIT researchers developed two algorithms aimed at fixing the problem, with results that lessen the differences in prediction quality for marginalized groups.

Both algorithms are centered on making predictions fairer. The first ensures that the model takes sensitive attributes in the dataset, such as race, gender, and sex, into consideration. The second focuses on making the model's predictions as accurate as possible regardless of attributes like sex and race.
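The paper's precise constructions are not reproduced here, but the underlying goal of keeping coverage and error comparable across groups, rather than letting one global rule shortchange a smaller group, can be sketched as follows. The group labels, noise levels, and per-group thresholding rule below are assumptions made for illustration, not the authors' algorithms.

```python
# Illustrative only, NOT the MIT algorithms: a group-aware reject rule where
# the abstention threshold is chosen per group, so each group keeps the same
# coverage instead of one global cutoff crowding out the smaller group.
import numpy as np

rng = np.random.default_rng(1)

def group_aware_accept(uncertainty, groups, coverage=0.8):
    """Accept the most confident `coverage` fraction of samples within each group."""
    accepted = np.zeros(len(uncertainty), dtype=bool)
    for g in np.unique(groups):
        mask = groups == g
        threshold = np.quantile(uncertainty[mask], coverage)
        accepted[mask] = uncertainty[mask] <= threshold
    return accepted

# Hypothetical data: a minority group (label 1) whose predictions are noisier.
groups = np.concatenate([np.zeros(900, dtype=int), np.ones(100, dtype=int)])
uncertainty = np.where(groups == 0,
                       rng.normal(1.0, 0.2, size=1000),
                       rng.normal(1.6, 0.3, size=1000))

global_accept = uncertainty <= np.quantile(uncertainty, 0.8)   # one global cutoff
fair_accept = group_aware_accept(uncertainty, groups)          # per-group cutoffs

for name, acc in [("global", global_accept), ("per-group", fair_accept)]:
    rates = [acc[groups == g].mean() for g in (0, 1)]
    print(f"{name} rule -> coverage majority {rates[0]:.2f}, minority {rates[1]:.2f}")
```

With the global cutoff, nearly all of the noisier minority group is rejected; the per-group rule keeps coverage even, which is one simple way to picture the disparity the researchers set out to reduce.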

The researchers plan to test the approach on other kinds of data, such as housing prices, interest rates and loans, and student results. In these experiments, they aim to try various techniques with the algorithms to understand how much of the available information the models can make use of.

According to Greg Wornell, a senior author of the study at MIT, “Rather than just minimizing some broad error rate for the model, we want to make sure the error rate across groups is taken into account in a smart way.” In other words, the approach pays attention to which samples from the dataset the model should answer, so that error is balanced across groups rather than only minimized on average.
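One simple way to read the quote (an illustration of the general idea, not the authors' formulation) is that an error metric can be evaluated per group, for example by penalizing the worst-performing group instead of the overall average:

```python
# Illustration of a group-aware objective (not the authors' formulation):
# instead of averaging squared error over all samples, compute it per group
# and report the worst-performing group.
import numpy as np

def worst_group_mse(y_true, y_pred, groups):
    """Return the largest mean squared error over the groups present in `groups`."""
    errors = []
    for g in np.unique(groups):
        mask = groups == g
        errors.append(np.mean((y_true[mask] - y_pred[mask]) ** 2))
    return max(errors)

# Hypothetical example values.
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 2.0, 5.5])
groups = np.array([0, 0, 1, 1])
print(worst_group_mse(y_true, y_pred, groups))   # dominated by group 1's larger errors
```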

The study is scheduled to be presented in depth at the International Conference on Machine Learning (ICML) this month.

