In this article, I aim to discuss how labeling contributes to the technological advances we use in our day-to-day lives. Labeling theory in sociology examines how agents of social control attach stigmatizing stereotypes to groups. How does this attachment of stereotypes function when it comes to machines?
Labeling becomes a far more complex concept when it is used in algorithmic training, teaching machines to act as humans do. Machine understanding and interpretation of images depends on labeled data. Neuroscience has contributed to remarkable successes in artificial intelligence through artificial neural networks, which are loosely modeled on the neurons of the brain.
I first noticed labeling when researching the use of machine-learning algorithms in drones. The way our brain describes what an image depicts, whether a person or an object, depends largely on labeling, and this process has been transferred to machines. The descriptions we rely on, size, color, shape and other characteristics, are just as critical for a machine identifying an object or a person.
Describing an object by these characteristics can seem quite simple; it is something we do daily without realizing it. We describe a "red high chair" or a "big yellow photo frame" without even questioning the color or the size of the object. In the same way that we see and identify, a machine can be programmed, through trained algorithms, to see a color, a shape and an object and later identify it.
This is machine learning based on algorithmic training, in which labels play a crucial role. Constructing these labels requires human contributors to collect, analyze, label and finally taxonomize data, including images, which are then used to train a neural network or an algorithm.
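As a toy illustration of the taxonomizing step in this pipeline, the sketch below groups human-labeled images into categories; the file names and labels are invented for the example:

```python
# Hypothetical sketch: grouping human-labeled images into taxonomies.
# File names and labels are invented for illustration.
from collections import defaultdict

# Each pair is (image file, label assigned by a human annotator).
annotations = [
    ("img_001.jpg", "red chair"),
    ("img_002.jpg", "red chair"),
    ("img_003.jpg", "yellow frame"),
    ("img_004.jpg", "red chair"),
]

# Taxonomize: collect all images that share a label under one category.
taxonomy = defaultdict(list)
for image, label in annotations:
    taxonomy[label].append(image)

print(len(taxonomy["red chair"]))  # 3 images share the "red chair" category
```

Every grouping here is downstream of a human judgment: the machine never sees the chair, only the annotator's word for it.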
Databases such as ImageNet and MegaFace illustrate how massive collections of images have been used to expand machine vision. The images in these databases drive algorithmic training and, therefore, the advancement of machine learning. With over 200,000 categories, also known as "taxonomies", images are grouped according to what they depict, and these groups are then used for a machine to learn to recognize a concept on its own. For example, several images depicting different red chairs can be placed under the same taxonomy and then used to teach a machine to identify a red chair.
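To make the "learn to identify a red chair" step concrete, here is a deliberately simplified sketch. It assumes each image has been reduced to two invented feature numbers (size and redness) and uses a nearest-centroid rule as the "model"; real systems train neural networks on raw pixels, but the role of the labels is the same:

```python
# Minimal sketch of supervised learning from labeled examples.
# Features (size, redness) and labels are hypothetical illustrations.

labeled_data = [
    ((0.9, 0.8), "red chair"),     # human annotator's label
    ((0.8, 0.9), "red chair"),
    ((0.2, 0.1), "yellow frame"),
    ((0.3, 0.2), "yellow frame"),
]

def train_centroids(data):
    """Average the features for each label: a toy 'trained model'."""
    sums, counts = {}, {}
    for features, label in data:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: tuple(v / counts[lbl] for v in s) for lbl, s in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest to the new features."""
    return min(
        centroids,
        key=lambda lbl: sum((a - b) ** 2 for a, b in zip(centroids[lbl], features)),
    )

model = train_centroids(labeled_data)
print(predict(model, (0.85, 0.85)))  # prints "red chair"
```

Note that the machine can only ever reproduce the categories the annotators defined; whatever biases shaped the labels are inherited by the model.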
MegaFace, however, and parts of ImageNet, comprise images of people used for face recognition. Moving from labeling that identifies objects to labeling that identifies people, the process becomes problematic: people cannot be labeled objectively by characteristics such as shape, color or nationality.
What an image depicts is open to interpretation, as it can differ for each viewer. Labeling people so that machines can identify them raises the issue of subjectivity. The subjective understanding of the people who label and taxonomize images has been challenged: images are labeled according to what annotators understand them to depict, and that understanding becomes the empirical representation of what the machine should learn to see.
Examples of machines identifying facial expressions range from our own phones to the use of AI in deception detection. High-security environments such as airports have introduced this type of AI in cameras to support security screening. Machines trained on labeled images have been given the role of identifying potential security threats involving people.
It is therefore important to ask: who is responsible for labeling? Should we apply the same algorithmic training to machine learning when it comes to people? Have we prioritized technological advances over protection from them?
These questions are hard to answer, especially given the involvement of such technologies in military and political matters. Viewing people through machines founded on labeled images can be dangerous. The hidden biases and bio-political views embedded in vision systems mean that machines are doing something structurally problematic.
Labels recorded by "ImageNet Roulette", a project built on ImageNet, included "offender", "failure" and "man trap". Such labels show machines being trained not simply to describe images of people but to judge them.
Research has found that only 1% of the images collected are then used for machine learning, while the remaining 99% go unseen. Machines therefore learn from a limited number of categorized images, which exclude a significant amount of data. The potentially prejudiced representation of people, carrying colonial and discriminatory traces from the past, is being applied to modern-day technologies.
At this point, it is important to raise the risks and threats of machine vision systems. The lack of representation caused by the limited use of data in algorithmic training, and the risk of misidentification, especially of people, are the most immediate technical dangers. The application of such technologies in military weapons, particularly weaponized remotely piloted aircraft and unmanned aerial vehicles, has become a reality for people facing conflict around the world.
The risks and threats of modern technologies born through the process of labeling have been researched extensively over the past few decades. The aerial gaze of machines observing people, causing civilian anxiety and mistrust, has been described as a psychopathological impact. The traumatic consequences of witnessing the lethality of military drones have even been considered within the human rights regime.
The case for a proposed right to live free from physical and psychological threat from above emphasized the need to establish ways of protecting civilian populations from the risks of these technological developments.
What at the beginning of this article seemed a simple case of labeling colors and shapes has expanded in ways we rarely ponder. Labeling is a foundation on which modern technologies constantly develop. The technicalities surrounding machine learning and AI can be hard to challenge, and this may be why we do not question such processes as much as we should.