On October 4, 2022, the White House released a blueprint for an AI Bill of Rights. The document highlights legal questions that have yet to be standardized across the use of algorithmic systems. Its central concern is protecting the democratic aspirations of the U.S. in the digital age.
Formulated by the Office of Science and Technology Policy, the 73-page report states at the outset that it is "intended to support the development of policies and practices that protect civil rights in the building, deployment, and governance of automated systems." It is the result of a year-long process that included conversations with stakeholders and impacted communities, meetings with industry experts, and discussions with federal policymakers. However, it is non-binding and does not translate into enforceable regulatory policy.
Although the white paper does not mandate compliance, it draws attention to the urgency of addressing A.I.'s tendency to discriminate. At its core are five overarching principles: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback.
The first principle suggests that automated systems should be developed in consultation with their designers as well as the communities that will come to use them. The second prioritizes equitable design so that users' identity markers cannot be exploited to marginalize them. The third upholds privacy and urges technology companies to present consent requests in plain language. The fourth calls for transparency, and the fifth advocates that deployers of automated systems make a human alternative available when appropriate.
The blueprint finds relevance against a backdrop populated with evidence of harm from A.I. A team of researchers from the University of California found racial bias encoded in an automated system used in a large hospital: according to their data, Black patients were less likely to be referred for care by the system than white patients with the same symptoms.
Similar traces of racial bias surfaced when an algorithm used for federal profiling showed weaker accuracy in recognizing Black faces. Another study, which investigated three facial recognition systems, found that error rates were highest for darker-skinned women.
Furthermore, the Federal Bureau of Investigation (FBI) in the U.S. recently issued a warning about legacy medical devices. As the Cybersecurity and Infrastructure Security Agency (CISA) has noted, unpatched devices have put patient data at risk several times in the past.
In the wake of recurrent events in which civil rights are compromised by algorithms, the formal advisory comes as a welcome initiative. It will amount to legitimate change, however, only when U.S. lawmakers incorporate its standards into reform frameworks.
It is important to note that the blueprint carries a disclaimer freeing the government from any obligation to follow its principles. Regardless, it will be interesting to see how its goals are implemented across the federal administration if they are ever written into law.