Facial Recognition AI: Does It Discriminate by Skin Color?

We glance at our iPhones to unlock them and wonder how Facebook managed to tag us in a photo. But face recognition is no joke. It is used by law enforcement for surveillance, to screen passengers at airports, and to inform housing and employment decisions. It has been banned in several cities, including San Francisco and Boston. Why? Face recognition is one of the most widely used biometrics (alongside fingerprints, iris scans, palm prints, and voice), yet it can also be among the most inaccurate and the most privacy-invasive.

 

Police use facial recognition to match suspects against driver's license and mugshot photos. Nearly half of American adults, more than 117 million people, have photos in a face recognition network searchable by law enforcement. Inclusion occurs without consent or awareness and is enabled in part by a lack of legislative oversight. More troubling, these technologies exhibit significant racial bias, particularly against Black Americans.

 

Even when face recognition is accurate, it empowers law enforcement agencies with a documented history of racist conduct and anti-activist surveillance, and it can amplify existing inequalities.

 

Face recognition algorithms can be inequitable.


Face recognition algorithms boast high classification accuracy (over 90%), but these results are not universal. Research has shown that error rates differ across demographic groups, with the lowest accuracy for subjects who are female, Black, and between 18 and 30 years old. The landmark 2018 Gender Shades project took an intersectional approach to evaluating three gender classification algorithms, including those developed by Microsoft and IBM. Subjects were grouped into four categories: darker-skinned females, darker-skinned males, lighter-skinned females, and lighter-skinned males. All three algorithms performed worst on darker-skinned females, with error rates up to 34% higher than for lighter-skinned males. NIST later confirmed these findings in an independent assessment of 189 face recognition algorithms, which were likewise least accurate on women of color.
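To make this kind of evaluation concrete, here is a minimal sketch of a disaggregated audit in Python. It assumes a hypothetical list of (true label, predicted label, skin type) records as placeholder data, not the actual Gender Shades dataset, and simply reports the error rate for each intersectional subgroup rather than one aggregate accuracy number.

```python
from collections import defaultdict

# Hypothetical audit records: (true_gender, predicted_gender, skin_type).
# Placeholder values only; the real Gender Shades data is not reproduced here.
records = [
    ("female", "male",   "darker"),
    ("female", "female", "lighter"),
    ("male",   "male",   "darker"),
    ("male",   "male",   "lighter"),
    # ... many more subjects ...
]

def error_rates_by_subgroup(records):
    """Return the classification error rate for each skin-type x gender subgroup."""
    totals, errors = defaultdict(int), defaultdict(int)
    for true_gender, predicted_gender, skin_type in records:
        group = (skin_type, true_gender)
        totals[group] += 1
        if predicted_gender != true_gender:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

for group, rate in sorted(error_rates_by_subgroup(records).items()):
    print(f"{group[0]}-skinned {group[1]}s: {rate:.1%} error rate")
```

Reporting a single overall accuracy figure hides exactly the disparities that this kind of per-subgroup breakdown exposes.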

 

These striking results prompted immediate reactions and an ongoing discourse about equity in face recognition. IBM and Microsoft announced they would modify their test cohorts and improve data collection for specific demographics to reduce bias.

 

A follow-up Gender Shades audit examined additional algorithms, including Amazon's Rekognition, and found the same pattern: error rates were highest for darker-skinned women (a 31% error rate in gender classification). This echoed the American Civil Liberties Union's (ACLU) earlier assessment of Rekognition's face-matching abilities, in which 28 members of Congress were incorrectly matched to mugshots, and the false matches were disproportionately people of color.
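For context, Rekognition's face matching is exposed through an API such as CompareFaces in the AWS SDK. The sketch below, written with boto3 and hypothetical image files, illustrates the kind of one-to-one comparison at the service's default 80% similarity threshold that such a test relies on; it is not the ACLU's actual code.

```python
import boto3

# Hypothetical image paths; any two face photos would do.
SOURCE_IMAGE = "legislator_portrait.jpg"
TARGET_IMAGE = "mugshot.jpg"

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open(SOURCE_IMAGE, "rb") as source, open(TARGET_IMAGE, "rb") as target:
    response = rekognition.compare_faces(
        SourceImage={"Bytes": source.read()},
        TargetImage={"Bytes": target.read()},
        SimilarityThreshold=80,  # service default; stricter thresholds return fewer matches
    )

for match in response["FaceMatches"]:
    print(f"Possible match with {match['Similarity']:.1f}% similarity")
```

Part of Amazon's rebuttal to the ACLU was that the test used this default threshold rather than the much higher confidence it recommends for law enforcement, which is precisely why threshold choices and audit methodology matter.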

 

Amazon's response was defensive: rather than addressing racial bias, it questioned the auditors' methodology. These discrepancies are concerning because Amazon has actively marketed the technology to law enforcement, where such services must work fairly for everyone.

 

Racial discrimination in law enforcement's use of face recognition


How face recognition is used is another key factor in its racial impact. The "lantern laws" of 18th-century New York required enslaved people to carry lanterns after dark so they remained visible. Even if face recognition algorithms become more equitable, advocates worry the technology will be deployed in that same spirit, layered onto existing racist patterns of law enforcement and disproportionately harming Black communities. Face recognition could also be turned against other marginalized groups, such as undocumented immigrants targeted by ICE or Muslim citizens subjected to heightened surveillance.

 

George Floyd's murder by the Minneapolis police put discriminatory law enforcement practices in sharp relief. Black Americans are more likely than White Americans to be arrested and incarcerated for minor offenses, and face recognition systems draw on the resulting mugshot databases to make their predictions. This creates a feed-forward loop: racist policing strategies lead to disproportionate arrests of Black people, which in turn makes them overrepresented in the data used for matching. The NYPD, for example, maintains a database of 42,000 "gang affiliates" that is 99% Black and Latinx.

 

Inclusion requires no proof of actual gang affiliation, and some police departments incentivize officers to identify gang members, which encourages false reports. People added to these monitoring databases can face harsher sentencing and higher bail.

 

How do unjustified surveillance and face recognition harm Black Americans? According to the Algorithmic Justice League, "face surveillance threatens our rights including privacy, freedom to express ourselves and due process." The FBI has a long history of tracking down and suppressing prominent Black leaders and activists, and face recognition is already used to identify Black Lives Matter protestors. Constant surveillance causes fear and psychological harm and leaves its subjects vulnerable to targeted abuses and physical danger. Expanding government surveillance systems have also been used to restrict access to healthcare and welfare. And because its accuracy is biased, face recognition can misidentify suspects in criminal justice settings, leading to innocent Black Americans being jailed.

 

A striking example is Project Green Light (PGL), a model surveillance program implemented in Detroit in 2016 that installed high-definition cameras throughout the city. The footage streams directly to the Detroit Police Department and can be run through face recognition against criminal databases, driver's license photos, state ID photos, and more, which puts nearly every Michigan resident in the system. PGL stations are not distributed equally.

 

Surveillance concentrates in predominantly Black areas while largely avoiding White and Asian enclaves. A 2019 critical analysis of PGL based on resident interviews linked this surveillance and data collection to housing insecurity and the loss of employment opportunities, and face monitoring systems can feed further policing and criminalization.

 

A more equitable landscape for face recognition



These inequalities are being addressed through a variety of avenues. 

 

Some of these avenues focus on technical algorithmic performance. First, algorithms can be trained on diverse and representative datasets, since most standard training databases are predominantly White and male; each individual should consent to being included in such datasets. Second, the data sources themselves (the photos) can be made more equitable. Default camera settings are typically calibrated for lighter skin, so Black Americans are less likely to be captured in high-quality images; standards for image quality and camera settings when photographing Black subjects can narrow this gap. Third, regular ethical audits are needed to evaluate performance, especially across intersecting identities (for example, darker-skinned women). NIST and other independent bodies can hold face recognition companies accountable for methodological biases.
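One way a recurring audit can enforce accountability is to block a release whenever the gap between the best- and worst-served subgroups exceeds a tolerance. The Python sketch below is a minimal illustration of that idea, assuming subgroup error rates have already been measured; the numbers and the 5-point tolerance are placeholders, not an established standard.

```python
# Hypothetical measured error rates per subgroup (placeholder numbers).
subgroup_error_rates = {
    "darker-skinned female": 0.21,
    "darker-skinned male": 0.06,
    "lighter-skinned female": 0.07,
    "lighter-skinned male": 0.01,
}

MAX_ERROR_GAP = 0.05  # illustrative tolerance: worst group within 5 points of best

def audit_passes(error_rates, max_gap=MAX_ERROR_GAP):
    """Fail the audit if any subgroup's error rate trails the best by more than max_gap."""
    best = min(error_rates.values())
    worst_group, worst = max(error_rates.items(), key=lambda item: item[1])
    gap = worst - best
    print(f"Largest gap: {gap:.2f} ({worst_group})")
    return gap <= max_gap

if not audit_passes(subgroup_error_rates):
    print("Audit failed: do not ship or deploy this model version.")
```

The point is not the specific threshold but that fairness criteria become explicit, repeatable checks rather than one-time marketing claims.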

 

Another approach focuses on the application setting: legislation can govern the use, and curb the misuse, of face recognition technology.

 

Even if algorithms were 100% accurate, they would still contribute to mass surveillance and targeted deployment against racial minority groups. Numerous advocacy groups have met with legislators to educate them about face recognition and to demand transparency and accountability from producers. The Safe Face Pledge, for example, calls on organizations to eliminate bias in their technologies and to evaluate how their applications are used. These efforts have made some progress: the Algorithmic Accountability Act, introduced in 2019, would empower the Federal Trade Commission to regulate companies and impose obligations to evaluate algorithmic training, accuracy, and data privacy.

 

Several congressional hearings have also focused on anti-Black discrimination in face recognition, and the powerful protests after George Floyd's murder drove significant change. Congressional Democrats introduced a police reform bill containing provisions to restrict the use of facial recognition technology. Even more remarkable was the response from the tech industry: IBM stopped selling face recognition technology, Amazon announced a one-year freeze on police use of Rekognition, and Microsoft halted sales of its face recognition technology until federal regulations are established. These developments reinforce calls for progressive legislation, including efforts to reform or defund the police. The movement for fair face recognition is now intertwined with the fight for an equitable criminal justice system.

 

Conclusion

Face recognition is a powerful technology with significant implications for both criminal justice and everyday life. There are also less controversial applications, such as assistive technology that supports people with visual impairments. Although this post has focused on face recognition, the problems and solutions discussed here are part of larger efforts to eliminate inequities across artificial intelligence and machine learning. Confronting the racial bias embedded in face recognition will make these algorithms both more equitable and more effective.
