IBM Will Not Work on Facial Recognition Till There Are Reforms to Prevent Surveillance, Profiling & More
IBM's decision to exit the facial recognition business comes amid the death of George Floyd, which has sparked mass protests worldwide.

Technology giant IBM announced that it will no longer offer facial recognition or analysis software, as the company's new chief executive officer, Arvind Krishna, voiced support for policies to advance racial justice and combat systemic racism. In a letter to members of the United States Congress, Krishna urged a national discussion on whether domestic law enforcement agencies should be allowed to use facial recognition technology at all. IBM's decision to shut down its facial recognition business comes at a time when government officials across the United States have proposed reforms to address police brutality and racial inequity, following the death of George Floyd, which has led to mass protests worldwide.

“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” Krishna wrote in the letter delivered to members of Congress. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies,” the letter read. Krishna's letter was addressed to prominent US senators, including Kamala Harris and Cory Booker, and called for greater transparency and accountability in policing. Furthermore, IBM says it is willing to work with lawmakers on enacting police reform legislation that will advance racial equity.

Krishna, who took over as CEO from Ginni Rometty in April, also noted that while artificial intelligence can be a powerful tool to help keep people safe, the technology needs to be tested for bias. “Artificial Intelligence is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported,” he wrote.
