Microsoft is planning to implement self-designed ethical principles for its facial recognition technology by the end of March, as it urges governments to push ahead with matching regulation in the field. In December, the company called for new legislation to govern artificial intelligence software that recognizes faces, advocating for human review and oversight of the technology in certain critical cases as a way to mitigate the risks of biased outcomes, intrusions into privacy, and threats to democratic freedoms.
Microsoft President and Chief Legal Officer Brad Smith said the company plans to “operationalize” its principles, which involves drafting policies, building governance systems, and engineering tools and tests to ensure the technology stays in line with its goals. It also involves setting controls for the company’s global sales and consulting teams to prevent the technology from being sold in cases where it risks being used for an unwanted purpose.