Microsoft demonstrates ethical leadership in calling for regulation of facial recognition

What a difference a CEO makes. Microsoft CEO Satya Nadella has made a number of significant changes at the company that have improved its fortunes and reputation — not least of which is taking stronger ethical positions on technology issues than his predecessor and most of his competitors and peers.

Nadella was the first major tech CEO I saw forcefully discuss the importance of ethical technology development, at the company’s 2017 Build developer conference. (Apple’s Tim Cook has also done so, but in a more self-interested way.) The most recent example of Microsoft’s ethical leadership is company President Brad Smith’s call on Friday for federal regulation of facial recognition technology:

Facial recognition technology raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression. These issues heighten responsibility for tech companies that create these products. In our view, they also call for thoughtful government regulation and for the development of norms around acceptable uses . . .

[I]f there are concerns about how a technology will be deployed more broadly across society, the only way to regulate this broad use is for the government to do so. This in fact is what we believe is needed today – a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.

In many ways, this is a remarkable thing, given that most major US tech companies favor “self-regulation,” which is often a euphemism for “trust us” and little more. That’s not what Microsoft is saying. It’s not calling for a working group or private consortium or trade group to handle the issue. It’s asking for bipartisan Congressional legislation.

Arguably, the Cambridge Analytica and “fake news” scandals that impacted the 2016 election show self-regulation does not work. And when the choice is promoting revenue growth versus “doing the right thing,” most companies will choose revenue. The quarterly pressures of being a public company almost always compromise corporate ethics.

There are cynical interpretations of Microsoft’s move that argue it’s motivated, at least in part, by self-interest. I see the company’s concerns and motivations as genuine, however. (Last month, following an employee uproar over potential AI work for the Department of Defense, Google announced a kind of AI manifesto with new rules and principles governing AI project development.)

Facial recognition, AI and other emerging technologies are ripe for potential abuse. China is already a frightening, totalitarian example of how facial recognition can be deployed to control a population. The differences between China and the US are not as great as we’d like to believe, especially given the accelerating, judicially sanctioned decline of civil liberties and civil rights.

I’m not saying that technology leaders shouldn’t advocate responsible technology development or promote voluntary standards. Ethics should be part of the conversation at every step of the way. But that’s not enough in certain contexts.

Congress is deeply dysfunctional and may not be able to accomplish much of anything in the next several terms. But self-regulation has already shown that it probably cannot deliver the protections required by a constitutional democracy when fundamental rights are at stake.

About The Author

Greg Sterling is a Contributing Editor at Search Engine Land. He writes a personal blog, Screenwerk, about connecting the dots between digital media and real-world consumer behavior. He is also VP of Strategy and Insights for the Local Search Association.