On Artificial Intelligence and Real Kindness

AI is improving by leaps and bounds. How can we reconcile it with compassionate business leadership?

Scientists and researchers who specialize in Artificial Intelligence widely agree that the past decade has been a golden age of AI development. A technology accustomed for decades to incremental improvement is suddenly seeing jaw-dropping advances arrive seemingly overnight. There’s a lot of thrilling potential surrounding the technology, but talk of how AI can help us typically comes with caveats about how it might harm us, especially by replacing human workers.

IESE Business School’s recent Global Alumni Reunion focused on how AI is upending the business world. The event, titled “AI as a Force for Good,” tended toward optimism about the future. I have some concerns, specifically about how Artificial Intelligence could harm the “kind management” movement I’ve been researching. Still, there was no shortage of compelling arguments made in defense of AI and how it can align with, rather than undermine, humane workplaces.

AI at a Tipping Point

Dario Gil, a senior vice president and director of research at IBM, gave a keynote address that convincingly argued for AI as a positive business changemaker. Gil first offered a history of Artificial Intelligence. The field was established in 1956 with the intention of equipping computers with what we consider intelligence, but for decades progress lagged.

Since 2012, there’s been success in creating large neural networks and data sets that provide examples of what engineers want the systems to learn. But it’s only in the past two years that the advent of what Gil called self-supervision at scale has reduced costs and dramatically sped up progress, especially in understanding natural languages.
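
To make “self-supervision” concrete, here is a minimal illustrative sketch in Python. It is my own simplification, not something from Gil’s talk, and the toy corpus and the make_training_pairs helper are hypothetical: the point is simply that the training labels come from the raw text itself, by hiding one word at a time and asking a model to predict it, so no costly human labeling is required.

# Illustrative sketch only: "self-supervision" means the labels are derived
# from the data itself rather than from human annotators.
import random

corpus = [
    "kind leaders build trust",
    "ai systems learn from data",
    "regulation shapes how technology is used",
]

def make_training_pairs(sentence):
    """Turn one unlabeled sentence into (masked context, hidden word) pairs."""
    words = sentence.split()
    pairs = []
    for i in range(len(words)):
        # Hide the i-th word; the hidden word becomes the training label.
        context = words[:i] + ["[MASK]"] + words[i + 1:]
        pairs.append((" ".join(context), words[i]))
    return pairs

# Every unlabeled sentence yields supervised examples with zero manual annotation.
examples = [pair for s in corpus for pair in make_training_pairs(s)]
print(random.choice(examples))
# e.g. ('kind [MASK] build trust', 'leaders')

At scale, roughly this same trick is applied to vast collections of text, which is the cost reduction and acceleration Gil was describing.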

“The potential of AI is not even up for debate,” Gil said. I don’t disagree, but I do question how much of that potential is threatening. As for the benefits, Gil highlighted the understanding not just of natural languages (e.g., English and Spanish) but also of the languages of chemistry and coding. AI could make back-office tasks less cumbersome and improve customer service. Then there’s the sexier promise it holds for managing sophisticated systems like cybersecurity.

What’s holding back the implementation of AI by businesses? Gil mentioned stodgy organizational cultures and shaky understandings of what AI is and what it can do. “AI without a doubt is at the frontier of adoption,” he said. “But if there is discomfort with adapting new methodologies, a common reaction is, ‘We’re not ready for that.’”

The gravest risk is ‘regulation mathematics.’ You start repressing it like we’ve never repressed any algorithm. At the end of the day, AI is simply code and algorithms.

Dario Gil, IBM

The biggest roadblock in Gil’s view is regulation, and he didn’t shrink away from decrying it. “The gravest risk is ‘regulation mathematics,’” Gil said. “You start repressing it like we’ve never repressed any algorithm. At the end of the day, AI is simply code and algorithms. And they can be understood. But there is fear associated with the term that could inhibit growth.”

I’d counter that much of that fear is justified. I’ve written that the common thread connecting great leaders is the understanding that leadership is filled with contradictions. For example, strength is acquired not by ruling with an iron fist, but by showing compassion. And the most powerful tool at a leader’s disposal is kindness.

Kindness has in fact become one of the most desirable traits to employees. A recent NBC survey found that a majority of those polled would prefer a kinder boss to a 10% pay hike. It’s an attitude reflected in the broader push by employees and customers for the companies they work for or buy from to hold values (related to sustainability, inclusivity, equal pay, etc.) that align with their own.

I’m not at all surprised by this. Ever-worsening economic Darwinism, resurgent nationalism, and a coarsening of public discourse have in recent years led many to conclude that the world is getting meaner. At the same time, social media has given workers and consumers unprecedented power to hold businesses to account.

Why do I fear that AI will chip away at workplace kindness? Because, at least in its business applications, the technology has a lot to do with the bottom line and little to do with forging the sort of respectful cultures that are essential to kind leadership. I’ve shuddered while reading coverage of the Dickensian work cultures at Amazon warehouses. Given the typical arc of technological progress, I’m afraid that AI will enable even crueler workplace control than the kind already permitted by less advanced data and algorithms.

Then there’s the obvious risk of job losses. The Times recently reported that Ajeya Cotra, a senior analyst with Open Philanthropy who studies AI risk, estimated just two years ago that there was a 15% chance of “transformational AI” emerging by 2036. (That is, AI at a scale to bring about economic and societal changes, including the elimination of most white-collar knowledge jobs.) Ms. Cotra recently upped that to a 35% chance.

I cannot reconcile kind leadership with leaders who consider their human employees disposable.

Toward a Common Good?

Gil respectfully acknowledged these fears and assured the audience that the more dystopian forecasts wouldn’t come to pass. “What I always say in defense of AI and its implementation is that humans need to be at the center of those conversations and that technology should be at the service of our needs, not the other way around,” he said.

That’s an admirable statement, but a humane implementation of AI will be tough to achieve given the rapid-fire pace of the technology’s maturation. Unlike Gil, I see commonsense regulation as the wise path forward. The aim of that regulation should be to provide safeguards against a brutal disruption by AI without handicapping the pace of innovation.

Europe is currently looking at AI through a regulatory prism, as opposed to the opportunity prism in the US. Recent history has taught us that Europe is often ahead of the curve in putting certain limits on technology that protect consumers, not to mention traditional media companies. That regulatory approach should be fine-tuned and applied in America.

Gil said in his address that his and IBM’s embrace of AI is “not about tech determinism that extracts from all of us human agency in selecting the society we want.” It’s a comforting thought, but thoughts need to be turned into action. Regulation by human organizations, imperfect as they are, is the best way to ensure people have a hand in determining AI’s, and society’s, future.
