Four Questions Business Leaders Should Ask About Harmful AI Development
Two months ago, an open letter advocating a six-month pause on certain AI development was posted on the Future of Life Institute website. The discussions, at least on LinkedIn, continue. The genesis of the open letter was the concern that development was moving so fast that even the developers were unclear about the risks. How quickly are we approaching the point where the software controls us, rather than empowers us? Can we stop harmful AI development?
Since the letter, Sam Altman has issued a response, and as best I can tell, one of his concerns is that the letter doesn't address the fact that OpenAI is working on other, equally risky projects besides ChatGPT.
Once the AI genie was let out of the bottle, so to speak, few can envision stuffing it back inside. And, like it or not, there are evil people in the world who act maliciously. Hackers have made many of us fear that our digital privacy is constantly at risk.
I find myself in the camp of people who believe that most people are good and mean well. I also agree that it would be better if this technology were carefully planned rather than rushed out with unknown risks. And yet, it's out there. And it's amazing. I find myself getting more and more excited about the benefits that are just around the corner, especially my personal robot.
And yet, I do think business leaders need to get smart. Here are four questions that you should be asking about harmful AI development.
1. In such a divided world, can we even agree on what constitutes harmful AI development?
I used to think that we were getting closer to a global consensus on what constituted right and wrong, healthy and unhealthy, good and bad. But I abandoned that notion years ago. We seem to have lost our ability to have difficult discussions and develop consensus on hard issues. And the issues with AI are much harder.
I don’t discount the Asilomar AI Principles that were introduced in 2017, but there aren’t many signers on the Future of Life website. Who knows if we can draw a valid conclusion from that? Perhaps those principles are a decent starting point?
Multiple news organizations recently announced that China had proposed new rules regarding AI development in response to Alibaba’s introduction of Tongyi Qianwen, a ChatGPT competitor. The proposed rules require content to adhere to “core socialist values,” plus other laws, and threaten criminal investigations and fines. I cannot argue with the core socialist values, but the implementation gives me pause.
In my last blog, I wrote about how the business world could lead the way in developing some consensus on core values that could help the business world collaborate and innovate together. Every day I see inspiring stories of how businesses are coping with change and development. Yes, I see the harmful social media posts too, but let’s look at real results and not marketing talk.
2. How will harmful AI development impact your business?
As a small entrepreneur, I find it rather overwhelming to even think about the state, federal, and international regulations that might govern AI. I remember first coming to grips with my responsibilities when Europe introduced GDPR. And then there are the variations in state laws, particularly product-related laws.
Business leaders, whether they want to or not, are forced to spend time keeping up with the various regulations that they face. And as AI development evolves and we use more AI tools, business owners have their work cut out for them.
Consider, for example, the human resources departments that are using AI software to screen job candidates. As I mentioned in a recent blog, companies such as retrain.ai and Eightfold AI are already using big data to help you hire and retain the brightest and best. But this comes with the risk that hiring managers will use the data with malicious intent, quickly eliminating those who are most likely to leave without making any retention efforts. Businesses have an obligation to understand and abide by the rules that the parties set forth in their license and sales agreements.
3. How much do your employees really care about harmful AI development?
From my experience in the US, and some of this gets confirmed in the news, people tend to live within their little bubbles. There are simply so many different websites and news organizations that it's impossible to keep up with what is happening across the globe. And most people are trying to reduce the fears that surround them every day, so they seem to be crawling into ever-shrinking bubbles.
From my chats with clients, they are far too busy to spend much time on AI. Yes, they've tried ChatGPT, but many complain that AI simply isn't saving them nearly as much time as it could. In fact, at many companies, highly paid people already spend too much time on administrative tasks, given their salaries. They don't have the time or energy to explore a new tool every week or month; they are busy working on the activities the organization needs. That's probably a good thing.
Generally, people just don't have the bandwidth to care about everything, and future harmful development, and how it might impact your organization, just isn't likely to be a big concern for many of your employees.
That said, your project and change managers can be your biggest asset, as I will discuss in greater detail in my next blog.
4. If the goal is that AI should do no harm, what rights do those who are being harmed have?
This subject has been bubbling since social media first emerged, particularly for consumers, and it's only going to get worse. Nor does it apply only to consumers.
Businesses can be harmed just as easily as consumers. What rights do they have, and from whom can they expect relief?
And don't discount the time suck when your employees have been harmed. For the employer who expects employees to work standard hours, that leaves no good time for them to pursue conversations with outside doctors, lawyers, or any government agency they need to consult.
Clearly, we are not going to stop harmful AI development. Therefore, we need to get smart and find ways to cope with upcoming rapid technological changes. In my next blog, I’ll offer eight ways that project leaders can help you do that.