Do we need a business case for ethical AI?
Why AI ethics is like DEI and sustainability - or, how to encourage more business people to do the right thing, even if it's not for what we may think are the 'right' reasons
In recent years, discussions on DEI (Diversity, Equity, and Inclusion) have swirled around whether it’s a good thing or a cop-out to emphasize that there’s a solid business case for supporting DEI. There IS solid proof that DEI efforts pay off:[1]
companies with diverse executive teams make more money (38% more in one study)
diverse teams make better decisions and deliver more innovative products (e.g., 19% more revenue from innovation)
firms prioritizing ESG (environmental, social, and governance) and DEI efforts tend to be rewarded over the long term with higher stock valuations and shareholder confidence
inclusive environments expand hiring pools and support happier employees and higher retention rates, reducing turnover costs
Some folks have argued that it shouldn’t be necessary to show financial benefits and ROI from inclusive practices. Some even feel it’s demeaning to try — that people ought to be inclusive and respectful to others just because it’s the morally right and decent thing to do.
Others have argued that since most businesses only care about short-term financial performance, showing that DEI is actually profitable would help to motivate more firms to take inclusion seriously and improve life for more people. And we’ve seen many articles on how to make the business case for diversity, e.g. [2][3].
We’ve seen similar sentiments and arguments about the Earth’s environment and sustainability.[4][5]
I feel like we’re seeing the same tension between values and money in the world of AI ethics … but not yet the same emphasis on building the business case for AI ethics. (I found one exception, from writers in Finland: [6])
Most of the folks in the AI ethics community (including me) have focused on raising awareness about AI ethics and communicating clearly about the ethical concerns around AI. Underlying this focus is optimism that awareness will be enough to convince people to ‘do the right thing’ and take ethics seriously when making decisions about AI-based tools.
However, as with DEI and sustainability, financial drivers still outweigh concerns about AI ethics for many businesses and people. Until ‘doing the right thing’ provably costs them less money or makes them more money, they won’t do it.
I’m not saying all businesses and people are wrong to prioritize financial impact.
Granted, some mega-firms are crassly exploiting hype and greed, building on stolen content with total disregard for ethics, to their immense profit and to the detriment of the rest of us.
But the reality is that many households, entrepreneurs, and small businesses are struggling to survive, especially in the current turbulent business climate. They literally may not be able to afford the effort and tradeoffs required to find and use ethical AI tools, or to avoid using AI tools that are, or may be, unethical.
However, that reality doesn’t mean we should throw up our hands and say, “Oh well, AI ethics are a lost cause”. Instead, like we’ve seen with DEI and environmental sustainability:
We can work on making it easier for people to find and use the few ethically developed AI tools (or ethical alternatives) that are out there, and advocate for companies that are already trying to do the right thing.
Those of us who can afford to do it can avoid subsidizing unethical companies; we can vote with our money and our devices, and not use or pay for their products and services.
We can lobby our governments to create financial incentives for businesses to do the right things: regulatory structures that make the businesses reaping huge profits from AI bear the true costs of the content they’ve stolen, the livelihoods they’ve destroyed, the labor they’ve exploited, and the environmental impacts their methods and products cause.
We can highlight why it’s in people’s own best interests to consider AI ethics when selecting the tools they use and when designing and developing AI-based tools and solutions — i.e., show them the business case.
These four actions have already happened, to some extent, for DEI and sustainability. Progress and success have been uneven, and are at some risk in various regions due to political turmoil, but overall we’ve still moved forward.
So perhaps we need to adopt a similar four-part strategy for AI ethics, and work on the business case for ethical AI (too).
What do you think? Do you have better ideas?
Are you up for working on the business case (or anything else) for AI ethics with me?

This topic was bouncing around in my head last weekend while I finished the first draft of my upcoming book, “Ethical AI In A Nutshell”. Thanks to for the engaging discussion which crystallized some thoughts on the topic and prompted this post! (All of my subscribers will automatically be notified of near-future book release announcements 😊)
[6] “Towards a Business Case for AI Ethics”, by Mamia Agbese, Erika Halme, Rahul Mohanani, and Pekka Abrahamsson, Springer. DOI: 10.1007/978-3-031-53227-6
Thank you for sharing this, Karen! I think companies can only put ethics first if it makes business sense, simply because if it doesn't, they will not survive. I do believe that companies trying to do the right thing, being transparent, and using AI to empower people (your employees, your customers, your users...) have a competitive advantage in a world in which users and potential employees are increasingly aware of AI's trade-offs.
That is why my purpose at nodeom.com is to create more stories of companies that become financially successful by being intentionally human-centric in their use of AI.
Now, for a company to be successful, it needs demand and it needs to attract talent... So, let's vote with our money, as you say, and with the companies we choose to work for or with.
Can definitely see a lot of business cases for AI ethics:
1. Hallucinations = worse performance, lawsuits, need for insurance
2. Copyright issues = lawsuits
3. Efficiency - ironically, ethical AI companies and alternatives may be cheaper? Or bespoke solutions focused on a company's actual needs may be better than an LLM trained to do a bunch of random things on data that may be irrelevant and confuse the output?