We Use AI and AI Uses Us: Five Actions We Can All Take (part 4 of 4)
The first step in solving the ethical challenges of AI is to recognize the risks and concerns. The next step is to DO something, individually and collectively. We have more agency than we may think.
This post is the fourth and last in a series of articles on Everyday Ethical AI that I was invited to share in the weekly She Leads AI newsletter. Please see SheLeadsAI.ai to subscribe to “The SLAI Effect”, and do check out their events, including the weekly Social Saturday calls!
As we’ve covered in the SLAI Effect newsletters over the past three weeks, there are significant risks and ethical concerns around AI and data, along with powerful benefits. Recognizing those risks and concerns is the first step in solving the ethical challenges of AI.
The next step is to do something, individually and collectively. We have more agency than we may think. Here’s a brief summary of five practical, concrete actions we can each take to protect our families and businesses from AI risks without losing the benefits of AI.
Action 1. Choose Your AI Tools Wisely
Find out what AI tools are being used in your workplaces, in schools you or your family attend, or in places where you shop or receive services. Be especially alert for EdTech tools used in classrooms and ambient medical scribes.
Find out whether your preferred AI tool providers use one of the big commercial AI platforms under the hood. That platform might use your data even if the provider promises they won’t use it themselves.
Do some homework on the AI tool providers and how they handle biases and confidentiality. Certifications aren’t perfect, but they may help you make better decisions. (My book includes guidance on relevant certifications, how to look for them, and a bonus resource listing companies that are ISO 42001 certified.)
Action 2. Protect Your Data
Only share the data an AI tool needs to be able to help you. For instance, anonymize names and addresses.
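To make the anonymization idea concrete, here’s a minimal sketch in Python of scrubbing obvious identifiers from text before pasting it into an AI tool. All names and patterns here are illustrative; for genuinely sensitive data, a purpose-built anonymization tool (one that can also recognize people’s names) is a better bet than simple patterns like these.

```python
import re

def redact(text):
    """Replace common personal identifiers with placeholders
    before sharing text with an AI tool.

    A minimal illustration: it catches emails, simple US-style
    phone numbers, and street addresses, but NOT people's names,
    which need a more robust tool to detect reliably."""
    # Email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    # Phone numbers like 555-867-5309
    text = re.sub(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b", "[PHONE]", text)
    # Street addresses like "42 Main Street"
    text = re.sub(r"\b\d+\s+\w+\s+(Street|St|Avenue|Ave|Road|Rd)\b",
                  "[ADDRESS]", text)
    return text

msg = "Reach Jane at jane.doe@example.com or 555-867-5309, 42 Main Street."
print(redact(msg))
# → Reach Jane at [EMAIL] or [PHONE], [ADDRESS].
```

Running your text through a filter like this first means the AI tool still gets enough context to help you, without ever seeing the real contact details.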
Set up code words to help protect your family or workplace from deepfakes.
Consider using data poisoning tools to help protect important images you post online.
Look for on-device AI or an LLM you can run locally.
Action 3. Use [Generative] AI Tools Wisely
Many needs can be met well without genAI. Consider non-AI alternatives (including human creators) that might save you both time and money and give you better quality. (For instance, my book bonus resources include a list of places to find good images that are free for commercial use.)
Learn how to prompt your genAI tools well with frameworks such as WISER, SPARK, and CRAFT. (Thanks to the readers who alerted me to the last two.)
Always be alert for biases and verify the accuracy of what an AI tool gives you.
Action 4. Seek Out Diverse Voices About AI
If you’re in a position to influence hiring, or guide AI product development, do what you can to improve equity in the workplace.
Expand your reading or listening beyond the self-serving tech bro hype that tends to dominate the news. Look for global perspectives and seek out the voices of women and under-represented groups in articles and podcasts. (One resource: the searchable SheWritesAI directory of nearly 500 women and nonbinary folks in 50+ countries.)
Go beyond Substack to connect with more thoughtful people who care about AI and how we use it. All are invited to join the Everyday Ethical AI community in the AI Vanguard Society on Mighty Networks.
Action 5. Write Down Your AI Usage Policy
First, write down your most important values. (Need inspiration? See this list of lists of values; no need to ask an LLM for suggestions!)
Next, list your tasks: simply note what you do now with AI and don’t, and why. (My book includes guidance on this, including the three task zones in the “Human/AI Boundary Map”, and the book bonus materials will include a template.)
Map the whys for the tasks in your three zones to your values, and reflect on anything that feels inconsistent.
Consider sharing (at least some of) your policy to be transparent with others about how you do and don’t use AI. Transparency is the #1 ask of the 73+ “AI, Software, & Wetware” guests I’ve interviewed. (If you write online, post a snippet or a link to your policy on your About page.)
What’s Next?
The five risks, concerns, and actions (and much more) are covered in my upcoming book, “Everyday Ethical AI: A Guide For Families & Small Businesses”. The ebook version will be released on Sept. 14 and is now available for preorder ($0.99!). A paperback edition will follow.
The world of AI doesn’t stand still. To automatically get fresh weekly articles and news about everyday ethical AI, subscribe here.
Articles in this She Leads AI series: