2025-09-19: What's new in everyday ethical AI this week
A new ethically-sourced chatbot (publicai.co), déjà vu on LinkedIn opt-outs, Italy's new AI law, new free AI literacy courses, and more
Hi folks! This is a new sort of weekly digest of news items in the world of AI that you might like to know about. Let me know if you find it useful.
AI Tool News
In my Everyday Ethical AI book (just released on Sunday), I mentioned the Swiss initiative to build a truly open-source Large Language Model (LLM), trained on ethically-sourced data spanning more than 1,000 languages and using renewable resources. It was expected in ‘late summer’.
Well, it launched in early September while my final manuscript was winding its way through Amazon KDP! See swiss-ai.org/apertus for more information. Apertus is a foundation model, not a chatbot - but a chat interface is now available at PublicAI: publicai.co.
Y’all know I love finding new ethically-developed AI tools, and I’m going to be kicking the tires on this one as soon as I can. (Thanks for the tip 😊)
The announcement says: “EPFL, ETH Zurich, and the Swiss National Supercomputing Centre (CSCS) has released Apertus, Switzerland’s first large-scale open, multilingual language model — a milestone in generative AI for transparency and diversity. Trained on 15 trillion tokens across more than 1,000 languages – 40% of the data is non-English – Apertus includes many languages that have so far been underrepresented in LLMs, such as Swiss German, Romansh, and many others. Apertus serves as a building block for developers and organizations for future applications such as chatbots, translation systems, or educational tools.”
To try it out, go to chat.publicai.co, click “Continue with Public AI”, then at the bottom of the next dialog, click “Create an account”. If you try it too, please share what you think!
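For developers who’d rather poke at the underlying foundation model than the chat interface, here’s a minimal sketch of how one might load an Apertus checkpoint with the Hugging Face transformers library. The model ID shown is my assumption, not something from the announcement; check the swiss-ai organization on Hugging Face for the actual released checkpoints and license terms.

```python
# Minimal sketch: trying an Apertus checkpoint locally with Hugging Face
# transformers. The model ID below is an assumption -- verify the actual
# checkpoint names and license on the swiss-ai Hugging Face organization.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "swiss-ai/Apertus-8B"  # assumed checkpoint name; verify before use

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Apertus is a base (foundation) model, so plain text completion is the
# simplest way to exercise it.
prompt = "Swiss German and Romansh are"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```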
Other AI News
LinkedIn déjà vu: LinkedIn is changing its terms and conditions on using our data for generative AI training, and reminding us all how to opt out. (But their notice still doesn’t mention the second way to opt out.) More info in my Sept. 19 LinkedIn post here.
What to do? Decide whether you’re good with LinkedIn using your personal and public data for AI training, and update your two settings accordingly.
Italy passed a new AI law covering privacy, oversight, and children’s access to AI. A Reuters article is here.
It’s also summarized in this Note.
What to do? If you’re based in Italy or have operations there, check out the new law.
Phishing: Chatbots are being used to craft ‘better’ phishing messages, especially for targeting the elderly. The chatbots’ guardrails are supposed to prevent this, but people are finding their way around the guardrails. See this Reuters article for more info.
What to do? Help the folks in your life (all ages) to be aware and cautious about cleverer phishing attacks.
Fresh hype: A big new book on AI by Eliezer Yudkowsky and Nate Soares, titled “If Anyone Builds It, Everyone Dies”, dropped this week. It’s about potential threats to humanity from AGI. There are lots of reactions and lots of posts about it here on Substack, including a nice short summary, plus some thoughts about it that you may want to read here if you have a NYT subscription.
What to do? Unless you’re a builder involved with AGI development initiatives, ignore the hype and don’t panic. Focus on coping with the AI and data risks that are already here in our daily lives.
New AI References
New references I’ll be adding to the table on the book bonuses page:
AI Educator Tools: The Future of Learning https://aieducator.tools/
Femtech Guidance: https://merltech.org/girl-effect-guidelines-ethical-ai-chatbots/ (via Vari Matimba in a MERL workshop on AI this week)
AI Literacy Courses: New free AI literacy courses for free and paid subscribers from Duke University.