Is AI a threat to your expertise, or a tool to unlock new voices?
Are we asking the wrong questions about AI? Instead of fearing replacement, we can focus on how to use it as a tool to lower the barrier to expression—and give more people a voice.
In 400 BC, Plato warned about the dangers of writing. He believed it would lead to forgetfulness. That it would allow people to appear knowledgeable without being wise. He feared that people would fill their minds not with understanding, but with “the conceit of wisdom”.
“If people use this, it will implant forgetfulness in their souls. By telling them of many things without teaching them you will make them seem to know much. For the most part they know nothing, and as they are filled, not with wisdom, but with the conceit of wisdom, they will be a burden to their fellows.”
– Plato, 400 BC
Fast forward 2,400 years and we’re hearing the same arguments about artificial intelligence. That people will use tools like ChatGPT to speak on things they don’t fully understand. That it will replace thinking. That it will allow people to perform credibility instead of earning it.
And those concerns are valid. But they’re only part of the picture.
A shorter distance between idea and expression
Throughout history, new tools have often given more people the chance to share their perspective.
Just as writing helped ideas travel across distance and time, AI helps them take shape more easily. It’s a bridge between knowing something and being able to communicate it.
Think of someone who has worked in a charity for 20 years. They know the ups and downs. They’ve seen what works and what does not. But they’ve never written a blog post or given a public talk about their experiences. Now, they can use tools like ChatGPT to get started. The knowledge is already there – the tool just helps shape it.
Or a young person with something important to say. They may not have fancy equipment, but they do have a unique experience. AI tools can help them create images, videos or music to help tell their story.
A mother who has had a series of blood tests that provide lots of information but little insight can use these tools to help interpret the results before discussing them with her doctor.
A survivor of image-based abuse can submit a legal ‘take down’ letter to tech platforms to remove content, with the help of Chayn’s Survivor AI.
Even someone writing to oppose a planning application – not sure of the rules or how to structure an argument – can now find help quickly. They can speak up in a way that increases the chances of them getting heard.
AI lowers the barrier to entry for expression. And in a world where many voices are still marginalised, that’s something worth valuing.
Democratisation is not without its dangers
Of course, giving more people tools doesn’t automatically lead to better outcomes. And there are risks we need to face up to.
1. A false sense of expertise - plus an opportunity
Just as Plato feared, people can now appear authoritative without understanding the subject. AI can produce content that sounds convincing, but may be incomplete, misleading or simply wrong.
This is particularly risky in areas like health, education or campaigning. In the charity sector, where trust is a hard-won currency, credibility still matters.
It’s the difference between using a large language model (LLM) to help express what you know – and using it to pretend you know more than you do. This also creates an opportunity: genuine experts can demonstrate the depth of their understanding in real, human conversations rather than LinkedIn or Instagram posts – and when they do, it will carry more weight than ever in building trust and reputations.
2. Critical thinking takes a hit - the answer is to design with people
When answers come too easily, we risk skipping the hard work of thinking and listening. AI-powered tools can make it tempting to move straight to outputs instead of taking time to co-design, debate and reflect.
This is where human-centred design becomes even more valuable. When we test ideas with real people and take time to understand their context, we build better, more equitable solutions.
AI can accelerate our process, but people should always shape the purpose.
3. Representation is still shaped by power
LLMs are trained on existing data – much of it from dominant cultural perspectives. Many of the people who work with AI are also not representative of the communities that stand to benefit most from these tools.
Unless we’re intentional about inclusion, these tools can amplify bias and silence marginalised voices. So if we want AI to genuinely broaden participation, we must design our processes to centre underrepresented people – not just include them at the end.
Testing, co-creation and diverse feedback loops will become more important than ever as AI becomes more embedded in our workflows.
4. Quality risks being drowned in quantity
If everyone can create content instantly, the online world risks becoming a flood of repetition and generic output.
For charities, this means being even clearer about their purpose and unique proposition. The most effective content will be created by people who combine empathy, clarity and lived understanding – supported by tools, not replaced by them.
Just because something is easy to share doesn’t mean it’s worth listening to.
The role of the expert is changing – not disappearing
AI tools can be powerful assistants, but they are still a dataset, not a decision-maker. The craft, insight and originality that experts bring remain essential – especially when working with sensitive, strategic or ethical questions.
Think of AI as a helpful junior colleague: it can do the groundwork, but it needs oversight, feedback and accountability.
At William Joseph, that’s how we approach it.
The use of AI at William Joseph
Our approach to AI is guided by the same principles that underpin all our work – inclusion, curiosity, and care for people.
We see AI not as a replacement for expertise, but as a tool to extend our capacity and creativity.
We use it to make time for the deep, human work that only people can do – listening, designing and connecting.
Our principles in practice
1. Human-centred and expert-led
We use AI to support ideas, not to replace them. Every output is reviewed, edited and contextualised by a human expert.
2. Transparency and accountability
We’re open with clients about when and how we use AI, and we take full responsibility for the results.
3. Curiosity and continuous learning
We make space to explore and learn – sharing what works and what doesn’t, so our whole team grows in confidence and capability.
4. Client-focused and responsible
Our use of AI always starts with our clients’ needs. We never recommend a tool or process we don’t understand or trust.
5. Mindful and critical
We question every AI output, expect errors, and verify accuracy. The final decision always sits with people, not machines.
6. Supportive and collaborative
We help each other explore AI safely and confidently – without pressure or judgement.
7. Inclusive by design
We recognise AI’s potential to widen access and creativity, while staying aware of its biases and the exploitation sometimes involved in its creation. We’ll use our influence to push for fairer, more ethical tools.
The opportunity
AI is changing how we work, learn and express ourselves. The question is not whether we use it, but how.
Used well, it can help us share more diverse voices, tell richer stories and make better decisions.
Used carelessly, it risks diluting truth and trust.
At William Joseph, our goal is to stay curious but grounded — to explore new possibilities while holding on to what makes our work human: care, connection and creativity.
Because the more people who feel confident to speak up, the stronger and fairer our world becomes.