Why we use AI to make decisions and why it is a risk
Using AI tools to bypass indecision can help us move forward, but it also means we miss the vital conversations that build trust and spark new ideas.
The power of a confident answer
A huge amount of what people look for in their daily work is help with making decisions. Whether you are a leader shaping a strategy or a practitioner delivering a project, you often need a push to get to the next step of a journey.
AI tools like Gemini or ChatGPT are incredibly useful because they solve the bit where you do not know the answer.
When you give a person a piece of information that is not conclusive, they often get stuck.
For example, imagine a complex dress code with many rules and points of confusion. You might worry about getting it wrong, or fear being turned away because you did not understand the guidance.
If you put that guidance into an AI and ask if your outfit fits, it will give you a confident answer. This allows you to go about your day without the weight of indecision. The tool hides the uncertainty from you in a format that feels black and white.
Why we delegate decisions to perceived experts
In reality, the AI does not have any more expertise than you do to make that specific call. It is simply more confident in how it delivers its answer.
This is why these tools are so powerful and why people are already using them for everything from career advice to mental health support.
This is also why people use external agencies or look to leaders. You want someone to take the data, weigh the options and tell you the best way forward. Marketing and strategy are often about providing the clarity needed to take action.
As AI becomes more embedded in our lives, more people will use it to skip the “messy” middle part of thinking.
While this feels efficient, it carries a significant risk for mission-led organisations.
The value is in the grey area
So much of the value of a decision is actually found in the conversations that happen around it. When you ask a tool to jump straight to an outcome, you miss the chance to build relationships and deepen your own understanding.
By jumping to a solution, you ignore the “world of grey” where the best ideas usually live.
At William Joseph, we believe that people should always shape the purpose of a project. We use AI as a helpful assistant to do groundwork, but we never let it be the final decision-maker.
True growth comes from the debate and reflection that happen when humans collaborate. If you use AI to bypass that process, you might get an answer faster, but you lose the empathy and shared ownership that make a project succeed.
As the fantastic Pavel Samsonev writes in his article “the people ARE the process”, quoting Cameron Tonkinwise:
“I could have helped you in all the ways the system appears to have, probably more. I would have really enjoyed to do that. I would have learned an incredible amount from such a conversation even if my role had been primarily to be a ‘prompt’ for you. … Why are we interested in displacing those conversations?”
How to use AI responsibly
Use AI to support your ideas rather than replace your thinking
Ensure every AI output is reviewed and contextualised by a human expert
Focus on using tools to make more time for deep work like listening and connecting
Be transparent with your team and partners about when you are using these tools
If you would like to talk about how to use AI to enhance your team’s expertise without losing the human touch, we would love to help.