AI is transforming the workplace – reshaping roles, optimising workflows, and redefining how decisions are made. But for many workers, this shift feels less like progress and more like a threat.
Despite its promise, AI adoption often stirs fear and resistance. And it's not hard to see why: according to the Adecco Group’s Global Workforce of the Future 2024 research, fewer than half of employees trust that their leaders understand both the risks and rewards of AI.
So, how do we move forward? Not by issuing top-down mandates or ticking boxes in an AI playbook – but by focusing on something deeper: trust.
We explore five trust pillars that can guide AI implementation in a way that supports employees and builds lasting confidence.
Pillar 1: Transparency builds understanding
When employees don’t understand how AI is being used – or worse, feel excluded from the conversation – scepticism grows. The Adecco Group’s research shows that workers are uneasy about AI's impact, particularly in hiring and performance evaluation.
How to act:
- Replace vague announcements with open forums. AI ‘town halls’ can demystify complex systems, address concerns, and share use cases that highlight AI’s supportive – not threatening – role.
- Share real stories: instead of saying ‘AI will improve efficiency’, give real-life examples of how it's helping teams in your organisation reduce repetitive work or enhance decision-making.
Pillar 2: Education as empowerment
Only 34% of workers have received training in ethical AI use – yet nearly 80% of those who have apply it regularly. This isn’t just a training gap; it’s a trust opportunity.
How to act:
- Develop accessible training for all levels, including frontline employees – not just leadership or tech teams.
- Expand training to include ethical frameworks, risk awareness, and human-AI collaboration techniques.
Pillar 3: Preserve the human element
In recruitment especially, trust in AI is eroding. The report reveals that 76% of workers now prefer human recruiters over automated systems – up significantly from last year.
How to act:
- Position AI as a supporting tool, not a decision-maker. Use it to streamline tasks (e.g. sorting CVs), but leave final calls to human judgement.
- Maintain a ‘human-in-the-loop’ model, especially for processes involving bias risk or ethical complexity.
Pillar 4: Lead with ethics, not just efficiency
Many companies adopt AI with speed in mind – but without ethical guardrails, that speed can backfire. Workers fear not just job loss, but unfair treatment and privacy violations.
How to act:
- Create and publish clear AI ethics policies. These should outline bias mitigation, data privacy, and accountability structures.
- Regularly audit AI tools to ensure fair outcomes – and involve employee feedback in the process.
Pillar 5: Demonstrate employee-centric benefits
AI will never gain traction if workers only see it benefiting leadership. The trust gap widens when automation feels like something being done to employees, not for them.
How to act:
- Showcase real examples of AI improving employee experience – reducing workload, enabling career shifts, or sparking innovation.
- Highlight success stories of individuals who’ve used AI to upskill or move into new, strategic roles.
Trust is a culture, not a checklist
There’s no single roadmap to AI trust. But by anchoring AI adoption in transparency, education, ethical leadership, human-centred design, and mutual benefit, employers can close the trust gap – and unlock AI’s full potential.
AI is reshaping how we work – but how we feel about that future will determine whether it succeeds.
Want to know how workforce sentiment is shifting?
Sign up for our Global Workforce of the Future 2025 research to explore what’s changed in the past year, whether trust is improving among workers, and how you can lead with confidence in the age of AI.