Researched why employees at a large retailer ($7B+ revenue, 80k+ employees) were secretly using AI tools. The findings guided AI policies centered on fairness, clarity, and employee involvement, enabling safer and more effective adoption.
At a large European retailer with $7B+ revenue, 80k+ employees, and 5k+ stores, staff had started using AI tools in secret. While AI could save time and improve their work, unapproved use created risks around data security, regulatory compliance, and internal policy. The company needed to understand why employees were hiding their AI use and how to encourage safe, open adoption instead.
I carried out research combining academic theory with employee surveys. Using partial least squares structural equation modeling (PLS-SEM), I studied what motivates people to hide or disclose their AI use. I compared external pressures, such as the threat of penalties, with internal factors, such as whether staff believed policies were fair and aligned with their work.
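To make the modeling approach concrete, the sketch below shows how such a survey model can be specified in Python. It is illustrative only: the study used PLS-SEM, which is typically run in dedicated tools (e.g. SmartPLS or R's seminr), whereas this example uses the covariance-based SEM library semopy as a rough stand-in. All construct and item names (Fairness, Sanctions, Openness, fair1–fair3, etc.) are hypothetical and not taken from the actual study.

```python
# Illustrative stand-in: covariance-based SEM via semopy, not the PLS-SEM used in the study.
# Construct and item names are hypothetical survey variables.
import pandas as pd
from semopy import Model

# Hypothetical Likert-scale survey responses, one row per employee.
survey = pd.read_csv("employee_ai_survey.csv")

model_desc = """
# Measurement model: latent constructs measured by survey items
Fairness  =~ fair1 + fair2 + fair3
Sanctions =~ sanc1 + sanc2 + sanc3
Openness  =~ open1 + open2 + open3

# Structural model: compare internal (fairness) vs. external (sanctions) drivers
Openness ~ Fairness + Sanctions
"""

model = Model(model_desc)
model.fit(survey)
print(model.inspect())  # path coefficients, loadings, and p-values
```

In a setup like this, comparing the estimated path coefficients for Fairness and Sanctions indicates which factor better explains employees' willingness to use AI openly.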
The research showed that employees are more likely to use AI openly when they see policies as fair and sensible than when they merely fear punishment. The results suggested that new policies should involve employees in their design, focus on building trust, and encourage AI use in ways that serve both staff and company goals.
Schedule a free consultation today.