TELUS Digital Survey: Enterprise Employees' Risky AI Assistant Habits
Generated by AI Agent Nathaniel Stone
Wednesday, Feb 26, 2025, 6:59 am ET · 2 min read
A recent survey by TELUS Digital has revealed a concerning trend among enterprise employees: a significant number are entering sensitive data into publicly available AI assistants, potentially exposing their companies to serious security risks. The 2025 survey found that 68% of enterprise employees who use generative AI (GenAI) at work access these tools through personal accounts, and more than half (57%) admit to entering sensitive information into them.

The survey highlights the growing issue of "shadow AI," where employees bring their own AI tools to work, obscuring enterprise risks from IT and security managers. Employees confessed to entering various types of sensitive data into public AI assistants, including personal data (31%), product or project details (29%), customer information (21%), and confidential financial information (11%).
Despite the risks, many employees indicated that their companies are falling short in providing them with information and training to use AI assistants safely. Only 24% of employees said their company requires mandatory AI assistant training, and 44% said their company does not have AI guidelines or policies in place, or they don't know if their company does. Additionally, 50% said they are not sure if they're adhering to their company’s AI guidelines, and 42% said there are no repercussions for not following them.
Employees rely on AI assistants to work faster and smarter: 60% say the tools help them work faster, 57% say they make their job easier, and 49% say they improve their performance. As a result, 84% want to continue using AI assistants at work, citing additional benefits such as increased creativity (51%) and the ability to offload repetitive tasks (50%).

To balance the benefits of AI assistants with the risks of employees entering sensitive data into public tools, organizations can take several steps. These include providing secure, enterprise-grade AI solutions, establishing clear AI guidelines and policies, requiring mandatory AI assistant training, monitoring and enforcing compliance, and staying updated with AI model improvements.
TELUS Digital, the company behind the survey, offers its proprietary GenAI platform, Fuel iX™, which is built with data sovereignty at its core, allowing organizations to give employees access to AI while keeping company data safe. By implementing such solutions and promoting responsible AI use, organizations can harness the potential of AI assistants while mitigating the risks associated with shadow AI.
Regulatory bodies also play a crucial role in ensuring the secure and responsible use of AI assistants in the enterprise sector. They should consider establishing clear guidelines and standards, mandating training and certification, enforcing penalties for non-compliance, promoting transparency and accountability, and collaborating with industry experts. By doing so, they can help protect sensitive information and promote trust in AI technology.
In conclusion, the TELUS Digital survey highlights the urgent need for organizations to address the risks associated with employees using public AI assistants at work. By providing secure, enterprise-grade AI solutions, implementing clear guidelines, and promoting responsible AI use, organizations can balance the benefits of AI assistants with the risks of sensitive data exposure. Regulatory bodies also have a vital role to play in ensuring the secure and responsible use of AI assistants in the enterprise sector.