An employee sits at their desk, rushing to finish a proposal. Instead of drafting from scratch, they paste sections of a contract with client names into ChatGPT. Another worker, struggling with a login issue, types their company credentials into Gemini to “see what happens.” In both cases, sensitive information has just been handed to a third-party AI system.

Unfortunately, this kind of leak is increasingly common. A new survey from Smallpdf of 1,000 U.S. professionals[1] reveals how often employees are funneling confidential data into generative AI tools. For many organizations, it’s a threat growing rapidly inside everyday workflows.

The report highlights critical blind spots. For example, over one in four professionals admit to entering sensitive company information into AI, and nearly one in five confess to submitting actual login credentials. As businesses rush to embrace generative AI, these findings show that security, training, and policy are lagging behind adoption.

The Hidden Risks of Everyday AI Use

The past two years have seen generative AI tools like ChatGPT, Gemini, and Claude move from experimental curiosities to daily staples in the workplace. They’re used to draft emails, summarize meetings, and brainstorm strategy documents. But alongside convenience comes exposure. Professionals are pasting sensitive contracts, client details, and even login credentials into systems they don’t fully understand and that aren’t entirely secure. Many professionals assume their prompts are private; in reality, every entry can be stored, analyzed, or surfaced in ways beyond their control.

According to the research:

  • 26% of professionals have entered sensitive company information into a generative AI tool.
  • 19% have entered actual login credentials, from email accounts to cloud storage and financial systems.
  • 38% of AI users admit to sharing proprietary product details or internal company financials.
  • 17% say they don’t remove or anonymize sensitive details before entering prompts.
  • Nearly 1 in 10 confess to lying to their employer about how they use AI at work.

Leaking sensitive information to AI is a widespread and growing concern. With over three-quarters of U.S. professionals using AI tools at least weekly, the line between efficiency and exposure has blurred. As adoption accelerates, organizations are learning that the true risks are unfolding inside everyday prompts.

When Your Prompts Become the Leak Surface

One of the most alarming aspects of this trend is that everyday employees are pasting sensitive material into AI chats. Contracts with real client names, internal financials, and passwords are routinely dropped into tools that may feel private but aren’t.

What looks like harmless productivity can turn into data exposure at scale. The survey underscores the pattern: 26% of professionals admit to entering sensitive company information into AI tools, 19% have entered actual login credentials, and 17% don’t bother to anonymize details before they prompt. Many also misunderstand how these systems work, as 24% believe prompts remain private, and 75% say they’d still use AI even if every prompt were permanently stored.

The trust employees place in familiar interfaces like chat boxes, browser extensions, and built-in copilots has become a new attack surface. Without clear policies and training, convenience turns into the attack vector and routine prompts become the breach.

Prompt Hygiene: The Achilles’ Heel

Most workplaces embraced generative AI before they built guardrails for it. That gap is where sensitive data slips out.

The survey reveals:

  • 19% of professionals have entered actual login credentials into a generative AI tool.
  • Of those, 47% entered a personal email, 43% a work email, 25% a cloud-storage login, and 18% a bank or financial account.
  • 17% don’t remove or anonymize sensitive details before prompting.
  • 24% believe their AI prompts are private, and 75% say they’d still use AI even if every prompt were permanently stored.
  • 70% report no formal training on safe AI use, and 44% say their employer has no AI policy.

Traditional data-loss defenses weren’t built to monitor chat prompts in real time. Yet many organizations remain stuck, held back by policy gaps, training deficits, and trust in tools that feel safe but aren’t.

The Readiness Gap

Awareness is rising. Preparation isn’t. That’s the most troubling theme in the findings.

Just as AI use becomes routine, many basics are missing:

  • 70% of workers report no formal training on safe AI use.
  • 44% say their employer has no official AI policy; 12% aren’t sure, and 7% haven’t read the policy they do have.
  • About 1 in 10 professionals have little to no confidence they can use AI without breaking rules or risking data.
  • 5% have already faced a warning or disciplinary action for workplace AI use.
  • 8% admit to lying about their AI use, and 7% used ChatGPT after being told not to.

This readiness gap is procedural and cultural. Policies lag behind practice, training lags behind demand, and trust in “helpful” tools is outpacing understanding of their risks. This is leaving employees anxious, inconsistent, and exposed just as AI becomes embedded in everyday work.

A Better Path Forward: From Ad-Hoc to Accountable

What does adapting to the prompt-leak problem look like? It starts with reframing AI use as a governed, privacy-first workflow. Treat every prompt like data in motion and design controls around it.

That could include:

  • Prompt-level guardrails that block credentials, client names, and financials by default.
  • Auto-redaction or anonymization before text ever reaches an external model (see the sketch after this list).
  • Enterprise controls prioritized over consumer chat apps: SSO, tenant isolation, retention switched off by default, and DLP that scans prompts for PII and IP in real time.
  • Context-aware approvals, so that sensitive actions (e.g., summarizing contracts or uploading internal financials) require additional validation or manager sign-off.
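As a concrete illustration of the auto-redaction step, the sketch below shows what a minimal pre-prompt filter could look like in Python. The regex patterns, the redact_prompt and guarded_send helpers, and the send_fn callback are all hypothetical; a production deployment would rely on an organization’s own DLP detectors and secret scanners rather than a handful of regular expressions.

    import re

    # Illustrative patterns only; a real DLP filter would use vetted
    # detectors for PII, secrets, and client identifiers.
    PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "password": re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*\S+"),
        "api_key": re.compile(r"\b(sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
        "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact_prompt(text: str) -> tuple[str, list[str]]:
        """Redact known-sensitive spans and report what was found."""
        findings = []
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                findings.append(label)
                text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
        return text, findings

    def guarded_send(text: str, send_fn):
        """Forward a prompt to the external model only after redaction.

        send_fn stands in for whatever client call actually reaches the
        model; credential-like matches are blocked outright rather than
        merely redacted.
        """
        clean, findings = redact_prompt(text)
        if "password" in findings or "api_key" in findings:
            raise ValueError("Blocked: prompt appears to contain credentials.")
        return send_fn(clean)

Blocking credential-like matches outright while redacting other hits roughly mirrors the distinction the survey draws between login credentials and other sensitive details such as client names or financial figures.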

Altogether, these controls point to a larger imperative: restructuring ownership so AI risk isn’t siloed. A cross-functional “AI governance guild” (e.g., security, legal, IT, and business leads) should co-own policies, training, and exception handling. Meanwhile, teams can pair AI with secure document workflows (redaction, watermarking, access controls). Distributing responsibility is becoming essential for tools that evolve too quickly for linear, after-the-fact reviews.

A Problem of Technology and Trust

The damage isn’t limited to leaks or fines. It reaches into client confidence, data integrity, and long-term brand equity. The findings point to a different kind of churn: workers who assume prompts are private, leaders who haven’t set boundaries, and customers who recoil when their details show up in the wrong place. Routine AI use can feel like a privacy violation in slow motion when policies lag behind practice.

AI risk exploits more than software; it exploits certainty. People stop trusting systems, and the companies behind them, when a friendly chat box stores contract clauses or a “helpful” assistant accepts passwords without warning. That trust is far harder to rebuild than any stack you can refactor. Once it’s gone, every login, form, and document share starts from a deficit.

Why Most Organizations Will Stay Exposed

If the dangers are so obvious, why do so many teams remain unprepared?

The data points to three overlapping blockers:

  • Policy vacuum and training deficit. With 44% reporting no official AI policy and 70% receiving no formal training, employees default to improvisation in tools that feel safe but aren’t.
  • Misplaced trust and poor prompt hygiene. Beliefs that prompts are private (24%), combined with weak redaction habits (17% don’t anonymize) and stubborn convenience (75% would use AI even if prompts were permanently stored), keep risky behaviors entrenched.
  • Fragmented ownership and legacy workflows. AI use spreads across teams without clear governance, while document practices (contracts, financials, credentials) remain outside DLP and access controls, making copy-paste the path of least resistance.

These aren’t trivial obstacles, but they are solvable. As the costs of ungoverned AI mount, the price of inaction is climbing faster than most leaders expect.

Looking Ahead

The future of workplace AI will be defined by how quickly organizations shift from casual prompting to governed, privacy-first workflows. Leaders must move beyond ad-hoc guardrails and redesign how sensitive information is handled at the moment of prompting, treating every entry as data in motion, subject to redaction, routing, and audit.
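For readers who want a concrete picture of “redaction, routing, and audit” as a single control flow, here is one possible shape, sketched in Python. The classify_fn and redact_fn callbacks, the sensitivity labels, and the file-based audit log are illustrative placeholders, not a prescription.

    import hashlib
    import json
    import time

    def route_and_audit(prompt: str, user: str, classify_fn, redact_fn,
                        audit_path: str = "prompt_audit.jsonl"):
        """Treat a prompt as data in motion: classify, redact, route, audit.

        classify_fn and redact_fn are stand-ins for whatever classifier
        and redactor an organization already runs; only the control flow
        here reflects the redaction/routing/audit idea.
        """
        sensitivity = classify_fn(prompt)   # e.g., "public", "internal", "restricted"
        clean = redact_fn(prompt)
        destination = "external-model" if sensitivity == "public" else "internal-model"

        record = {
            "ts": time.time(),
            "user": user,
            "sensitivity": sensitivity,
            "destination": destination,
            # Log a hash rather than the prompt itself, so the audit trail
            # does not become a second copy of the sensitive data.
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        }
        with open(audit_path, "a") as f:
            f.write(json.dumps(record) + "\n")

        return destination, clean

The point is not the specific code but the ordering: classification and redaction happen before any routing decision, and the audit record is written no matter where the prompt ends up.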

At the root, leaders will need to rethink “productivity” in a world where contract snippets, client names, and credentials can be pasted into systems that store everything by default.

This also means resourcing the change. Give security, legal, and IT the mandate and budget to implement enterprise controls over consumer chat apps, deploy DLP that scans prompts, and roll out training that raises baseline literacy for every role. Asking teams to be safer with the same tools and no policy is how leaks become norms.

The story Smallpdf’s data tells is urgent: AI is already embedded in daily work, but the safeguards are not. The question now is whether organizations will modernize governance and prompt hygiene, or keep playing by pre-AI rules while sensitive details keep slipping through the chat box.

Methodology: This analysis draws on a September 2025 survey commissioned by Smallpdf of 1,000 full-time U.S. professionals across industries, job levels, and demographics, designed to understand how workers use generative AI and where sensitive information may be exposed in prompts and document workflows. Responses covered behaviors (e.g., anonymization habits, credential sharing), policy awareness, training, and tool usage frequency to illuminate risk patterns in everyday AI-assisted tasks. 

Read next:

• New Research Warns Multitasking Leaves Employees Exposed to Phishing[2]

• People More Willing to Cheat When AI Handles Their Tasks[3]
