
Generative AI is already transforming how organisations work, with capabilities stretching far beyond simple automation. From producing sophisticated content and analysing complex data to supporting strategic decisions, these tools are changing the workplace landscape. Yet, at its most basic, Gen AI is often encountered through the simple act of writing prompts: drafting emails, summarising documents, planning presentations or brainstorming ideas. For many employees, prompts feel like a private workspace, somewhere to think out loud, test ideas and explore options quickly.
That sense of informality can be misleading for organisations. While prompts may feel conversational and low-risk to the user, they are not private in the way many assume. From an investigative and forensic perspective, AI prompts are increasingly treated as records of human intent and, in some cases, as evidence.
This creates a new kind of exposure. Employees may casually test early ideas, sensitive or confidential information, poorly phrased questions or lines of thinking they later abandon in a prompt, and all of it can be retained, reviewed and relied upon if an issue is investigated.
As Privacy Awareness Week highlights the need to make smarter choices in an age of increasingly intelligent technology, this is a timely moment for organisations to consider how those interactions are understood, communicated and governed in the workplace.
One of the most consistent themes emerging in investigations involving AI use is surprise. Employees are often taken aback to learn that their interactions with generative AI tools are not transient. In many enterprise environments, prompts are logged, retained and managed through organisational data security controls. In some cases, these records may also need to be collected, preserved and reviewed as part of investigations or litigation processes.
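To make that concrete, the sketch below shows the kind of prompt audit record an enterprise AI platform might retain. It is a minimal illustration only; the PromptRecord fields, the prompt_audit.jsonl file and the retention figure are assumptions for this example, not any specific vendor's schema.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class PromptRecord:
    """One retained prompt log entry (field names are illustrative assumptions)."""
    user_id: str         # the employee who submitted the prompt
    timestamp: str       # submission time, UTC, ISO 8601
    application: str     # which enterprise AI tool received the prompt
    prompt_text: str     # the full text the user typed
    retention_days: int  # how long policy keeps the record

def log_prompt(record: PromptRecord, path: str = "prompt_audit.jsonl") -> None:
    """Append the record to an audit log, one JSON object per line."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_prompt(PromptRecord(
    user_id="jsmith",
    timestamp=datetime.now(timezone.utc).isoformat(),
    application="enterprise-copilot",
    prompt_text="Summarise the attached customer contract",
    retention_days=2555,  # e.g. a seven-year retention policy
))
```

Even a record this simple ties a named user, a timestamp and the full prompt text together for the life of the retention period.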
This gap in understanding exists because AI use feels fundamentally different to traditional workplace activity. Typing into a prompt box can feel closer to thinking out loud than performing a formal act. From an evidentiary standpoint, however, AI use is not passive.
The human user remains central to the interaction. They decide what to ask, select the data to include, initiate the action and choose how, or whether, to use the output. Legally and forensically, the prompt itself is increasingly being treated as the act. Courts and investigators are beginning to treat prompts the same way as drafts, notes, messages or search queries entered into other corporate systems.
As a result, prompt history is becoming investigative material. In recent matters, investigators have reviewed AI prompt histories across a wide spectrum of issues, from benign use, such as drafting blogs, reports or internal documents, through to more contentious matters.
These patterns align with broader legal commentary that AI-generated inputs and outputs now form part of the data universe considered in investigations and litigation, particularly where intent, preparation or knowledge is relevant.
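As a rough sketch of how those retained records become reviewable, the example below filters a prompt log (in the prompt_audit.jsonl format assumed above) for one custodian's prompts that mention given search terms within a review window. The file name, field names, custodian and terms are all hypothetical.

```python
import json
from datetime import datetime, timezone

def review_prompts(path: str, custodian: str, keywords: list[str],
                   start: datetime, end: datetime) -> list[dict]:
    """Return retained prompt records for one custodian that fall inside
    the review window and mention any of the search terms."""
    hits = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            when = datetime.fromisoformat(record["timestamp"])
            if record["user_id"] != custodian or not (start <= when <= end):
                continue
            text = record["prompt_text"].lower()
            if any(kw.lower() in text for kw in keywords):
                hits.append(record)
    return hits

# Hypothetical review: one employee's prompts mentioning sensitive terms.
matches = review_prompts(
    "prompt_audit.jsonl",
    custodian="jsmith",
    keywords=["confidential", "client contract"],
    start=datetime(2024, 1, 1, tzinfo=timezone.utc),
    end=datetime(2024, 12, 31, tzinfo=timezone.utc),
)
for m in matches:
    print(m["timestamp"], m["prompt_text"][:80])
```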
Importantly, investigators are examining records deliberately created by users within systems owned and administered by the employer. The interaction may feel informal, but it is still part of the organisation’s data environment and subject to the same expectations of governance and oversight.
A common misconception is that AI somehow sits between the user and responsibility. In practice, that framing does not hold. AI does not decide what problem to explore, choose which information to include, or exercise judgment about what is appropriate or risky. It simply executes the task defined by the prompt.
From a digital forensic perspective, this means accountability remains with the individual. The prompt reflects a series of human decisions: what to ask, what data to include and how the output is intended to be used. In that sense, prompt history can provide a clear record of decision-making, rather than obscuring it.
This aligns with how regulators and courts are increasingly approaching AI use more broadly. The presence of a tool does not displace responsibility. Instead, the interaction with that tool can help evidence intent, preparation and knowledge.
This is where the issue becomes one of privacy, not just misconduct. Risks arise when employees assume prompts are transient, treat them as personal rather than organisational data, or misunderstand how long they are retained and who can access them. There are also increasing concerns about sensitive, confidential or legally privileged information being entered into AI tools without a clear understanding of how it may be stored or used. In those cases, the gap is not technical; it’s a gap in awareness, one that often needs to be addressed through clear communication and training.
Many organisations already have acceptable-use and privacy policies in place, but in practice, these often lag behind how AI tools are being used. While email, internet use and file storage are typically well covered, AI prompts are rarely addressed with the same level of clarity. As technology evolves, this is an area that increasingly requires regular review.
This creates a gap between how employees are using these tools and how organisations understand and govern that use. As AI becomes more embedded in day-to-day work, organisations should be asking where prompt data is stored, how long it is retained, who can access it and whether employees understand that their prompts form part of the organisation’s records.
Regulators and advisers have consistently emphasised that organisations can’t turn a blind eye to employee use of AI platforms, including bring-your-own or shadow AI tools, particularly where personal or confidential information is involved or where tools fall outside corporate control.
Privacy awareness, in this context, is closely linked to agency awareness. It requires clear policies, upfront communication and realistic training, so that AI remains a useful productivity tool, but one that is used with a clear understanding of responsibility.
If you’re looking to strengthen your approach to managing AI prompts in the workplace, reach out to the team at Infotrust to book a consultation and review your policies, communication and investigation readiness.