The hidden risk in your factory

Thinking through risk

Laura Tatton, Consultant & AI Policy Specialist at ConsuLT, explains why manufacturers can’t afford to ignore AI policy.

In manufacturing, progress has always been driven by optimisation. From the spinning jenny to robotics, every leap forward has been about improving efficiency, safety and precision. But today’s industrial revolution is not mechanical; it’s cognitive.

Artificial intelligence (AI) is no longer a concept for the future; it’s already woven into the systems that power supply chains, production lines and decision-making processes. From predictive maintenance and visual inspection to automated scheduling, AI is reshaping how manufacturers work. Yet, amid this rapid adoption, one critical element is being overlooked: policy.

When the machines start thinking, you need a plan

No factory would allow an operator to handle machinery without proper training, safety checks or written procedures. Yet, many are letting staff use powerful AI tools without any documented guidance. That’s a risk few can afford to take.

AI, particularly generative models like ChatGPT and machine learning algorithms, can be immensely useful. But when left unchecked, these tools can also produce false or biased results, mishandle sensitive data or make opaque decisions that nobody can fully explain. In short, AI can make mistakes at scale. If your people are using AI, even casually, your business already needs an AI policy.

What an AI policy does (and why it matters)

An AI policy is the digital equivalent of your health and safety manual. It doesn’t limit innovation; it gives it structure. It ensures your teams understand how to use AI responsibly, what data can and cannot be shared, and who remains accountable for final decisions.

A strong AI policy should outline:

Purpose & scope – Which AI tools are approved and who can use them

Data governance – How sensitive information is handled, stored and protected

Risk management – How tools are tested for accuracy, bias and reliability

Tool approval process – A consistent way to validate new software before use

Roles & responsibilities – Clear accountability for oversight and review

Training & awareness – Guidance to build confidence and reduce misuse

Review schedule – A commitment to update policies as technologies evolve

It’s not bureaucracy; it’s business resilience.

Why manufacturers are especially exposed

Some leaders still believe, “We’re not a tech company, we don’t need an AI policy.” But manufacturers are already using AI more than they realise.

Examples include:

• Visual inspection systems powered by machine vision

• AI-driven demand forecasting and stock control

• Predictive maintenance platforms reducing unplanned downtime

• Chatbots streamlining customer or supplier communication

• Generative AI creating technical manuals or training materials

Often these tools are introduced informally by departments eager to save time, without oversight from IT, compliance or leadership. That’s where risk creeps in. Without shared rules, employees might upload confidential drawings or order data into a free AI system, unaware that it stores prompts on external servers. Or they could use a model that inadvertently embeds bias into production planning. Each action may seem minor until it causes real-world consequences, such as a data breach, lost contract or reputational harm.

The compliance clock is ticking

Beyond practical risks, the regulatory environment is evolving quickly. The EU AI Act, the first major global framework for AI, introduces strict requirements for managing high-risk systems, record-keeping and transparency.

These obligations overlap with GDPR and other data protection laws already affecting manufacturers. Having a policy in place is not just good governance; it’s proactive compliance. Increasingly, large buyers and public sector frameworks are auditing suppliers for responsible AI use. Those without clear policies may soon find themselves excluded from tenders or framework opportunities.

From chaos to clarity: Real-world scenarios

Consider these common but preventable scenarios:

• A junior analyst uses ChatGPT to summarise production data, unintentionally exposing proprietary information

• A predictive algorithm repeatedly skews demand forecasts because nobody checked the bias in its training data

• A team adds an AI plug-in that introduces security vulnerabilities into your network

Each of these situations is avoidable with a simple, practical AI policy.

Confidence, not complexity

The good news? You don’t need to be a tech firm to get this right. You don’t need an in-house data scientist or a legal department. What you do need is a business-focused AI policy that reflects your processes, people and culture.

That’s where I can help. With over 20 years’ experience advising manufacturing businesses on communication, policy and strategic planning, I now help organisations adopt AI safely and confidently.

My approach is simple:

1. Assess your current AI usage – formal and informal

2. Identify risks and opportunities within your operations

3. Create a clear, plain-English policy tailored to your business

4. Train your teams to use AI responsibly and effectively

5. Future-proof your business for audits, client vetting, and regulation

You wouldn’t install a new piece of machinery without a safety framework, and AI should be no different.

The next step: Start the conversation

Whether your business is already experimenting with AI or just beginning to explore its potential, now is the right time to act. The earlier you define your policy, the easier it is to embed safe and productive AI use across your organisation. If this article has made you pause and think, that’s a positive start. The next step is a conversation.
