AI Policy

Last Updated: 1st May 2025

1. Client Confidentiality and Data Protection

At Metrix, protecting client confidentiality is fundamental to how we work. When using AI tools, including during integration projects, we never input sensitive or identifying client data into public or third-party models without the client's explicit approval. Where required, we use anonymised data sets and follow strict information-handling procedures to safeguard internal business information, customer data, and proprietary systems.

2. AI Integration Services

As part of our core services, we work closely with businesses to integrate AI into their operations. This may involve building custom GPTs, setting up AI-powered automation tools, or embedding AI within existing systems. In these cases, we follow structured discovery, design, and testing phases to ensure all solutions are secure, purpose-driven, and tailored to the client’s workflows.

We only deploy AI integrations that have been approved by the client and that meet our ethical, security, and data protection standards. We also provide full documentation and training to ensure transparency and responsible use beyond project handover.

3. Ethical Use of AI Content

We do not use AI to create content that misleads, deceives, or impersonates others. Metrix strictly avoids deepfakes, disinformation, and any misuse of AI that could harm trust or credibility. All AI-assisted outputs are reviewed by real people to maintain accuracy, fairness, and alignment with brand values.

4. Accuracy and Originality

We carefully vet all outputs from AI systems to ensure they are factually correct, original, and legally compliant. This includes avoiding unintentional plagiarism and copyright infringement. When building AI tools for clients, we ensure the models are trained, deployed, and configured in ways that meet both technical goals and ethical expectations.

5. Transparency with Clients

Clients are always informed when AI is being used, whether in content creation, campaign support, or business system integration. We explain how and where AI is applied, provide control over its use, and offer support to ensure ongoing transparency and accountability.

6. Inclusivity and Audience Awareness

We review AI-generated content and tools through the lens of inclusion, accessibility, and cultural awareness. This is especially important for client-facing outputs such as chatbots, content, or automated communications. Human review is always part of our quality control process.

7. Ongoing Training and Ethical Development

We regularly update our team’s knowledge on emerging AI tools, risks, and ethical considerations. Training includes data privacy, prompt engineering, and risk management related to AI use. This helps us stay ahead of industry standards and provide the safest, smartest solutions to our clients.

8. Feedback and Quality Assurance

All AI-related services, including integrations, are subject to internal review and ongoing feedback from clients. We encourage input at every stage and adapt our processes as needed to ensure they meet expectations and compliance requirements.

9. Environmental Responsibility

We recognise the environmental impact of AI and strive to use energy-efficient platforms, technologies, and workflows wherever possible. Sustainability is factored into our tool selection and internal practices.

10. Education and Awareness

We support our clients and partners in understanding how AI works and how it can benefit their business. Our aim is to empower people with clear, jargon-free information about the capabilities and limits of AI, so they can use it confidently and responsibly.

Contact Us

For questions regarding this AI Policy, please contact:

Metrix

📧 info@metrixmedia.co.uk

🌐 www.metrixmedia.co.uk