Summary
- Profile Type
- Business Offer
- POD Reference
- BOUA20251105004
- Term of Validity
- 5 November 2025 - 5 November 2026
- Company's Country
- Ukraine
- Type of partnership
- Investment agreement
- Supplier agreement
- Commercial agreement
- Outsourcing agreement
- Targeted Countries
- All countries
General information
- Short Summary
- eyreACT is an automation platform designed to support organisations subject to the European Union Artificial Intelligence Act (AI Act). This compliance automation platform helps AI system providers, developers, and system integrators document, monitor, and demonstrate compliance with the AI Act and related national regulations. The eyreACT platform simplifies the operational aspects of compliance by automatically collecting evidence, mapping AI-related risks, and generating structured documentation.
- Full Description
-
The platform is designed to support compliance officers, data scientists, and legal teams in maintaining traceable records throughout the lifecycle of AI system development, deployment, and monitoring. It enables structured data collection from software development, machine learning, and security environments to ensure that the technical and procedural evidence required by the EU AI Act is consistently recorded and auditable.
The technology consists of several key modules:
1. System Overview and Model Documentation Module captures and organises general descriptions of AI systems, including purpose, intended use, limitations, and architecture, in line with Annex IV of the EU AI Act.
2. Data and Dataset Management Module gathers information related to data collection, annotation, preprocessing, and quality assurance. It includes data provenance tracking, dataset integrity verification, and statistical bias or drift analysis.
3. Risk Management and Human Oversight Module stores risk assessments, mitigation actions, oversight procedures, and incident logs to demonstrate the fulfilment of Articles 9–15.
4. Model Performance and Monitoring Module records validation metrics, performance benchmarks, and post-deployment monitoring results to ensure continuous system reliability and compliance with robustness and accuracy requirements.
5. Security and Access Control Module compiles evidence of cybersecurity controls, including access logs, vulnerability scans, and encryption standards in accordance with Article 15.
6. Deployment and Change Management Module documents deployment history, version control, and rollback procedures for traceability and audit purposes.
7. Compliance and Audit Reports Module consolidates all evidence into machine-readable summaries suitable for internal audits or conformity assessments.
The system operates through integrations with commonly used tools in machine learning operations (e.g. model registries, version-control systems, continuous integration and deployment pipelines, and cloud monitoring services). Evidence generated by these integrations is automatically classified and stored under relevant documentation categories that correspond to EU AI Act obligations.
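To make the classification step concrete, the following minimal Python sketch shows how evidence arriving from such an integration could be routed to documentation categories corresponding to the modules above. All names, categories, and the routing table are illustrative assumptions, not the platform's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class DocCategory(Enum):
    """Documentation categories mirroring the modules described above."""
    SYSTEM_OVERVIEW = "system_overview"      # Module 1 (Annex IV)
    DATA_MANAGEMENT = "data_management"      # Module 2
    RISK_MANAGEMENT = "risk_management"      # Module 3 (Articles 9-15)
    MODEL_MONITORING = "model_monitoring"    # Module 4
    SECURITY = "security"                    # Module 5 (Article 15)
    CHANGE_MANAGEMENT = "change_management"  # Module 6
    AUDIT_REPORTING = "audit_reporting"      # Module 7


@dataclass
class Evidence:
    source: str    # e.g. "ci_pipeline", "model_registry"
    kind: str      # e.g. "training_report", "access_log"
    payload: dict
    collected_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


# Assumed routing table: which evidence kinds feed which categories.
ROUTING = {
    "training_report":   DocCategory.MODEL_MONITORING,
    "bias_scan":         DocCategory.DATA_MANAGEMENT,
    "access_log":        DocCategory.SECURITY,
    "deployment_record": DocCategory.CHANGE_MANAGEMENT,
}


def classify(evidence: Evidence) -> DocCategory:
    """Route incoming evidence to its documentation category."""
    return ROUTING.get(evidence.kind, DocCategory.AUDIT_REPORTING)


ev = Evidence(source="ci_pipeline", kind="training_report",
              payload={"accuracy": 0.94})
print(classify(ev).value)  # "model_monitoring"
```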
The innovation lies in the many-to-many evidence mapping framework. A single piece of technical evidence (for example, a model-training report or bias-detection result) can be used to satisfy multiple regulatory articles. This approach reduces duplication of documentation and increases traceability between compliance requirements and operational data.
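A minimal sketch of the many-to-many idea, assuming a simple dictionary-based store (the evidence names and article assignments are illustrative only, not legal guidance):

```python
from collections import defaultdict

# Evidence id -> articles it supports (illustrative assignments).
evidence_to_articles = {
    "training_report_v3":  ["Art. 10", "Art. 15"],
    "bias_scan_2025_10":   ["Art. 9", "Art. 10"],
    "oversight_procedure": ["Art. 14"],
}

# Invert to the article -> evidence view used for traceability queries.
article_to_evidence = defaultdict(list)
for ev_id, articles in evidence_to_articles.items():
    for art in articles:
        article_to_evidence[art].append(ev_id)

print(article_to_evidence["Art. 10"])
# ['training_report_v3', 'bias_scan_2025_10']: two items cover Art. 10,
# and training_report_v3 also counts towards Art. 15 without duplication.
```

The inverted index gives the traceability view: for any article, an auditor sees every evidence item supporting it, while a reused item appears under each article it satisfies without being stored twice.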
The technology has been developed by a European team with experience in artificial intelligence governance, cybersecurity, and regulatory compliance. Its architecture was informed by existing standards such as ISO/IEC 23894 (AI risk management), ISO/IEC 42001 (AI management systems), and GDPR data governance principles.
The current implementation supports cloud-based and on-premise deployment and is suitable for both AI system providers and deployers. It is being evaluated in collaboration with industry experts and early-stage users to ensure interoperability with existing compliance workflows and readiness for conformity assessment once the EU AI Act enters full application.
The system represents a step towards operationalising legal and ethical AI governance through automation, transparency, and structured documentation aligned with the upcoming European regulatory framework.
The company is seeking collaboration and pilot projects with technology providers, consultancies, or government agencies that wish to test and implement compliance automation ahead of the EU Artificial Intelligence Act.
- Advantages and Innovations
-
The technology introduces a structured, automation-based approach to AI regulatory compliance that replaces fragmented manual documentation with continuous, data-driven evidence collection. Unlike traditional compliance reporting tools or static templates, it integrates directly with development and deployment environments, ensuring that technical records and risk documentation are automatically linked to specific regulatory obligations.
Key innovations include:
• Evidence reuse and mapping: Each item of technical evidence (e.g. model-training report, data validation record) is automatically connected to multiple relevant EU AI Act articles, reducing documentation effort by up to 60%.
• Continuous compliance monitoring: The system detects missing or outdated evidence and notifies teams before audits or internal reviews, minimising non-conformity risk (a minimal sketch of such a check follows this list).
• Cross-functional collaboration: Legal, data science, and engineering teams work within one structured workspace, reducing interpretation gaps between technical and legal compliance.
• Interoperability with existing tools: Integrations with standard DevOps, MLOps, and security systems allow adoption without additional infrastructure or retraining.
• Audit-ready outputs: All evidence can be exported in standardised “compliance binder” format compatible with conformity assessments and external audits.
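The continuous-monitoring bullet above can be illustrated with a short staleness check: flag obligations whose newest supporting evidence is missing or older than a review window. The 90-day window, dates, and names below are assumptions for illustration only.

```python
from datetime import datetime, timedelta, timezone

REVIEW_WINDOW = timedelta(days=90)  # assumed evidence-refresh policy

# Obligation -> timestamps of its supporting evidence (illustrative data).
evidence_dates = {
    "Art. 9":  [datetime(2025, 9, 1, tzinfo=timezone.utc)],
    "Art. 10": [datetime(2025, 3, 2, tzinfo=timezone.utc)],
    "Art. 14": [],  # no evidence collected yet
}

now = datetime.now(timezone.utc)
for article, dates in evidence_dates.items():
    if not dates:
        print(f"{article}: MISSING - no supporting evidence on file")
    elif now - max(dates) > REVIEW_WINDOW:
        print(f"{article}: STALE - last updated {max(dates):%Y-%m-%d}")
    else:
        print(f"{article}: OK")
```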
Economic and operational benefits:
• Reduction of manual compliance workload and associated costs.
• Improved readiness for third-party audits and certification procedures.
• Faster internal documentation cycles and easier evidence validation for AI models.
• Lower organisational risk through early detection of governance and security gaps.
By aligning technical evidence generation with legal documentation requirements, the platform transforms AI compliance from a reactive process into an operational, automated workflow suitable for both startups and large enterprises preparing for EU AI Act enforcement.
- Stage of Development
- Lab tested
- Sustainable Development Goals
- Goal 9: Industry, Innovation and Infrastructure
- Goal 16: Peace, Justice and Strong Institutions
- Goal 8: Decent Work and Economic Growth
- Goal 17: Partnerships for the Goals
- IPR status
- IPR applied but not yet granted
Partner Sought
- Expected Role of a Partner
-
eyreACT seeks collaboration with industry partners, research organisations, public-sector institutions, and professional associations involved in artificial intelligence development, deployment, or regulation.
1. Industrial and technological partners
• AI developers, system integrators, and software providers who wish to pilot or integrate compliance automation within their development and deployment workflows.
• Expected contribution: provide access to non-confidential process data or compliance use cases to test the automation of evidence collection and documentation mapping.
• Benefit: early operational readiness for conformity assessment under the EU AI Act and improved efficiency in governance processes. Other benefits include a 70% discount on the yearly plan and free access to a 36-hour EU AI Act Compliance Mastery course.
2. Research and academic partners
• Universities and applied research centres with expertise in AI ethics, regulatory technology, or trustworthy AI frameworks.
• Expected contribution: co-develop methodologies for AI risk evaluation, model transparency, and evidence standardisation aligned with EU requirements.
• Collaboration format: joint research or validation projects, participation in Horizon Europe or national innovation programmes.
3. Legal, compliance, and advisory partners
• Consultancies and professional service firms specialising in regulatory compliance, audit, cybersecurity, or risk management.
• Expected contribution: validate legal interpretation, participate in pilot assessments, and help define practical compliance workflows for clients.
• Collaboration format: advisory partnerships, distribution agreements, or white-label integration within existing compliance services.
4. Public-sector and policy partners
• Government bodies, standards organisations, and digital innovation hubs seeking to develop tools and best practices for AI governance.
• Expected contribution: participate in pilot adoption, feedback on interoperability with national and EU digital compliance frameworks, and contribution to knowledge exchange.
The proposer is open to pilot cooperation, joint research activities, integration partnerships, or strategic alliances that advance practical adoption of AI compliance automation in Europe.
Partners should have an interest in operationalising the EU AI Act, improving trust and transparency in AI systems, or providing compliance solutions to enterprises working in regulated sectors such as healthcare, finance, transport, or public administration.
- Type and Size of Partner
- SME 11-49
- Big company
- SME 50-249
- University
- R&D Institution
- Type of partnership
- Investment agreement
- Supplier agreement
- Commercial agreement
- Outsourcing agreement
Dissemination
- Technology keywords
- 01004016 - Analysis Risk Management
- 01004005 - e-Government
- 11008 - Creative services
- 01004004 - ASP Application Service Providing
- Market keywords
- 02006007 - Databases and on-line information services
- 02006005 - Big data management
- 02006004 - Data processing, analysis and input services
- Sector Groups Involved
- Digital
- Creative Industries
- Targeted countries
- All countries