EU AI Act Compliance Engine

Your Code.
Legally Compliant.

Connect your GitHub repository. We scan your dependencies, classify your AI system under the EU AI Act, and generate a signed compliance report. In minutes, not months.

Enterprise-grade compliance · View_Plans →
Private_&_Public_Repos · SHA-256_Signed · Annex_III_Mapped
LexOculus // Risk_Assessment [LIVE]
Repository
acme-corp/recommendation-engine
Risk_Classification
HIGH_RISK · Risk_Score 78/100
Matched_Annex_III_Articles
ART_6 Biometrics · ART_22 Essential Services · ART_40 Recommender
GPAI_Provider_Detected
OpenAI · Role: DEPLOYER
SYSTEMIC_RISK: NO
200+ AI Libraries
14 Annex III Articles
10 GPAI Providers
4 Risk Levels
System_Architecture

Five stages. Zero ambiguity.

From repository scan to signed compliance report. Each stage is deterministic, auditable, and documented.

Phase_01 // INSPECT

Deep Dependency Scan

Connect your GitHub repo. We parse every dependency file — package.json, requirements.txt, pyproject.toml — and map them against our database of 200+ AI libraries.

Python, JS, Go, Rust · File tree analysis · No code leaves your environment
// INSPECT_OUTPUT · STEP_01
requirements.txt [PARSED]
src/model/transformer.py [DETECTED]
config/hyperparams.yaml [READ]
data/processors/pii_scrub.ts [FLAGGED]
... 842 files scanned [COMPLETE]
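The dependency-matching step above can be sketched roughly as follows. This is a minimal illustration, not the production scanner: the library set, function names, and version-specifier regex are assumptions, and the real database covers 200+ libraries.

```python
import json
import re

# Hypothetical excerpt of the AI-library database (the real one has 200+ entries).
AI_LIBRARIES = {"torch", "transformers", "openai", "tensorflow", "@tensorflow/tfjs"}

def scan_requirements(text: str) -> set[str]:
    """Parse a requirements.txt and return the AI libraries it pins."""
    found = set()
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Take the package name before any version specifier or extras marker.
        name = re.split(r"[=<>!~\[;]", line, maxsplit=1)[0].strip().lower()
        if name in AI_LIBRARIES:
            found.add(name)
    return found

def scan_package_json(text: str) -> set[str]:
    """Parse a package.json and return the AI libraries it depends on."""
    pkg = json.loads(text)
    deps = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
    return {name for name in deps if name.lower() in AI_LIBRARIES}

reqs = "torch==2.3.0\nnumpy>=1.26\ntransformers[torch]==4.41.0\n"
print(sorted(scan_requirements(reqs)))  # ['torch', 'transformers']
```

The same name-matching idea extends to pyproject.toml, go.mod, and Cargo.toml; only the parser in front changes.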
Phase_02 // ANALYZE

AI Capability Detection

LLM-powered analysis identifies what your system does — computer vision, NLP, biometric processing, emotion recognition — and which AI frameworks are in use.

Groq LLM analysis · Capability mapping · Confidence scoring
// ANALYZE_OUTPUT · STEP_02
Computer Vision [DETECTED]
NLP / Text Processing [DETECTED]
Biometric Processing [FLAGGED]
Generative AI [DETECTED]
Confidence: 94.2% [HIGH]
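Whatever model produces the verdict, its answer has to be validated before it drives a legal classification. A sketch of that guard step, assuming the LLM returns a JSON object with `capabilities` and `confidence` fields (both field names are illustrative):

```python
import json

# The closed set of capability labels the classifier is allowed to emit.
CAPABILITIES = {"computer_vision", "nlp", "biometric_processing",
                "emotion_recognition", "generative_ai"}

def parse_capability_answer(raw: str) -> dict:
    """Validate the LLM's JSON verdict: drop unknown capability labels
    and clamp the confidence score into [0, 1]."""
    data = json.loads(raw)
    detected = [c for c in data.get("capabilities", []) if c in CAPABILITIES]
    confidence = min(max(float(data.get("confidence", 0.0)), 0.0), 1.0)
    return {"capabilities": detected, "confidence": confidence}

raw = '{"capabilities": ["nlp", "generative_ai", "quantum"], "confidence": 0.942}'
print(parse_capability_answer(raw))
# {'capabilities': ['nlp', 'generative_ai'], 'confidence': 0.942}
```

Constraining the output to a closed label set is what keeps the downstream Annex III mapping deterministic even though the detector itself is an LLM.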
Phase_03 // CLASSIFY

Risk Classification

Capabilities are mapped to EU AI Act Annex III articles. The system determines if your AI is Unacceptable, High-Risk, Limited-Risk, or Minimal-Risk.

Annex III article matching · GPAI provider detection · Constraint validation
// CLASSIFY_OUTPUT · STEP_03
ART_6 Biometrics [MATCH]
ART_22 Essential Services [MATCH]
ART_39 Generative AI [MATCH]
Classification [HIGH_RISK]
Risk Score [78/100]
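The mapping itself is a lookup plus a "highest tier wins" rule. A toy sketch, where the capability-to-article table mirrors the demo labels above rather than the Act's full text:

```python
# Illustrative capability-to-article map; labels follow the demo output above.
ANNEX_III = {
    "biometric_processing": ("ART_6 Biometrics", "HIGH_RISK"),
    "essential_services": ("ART_22 Essential Services", "HIGH_RISK"),
    "generative_ai": ("ART_39 Generative AI", "LIMITED_RISK"),
}
# Tiers in ascending order of severity.
TIERS = ["MINIMAL_RISK", "LIMITED_RISK", "HIGH_RISK", "UNACCEPTABLE"]

def classify(capabilities: list[str]) -> tuple[list[str], str]:
    """Return the matched articles and the highest risk tier triggered."""
    matches, tier = [], "MINIMAL_RISK"
    for cap in capabilities:
        if cap in ANNEX_III:
            article, cap_tier = ANNEX_III[cap]
            matches.append(article)
            if TIERS.index(cap_tier) > TIERS.index(tier):
                tier = cap_tier
    return matches, tier

print(classify(["biometric_processing", "generative_ai"]))
# (['ART_6 Biometrics', 'ART_39 Generative AI'], 'HIGH_RISK')
```

Because the table and the severity ordering are static, the same inputs always produce the same classification, which is what makes the stage auditable.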
Phase_04 // VERIFY

Context Verification

A dynamic questionnaire refines the preliminary assessment. Your answers about deployment context, human oversight, and safeguards adjust the final classification.

Dynamic question generation · Risk score refinement ±15pt · Evidence collection
// VERIFY_OUTPUT · STEP_04
Deployment region? [EU/EEA]
Human oversight? [YES]
Biometric opt-out? [AVAILABLE]
Testing procedure? [PROVIDED]
Final Classification [VERIFIED]
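The ±15-point refinement can be sketched as a weighted adjustment with a hard cap. The individual question weights below are invented for illustration; only the ±15 bound and the 0–100 score range come from the description above:

```python
def refine_score(preliminary: int, answers: dict[str, bool]) -> int:
    """Adjust the preliminary risk score from questionnaire answers,
    with the total adjustment capped at +/-15 points."""
    delta = 0
    if answers.get("human_oversight"):
        delta -= 8   # documented human-in-the-loop lowers risk
    if answers.get("biometric_opt_out"):
        delta -= 5   # users can opt out of biometric processing
    if answers.get("documented_testing"):
        delta -= 4   # testing procedures are evidenced
    if answers.get("deployed_in_eu"):
        delta += 6   # EU/EEA deployment brings the Act fully into scope
    delta = max(-15, min(15, delta))
    return max(0, min(100, preliminary + delta))

print(refine_score(78, {"human_oversight": True, "biometric_opt_out": True,
                        "documented_testing": True, "deployed_in_eu": True}))  # 67
```

The cap matters: no combination of answers can swing the score far enough to silently flip a high-risk system into minimal risk.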
Phase_05 // REPORT

Signed Compliance Report

A SHA-256 signed PDF report is generated with your classification, matched articles, evidence summary, and compliance roadmap. Tamper-evident and audit-ready.

20+ page PDF · Digital signature · Supabase storage
// REPORT_OUTPUT · STEP_05
Executive Summary [GENERATED]
Risk Assessment [GENERATED]
Compliance Roadmap [GENERATED]
SHA-256 Hash [SIGNED]
Report Status [COMPLETE]
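The tamper-evidence property rests on a simple invariant: the digest published with the report must match a fresh hash of the PDF bytes. A minimal sketch of that sign/verify pair (the function names are illustrative):

```python
import hashlib

def sign_report(pdf_bytes: bytes) -> str:
    """Compute the SHA-256 digest recorded alongside the report."""
    return hashlib.sha256(pdf_bytes).hexdigest()

def verify_report(pdf_bytes: bytes, expected: str) -> bool:
    """Re-hash the PDF and compare: any modification changes the digest."""
    return hashlib.sha256(pdf_bytes).hexdigest() == expected

report = b"%PDF-1.7 ... compliance report body ..."
digest = sign_report(report)
assert verify_report(report, digest)                    # untouched report passes
assert not verify_report(report + b"edit", digest)      # any edit is detected
```

A bare hash proves integrity, not origin; pairing the digest with a signature over it (or storing it in an append-only log) is what would additionally prove who issued the report.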

Full pipeline takes under 5 minutes.

Run_Your_First_Scan
New_Modules

Beyond classification.

Full-spectrum compliance tooling. Not just a risk label — a complete system for tracking, assessing, and proving conformity.

Module_01

GPAI Classification Engine

Detects General Purpose AI model usage — OpenAI, Anthropic, Google, Meta, Mistral, and 5 more providers. Determines if you are a provider or deployer and flags systemic risk obligations.

GPAI_Scan
Provider OpenAI
Role DEPLOYER
Systemic NOT_APPLICABLE
Obligations 6 CONSTRAINTS
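The provider-versus-deployer call can be sketched as a heuristic over detected imports. This is a toy decision rule for illustration, not the legal test, and the import-to-provider map is a small invented subset of the 10 covered providers:

```python
# Hypothetical import-to-provider map (small subset of the real coverage).
PROVIDER_IMPORTS = {"openai": "OpenAI", "anthropic": "Anthropic",
                    "google.generativeai": "Google", "mistralai": "Mistral"}

def detect_gpai(imports: set[str], trains_models: bool) -> dict:
    """Toy heuristic: calling a third-party GPAI model suggests a DEPLOYER
    role; training and distributing your own suggests a PROVIDER role."""
    providers = sorted({PROVIDER_IMPORTS[i] for i in imports if i in PROVIDER_IMPORTS})
    role = "PROVIDER" if trains_models else ("DEPLOYER" if providers else "NONE")
    return {"providers": providers, "role": role}

print(detect_gpai({"openai", "numpy"}, trains_models=False))
# {'providers': ['OpenAI'], 'role': 'DEPLOYER'}
```

The actual role determination under the Act depends on deployment context as well as code, which is why the scan result feeds into the verification questionnaire rather than standing alone.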
Module_02

Conformity Assessment Tracker

Determines whether your system needs Module A (self-assessment) or Module B+C (notified body audit). Tracks your progress through each conformity pathway step.

Conformity_Path
Module MODULE_A
QMS IN_PROGRESS
Technical Doc PENDING
Declaration NOT_STARTED
Module_03

Compliance Timeline

Article 113 defines staggered enforcement deadlines. Our timeline dashboard shows exactly which deadlines apply to your system and how much time you have left.

Timeline_Status
Banned AI FEB 2025 ✕
GPAI Rules AUG 2025 ✕
High-Risk AUG 2026
Full Act AUG 2027
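The deadline arithmetic behind a dashboard like this is straightforward. A sketch using the Article 113 application dates (the Act's dates fall on the 2nd of each month shown):

```python
from datetime import date

# Article 113 application dates, Regulation (EU) 2024/1689.
DEADLINES = {
    "Banned AI practices": date(2025, 2, 2),
    "GPAI model obligations": date(2025, 8, 2),
    "High-risk AI systems": date(2026, 8, 2),
    "Full Act enforcement": date(2027, 8, 2),
}

def timeline_status(today: date) -> dict[str, str]:
    """Mark each deadline as ENFORCED or report the days remaining."""
    return {name: ("ENFORCED" if today >= d else f"{(d - today).days} days left")
            for name, d in DEADLINES.items()}

for name, status in timeline_status(date(2025, 9, 1)).items():
    print(f"{name}: {status}")
```

Which deadlines actually bind a given system depends on its classification, so the dashboard filters this list by the scan result.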

All modules included in Pro. Full-spectrum EU AI Act coverage.

Start_Audit
Guardian_Protocol // CI/CD

Every pull request.
Automatically audited.

Install the LexOculus Guardian GitHub Action. Every PR triggers a compliance scan. High-risk changes are flagged before they reach main. Continuous compliance, not one-time audits.

GitHub Action · PR Blocking · Auto-Scan · Pro Feature
Enable_Guardian
PR #247 // feature/new-model [SCANNING]
Commit
feat: integrate GPT-4 for content generation
Guardian_Checks
Dependency Scan [PASS]
GPAI Detection [FLAGGED]
Risk Classification [HIGH_RISK]
Annex III Match [3 ARTICLES]
Verdict
MERGE_BLOCKED
Review_Required
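The blocking behaviour shown above comes down to the CI step's exit code: GitHub marks the check failed, and branch protection blocks the merge, whenever the step exits non-zero. A sketch of that gate logic, with illustrative check names:

```python
# Classifications that should fail the CI check and block the merge.
BLOCKING = {"HIGH_RISK", "UNACCEPTABLE"}

def guardian_verdict(checks: dict[str, str]) -> int:
    """Return the exit code for the CI step: non-zero fails the check
    when the scan classifies the change as high-risk or worse."""
    classification = checks.get("risk_classification", "MINIMAL_RISK")
    if classification in BLOCKING:
        print(f"MERGE_BLOCKED: {classification} (review required)")
        return 1
    print("MERGE_ALLOWED")
    return 0

code = guardian_verdict({"dependency_scan": "PASS",
                         "gpai_detection": "FLAGGED",
                         "risk_classification": "HIGH_RISK"})  # code == 1
```

In the workflow, the scan step would call `sys.exit(guardian_verdict(...))` so a required status check enforces the verdict.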
Regulation // Active

The EU AI Act
is already in force.

Banned AI practices have been prohibited since February 2025. GPAI obligations apply from August 2025. High-risk system requirements take effect in August 2026. Fines for non-compliance reach up to €35 million or 7% of global annual turnover, whichever is higher.

You are either compliant, or you are liable.
Article_113 // Enforcement_Timeline
FEB 2025
Banned AI practices prohibited
[ENFORCED]
AUG 2025
GPAI model obligations apply
[ENFORCED]
AUG 2026
High-risk AI system requirements
[UPCOMING]
AUG 2027
Full Act enforcement
[UPCOMING]
Source: Regulation (EU) 2024/1689, Article 113
Target_Operators

Built for teams that ship AI.

Engineering_Teams

You deploy AI models in production. You need to know if your system is classified as high-risk before your next release.

Startup_Founders

You are raising funds or entering the EU market. Investors and partners will ask about your EU AI Act compliance status.

Compliance_Officers

You need audit-ready documentation and a clear risk classification. Not a 200-page legal opinion — a technical assessment.

Comparison // Approach

The old way is expensive.

Manual EU AI Act compliance audits take months and cost tens of thousands. LexOculus does it from your codebase in minutes.

Traditional_Compliance_Audit
Timeline 3–6 months
Cost €15,000 – €50,000+
Output 200-page legal opinion
Method Manual document review
Maintenance Outdated on delivery
Evidence Self-reported questionnaire
LexOculus_Automated_Audit
Timeline Under 5 minutes
Cost Flexible plans
Output SHA-256 signed PDF report
Method Automated code analysis
Maintenance Re-scan on every PR
Evidence Dependency graph + LLM analysis
Initiate_Scan

Know your risk.
Before the regulator does.

Connect your GitHub. Get your classification. Generate your report. One scan is all it takes to know where you stand.

SHA-256 signed reports • Annex III mapped • Zero data retention