Your Code.
Legally Compliant.
Connect your GitHub repository. We scan your dependencies, classify your AI system under the EU AI Act, and generate a signed compliance report. In minutes, not months.
Five stages. Zero ambiguity.
From repository scan to signed compliance report. Each stage is deterministic, auditable, and documented.
Deep Dependency Scan
Connect your GitHub repo. We parse every dependency file — package.json, requirements.txt, pyproject.toml — and map them against our database of 200+ AI libraries.
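The scan described above could be sketched along these lines. This is a minimal illustration only: the library list and parsing rules are simplified placeholders, not the actual LexOculus database of 200+ AI libraries.

```python
import json
from pathlib import Path

# Illustrative subset — the real database covers 200+ AI libraries.
KNOWN_AI_LIBRARIES = {"torch", "transformers", "openai", "tensorflow", "opencv-python"}

def scan_package_json(path: str) -> list[str]:
    """Return dependencies from a package.json that match known AI libraries."""
    data = json.loads(Path(path).read_text())
    deps = {**data.get("dependencies", {}), **data.get("devDependencies", {})}
    return sorted(name for name in deps if name.lower() in KNOWN_AI_LIBRARIES)

def scan_requirements_txt(path: str) -> list[str]:
    """Return requirements that match known AI libraries."""
    hits = []
    for line in Path(path).read_text().splitlines():
        line = line.split("#")[0].strip()  # drop comments
        if not line:
            continue
        name = line.split("==")[0].split(">=")[0].strip().lower()
        if name in KNOWN_AI_LIBRARIES:
            hits.append(name)
    return sorted(hits)
```

The same pattern extends to pyproject.toml and other manifest formats: parse, normalize package names, and match against the capability database.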
AI Capability Detection
LLM-powered analysis identifies what your system does — computer vision, NLP, biometric processing, emotion recognition — and which AI frameworks are in use.
Risk Classification
Capabilities are mapped to EU AI Act Annex III articles. The system determines whether your AI is Unacceptable, High-Risk, Limited-Risk, or Minimal-Risk.
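The mapping step can be sketched as a lookup from detected capabilities to risk tiers. The capability keys and article references below are an illustrative subset, not the full Annex III mapping:

```python
# Illustrative subset of the capability-to-article mapping.
ANNEX_III_HIGH_RISK = {
    "biometric_identification": "Annex III(1)",
    "employment_screening": "Annex III(4)",
    "credit_scoring": "Annex III(5)(b)",
}
PROHIBITED = {"social_scoring", "subliminal_manipulation"}     # Article 5 examples
TRANSPARENCY = {"chatbot", "deepfake_generation"}              # Article 50 examples

def classify(capabilities: list[str]) -> tuple[str, list[str]]:
    """Return (risk tier, matched Annex III articles) for detected capabilities."""
    caps = set(capabilities)
    if caps & PROHIBITED:
        return "Unacceptable", []
    matched = sorted(ANNEX_III_HIGH_RISK[c] for c in caps & set(ANNEX_III_HIGH_RISK))
    if matched:
        return "High-Risk", matched
    if caps & TRANSPARENCY:
        return "Limited-Risk", []
    return "Minimal-Risk", []
```

Prohibited practices short-circuit the check; otherwise the highest applicable tier wins.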
Context Verification
A dynamic questionnaire refines the preliminary assessment. Your answers about deployment context, human oversight, and safeguards adjust the final classification.
Signed Compliance Report
A SHA-256 signed PDF report is generated with your classification, matched articles, evidence summary, and compliance roadmap. Tamper-evident and audit-ready.
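The tamper-evidence mechanism boils down to recording a SHA-256 digest of the report bytes and recomputing it at verification time. A minimal sketch (function names are illustrative, not the LexOculus API):

```python
import hashlib

def fingerprint_report(pdf_bytes: bytes) -> str:
    """SHA-256 digest recorded alongside the generated report."""
    return hashlib.sha256(pdf_bytes).hexdigest()

def verify_report(pdf_bytes: bytes, recorded_digest: str) -> bool:
    """Tamper check: recompute the digest and compare to the recorded value."""
    return hashlib.sha256(pdf_bytes).hexdigest() == recorded_digest
```

Any single-byte change to the PDF produces a different digest, so verification fails.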
Full pipeline takes under 5 minutes.
Run_Your_First_Scan
Beyond classification.
Full-spectrum compliance tooling. Not just a risk label — a complete system for tracking, assessing, and proving conformity.
GPAI Classification Engine
Detects General Purpose AI model usage — OpenAI, Anthropic, Google, Meta, Mistral, and 5 more providers. Determines whether you are a provider or a deployer and flags systemic-risk obligations.
Conformity Assessment Tracker
Determines whether your system needs Module A (self-assessment) or Module B+C (notified body audit). Tracks your progress through each conformity pathway step.
Compliance Timeline
Article 113 defines staggered enforcement deadlines. Our timeline dashboard shows exactly which deadlines apply to your system and how much time you have left.
All modules included in Pro. Full-spectrum EU AI Act coverage.
Start_Audit
Every pull request.
Automatically audited.
Install the LexOculus Guardian GitHub Action. Every PR triggers a compliance scan. High-risk changes are flagged before they reach main. Continuous compliance, not one-time audits.
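A workflow file wiring up a PR-triggered scan might look like the sketch below. The action path, inputs, and secret name are assumed placeholders, not the published Guardian interface:

```yaml
# .github/workflows/compliance.yml — hypothetical example
name: LexOculus Guardian
on:
  pull_request:
    branches: [main]
jobs:
  compliance-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Action name and inputs below are illustrative placeholders.
      - uses: lexoculus/guardian-action@v1
        with:
          api-token: ${{ secrets.LEXOCULUS_TOKEN }}
          fail-on: high-risk
```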
The EU AI Act
is already in force.
Prohibited AI practices have been banned since February 2025. GPAI obligations apply from August 2025. High-risk system requirements take effect in August 2026. Non-compliance fines reach €35 million or 7% of global annual turnover.
Built for teams that ship AI.
You deploy AI models in production. You need to know if your system is classified as high-risk before your next release.
You are raising funds or entering the EU market. Investors and partners will ask about your EU AI Act compliance status.
You need audit-ready documentation and a clear risk classification. Not a 200-page legal opinion — a technical assessment.
The old way is expensive.
Manual EU AI Act compliance audits take months and cost tens of thousands. LexOculus does it from your codebase in minutes.
Know your risk.
Before the regulator does.
Connect your GitHub. Get your classification. Generate your report. One scan is all it takes to know where you stand.