Federal Deployment

Trusted Autonomy for Federal Agencies

Oceum is an agentic AI governance platform engineered for federal deployment. The OMB M-26-04 disclosure pack is drafted, M-25-22 vendor lock-in protections are built into the standard delivery framework, and M-22-09 Zero Trust alignment is achieved via blind-relay credential mediation. Customer-deployable to AWS GovCloud, Azure Government, or fully on-prem / air-gapped with in-network LLMs.

M-26-04
Disclosure Pack
M-25-22
Acquisition Posture
M-22-09
Zero Trust Alignment
Air-Gap
Deployable
01 — OMB M-26-04

LLM Disclosure Pack

OMB Memorandum M-26-04 — "Increasing Public Trust in Artificial Intelligence Through Unbiased AI Principles" (issued December 11, 2025) — applies to every Large Language Model procured by federal agencies, regardless of deployment method. The required disclosure pack is drafted (v0.1) and available on request. Most small-business AI vendors have not yet addressed this requirement; we have.

Acceptable Use Policy
Permitted and prohibited uses of the platform's LLM-driven capabilities, with explicit treatment of the Truth-Seeking and Ideological Neutrality principles. Drafted v0.1 — Lakeshore Federal owner; legal review pending June 30, 2026.
drafted v0.1
Model Cards
Per-LLM disclosure covering Orion (local GGUF / hosted endpoint), Claude Haiku 4.5 (commercial fallback), OpenAI text-embedding-3-small, and Cohere Rerank 3.5. Routing, fallback behavior, and category mapping documented.
drafted v0.1
System Card
Deployment-context disclosure covering the governed runtime, integration boundaries, audit-logging behavior, and human-in-the-loop checkpoint design. Ties into the Acceptable Use Policy at the operational layer.
drafted v0.1
Data Card
Training-data and evaluation-data provenance for each LLM in the routing path. No-training-on-customer-data is contractually enforced for hosted models and architecturally enforced for local Orion deployments.
drafted v0.1
End-User Resources
User-facing materials so federal end-users can understand the system's capabilities, limitations, and intended use. Required by M-26-04 transparency obligations; available to bundle into agency end-user communications.
drafted v0.1
Feedback Channel
Documented intake path for end-user and agency feedback on LLM behavior, with severity routing into our incident response process. Required by M-26-04; mapped to existing operations.
drafted v0.1
Status & Caveats
The pack is drafted at v0.1 and available on request for federal evaluators. Internal and legal review are pending ahead of v1.0 (target June 30, 2026). The pack is not a guarantee of compliance — M-26-04 enforcement evolves and agency interpretations vary. Federal contract attorneys should review before use in any actual proposal. Enhanced transparency materials (pre/post-training activities, model-bias evaluations, third-party modifications, governance-tool detail) are available on agency request.
02 — OMB M-25-22

Acquisition Posture

OMB Memorandum M-25-22 — "Driving Efficient Acquisition of Artificial Intelligence in Government" (April 3, 2025; effective October 1, 2025) — governs how federal agencies acquire AI: Buy American emphasis, vendor lock-in protections, knowledge transfer, data and model portability, and privacy compliance. These are baked into our standard delivery framework.

Buy American (§3c)
U.S.-LLC, U.S.-citizen-owned. Engineering bench is U.S.-based (New Hampshire). Commercial LLM dependencies route to U.S.-headquartered providers; local Orion deployments require no external dependency at all.
aligned
Knowledge Transfer
Standard delivery framework includes engineering documentation, runbook handoff, and customer-team training. Successor-vendor transitions are de-risked by architectural documentation rather than tribal knowledge.
delivery-standard
Data Portability
Customer data is exportable in open formats at any time — not held hostage by proprietary schema. Agent state, integration configurations, and audit logs travel with the agency, not with the vendor.
no lock-in
Model Portability
LLM router abstracts the underlying model (Orion, Claude, or any compatible provider). Agencies can swap models per category without code changes — protecting against single-vendor model lock-in.
router-abstracted
No Training on Agency Data (§3a)
Hosted-model contracts include no-training language. Local Orion deployments process agency data within the agency's own boundary — no exfiltration path exists. Embeddings are computed against in-network endpoints when required.
contract + architecture
Licensing Transparency
Open-source dependencies are tracked. Commercial dependencies are documented with license terms surfaced for agency review. No hidden royalty structures or per-seat reseller arrangements.
documented
03 — OMB M-22-09 + RMF

Zero Trust by Architecture

OMB M-22-09 ("Federal Zero Trust Architecture Strategy") and the NIST Risk Management Framework expect identity verification, least-privilege enforcement, and continuous monitoring on every operation. Oceum's governance primitives implement these properties as architectural defaults, not retrofitted controls.

Blind-Relay Credential Mediation
Agents act on credentials they cannot read. AES-256-GCM encryption with per-organization HMAC-SHA256 key derivation. Plaintext never reaches agent prompt context. This is PAM-for-AI-agents — credentialed action without credential exposure.
M-22-09 §IV-B
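As a rough illustration of the blind-relay pattern described above, here is a minimal stdlib-only Python sketch. All names are hypothetical, the master key is a placeholder, and a comment marks where the real system would apply AES-256-GCM; what the sketch does show is per-organization HMAC-SHA256 key derivation and an agent-facing API that only ever exposes opaque handles:

```python
import hashlib
import hmac
import secrets
from typing import Callable

# Placeholder master key; a real deployment would load this from a KMS/HSM.
MASTER_KEY = b"\x00" * 32


def derive_org_key(org_id: str) -> bytes:
    """Derive a 256-bit per-organization subkey via HMAC-SHA256."""
    return hmac.new(MASTER_KEY, f"org:{org_id}".encode(), hashlib.sha256).digest()


class BlindRelay:
    """Agents hold opaque handles, never plaintext. The relay injects the
    credential only at the network boundary, then discards it."""

    def __init__(self) -> None:
        self._vault: dict[str, tuple[str, bytes]] = {}

    def store(self, org_id: str, credential: bytes) -> str:
        # Real system: encrypt `credential` with AES-256-GCM under
        # derive_org_key(org_id) before persisting. Elided in this sketch.
        handle = secrets.token_hex(16)
        self._vault[handle] = (org_id, credential)
        return handle  # this token is all the agent's prompt context sees

    def invoke(self, handle: str, request: Callable[[bytes], object]) -> object:
        org_id, credential = self._vault[handle]
        _ = derive_org_key(org_id)  # real system: decrypt under this subkey
        return request(credential)  # plaintext used only inside the relay


relay = BlindRelay()
h = relay.store("org-123", b"api-key-abc")
result = relay.invoke(h, lambda cred: f"called API with {len(cred)}-byte secret")
```

The design point the sketch preserves: the handle carries no information about the secret, so nothing recoverable ever enters prompt context.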
Governed Execution Engine
Every agent action passes through policy enforcement before reaching the network. Per-action audit records, scope-bounded execution, integration bindings at the unit-of-work level. Production-tested across multi-tenant operation.
policy-gated
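The policy-gated flow can be sketched in a few lines of Python. This is an illustrative toy, not the production engine; the names (`GovernedEngine`, `PolicyDenied`) and the lambda policy are invented for the example. The invariant it demonstrates is the one described above: a per-action audit record is written whether or not the call proceeds, and nothing reaches the network without an allow decision.

```python
from dataclasses import dataclass, field
from typing import Callable


class PolicyDenied(Exception):
    """Raised when an action fails policy evaluation before the network."""


@dataclass
class GovernedEngine:
    # policy returns True iff (agent_id, action) is within the granted scope
    policy: Callable[[str, str], bool]
    audit: list = field(default_factory=list)

    def execute(self, agent_id: str, action: str, network_call: Callable[[], object]):
        allowed = self.policy(agent_id, action)
        # Audit record first, in every case: denied actions are evidence too.
        self.audit.append({"agent": agent_id, "action": action, "allowed": allowed})
        if not allowed:
            raise PolicyDenied(f"{agent_id} is not scoped for {action}")
        return network_call()


# Toy policy: this agent may read tickets and nothing else.
engine = GovernedEngine(policy=lambda a, act: (a, act) == ("agent-1", "tickets:read"))
ok = engine.execute("agent-1", "tickets:read", lambda: "200 OK")
```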
Append-Only Audit Journal
Agent actions, policy decisions, and integration calls are journaled to an append-only, monthly-partitioned table. RLS plus database triggers prevent UPDATE and DELETE — even from service-role contexts. Designed for ATO-package evidence collection.
append-only
Multi-Tenant Isolation
Per-tenant cryptographic isolation at the vault layer. Service-role queries enforce explicit organization scoping. Cross-tenant data flow is architecturally prevented — RLS plus application-layer scoping plus key isolation, three layers deep.
RLS + crypto + scope
Continuous Monitoring
Per-step cost attribution, situation-hash-gated heartbeats, and a 5-minute Pulse cron that surfaces platform-state anomalies as they happen. The same telemetry that powers operations also powers RMF continuous-monitoring evidence.
RMF-aligned
Hash-Anchored State Mutations
Every mutable record (vault, memory, integration writes) carries a content hash. Stale writes return HTTP 409 instead of silently clobbering — agents cannot overwrite state another process changed underneath them. Designed for safe concurrent operation under audit.
optimistic concurrency
For Federal Reviewers
These properties are documented in detail on the Security Architecture page. The runtime has been in continuous multi-tenant production for ~12 months. Continuous-ATO and RMF automation packages are available as a delivery path through Lakeshore Federal Services LLC.
04 — Deployment

Federal Deployment Paths

Customer-deployable to commercial cloud, federal cloud, or fully on-prem / air-gapped. The LLM router supports commercial providers or locally-hosted in-network models, so closed-environment and sovereign-data deployments do not require external LLM connectivity.

Tier 1
AWS GovCloud (US)
FedRAMP-authorized AWS environment for ITAR / CUI workloads. Standard deployment template; agency-owned account.
  • Cloud-native architecture
  • Customer-owned account
  • Standard CloudFormation
  • VPC isolation
Tier 1
Azure Government
FedRAMP-authorized Microsoft Azure cloud for federal customers. Equivalent deployment template to AWS GovCloud.
  • Cloud-native architecture
  • Customer-owned subscription
  • ARM templates
  • Private endpoints
Tier 2
On-Prem / Self-Hosted
Full deployment inside the agency's existing data center. Standard Linux + Postgres footprint; no cloud dependencies.
  • Linux + Postgres
  • Local LLM (Orion GGUF)
  • Internal embedding endpoint
  • No external network calls
Tier 3
Air-Gapped / Sovereign
Fully disconnected deployment for classified or sovereign-data environments. Locally-hosted LLMs; no commercial-cloud or public-internet dependency at any layer.
  • Zero external connectivity
  • Local Orion + local embeddings
  • Bundled installer
  • Manual update path
LLM Router Flexibility
The same Oceum runtime serves all four deployment paths. The internal LLM router is provider-abstracted: commercial Claude / Orion in commercial cloud, locally-hosted Orion in on-prem and air-gapped environments. Agencies can switch model providers per work category without code changes.
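Conceptually, per-category provider routing looks like the Python sketch below. The category names and model identifier strings are illustrative, not the platform's actual configuration; the point is that moving from commercial cloud to air-gapped is a routing-table change, with no code changes in the runtime.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Route:
    provider: str
    model: str


class LLMRouter:
    """Resolve a work category to a (provider, model) pair, with a fallback.
    Swapping providers is a configuration change, not a code change."""

    def __init__(self, routes: dict[str, Route], fallback: Route) -> None:
        self.routes, self.fallback = routes, fallback

    def resolve(self, category: str) -> Route:
        return self.routes.get(category, self.fallback)


# Commercial-cloud configuration (category and model names illustrative):
commercial = LLMRouter(
    routes={
        "drafting": Route("anthropic", "claude-haiku-4-5"),
        "classification": Route("local", "orion-gguf"),
    },
    fallback=Route("anthropic", "claude-haiku-4-5"),
)

# Air-gapped configuration: same runtime, every category pinned in-network.
air_gapped = LLMRouter(routes={}, fallback=Route("local", "orion-gguf"))
```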
05 — Compliance

Compliance Status

Honest framing: what is architecturally implemented today, what is drafted and pending review, and what is on the roadmap. We don't claim what we cannot back up to a federal evaluator.

Architectural
OMB M-22-09 Zero Trust
Identity verification, least-privilege enforcement, encryption at rest, audit logging, and continuous monitoring implemented as architectural defaults. Documented on the Security Architecture page.
Architectural
Multi-Tenant Isolation
RLS, application-layer org scoping, and per-tenant cryptographic key isolation. Cross-tenant data leakage is structurally prevented across three independent layers.
Architectural
Append-Only Audit Trail
Monthly-partitioned agent_journal with RLS- and trigger-enforced append-only semantics. UPDATE and DELETE rejected at the database layer even for service role.
Drafted v0.1
OMB M-26-04 Disclosure Pack
Six required cards (AUP, Model Cards, System Card, Data Card, End-User Resources, Feedback Channel) drafted v0.1. Internal review and legal review pending. v1.0 target June 30, 2026.
Aligned
OMB M-25-22 Acquisition
Buy American posture, vendor lock-in protections, knowledge transfer, data and model portability, and no-training-on-agency-data baked into the standard delivery framework.
Aligned
RMF / Continuous-ATO Support
Continuous monitoring, audit evidence collection, and per-action policy decisions designed for RMF-package generation. Available as a Lakeshore Federal delivery path.
Roadmap
FedRAMP Authorization
Not currently FedRAMP-authorized. Path forward depends on agency sponsorship; existing controls are designed to map cleanly onto FedRAMP Moderate. Inquire if your acquisition requires sponsored FedRAMP.
Roadmap
SOC 2 Type II
Controls implemented; auditor engagement pending. Most underlying evidence (test coverage, change management, audit trails) is already in place via the existing security audit history.
06 — Procurement

Federal Procurement Path

Oceum is delivered to federal customers through Lakeshore Federal Services LLC — a New Hampshire small business, U.S.-LLC, U.S.-citizen-owned, with a Treasury-vertical contracting strategy. Lakeshore holds the federal contract; Oceum is the platform underneath.

NAICS Codes
Code      Description
541511 *  Custom Computer Programming Services
541512    Computer Systems Design Services
541519    Other Computer Related Services
541611    Administrative Management Consulting
541990    Other Professional, Scientific & Technical

* Primary NAICS

Product Service Codes
Code   Description
DA01   Application/App Development Support
DD01   ITSM, PM, OpsCenter
DF01   IT Management Support Services
Engagement Models
Lakeshore Federal Services welcomes micro-purchase awards (under $10K), simplified acquisitions (under $250K), and sources sought / RFI responses to establish capability and past performance with new agency partners. Vertical focus is Treasury (IRS, BFS, OCC, FinCEN, Departmental Offices) and adjacent federal financial regulators; we also respond to agency-agnostic RFIs where the technical fit is real. Capability statement and full SAM.gov registration available at lakeshorefederal.com.
For Federal Evaluators

Request the Disclosure Pack

The M-26-04 disclosure pack, capability statement, and technical briefings are available on request. We respond to sources sought, RFIs, and capability inquiries within 48 hours.

Request Pack · Lakeshore Federal · Security Detail