Architecture
Authrim is a Unified Identity & Access Platform built entirely on the Cloudflare Workers ecosystem. This page covers the core technical architecture: the multi-worker system, database abstraction layer, PII partition routing, and Durable Object region sharding.
System Overview
Authrim is composed of multiple specialized Workers connected via Service Bindings. The ar-router Worker acts as the central entry point, dispatching requests to domain-specific Workers.
```mermaid
flowchart LR
    router["ar-router<br/>(entry point)"]
    subgraph endpoints["OIDC / Auth"]
        discovery["ar-discovery<br/>(OIDC meta)"]
        auth["ar-auth<br/>(AuthZ EP)"]
        token["ar-token<br/>(Token EP)"]
        userinfo["ar-userinfo<br/>(UserInfo EP)"]
    end
    subgraph federation["Federation"]
        saml["ar-saml<br/>(SAML IdP)"]
        bridge["ar-bridge<br/>(External IdP)"]
    end
    subgraph ops["Operations"]
        mgmt["ar-management<br/>(Admin API)"]
        async["ar-async<br/>(Background)"]
    end
    router --> discovery & auth & token & userinfo
    router --> saml & bridge
    router --> mgmt & async
```
| Worker | Responsibility |
|---|---|
| ar-router | Request routing, rate limiting, CORS |
| ar-discovery | /.well-known/openid-configuration, JWKS |
| ar-auth | Authorization endpoint, consent, login flows |
| ar-token | Token endpoint (code exchange, refresh, device) |
| ar-userinfo | UserInfo endpoint |
| ar-management | Admin API (users, clients, roles, policies) |
| ar-saml | SAML IdP and SP |
| ar-bridge | External IdP federation (social login, enterprise SSO) |
| ar-async | Background jobs (key rotation, cleanup, SCIM sync) |
All Workers share a common library, ar-lib-core, which provides the database abstraction, repositories, utilities, and Durable Object definitions.
Database Abstraction Layer
DatabaseAdapter Interface
All database operations go through the DatabaseAdapter interface, which abstracts the underlying storage engine:
```ts
interface DatabaseAdapter {
  query<T>(sql: string, params?: unknown[]): Promise<T[]>;
  queryOne<T>(sql: string, params?: unknown[]): Promise<T | null>;
  execute(sql: string, params?: unknown[]): Promise<ExecuteResult>;
  transaction<T>(fn: (tx: TransactionContext) => Promise<T>): Promise<T>;
  batch(statements: PreparedStatement[]): Promise<ExecuteResult[]>;
  isHealthy(): Promise<HealthStatus>;
}
```

The primary implementation is D1Adapter for Cloudflare D1 (serverless SQLite). The adapter includes retry logic with exponential backoff and health check monitoring.
Transaction semantics: D1 does not support traditional SQL transactions. Instead, the D1Adapter collects all statements within a transaction() call and executes them as a D1 batch — providing all-or-nothing semantics.
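As an illustration, a batch-backed transaction might be implemented along the following lines. This is a minimal sketch, not the actual D1Adapter code; the CollectingTx type and its shape are assumptions.

```ts
// Sketch only: statements are queued inside the callback and sent to D1 as
// one batch at the end, giving all-or-nothing semantics without BEGIN/COMMIT.
class CollectingTx {
  readonly statements: { sql: string; params: unknown[] }[] = [];

  execute(sql: string, params: unknown[] = []): void {
    // Nothing hits D1 yet; the statement is queued for the final batch.
    this.statements.push({ sql, params });
  }
}

async function transaction<T>(
  db: D1Database,
  fn: (tx: CollectingTx) => Promise<T>,
): Promise<T> {
  const tx = new CollectingTx();
  const result = await fn(tx); // the callback only queues writes
  // db.batch() runs all queued statements atomically (all-or-nothing).
  await db.batch(tx.statements.map((s) => db.prepare(s.sql).bind(...s.params)));
  return result;
}
```

One consequence of this model is that reads issued inside the callback cannot observe the queued writes, since nothing executes until the batch runs.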
BaseRepository Pattern
All entity repositories extend BaseRepository<T>, which provides:
- CRUD operations — `findById`, `create`, `update`, `delete`
- Pagination — cursor-based with configurable sort
- Filtering — type-safe conditions with operator support (`eq`, `in`, `like`, etc.)
- Soft delete — via the `is_active` flag (default behavior)
- SQL injection prevention — field name validation against an allowlist
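For illustration, a repository built on this pattern could look roughly like the following. The entity, table name, and configuration shape are hypothetical; only the BaseRepository pattern itself comes from ar-lib-core.

```ts
// Hypothetical example: the import path, entity, and config shape are assumed.
import { BaseRepository, DatabaseAdapter } from 'ar-lib-core';

interface OAuthClient {
  id: string;
  client_name: string;
  is_active: boolean;
}

class ClientRepository extends BaseRepository<OAuthClient> {
  constructor(db: DatabaseAdapter) {
    super(db, {
      tableName: 'clients',
      // Allowlist that field names are validated against (SQL injection guard).
      allowedFields: ['id', 'client_name', 'is_active'],
    });
  }

  // findById/create/update/delete, pagination, and filtering are inherited;
  // only entity-specific queries need to be written by hand.
  findByName(name: string): Promise<OAuthClient | null> {
    return this.db.queryOne<OAuthClient>(
      'SELECT * FROM clients WHERE client_name = ? AND is_active = 1',
      [name],
    );
  }
}
```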
Three-Database Separation
Authrim uses three separate D1 databases to isolate data by sensitivity:
| Database | Purpose | Content |
|---|---|---|
| DB_CORE | Authentication core | Users (non-PII), clients, sessions, tokens, roles |
| DB_PII | Personal data | Email, name, address — partitioned by geography |
| DB_ADMIN | Platform management | Admin users, audit logs, tenant settings |
This separation ensures PII can be stored in a jurisdiction-appropriate database while authentication operations only touch DB_CORE.
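As a sketch, a Worker might wire these up by creating one adapter per binding. The Env binding names mirror the table above; the import path and D1Adapter constructor signature shown here are assumptions.

```ts
// Sketch: one DatabaseAdapter per D1 binding. Constructor shape is assumed.
import { D1Adapter } from 'ar-lib-core';

interface Env {
  DB_CORE: D1Database;  // users (non-PII), clients, sessions, tokens, roles
  DB_PII: D1Database;   // email, name, address (geo-partitioned)
  DB_ADMIN: D1Database; // admin users, audit logs, tenant settings
}

function createAdapters(env: Env) {
  return {
    core: new D1Adapter(env.DB_CORE),
    pii: new D1Adapter(env.DB_PII),
    admin: new D1Adapter(env.DB_ADMIN),
  };
}
```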
PII Partition Router
The PIIPartitionRouter routes PII data access to the correct database partition. Each partition maps to a separate DatabaseAdapter instance (potentially in different geographic regions).
Trust Level Hierarchy
When deciding which partition should store a new user's PII, the router evaluates a trust hierarchy (highest trust first):
| Priority | Method | Trust Level | Description |
|---|---|---|---|
| 1 | Tenant policy | High | Tenant-specific partition override |
| 2 | Declared residence | High | User’s self-declared country of residence |
| 3 | Custom rules | Medium | Attribute-based routing rules (plan, role, etc.) |
| 4 | IP routing | Low | Cloudflare geo headers (fallback only) |
| 5 | Default partition | — | Last resort |
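In code, this ordering amounts to a cascade of checks. The following is a minimal sketch under assumed type names (PartitionConfig, PartitionContext); it is not the actual PIIPartitionRouter implementation.

```ts
// Sketch of the trust hierarchy. All type and field names are illustrative.
interface PartitionConfig {
  countryToPartition: Record<string, string>;
  customRules: { attribute: string; value: string; partition: string }[]; // pre-sorted by priority
  ipRoutingEnabled: boolean;
  defaultPartition: string;
}

interface PartitionContext {
  tenantPartition?: string;           // 1. tenant policy override
  declaredResidence?: string;         // 2. self-declared country, e.g. 'DE'
  attributes: Record<string, string>; // 3. inputs for custom rules (plan, role, ...)
  cfCountry?: string;                 // 4. Cloudflare geo header (low trust)
}

function resolvePartition(ctx: PartitionContext, cfg: PartitionConfig): string {
  if (ctx.tenantPartition) return ctx.tenantPartition;
  if (ctx.declaredResidence && cfg.countryToPartition[ctx.declaredResidence]) {
    return cfg.countryToPartition[ctx.declaredResidence];
  }
  for (const rule of cfg.customRules) {
    if (ctx.attributes[rule.attribute] === rule.value) return rule.partition;
  }
  if (cfg.ipRoutingEnabled && ctx.cfCountry && cfg.countryToPartition[ctx.cfCountry]) {
    return cfg.countryToPartition[ctx.cfCountry];
  }
  return cfg.defaultPartition; // 5. last resort
}
```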
Partition Configuration
Partition settings are stored in KV (with in-memory caching, 10s TTL) and configurable per tenant via the Admin API:
- Available partitions — registered database adapters (e.g., `eu`, `us`, `apac`, `default`)
- Tenant overrides — force all users of a tenant to a specific partition
- Custom rules — attribute-based conditions with priority ordering
- IP routing toggle — enable/disable geographic fallback
The users_core.pii_partition column tracks which partition contains each user’s PII, enabling correct routing for subsequent reads.
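The read path follows that column. As a sketch (table and column names other than users_core.pii_partition are assumptions):

```ts
// Sketch: resolve the stored partition, then route the PII read to it.
async function getUserEmail(
  core: DatabaseAdapter,
  piiAdapters: Record<string, DatabaseAdapter>,
  userId: string,
): Promise<string | null> {
  // 1. The partition chosen at creation time is recorded on users_core.
  const row = await core.queryOne<{ pii_partition: string }>(
    'SELECT pii_partition FROM users_core WHERE id = ?',
    [userId],
  );
  if (!row) return null;

  // 2. Read PII from the matching partition adapter (PII table name is assumed).
  const pii = piiAdapters[row.pii_partition] ?? piiAdapters['default'];
  const record = await pii.queryOne<{ email: string }>(
    'SELECT email FROM users_pii WHERE user_id = ?',
    [userId],
  );
  return record?.email ?? null;
}
```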
Durable Object Region Sharding
Durable Objects (DOs) provide strongly consistent, stateful storage — used in Authrim for sessions, authorization codes, challenges, refresh tokens, and more. Region sharding distributes these DOs across multiple shards and geographic regions.
Why Region Sharding?
A single Durable Object instance can handle approximately 50-100 requests/second. For a platform serving thousands of concurrent authentication flows, a single DO per resource type would become a bottleneck. Region sharding solves this by:
- Horizontal scaling — distributing load across N shards
- Geographic locality — placing DOs near users via `locationHint`
- Predictable routing — embedding shard info in resource IDs for zero-lookup routing
Shard Configuration
Shard Key Algorithm
Authrim uses FNV-1a (32-bit) hash to determine shard assignment:
```
shardIndex = fnv1a32(shardKey) % totalShards
```

The shardKey varies by resource type:

- Session: random secure ID (uniform distribution)
- AuthCode / RefreshToken: `userId:clientId` (colocated by user-client pair)
- PAR / DeviceCode / CIBA / DPoP: `clientId`
- Challenge: random ID
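A standard 32-bit FNV-1a implementation makes the shard computation concrete. This sketch uses the textbook FNV-1a constants; the exact byte handling in ar-lib-core may differ slightly, and the key values are illustrative.

```ts
// FNV-1a, 32-bit: XOR each byte into the hash, then multiply by the FNV prime.
function fnv1a32(input: string): number {
  let hash = 0x811c9dc5; // FNV offset basis
  for (const byte of new TextEncoder().encode(input)) {
    hash ^= byte;
    hash = Math.imul(hash, 0x01000193); // FNV prime, 32-bit overflow multiply
  }
  return hash >>> 0; // force unsigned 32-bit
}

// Illustrative: an AuthCode keyed by userId:clientId, so the matching
// RefreshToken hashes to the same shard.
const totalShards = 4;
const shardIndex = fnv1a32('usr_123:cli_abc') % totalShards;
```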
Region ID Format
Every region-sharded resource ID embeds routing information:
```
g{generation}:{region}:{shard}:{prefix}_{randomPart}
```

Examples:

- `g1:apac:3:ses_X7g9kPq2Lm4R` — Session in APAC, shard 3, generation 1
- `g1:enam:1:acd_9f8a2b1c` — AuthCode in US East, shard 1
The corresponding DO instance name follows:
```
{tenantId}:{region}:{typeAbbrev}:{shard}
```

Example: `default:apac:ses:3`
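Because all routing data is embedded in the resource ID, parsing it back out is enough to reach the right DO. A minimal sketch (field names are illustrative):

```ts
// Sketch: split a region-sharded ID into its routing parts.
interface ParsedRegionId {
  generation: number;
  region: string;     // e.g. 'apac'
  shard: number;
  resourceId: string; // e.g. 'ses_X7g9kPq2Lm4R'
}

function parseRegionId(id: string): ParsedRegionId | null {
  // Format: g{generation}:{region}:{shard}:{prefix}_{randomPart}
  const match = id.match(/^g(\d+):([a-z]+):(\d+):(.+)$/);
  if (!match) return null;
  return {
    generation: Number(match[1]),
    region: match[2],
    shard: Number(match[3]),
    resourceId: match[4],
  };
}

// parseRegionId('g1:apac:3:ses_X7g9kPq2Lm4R')
//   → { generation: 1, region: 'apac', shard: 3, resourceId: 'ses_X7g9kPq2Lm4R' }
```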
Region Distribution
Shards are divided among geographic regions based on a percentage distribution. The default configuration (4 total shards):
| Region | Percentage | Shards | Range |
|---|---|---|---|
| enam (US East) | 50% | 2 | 0–1 |
| weur (West Europe) | 25% | 1 | 2 |
| apac (Asia Pacific) | 25% | 1 | 3 |
The calculateRegionRanges() function converts percentages to concrete shard ranges, ensuring all shards are covered.
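One simple way to do that conversion is to round each region's share and let the last region absorb the remainder so every shard is assigned. The sketch below follows that idea; the real calculateRegionRanges() may round differently.

```ts
// Sketch: turn percentage shares into contiguous, inclusive shard ranges.
interface RegionShare { region: string; percentage: number; }
interface RegionRange { region: string; start: number; end: number; }

function calculateRegionRanges(shares: RegionShare[], totalShards: number): RegionRange[] {
  const ranges: RegionRange[] = [];
  let next = 0;
  shares.forEach((share, i) => {
    const isLast = i === shares.length - 1;
    // The last region takes whatever remains, so all shards are covered.
    const count = isLast
      ? totalShards - next
      : Math.round((share.percentage / 100) * totalShards);
    ranges.push({ region: share.region, start: next, end: next + count - 1 });
    next += count;
  });
  return ranges;
}

// With the default table (enam 50%, weur 25%, apac 25%) and 4 shards:
//   enam → 0–1, weur → 2, apac → 3
```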
Placement and Colocation
locationHint Placement
When creating a DO stub, Authrim passes locationHint to Cloudflare:
```ts
namespace.get(id, { locationHint: 'apac' });
```

This hint is only effective on the first get() call for a given DO ID — it determines where Cloudflare physically places the DO. Subsequent calls route to the already-placed instance.
Colocation Groups
DOs that route the same shard key must resolve to the same shard, which requires identical shard counts. Authrim defines colocation groups to enforce this:
| Group | Shard Count | Members | Reason |
|---|---|---|---|
| user-client | 4 | AuthCode, RefreshToken | Same userId:clientId key |
| random-high-rps | 4 | Revocation | High throughput |
| random-medium-rps | 4 | Session, Challenge | Medium throughput |
| client-based | 4 | PAR, DeviceCode, CIBA, DPoP | Same clientId key |
| vc | 4 | CredOffer, VPRequest | Verifiable Credentials |
Mismatched shard counts within a colocation group cause intermittent authentication failures — a user’s AuthCode and RefreshToken would land on different shards, breaking the code exchange flow.
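A startup assertion can catch this misconfiguration before it causes failures. The sketch below is illustrative; the actual ar-lib-core validation may differ.

```ts
// Sketch: verify every member of a colocation group uses the group's shard count.
type ColocationGroups = Record<string, { shardCount: number; members: string[] }>;

function assertColocation(
  groups: ColocationGroups,
  shardCounts: Record<string, number>, // configured shard count per DO type
): void {
  for (const [group, cfg] of Object.entries(groups)) {
    for (const member of cfg.members) {
      if (shardCounts[member] !== cfg.shardCount) {
        throw new Error(
          `Colocation group "${group}" expects ${cfg.shardCount} shards, ` +
            `but ${member} is configured with ${shardCounts[member]}`,
        );
      }
    }
  }
}
```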
Migration and Routing
Generation-Based Migration
When shard configuration changes (e.g., scaling from 4 to 32 shards), Authrim uses a generation-based approach:
- The current generation config is archived to `previousGenerations`
- A new generation is created with the updated shard count and distribution
- New resources use the new generation
- Existing resources continue routing to their original generation (which is embedded in their ID)
This means no data migration is required — old and new resources coexist with different shard configurations. Up to 5 previous generations are retained.
Resource Creation Flow
1. Generate random ID / compute shard key
2. Get RegionShardConfig from KV (cached 10s)
3. Calculate: shardIndex = fnv1a32(shardKey) % totalShards
4. Resolve region from shard ranges
5. Create resource ID: `g{gen}:{region}:{shard}:{prefix}_{random}`
6. Build DO instance name: `{tenant}:{region}:{type}:{shard}`
7. Get DO stub with locationHint
8. Send request to DO

Existing Resource Access Flow

1. Parse resource ID → extract generation, region, shard
2. Build DO instance name from parsed info
3. Get DO stub with locationHint (routes to existing placement)
4. Send request to DO

No configuration lookup is needed for existing resources — all routing information is embedded in the ID itself.
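In code, the existing-resource path reduces to a few lines. This sketch reuses the parseRegionId() helper from the earlier example and the standard Durable Object APIs; the tenant handling is simplified.

```ts
// Sketch: route a request for an existing resource using only its ID.
function getStubForResource(
  ns: DurableObjectNamespace,
  tenantId: string,
  resourceId: string, // e.g. 'g1:apac:3:ses_X7g9kPq2Lm4R'
): DurableObjectStub {
  const parsed = parseRegionId(resourceId);
  if (!parsed) throw new Error('Malformed resource ID');

  // {tenantId}:{region}:{typeAbbrev}:{shard}, e.g. 'default:apac:ses:3'
  const typeAbbrev = parsed.resourceId.split('_')[0];
  const name = `${tenantId}:${parsed.region}:${typeAbbrev}:${parsed.shard}`;

  const id = ns.idFromName(name);
  // The hint only matters on first placement; existing DOs keep their location.
  return ns.get(id, { locationHint: parsed.region as DurableObjectLocationHint });
}
```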
Caching Strategy
Authrim uses a three-tier caching strategy to minimize database reads:
```mermaid
block-beta
    columns 1
    kv["KV Cache (global, ~60s)<br/>JWKS, OIDC metadata, settings"]
    do["DO In-Memory (per-shard)<br/>Sessions, tokens, codes"]
    d1["D1 Database (persistent)<br/>Source of truth"]
```
- KV: Global key-value store with eventual consistency. Used for configuration, public keys, and read-heavy data.
- DO in-memory: Each Durable Object maintains in-memory state. Provides strongly consistent reads within a shard.
- D1: The persistent store and source of truth. All writes go to D1; reads are served from cache when possible.
Configuration caches (region shard config, partition settings) use a 10-second TTL to balance freshness with performance.
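A short-TTL in-memory wrapper in front of KV captures that trade-off. The following is a minimal sketch; the binding and key names in the usage comment are illustrative.

```ts
// Sketch: cache KV JSON reads in memory for a short TTL (default 10 seconds).
function cachedKvReader<T>(kv: KVNamespace, ttlMs = 10_000) {
  const cache = new Map<string, { value: T | null; expires: number }>();

  return async (key: string): Promise<T | null> => {
    const hit = cache.get(key);
    if (hit && hit.expires > Date.now()) return hit.value; // fresh in-memory hit

    const value = await kv.get<T>(key, 'json'); // fall back to the KV store
    cache.set(key, { value, expires: Date.now() + ttlMs });
    return value;
  };
}

// Illustrative usage:
// const getShardConfig = cachedKvReader<RegionShardConfig>(env.CONFIG_KV);
// const cfg = await getShardConfig('region-shard-config');
```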
Next Steps
- Edge Computing — Why edge-native architecture for identity
- Identity Hub — Unified identity federation concept
- PII Separation — Database-level PII isolation details