SerenAI

Four Technical Questions about SerenAI from Shane Sabine and Punch Capital

Taariq Lewis

Thanks to Shane Sabine of Punch Capital. Shane posed four great questions, which we answered with help from Claude, drawing on everything we have built on SerenAI so far.

Thanks Shane for reading the blog and for these thoughtful technical questions! They get right to the core of what we're building. Let's address each one with updated context from our November 2025 architecture:

Q1: Agent Identity & Context Propagation - Complete Architecture

Your Question:

"You mentioned eliminating the MCP server layer to reduce latency. How does SerenAI handle agent identity and context propagation natively within Postgres?"

Three-Layer Architecture

SerenAI is now a complete platform with three architectural layers:

Layer 1: SerenCore (Control Plane) - NEW

What It Does:

  • Agent authentication via API keys (seren_<key_id>_<secret>)
  • Agent identity management (agentic_type: 'claude-code', 'claude-computer')
  • Agentic database provisioning (ephemeral databases with TTL)
  • Payment verification (x402 protocol integration)
  • Usage metering (per-query tracking for billing)
  • Quota enforcement (rate limits, concurrent database limits)

API Key Structure for Agents:

```shell
# Agent API key format
seren_<key_id>_<secret>

# Example
seren_01H9X2K4M7N8P9Q0R1S2T3U4_xyz789abc...
```

API key metadata (stored in SerenCore):

```json
{
  "agentic_type": "claude-code",
  "rate_limit_per_minute": 60,
  "max_concurrent_databases": 5,
  "allowed_templates": ["yearn_agent", "risk_off_agent"],
  "organization_id": "org_123",
  "expires_at": "2026-01-01T00:00:00Z"
}
```


Agent Authentication Flow:

1. Agent presents API key: Authorization: Bearer seren_<key_id>_<secret>
2. SerenCore validates key (hash check)
3. SerenCore loads agent context: agentic_type, quotas, permissions
4. SerenCore provisions/retrieves agentic database
5. Agent receives database connection string
6. All queries metered via usage_events table
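The key format above can be sketched in code. This is a minimal illustration, assuming the three-part `seren_<key_id>_<secret>` layout; `parseSerenKey` is a hypothetical helper, not part of the SerenAI SDK:

```typescript
// Hypothetical helper: split a SerenAI API key into its parts.
// Assumed format (from above): seren_<key_id>_<secret>
interface SerenKey {
  keyId: string;
  secret: string;
}

function parseSerenKey(raw: string): SerenKey {
  // key_id is an alphanumeric token; the secret is everything after
  // the second underscore.
  const match = /^seren_([A-Z0-9]+)_(.+)$/i.exec(raw);
  if (!match) throw new Error('malformed SerenAI API key');
  return { keyId: match[1], secret: match[2] };
}

// The server would hash the secret and compare it against the stored
// hash (step 2 above); only the key_id is ever logged in plain text.
const key = parseSerenKey('seren_01H9X2K4M7N8P9Q0R1S2T3U4_xyz789abc');
// key.keyId === '01H9X2K4M7N8P9Q0R1S2T3U4'
```

Splitting identity (`key_id`) from credential (`secret`) lets SerenCore log and rate-limit by key without ever storing the secret in plain text.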

Layer 2: SerenDB (Data Layer)

What It Does:

  • Persistent memory for stateful agents (not just database storage)
  • Agent-context security (Row-Level Security policies)
  • Time-travel queries (LSN timeline branching)
  • Audit logging (SerenpgAudit extension for agent activity)
  • Edge deployment (300+ global locations via the Cloud)

How Agent Identity Propagates from SerenCore to SerenDB:

```typescript
// 1. Agent authenticates with SerenCore (Layer 1)
const apiKey = 'seren_01H9X2K4M7N8P9Q0R1S2T3U4_xyz789abc...';
const response = await fetch('https://control.serendb.com/api/agentic-databases', {
  method: 'POST',
  headers: { 'Authorization': `Bearer ${apiKey}` },
  body: JSON.stringify({
    template: 'yearn_agent',
    ttl_seconds: 3600,
    agent_metadata: {
      agent_name: 'yearn_rebalancer',
      workflow_id: 'workflow_456'
    }
  })
});

// 2. SerenCore provisions an ephemeral database and returns a connection string
const { connection_string, branch_id, expires_at } = await response.json();
// connection_string = "postgresql://agent_xyz:pass@edge.serendb.com:5432/db_abc?options=-c%20serenai.agent_id=agent_xyz"

// 3. Agent connects to SerenDB (Layer 2) with identity embedded
const db = await postgres(connection_string);

// 4. The SerenDB session automatically carries the agent identity:
//    the connection string's `options=-c serenai.agent_id=agent_xyz`
//    sets the session variable on connection.

// 5. All queries carry agent context via session variables
await db`SET SESSION serenai.workflow_id = 'workflow_456'`;

// 6. Queries are logged with full agent context
const vaults = await db`SELECT * FROM yearn_vault_snapshots WHERE apy > 8`;
// pgAudit logs: {agent_id: 'agent_xyz', workflow_id: 'workflow_456', query: '...', timestamp: ...}
```
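The connection-string mechanics in step 3 can be sketched as a small helper. `withAgentId` is hypothetical (the real provisioning API returns the string pre-built); it just shows how the startup option is percent-encoded into the URL:

```typescript
// Hypothetical sketch: embed the agent identity in a Postgres URL the
// way the example above does, via a `-c serenai.agent_id=...` startup
// option in the `options` query parameter.
function withAgentId(baseUrl: string, agentId: string): string {
  const option = `-c serenai.agent_id=${agentId}`;
  // libpq treats `options` as startup parameters; the space must be
  // percent-encoded so the URL remains a single token.
  const sep = baseUrl.includes('?') ? '&' : '?';
  return `${baseUrl}${sep}options=${option.replace(/ /g, '%20')}`;
}

const conn = withAgentId(
  'postgresql://agent_xyz:pass@edge.serendb.com:5432/db_abc',
  'agent_xyz'
);
// conn === 'postgresql://agent_xyz:pass@edge.serendb.com:5432/db_abc?options=-c%20serenai.agent_id=agent_xyz'
```

Because the identity rides in the connection string itself, every session opens with `serenai.agent_id` already set and no middleware has to inject it.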


Session Variables + Row-Level Security:

```sql
-- Agent identity is embedded in the session (set automatically on
-- connection via the connection string from SerenCore):
--   current_setting('serenai.agent_id') = 'agent_xyz'

-- Row-Level Security enforces agent-only data access
CREATE POLICY agent_data_isolation ON agent_data
  USING (
    agent_id = current_setting('serenai.agent_id')::text
  );

-- Agent A can only see its own data
-- Agent B can only see its own data
-- No middleware needed; Postgres enforces it natively
```

Agentic Audit (pgAudit Extension):

```sql
-- Enable pgAudit for agent activity tracking
CREATE EXTENSION IF NOT EXISTS pgaudit;

-- Configure for agent workloads
ALTER SYSTEM SET pgaudit.log = 'read, write, ddl';
ALTER SYSTEM SET serenai.audit_agent_context = 'on';
ALTER SYSTEM SET serenai.audit_policy_violations = 'on';
```

Example audit log entry:

```json
{
  "timestamp": "2025-11-18T10:30:00Z",
  "agent_id": "agent_xyz",
  "workflow_id": "workflow_456",
  "query": "SELECT * FROM yearn_vault_snapshots WHERE apy > 8",
  "result": "GRANTED",
  "rows_returned": 47,
  "execution_time_ms": 8,
  "data_provider": "yearn_protocol",
  "usage_cost": 0.01
}
```


Layer 3: Agent Runtime (LangChain Integration)

What It Does:

  • Agent orchestration (LangChain/LangGraph native)
  • Multi-agent communication (agent-to-agent messaging)
  • Tool execution (custom agent functions)
  • Persistent memory (stateful agent sessions)

Complete Request Flow:

Agent Request
→ SerenCore (Layer 1)
• Authenticate API key
• Validate payment (x402)
• Check quotas (rate limit, concurrent DBs)
• Provision/retrieve agentic database
• Return connection string with embedded agent_id

→ SerenDB (Layer 2)
• Direct connection (no middleware)
• Session variables carry agent context
• Row-Level Security enforces isolation
• pgAudit logs all activity
• Usage metered per query

→ Agent Runtime (Layer 3)
• LangChain agent executes workflow
• Persistent memory (stateful)
• Multi-step reasoning (1000+ DB calls)
• Tool execution (custom functions)

Response
← Usage recorded to control plane
← Payment settlement (if metered)
← Audit trail immutable

Why This Architecture Enables Runtime Aggregation

  1. Agents are BUILT on SerenAI runtime (not just using our database)
    • Persistent memory (data layer)
    • Orchestration (LangChain integration)
    • Edge deployment (300+ locations)
    • Developer services (auth, storage, functions, realtime)
  2. A complete solution for the enterprise - Compliance and Payments
    • Agent identity management
    • Session persistence
    • Audit compliance
    • Edge co-location
    • Payment integration
  3. Data flows naturally through runtime
    • Agents already deployed here
    • Adding data providers is incremental
    • No need to control distribution
    • Marketplace emerges organically
  4. Comparable to AWS Marketplace
    • AWS doesn't control end-user distribution
    • But SaaS vendors publish where compute runs
    • We don't control where agents find users
    • But data providers publish where agents run

Example: A Risk-Off Trading Agent

```typescript
// Agent uses the SerenAI runtime (all three layers)
import { SerenAI } from '@serenai/sdk';

// 1. Authenticate with SerenCore
const serenai = new SerenAI({
  apiKey: 'seren_01H9X2K4M7N8P9Q0R1S2T3U4_xyz789abc...',
  template: 'risk_off_agent'
});

// 2. SerenCore provisions a database with agent context
await serenai.connect();

// 3. Agent subscribes to data providers (via the runtime marketplace)
const providers = await serenai.marketplace.subscribe([
  'fred_economic_data',
  'us_treasury_yields',
  'cbonds_credit_spreads',
  'metals_api_gold_prices'
]);

// 4. Agent executes a workflow with persistent memory
const agent = new LangChainAgent({
  memory: serenai.memory,  // Persistent across sessions
  tools: providers,        // Data provider tools
  llm: new ChatAnthropic()
});

// 5. Detect market turmoil and activate a risk-off strategy
await agent.run(`
  Monitor S&P 500 volatility.
  If VIX > 30:
    1. Query FRED for recession indicators
    2. Check US Treasury yields (flight to safety?)
    3. Analyze corporate bond spreads (credit risk?)
    4. Monitor gold prices (safe-haven demand?)
    5. Generate trade recommendations
`);

// 6. All queries metered, usage recorded, providers paid via revenue share
```

SerenAI Revenue Model:

  • Agent developer pays SerenAI: Monthly database hosting
  • SerenAI pays data providers: data-commerce revenue share

Value Proposition to Data Providers and Agents

  • Agent is BUILT on our runtime (Agents & Data live on same back-end)
  • Data providers come to us (where agents run)
  • We aggregate data consumption (runtime layer)
  • Complementary to all payment rails

Q2: Onboarding Path for Existing Postgres Users

Your Question:

"What's the migration path for an existing Postgres user?"

Answer: Replication Over Migration

We promote REPLICATION, not migration. Customers keep their data in the source database while onboarding to SerenAI.

Why Replication:

  • ✅ Zero downtime - continuous sync, no service interruption
  • ✅ Keep existing data - source database stays operational
  • ✅ Safe onboarding - test SerenAI without commitment
  • ✅ Gradual transition - move workloads at your pace
  • ✅ Rollback capability - can always go back to source

Our Tool: postgres-seren-replicator

How Replication Works (5-Phase Workflow)

Phase 1: Validate

Check that both databases meet replication requirements:

```shell
./postgres-seren-replicator validate \
  --source "postgresql://user:pass@source-host:5432/db" \
  --target "postgresql://serendb_owner@ep-orbiting-transit-9b1e3772.c-1.us-east-1.dev.serendb.com:5432/serendb?sslmode=require&channel_binding=require"
```

Note: SerenDB connection strings use the format ep-<name>-<id>.<cluster>.us-east-1.dev.serendb.com. Password credentials are stored in the .pgpass file for security (not on the command line).

Checks:

  • PostgreSQL 12+ on both source and target
  • Source has REPLICATION privilege
  • Network connectivity
  • Extension compatibility
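The "PostgreSQL 12+" requirement can be checked with a sketch like this (hypothetical helpers, assuming the version string a `SHOW server_version` query returns):

```typescript
// Parse the major version out of a server_version string such as
// '14.5' or '14.5 (Debian 14.5-1.pgdg110+1)'.
function majorVersion(serverVersion: string): number {
  const major = parseInt(serverVersion.split('.')[0], 10);
  if (Number.isNaN(major)) {
    throw new Error(`unparseable server_version: ${serverVersion}`);
  }
  return major;
}

// The validate phase would run this against both source and target.
function meetsMinimum(serverVersion: string, minimum = 12): boolean {
  return majorVersion(serverVersion) >= minimum;
}
```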

Phase 2: Init (Initial Snapshot)

Perform initial data replication using parallel dump/restore:

```shell
./postgres-seren-replicator init \
  --source "postgresql://user:pass@source-host:5432/db" \
  --target "postgresql://serendb_owner@ep-orbiting-transit-9b1e3772.c-1.us-east-1.dev.serendb.com:5432/serendb?sslmode=require&channel_binding=require"
```

Features:

  • Size estimation before starting
  • Parallel operations (auto-detects CPU cores, up to 8 workers)
  • Maximum compression (level 9 for faster transfers)
  • Selective replication (choose specific databases/tables)
  • Checkpointing (resume if interrupted)

Example Output:

```
Analyzing database sizes...

Database     Size        Est. Time
──────────────────────────────────────────────────
myapp        15.0 GB     ~45.0 minutes
analytics    250.0 GB    ~12.5 hours
──────────────────────────────────────────────────
Total: 265.0 GB (estimated ~13.1 hours)

Proceed with replication? [y/N]:
```
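The sample estimates imply roughly 3 minutes per GB (15 GB ≈ 45 minutes). A sketch of that arithmetic — the throughput constant is an assumption derived from the sample output, not a documented figure:

```typescript
// Assumed effective throughput, inferred from the example output above.
const MINUTES_PER_GB = 3;

function estimateMinutes(sizeGb: number): number {
  return sizeGb * MINUTES_PER_GB;
}

// Sum per-database sizes and convert the total to hours,
// mirroring the "Total" line in the sample output.
function estimateTotal(databases: Record<string, number>): { totalGb: number; totalHours: number } {
  const totalGb = Object.values(databases).reduce((a, b) => a + b, 0);
  return { totalGb, totalHours: estimateMinutes(totalGb) / 60 };
}
```

Real throughput varies with network bandwidth, parallelism, and compression level, so the tool re-estimates per run.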

Phase 3: Sync (Continuous Replication)

Set up logical replication for ongoing change synchronization:

```shell
./postgres-seren-replicator sync \
  --source "postgresql://user:pass@source-host:5432/db" \
  --target "postgresql://serendb_owner@ep-orbiting-transit-9b1e3772.c-1.us-east-1.dev.serendb.com:5432/serendb?sslmode=require&channel_binding=require"
```

What Happens:

  • Creates publication on source database
  • Creates subscription on target database
  • Continuous synchronization of all changes (INSERT, UPDATE, DELETE)
  • Both databases stay in sync in real-time

Phase 4: Status (Monitor Replication)

Track replication health and lag in real-time:

```shell
./postgres-seren-replicator status \
  --source "postgresql://user:pass@source-host:5432/db" \
  --target "postgresql://serendb_owner@ep-orbiting-transit-9b1e3772.c-1.us-east-1.dev.serendb.com:5432/serendb?sslmode=require&channel_binding=require"
```

Monitors:

  • Replication lag (milliseconds)
  • Sync status (active, idle, error)
  • WAL position
  • Last received LSN

Phase 5: Verify (Data Integrity)

Validate that all tables match with checksum verification:

```shell
./postgres-seren-replicator verify \
  --source "postgresql://user:pass@source-host:5432/db" \
  --target "postgresql://serendb_owner@ep-orbiting-transit-9b1e3772.c-1.us-east-1.dev.serendb.com:5432/serendb?sslmode=require&channel_binding=require"
```

Checks:

  • Row counts match
  • Checksums match
  • Schema consistency
  • Data integrity
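A minimal sketch of how such a comparison could work — `verifyTables` is a hypothetical helper operating on pre-collected per-table digests; the real tool computes counts and checksums server-side:

```typescript
// Per-table digest collected from each side, e.g. an md5 over
// ordered row hashes plus a row count.
interface TableDigest {
  rows: number;
  checksum: string;
}

// Compare source and target digests and report every mismatch.
function verifyTables(
  source: Record<string, TableDigest>,
  target: Record<string, TableDigest>
): string[] {
  const mismatches: string[] = [];
  for (const [table, src] of Object.entries(source)) {
    const dst = target[table];
    if (!dst) {
      mismatches.push(`${table}: missing on target`);
    } else if (dst.rows !== src.rows) {
      mismatches.push(`${table}: row count ${src.rows} vs ${dst.rows}`);
    } else if (dst.checksum !== src.checksum) {
      mismatches.push(`${table}: checksum mismatch`);
    }
  }
  return mismatches;
}
```

An empty result means every replicated table matched; anything else pinpoints which table to re-sync.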

Selective Replication (Fine-Grained Control)

Database-Level Filtering

Replicate only specific databases:

```shell
./postgres-seren-replicator init \
  --source "postgresql://user:pass@source-host:5432/postgres" \
  --target "postgresql://serendb_owner@ep-orbiting-transit-9b1e3772.c-1.us-east-1.dev.serendb.com:5432/postgres?sslmode=require&channel_binding=require" \
  --include-databases "myapp,analytics"
```

Table-Level Filtering

Replicate only specific tables or exclude certain tables:

```shell
# Include only specific tables
./postgres-seren-replicator init \
  --source "$SOURCE" \
  --target "$TARGET" \
  --include-tables "myapp.users,myapp.orders,analytics.events"

# Exclude specific tables (logs, cache, temp data)
./postgres-seren-replicator init \
  --source "$SOURCE" \
  --target "$TARGET" \
  --exclude-tables "myapp.logs,myapp.cache,analytics.temp_data"
```

Schema-Only Tables (Structure Without Data)

Skip data for heavy archives while keeping schema in sync:

```shell
./postgres-seren-replicator init \
  --source "$SOURCE" \
  --target "$TARGET" \
  --schema-only-tables "myapp.audit_logs,analytics.evmlog_strides"
```

Time-Based Filters (For Time-Series Data)

Replicate only recent data:

```shell
./postgres-seren-replicator init \
  --source "$SOURCE" \
  --target "$TARGET" \
  --time-filter "analytics.metrics:created_at:6 months" \
  --time-filter "billing_events:event_time:1 year"
```
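The filter syntax can be parsed as follows — `parseTimeFilter` is a hypothetical helper, assuming the `table:column:interval` shape shown above:

```typescript
interface TimeFilter {
  table: string;    // e.g. 'analytics.metrics'
  column: string;   // timestamp column, e.g. 'created_at'
  interval: string; // Postgres interval text, e.g. '6 months'
}

// Split a "table:column:interval" spec into its three parts.
function parseTimeFilter(spec: string): TimeFilter {
  const parts = spec.split(':');
  if (parts.length !== 3) {
    throw new Error(`expected table:column:interval, got "${spec}"`);
  }
  const [table, column, interval] = parts;
  return { table, column, interval };
}

// A replicator could translate the parsed filter into a predicate such as:
//   WHERE created_at >= now() - interval '6 months'
```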

Interactive Mode (User-Friendly)

Interactive mode is the default - no filter flags needed:

```shell
./postgres-seren-replicator init \
  --source "postgresql://user:pass@source-host:5432/postgres" \
  --target "postgresql://serendb_owner@ep-orbiting-transit-9b1e3772.c-1.us-east-1.dev.serendb.com:5432/postgres?sslmode=require&channel_binding=require"
```

Workflow:

  1. Select Databases - Multi-select checklist (arrow keys, space, enter)
  2. Select Tables to Exclude - Optional per database
  3. Review Configuration - Summary of what will be replicated
  4. Confirm - Proceed or cancel

Example:

```
Select databases to replicate:
(Use arrow keys to navigate, Space to select, Enter to confirm)

> [x] myapp
  [x] analytics
  [ ] staging
  [ ] test

✓ Selected 2 database(s): myapp, analytics

Select tables to EXCLUDE from 'myapp':
  [ ] users
  [ ] orders
  [x] logs
  [x] cache

✓ Excluding 2 table(s): myapp.logs, myapp.cache
```

Multi-Provider Support

Works seamlessly with any PostgreSQL-compatible provider:

Neon → SerenAI

Supabase → SerenAI

AWS RDS → SerenAI

Hetzner Cloud → SerenAI

Self-Hosted → SerenAI


What You Gain by Onboarding to SerenAI

After replication is complete, you get the full AI-agentic platform:

SerenCore (Control Plane)

  • ✅ Agent identity management (API keys with agent types)
  • ✅ Agentic database provisioning (ephemeral databases with TTL)
  • ✅ Payment verification (x402 protocol)
  • ✅ Usage metering (per-query billing)
  • ✅ Quota enforcement (rate limits, concurrent database limits)

SerenDB (Data Layer)

  • ✅ Agent identity tracking (Seren_pgAudit)
  • ✅ Time-travel debugging (timeline branching)
  • ✅ Row-Level Security for agent isolation
  • ✅ Edge deployment (300+ locations)
  • ✅ Agent-optimized connection pooling

Agent Runtime

  • ✅ LangChain/LangGraph native integration
  • ✅ Data provider marketplace (29+ providers)
  • ✅ Persistent memory (stateful agent sessions)
  • ✅ Multi-agent communication
  • ✅ Developer services (auth, storage, functions, realtime)

Complete Onboarding Example

Scenario: Neon → SerenAI with AI Agent Integration

```shell
# Using $NEON_URL and $SERENDB_URL for the source and target connection strings

# 1. Validate both databases
./postgres-seren-replicator validate --source "$NEON_URL" --target "$SERENDB_URL"

# 2. Initial snapshot
./postgres-seren-replicator init --source "$NEON_URL" --target "$SERENDB_URL"

# 3. Continuous sync
./postgres-seren-replicator sync --source "$NEON_URL" --target "$SERENDB_URL"

# 4. Monitor replication
./postgres-seren-replicator status --source "$NEON_URL" --target "$SERENDB_URL"

# 5. Verify data integrity
./postgres-seren-replicator verify --source "$NEON_URL" --target "$SERENDB_URL"
```

Security Note: Password credentials are stored in the ~/.pgpass file on the machine running the replicator, not on the command line. The tool automatically uses .pgpass for authentication so credentials never appear in process listings or shell history.

Now onboard your application to SerenAI platform:

```typescript
// OLD: Direct Neon connection
const db = await postgres(process.env.NEON_DATABASE_URL);
await db`SELECT * FROM users`;
```

```typescript
// NEW: SerenAI platform with agent features
import { SerenAI } from '@serenai/sdk';

// 1. Authenticate with SerenCore (agent identity)
const serenai = new SerenAI({
  apiKey: 'seren_01H9X2K4M7N8P9Q0R1S2T3U4_xyz789abc...',
  database: 'mydb'  // Points to the replicated database
});

// 2. Connect (agent identity automatically propagated)
await serenai.connect();

// 3. Use the database (now with agent audit logging)
await serenai.db`SELECT * FROM users`;  // Logged with agent_id

// 4. Add agent orchestration
const agent = new LangChainAgent({
  memory: serenai.memory,  // Persistent memory
  tools: serenai.marketplace.tools([
    'yearn_protocol',
    'fred_economic_data'
  ]),
  llm: new ChatAnthropic()
});

await agent.run("Analyze user churn and recommend retention strategies");
```


Key Benefits:

  • ✅ Zero downtime - Neon database stayed operational during the entire replication
  • ✅ Data stays with data partner - continuous sync, no data loss
  • ✅ Test SerenAI safely - can rollback if needed
  • ✅ Gradual transition - move agent workloads incrementally
  • ✅ Full platform features - agent orchestration, data providers, edge deployment

Q3: Pilot Customers & Latency

Your Question:

"Are you already running pilots with customers who have agent-generated query loads at scale? If so, what's the observed latency delta vs. centralized Postgres?"

Current Status: Pre-product, with enterprise sign-ups committed for launch

What We Have:

  • ✅ SerenCore control plane
  • ✅ Database schema complete (agentic_databases, x402_payments, usage_events)
  • ✅ API key authentication with agent types

Q4: Auditing Agent Access Decisions (Enhanced with SerenCore)

Your Question:

"How do you audit or explain why an agent was granted/denied access to a data subset?"

Three-Layer Audit Trail

Complete audit trail across all three layers:

Layer 1: SerenCore Control Plane Audit

```sql
-- API key usage events
SELECT
  api_key_id,
  agentic_type,  -- 'claude-code', 'claude-computer'
  organization_id,
  event_type,    -- 'database_create', 'payment_verify', 'quota_check'
  event_timestamp,
  metadata       -- {rate_limit_hit: false, payment_verified: true}
FROM usage_events
WHERE api_key_id = 'key_xyz'
ORDER BY event_timestamp DESC;

-- Agentic database provisioning
SELECT
  agentic_database_id,
  template_id,           -- 'yearn_agent', 'risk_off_agent'
  provisioning_tier,     -- 'pool', 'clone', 'on_demand'
  ttl_seconds,
  expires_at,
  agent_metadata,        -- {workflow_id: 'abc', agent_name: 'yearn_rebalancer'}
  created_via_api_key
FROM agentic_databases
WHERE api_key_id = 'key_xyz';

-- Payment verification
SELECT
  payment_id,            -- x402 payment identifier
  transaction_hash,      -- Blockchain tx
  payer_address,         -- 0x... wallet
  amount, currency,
  verified, verified_at,
  resource_type,         -- 'agentic_database', 'query_execution'
  resource_id
FROM x402_payments
WHERE payer_address = '0xABC...';
```


Layer 2: SerenDB Data Layer Audit (Seren_pgAudit)

```sql
-- Query-level audit (pgAudit extension)
SELECT
  timestamp,
  agent_id,              -- From session variable
  user_id,               -- On whose behalf
  workflow_id,           -- Which task
  query,                 -- SQL executed
  result,                -- 'GRANTED' or 'DENIED'
  policy_matched,        -- Which RLS policy applied
  rows_returned,
  execution_time_ms,
  data_provider,         -- 'yearn_protocol', 'fred_economic_data'
  usage_cost             -- Metered for billing
FROM agent_audit_log
WHERE agent_id = 'agent_xyz'
ORDER BY timestamp DESC;

-- Policy violation audit
SELECT
  timestamp,
  agent_id,
  query,
  result,                -- 'DENIED'
  policy_violated,       -- 'agent_vault_access'
  reason,                -- "vault_address '0xXYZ...' not in allowed_vaults"
  attempted_resource
FROM agent_audit_log
WHERE result = 'DENIED'
  AND agent_id = 'agent_xyz';
```


Layer 3: Agent Runtime Audit (Application-Level)

```javascript
// Agent workflow audit (LangChain integration)
const auditTrail = await agent.getAuditTrail({
  workflowId: 'workflow_456',
  includeDecisionReasoning: true
});

console.log(auditTrail);
// {
//   workflow_id: 'workflow_456',
//   agent_id: 'agent_xyz',
//   steps: [
//     {
//       step: 1,
//       action: 'query_data_provider',
//       provider: 'yearn_protocol',
//       query: 'Get vaults with APY > 8%',
//       result: 'GRANTED',
//       rows_returned: 47,
//       reasoning: 'Agent has yearn_protocol access via API key permissions'
//     },
//     {
//       step: 2,
//       action: 'analyze_performance',
//       tool: 'internal_analytics',
//       result: 'GRANTED',
//       reasoning: 'Internal tool, no external data access'
//     },
//     {
//       step: 3,
//       action: 'execute_trade',
//       provider: 'x402_payment_gateway',
//       result: 'DENIED',
//       reason: 'Insufficient payment balance, payment_id: pay_123 not verified',
//       recommended_action: 'User must approve payment or add funds'
//     }
//   ]
// }
```


Complete Access Decision Flow

Example: Agent Attempts to Access Yearn Vault Data

1. SerenCore (Layer 1): API Key Check
✅ API key valid (not expired, not revoked)
✅ Agent type: 'claude-code'
✅ Rate limit: 45/60 per minute (quota OK)
✅ Concurrent databases: 3/5 (quota OK)
✅ Payment verified: payment_id 'pay_123' verified via x402
→ PROCEED

2. SerenCore (Layer 1): Provisioning Check
✅ Agentic database exists: db_xyz
✅ Template: 'yearn_agent' (allowed)
✅ TTL: Expires in 45 minutes (not expired)
→ RETURN connection string with agent_id embedded

3. SerenDB (Layer 2): Query Execution
Agent executes: SELECT * FROM yearn_vault_snapshots WHERE apy > 8

RLS Policy Check:
✅ Session variable: serenai.agent_id = 'agent_xyz'
✅ Policy: agent_vault_access
✅ Condition: vault_address IN (SELECT allowed_vaults FROM agent_permissions WHERE agent_id = 'agent_xyz')
✅ Result: GRANTED

pgAudit logs:

```json
{
  "agent_id": "agent_xyz",
  "query": "SELECT * FROM yearn_vault_snapshots WHERE apy > 8",
  "result": "GRANTED",
  "policy_matched": "agent_vault_access",
  "rows_returned": 47,
  "data_provider": "yearn_protocol",
  "usage_cost": 0.01
}
```



4. SerenCore (Layer 1): Usage Metering

```sql
INSERT INTO usage_events (
  organization_id, project_id, api_key_id,
  event_type, quantity, unit,
  metadata
) VALUES (
  'org_123', 'proj_456', 'key_xyz',
  'query_execute', 47, 'rows',
  '{"provider": "yearn_protocol", "cost": 0.01}'
);
```



5. Agent Runtime (Layer 3): Decision Recorded
Agent workflow logs decision:

```json
{
  "step": 1,
  "action": "query_yearn_vaults",
  "result": "SUCCESS",
  "rows_returned": 47,
  "reasoning": "Agent has yearn_protocol access, payment verified, quotas OK"
}
```



If Access Was DENIED (Example: Exceeded Quota):

1. SerenCore (Layer 1): API Key Check
❌ Rate limit: 61/60 per minute (QUOTA EXCEEDED)
→ DENIED

Response:

```json
{
  "error": "RateLimitExceeded",
  "message": "API key 'key_xyz' exceeded rate limit (61/60 per minute)",
  "retry_after_seconds": 45
}
```



2. Usage Event Recorded:

```sql
INSERT INTO usage_events (
  event_type, result, metadata
) VALUES (
  'quota_check', 'DENIED', '{"reason": "rate_limit_exceeded"}'
);
```

3. Agent Runtime: Error Handled
Agent workflow logs:

```json
{
  "step": 1,
  "action": "query_yearn_vaults",
  "result": "DENIED",
  "reason": "Rate limit exceeded (61/60 per minute)",
  "recommended_action": "Wait 45 seconds or upgrade API key tier"
}
```
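The GRANTED/DENIED traces above can be sketched as an ordered evaluation that returns the first failing reason. `evaluateAccess` and its field names are assumptions for illustration, not SerenCore's actual schema:

```typescript
// Snapshot of the Layer-1 checks for one request (assumed shape).
interface AgentRequest {
  keyExpired: boolean;
  requestsThisMinute: number;
  rateLimitPerMinute: number;
  concurrentDatabases: number;
  maxConcurrentDatabases: number;
  paymentVerified: boolean;
}

type Decision = { result: 'GRANTED' } | { result: 'DENIED'; reason: string };

// Mirror the flow above: key validity, rate limit, concurrency quota,
// then payment; stop at the first failure so the denial reason is precise.
function evaluateAccess(req: AgentRequest): Decision {
  if (req.keyExpired) return { result: 'DENIED', reason: 'api_key_expired' };
  if (req.requestsThisMinute > req.rateLimitPerMinute) {
    return {
      result: 'DENIED',
      reason: `rate_limit_exceeded (${req.requestsThisMinute}/${req.rateLimitPerMinute} per minute)`,
    };
  }
  if (req.concurrentDatabases >= req.maxConcurrentDatabases) {
    return { result: 'DENIED', reason: 'concurrent_database_quota_exceeded' };
  }
  if (!req.paymentVerified) return { result: 'DENIED', reason: 'payment_not_verified' };
  return { result: 'GRANTED' };
}
```

Evaluating checks in a fixed order is what makes the audit trail explainable: every denial carries exactly one reason, matching the usage_events entries above.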


Why a Three-Layer Audit is Powerful

Complete Explainability:

  1. SerenCore explains: Was the API key valid? Payment verified? Quotas OK?
  2. SerenDB explains: Which RLS policy applied? Which rows were accessed? Why denied?
  3. Agent Runtime explains: What was the agent trying to do? What was the reasoning?

Compliance-Ready:

  • ✅ Immutable audit trail (SerenAgenticAudit + WAL-backed Safekeeper)
  • ✅ Prompt vs Agent distinction (human approval vs autonomous action)
  • ✅ Payment verification trail (x402 protocol)
  • ✅ Usage metering (per-query billing)
  • ✅ Policy violation tracking (why access was denied)

Use Cases:

  • Regulatory Audit: "Show me all agent trades that required human approval."
  • Debugging: "Why did this agent fail to access vault data?"
  • Billing: "How many queries did agent_xyz make to yearn_protocol?"
  • Security: "Which agents accessed sensitive data last week?"
  • Optimization: "Which data providers are most expensive for this agent?"

What's Different: Database → AI-Agentic Platform

  • "Complete AI-agentic development stack"
  • Three layers:
    1. SerenCore (control plane): Agent identity, payments, metering, provisioning
    2. SerenDB (data layer): Agent-optimized Postgres at edge
    3. Agent Runtime: LangChain integration, orchestration, persistent memory
  • Onboarding = replication-first: postgres-seren-replicator tool, continuous sync, zero downtime, keep source data
  • Strategic moat: Runtime aggregation (agents BUILT on our platform create value for enterprise partners)

Data Provider Marketplace

November 2025 Progress:

  • ✅ 100 risk-off data providers identified (treasuries, bonds, commodities, bankruptcy data)
  • ✅ Provider segmentation complete (Premium, Mid-Market, Niche, Public/Government)

Example: Risk-Off Data Provider Integration

```typescript
// 1. Data provider publishes to the SerenAI marketplace
// Provider: FRED St. Louis Fed (macroeconomic data)
const provider = {
  provider_id: 'fred_stlouisfed',
  name: 'FRED St. Louis Fed',
  category: 'Macroeconomics',
  api_url: 'https://api.stlouisfed.org/fred',
  pricing: {
    per_query: 0.001,    // $0.001 per query
    revenue_share: 0.70  // 70% to provider, 30% to SerenAI
  }
};

// 2. Agent subscribes via the SerenAI runtime
const agent = new LangChainAgent({
  memory: serenai.memory,
  tools: [
    serenai.marketplace.tool('fred_stlouisfed'),  // Automatic integration
    serenai.marketplace.tool('us_treasury_yields'),
    serenai.marketplace.tool('cbonds_credit_spreads')
  ]
});

// 3. Agent queries provider data
await agent.run("Analyze recession risk based on FRED indicators");

// 4. SerenCore meters usage and settles payment
// usage_events: {provider: 'fred_stlouisfed', queries: 47, cost: 0.047}
// Provider receives: 0.047 × 0.70 = $0.033
// SerenAI receives:  0.047 × 0.30 = $0.014
```
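The settlement arithmetic in that last comment can be written out. `settle` is a hypothetical helper; rounding to the nearest $0.001 is an assumption for display:

```typescript
// Compute gross usage revenue and split it by the provider's share.
function settle(queries: number, perQuery: number, providerShare: number) {
  const gross = queries * perQuery;
  const round3 = (x: number) => Math.round(x * 1000) / 1000; // nearest $0.001
  return {
    gross: round3(gross),
    provider: round3(gross * providerShare),
    platform: round3(gross * (1 - providerShare)),
  };
}

const payout = settle(47, 0.001, 0.7);
// payout.gross ≈ 0.047, payout.provider ≈ 0.033, payout.platform ≈ 0.014
```

In production, rounding residue and minimum-payout thresholds would need explicit rules so provider and platform shares always sum to the gross.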

Why Providers Come to SerenAI:

  • ✅ Agents are already deployed on our runtime (distribution)
  • ✅ Usage metering built-in (no backend work)
  • ✅ Payment settlement automatic (x402 protocol)
  • ✅ Revenue share model fair
  • ✅ Time-to-market: Hours (not 3-4 months with 4-6 engineers)

Next Steps

If these answers clarify the technical architecture with November 2025 context, I'd love to discuss:

  1. Your specific use case: What agent workloads are you building?
  2. Migration concerns: Are you on Neon, Supabase, self-hosted?
  3. Data provider interest: Are you a data provider interested in the marketplace?
  4. Strategic partnership: Interested in SerenAI for your agent platform?

Available for a follow-up call whenever it works. Just send us a note to info@serendb.com and let us know what data you need agents to access.


About Taariq Lewis

Exploring how to make developers faster and more productive with AI agents
