Local First Mode
Local First mode is designed for environments where internet connectivity to Veriproof’s cloud is available but cannot be relied on during every session. All session data is stored locally at ingest time; commitment hashes are buffered and flushed to the Veriproof Commitment API when connectivity is available.
Local First mode uses the same infrastructure as Enterprise Hybrid. The difference is in the SDK’s sync behavior: Hybrid commits synchronously per session, Local First buffers and batches.
When to Use Local First
- Factory floor or on-site AI applications that may run in buildings with intermittent connectivity
- Edge deployments (IoT gateways, remote offices) that have scheduled connectivity windows
- Environments where the latency of a synchronous commitment call would impact session finalization performance
How It Works
- The SDK writes the session to your local PostgreSQL database and computes the commitment hash
- The hash is placed in a local sync queue table rather than immediately sent to Veriproof
- A periodic sync worker (configurable interval, default 5 minutes) drains the queue by sending batches of hashes to POST /v1/enterprise/commitments
- On success, the queue entries are marked as synced and the Solana transaction signatures are stored
- On failure, entries remain in the queue and are retried with exponential backoff
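The drain-batch-retry cycle above can be sketched as follows. The SDK itself is .NET; this is a language-agnostic illustration, and `send_batch`, the in-memory queue, and the backoff constants are hypothetical stand-ins for the real queue table and API client.

```python
import time

def backoff_delay(attempt, base=1.0, cap=300.0):
    """Exponential backoff: 1s, 2s, 4s, ... capped at the sync interval."""
    return min(base * (2 ** attempt), cap)

def drain_queue(queue, send_batch, batch_size=500, max_attempts=5):
    """Send queued commitment hashes in batches. On failure the remaining
    entries stay in the queue and are retried; if retries are exhausted,
    the worker gives up until the next timer interval."""
    attempt = 0
    while queue:
        batch = queue[:batch_size]
        try:
            # POST /v1/enterprise/commitments; the returned receipts carry
            # the Solana transaction signatures to store per entry.
            send_batch(batch)
            del queue[:len(batch)]   # mark this batch as synced
            attempt = 0
        except ConnectionError:
            attempt += 1
            if attempt >= max_attempts:
                return False         # queue entries survive for next cycle
            time.sleep(backoff_delay(attempt))
    return True
```

Note that nothing is removed from the queue until the batch call succeeds, which is what makes an offline window safe: entries are only ever dropped after a confirmed commit.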
Configuration
builder.Services.AddVeriproof(options =>
{
    options.Mode = DeploymentMode.LocalFirst;
    options.LocalDatabaseConnectionString = connectionString;
    options.LocalKeyVaultUri = "https://your-kv-name.vault.azure.net/";
    options.CommitmentOnlyEndpoint = "https://api.veriproof.app/v1/enterprise/commitments";
    options.ApiKey = "vp_enterprise_...";

    // Local First specific settings
    options.SyncIntervalSeconds = 300;   // Flush queue every 5 minutes
    options.SyncBatchSize = 500;         // Max hashes per batch request
    options.MaxQueueDepth = 100_000;     // Alert if queue exceeds this
});
Sync Worker Deployment
The sync worker is included in the Ingest Function App as a timer-triggered function. It runs on the configured interval and is stateless — multiple instances coordinate via the PostgreSQL sync queue table using advisory locks.
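The coordination pattern can be sketched as follows. In PostgreSQL the worker would call `pg_try_advisory_lock`; here an in-process dict stands in for the lock so the sketch is self-contained, and all names are illustrative rather than taken from the SDK.

```python
# Stand-in for PostgreSQL advisory locks (SELECT pg_try_advisory_lock(key)).
_locks = {}

def try_advisory_lock(key, owner):
    """Non-blocking acquisition, mirroring pg_try_advisory_lock semantics."""
    if key in _locks:
        return False
    _locks[key] = owner
    return True

def release_advisory_lock(key, owner):
    if _locks.get(key) == owner:
        del _locks[key]

SYNC_LOCK_KEY = 0x5EC0  # arbitrary application-wide key for the sync queue

def run_sync_cycle(worker_id, drain):
    """On each timer tick every worker instance races for the lock;
    only the winner drains the queue, so a batch is never sent twice."""
    if not try_advisory_lock(SYNC_LOCK_KEY, worker_id):
        return False  # another instance is already draining
    try:
        drain()
        return True
    finally:
        release_advisory_lock(SYNC_LOCK_KEY, worker_id)
```

Because the lock lives in the database rather than in any worker, the scheme keeps the workers themselves stateless: any instance can win the race, and a crashed holder releases the lock when its session ends.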
If the sync worker cannot reach Veriproof, it logs a warning and retries at the next interval. Sessions in the queue are fully available for querying in the portal during the offline period; they simply lack a confirmed blockchain anchor until connectivity is restored.
Monitoring the Sync Queue
Monitor queue depth and sync lag from the portal under Monitoring → Sync Status. Alerts can be configured for:
- Queue depth exceeding a threshold (e.g. > 10,000 sessions unsynced)
- Sync lag exceeding a threshold (e.g. oldest unsynced session > 1 hour old)
- Consecutive sync failures
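The three alert conditions reduce to simple checks over the queue state. A sketch, with hypothetical function and parameter names and default thresholds matching the examples above:

```python
from datetime import datetime, timedelta

def evaluate_sync_alerts(queue_depth, oldest_unsynced_at, consecutive_failures,
                         now=None,
                         depth_threshold=10_000,
                         lag_threshold=timedelta(hours=1),
                         failure_threshold=3):
    """Return the set of alert names that should fire for the current state."""
    now = now or datetime.utcnow()
    alerts = set()
    if queue_depth > depth_threshold:
        alerts.add("queue_depth")
    if oldest_unsynced_at is not None and now - oldest_unsynced_at > lag_threshold:
        alerts.add("sync_lag")
    if consecutive_failures >= failure_threshold:
        alerts.add("consecutive_failures")
    return alerts
```

Queue depth and sync lag measure different failure modes: a deep queue with low lag means ingest is outpacing the batch size, while high lag with a shallow queue points at a stalled or failing sync worker.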
Offline Operation
During an offline window, the portal continues to function normally for all operations that read from your local database. The only limitation is that newly ingested sessions will show Pending blockchain anchor status until the sync worker successfully flushes them.
Once connectivity is restored, the sync worker catches up automatically. Backfilled commitment receipts are written to existing session records.
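The backfill step can be pictured as joining commitment receipts back onto pending session rows by commitment hash. A minimal sketch with hypothetical record shapes and field names:

```python
def apply_commitment_receipts(sessions, receipts):
    """Write returned Solana signatures onto the matching local session
    records and flip their anchor status from pending to confirmed."""
    by_hash = {r["commitment_hash"]: r["solana_signature"] for r in receipts}
    updated = 0
    for session in sessions:
        sig = by_hash.get(session["commitment_hash"])
        if sig and session["anchor_status"] == "pending":
            session["solana_signature"] = sig
            session["anchor_status"] = "confirmed"
            updated += 1
    return updated
```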
Infrastructure
Local First mode uses the same infrastructure as Enterprise Hybrid. Follow the same deployment steps, substituting DeploymentMode.LocalFirst in the SDK configuration.