pro mode and an in-memory queue in pico/mini mode. Both implement the same addTask / getTask / cancelTask / retryTask interface.
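A sketch of that shared surface, assuming Promise-returning methods and illustrative `TaskStatus` values (the actual status names and `Task` fields beyond `retryCount` are assumptions, not the real exports):

```typescript
// Hypothetical shape of the queue API both backends implement.
// Status names and Task fields (other than retryCount) are assumptions.
export type TaskStatus = "queued" | "running" | "completed" | "failed";

export interface Task {
  id: string;
  status: TaskStatus;
  payload: unknown;
  retryCount: number;
}

export interface TaskQueueApi {
  addTask(payload: unknown): Promise<Task>;
  getTask(id: string): Promise<Task | undefined>;
  cancelTask(id: string): Promise<boolean>;
  retryTask(id: string): Promise<boolean>;
}
```

Because both backends satisfy the same interface, callers never branch on which queue is active.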
Task Lifecycle
BullMQ Configuration
Tasks with priority: 1 (critical) are processed before tasks with priority: 4 (low).
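In BullMQ, the numerically lowest priority is dequeued first. A small mapping like the following (the level names here are assumptions, not the project's actual labels) keeps the numbers in one place:

```typescript
// Hypothetical mapping from a human-readable level to BullMQ's numeric
// priority, where 1 is processed first. The level names are assumptions.
type Level = "critical" | "high" | "normal" | "low";

const PRIORITY: Record<Level, number> = {
  critical: 1,
  high: 2,
  normal: 3,
  low: 4,
};

function priorityOf(level: Level): number {
  return PRIORITY[level];
}

// With a BullMQ queue this would be used roughly as:
//   await queue.add("run", payload, { priority: priorityOf(level) });
```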
In-Memory Queue
src/queue/memory-queue.ts provides a Map<string, Task> backed queue with:
- Immediate execution (no separate worker process)
- Same TaskStatus state machine
- No persistence across restarts
- Cursor-based iteration not supported (offset only)
The in-memory queue is used when REDIS_URL is not set.
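A minimal sketch of such a Map-backed queue, matching the bullets above. The handler is passed explicitly here so the block is self-contained; the real addTask signature and field names in src/queue/memory-queue.ts are likely different:

```typescript
// Illustrative in-memory queue: immediate in-process execution,
// Map-backed storage, offset-only listing. Names are assumptions.
type TaskStatus = "queued" | "running" | "completed" | "failed";

interface Task {
  id: string;
  status: TaskStatus;
  payload: unknown;
}

class MemoryQueue {
  private readonly tasks = new Map<string, Task>();
  private nextId = 0;

  async addTask(
    payload: unknown,
    run: (p: unknown) => Promise<void>,
  ): Promise<Task> {
    const task: Task = { id: String(++this.nextId), status: "queued", payload };
    this.tasks.set(task.id, task);
    // Immediate execution: no separate worker process.
    task.status = "running";
    try {
      await run(payload);
      task.status = "completed";
    } catch {
      task.status = "failed";
    }
    return task;
  }

  getTask(id: string): Task | undefined {
    return this.tasks.get(id);
  }

  // Offset-based listing only; cursor iteration is not supported.
  listTasks(offset = 0, limit = 50): Task[] {
    return [...this.tasks.values()].slice(offset, offset + limit);
  }
}
```

Because tasks live only in the Map, a restart loses all state, which is acceptable for pico/mini mode.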
Queue Index
src/queue/index.ts exposes the unified API:
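The selection logic presumably hinges on REDIS_URL, as noted above. A hypothetical sketch of the decision (the factory name and return values are assumptions, not the module's actual exports):

```typescript
// Hypothetical backend selection for src/queue/index.ts:
// Redis-backed BullMQ when REDIS_URL is set, in-memory otherwise.
function selectBackend(redisUrl: string | undefined): "bullmq" | "memory" {
  return redisUrl ? "bullmq" : "memory";
}

// e.g. selectBackend(process.env.REDIS_URL)
```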
Failure Handler
src/queue/failure-handler.ts implements the retry and DLQ logic:
- Increments retryCount on the task
- If retryCount < maxRetries (default 3): re-queues with an exponential backoff delay (backoff * 2^attempt)
- If exhausted: moves the task to the DLQ via initDeadLetterQueue()
- Creates an in-app notification for DLQ entries
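The retry decision above can be sketched as a pure function. The maxRetries default of 3 and the backoff * 2^attempt delay come from the text; the 1000 ms base delay is an assumption:

```typescript
// Returns the delay before the next retry, or null when retries are
// exhausted and the task should move to the DLQ. The 1000 ms base
// backoff is an assumed default.
function nextRetryDelayMs(
  retryCount: number,
  maxRetries = 3,
  backoffMs = 1000,
): number | null {
  if (retryCount >= maxRetries) return null;
  return backoffMs * 2 ** retryCount;
}
```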
Dead Letter Queue
The DLQ (src/queue/failure-handler.ts + route src/routes/dlq.ts) holds tasks that have exhausted retries:
Inspect entries with GET /api/dlq, retry them with POST /api/dlq/:id/retry, or discard them with POST /api/dlq/:id/discard.
Notification Queue
A separate BullMQ queue (ai-task-notifications) handles async notifications after task completion. This keeps notification delivery off the critical path - a slow webhook target doesn’t delay the next task from starting.
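The decoupling can be illustrated with a fire-and-forget enqueue. Here notifyQueue is a stand-in with a BullMQ-style add() so the block is self-contained; the real code would add to the ai-task-notifications BullMQ queue:

```typescript
// Stand-in notification queue: enqueue returns immediately, and a
// separate worker (not shown) drains it. Shapes are assumptions.
interface NotificationJob {
  taskId: string;
  event: "completed" | "failed";
}

const pending: NotificationJob[] = [];

const notifyQueue = {
  async add(_name: string, job: NotificationJob): Promise<void> {
    pending.push(job);
  },
};

// Awaits only the enqueue, never the webhook delivery itself, so a
// slow notification target cannot block the next task.
async function onTaskCompleted(taskId: string): Promise<void> {
  await notifyQueue.add("ai-task-notifications", { taskId, event: "completed" });
}
```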
Webhook Queue
src/queue/webhook-queue.ts manages outbound webhook delivery with:
- Per-endpoint retry with backoff
- Deduplication (no double-delivery on restart)
- Delivery log stored in LibSQL
- Health tracking (mark endpoint as failing after N failures)
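The health-tracking bullet could be implemented with a consecutive-failure counter per endpoint. This is a sketch under assumptions: the threshold N, class name, and field names are all illustrative:

```typescript
// Hypothetical per-endpoint health tracker: an endpoint is marked
// failing after `threshold` consecutive failed deliveries, and a
// single success resets the counter.
interface EndpointHealth {
  consecutiveFailures: number;
  failing: boolean;
}

class WebhookHealth {
  private readonly health = new Map<string, EndpointHealth>();

  constructor(private readonly threshold = 5) {}

  // Record a delivery result; returns whether the endpoint is now
  // considered failing.
  recordResult(url: string, ok: boolean): boolean {
    const h =
      this.health.get(url) ?? { consecutiveFailures: 0, failing: false };
    h.consecutiveFailures = ok ? 0 : h.consecutiveFailures + 1;
    h.failing = h.consecutiveFailures >= this.threshold;
    this.health.set(url, h);
    return h.failing;
  }
}
```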
Task Store Sync
In BullMQ mode, the in-memory taskStore Map acts as a cache. It is populated from the database on startup and kept in sync by the worker event handlers: