
Batch Loader

Concept: Intelligent request batching and deduplication to prevent N+1 queries.

Status: The Batch Loader interface exists, but the full implementation (50ms coalescing, deduplication, query optimization) is in development. Current behavior: requests are made individually.

The N+1 Problem

Without batching, accessing collections for multiple models causes many network requests:

```tsx
const TeamView = observer(({ team }) => {
  const users = useCollectionModels<User>('user') // 50 users displayed
  return users.map(user => (
    <div key={user.id}>
      {user.name}
      {/* Each user.assignedIssues triggers a separate request */}
      ({user.assignedIssues.length} issues)
    </div>
  ))
})

// Problem: 50 network requests (one per user)!
```

The Solution: Batch Loader

Batch Loader collects requests, deduplicates, and combines them:

```
Time:     0ms          50ms        100ms        600ms
           │            │            │            │
Access     █ user[0]    │            │            │
Triggers   █ user[1]    │            │            │
           █ user[2]    │            │            │
           █ ...50      │            │            │
           │            │            │            │
Batch      │            █ Collect    │            │
Loader     │            █ Dedupe     │            │
           │            █ Combine    │            │
           │            │            │            │
Network    │            │            █ 1 Request  │
           │            │            │            │
Response   │            │            │            █ All data
           │            │            │            │
UI         │            │            │            █ All re-render
Update     │            │            │            █ Together

Result: 1 request instead of 50! ✓
```

How It Works (Planned)

Step 1: Collection Phase (50ms window)

```ts
// Within the 50ms window, collect all requests:
batchLoader.load('issue', 'assigneeId', 'user-1') // Request 1
batchLoader.load('issue', 'assigneeId', 'user-2') // Request 2
batchLoader.load('issue', 'assigneeId', 'user-1') // Duplicate!
batchLoader.load('issue', 'assigneeId', 'user-3') // Request 3

// Wait 50ms...
```

Step 2: Deduplication

```ts
// Remove duplicates
const uniqueRequests = [
  { type: 'issue', indexKey: 'assigneeId', indexVal: 'user-1' },
  { type: 'issue', indexKey: 'assigneeId', indexVal: 'user-2' },
  { type: 'issue', indexKey: 'assigneeId', indexVal: 'user-3' }
]

// 3 unique requests (not 4)
```

Step 3: Optimization

```ts
// Analyze requests - can we optimize?
// All requests for same type + indexKey → batch query!

// Instead of 3 separate queries:
//   WHERE assigneeId = 'user-1'
//   WHERE assigneeId = 'user-2'
//   WHERE assigneeId = 'user-3'

// Send 1 combined query:
//   WHERE assigneeId IN ('user-1', 'user-2', 'user-3')
```
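The combining step can be sketched as a pure function that collapses unique requests sharing a type and index key into one IN-style query. This is an illustrative sketch, not the library's implementation; `groupForInQuery` and the `BatchRequest` shape are hypothetical names modeled on the request objects shown above:

```typescript
type BatchRequest = { t: string; indexKey: string; indexVal: string }

// Collapse requests that share (t, indexKey) into one IN-style query each.
function groupForInQuery(requests: BatchRequest[]) {
  const groups = new Map<string, { t: string; indexKey: string; indexVals: string[] }>()
  for (const { t, indexKey, indexVal } of requests) {
    const key = `${t}:${indexKey}`
    const group = groups.get(key) ?? { t, indexKey, indexVals: [] }
    if (!group.indexVals.includes(indexVal)) group.indexVals.push(indexVal)
    groups.set(key, group)
  }
  return [...groups.values()]
}

const queries = groupForInQuery([
  { t: 'issue', indexKey: 'assigneeId', indexVal: 'user-1' },
  { t: 'issue', indexKey: 'assigneeId', indexVal: 'user-2' },
  { t: 'issue', indexKey: 'assigneeId', indexVal: 'user-3' }
])
// queries → [{ t: 'issue', indexKey: 'assigneeId', indexVals: ['user-1', 'user-2', 'user-3'] }]
```

Requests against different types or index keys stay in separate groups, so mixed batches still degrade gracefully to several smaller combined queries.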

Step 4: Single Network Request

```ts
// Make one request with all keys
POST /sync/v1/batch-fetch
{
  requests: [
    { t: 'issue', indexKey: 'assigneeId', indexVal: 'user-1' },
    { t: 'issue', indexKey: 'assigneeId', indexVal: 'user-2' },
    { t: 'issue', indexKey: 'assigneeId', indexVal: 'user-3' }
  ]
}

// Server response:
{
  'issue:assigneeId:user-1': [{ t: 'issue', id: '1', ... }],
  'issue:assigneeId:user-2': [{ t: 'issue', id: '2', ... }],
  'issue:assigneeId:user-3': []
}
```

Step 5: Fulfill All Promises

```ts
// Resolve each original request
requests.forEach(({ type, indexKey, indexVal, resolve }) => {
  const key = `${type}:${indexKey}:${indexVal}`
  const data = response[key] || []
  resolve(data)
})

// All LazyCollections populate simultaneously
// All components re-render together
```
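Put together, steps 1-5 amount to a timer-based queue: collect, dedupe, fetch once, fan results back out. The following is a minimal sketch of that mechanism, not the library's actual implementation; `createBatchLoader` is a hypothetical name, and `WireRow` is simplified to just `t` and `id`:

```typescript
type WireRow = { t: string; id: string }
type BatchRequest = { t: string; indexKey: string; indexVal: string }
type Pending = BatchRequest & { resolve: (rows: WireRow[]) => void }

// Minimal coalescing loader: collect load() calls for one window,
// dedupe by key, issue a single batched fetch, resolve every caller.
function createBatchLoader(
  fetchCollections: (keys: BatchRequest[]) => Promise<Record<string, WireRow[]>>,
  coalescingWindowMs = 50
) {
  let pending: Pending[] = []
  let timer: ReturnType<typeof setTimeout> | null = null

  async function flush() {
    timer = null
    const batch = pending
    pending = []
    // Dedupe: one network key per unique (t, indexKey, indexVal)
    const unique = new Map<string, BatchRequest>()
    for (const { t, indexKey, indexVal } of batch) {
      unique.set(`${t}:${indexKey}:${indexVal}`, { t, indexKey, indexVal })
    }
    const response = await fetchCollections([...unique.values()])
    // Fulfill every original promise, duplicates included
    for (const { t, indexKey, indexVal, resolve } of batch) {
      resolve(response[`${t}:${indexKey}:${indexVal}`] ?? [])
    }
  }

  return {
    load(t: string, indexKey: string, indexVal: string): Promise<WireRow[]> {
      return new Promise<WireRow[]>(resolve => {
        pending.push({ t, indexKey, indexVal, resolve })
        if (timer === null) timer = setTimeout(flush, coalescingWindowMs)
      })
    }
  }
}
```

Duplicate calls within one window resolve from the same response, and keys missing from the response resolve to an empty array, matching the empty `'issue:assigneeId:user-3'` result above.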

Current Implementation

Batch Loader interface exists:

```ts
// types.ts
export type BatchLoaderAdapter = {
  fetchCollection: (
    t: string,
    indexKey: string,
    indexVal: string
  ) => Promise<WireRow[]>

  fetchCollections?: (
    keys: Array<{ t: string, indexKey: string, indexVal: string }>
  ) => Promise<Record<string, WireRow[]>>

  fetchByIds?: (
    t: string,
    ids: string[]
  ) => Promise<WireRow[]>
}
```

Current behavior: each `fetchCollection` call makes its own network request.

Planned: coalesce multiple calls within a 50ms window into a single `fetchCollections` request.

Example: Before and After

Without Batch Loader

```ts
const users = useCollectionModels<User>('user')

users.forEach(user => {
  console.log(user.assignedIssues.length)
})

// Network requests:
// GET /sync/v1/fetch?type=issue&assigneeId=user-1
// GET /sync/v1/fetch?type=issue&assigneeId=user-2
// GET /sync/v1/fetch?type=issue&assigneeId=user-3
// ... 50 requests!
```

With Batch Loader (Planned)

```ts
const users = useCollectionModels<User>('user')

users.forEach(user => {
  console.log(user.assignedIssues.length)
})

// Network requests:
// POST /sync/v1/batch-fetch  (1 request!)
// Body: [
//   { t: 'issue', indexKey: 'assigneeId', indexVal: 'user-1' },
//   { t: 'issue', indexKey: 'assigneeId', indexVal: 'user-2' },
//   ...
// ]
```

Advanced: Query Optimization

Batch Loader can optimize queries intelligently:

```ts
// Scenario: Loading attachments for 50 issues in the same team

// Naive batching:
//   WHERE issueId IN ('issue-1', 'issue-2', ..., 'issue-50')  // 50 IDs

// Smart optimization:
//   WHERE teamId = 'team-1'  // Load all attachments for the team!
//   Fewer parameters, potentially faster
```

This is a planned optimization: analyze the batched requests and replace them with a cheaper query that covers the same data.
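One way this rewrite could work, assuming the client already knows each issue's `teamId` (for example, from cached parent models): if every requested issue belongs to one team, query by that team instead of listing all the IDs. The names below are illustrative, not the library's API:

```typescript
type AttachmentQuery =
  | { kind: 'in'; indexKey: 'issueId'; indexVals: string[] }
  | { kind: 'eq'; indexKey: 'teamId'; indexVal: string }

// Collapse a 50-ID IN query into one teamId equality query when
// every requested issue maps to the same known team.
function optimizeAttachmentQuery(
  issueIds: string[],
  teamIdOf: (issueId: string) => string | undefined
): AttachmentQuery {
  const teams = new Set(issueIds.map(id => teamIdOf(id)))
  const [teamId] = teams
  if (teams.size === 1 && teamId !== undefined) {
    return { kind: 'eq', indexKey: 'teamId', indexVal: teamId }
  }
  // Unknown or mixed teams: fall back to the naive IN query
  return { kind: 'in', indexKey: 'issueId', indexVals: issueIds }
}
```

Note the trade-off: the `teamId` query may return attachments for issues nobody asked about, so this only pays off when the broader result set is cheap to transfer and useful to cache.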

Deduplication Example

```ts
// Component A
const issue1 = useModel('issue', '123')
void issue1.comments.hydrate() // Request 1

// Component B (simultaneously)
const issue2 = useModel('issue', '123')
void issue2.comments.hydrate() // Request 2 (duplicate!)

// Without deduplication: 2 requests
// With deduplication: 1 request ✓
```

Batch Loader recognizes duplicate requests and only makes one network call.
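Deduplication can also be framed as promise sharing: concurrent callers for the same key await a single in-flight promise rather than each issuing their own fetch. A minimal sketch of that idea (names are illustrative, not the library's API):

```typescript
// Share one in-flight promise per key; concurrent callers for the
// same key reuse it instead of triggering another network call.
function createDeduper<T>(fetcher: (key: string) => Promise<T>) {
  const inFlight = new Map<string, Promise<T>>()
  return (key: string): Promise<T> => {
    const existing = inFlight.get(key)
    if (existing) return existing
    // Clear the entry once settled so later calls refetch fresh data
    const p = fetcher(key).finally(() => inFlight.delete(key))
    inFlight.set(key, p)
    return p
  }
}

// Usage: both components request 'issue:123:comments' at once,
// but only one fetch reaches the network.
```

Removing the entry in `finally` is a design choice: it dedupes only *concurrent* requests, leaving longer-term caching to the store itself.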

Configuration (Future)

When fully implemented, configure Batch Loader:

```ts
const client = SyncClient({
  server,
  storage,
  models,
  batchLoader: {
    enabled: true,
    coalescingWindowMs: 50, // Collect requests for 50ms
    maxBatchSize: 100,      // Max items per batch
    deduplicate: true       // Remove duplicate requests
  }
})
```

Server-Side Support

Server needs batch fetch endpoint:

```ts
// Planned endpoint
app.post('/sync/v1/batch-fetch', async (req, reply) => {
  const { requests } = req.body

  const results: Record<string, WireRow[]> = {}

  for (const { t, indexKey, indexVal } of requests) {
    const key = `${t}:${indexKey}:${indexVal}`
    results[key] = await adapter.listByIndex(t, indexKey, indexVal)
  }

  reply.send(results)
})
```

Or use existing adapter’s batch capabilities:

```ts
const database = PrismaAdapter({
  prisma,
  models,
  batchIndexKeys: {
    issue: ['assigneeId', 'teamId', 'projectId'],
    comment: ['issueId', 'authorId']
  }
})
```

Workarounds (Current)

Until the Batch Loader is complete, use one of these workarounds:

1. Pre-hydrate Collections

```ts
useEffect(() => {
  // Load all issues upfront
  const users = bridge.modelsByType<User>('user')
  users.forEach(user => {
    void user.assignedIssues.hydrate()
  })
}, [])
```

2. Use Suspense Boundaries

```tsx
// Suspense delays rendering until data is ready
<Suspense fallback={<Spinner />}>
  <UserList />
</Suspense>
```

3. Type Hydration

```ts
// Load all models of a type at once
await store.hydrateType('issue')

// Now all issue collections are loaded
users.forEach(user => {
  console.log(user.assignedIssues.length) // No additional requests
})
```

Implementation Status: Batch Loader coalescing is planned for v0.2. The interface exists and basic functionality works (individual requests), but intelligent batching with 50ms coalescing and deduplication is coming soon.

Benefits (When Complete)

Performance

```
Without batching: 50 requests × 200ms = 10 seconds
With batching:     1 request × 200ms = 200ms

50x faster! ✓
```

Network Efficiency

```
Without batching: 50 requests × 2KB = 100KB overhead
With batching:     1 request × 2KB = 2KB overhead

98% reduction in overhead ✓
```

Better UX

```
Without batching: Data appears one by one (jarring)
With batching:    Data appears all at once (smooth)
```
