# Connectors
Connectors are how Mako pulls data from external services into your data warehouse. Each connector implements a standard interface for chunked, resumable data fetching.
## Available Connectors

| Connector | Data Source | Entities |
|---|---|---|
| Stripe | Payment data | Customers, subscriptions, invoices, charges, payments |
| Close | CRM data | Leads, contacts, activities, opportunities |
| PostHog | Product analytics | Events, persons, groups |
| BigQuery | Data warehouse | Tables and views |
| REST | Any REST API | Custom entities via configuration |
| GraphQL | Any GraphQL API | Custom queries |
## How Sync Works

Mako uses a chunked sync architecture:
- Fetch chunk — The connector fetches a batch of records and returns a cursor
- Save state — The sync orchestrator persists the cursor after each chunk
- Resume — If a sync fails mid-way, it resumes from the last successful chunk
- Upsert — All writes are idempotent (upsert-based) so re-running is safe
This means syncs are:
- Resumable — network failures don’t restart from scratch
- Incremental — only fetch new/changed data on subsequent runs
- Idempotent — safe to re-run without creating duplicates
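The loop above can be sketched as follows. This is a minimal in-memory sketch, not the real orchestrator: the `fetchEntityChunk` shape mirrors the connector method described under Building a Custom Connector, the `Map` stands in for the destination's upsert, and a comment marks where the real system would persist the cursor.

```ts
type ChunkState = { page: number; totalProcessed: number };
type ChunkResult = ChunkState & { hasMore: boolean };

// Hypothetical connector fetch: pretends the API holds 25 records, 10 per page.
async function fetchEntityChunk(
  state: ChunkState | undefined,
  onBatch: (records: number[]) => Promise<void>,
): Promise<ChunkResult> {
  const page = state?.page ?? 1;
  const all = Array.from({ length: 25 }, (_, i) => i);
  const data = all.slice((page - 1) * 10, page * 10);
  await onBatch(data);
  return {
    totalProcessed: (state?.totalProcessed ?? 0) + data.length,
    hasMore: page * 10 < all.length,
    page: page + 1,
  };
}

async function runSync() {
  const store = new Map<number, number>(); // stand-in destination; set() is the idempotent upsert
  let state: ChunkState | undefined; // a resumed sync would load the persisted cursor here
  let hasMore = true;
  while (hasMore) {
    const result = await fetchEntityChunk(state, async (records) => {
      for (const r of records) store.set(r, r);
    });
    state = { page: result.page, totalProcessed: result.totalProcessed };
    // the orchestrator persists `state` here, after every chunk
    hasMore = result.hasMore;
  }
  return { store, state };
}
```

Because every write is an upsert, running `runSync` twice leaves the destination unchanged, which is exactly what makes retrying a failed chunk safe.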
## Running Syncs

### From the UI

Navigate to Flows in the sidebar. Create a new flow, select a source connector and destination, configure entities, and run.
### From the CLI

```sh
# Interactive — prompts for source, destination
pnpm run sync

# Direct
pnpm run sync -- -s <source_id> -d <destination_id>

# Specific entities only
pnpm run sync -- -s <source_id> -d <destination_id> -e customers,invoices
```

### Scheduled (Inngest)

Flows can be scheduled via cron in the flow configuration. Inngest handles the orchestration with automatic retry on failure.
### Webhook-triggered

Some flows can be triggered by webhooks for real-time sync.
## Building a Custom Connector

Extend `BaseConnector` and implement three methods:
```ts
import { BaseConnector } from "./base/BaseConnector";

class MyServiceConnector extends BaseConnector {
  // Define what this connector can sync
  getMetadata() {
    return {
      name: "My Service",
      version: "1.0.0",
      description: "Syncs data from My Service",
      supportedEntities: ["users", "orders"],
    };
  }

  // Validate the connection works
  async testConnection() {
    const ok = await this.client.ping();
    return { success: ok, message: ok ? "Connected" : "Failed" };
  }

  // Fetch one chunk of data, return cursor for next chunk
  async fetchEntityChunk(options) {
    const { entity, state } = options;
    const page = state?.page || 1;

    const response = await this.client.getUsers({ page });
    await options.onBatch(response.data);

    return {
      totalProcessed: (state?.totalProcessed || 0) + response.data.length,
      hasMore: response.hasMore,
      page: page + 1,
    };
  }
}
```

Register it in `api/src/connectors/registry.ts` — the registry auto-discovers connector directories, but you can also register manually.
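Manual registration could look roughly like this. The `registerConnector` helper and the `Map`-backed registry are illustrative assumptions, not the documented API; check the actual exports in `api/src/connectors/registry.ts`.

```ts
// Hypothetical registry shape: maps a connector id to its class.
// The real registry also auto-discovers connector directories.
type ConnectorClass = new () => object;

const registry = new Map<string, ConnectorClass>();

// Assumed helper name; the real file may expose a different API.
function registerConnector(id: string, cls: ConnectorClass): void {
  registry.set(id, cls);
}

class MyServiceConnector {
  /* connector implementation as above */
}

registerConnector("my-service", MyServiceConnector);
```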
## Best Practices

- Idempotency: Use upsert operations in the destination
- Rate limiting: Respect API limits with built-in delays
- Typing: Define interfaces for API responses
- Icons: Add an `icon.svg` to your connector directory for the UI
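For the rate-limiting point, a simple fixed delay between page fetches is often enough. A minimal sketch follows; the `sleep` helper, `getPage` callback, and `delayMs` default are illustrative and not part of the `BaseConnector` interface.

```ts
// Small helper: resolve after `ms` milliseconds.
const sleep = (ms: number) => new Promise<void>((res) => setTimeout(res, ms));

// Fetch all pages from a paginated API, pausing between requests.
async function fetchAllPages<T>(
  getPage: (page: number) => Promise<{ data: T[]; hasMore: boolean }>,
  delayMs = 200,
): Promise<T[]> {
  const out: T[] = [];
  let page = 1;
  let hasMore = true;
  while (hasMore) {
    const res = await getPage(page);
    out.push(...res.data);
    hasMore = res.hasMore;
    page += 1;
    if (hasMore) await sleep(delayMs); // space out calls to respect API limits
  }
  return out;
}
```

A common refinement is to back off exponentially when the API returns a 429 response instead of using a fixed delay.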