API Examples

Arcnem Vision exposes three primary operational APIs:

  • a workflow/API-key ingestion path for automated uploads
  • a service/API-key orchestration path for project-scoped service clients
  • a dashboard/session path for operator-driven uploads, browsing, and workflow queueing

The workflow ingestion path is the automated route used by workflow-key clients and external integrations.

# 1. Request a presigned upload URL
curl -X POST http://localhost:3000/api/uploads/presign \
  -H "Content-Type: application/json" \
  -H "x-api-key: ${API_KEY}" \
  -d '{"contentType":"image/png","size":12345}'

# 2. Upload the file bytes to the presigned URL
curl -X PUT "${UPLOAD_URL}" --data-binary @photo.png

# 3. Acknowledge the upload
curl -X POST http://localhost:3000/api/uploads/ack \
  -H "Content-Type: application/json" \
  -H "x-api-key: ${API_KEY}" \
  -d '{"objectKey":"uploads/.../photo.png"}'

After step 3, the API verifies the object, creates the document, and emits document/process.upload. The agents service loads the workflow key’s bound workflow and executes it.
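The presign response supplies the upload URL and object key consumed in steps 2 and 3. A minimal sketch of extracting them in shell, assuming the response fields are named `uploadUrl` and `objectKey` (field names are an assumption; this page only shows `objectKey` in the ack body):

```shell
# Sample presign response; the uploadUrl field name is an assumption.
RESPONSE='{"uploadUrl":"https://storage.example/put/abc","objectKey":"uploads/abc/photo.png"}'

# Extract fields with sed; a JSON tool such as jq is more robust in practice.
UPLOAD_URL=$(printf '%s' "$RESPONSE" | sed -n 's/.*"uploadUrl":"\([^"]*\)".*/\1/p')
OBJECT_KEY=$(printf '%s' "$RESPONSE" | sed -n 's/.*"objectKey":"\([^"]*\)".*/\1/p')

echo "$UPLOAD_URL"
echo "$OBJECT_KEY"
```

The extracted `UPLOAD_URL` feeds the PUT in step 2, and `OBJECT_KEY` feeds the ack body in step 3.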

The service API is the project-scoped orchestration surface for server-side clients.

GET /api/service/workflows
POST /api/service/uploads/presign

Body:

{
  "contentType": "image/png",
  "size": 12345,
  "visibility": "private"
}

Visibility is declared at presign time; the ack call simply confirms the uploaded object and materializes the document.

POST /api/service/uploads/ack

Body:

{
  "objectKey": "uploads/.../service-api/.../image.png"
}

POST /api/service/workflow-executions

Body:

{
  "workflowId": "<agentGraphId>",
  "documentIds": ["<documentId>"],
  "initialState": {
    "analysis_label": "orbit"
  }
}

You can also select documents by scope:

{
  "workflowId": "<agentGraphId>",
  "scope": {
    "apiKeyBound": false
  }
}
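The execution request can be composed in shell before posting. A sketch with placeholder IDs (`wf_123` and `doc_456` are illustrative, and the `x-api-key` header is assumed to authenticate service calls, matching the ingestion path):

```shell
WORKFLOW_ID="wf_123"   # placeholder agentGraphId
DOCUMENT_ID="doc_456"  # placeholder documentId

# Compose the request body shown above with the placeholders substituted.
BODY=$(printf '{"workflowId":"%s","documentIds":["%s"],"initialState":{"analysis_label":"orbit"}}' \
  "$WORKFLOW_ID" "$DOCUMENT_ID")
echo "$BODY"

# To submit (requires a running API and a real service key):
# curl -X POST http://localhost:3000/api/service/workflow-executions \
#   -H "Content-Type: application/json" \
#   -H "x-api-key: ${API_KEY}" \
#   -d "$BODY"
```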
GET /api/service/workflow-executions/:id
GET /api/service/documents?limit=20&apiKeyBound=false
GET /api/service/documents/:id
POST /api/service/documents/visibility

Body:

{
  "documentIds": ["<documentId>"],
  "visibility": "public"
}

GET /api/openapi.json

This spec is generated from the shared service contracts so the public surface and the implementation stay aligned.

Dashboard uploads are session-authenticated and intended for operator-driven review.

POST /api/dashboard/documents/uploads/presign

Body:

{
  "projectId": "<projectId>",
  "contentType": "image/png",
  "size": 12345
}

POST /api/dashboard/documents/uploads/ack

Body:

{
  "objectKey": "uploads/.../dashboard/.../image.png"
}

This creates the document and publishes a dashboard document event. Unlike the workflow-key path, it does not auto-run a workflow. Operators choose which saved workflow to queue next.

Queue Any Workflow Against A Saved Document

POST /api/dashboard/documents/:id/run

Body:

{
  "workflowId": "<agentGraphId>"
}

Response:

{
  "status": "queued",
  "documentId": "<documentId>",
  "workflowId": "<agentGraphId>",
  "workflowName": "OCR Review Supervisor"
}

This lets operators compare workflows, rerun analysis, or process dashboard-uploaded documents without changing a workflow key’s default assignment.
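A client can confirm the queue succeeded by checking the status field of the response. A minimal shell sketch against a sample response (IDs are placeholders):

```shell
# Sample response from POST /api/dashboard/documents/:id/run
RESPONSE='{"status":"queued","documentId":"doc_456","workflowId":"wf_123","workflowName":"OCR Review Supervisor"}'

STATUS=$(printf '%s' "$RESPONSE" | sed -n 's/.*"status":"\([^"]*\)".*/\1/p')
if [ "$STATUS" = "queued" ]; then
  echo "run queued"
else
  echo "unexpected status: $STATUS" >&2
fi
```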

  • Workflow ingestion uses API keys scoped to organization and project, with a direct agentGraphId binding.
  • Service orchestration uses API keys scoped to organization and project.
  • API keys are stored as SHA-256 hashes.
  • Dashboard operations use better-auth session cookies.
  • Local debug mode can bootstrap a seeded session when API_DEBUG=true.
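Because keys are stored as SHA-256 hashes, the server can compare a hash of the presented key without ever persisting the key itself. An illustrative hash computation (the exact encoding and any salting the server uses are assumptions):

```shell
API_KEY="hello"  # placeholder; real keys are long random secrets
HASH=$(printf '%s' "$API_KEY" | sha256sum | cut -d' ' -f1)
echo "$HASH"
```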
GET /api/dashboard/documents?organizationId=<orgId>&query=<text>&limit=<n>&cursor=<id>

Notes:

  • organizationId is only needed when there is no authenticated dashboard session context.
  • query is optional.
  • Search always includes lexical ranking.
  • When DOCUMENT_SEARCH_MODE=hybrid, the API also blends in semantic description matches when embeddings are available.
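Building the search URL in shell, with a naive percent-encoding of spaces only (a real client should URL-encode the query fully; the query text is an example):

```shell
QUERY="invoice scan"  # example search text
ENCODED=$(printf '%s' "$QUERY" | sed 's/ /%20/g')
URL="http://localhost:3000/api/dashboard/documents?query=${ENCODED}&limit=20"
echo "$URL"
```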

Response fields include:

  • id
  • objectKey
  • contentType
  • sizeBytes
  • createdAt
  • description
  • thumbnailUrl
  • distance
GET /api/dashboard/documents/:id/ocr

Each OCR result includes:

  • ocrResultId
  • ocrCreatedAt
  • modelLabel
  • text
  • avgConfidence
  • result
GET /api/dashboard/documents/:id/segmentations

Each segmentation result includes:

  • segmentationId
  • segmentationCreatedAt
  • modelLabel
  • prompt
  • nested derived document data when a segmented image was stored
POST /api/dashboard/documents/chat

Notes:

  • Dashboard auth is session-based and scoped to the active organization.
  • The request body follows the TanStack AI chat shape.
  • The current UI uses organization scope, while the endpoint also accepts optional projectIds, apiKeyIds, and documentIds.
  • Responses stream over Server-Sent Events.
  • Source cards are emitted as assistant_sources events and include document metadata plus matched excerpts.
  • The dashboard bundle proxies this endpoint locally at /api/documents/chat.

The chat layer is grounded in stored document descriptions, OCR text, and related segmentation context.

GET /api/dashboard/realtime

This Server-Sent Events feed publishes:

  • document creation
  • OCR creation
  • description upserts
  • segmentation creation
  • run creation
  • run step changes
  • run completion

The dashboard bundle proxies this feed locally at /api/realtime/dashboard to power the live Docs and Runs tabs.
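An SSE feed arrives as `event:`/`data:` line pairs separated by blank lines. A sketch parsing one sample frame (the wire-level event name `document.created` is hypothetical; this page lists event categories, not wire names):

```shell
# Sample frame in Server-Sent Events wire format (event name is hypothetical).
FEED='event: document.created
data: {"id":"doc_1"}'

PARSED=$(printf '%s\n' "$FEED" | while IFS= read -r line; do
  case "$line" in
    event:*) printf 'type=%s\n' "${line#event: }" ;;
    data:*)  printf 'payload=%s\n' "${line#data: }" ;;
  esac
done)
printf '%s\n' "$PARSED"
```

A real consumer would keep the connection open (for example `curl -N`) and dispatch on the event type.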

GET http://localhost:3000/health # API
GET http://localhost:3020/health # Agents
GET http://localhost:3021/health # MCP