
AI Image Workflow

End-to-end image generation flow from API request to result delivery.

This is the runtime pipeline your users interact with directly when they click "Generate".

Screenshot: AI image generator page (English)

End-to-End Flow in Bunship

  1. Frontend calls POST /v1/ai/generations.
  2. Task is queued via the pluggable task queue (Trigger.dev / BullMQ).
  3. The adapter dispatches the task to the configured backend.
  4. Provider returns output, then files are uploaded to object storage.
  5. Wallet credits are consumed/refunded based on task outcome.
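The five steps above can be condensed into a single processor function with injected dependencies. This is a minimal sketch, not the actual implementation: all names here are illustrative, and the real logic lives in apps/ship-api/src/services/ai/queue/processor.ts.

```typescript
type TaskResult = { status: "completed" | "failed"; urls: string[] };

// Hypothetical dependency boundary: provider call, storage upload, credit settlement.
interface Deps {
  runProvider: (prompt: string) => Promise<string[]>;        // raw provider output files
  uploadToStorage: (files: string[]) => Promise<string[]>;   // returns public URLs
  settleCredits: (userId: string, ok: boolean) => void;      // consume on success, refund on failure
}

async function processGeneration(
  userId: string,
  prompt: string,
  deps: Deps
): Promise<TaskResult> {
  try {
    const files = await deps.runProvider(prompt);    // steps 3-4: dispatch to backend, get output
    const urls = await deps.uploadToStorage(files);  // step 4: upload to object storage
    deps.settleCredits(userId, true);                // step 5: consume credits
    return { status: "completed", urls };
  } catch {
    deps.settleCredits(userId, false);               // step 5: refund on failure
    return { status: "failed", urls: [] };
  }
}
```

Keeping provider, storage, and wallet behind an interface like this is what lets the queue backend (Trigger.dev or BullMQ) stay swappable: both backends call the same processor.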

Core APIs

  • Create task: POST /v1/ai/generations
  • Query task: GET /v1/ai/generations/:taskId
  • Cancel task: POST /v1/ai/generations/:taskId/cancel
  • Retry task: POST /v1/ai/generations/:taskId/retry
  • User task list: GET /v1/ai/generations
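A typical client flow is: create a task, then poll the query endpoint until it reaches a terminal state. The endpoint paths below come from the list above, but the request and response shapes (`prompt`, `taskId`, `status`) are assumptions; check the route layer for the actual contract.

```typescript
type Fetch = typeof fetch;

// Create a generation task, then poll until it completes or fails.
// fetchFn is injected so the flow is testable without a live server.
async function generateImage(fetchFn: Fetch, baseUrl: string, prompt: string) {
  const res = await fetchFn(`${baseUrl}/v1/ai/generations`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const { taskId } = await res.json();

  for (;;) {
    const poll = await fetchFn(`${baseUrl}/v1/ai/generations/${taskId}`);
    const task = await poll.json();
    if (task.status === "completed" || task.status === "failed") return task;
    await new Promise((r) => setTimeout(r, 2000)); // poll every 2s
  }
}
```

In production you would add auth headers and a timeout; polling can also be replaced with webhooks or server-sent events if your queue backend supports them.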

Code Map

  • Route layer: apps/ship-api/src/module/ai/generations.ts
  • Task queue adapters: apps/ship-api/src/services/ai/queue/ (details)
  • Shared processor: apps/ship-api/src/services/ai/queue/processor.ts
  • Image app page: apps/ship/src/app/[locale]/(marketing)/ai/image
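The queue adapters share a small boundary, which can be sketched as an interface plus an in-memory implementation. The interface name and method set here are illustrative; see apps/ship-api/src/services/ai/queue/ for the real adapters.

```typescript
// Hypothetical adapter boundary shared by the Trigger.dev and BullMQ adapters.
interface QueueAdapter {
  enqueue(taskId: string, payload: unknown): Promise<void>;
  cancel(taskId: string): Promise<boolean>; // true if the task was still pending
}

// In-memory adapter, useful for local development and tests.
class MemoryQueueAdapter implements QueueAdapter {
  private pending = new Map<string, unknown>();

  async enqueue(taskId: string, payload: unknown): Promise<void> {
    this.pending.set(taskId, payload);
  }

  async cancel(taskId: string): Promise<boolean> {
    return this.pending.delete(taskId);
  }

  size(): number {
    return this.pending.size;
  }
}
```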

What Buyers Usually Change

  1. Default models and prompt presets in the UI.
  2. Timeout, retry, and concurrency strategy to fit your budget.
  3. Error messages and user-facing task state copy.
  4. Credit pricing and free-tier limits.
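Items 2 and 4 usually boil down to a handful of knobs. The names and values below are examples, not the boilerplate's actual config surface; adapt them to whatever your chosen queue backend exposes.

```typescript
// Example tuning knobs (hypothetical names and defaults).
const generationConfig = {
  timeoutMs: 120_000,  // abort a single provider call after 2 minutes
  maxRetries: 2,       // retry transient failures before refunding credits
  concurrency: 4,      // parallel workers; keep within provider rate limits
  creditsPerImage: 1,  // wallet cost charged per generated image
};

// Exponential backoff between retries: 1s, 2s, 4s, ...
function backoffMs(attempt: number, baseMs = 1000): number {
  return baseMs * 2 ** attempt;
}
```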

Launch Checklist

  1. Queue backlog and worker concurrency are monitored.
  2. Failures trigger refunds/retries as expected.
  3. Generated files are accessible by valid URLs.
  4. High-cost model usage is protected with strict limits.
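One simple way to satisfy checklist item 4 is a per-user spend cap checked before a task is enqueued. The cap value and function below are examples, assuming the wallet can report credits spent in the current period.

```typescript
// Hypothetical daily cap on credits spent on high-cost models.
const DAILY_CREDIT_CAP = 50;

// Reject the task before it is queued if it would exceed the cap.
function canSpend(
  spentToday: number,
  cost: number,
  cap: number = DAILY_CREDIT_CAP
): boolean {
  return spentToday + cost <= cap;
}
```

Enforcing the cap at enqueue time (rather than in the worker) means rejected requests never consume queue capacity or provider quota.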
