Prepare Upload

POST /v1/:slug/assets/upload_prepare

Step 1 of Asset Upload Process

This endpoint prepares signed URLs for uploading assets to cloud storage. The response varies based on the organization's storage provider (GCS or Backblaze). Check the storage_provider field to determine which upload flow to use.


GCS Upload Flow (4 Steps)

When storage_provider is "gcs", use the resumable upload protocol:

Step 1: Prepare Upload (this endpoint)

POST /v1/{org_slug}/assets/upload_prepare
Headers:
Authorization: Bearer {your_access_token}
Content-Type: application/json
Body:
{ "asset": { "title": "my-image.jpg", "media_type": "image/jpeg", "size": 1048576 } }

Returns: storage_provider="gcs", upload_url, signed_gcs_id, encrypted_organization_metadata, file_extension
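Assuming the request shape shown above, Step 1 can be sketched as a small helper that builds the prepare call (the org slug and token values are placeholders, not real credentials):

```python
import json

ORG_SLUG = "acme"                  # placeholder organization slug
ACCESS_TOKEN = "your_access_token" # placeholder bearer token

def build_prepare_request(title, media_type, size=None):
    """Return (path, headers, body) for the upload_prepare call."""
    asset = {"title": title, "media_type": media_type}
    if size is not None:
        asset["size"] = size  # optional; omitting it forces single-part upload
    path = f"/v1/{ORG_SLUG}/assets/upload_prepare"
    headers = {
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    }
    return path, headers, json.dumps({"asset": asset}).encode()
```

The returned body matches the JSON shown above; send it with any HTTP client and read storage_provider from the response to pick the flow.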

Step 2: Initialize Resumable Upload Session (GCS API)

POST {upload_url}
Headers:
Content-Type: {media_type}
x-goog-resumable: start
x-goog-meta-encrypted-organization-metadata: {encrypted_organization_metadata}
x-goog-meta-extension: {file_extension}
Body: EMPTY

Returns: HTTP 200 with Location header containing session URL
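The session-init request has an empty body; only the headers matter. A minimal sketch of building them (argument names are illustrative, values come from the Step 1 response):

```python
def resumable_init_headers(media_type, encrypted_org_metadata, file_extension):
    """Headers for the POST that opens a GCS resumable upload session."""
    return {
        "Content-Type": media_type,
        "x-goog-resumable": "start",
        "x-goog-meta-encrypted-organization-metadata": encrypted_org_metadata,
        "x-goog-meta-extension": file_extension,
    }
```

After POSTing with these headers and no body, read the Location response header: that session URL is the target for Step 3.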

Step 3: Upload File Data (GCS API)

PUT {session_url_from_location_header}
Headers:
Content-Type: {media_type}
Body: {binary_file_data}

Returns: HTTP 200 OK when upload completes
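Step 3 is a plain PUT of the raw bytes. A stdlib sketch that builds the request without sending it (the session URL is a placeholder; `urllib.request.urlopen(req)` would perform the upload):

```python
import urllib.request

def build_session_put(session_url, data, media_type):
    """Build the PUT that streams the file bytes to the resumable session."""
    return urllib.request.Request(
        session_url,
        data=data,
        headers={"Content-Type": media_type},
        method="PUT",
    )

req = build_session_put("https://storage.googleapis.com/session-placeholder",
                        b"\x00" * 16, "image/jpeg")
```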

Step 4: Complete Upload (Playbook API)

POST /v1/{org_slug}/assets/upload_complete
Body: { "asset": { "signed_gcs_id": "{signed_gcs_id}", "title": "my-image.jpg" } }
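The Step 4 body can be sketched the same way, echoing back the signed_gcs_id from Step 1 (the value here is a placeholder):

```python
import json

def complete_body(signed_gcs_id, title):
    """JSON body for POST /v1/{org_slug}/assets/upload_complete (GCS flow)."""
    return json.dumps({"asset": {"signed_gcs_id": signed_gcs_id, "title": title}})
```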

Backblaze Upload Flow

When storage_provider is "backblaze", check if multipart_upload_id is present.

Single-Part Upload (files < 5MB)

When multipart_upload_id is null and upload_url is present:

Step 1: Call upload_prepare → get storage_provider="backblaze", upload_url, signed_gcs_id, file_extension, encrypted_organization_metadata

Step 2: PUT file with required headers:

PUT {upload_url}
Headers:
Content-Type: {media_type}
x-amz-meta-extension: {file_extension}
x-amz-meta-encrypted-organization-metadata: {encrypted_organization_metadata}
Body: {binary_file_data}

Step 3: Call upload_complete with signed_gcs_id
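Since the x-amz-meta-* headers must match the signed URL's signature exactly, it helps to build them in one place. A minimal sketch (function name is illustrative; no request is sent here):

```python
import urllib.request

def build_b2_put(upload_url, data, media_type, file_extension, encrypted_org_metadata):
    """Build the single-part Backblaze PUT with the signature-matching headers."""
    return urllib.request.Request(
        upload_url,
        data=data,
        headers={
            "Content-Type": media_type,
            "x-amz-meta-extension": file_extension,
            "x-amz-meta-encrypted-organization-metadata": encrypted_org_metadata,
        },
        method="PUT",
    )
```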

Multipart Upload (files >= 5MB)

When multipart_upload_id and parts are present:

Step 1: Call upload_prepare → get storage_provider="backblaze", parts, multipart_upload_id, part_size, signed_gcs_id

Step 2: Upload each part (no extra headers needed for parts):

# For each part in parts array:
PUT {part.url}
Body: {file_chunk} # Slice file: [(part_number-1) * part_size, part_number * part_size]

Step 3: Call upload_complete with signed_gcs_id AND multipart_upload_id
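The chunking rule above — part N covers bytes [(N-1) * part_size, N * part_size), with the last part truncated to the file size — can be sketched as (helper names are illustrative):

```python
def part_bounds(part_number, part_size, file_size):
    """Byte range [start, end) for a 1-indexed part number."""
    start = (part_number - 1) * part_size
    end = min(part_number * part_size, file_size)
    return start, end

def chunk_for_part(data, part_number, part_size):
    """Slice the chunk to PUT for a given part."""
    start, end = part_bounds(part_number, part_size, len(data))
    return data[start:end]
```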


Important Notes:

  • GCS Steps 2 & 3 are Google Cloud Storage API calls (no Authorization header needed)
  • Backblaze single-part uploads require headers that match the signed URL signature
  • Signed URLs expire after 24 hours
  • For GCS: session URLs from Step 2 have shorter expiration
  • For Backblaze multipart: upload parts in order (part_number 1, 2, 3...)
  • Size parameter is optional: if omitted, single-part upload is used (works up to 5GB)
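Putting the notes together, flow selection from the upload_prepare response can be sketched as (field names are taken from this page; the rest of the response shape is assumed):

```python
def choose_flow(prepare_response):
    """Pick the upload flow from an upload_prepare response dict."""
    provider = prepare_response["storage_provider"]
    if provider == "gcs":
        return "gcs_resumable"
    if provider == "backblaze":
        # multipart_upload_id present → multipart; null/absent → single-part
        if prepare_response.get("multipart_upload_id"):
            return "backblaze_multipart"
        return "backblaze_single_part"
    raise ValueError(f"unknown storage_provider: {provider!r}")
```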
