Queues

Encoding is a demanding task, and submitting an encoding request is faster than executing it. Customers can flood our systems with valid requests to import large amounts of video in just a second. To handle peaks, we keep spare capacity on standby, but it is limited due to budget constraints. When the available capacity is not enough, we scale up machines in parallel within a minute to handle the additional load. During this time, jobs may start queueing up.

To prioritize jobs that require a real-time response, we have separated high-priority (live) and low-priority (batch) traffic. Here are some key points:

  • Direct file uploads put encoding jobs into the live queue for immediate processing. The live queue acts as a fast lane, while the batch queue is for larger jobs that don't require real-time processing. This allows us to handle both large library conversions and real-time avatar resizes without blocking each other.
  • Importing multiple files using our import robots immediately places the assemblies into the batch queue. This can impact execution times when there is heavy traffic.
  • Jobs in the live queue are prioritized whenever there are more jobs than available resources.
  • If too many live jobs are sent, they may trickle down into the batch queue to avoid affecting other customers' live performance.
  • If too many batch jobs are sent, they may trickle down into the backup queue to avoid affecting other customers' batch performance.
  • Jobs in the backup queue are re-enqueued into the batch queue as soon as batch job slots become available for your app.

The number of job slots required varies depending on the type of job. A video encoding job occupies 60 slots, while an image resize job uses only 10 slots.

Each plan comes with a limit on live job slots and batch job slots per region. For example, if your live job slot limit is 120, you can have either 12 concurrent image resizes or 2 concurrent video encodings before jobs start trickling down into the batch queue. Likewise, if you have 360 batch job slots, having more than 6 video jobs enqueued in the batch queue means subsequent jobs get enqueued into the backup queue.
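
To make the slot arithmetic concrete, here is a minimal TypeScript sketch of how a job could be routed between the live, batch, and backup queues based on slot usage. This is a simplified illustration, not Transloadit's actual scheduler; the type and function names are hypothetical, and only the slot costs and limits mirror the example above.

```typescript
// Illustrative only: a simplified model of slot-based queue routing.
type JobType = 'video-encode' | 'image-resize';
type Queue = 'live' | 'batch' | 'backup';

// Slot costs from the example above: 60 per video encode, 10 per image resize.
const SLOT_COST: Record<JobType, number> = {
  'video-encode': 60,
  'image-resize': 10,
};

interface PlanLimits {
  liveSlots: number;  // e.g. 120
  batchSlots: number; // e.g. 360
}

interface SlotUsage {
  liveSlotsInUse: number;
  batchSlotsInUse: number;
}

// Decide which queue a new job lands in, given the current slot usage.
function routeJob(job: JobType, limits: PlanLimits, usage: SlotUsage): Queue {
  const cost = SLOT_COST[job];
  if (usage.liveSlotsInUse + cost <= limits.liveSlots) {
    return 'live';   // enough live slots: fast lane
  }
  if (usage.batchSlotsInUse + cost <= limits.batchSlots) {
    return 'batch';  // live slots exhausted: trickle down to batch
  }
  return 'backup';   // batch slots exhausted too: wait in backup
}

// Example: with a 120-slot live limit, a third concurrent video encode
// (3 * 60 = 180 slots) no longer fits and trickles down to the batch queue.
console.log(routeJob(
  'video-encode',
  { liveSlots: 120, batchSlots: 360 },
  { liveSlotsInUse: 120, batchSlotsInUse: 0 },
)); // 'batch'
```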

To avoid degraded performance due to job slot limitations, you can take the following steps:

  1. Get a higher job slot limit for live and batch jobs, which increases throughput. Contact us for custom values.

  2. Set waitForEncoding to false (available in, e.g., the Uppy integration; see the first sketch after this list). This makes processing asynchronous, so two-minute queue times won't block the end-user experience. When the files are ready, we ping the notify_url and you notify your user. This approach provides the best user experience and reliability, and allows you to enable Assembly replays if needed. You can find more on this in the documentation for Assembly Notifications.

  3. By setting queue: 'batch' in your steps, you can downgrade the priority of your jobs yourself, so they don't consume live job slots when they don't really need zero queue waiting times (see the second sketch after this list).
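
Here is a minimal sketch of option 2 using the @uppy/transloadit plugin. waitForEncoding is a documented plugin option, but the exact option layout varies between Uppy versions, and the auth key, Template ID, and notify_url values below are placeholders.

```typescript
// Sketch of option 2: don't block the browser on encoding.
// Auth key, template_id, and notify_url are placeholders.
import Uppy from '@uppy/core';
import Transloadit from '@uppy/transloadit';

const uppy = new Uppy().use(Transloadit, {
  waitForEncoding: false, // resolve the upload as soon as the Assembly is created
  assemblyOptions: {
    params: {
      auth: { key: 'YOUR_TRANSLOADIT_KEY' },
      template_id: 'YOUR_TEMPLATE_ID',
      // Transloadit POSTs the Assembly status here once encoding finishes,
      // so your backend can notify the user.
      notify_url: 'https://example.com/transloadit/notifications',
    },
  },
});
```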
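
And a sketch of option 3: Assembly instructions where an encoding step opts into the batch queue. The step name, Robot, and preset are illustrative examples; queue: 'batch' is the relevant part.

```typescript
// Sketch of option 3: opt a step into the batch queue via Assembly instructions.
// The step name, Robot, and preset below are illustrative.
const steps = {
  encoded: {
    use: ':original',
    robot: '/video/encode',
    preset: 'ipad-high',
    queue: 'batch', // don't consume live job slots for this step
  },
};
```

Steps like these could be passed as params.steps in the Uppy sketch above, or stored in a Template.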

You can check current queue times for both Live Queues and Batch Queues on our Status Page, and programmatically access them via our Queues API Endpoint (values are in seconds).
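
Queue times can also be polled from code before deciding how to submit a job. A minimal sketch follows; the endpoint URL and response shape are documented on the Queues API page, so the QUEUES_ENDPOINT value below is a placeholder and the response is simply logged as-is.

```typescript
// Sketch: poll current queue times (values are reported in seconds).
// The endpoint URL is a placeholder; check the Queues API documentation
// for the exact path and response fields.
const QUEUES_ENDPOINT = 'https://api2.transloadit.com/queues'; // placeholder

async function currentQueueTimes(): Promise<void> {
  const res = await fetch(QUEUES_ENDPOINT);
  if (!res.ok) throw new Error(`Queues API returned ${res.status}`);
  const body = await res.json();
  console.log('Queue times (s):', body);
}

currentQueueTimes().catch(console.error);
```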