
Extract thumbnails from videos
🤖/video/thumbs extracts any number of images from videos for use as previews.
Note: Even though thumbnails are extracted from videos in parallel, we sort the thumbnails before adding them to the Assembly results, so the order in which they appear there reflects the order in which they appear in the video. You can also verify this by checking the thumb_index meta key.
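As a rough sketch (the Step name "extracted" and the parameter values are only illustrative), Assembly Instructions using this Robot could look like this:
Example:
"extracted": { "robot": "/video/thumbs", "use": ":original", "count": 4, "format": "jpeg" }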
Parameters
- use
  String / Array of Strings / Object ⋅ required
  Specifies which Step(s) to use as input.
  - You can pick any names for Steps except ":original" (reserved for user uploads handled by Transloadit).
  - You can provide several Steps as input with arrays:
    "use": [ ":original", "encoded", "resized" ]
  💡 That’s likely all you need to know about use, but you can view advanced use cases below.
  Advanced use cases:
  - Step bundling. Some Robots can gather several Step results for a single invocation. For example, 🤖/file/compress would normally create one archive for each file passed to it. If you set bundle_steps to true, however, it will create one archive containing all the result files from all Steps you give it. To enable bundling, provide an object like the one below to the use parameter:
    "use": { "steps": [ ":original", "encoded", "resized" ], "bundle_steps": true }
    This is also a crucial parameter for 🤖/video/adaptive; otherwise you'll generate one playlist for each viewing quality.
    Keep in mind that all input Steps must be present in your Template. If one of them is missing (for instance, it is rejected by a filter), no result is generated, because the Robot waits indefinitely for all input Steps to finish. Here’s a demo that showcases Step bundling.
  - Group by original. Sticking with the 🤖/file/compress example, you can set group_by_original to true in order to create a separate archive for each of your uploaded or imported files, instead of creating one archive containing all originals (or one per resulting file). This is important for 🤖/media/playlist, where you'd typically set:
    "use": { "steps": [ "segmented" ], "bundle_steps": true, "group_by_original": true }
  - Fields. You can be more selective by only using files that match a field name, which you do by setting the fields property. When this array is specified, the corresponding Step will only be executed for files submitted through one of the given field names, which correspond with the strings in the name attribute of the HTML file input tag, for instance. When using a back-end SDK, it corresponds with myFieldName1 in e.g. $transloadit->addFile('myFieldName1', './chameleon.jpg'). This parameter is set to true by default, meaning all fields are accepted.
    Example:
    "use": { "steps": [ ":original" ], "fields": [ "myFieldName1" ] }
  - Use as. Sometimes Robots take several inputs. For instance, 🤖/video/merge can create a slideshow from audio and images. You can map different Steps to the appropriate inputs.
    Example:
    "use": { "steps": [ { "name": "audio_encoded", "as": "audio" }, { "name": "images_resized", "as": "image" } ] }
    Sometimes the ordering is important, for instance with our concat Robots. In these cases, you can add an index that starts at 1. You can also optionally filter by the multipart field name, like in this example, where all files come from the same source (end-user uploads) but with different <input> names:
    Example:
    "use": { "steps": [ { "name": ":original", "fields": "myFirstVideo", "as": "video_1" }, { "name": ":original", "fields": "mySecondVideo", "as": "video_2" }, { "name": ":original", "fields": "myThirdVideo", "as": "video_3" } ] }
    For times when it is not apparent where we should put a file, you can use Assembly Variables to be specific. For instance, you may want to pass a text file to 🤖/image/resize to burn the text into an image, but you are burning multiple texts, so where do we put the text file? We specify it via ${use.text_1}, to indicate the first text file that was passed.
    Example:
    "watermarked": { "robot": "/image/resize", "use": { "steps": [ { "name": "resized", "as": "base" }, { "name": "transcribed", "as": "text" } ] }, "text": [ { "text": "Hi there", "valign": "top", "align": "left" }, { "text": "From the 'transcribed' Step: ${use.text_1}", "valign": "bottom", "align": "right", "x_offset": 16, "y_offset": -10 } ] }
- output_meta
  Object / Boolean ⋅ default: {}
  Allows you to specify a set of metadata that is more expensive on CPU power to calculate, and thus is disabled by default to keep your Assemblies processing fast.
  For images, you can add "has_transparency": true in this object to extract whether the image contains transparent parts, and "dominant_colors": true to extract an array of hexadecimal color codes from the image. For videos, you can add the "colorspace": true parameter to extract the colorspace of the output video. For audio, you can add "mean_volume": true to get a single value representing the mean volume of the audio file.
  You can also set this to false to skip metadata extraction and speed up transcoding.
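  For example, to also extract dominant colors for each resulting thumbnail image, you could set something like the following (an illustrative sketch, not a complete Step):
  Example:
  "output_meta": { "dominant_colors": true }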
- count
  Integer (1-999) ⋅ default: 8
  The number of thumbnails to be extracted. As some videos have incorrect durations, the actual number of thumbnails generated may be less in rare cases. The maximum number of thumbnails we currently allow is 999.
- offsets
  Array of Integers / Array of Strings ⋅ default: []
  An array of offsets representing seconds of the file duration, such as [ 2, 45, 120 ]. Millisecond precision can be achieved by using decimal values; for example, an offset of 1250 milliseconds would be represented as 1.25. Offsets can also be percentage values such as [ "2%", "50%", "75%" ].
  This option cannot be used with the count parameter, and takes precedence if both are specified. Out-of-range offsets are silently ignored.
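  As an illustrative sketch (the Step name "extracted" is a placeholder), a Step that grabs thumbnails at fixed percentages of the video duration could look like this:
  Example:
  "extracted": { "robot": "/video/thumbs", "use": ":original", "offsets": [ "25%", "50%", "75%" ] }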
- format
  String ⋅ default: "jpeg"
  The format of the extracted thumbnail. Supported values are "jpg", "jpeg" and "png". Even if you specify the format to be "jpeg", the resulting thumbnails will have a "jpg" file extension.
- width
  Integer (1-1920) ⋅ default: width of the video
  The width of the thumbnail, in pixels.
- height
  Integer (1-1080) ⋅ default: height of the video
  The height of the thumbnail, in pixels.
- resize_strategy
  String ⋅ default: "pad"
  One of the available resize strategies.
- background
  String ⋅ default: "00000000"
  The background color of the resulting thumbnails in the "rrggbbaa" format (red, green, blue, alpha), used with the "pad" resize strategy. The default color is black.
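  Putting the sizing parameters together, a hedged sketch of a Step that pads thumbnails to a fixed size on a white background might look like this (the Step name and values are only illustrative):
  Example:
  "extracted": { "robot": "/video/thumbs", "use": ":original", "width": 640, "height": 360, "resize_strategy": "pad", "background": "ffffffff" }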
- rotate
  Integer ⋅ default: auto
  Forces the video to be rotated by the specified number of degrees. Currently, only multiples of 90 are supported. We automatically correct the orientation of many videos when the orientation is provided by the camera. This option is only useful for videos requiring rotation whose orientation was not provided by the camera.
FFmpeg parameters
- ffmpeg_stack
  String ⋅ default: "v3.3.3"
  Selects the FFmpeg stack version to use for encoding. These versions reflect real FFmpeg versions. The current recommendation is to use "v4.3.1". Other valid values can be found here.
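  For example, a sketch of a Step that opts in to the recommended stack (the Step name is only illustrative):
  Example:
  "extracted": { "robot": "/video/thumbs", "use": ":original", "ffmpeg_stack": "v4.3.1" }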
Demos
- Use Google Cloud Storage to store your results from Transloadit
- Convert any video to animated GIF
- Save your results to Dropbox
Related blog posts
- Kicking Transloadit Into Gear for the New Year February 1, 2015
- Upgrading Encoding Engines July 31, 2015
- Transloadit Zapier Integration October 27, 2017
- Raising prices (for new customers) February 7, 2018
- Re-loadit: the /google/store Robot March 1, 2019