
Generate waveform images from audio

🤖/audio/waveform generates waveform images for your audio files and allows you to change their colors and dimensions.

We recommend that you use an 🤖/audio/encode Step prior to your waveform Step to convert audio files to MP3. This guarantees that 🤖/audio/waveform accepts your audio file, and it also lets you down-sample large audio files and save some money.

Similarly, if you need the output image in a different format, please pipe the result of this Robot into 🤖/image/resize.
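
For illustration, here is a minimal sketch of the "steps" portion of Assembly Instructions that first encodes uploads to MP3 and then renders a waveform from the encoded result. The Step names "mp3_encoded" and "waveformed" are placeholders chosen for this example:

  "steps": {
    "mp3_encoded": {
      "robot": "/audio/encode",
      "use": ":original",
      "preset": "mp3"
    },
    "waveformed": {
      "robot": "/audio/waveform",
      "use": "mp3_encoded"
    }
  }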


Parameters

  • use

    String / Array of Strings / Objectrequired

    Specifies which Step(s) to use as input.

    • You can pick any names for Steps except ":original" (reserved for user uploads handled by Transloadit).

    • You can provide several Steps as input with arrays:

      "use": [
        ":original",
        "encoded",
        "resized"
      ]
      

    💡 That’s likely all you need to know about use, but you can view advanced use cases:

    › Advanced use cases
    • Step bundling. Some Robots can gather several Step results for a single invocation. For example, 🤖/file/compress would normally create one archive for each file passed to it. If you set bundle_steps to true, however, it will create one archive containing all the result files from all the Steps you give it. To enable bundling, provide an object like the one below to the use parameter:

      "use": {
        "steps": [
          ":original",
          "encoded",
          "resized"
        ],
        "bundle_steps": true
      }
      

      This is also a crucial parameter for 🤖/video/adaptive; otherwise, you'll generate one playlist for each viewing quality.
      Keep in mind that all input Steps must be present in your Template. If one of them is missing (for instance it is rejected by a filter), no result is generated because the Robot waits indefinitely for all input Steps to be finished.

      Here’s a demo that showcases Step bundling.

    • Group by original. Sticking with the 🤖/file/compress example, you can set group_by_original to true, in order to create a separate archive for each of your uploaded or imported files, instead of creating one archive containing all originals (or one per resulting file). This is important for 🤖/media/playlist where you'd typically set:

      "use": {
        "steps": [
          "segmented"
        ],
        "bundle_steps": true,
        "group_by_original": true
      }
      
    • Fields. You can be more discriminatory by only using files that match a field name by setting the fields property. When this array is specified, the corresponding Step will only be executed for files submitted through one of the given field names, which correspond with the strings in the name attribute of the HTML file input field tag for instance. When using a back-end SDK, it corresponds with myFieldName1 in e.g.: $transloadit->addFile('myFieldName1', './chameleon.jpg').

      This parameter is set to true by default, meaning all fields are accepted.

      Example:

      "use": {
        "steps": [ ":original" ],
        "fields": [ "myFieldName1" ]
      }
      
    • Use as. Sometimes Robots take several inputs. For instance, 🤖/video/merge can create a slideshow from audio and images. You can map different Steps to the appropriate inputs.

      Example:

      "use": {
        "steps": [
          { "name": "audio_encoded", "as": "audio" },
          { "name": "images_resized", "as": "image" }
        ]
      }
      

      Sometimes the ordering is important, for instance, with our concat Robots. In these cases, you can add an index that starts at 1. You can also optionally filter by the multipart field name. Like in this example, where all files are coming from the same source (end-user uploads), but with different <input> names:

      Example:

      "use": {
        "steps": [
          { "name": ":original", "fields": "myFirstVideo", "as": "video_1" },
          { "name": ":original", "fields": "mySecondVideo", "as": "video_2" },
          { "name": ":original", "fields": "myThirdVideo", "as": "video_3" }
        ]
      }
      

      For times when it is not apparent where we should put the file, you can use Assembly Variables to be specific. For instance, you may want to pass a text file to 🤖/image/resize to burn the text in an image, but you are burning multiple texts, so where do we put the text file? We specify it via ${use.text_1}, to indicate the first text file that was passed.

      Example:

      "watermarked": {
        "robot": "/image/resize",
        "use"  : {
          "steps": [
            { "name": "resized", "as": "base" },
            { "name": "transcribed", "as": "text" },
          ],
        },
        "text": [
          {
            "text"  : "Hi there",
            "valign": "top",
            "align" : "left",
          },
          {
            "text"    : "From the 'transcribed' Step: ${use.text_1}",
            "valign"  : "bottom",
            "align"   : "right",
            "x_offset": 16,
            "y_offset": -10,
          }
        ]
      }
      
  • output_meta

    Object / Boolean ⋅ default: {}

    Allows you to request a set of metadata that is more CPU-intensive to calculate, and that is therefore disabled by default to keep your Assemblies processing fast.

    For images, you can add "has_transparency": true in this object to extract whether the image contains transparent parts, and "dominant_colors": true to extract an array of hexadecimal color codes from the image.

    For videos, you can add "colorspace": true to extract the colorspace of the output video.

    For audio, you can add "mean_volume": true to get a single value representing the mean volume of the audio file.

    You can also set this to false to skip metadata extraction and speed up transcoding. A combined example that uses output_meta together with the image parameters below follows this parameter list.

  • format

    String ⋅ default: "image"

    The format of the result file. Can be "image" or "json". If "image" is supplied, a PNG image will be created; otherwise, a JSON file will be created.

  • width

    Integer ⋅ default: 256

    The width of the resulting image if the format "image" was selected.

  • height

    Integer ⋅ default: 64

    The height of the resulting image if the format "image" was selected.

  • background_color

    String ⋅ default: "00000000"

    The background color of the resulting image in the "rrggbbaa" format (red, green, blue, alpha), if the format "image" was selected.

  • center_color

    String ⋅ default: "000000ff"

    The color used in the center of the gradient. The format is "rrggbbaa" (red, green, blue, alpha).

  • outer_color

    String ⋅ default: "000000ff"

    The color used in the outer parts of the gradient. The format is "rrggbbaa" (red, green, blue, alpha).
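
Putting the parameters above together, here is a hedged sketch of a waveform Step that requests the optional mean_volume metadata and customizes the image dimensions and gradient colors. The Step name "waveformed", the input Step name "mp3_encoded", and the specific dimension and color values are example choices, not required values:

  "waveformed": {
    "robot": "/audio/waveform",
    "use": "mp3_encoded",
    "output_meta": { "mean_volume": true },
    "format": "image",
    "width": 512,
    "height": 128,
    "background_color": "ffffffff",
    "center_color": "1e90ffff",
    "outer_color": "1e90ff88"
  }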
