Trifacta Standard API (v8.4.0)

The OpenAPI specification can be downloaded using the Download button at the top of this document.


To enable programmatic control over its objects, the Trifacta Platform supports a range of REST API endpoints. This section provides an overview of the API design, methods, and supported use cases.

Most of the endpoints accept JSON as input and return JSON responses. This means that you must usually add the following headers to your request:

Content-type: application/json
Accept: application/json


The term resource refers to a single type of object in the Trifacta Platform metadata. The API is organized by resource, with each set of endpoints operating on its corresponding resource. The name of a resource is typically plural and expressed in camelCase. Example: jobGroups.

Resource names are used as part of endpoint URLs, as well as in API parameters and responses.

CRUD Operations

The platform supports Create, Read, Update, and Delete operations on most resources. You can review the standards for these operations and their standard parameters below.

Some endpoints have special behavior as exceptions.


Create

To create a resource, you typically submit an HTTP POST request with the resource's required metadata in the request body. The response returns a 201 Created response code upon success with the resource's metadata, including its internal id, in the response body.


Read

An HTTP GET request can be used to read a resource or to list a number of resources.

A resource's id can be submitted in the request parameters to read a specific resource. The response usually returns a 200 OK response code upon success, with the resource's metadata in the response body.

If a GET request does not include a specific resource id, it is treated as a list request. The response usually returns a 200 OK response code upon success, with an object containing a list of resources' metadata in the response body.

When reading resources, some common query parameters are usually available:

  • embed (string): Comma-separated list of objects to include as part of the response. See Embedding Resources.

  • includeDeleted (string): If set to true, the response includes deleted objects.

  • limit (integer): Maximum number of objects to fetch. Usually 25 by default.

  • offset (integer): Offset after which to start returning objects. For use with the limit query parameter.
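Together, the limit and offset parameters make it possible to page through a full listing. The following Python sketch assumes a hypothetical fetch_page callable that performs the authenticated GET and returns the decoded JSON envelope in the { "data": [...] } shape used throughout this API:

```python
def iter_resources(fetch_page, limit=25):
    """Yield every resource from a list endpoint by paging with limit/offset.

    fetch_page(limit, offset) is any callable that performs the GET request
    and returns the decoded JSON envelope: {"data": [...]}.
    """
    offset = 0
    while True:
        page = fetch_page(limit=limit, offset=offset)["data"]
        yield from page
        if len(page) < limit:  # a short page means we reached the end
            break
        offset += limit


# Usage with a fake fetcher standing in for the real HTTP call:
dataset = [{"id": i} for i in range(60)]
fake_fetch = lambda limit, offset: {"data": dataset[offset:offset + limit]}
ids = [r["id"] for r in iter_resources(fake_fetch, limit=25)]  # three pages: 25, 25, 10
```

Injecting the fetch function keeps the paging logic independent of whichever HTTP client you use.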


Update

Updating a resource requires the resource id, and is typically done using an HTTP PUT or PATCH request with the fields to modify in the request body. The response usually returns a 200 OK response code upon success, with minimal information about the modified resource in the response body.


Delete

Deleting a resource requires the resource id and is typically executed via an HTTP DELETE request. The response usually returns a 204 No Content response code upon success.


Design Standards

  • Resource names are plural and expressed in camelCase.

  • Resource names are consistent between main URL and URL parameter.

  • Parameter lists are consistently enveloped in the following manner:

    { "data": [{ ... }] }
  • Field names are in camelCase and are consistent with the resource name in the URL or with the embed URL parameter.

    "creator": { "id": 1 },
    "updater": { "id": 2 },

Embedding Resources

When reading a resource, the platform supports an embed query parameter for most resources, which allows the caller to ask for associated resources in the response. Use of this parameter requires knowledge of how different resources are related to each other and is suggested for advanced users only.

In the following example, the sub-jobs of a jobGroup are embedded in the response for jobGroup=1 by passing the embed=jobs query parameter.

If you provide an invalid embedding (for example, embed=*), you will get an error message. The response contains the list of possible resources that can be embedded.

Example error:

  "exception": {
    "name": "ValidationFailed",
    "message": "Input validation failed",
    "details": "No association * in flows! Valid associations are creator, updater, snapshots..."


You can let the application know that you need less data, to improve the performance of the endpoints, using the fields query parameter. e.g. ?fields=id;name

The list of fields needs to be separated by semi-colons (;). Note that the application might sometimes return more fields than requested.

You can also use it while embedding resources: ?fields=id;name&embed=flownodes(fields=id)
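Because the fields value uses semicolons and the embed value uses parentheses, a standard URL encoder needs to be told to leave those delimiters readable. A small sketch using the Python standard library (build_query is a hypothetical helper; the flownodes association is taken from the example above):

```python
from urllib.parse import urlencode


def build_query(fields=None, embed=None):
    """Build the query string for a read request.

    fields is a list of field names, joined with ';' per the API convention;
    embed is an embedding expression such as "flownodes(fields=id)".
    """
    params = {}
    if fields:
        params["fields"] = ";".join(fields)
    if embed:
        params["embed"] = embed
    # safe= keeps the API's delimiter characters unescaped in the URL
    return urlencode(params, safe=";(),=")


print(build_query(fields=["id", "name"], embed="flownodes(fields=id)"))
# fields=id;name&embed=flownodes(fields=id)
```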

Limit and sorting

You can limit and sort the embedded resources for some associations. e.g. ?embed=flownodes(limit=2,fields=id,sort=-id)

Note that not all associations support this. An error is returned when it is not possible to limit the number of embedded results.


Errors

The Trifacta Platform uses HTTP response codes to indicate the success or failure of an API request.

  • Codes in the 2xx range indicate success.
  • Codes in the 4xx range indicate that the information provided is invalid (invalid parameters, missing permissions, etc.).
  • Codes in the 5xx range indicate a server-side error. These are rare and usually resolve on retry. If you experience many 5xx errors, contact support.
Common client error codes:

  • 400 Bad Request: the resource doesn't exist, the request is incorrectly formatted, or the request contains invalid values.
  • 403 Forbidden: incorrect permissions to access the resource.
  • 404 Not Found: the resource cannot be found.
  • 410 Gone: the resource has been previously deleted.
  • 415 Unsupported Media Type: incorrect Accept or Content-type header.
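The guidance above (don't retry 4xx, retry transient 5xx) can be sketched as a small wrapper. This is a minimal illustration, not part of the API itself; do_request stands in for whatever HTTP client call you use:

```python
import time


def send_with_retry(do_request, max_attempts=3, backoff_seconds=1.0):
    """Call do_request() and retry on 5xx responses.

    do_request is any callable returning a (status_code, body) pair.
    4xx errors are not retried, since the request itself is invalid.
    """
    for attempt in range(1, max_attempts + 1):
        status, body = do_request()
        if status < 400:
            return status, body
        if 500 <= status < 600 and attempt < max_attempts:
            time.sleep(backoff_seconds * attempt)  # simple linear backoff
            continue
        raise RuntimeError(f"request failed with HTTP {status}: {body}")


# Usage with a fake request that fails once, then succeeds:
calls = iter([(503, "service unavailable"), (200, '{"data": []}')])
print(send_with_retry(lambda: next(calls), backoff_seconds=0))
# (200, '{"data": []}')
```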

Request Ids

Each request has a request identifier, which can be found in the response headers, in the following form:

x-trifacta-request-id: <myRequestId>

ℹ️ NOTE: If you have an issue with a specific request, please include the x-trifacta-request-id value when you contact support.

Versioning and Endpoint Lifecycle

  • API versioning is not synchronized to specific releases of the platform.
  • APIs are designed to be backward compatible.
  • Any changes to the API will first go through a deprecation phase.

Trying the API

You can use a third-party client, such as curl, HTTPie, Postman, or the Insomnia REST client, to test the Trifacta API.

⚠️ When testing the API, bear in mind that you are working with your live production data, not sample data or test data.

Note that you will need to pass an API token with each request.

For example, here is how to run a job with curl:

curl -X POST '' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer <token>' \
-d '{ "wrangledDataset": { "id": "<recipe-id>" } }'

Using a graphical tool such as Postman or Insomnia, it is possible to import the API specifications directly:

  1. Download the API specification by clicking the Download button at the top of this document.
  2. Import the JSON specification in the graphical tool of your choice.
    • In Postman, you can click the import button at the top
    • With Insomnia, you can just drag-and-drop the file on the UI

Note that with Postman, you can also generate code snippets by selecting a request and clicking on the Code button.



Authentication

ℹ️ NOTE: Each request to the Trifacta Platform must include authentication credentials.

API access tokens can be acquired and applied to your requests to obscure sensitive Personally Identifiable Information (PII) and are compliant with common privacy and security standards. These tokens last for a preconfigured time period and can be renewed as needed.

You can create and delete access tokens through the Settings area of the application. With each request, you submit the token as part of the Authorization header.

Authorization: Bearer <tokenValue>

As needed, you can create and use additional tokens. There is no limit to the number of tokens you can create. See Manage API Access Tokens for more information.

Security Scheme Type: HTTP
HTTP Authorization Scheme: bearer
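The bearer scheme and the standard JSON headers described earlier can be combined in one small helper (auth_headers is a hypothetical convenience, not part of the API):

```python
def auth_headers(token):
    """Standard headers for a JSON request to the Trifacta API."""
    return {
        "Content-Type": "application/json",
        "Accept": "application/json",
        "Authorization": f"Bearer {token}",
    }


print(auth_headers("<tokenValue>")["Authorization"])
# Bearer <tokenValue>
```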


Job

An internal object encoding the information necessary to run a part of a Trifacta jobGroup.

This is called a "Stage" on the Job Results page in the UI.

Get Jobs for Job Group

Get information about the batch jobs within a Trifacta job.

ref: getJobsForJobGroup

path Parameters

  • id (integer, required): the internal identifier of the jobGroup.


Response samples

The response is enveloped in the standard manner, with the list of jobs under a "data" key.


JobGroup

A collection of internal jobs, representing a single execution from the user, or the generation of a single Sample.

The terminology might be slightly confusing but remains for backward compatibility reasons.

  • A jobGroup is generally called a "Job" in the UI.
  • A job is called a "Stage" in the UI.

Run Job Group

Create a jobGroup, which launches the specified job as the authenticated user. This performs the same action as clicking on the Run Job button in the application.

The request specification depends on one of the following conditions:

  • The recipe (wrangledDataset) already has an output object and just needs to be run.
  • The recipe has already had a job run against it and just needs to be re-run.
  • The recipe has not had a job run, or the job definition needs to be re-specified.

In the last case, you must specify some overrides when running the job. See the example with overrides for more information.

ℹ️ NOTE: Override values applied to a job are not validated. Invalid overrides may cause your job to fail.

Request Body - Run job

To run a job, you just specify the recipe identifier (wrangledDataset id). If the job is successful, all defined outputs are generated, as defined in the output object, publications, and writeSettings objects associated with the recipe.

TIP: To identify the wrangledDataset Id, select the recipe icon in the flow view and take the id shown in the URL. e.g. if the URL is /flows/10?recipe=7, the wrangledDataset Id is 7.

{"wrangledDataset": {"id": 7}}

Overriding the output settings

If you must change some outputs or other settings for the specific job, you can insert these changes in the overrides section of the request. In the example below, the running environment, profiling option, and writeSettings for the job are modified for this execution.

  "wrangledDataset": {"id": 1},
  "overrides": {
    "execution": "spark",
    "profiler": false,
    "writesettings": [
        "path": "<path_to_output_file>",
        "action": "create",
        "format": "csv",
        "compression": "none",
        "header": false,
        "asSingleFile": false

Using Variables (Run Parameters)

If you have created a dataset with parameters, you can specify overrides for parameter values during execution through the APIs. Through this method, you can iterate job executions across all matching sources of a parameterized dataset. In the example below, the runParameters override has been specified for the country. In this case, the value "Germany" is inserted for the specified variable as part of the job execution.

  "wrangledDataset": {"id": 33},
  "runParameters": {
    "overrides": {
      "data": [{"key": "country", "value": "Germany"}]


The response contains a list of jobs, which can be used to get a granular status of the jobGroup's completion. The jobGraph indicates the dependencies between the jobs.

  "sessionId": "79276c31-c58c-4e79-ae5e-fed1a25ebca1",
  "reason": "JobStarted",
  "jobGraph": {
    "vertices": [21, 22],
    "edges": [{"source": 21, "target": 22}]
  "id": 9,
  "jobs": {"data": [{"id": 21}, {"id": 22}