To enable programmatic control over its objects, the Designer Cloud Powered by Trifacta Platform supports a range of REST API endpoints across its objects. This section provides an overview of the API design, methods, and supported use cases.
Most of the endpoints accept JSON as input and return JSON responses. This means that you must usually add the following headers to your request:
Content-type: application/json
Accept: application/json
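For example, a minimal authenticated request with both headers might look like the following sketch (the jobGroups endpoint and the $TOKEN variable are illustrative; see the Authentication section below for how tokens work):
curl 'https://yourworkspace.cloud.trifacta.com/v4/jobGroups' \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $TOKEN"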
The term resource refers to a single type of object in the Designer Cloud Powered by Trifacta Platform metadata. An API is broken up by its endpoint's corresponding resource. The name of a resource is typically plural and expressed in camelCase. Example: jobGroups.
Resource names are used as part of endpoint URLs, as well as in API parameters and responses.
The platform supports Create, Read, Update, and Delete operations on most resources. You can review the standards for these operations and their standard parameters below.
Some endpoints have special behavior as exceptions.
To create a resource, you typically submit an HTTP POST request with the resource's required metadata in the request body. The response returns a 201 Created response code upon success, with the resource's metadata, including its internal id, in the response body.
An HTTP GET request can be used to read a resource or to list a number of resources. A resource's id can be submitted in the request parameters to read a specific resource. The response usually returns a 200 OK response code upon success, with the resource's metadata in the response body.
If a GET request does not include a specific resource id, it is treated as a list request. The response usually returns a 200 OK response code upon success, with an object containing a list of resources' metadata in the response body.
When reading resources, some common query parameters are usually available, e.g.:
/v4/jobGroups?limit=100&includeDeleted=true&embed=jobs
Query Parameter | Type | Description |
---|---|---|
embed | string | Comma-separated list of objects to include as part of the response. See Embedding resources. |
includeDeleted | string | If set to true, the response includes deleted objects. |
limit | integer | Maximum number of objects to fetch. Usually 25 by default. |
offset | integer | Offset after which to start returning objects. For use with the limit query parameter. |
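As a sketch, the limit and offset parameters can be combined to page through a large list. The loop below assumes bash, curl, and jq are available and that $TOKEN holds a valid API access token:
offset=0
while :; do
  page=$(curl -s "https://yourworkspace.cloud.trifacta.com/v4/jobGroups?limit=100&offset=$offset" \
    -H 'Accept: application/json' \
    -H "Authorization: Bearer $TOKEN")
  count=$(echo "$page" | jq '.data | length')   # objects in this page
  [ "$count" -eq 0 ] && break                   # empty page: no more objects
  echo "$page" | jq -c '.data[]'                # process each object
  offset=$((offset + 100))
done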
Updating a resource requires the resource id, and is typically done using an HTTP PUT or PATCH request with the fields to modify in the request body. The response usually returns a 200 OK response code upon success, with minimal information about the modified resource in the response body.
Deleting a resource requires the resource id and is typically executed via an HTTP DELETE request. The response usually returns a 204 No Content response code upon success.
Resource names are plural and expressed in camelCase.
Resource names are consistent between the main URL and URL parameters.
Parameter lists are consistently enveloped in the following manner:
{ "data": [{ ... }] }
Field names are in camelCase and are consistent with the resource name in the URL or with the embed URL parameter.
"creator": { "id": 1 },
"updater": { "id": 2 },
When reading a resource, the platform supports an embed query parameter for most resources, which allows the caller to ask for associated resources in the response.
Use of this parameter requires knowledge of how different resources are related to each other and is suggested for advanced users only.
In the following example, the sub-jobs of a jobGroup are embedded in the response for jobGroup=1:
https://yourworkspace.cloud.trifacta.com/v4/jobGroups/1?embed=jobs
If you provide an invalid embedding, you will get an error message. The response will contain the list of possible resources that can be embedded, e.g.:
https://yourworkspace.cloud.trifacta.com/v4/jobGroups/1?embed=*
Example error:
{
"exception": {
"name": "ValidationFailed",
"message": "Input validation failed",
"details": "No association * in flows! Valid associations are creator, updater, snapshots..."
}
}
To improve the performance of the endpoints, you can let the application know that you need less data by using the fields query parameter, e.g.:
https://yourworkspace.cloud.trifacta.com/v4/flows?fields=id;name
The list of fields needs to be separated by semicolons (;). Note that the application might sometimes return more fields than requested.
You can also use it while embedding resources.
https://yourworkspace.cloud.trifacta.com/v4/flows?fields=id;name&embed=flownodes(fields=id)
You can limit and sort the number of embedded resources for some associations, e.g.:
https://yourworkspace.cloud.trifacta.com/v4/flows?fields=id&embed=flownodes(limit=1,fields=id,sort=-id)
Note that not all associations support this. An error is returned when it is not possible to limit the number of embedded results.
The Designer Cloud Powered by Trifacta Platform uses HTTP response codes to indicate the success or failure of an API request.
HTTP Status Code (client errors) | Notes |
---|---|
400 Bad Request | Potential reasons: malformed request syntax or invalid parameter values. |
403 Forbidden | Incorrect permissions to access the resource. |
404 Not Found | Resource cannot be found. |
410 Gone | Resource has been previously deleted. |
415 Unsupported Media Type | Incorrect Accept or Content-type header. |
Each request has a request identifier, which can be found in the response headers, in the following form:
x-trifacta-request-id: <myRequestId>
ℹ️ NOTE: If you have an issue with a specific request, please include the x-trifacta-request-id value when you contact support.
The Designer Cloud Powered by Trifacta Platform applies a per-minute limit to the number of requests received by the API for some endpoints.
Users who send too many requests receive an HTTP status code 429 error response.
For applicable endpoints, the quota is documented under the endpoint description.
Treat these limits as maximums and don't try to generate unnecessary load. Notes:
If you need to trigger many requests in a short interval, you can watch for the 429 status code and build a retry mechanism.
The retry mechanism should follow an exponential backoff schedule to reduce request volume. Adding some randomness to the backoff schedule is recommended; see the sketch after the headers table below.
For endpoints that are subject to low rate limits, response headers are included in the response and indicate how many requests are left for the current interval. You can use these to avoid blindly retrying.
Example response headers for an endpoint limited to 30 requests/user/min and 60 requests/workspace/min:
Header name | Description |
---|---|
x-rate-limit-user-limit | The maximum number of requests you're permitted to make per user per minute (e.g. 30) |
x-rate-limit-user-remaining | The number of requests remaining in the current rate limit window (e.g. 28) |
x-rate-limit-user-reset | The time at which the current rate limit window resets, in UTC epoch milliseconds (e.g. 1631095033096) |
x-rate-limit-workspace-limit | The maximum number of requests you're permitted to make per workspace per minute (e.g. 60) |
x-rate-limit-workspace-remaining | The number of requests remaining in the current rate limit window (e.g. 38) |
x-rate-limit-workspace-reset | The time at which the current rate limit window resets, in UTC epoch milliseconds (e.g. 1631095033096) |
x-retry-after | Number of seconds until the current rate limit window resets (e.g. 42) |
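A minimal retry sketch in shell, assuming bash, curl, and a $TOKEN variable holding a valid access token (the request shown is the runJobGroup example from below):
url='https://yourworkspace.cloud.trifacta.com/v4/jobGroups'
delay=1
for attempt in 1 2 3 4 5; do
  status=$(curl -s -o response.json -w '%{http_code}' -X POST "$url" \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $TOKEN" \
    -d '{ "wrangledDataset": { "id": "<recipe-id>" } }')
  [ "$status" != "429" ] && break   # success or a non-rate-limit error: stop retrying
  sleep $(( delay + RANDOM % 3 ))   # exponential backoff with 0-2s of jitter
  delay=$(( delay * 2 ))            # double the base delay on each retry
done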
If you exceed the rate limit, an error response is returned:
curl -i -X POST 'https://api.clouddataprep.com/v4/jobGroups' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer <token>' \
-d '{ "wrangledDataset": { "id": "<recipe-id>" } }'
HTTP/1.1 429 Too Many Requests
x-rate-limit-user-limit: 30
x-rate-limit-user-remaining: 0
x-rate-limit-user-reset: 1631096271696
x-retry-after: 57
{
"exception": {
"name": "TooManyRequestsException",
"message": "Too Many Requests",
"details": "API quota reached for \"runJobGroup\". Wait 57 seconds before making a new request. (Max. 30 requests allowed per minute per user.)"
}
}
You can use a third-party client, such as curl, HTTPie, Postman, or the Insomnia REST client, to test the Designer Cloud Powered by Trifacta API.
⚠️ When testing the API, bear in mind that you are working with your live production data, not sample data or test data.
Note that you will need to pass an API token with each request.
For example, here is how to run a job with curl:
curl -X POST 'https://yourworkspace.cloud.trifacta.com/v4/jobGroups' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer <token>' \
-d '{ "wrangledDataset": { "id": "<recipe-id>" } }'
Using a graphical tool such as Postman or Insomnia, you can import the API specification directly.
Note that with Postman, you can also generate code snippets by selecting a request and clicking on the Code button.
ℹ️ NOTE: Each request to the Designer Cloud Powered by Trifacta Platform must include authentication credentials.
API access tokens can be acquired and applied to your requests to obscure sensitive Personally Identifiable Information (PII) and are compliant with common privacy and security standards. These tokens last for a preconfigured time period and can be renewed as needed.
You can create and delete access tokens through the Settings area of the application. With each request, you submit the token as part of the Authorization header.
Authorization: Bearer <tokenValue>
As needed, you can create and use additional tokens. There is no limit to the number of tokens you can create. See Manage API Access Tokens for more information.
Security Scheme Type | HTTP |
---|---|
HTTP Authorization Scheme | bearer |
An object used to provide a simpler and more secure way of accessing the REST API endpoints of the Designer Cloud Powered by Trifacta Platform. Access tokens limit exposure of clear-text authentication values and provide an easy method of managing authentication outside of the browser. See the Authentication section for more information.
Create an API Access Token. See the Authentication section for more information about API access tokens.
⚠️ API tokens inherit the API access of the user who creates them. Treat tokens as passwords and keep them in a secure place.
This request requires you to be authenticated.
If you do not have a valid access token to use at this time, you must first create one using the UI.
If you have a valid access token, you can submit that token in your Authorization header with this request.
ref: createApiAccessToken
lifetimeSeconds required | integer Lifetime in seconds for the access token. Set this value to -1 to create a non-expiring token. |
description | string User-friendly description for the access token |
{- "lifetimeSeconds": -1,
- "description": "API access token description"
}
{- "tokenValue": "eyJ0b2tlbklkIjoiYmFiOTA4ZjctZGNjMi00OTYyLTg1YmQtYzFlOTZkMGNhY2JkIiwic2VjcmV0IjoiOWIyNjQ5MWJiODM4ZWY0OWE1NzdhYzYxOWEwYTFkNjc4ZmE4NmE5MzBhZWFiZDk3OGRlOTY0ZWI0MDUyODhiOCJ9",
- "tokenInfo": {
- "tokenId": "0bc1d49f-5475-4c62-a0ba-6ad269389ada",
- "description": "API access token description",
- "expiredAt": "2019-08-24T14:15:22Z",
- "createdAt": "2019-08-24T14:15:22Z",
- "lastUsed": null
}
}
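For instance, a token that expires after 24 hours could be created as follows. This is a sketch: it assumes the endpoint is mounted at /v4/apiAccessTokens and that $TOKEN already holds a valid token (or that you created your first token through the UI):
curl -X POST 'https://yourworkspace.cloud.trifacta.com/v4/apiAccessTokens' \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $TOKEN" \
  -d '{ "lifetimeSeconds": 86400, "description": "API access token description" }'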
List API Access Tokens of the current user
ref: listApiAccessTokens
{- "data": [
- {
- "tokenId": "0bc1d49f-5475-4c62-a0ba-6ad269389ada",
- "description": "API access token description",
- "expiredAt": "2019-08-24T14:15:22Z",
- "createdAt": "2019-08-24T14:15:22Z",
- "lastUsed": null
}
], - "count": 1
}
Get an existing API access token
ref: getApiAccessToken
tokenId required | string Example: 0bc1d49f-5475-4c62-a0ba-6ad269389ada |
{- "tokenId": "0bc1d49f-5475-4c62-a0ba-6ad269389ada",
- "description": "API access token description",
- "expiredAt": "2019-08-24T14:15:22Z",
- "createdAt": "2019-08-24T14:15:22Z",
- "lastUsed": null
}
Delete the specified access token.
⚠️ If you delete an active access token, you may prevent the user from accessing the platform outside of the Trifacta application.
ref: deleteApiAccessToken
tokenId required | string Example: 0bc1d49f-5475-4c62-a0ba-6ad269389ada |
An object containing information for accessing AWS S3 storage, including details like defaultBucket, credentials, etc.
Create a new AWS config
ref: createAwsConfig
credentialProvider required | string Enum: "default" "temporary" |
defaultBucket | string Default S3 bucket where user can upload and write results |
extraBuckets | Array of strings |
role | string AWS IAM Role, required when credential provider is set to temporary |
externalId | string This identifier is used to manage cross-account access in AWS. This value should not be modified. |
key | string AWS key string, required when credential provider is set to default |
secret | string AWS secret string, required when credential provider is set to default |
personId | integer or string When creating an AWS configuration, an administrator can insert the personId parameter to assign the configuration to the internal identifier for the user. If this parameter is not included, the AWS configuration is assigned to the user who created it. |
{- "defaultBucket": "bucketName",
- "extraBuckets": [
- "bucket1"
], - "credentialProvider": "default",
- "role": "arn:aws:iam::xxxxxxxxxxxxx:role/sample-role",
- "externalId": "trifacta_****",
- "key": "string",
- "secret": "string",
- "personId": 1
}
{- "defaultBucket": "bucketName",
- "extraBuckets": [
- "bucket1"
], - "credentialProvider": "default",
- "role": "arn:aws:iam::xxxxxxxxxxxxx:role/sample-role",
- "externalId": "trifacta_****",
- "credential": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "activeRoleId": 1
}
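As a sketch, the request sample above could be submitted like this, assuming the createAwsConfig endpoint is mounted at /v4/awsConfigs and $TOKEN holds a valid access token:
curl -X POST 'https://yourworkspace.cloud.trifacta.com/v4/awsConfigs' \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $TOKEN" \
  -d '{ "credentialProvider": "default", "defaultBucket": "bucketName", "key": "<aws-key>", "secret": "<aws-secret>" }'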
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with the limit parameter. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with the filter parameter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count object. |
{- "data": [
- {
- "defaultBucket": "bucketName",
- "extraBuckets": [
- "bucket1"
], - "credentialProvider": "default",
- "role": "arn:aws:iam::xxxxxxxxxxxxx:role/sample-role",
- "externalId": "trifacta_****",
- "credential": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "activeRoleId": 1
}
], - "count": 1
}
The request body contains the parameters of the awsConfigs object that you wish to modify. You do not have to include parameters that are not being modified.
The following example changes the default bucket for the AWS configuration object:
{ "defaultBucket": "testing2" }
ref: updateAwsConfig
id required | integer |
id | integer unique identifier for this object. |
defaultBucket | string Default S3 bucket where user can upload and write results |
extraBuckets | Array of strings |
credentialProvider | string Enum: "default" "temporary" |
role | string AWS IAM Role, required when credential provider is set to temporary |
key | string AWS key string, required when credential provider is set to default |
secret | string AWS secret string, required when credential provider is set to default |
personId | integer or string When creating an AWS configuration, an administrator can insert the personId parameter to assign the configuration to the internal identifier for the user. If this parameter is not included, the AWS configuration is assigned to the user who created it. |
externalId | string This identifier is used to manage cross-account access in AWS. This value should not be modified. |
{- "id": 1,
- "defaultBucket": "bucketName",
- "extraBuckets": [
- "bucket1"
], - "credentialProvider": "default",
- "role": "arn:aws:iam::xxxxxxxxxxxxx:role/sample-role",
- "key": "string",
- "secret": "string",
- "personId": 1,
- "externalId": "trifacta_****"
}
{- "defaultBucket": "bucketName",
- "extraBuckets": [
- "bucket1"
], - "credentialProvider": "default",
- "role": "arn:aws:iam::xxxxxxxxxxxxx:role/sample-role",
- "externalId": "trifacta_****",
- "credential": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "activeRoleId": 1
}
An object containing the AWS IAM role ARN for authenticating AWS resources when using role-based authentication. This object belongs to an awsConfig.
role required | string |
personId | integer or string |
{- "role": "string",
- "personId": 1
}
{- "id": 1,
- "awsConfigId": 1,
- "role": "string",
- "createdFrom": "api",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "deletedAt": "2019-08-24T14:15:22Z"
}
List AWS roles for a user.
ref: listAwsRoles
personId | integer person id |
{- "data": {
- "id": 1,
- "awsConfig": {
- "id": 1
}, - "role": "string",
- "createdFrom": "api",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "deletedAt": "2019-08-24T14:15:22Z"
}
}
Update an existing AWS role
ℹ️ NOTE: Admin role is required to use this endpoint.
ref: updateAwsRole
id required | integer |
personId | integer or string |
role | string |
createdFrom | string Enum: "api" "idp" Shows by which means the role was created. |
createdAt | string <date-time> The time this object was first created. |
updatedAt | string <date-time> The time this object was last updated. |
deletedAt | string <date-time> The time this object was deleted. |
{- "personId": 1,
- "role": "string",
- "createdFrom": "api",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "deletedAt": "2019-08-24T14:15:22Z"
}
{- "id": 1,
- "awsConfigId": 1,
- "role": "string",
- "createdFrom": "api",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "deletedAt": "2019-08-24T14:15:22Z"
}
Delete an existing AWS role
ref: deleteAwsRole
id required | integer |
An object representing Designer Cloud Powered by Trifacta's connection to an external data source. Connections can be used for import, publishing, or both, depending on type.
Create a new connection
ref: createConnection
vendor required | string String identifying the connection's vendor |
vendorName required | string Name of the vendor of the connection |
type required | string Enum: "jdbc" "rest" "remotefile" Type of connection |
credentialType required | string Enum: "basic" "securityToken" "iamRoleArn" "iamDbUser" "oauth2" "keySecret" "apiKey" "awsKeySecret" "basicWithAppToken" "userWithApiToken" "basicApp" "transactionKey" "password" "apiKeyWithToken" "noAuth" "httpHeaderBasedAuth" "privateApp" "httpQueryBasedAuth" |
name required | string Display name of the connection. |
params required | object This setting is populated with any parameters that are passed to the source during connection and operations. For relational sources, this setting may include the default database and extra load parameters. |
advancedCredentialType | string |
sshTunneling | boolean When true, the connection uses SSH tunneling. |
ssl | boolean When true, the connection uses SSL. |
description | string User-friendly description for the connection. |
disableTypeInference | boolean If set to true, type inference is disabled for this connection (by default, type inference is enabled). When type inference is disabled, the Designer Cloud Powered by Trifacta Platform does not apply Designer Cloud Powered by Trifacta types to data when it is imported. |
isGlobal | boolean If true, the connection is available to all users in the workspace. NOTE: After a connection has been made public, it cannot be made private again. It must be deleted and recreated. |
credentialsShared | boolean If true, the connection's credentials are shared with other users along with the connection. |
host | string Host of the source |
port | integer Port number for the source |
bucket | string Bucket name for the source |
oauth2StateId | string |
Array of basic (object) or securityToken (object) or iamRoleArn (object) or iamDbUser (object) or oauth2 (object) or keySecret (object) or apiKey (object) or awsKeySecret (object) or basicWithAppToken (object) or userWithApiToken (object) or basicApp (object) or transactionKey (object) or password (object) or privateApp (object) or apiKeyWithToken (object) or noAuth (object) or httpHeaderBasedAuth (object) or privateApp (object) or httpQueryBasedAuth (object) (acceptedCredentials) [ items ] If present, these values are the credentials used to connect to the database. | |
Array of sshTunnelingBasic (object) (advancedCredentialsInfo) [ items ] If present, these values are the credentials used to connect to the database. | |
Array of objects (jdbcRestEndpointsInfo) [ items ] If present, these values are the REST endpoints info required for connection |
{- "vendor": "oracle",
- "vendorName": "oracle",
- "type": "jdbc",
- "name": "example_oracle_connection",
- "description": "This is an oracle connection",
- "disableTypeInference": false,
- "isGlobal": false,
- "credentialsShared": false,
- "host": "my_oracle_host",
- "port": 1521,
- "params": {
- "service": "my_oracle_service"
}, - "credentialType": "basic",
- "credentials": [
- {
- "username": "my_oracle_username",
- "password": "my_oracle_password"
}
]
}
{- "vendor": "oracle",
- "vendorName": "oracle",
- "type": "jdbc",
- "credentialType": "basic",
- "advancedCredentialType": "string",
- "sshTunneling": true,
- "ssl": true,
- "name": "example_oracle_connection",
- "description": "string",
- "disableTypeInference": true,
- "isGlobal": true,
- "credentialsShared": true,
- "host": "example.oracle.test",
- "port": 1521,
- "id": "21",
- "uuid": "f9cab740-50b7-11e9-ba15-93c82271a00b",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "credentials": [
- {
- "username": "string",
- "password": "string"
}
], - "advancedCredentials": [
- {
- "sshTunnelingUsername": "string",
- "sshTunnelingPassword": "string"
}
], - "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "params": {
- "database": "dev"
}, - "endpoints": [
- {
- "tableName": "table1",
- "httpMethod": "get",
- "endpoint": "/capsules",
- "headers": {
- "Content-Type": "application/json"
}, - "queryParams": {
- "q": "query-param-example"
}, - "requestBody": "{\"key1\": \"value1\"}",
- "pagination": {
- "paginationType": "nextPageURL",
- "pageurlpath": "$./data/nextPage"
}, - "xPath": "$.missions",
- "dataModel": "document"
}
]
}
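As a sketch, the Oracle request sample above could be submitted like this, assuming the createConnection endpoint is mounted at /v4/connections, the body is saved to create-oracle-connection.json, and $TOKEN holds a valid access token:
curl -X POST 'https://yourworkspace.cloud.trifacta.com/v4/connections' \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $TOKEN" \
  -d @create-oracle-connection.json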
List existing connections
ref: listConnections
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with the limit parameter. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with the filter parameter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count object. |
sharedRole | string Type of role for which to list connections |
{- "data": [
- {
- "vendor": "oracle",
- "vendorName": "oracle",
- "type": "jdbc",
- "credentialType": "basic",
- "advancedCredentialType": "string",
- "sshTunneling": true,
- "ssl": true,
- "name": "example_oracle_connection",
- "description": "string",
- "disableTypeInference": true,
- "isGlobal": true,
- "credentialsShared": true,
- "host": "example.oracle.test",
- "port": 1521,
- "id": "21",
- "uuid": "f9cab740-50b7-11e9-ba15-93c82271a00b",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "credentials": [
- {
- "username": "string",
- "password": "string"
}
], - "advancedCredentials": [
- {
- "sshTunnelingUsername": "string",
- "sshTunnelingPassword": "string"
}
], - "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "params": {
- "database": "dev"
}, - "endpoints": [
- {
- "tableName": "table1",
- "httpMethod": "get",
- "endpoint": "/capsules",
- "headers": {
- "Content-Type": "application/json"
}, - "queryParams": {
- "q": "query-param-example"
}, - "requestBody": "{\"key1\": \"value1\"}",
- "pagination": {
- "paginationType": "nextPageURL",
- "pageurlpath": "$./data/nextPage"
}, - "xPath": "$.missions",
- "dataModel": "document"
}
]
}
], - "count": 1
}
Count existing connections
ref: countConnections
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with the limit parameter. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with the filter parameter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count object. |
sharedRole | string Type of role for which to count connections |
{- "count": 1
}
Get an existing connection
ref: getConnection
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
{- "vendor": "oracle",
- "vendorName": "oracle",
- "type": "jdbc",
- "credentialType": "basic",
- "advancedCredentialType": "string",
- "sshTunneling": true,
- "ssl": true,
- "name": "example_oracle_connection",
- "description": "string",
- "disableTypeInference": true,
- "isGlobal": true,
- "credentialsShared": true,
- "host": "example.oracle.test",
- "port": 1521,
- "id": "21",
- "uuid": "f9cab740-50b7-11e9-ba15-93c82271a00b",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "credentials": [
- {
- "username": "string",
- "password": "string"
}
], - "advancedCredentials": [
- {
- "sshTunnelingUsername": "string",
- "sshTunnelingPassword": "string"
}
], - "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "params": {
- "database": "dev"
}, - "endpoints": [
- {
- "tableName": "table1",
- "httpMethod": "get",
- "endpoint": "/capsules",
- "headers": {
- "Content-Type": "application/json"
}, - "queryParams": {
- "q": "query-param-example"
}, - "requestBody": "{\"key1\": \"value1\"}",
- "pagination": {
- "paginationType": "nextPageURL",
- "pageurlpath": "$./data/nextPage"
}, - "xPath": "$.missions",
- "dataModel": "document"
}
]
}
Delete an existing connection
ref: deleteConnection
id required | integer |
Get the connection status
ref: getConnectionStatus
id required | integer |
{- "result": "string"
}
Metadata that controls the behavior of JDBC connectors.
The terminology in the connectivity API is as follows:
The default configuration of each connector has been tuned for optimal performance and standardized type mapping behavior. If you require connector behavior changes, you can leverage the following APIs.
The specified overrides are merged into the current set of overrides for the connector. A new entry is created if no overrides currently exist.
The connector metadata stores a mapping for each Trifacta type to an official JDBC type and database native type. When Trifacta publishes to a new table, it uses the first type specified in the vendorTypeList. The rest of the types are used when validating the publish action during design time.
As an example, let's override the type mapping behavior for the Postgres connector. By default it publishes Trifacta integers to bigint, but we can make it publish to int instead. Make a GET request to /v4/connectormetadata/postgres to get the current behavior. Locate the section called publishTypeMap and identify the element in the list where trifactaType is INTEGER. We can see that the first element under the corresponding vendorTypeList is bigint.
Since we want Postgres to write to int when creating integer columns in a new table, move that value to the beginning of the vendorTypeList. Send a POST request to /v4/connectormetadata/postgres/overrides with the following body:
ℹ️ NOTE: Overriding the jdbcType is not supported behavior. Please use the same value from the default.
{
"publishMetadata": {
"publishTypeMap": [
{
"vendorTypeList": [
"int",
"bigint",
"int2",
"int4",
"int8",
"smallint",
"serial",
"bigserial",
"text",
"varchar",
"bpchar",
"char",
"character varying",
"character"
],
"jdbcType": 4,
"trifactaType": "INTEGER"
}
]
}
}
Rerun the GET request to ensure the values are reflected.
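A sketch of the full round trip, assuming the override body above is saved to override.json and $TOKEN holds a valid access token:
curl 'https://yourworkspace.cloud.trifacta.com/v4/connectormetadata/postgres' \
  -H "Authorization: Bearer $TOKEN"   # inspect the current publishTypeMap
curl -X POST 'https://yourworkspace.cloud.trifacta.com/v4/connectormetadata/postgres/overrides' \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $TOKEN" \
  -d @override.json                   # apply the reordered vendorTypeList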
The default performance configurations have been tuned to work well with the majority of systems. There are a few parameters that can be tuned if needed:
numberOfConnections: Number of connections that are used to write data in parallel.
batchSize: Number of records written in each database batch.
For example, the following request body overrides both:
{
  "publishMetadata": {
    "performanceParams": {
      "batchSize": 10000,
      "numberOfConnections": 5
    }
  }
}
The default performance configurations have been tuned to work well with the majority of systems. One parameter that can be tuned is the database fetchSize. By default it is set to a value of -1, which uses the default specified by the database driver. The following request body can override this value:
{
"runtimeMetadata": {
"importPerformance": {"fetchSize": 1000}
}
}
connector required | string |
object | |
object |
{- "publishMetadata": {
- "publishTypeMap": [
- {
- "vendorTypeList": [
- "int",
- "bigint",
- "int2",
- "int4",
- "int8",
- "smallint",
- "serial",
- "bigserial",
- "text",
- "varchar",
- "bpchar",
- "char",
- "character varying",
- "character"
], - "jdbcType": 4,
- "trifactaType": "INTEGER"
}
]
}
}
Get the metadata overrides for a connector in a given workspace. These overrides are applied to the base configuration for connectivity operations.
connector required | string |
{- "connectionMetadata": {
- "name": "string",
- "displayName": "string",
- "type": "string",
- "category": "relational",
- "status": "supported",
- "credentialTypes": [
- "basic"
], - "operation": "import",
- "connectionParameters": [
- {
- "name": "string",
- "displayName": "string",
- "type": "string",
- "required": true,
- "category": "string",
- "defaultValue": "string"
}
]
}, - "runtimeMetadata": {
- "defaultTypeTreatment": "WHITELIST",
- "typeMap": [
- {
- "vendorType": "string",
- "jdbcType": 1,
- "accessorClass": "string",
- "trifactaType": "ARRAY",
- "classification": "WHITELIST"
}
], - "metadataAccessors": { },
- "pathMetadata": {
- "qualifiedPath": "CATALOG"
}, - "limit": {
- "table": "string",
- "query": "string"
}, - "errorHandlers": { },
- "importPerformance": {
- "fetchSize": 1,
- "disableAutoCommit": true,
- "schemaLimit": 1,
- "ormEnabled": true,
- "unload": {
- "stream": true,
- "cli": {
- "script": "string",
- "format": "string",
- "timeout": 1
}
}
}
}, - "publishMetadata": {
- "publishMethod": "direct",
- "publishTypeMap": [
- {
- "jdbcType": 1,
- "trifactaType": "string",
- "defaultValue": "string",
- "vendorTypeList": [
- "string"
]
}
], - "publishValidation": {
- "enabled": true,
- "maxTableNameLength": 1,
- "maxColumnNameLength": 1,
- "validTableNameRegex": "string",
- "validColNameRegex": "string"
}, - "publishQueries": {
- "createTable": "string",
- "createTempTable": "string",
- "copyTable": "string",
- "dropTable": "string",
- "insertTable": "string",
- "truncateTable": "string",
- "addColumn": "string"
}, - "performanceParams": {
- "batchProcessingEnabled": true,
- "batchLoggingEnabled": true,
- "batchSize": 1,
- "numberOfConnections": 1,
- "commitFrequency": 1,
- "queueSize": 1,
- "maxOfferToQueueRetryCount": 1,
- "maxPollFromQueueRetryCount": 1
}, - "publishInfo": {
- "qualifyingPath": [
- "string"
], - "supportedActions": [
- "create"
], - "supportedProtocols": [
- "string"
], - "externalFileFormats": [
- "pqt"
]
}
}
}
Get the consolidated metadata for a connector in a given workspace. This metadata is used to define connectivity, ingestion, and publishing for the connector.
ref: getConnectorConfig
connector required | string |
{- "connectionMetadata": {
- "name": "string",
- "displayName": "string",
- "type": "string",
- "category": "relational",
- "status": "supported",
- "credentialTypes": [
- "basic"
], - "operation": "import",
- "connectionParameters": [
- {
- "name": "string",
- "displayName": "string",
- "type": "string",
- "required": true,
- "category": "string",
- "defaultValue": "string"
}
]
}, - "runtimeMetadata": {
- "defaultTypeTreatment": "WHITELIST",
- "typeMap": [
- {
- "vendorType": "string",
- "jdbcType": 1,
- "accessorClass": "string",
- "trifactaType": "ARRAY",
- "classification": "WHITELIST"
}
], - "metadataAccessors": { },
- "pathMetadata": {
- "qualifiedPath": "CATALOG"
}, - "limit": {
- "table": "string",
- "query": "string"
}, - "errorHandlers": { },
- "importPerformance": {
- "fetchSize": 1,
- "disableAutoCommit": true,
- "schemaLimit": 1,
- "ormEnabled": true,
- "unload": {
- "stream": true,
- "cli": {
- "script": "string",
- "format": "string",
- "timeout": 1
}
}
}
}, - "publishMetadata": {
- "publishMethod": "direct",
- "publishTypeMap": [
- {
- "jdbcType": 1,
- "trifactaType": "string",
- "defaultValue": "string",
- "vendorTypeList": [
- "string"
]
}
], - "publishValidation": {
- "enabled": true,
- "maxTableNameLength": 1,
- "maxColumnNameLength": 1,
- "validTableNameRegex": "string",
- "validColNameRegex": "string"
}, - "publishQueries": {
- "createTable": "string",
- "createTempTable": "string",
- "copyTable": "string",
- "dropTable": "string",
- "insertTable": "string",
- "truncateTable": "string",
- "addColumn": "string"
}, - "performanceParams": {
- "batchProcessingEnabled": true,
- "batchLoggingEnabled": true,
- "batchSize": 1,
- "numberOfConnections": 1,
- "commitFrequency": 1,
- "queueSize": 1,
- "maxOfferToQueueRetryCount": 1,
- "maxPollFromQueueRetryCount": 1
}, - "publishInfo": {
- "qualifyingPath": [
- "string"
], - "supportedActions": [
- "create"
], - "supportedProtocols": [
- "string"
], - "externalFileFormats": [
- "pqt"
]
}
}
}
Get the default metadata for a connector without applying custom overrides. This metadata is used to define connectivity, ingestion, and publishing for the connector.
ref: getConnectorDefaults
connector required | string |
{- "connectionMetadata": {
- "name": "string",
- "displayName": "string",
- "type": "string",
- "category": "relational",
- "status": "supported",
- "credentialTypes": [
- "basic"
], - "operation": "import",
- "connectionParameters": [
- {
- "name": "string",
- "displayName": "string",
- "type": "string",
- "required": true,
- "category": "string",
- "defaultValue": "string"
}
]
}, - "runtimeMetadata": {
- "defaultTypeTreatment": "WHITELIST",
- "typeMap": [
- {
- "vendorType": "string",
- "jdbcType": 1,
- "accessorClass": "string",
- "trifactaType": "ARRAY",
- "classification": "WHITELIST"
}
], - "metadataAccessors": { },
- "pathMetadata": {
- "qualifiedPath": "CATALOG"
}, - "limit": {
- "table": "string",
- "query": "string"
}, - "errorHandlers": { },
- "importPerformance": {
- "fetchSize": 1,
- "disableAutoCommit": true,
- "schemaLimit": 1,
- "ormEnabled": true,
- "unload": {
- "stream": true,
- "cli": {
- "script": "string",
- "format": "string",
- "timeout": 1
}
}
}
}, - "publishMetadata": {
- "publishMethod": "direct",
- "publishTypeMap": [
- {
- "jdbcType": 1,
- "trifactaType": "string",
- "defaultValue": "string",
- "vendorTypeList": [
- "string"
]
}
], - "publishValidation": {
- "enabled": true,
- "maxTableNameLength": 1,
- "maxColumnNameLength": 1,
- "validTableNameRegex": "string",
- "validColNameRegex": "string"
}, - "publishQueries": {
- "createTable": "string",
- "createTempTable": "string",
- "copyTable": "string",
- "dropTable": "string",
- "insertTable": "string",
- "truncateTable": "string",
- "addColumn": "string"
}, - "performanceParams": {
- "batchProcessingEnabled": true,
- "batchLoggingEnabled": true,
- "batchSize": 1,
- "numberOfConnections": 1,
- "commitFrequency": 1,
- "queueSize": 1,
- "maxOfferToQueueRetryCount": 1,
- "maxPollFromQueueRetryCount": 1
}, - "publishInfo": {
- "qualifyingPath": [
- "string"
], - "supportedActions": [
- "create"
], - "supportedProtocols": [
- "string"
], - "externalFileFormats": [
- "pqt"
]
}
}
}
Create a new environment parameter to be used in the workspace.
ℹ️ NOTE: Admin role is required to use this endpoint.
overrideKey required | string key/name used when overriding the value of the variable |
value required | overrideValueInfoVariable (object) or overrideValueInfoSelector (object) |
{- "overrideKey": "myVar",
- "value": {
- "variable": {
- "value": "myValue"
}
}
}
{- "id": 1,
- "overrideKey": "myVar",
- "value": {
- "variable": {
- "value": "myValue"
}
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "deleted_at": "2019-08-24T14:15:22Z",
- "usageInfo": {
- "runParameters": 1
}
}
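As a sketch, the request sample above could be submitted like this, assuming the endpoint is mounted at /v4/environmentParameters and an admin token is in $TOKEN:
curl -X POST 'https://yourworkspace.cloud.trifacta.com/v4/environmentParameters' \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $TOKEN" \
  -d '{ "overrideKey": "myVar", "value": { "variable": { "value": "myValue" } } }'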
List existing environment parameters
includeUsageInfo | string Include information about where the environment parameter is used. |
filter | string Filter environment parameters using the attached overrideKey |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with the limit parameter. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with the filter parameter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count object. |
{- "data": [
- {
- "id": 1,
- "overrideKey": "myVar",
- "value": {
- "variable": {
- "value": "myValue"
}
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "deleted_at": "2019-08-24T14:15:22Z",
- "usageInfo": {
- "runParameters": 1
}
}
]
}
Import the environment parameters from the given package. A ZIP file as exported by the export environment parameters endpoint is accepted.
This endpoint accepts a multipart/form content type.
Here is how to send the ZIP package using curl:
curl -X POST https://yourworkspace.cloud.trifacta.com/v4/environmentParameters/package \
-H 'authorization: Bearer <api-token>' \
-H 'content-type: multipart/form-data' \
-F 'data=@path/to/environment-parameters-package.zip'
The response lists the objects that have been created.
ℹ️ NOTE: Admin role is required to use this endpoint.
fromUI | boolean If true, will return the list of imported environment parameters for confirmation. |
{ }
{- "data": [
- {
- "id": 1,
- "overrideKey": "myVar",
- "value": {
- "variable": {
- "value": "myValue"
}
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "deleted_at": "2019-08-24T14:15:22Z",
- "usageInfo": {
- "runParameters": 1
}
}
]
}
Retrieve a package containing the list of environment parameters.
Response body is the contents of the package. Package contents are a ZIPped version of the list of environment parameters.
The environment parameters package can be used to import the environment parameters in another environment.
ℹ️ NOTE: Admin role is required to use this endpoint.
hideSecrets | boolean If included, the secret values will be hidden. |
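As a sketch, the package could be downloaded to a file like this, assuming the export endpoint mirrors the import path shown above and an admin token is in $TOKEN:
curl 'https://yourworkspace.cloud.trifacta.com/v4/environmentParameters/package' \
  -H "Authorization: Bearer $TOKEN" \
  -o environment-parameters-package.zip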
Get an existing environment parameter
ℹ️ NOTE: Admin role is required to use this endpoint.
id required | integer |
includeUsageInfo | string Include information about where the environment parameter is used. |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
{- "id": 1,
- "overrideKey": "myVar",
- "value": {
- "variable": {
- "value": "myValue"
}
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "deleted_at": "2019-08-24T14:15:22Z",
- "usageInfo": {
- "runParameters": 1
}
}
A container for wrangling logic. Contains imported datasets, recipes, output objects, and references.
Create a new flow with specified name and optional description and target folder.
ℹ️ NOTE: You cannot add datasets to the flow through this endpoint. Moving pre-existing datasets into a flow is not supported in this release. Create the flow first and then when you create the datasets, associate them with the flow at the time of creation.
ref: createFlow
name | string Display name of the flow. |
description | string User-friendly description for the flow. |
settings | object Settings for the flow. |
incrementName | boolean Default: false Increment the flow name if a similar flow name already exists |
folderId | integer Internal identifier for a Flow folder. |
{- "name": "string",
- "description": "string",
- "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "incrementName": false,
- "folderId": 1
}
{- "name": "string",
- "description": "string",
- "folder": {
- "id": 1
}, - "id": 1,
- "defaultOutputDir": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "workspace": {
- "id": 1
}, - "flowState": {
- "isOpened": true,
- "flow": {
- "id": 1
}, - "person": {
- "id": 1
}, - "zoom": 0,
- "offsetX": 0,
- "offsetY": 0
}
}
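As a sketch, a flow could be created like this, assuming the createFlow endpoint is mounted at /v4/flows and $TOKEN holds a valid access token:
curl -X POST 'https://yourworkspace.cloud.trifacta.com/v4/flows' \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $TOKEN" \
  -d '{ "name": "my flow", "description": "created via API" }'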
List existing flows
ref: listFlows
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with the limit parameter. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with the filter parameter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count object. |
folderId | integer Only show flows from this folder |
flowsFilter | string Which types of flows to list. One of ['all', 'shared', 'owned'] |
{- "data": [
- {
- "name": "string",
- "description": "string",
- "folder": {
- "id": 1
}, - "id": 1,
- "defaultOutputDir": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "workspace": {
- "id": 1
}, - "flowState": {
- "isOpened": true,
- "flow": {
- "id": 1
}, - "person": {
- "id": 1
}, - "zoom": 0,
- "offsetX": 0,
- "offsetY": 0
}
}
], - "count": 1
}
Import all flows from the given package. A ZIP file as exported by the export Flow endpoint is accepted.
Before you import, you can perform a dry-run to check for errors. See Import Flow package - Dry run.
This endpoint accepts a multipart/form content type.
Here is how to send the ZIP package using curl:
curl -X POST https://yourworkspace.cloud.trifacta.com/v4/flows/package \
-H 'authorization: Bearer <api-token>' \
-H 'content-type: multipart/form-data' \
-F 'data=@path/to/flow-package.zip'
The response lists the objects that have been created.
ref: importPackage
folderId | integer |
fromUI | boolean If true, will return the list of imported environment parameters for confirmation if any are referenced in the flow. |
overrideJsUdfs | boolean If true, will override the conflicting JS UDFs in the target environment, which impacts all existing flows that reference them. |
File required | object (importFlowPackageRequestZip) An exported flow zip file. |
Array of environmentParameterMappingToExistingEnvParam (object) or environmentParameterMappingToManualValue (object) (environmentParameterMapping) [ items ] | |
Array of objects (connectionIdMapping) [ items ] |
{- "deletedObjects": { },
- "createdObjectMapping": { },
- "importRuleChanges": {
- "object": [
- { }
], - "value": [
- { }
]
}, - "primaryFlowIds": [
- 1
], - "flows": [
- {
- "name": "string",
- "description": "string",
- "folder": {
- "id": 1
}, - "id": 1,
- "defaultOutputDir": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "workspace": {
- "id": 1
}, - "flowState": {
- "isOpened": true,
- "flow": {
- "id": 1
}, - "person": {
- "id": 1
}, - "zoom": 0,
- "offsetX": 0,
- "offsetY": 0
}
}
], - "datasources": [
- {
- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
], - "flownodes": [
- {
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "recipe": {
- "id": 1
}, - "activeSample": {
- "id": 1
}, - "wrangled": true
}
], - "flowedges": [
- {
- "inPortId": 1,
- "outPortId": 1,
- "inputFlowNode": {
- "id": 1
}, - "outputFlowNode": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
], - "recipes": [
- {
- "name": "string",
- "description": "string",
- "active": true,
- "nextPortId": 1,
- "currentEdit": {
- "id": 1
}, - "redoLeafEdit": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
], - "outputobjects": [
- {
- "execution": "photon",
- "profiler": true,
- "isAdhoc": true,
- "flownode": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "name": "string",
- "description": "string"
}
], - "webhookflowtasks": [
- { }
], - "release": { }
}
Test importing flow package and return information about what objects would be created.
The same payload as for Import Flow package is expected.
ref: importPackageDryRun
folderId | integer |
File required | object (importFlowPackageRequestZip) An exported flow zip file. |
Array of environmentParameterMappingToExistingEnvParam (object) or environmentParameterMappingToManualValue (object) (environmentParameterMapping) [ items ] | |
Array of objects (connectionIdMapping) [ items ] |
{- "deletedObjects": { },
- "createdObjectMapping": { },
- "importRuleChanges": {
- "object": [
- { }
], - "value": [
- { }
]
}, - "primaryFlowIds": [
- 1
], - "flows": [
- {
- "name": "string",
- "description": "string",
- "folder": {
- "id": 1
}, - "id": 1,
- "defaultOutputDir": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "workspace": {
- "id": 1
}, - "flowState": {
- "isOpened": true,
- "flow": {
- "id": 1
}, - "person": {
- "id": 1
}, - "zoom": 0,
- "offsetX": 0,
- "offsetY": 0
}
}
], - "datasources": [
- {
- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
], - "flownodes": [
- {
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "recipe": {
- "id": 1
}, - "activeSample": {
- "id": 1
}, - "wrangled": true
}
], - "flowedges": [
- {
- "inPortId": 1,
- "outPortId": 1,
- "inputFlowNode": {
- "id": 1
}, - "outputFlowNode": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
], - "recipes": [
- {
- "name": "string",
- "description": "string",
- "active": true,
- "nextPortId": 1,
- "currentEdit": {
- "id": 1
}, - "redoLeafEdit": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
], - "outputobjects": [
- {
- "execution": "photon",
- "profiler": true,
- "isAdhoc": true,
- "flownode": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "name": "string",
- "description": "string"
}
], - "webhookflowtasks": [
- { }
], - "release": { }
}
Create a copy of this flow, as well as all contained recipes.
ref: copyFlow
id required | integer |
name | string name of the new copied flow. |
description | string description of the new copied flow. |
copyDatasources | boolean Default: false If true, Data sources will be copied (i.e. new imported datasets will be created, no data is copied on the file system). Otherwise, the existing imported datasets are reused. |
{- "name": "string",
- "description": "string",
- "copyDatasources": false
}
{- "name": "string",
- "description": "string",
- "folder": {
- "id": 1
}, - "id": 1,
- "defaultOutputDir": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "workspace": {
- "id": 1
}, - "flowState": {
- "isOpened": true,
- "flow": {
- "id": 1
}, - "person": {
- "id": 1
}, - "zoom": 0,
- "offsetX": 0,
- "offsetY": 0
}
}
Run all adhoc destinations in a flow.
(deprecated) If a scheduleExecutionId is provided, run all scheduled destinations in the flow.
The request body can stay empty. You can optionally pass parameters:
{
"runParameters": {
"overrides": {
"data": [{"key": "varRegion", "value": "02"}]
}
}
}
You can also pass Spark options that will be used for the job run.
{
"sparkOptions": [
{"key": "spark.executor.memory", "value": "4GB"}
]
}
Using recipe identifiers, you can specify a subset of outputs in the flow to run. See runJobGroup for more information on specifying wrangledDataset.
{"wrangledDatasetIds": [2, 3]}
You can also override each output in the flow using the recipe name.
{
"overrides": {
"my recipe name": {
"profiler": true,
"writesettings": [
{
"path": "<path_to_output_file>",
"action": "create",
"format": "csv",
"compression": "none",
"header": false,
"asSingleFile": false
}
]
}
}
}
An array of jobGroup results is returned. Use the flowRunId if you want to track the status of the flow run. See Get Flow Run Status for more information.
Quotas:
20 req./user/min, 40 req./workspace/min
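As a sketch, a flow run with a variable override could be triggered like this, assuming flow id 42, the runFlow endpoint mounted at /v4/flows/:id/run, and a valid token in $TOKEN:
curl -X POST 'https://yourworkspace.cloud.trifacta.com/v4/flows/42/run' \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $TOKEN" \
  -d '{ "runParameters": { "overrides": { "data": [{ "key": "varRegion", "value": "02" }] } } }'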
ref: runFlow
id required | integer |
runAsync | boolean Uses a queue to run individual jobGroups asynchronously and returns immediately. Default value is false. |
x-execution-id | string Example: f9cab740-50b7-11e9-ba15-93c82271a00b Optional header to safely retry the request without accidentally performing the same operation twice. If a FlowRun with the same |
ignoreRecipeErrors | boolean Setting this flag to true means the job runs even if there are upstream recipe errors. Setting it to false causes the request to fail on recipe errors. |
runParameters | object (runParameterOverrides) Allows overriding parameters that are defined in the flow on datasets or outputs, for example. |
integer or string | |
Array of objects (outputObjectSparkOptionUpdateRequest) [ items ] | |
object (outputObjectSchemaDriftOptionsUpdateRequest) | |
Array of objects (databricksOptionsUpdateRequest) [ items ] | |
execution | string Enum: "photon" "emrSpark" Execution language. Indicates on which engine the job was executed. Can be null/missing for scheduled jobs that fail during the validation phase. |
Array of integers or strings[ items ] Subset of outputs (identified by identifier of the recipe preceding the output) in this flow to run. When empty or unspecified, all outputs in the flow will be run. | |
overrides | object Overrides for each of the output object. Use the recipe name to specify the overrides. |
{ }
{- "flowRunId": 1,
- "data": [
- {
- "id": 1,
- "flowRun": {
- "id": 1
}, - "jobs": {
- "data": [
- {
- "id": 1
}
]
}, - "jobGraph": {
- "edges": [
- {
- "source": 1,
- "target": 1
}
], - "vertices": [
- 1
]
}, - "reason": "Job started",
- "sessionId": "f9cab740-50b7-11e9-ba15-93c82271a00b"
}
]
}
Count existing flows
ref: countFlows
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with the limit parameter. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with the filter parameter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count object. |
folderId | integer Only show flows from this folder |
flowsFilter | string Which types of flows to count. One of ['all', 'shared', 'owned'] |
{- "count": 1
}
Get an existing flow
ref: getFlow
id required | integer
fields | string Example: fields=id;name;description Semicolon-separated list of fields.
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information.
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects.
{- "name": "string",
- "description": "string",
- "folder": {
- "id": 1
}, - "id": 1,
- "defaultOutputDir": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "workspace": {
- "id": 1
}, - "flowState": {
- "isOpened": true,
- "flow": {
- "id": 1
}, - "person": {
- "id": 1
}, - "zoom": 0,
- "offsetX": 0,
- "offsetY": 0
}
}
Update an existing flow based on the specified identifier.
ℹ️ NOTE: You cannot add datasets to the flow through this endpoint. Moving pre-existing datasets into a flow is not supported in this release. Create the flow first, and then associate datasets with the flow when you create them.
ref: patchFlow
id required | integer
name | string Display name of the flow.
description | string User-friendly description for the flow.
settings | object Settings for the flow.
folderId | integer Internal identifier for a flow folder.
{- "name": "string",
- "description": "string",
- "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "folderId": 1
}
{- "id": 1,
- "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Delete an existing flow
ref: deleteFlow
id required | integer |
Retrieve a package containing the definition of the specified flow.
The response body is the contents of the package, which is a zipped version of the flow definition.
The flow package can be used to import the flow in another environment. See Import Flow Package for more information.
Quotas:
40 req./user/min, 50 req./workspace/min
ref: getFlowPackage
id required | integer |
comment | string Comment to be displayed when the flow is imported in a deployment package.
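A short Python sketch of downloading the package follows; the /v4/flows/{id}/package path is an assumption based on the endpoint name, and the base URL and token are placeholders.

import requests

BASE_URL = "https://example.com"               # placeholder instance URL
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder access token

flow_id = 10
# The response body is the zipped flow definition, so write it to disk as binary.
resp = requests.get(f"{BASE_URL}/v4/flows/{flow_id}/package", headers=HEADERS)
resp.raise_for_status()
with open(f"flow-{flow_id}-package.zip", "wb") as f:
    f.write(resp.content)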
Performs a dry run of generating and exporting a flow package, checking all permissions required to export the package.
Any permission errors are reported in the response.
Quotas:
20 req./user/min, 40 req./workspace/min
ref: getFlowPackageDryRun
id required | integer |
List flows, with special filtering behavior
ref: listFlowsLibrary
fields | string Example: fields=id;name;description Semicolon-separated list of fields.
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information.
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects.
limit | integer Default: 25 Maximum number of objects to fetch.
offset | integer Offset after which to start returning objects. For use with limit.
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter.
sort | string Example: sort=-createdAt Defines the sort order for returned objects.
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against.
filter | string Example: filter=my-object Value for filtering objects. See filterFields.
includeCount | boolean If true, the response includes the total number of objects as a count object.
flowsFilter | string Which types of flows to list. One of ['all', 'shared', 'owned'].
{- "data": [
- {
- "name": "string",
- "description": "string",
- "folder": {
- "id": 1
}, - "id": 1,
- "defaultOutputDir": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "workspace": {
- "id": 1
}, - "flowState": {
- "isOpened": true,
- "flow": {
- "id": 1
}, - "person": {
- "id": 1
}, - "zoom": 0,
- "offsetX": 0,
- "offsetY": 0
}
}
], - "count": {
- "flow": 1,
- "folder": 1,
- "all": 1
}
}
Count flows, with special filtering behavior
ref: countFlowsLibrary
fields | string Example: fields=id;name;description Semicolon-separated list of fields.
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information.
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects.
limit | integer Default: 25 Maximum number of objects to fetch.
offset | integer Offset after which to start returning objects. For use with limit.
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter.
sort | string Example: sort=-createdAt Defines the sort order for returned objects.
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against.
filter | string Example: filter=my-object Value for filtering objects. See filterFields.
includeCount | boolean If true, the response includes the total number of objects as a count object.
flowsFilter | string Which types of flows to count. One of ['all', 'shared', 'owned'].
{- "count": {
- "flow": 1,
- "folder": 1,
- "all": 1
}
}
List all the inputs of a flow. Also includes data sources that are present in referenced flows.
ref: getFlowInputs
id required | integer |
{- "data": [
- {
- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
], - "count": 1
}
List all the outputs of a Flow.
ref: getFlowOutputs
id required | integer |
{- "data": [
- {
- "execution": "photon",
- "profiler": true,
- "isAdhoc": true,
- "flownode": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "name": "string",
- "description": "string"
}
], - "count": 1
}
Get all flows contained in this folder.
ref: getFlowsForFolder
id required | integer
fields | string Example: fields=id;name;description Semicolon-separated list of fields.
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information.
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects.
limit | integer Default: 25 Maximum number of objects to fetch.
offset | integer Offset after which to start returning objects. For use with limit.
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter.
sort | string Example: sort=-createdAt Defines the sort order for returned objects.
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against.
filter | string Example: filter=my-object Value for filtering objects. See filterFields.
includeCount | boolean If true, the response includes the total number of objects as a count object.
flowsFilter | string Which types of flows to list. One of ['all', 'shared', 'owned'].
{- "data": [
- {
- "name": "string",
- "description": "string",
- "folder": {
- "id": 1
}, - "id": 1,
- "defaultOutputDir": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "workspace": {
- "id": 1
}, - "flowState": {
- "isOpened": true,
- "flow": {
- "id": 1
}, - "person": {
- "id": 1
}, - "zoom": 0,
- "offsetX": 0,
- "offsetY": 0
}
}
], - "count": 1
}
Get the count of flows contained in this folder.
id required | integer
fields | string Example: fields=id;name;description Semicolon-separated list of fields.
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information.
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects.
limit | integer Default: 25 Maximum number of objects to fetch.
offset | integer Offset after which to start returning objects. For use with limit.
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter.
sort | string Example: sort=-createdAt Defines the sort order for returned objects.
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against.
filter | string Example: filter=my-object Value for filtering objects. See filterFields.
includeCount | boolean If true, the response includes the total number of objects as a count object.
flowsFilter | string Which types of flows to count. One of ['all', 'shared', 'owned'].
{- "count": 1
}
Replace the dataset or the specified wrangled dataset (flow node) in the flow with a new imported or wrangled dataset. This performs the same action as the "Replace" action in the flow UI.
You can get the flow node id (wrangled dataset id) and the imported dataset id from the URL when clicking on a node in the UI.
ref: replaceDatasetInFlow
id required | integer
flowNodeId required | integer or string
newImportedDatasetId required | integer or string
{- "flowNodeId": 1,
- "newImportedDatasetId": 1
}
{- "newInputNode": {
- "id": 1,
- "scriptId": 1,
- "flowId": 1
}, - "outputNodeEdges": [
- {
- "id": 1,
- "flowId": 1,
- "inFlowNodeId": 1,
- "outFlowNodeId": 1
}
]
}
A placeholder for an object in a flow. Can represent an imported dataset, a recipe, or a reference.
Create edges between nodes
ref: commitEdges
id required | integer
updateInfo required | object
{- "updateInfo": {
- "deleteOrphaned": true,
- "newEdges": [
- {
- "outPortId": 1,
- "inPortId": 1,
- "outFlowNodeId": 1,
- "inFlowNodeId": 1
}
], - "edgesToRevive": [
- {
- "id": 1
}
], - "portsToDelete": [
- {
- "id": 1
}
]
}
}
{- "data": [
- {
- "inPortId": 1,
- "outPortId": 1,
- "inputFlowNode": {
- "id": 1
}, - "outputFlowNode": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
]
}
Notification settings for a flow.
Create new flow notification settings
onJobFailure required | string Enum: "all" "scheduled" "adhoc" "never" "default" On-job-failure trigger condition.
onJobSuccess required | string Enum: "all" "scheduled" "adhoc" "never" "default" On-job-success trigger condition.
flowId required | integer |
{- "onJobFailure": "all",
- "onJobSuccess": "all",
- "flowId": 1
}
{- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "onJobFailure": "all",
- "onJobSuccess": "all",
- "flow": {
- "id": 1
}
}
An object representing a flow run.
Get an existing flow run
ref: getFlowRun
id required | integer
fields | string Example: fields=id;name;description Semicolon-separated list of fields.
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information.
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects.
{- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "scheduleExecutionId": 1,
- "requestId": "string",
- "flow": {
- "id": 1
}
}
Get the status of a Flow Run. It combines the status of the underlying Job Groups.
ref: getFlowRunStatus
id required | integer |
"Complete"
Get the list of jobGroups.
ref: getFlowRunJobGroups
id required | integer
fields | string Example: fields=id;name;description Semicolon-separated list of fields.
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information.
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects.
{- "data": [
- {
- "name": "string",
- "description": "string",
- "ranfrom": "ui",
- "ranfor": "recipe",
- "status": "Complete",
- "profilingEnabled": true,
- "runParameterReferenceDate": "2019-08-24T14:15:22Z",
- "snapshot": {
- "id": 1
}, - "wrangledDataset": {
- "id": 1
}, - "flowrun": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
], - "count": 1
}
Used to override the default value of a run parameter in a flow.
Create a new flow run parameter override
flowId required | number
overrideKey required | string Key/name used when overriding the value of the variable.
value required | overrideValueInfoVariable (object) or overrideValueInfoSelector (object)
{- "overrideKey": "myVar",
- "value": {
- "variable": {
- "value": "myValue"
}
}, - "flowId": 0
}
{- "id": 1,
- "flowId": 1,
- "overrideKey": "string",
- "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Get an existing flow run parameter override
id required | integer
fields | string Example: fields=id;name;description Semicolon-separated list of fields.
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information.
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects.
{- "overrideKey": "myVar",
- "value": {
- "variable": {
- "value": "myValue"
}
}, - "flow": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Patch an existing flow run parameter override
id required | integer
overrideKey | string Key/name used when overriding the value of the variable.
value | overrideValueInfoVariable (object) or overrideValueInfoSelector (object)
{- "overrideKey": "myVar",
- "value": {
- "variable": {
- "value": "myValue"
}
}
}
{- "id": 1,
- "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
A collection of flows, useful for organization.
Create a new folder
ref: createFolder
name | string Display name of the folder. |
description | string User-friendly description for the folder. |
{- "name": "string",
- "description": "string"
}
{- "name": "string",
- "description": "string",
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "associatedPeople": [
- { }
]
}
List existing folders
ref: listFolders
fields | string Example: fields=id;name;description Semicolon-separated list of fields.
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information.
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects.
limit | integer Default: 25 Maximum number of objects to fetch.
offset | integer Offset after which to start returning objects. For use with limit.
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter.
sort | string Example: sort=-createdAt Defines the sort order for returned objects.
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against.
filter | string Example: filter=my-object Value for filtering objects. See filterFields.
includeCount | boolean If true, the response includes the total number of objects as a count object.
{- "data": [
- {
- "name": "string",
- "description": "string",
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "associatedPeople": [
- { }
]
}
], - "count": 1
}
Count existing folders
ref: countFolders
fields | string Example: fields=id;name;description Semicolon-separated list of fields.
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information.
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects.
limit | integer Default: 25 Maximum number of objects to fetch.
offset | integer Offset after which to start returning objects. For use with limit.
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter.
sort | string Example: sort=-createdAt Defines the sort order for returned objects.
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against.
filter | string Example: filter=my-object Value for filtering objects. See filterFields.
includeCount | boolean If true, the response includes the total number of objects as a count object.
{- "count": 1
}
Get all flows contained in this folder.
ref: getFlowsForFolder
id required | integer
fields | string Example: fields=id;name;description Semicolon-separated list of fields.
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information.
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects.
limit | integer Default: 25 Maximum number of objects to fetch.
offset | integer Offset after which to start returning objects. For use with limit.
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter.
sort | string Example: sort=-createdAt Defines the sort order for returned objects.
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against.
filter | string Example: filter=my-object Value for filtering objects. See filterFields.
includeCount | boolean If true, the response includes the total number of objects as a count object.
flowsFilter | string Which types of flows to list. One of ['all', 'shared', 'owned'].
{- "data": [
- {
- "name": "string",
- "description": "string",
- "folder": {
- "id": 1
}, - "id": 1,
- "defaultOutputDir": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "workspace": {
- "id": 1
}, - "flowState": {
- "isOpened": true,
- "flow": {
- "id": 1
}, - "person": {
- "id": 1
}, - "zoom": 0,
- "offsetX": 0,
- "offsetY": 0
}
}
], - "count": 1
}
Get the count of flows contained in this folder.
id required | integer
fields | string Example: fields=id;name;description Semicolon-separated list of fields.
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information.
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects.
limit | integer Default: 25 Maximum number of objects to fetch.
offset | integer Offset after which to start returning objects. For use with limit.
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter.
sort | string Example: sort=-createdAt Defines the sort order for returned objects.
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against.
filter | string Example: filter=my-object Value for filtering objects. See filterFields.
includeCount | boolean If true, the response includes the total number of objects as a count object.
flowsFilter | string Which types of flows to count. One of ['all', 'shared', 'owned'].
{- "count": 1
}
Patch an existing folder
ref: patchFolder
id required | integer |
name | string Display name of the folder. |
description | string User-friendly description for the folder. |
{- "name": "string",
- "description": "string"
}
{- "id": 1,
- "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Delete an existing folder
ref: deleteFolder
id required | integer |
An object representing data loaded into Designer Cloud Powered by Trifacta, as well as any structuring that has been applied to it. Imported datasets are the starting point for wrangling, and can be used in multiple flows.
Create an imported dataset from an available resource. The created dataset is owned by the authenticated user.
In general, importing a file is done using the following payload:
{
"uri": "protocol://path-to-file",
"name": "my dataset",
"detectStructure": true
}
See more examples in the Request Samples section.
✅ TIP: When an imported dataset is created via API, it is always imported as an unstructured dataset by default. To import a dataset with the inferred recipe, add detectStructure: true in the payload.
ℹ️ NOTE: Do not create an imported dataset from a file that is being used by another imported dataset. If you delete the newly created imported dataset, the file is removed, and the other dataset is corrupted. Use a new file or make a copy of the file first.
ℹ️ NOTE: Importing a Microsoft Excel file or a file that needs to be converted using the API is not supported yet.
Quotas:
40 req./user/min, 60 req./workspace/min
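As a sketch, the payload above can be submitted from Python as follows. The /v4/importedDatasets path is inferred from the resource naming conventions in this document, and the base URL and token are placeholders.

import requests

BASE_URL = "https://example.com"        # placeholder instance URL
HEADERS = {
    "Authorization": "Bearer <token>",  # placeholder access token
    "Content-Type": "application/json",
    "Accept": "application/json",
}

payload = {
    "uri": "protocol://path-to-file",   # location readable by the platform
    "name": "my dataset",
    "detectStructure": True,            # infer structure instead of importing unstructured
}
resp = requests.post(f"{BASE_URL}/v4/importedDatasets", headers=HEADERS, json=payload)
resp.raise_for_status()
dataset_id = resp.json()["id"]          # internal id, returned on 201 Created
print("Created imported dataset", dataset_id)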
name required | string Display name of the imported dataset.
uri required | string Dataset URI.
id | integer Unique identifier for this object.
jobStatus | string
jobgroupId | string
visible | boolean
isPending | boolean
numFlows | integer
bucketName | string
dynamicBucket | string
dynamicHost | string
dynamicUserInfo | string
description | string User-friendly description for the imported dataset.
disableTypeInference | boolean Only applicable to relational sources (e.g. database tables/views). Prevents Designer Cloud Powered by Trifacta type inference from running and inferring types by looking at the first rows of the dataset.
type | string Indicates the type of dataset. If not specified, the default storage protocol is used.
isConverted | boolean Indicates if the imported dataset is converted. This is the case for Microsoft Excel datasets, for example.
isDynamic | boolean Default: false Indicates if the datasource is parameterized. In that case, a dynamicPath should be provided.
host | string Host for the dataset.
userinfo | string User info for the dataset.
mimeType | string Should be set to "application/vnd.google-apps.spreadsheet" when importing Google Sheets.
detectStructure | boolean Default: false Indicates if a parsing script should be inferred when importing the dataset. By default, the dataset is imported unstructured.
dynamicPath | string Path used when resolving the parameters. It is used when running a job or collecting a sample. It is different from the one used as a storage location, which corresponds to the first match. The latter is used when doing a fast preview in the UI.
encoding | string Default: "UTF-8" Optional dataset encoding.
sanitizeColumnNames | boolean Default: false Indicates whether the column names in the imported file should be sanitized.
ensureHeader | boolean If provided, forces the first-row header toggle.
runParameters | Array of objects (runParameterFileBasedInfo) Description of the dataset parameters if the dataset is parameterized.
{- "uri": "protocol://path-to-file",
- "name": "my dataset",
- "detectStructure": true
}
{- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
Add the specified imported dataset to a flow based on its internal identifier.
ℹ️ NOTE: Datasets can be added to flows based on the permissions of the access token used on this endpoint. Datasets can be added to flows that are shared by the user.
id required | integer |
flow required | object The flow to add this dataset to.
{- "flow": {
- "id": 1
}
}
{- "flow": {
- "id": 1
}, - "recipe": {
- "id": 1
}, - "activeSample": {
- "id": 1
}, - "wrangled": true
}
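A hedged Python sketch of this call follows, assuming the endpoint lives at /v4/importedDatasets/{id}/addToFlow (the exact path is not shown above); base URL and token are placeholders.

import requests

BASE_URL = "https://example.com"        # placeholder instance URL
HEADERS = {
    "Authorization": "Bearer <token>",  # placeholder access token
    "Content-Type": "application/json",
    "Accept": "application/json",
}

dataset_id = 42                         # id of an existing imported dataset
# Attach the imported dataset to flow 1; the response includes the new recipe node.
resp = requests.post(
    f"{BASE_URL}/v4/importedDatasets/{dataset_id}/addToFlow",  # assumed path
    headers=HEADERS,
    json={"flow": {"id": 1}},
)
resp.raise_for_status()
recipe_id = resp.json()["recipe"]["id"]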
Create a copy of an imported dataset
ref: copyDataSource
id required | integer |
name | string Name of the copied dataset.
{- "name": "string"
}
{- "dynamicPath": "string",
- "isSchematized": true,
- "isDynamic": true,
- "isConverted": true,
- "disableTypeInference": true,
- "hasStructuring": true,
- "hasSchemaErrors": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}
}
Fetches and updates the latest schema of a datasource
ref: asyncRefreshSchema
id required | integer |
{ }
{- "resourceTaskStateId": 1
}
List all the inputs of a flow. Also includes data sources that are present in referenced flows.
ref: getFlowInputs
id required | integer |
{- "data": [
- {
- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
], - "count": 1
}
Get the specified imported dataset.
Use the following embedded reference to include, in the response, data about the connection used to acquire the source dataset if it was created from a custom connection. See embedding resources for more information.
/v4/importedDatasets/{id}?embed=connection
ref: getImportedDataset
id required | integer
fields | string Example: fields=id;name;description Semicolon-separated list of fields.
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information.
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects.
includeAssociatedSubjects | boolean If true, includes entitlement associated subjects in the response.
{- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
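The embedded-connection read above translates to a simple GET with an embed query parameter. A Python sketch, with placeholder base URL, token, and dataset id:

import requests

BASE_URL = "https://example.com"        # placeholder instance URL
HEADERS = {
    "Authorization": "Bearer <token>",  # placeholder access token
    "Accept": "application/json",
}

# Fetch dataset 42 and embed the connection it was imported through, if any.
resp = requests.get(
    f"{BASE_URL}/v4/importedDatasets/42",
    headers=HEADERS,
    params={"embed": "connection"},
)
resp.raise_for_status()
dataset = resp.json()
connection = dataset.get("connection")  # present when the source used a connection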
Modify the specified imported dataset. Name, path, bucket, etc. (for GCS) can be modified.
ℹ️ NOTE: Samples will not be updated for the recipes. This results in the recipes showing samples of the older data.
id required | integer |
id | integer Unique identifier for this object.
jobStatus | string
jobgroupId | string
visible | boolean
isPending | boolean
numFlows | integer
bucketName | string
dynamicBucket | string
dynamicHost | string
dynamicUserInfo | string
name | string Display name of the imported dataset.
description | string User-friendly description for the imported dataset.
disableTypeInference | boolean Only applicable to relational sources (e.g. database tables/views). Prevents Designer Cloud Powered by Trifacta type inference from running and inferring types by looking at the first rows of the dataset.
type | string Indicates the type of dataset. If not specified, the default storage protocol is used.
isConverted | boolean Indicates if the imported dataset is converted. This is the case for Microsoft Excel datasets, for example.
isDynamic | boolean Default: false Indicates if the datasource is parameterized. In that case, a dynamicPath should be provided.
host | string Host for the dataset.
userinfo | string User info for the dataset.
bucket | string The bucket is required if the datasource is stored in a bucket file system.
raw | string Raw SQL query.
path | string
dynamicPath | string Path used when resolving the parameters. It is used when running a job or collecting a sample. It is different from the one used as a storage location, which corresponds to the first match. The latter is used when doing a fast preview in the UI.
runParameters | Array of objects (runParameterInfo)
{- "id": 1,
- "jobStatus": "string",
- "jobgroupId": "string",
- "visible": true,
- "isPending": true,
- "numFlows": 1,
- "bucketName": "string",
- "dynamicBucket": "string",
- "dynamicHost": "string",
- "dynamicUserInfo": "string",
- "name": "My Dataset",
- "description": "string",
- "disableTypeInference": true,
- "type": "string",
- "isConverted": true,
- "isDynamic": false,
- "host": "string",
- "userinfo": "string",
- "bucket": "string",
- "raw": "SELECT * FROM table",
- "path": "string",
- "dynamicPath": "string",
- "runParameters": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}
{- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
Modify the specified imported dataset. Only the name and description properties should be modified.
ref: patchImportedDataset
id required | integer |
name | string Display name of the imported dataset. |
description | string User-friendly description for the imported dataset. |
{- "name": "My Dataset",
- "description": "string"
}
{- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
List all the inputs that are linked to this output object. Also includes data sources that are present in referenced flows.
id required | integer |
{- "data": [
- {
- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
], - "count": 1
}
List Designer Cloud Powered by Trifacta datasets.
This can be used to list both imported and reference datasets throughout the system,
as well as recipes in a given flow.
ref: listDatasetLibrary
required | string or Array of strings Which types of datasets to list. Valid choices are: [all, imported, reference, recipe].
ownershipFilter | string Which set of datasets to list. One of ['all', 'shared', 'owned'].
schematized | boolean If included, filter to only show schematized imported datasets.
currentFlowId | integer Required for including recipes.
datasourceFlowId | integer When included, filter included datasets to only include those associated with the given flow.
limit | integer Default: 25 Maximum number of objects to fetch.
offset | integer Offset after which to start returning objects. For use with limit.
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter.
sort | string Example: sort=-createdAt Defines the sort order for returned objects.
filter | string Example: filter=my-object Value for filtering objects.
includeCount | boolean If true, the response includes the total number of objects as a count object.
flowId | integer When provided, list datasets associated with this flow before other datasets.
userIdFilter | integer Allows an admin to filter datasets based on userId.
includeAssociatedSubjects | boolean If true, includes entitlement associated subjects in the response.
{- "data": [
- {
- "type": "datasource",
- "referenceCount": 1,
- "count": 1,
- "importedDataset": {
- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- null
], - "last": {
- "unit": null,
- "number": null,
- "dow": null
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
}
], - "count": {
- "imported": 1,
- "reference": 1,
- "recipe": 1,
- "all": 1
}
}
Count Designer Cloud Powered by Trifacta datasets. Gives counts for various types of datasets matching the provided filters.
ref: countDatasetLibrary
ownershipFilter | string Which set of datasets to count. One of ['all', 'shared', 'owned'].
schematized | boolean If included, filter to only show schematized imported datasets.
currentFlowId | integer Required for including recipes.
datasourceFlowId | integer When included, filter included datasets to only include those associated with the given flow.
flowId | integer When provided, count datasets associated with this flow before other datasets.
string or Array of strings Which types of datasets to list. Valid choices are: [all, imported, reference, recipe].
filter | string Example: filter=my-object Value for fuzzy-filtering objects.
userIdFilter | integer Allows an admin to filter datasets based on userId.
{- "count": {
- "imported": 1,
- "reference": 1,
- "recipe": 1,
- "all": 1
}
}
An internal object encoding the information necessary to run a part of a Designer Cloud Powered by Trifacta jobGroup.
This is called a "Stage" on the Job Results page in the UI.
Get information about the batch jobs within a Designer Cloud Powered by Trifacta job.
ref: getJobsForJobGroup
id required | integer |
{- "data": [
- {
- "id": 1,
- "status": "Complete",
- "jobType": "wrangle",
- "sampleSize": 1,
- "percentComplete": 1,
- "jobGroup": {
- "id": 1
}, - "errorMessage": {
- "id": 1
}, - "lastHeartbeatAt": "2019-08-24T14:15:22Z",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "executionLanguage": "photon",
- "cpJobId": "string",
- "wranglescript": {
- "id": 1
}, - "emrcluster": {
- "id": 1
}
}
], - "count": 1
}
Get Job Status.
ref: getJobStatus
id required | integer |
"Complete"
Create a jobGroup, which launches the specified job as the authenticated user. This performs the same action as clicking on the Run Job button in the application.
The request specification varies depending on how the job should run. If you need to change outputs or other settings for a specific run, you must specify overrides when running the job. See the example with overrides for more information.
ℹ️ NOTE: Override values applied to a job are not validated. Invalid overrides may cause your job to fail.
To run a job, you just specify the recipe identifier (wrangledDataset.id). If the job is successful, all defined outputs are generated, as defined in the outputObject, publications, and writeSettings objects associated with the recipe.
✅ TIP: To identify the wrangledDataset id, select the recipe icon in the flow view and take the id shown in the URL. For example, if the URL is /flows/10?recipe=7, the wrangledDataset id is 7.
{"wrangledDataset": {"id": 7}}
If you must change some outputs or other settings for the specific job, you can insert these changes in the overrides section of the request. In the example below, the running environment, profiling option, and writeSettings for the job are modified for this execution.
{
"wrangledDataset": {"id": 1},
"overrides": {
"execution": "spark",
"profiler": false,
"writesettings": [
{
"path": "<path_to_output_file>",
"action": "create",
"format": "csv",
"compression": "none",
"header": false,
"asSingleFile": false
}
]
}
}
If you have created a dataset with parameters, you can specify overrides for parameter values during execution through the APIs. Through this method, you can iterate job executions across all matching sources of a parameterized dataset.
In the example below, the runParameters override has been specified for the country
. In this case, the value "Germany" is inserted for the specified variable as part of the job execution.
{
"wrangledDataset": {"id": 33},
"runParameters": {
"overrides": {
"data": [{"key": "country", "value": "Germany"}]
}
}
}
The response contains a list of jobs which can be used to get a granular status of the JobGroup completion.
The jobGraph indicates the dependency between each of the jobs.
{
"sessionId": "79276c31-c58c-4e79-ae5e-fed1a25ebca1",
"reason": "JobStarted",
"jobGraph": {
"vertices": [21, 22],
"edges": [{"source": 21, "target": 22}]
},
"id": 9,
"jobs": {"data": [{"id": 21}, {"id": 22}]}
}
When you create a new jobGroup through the APIs, the internal jobGroup identifier is returned in the response. Retain this identifier for future use. You can also acquire the jobGroup identifier from the application. In the Jobs page, the internal identifier for the jobGroup is the value in the left column.
Quotas:
30 req./user/min, 60 req./workspace/min
ref: runJobGroup
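A minimal Python sketch tying this together follows. The /v4/jobGroups creation path is assumed to match the /v4/jobGroups/{id}/status path shown later in this section; base URL and token are placeholders, and terminal statuses other than Complete are assumptions.

import time
import requests

BASE_URL = "https://example.com"        # placeholder instance URL
HEADERS = {
    "Authorization": "Bearer <token>",  # placeholder access token
    "Content-Type": "application/json",
    "Accept": "application/json",
}

# Launch the job for recipe (wrangledDataset) 7.
resp = requests.post(
    f"{BASE_URL}/v4/jobGroups", headers=HEADERS, json={"wrangledDataset": {"id": 7}}
)
resp.raise_for_status()
job_group_id = resp.json()["id"]        # retain this identifier for future use

# Poll the jobGroup status until it reaches a terminal state.
while True:
    status = requests.get(
        f"{BASE_URL}/v4/jobGroups/{job_group_id}/status", headers=HEADERS
    ).json()
    if status in ("Complete", "Failed", "Canceled"):  # terminal states (assumed)
        break
    time.sleep(10)
print("JobGroup", job_group_id, "finished with status:", status)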
x-execution-id | string Example: f9cab740-50b7-11e9-ba15-93c82271a00b Optional header to safely retry the request without accidentally performing the same operation twice. If a JobGroup with the same x-execution-id already exists, the request is not executed again.
wrangledDataset required | object The identifier for the recipe you would like to run.
forceCacheUpdate | boolean Setting this flag to true will invalidate any cached datasources. This only applies to SQL datasets.
ignoreRecipeErrors | boolean Default: false Setting this flag to true means the job runs even if there are upstream recipe errors. Setting it to false causes the request to fail on recipe errors.
testMode | boolean Setting this flag to true will not run the job but only perform some validations.
runParameters | object (runParameterOverrides) Allows overriding parameters that are defined in the flow, e.g. on datasets or outputs.
workspaceId | integer Internal. Does not need to be specified.
overrides | object Allows overriding execution settings that are set on the output object.
ranfrom | string Enum: "ui" "schedule" "api" Where the job was executed from. Does not need to be specified when using the API.
{- "wrangledDataset": {
- "id": 7
}
}
{- "sessionId": "79276c31-c58c-4e79-ae5e-fed1a25ebca1",
- "reason": "JobStarted",
- "jobGraph": {
- "vertices": [
- 21,
- 22
], - "edges": [
- {
- "source": 21,
- "target": 22
}
]
}, - "id": 9,
- "jobs": {
- "data": [
- {
- "id": 21
}, - {
- "id": 22
}
]
}
}
Deprecated. Use listJobLibrary instead.
ref: listJobGroups
fields | string Example: fields=id;name;description Semicolon-separated list of fields.
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information.
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects.
limit | integer Default: 25 Maximum number of objects to fetch.
offset | integer Offset after which to start returning objects. For use with limit.
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter.
sort | string Example: sort=-createdAt Defines the sort order for returned objects.
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against.
filter | string Example: filter=my-object Value for filtering objects. See filterFields.
includeCount | boolean If true, the response includes the total number of objects as a count object.
flowNodeId | integer
ranfor | string Default: "recipe,plan" Filter jobs based on their type.
{- "data": [
- {
- "name": "string",
- "description": "string",
- "ranfrom": "ui",
- "ranfor": "recipe",
- "status": "Complete",
- "profilingEnabled": true,
- "runParameterReferenceDate": "2019-08-24T14:15:22Z",
- "snapshot": {
- "id": 1
}, - "wrangledDataset": {
- "id": 1
}, - "flowrun": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
], - "count": 1
}
Cancel the execution of a running Designer Cloud Powered by Trifacta jobGroup.
ℹ️ NOTE: If the job has completed, this endpoint does nothing.
ref: cancelJobGroup
id required | integer |
{ }
{- "jobIds": [
- 1
], - "jobgroupId": 1
}
Get the specified jobGroup.
A job group is a job that is executed from a specific node in a flow; it may contain several jobs (for example, a transformation job and a profiling job).
It is possible to only get the current status for a jobGroup:
/v4/jobGroups/{id}/status
In that case, the response status would simply be a string:
"Complete"
If you wish to also get the related jobs and wrangledDataset, you can use embed. See embedding resources for more information.
/v4/jobGroups/{id}?embed=jobs,wrangledDataset
ref: getJobGroup
id required | integer
fields | string Example: fields=id;name;description Semicolon-separated list of fields.
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information.
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects.
{- "name": "string",
- "description": "string",
- "ranfrom": "ui",
- "ranfor": "recipe",
- "status": "Complete",
- "profilingEnabled": true,
- "runParameterReferenceDate": "2019-08-24T14:15:22Z",
- "snapshot": {
- "id": 1
}, - "wrangledDataset": {
- "id": 1
}, - "flowrun": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
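The embedded fetch described above can be issued from Python as follows (placeholder base URL, token, and jobGroup id):

import requests

BASE_URL = "https://example.com"        # placeholder instance URL
HEADERS = {"Authorization": "Bearer <token>", "Accept": "application/json"}

# Fetch jobGroup 9 together with its jobs and the recipe it ran for.
resp = requests.get(
    f"{BASE_URL}/v4/jobGroups/9",
    headers=HEADERS,
    params={"embed": "jobs,wrangledDataset"},
)
resp.raise_for_status()
job_group = resp.json()
job_ids = [job["id"] for job in job_group["jobs"]["data"]]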
id required | integer |
{- "profilerTypeCheckHistograms": {
- "property1": [
- {
- "key": "VALID",
- "count": 1
}
], - "property2": [
- {
- "key": "VALID",
- "count": 1
}
]
}, - "profilerValidValueHistograms": {
- "property1": [
- {
- "min": 0,
- "max": 0,
- "roundMin": 0,
- "roundMax": 0,
- "buckets": [
- {
- "pos": 1,
- "b": 1
}
], - "quartiles": {
- "q1": 0,
- "q2": 0,
- "q3": 0
}
}
], - "property2": [
- {
- "min": 0,
- "max": 0,
- "roundMin": 0,
- "roundMax": 0,
- "buckets": [
- {
- "pos": 1,
- "b": 1
}
], - "quartiles": {
- "q1": 0,
- "q2": 0,
- "q3": 0
}
}
]
}, - "profilerRules": {
- "property1": [
- {
- "id": 1,
- "type": "string",
- "comment": "string",
- "description": "string",
- "status": "pass",
- "updatedAt": "string",
- "passCount": 1,
- "failCount": 1,
- "totalCount": 1
}
], - "property2": [
- {
- "id": 1,
- "type": "string",
- "comment": "string",
- "description": "string",
- "status": "pass",
- "updatedAt": "string",
- "passCount": 1,
- "failCount": 1,
- "totalCount": 1
}
]
}, - "columnTypes": {
- "property1": [
- "string"
], - "property2": [
- "string"
]
}
}
id required | integer |
{- "profilerTypeCheckHistograms": {
- "property1": [
- {
- "key": "VALID",
- "count": 1
}
], - "property2": [
- {
- "key": "VALID",
- "count": 1
}
]
}, - "profilerValidValueHistograms": {
- "property1": [
- {
- "min": 0,
- "max": 0,
- "roundMin": 0,
- "roundMax": 0,
- "buckets": [
- {
- "pos": 1,
- "b": 1
}
], - "quartiles": {
- "q1": 0,
- "q2": 0,
- "q3": 0
}
}
], - "property2": [
- {
- "min": 0,
- "max": 0,
- "roundMin": 0,
- "roundMax": 0,
- "buckets": [
- {
- "pos": 1,
- "b": 1
}
], - "quartiles": {
- "q1": 0,
- "q2": 0,
- "q3": 0
}
}
]
}, - "profilerRules": {
- "property1": [
- {
- "id": 1,
- "type": "string",
- "comment": "string",
- "description": "string",
- "status": "pass",
- "updatedAt": "string",
- "passCount": 1,
- "failCount": 1,
- "totalCount": 1
}
], - "property2": [
- {
- "id": 1,
- "type": "string",
- "comment": "string",
- "description": "string",
- "status": "pass",
- "updatedAt": "string",
- "passCount": 1,
- "failCount": 1,
- "totalCount": 1
}
]
}, - "columnTypes": {
- "property1": [
- "string"
], - "property2": [
- "string"
]
}
}
id required | integer |
Get JobGroup Status.
ref: getJobGroupStatus
id required | integer |
"Complete"
Get the job group inputs. Returns the list of datasets used when running this jobGroup.
ref: getJobGroupInputs
id required | integer |
{- "data": [
- {
- "name": "string",
- "inputs": [
- {
- "vendor": "string",
- "databaseConnectString": "string",
- "relationalPath": [
- "string"
], - "table": "string",
- "action": "string",
- "query": [
- "string"
]
}
]
}
]
}
Get the job group outputs. Returns the list of tables and file paths used as output.
ref: getJobGroupOutputs
id required | integer |
{- "files": [
- {
- "uri": "string",
- "fileType": "FILE",
- "isPrimaryOutput": true
}
], - "tables": [
- {
- "vendor": "string",
- "databaseConnectString": "string",
- "relationalPath": [
- "string"
], - "table": "string",
- "action": "string",
- "query": [
- "string"
]
}
]
}
Get the list of all jobGroups accessible to the authenticated user.
Note that it is possible to embed other resources while fetching the jobGroup list, e.g.:
/v4/jobLibrary/?embed=jobs,wrangledDataset
See embedding resources for more information.
It is possible to filter jobGroups based on their status.
Here is how to get all jobGroups with a Failed status:
/v4/jobLibrary?status=Failed
It is possible to filter only scheduled jobGroups using the following request:
/v4/jobLibrary?ranfrom=schedule
It is also possible to filter the jobGroups based on the Date. Here is an example:
/v4/jobLibrary?dateFilter[createdAt][gte]=1572994800000&dateFilter[updatedAt][lt]=1581375600000
ref: listJobLibrary
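The filter examples above combine naturally as query parameters. A Python sketch, with placeholder base URL and token:

import requests

BASE_URL = "https://example.com"        # placeholder instance URL
HEADERS = {"Authorization": "Bearer <token>", "Accept": "application/json"}

# List the 25 most recent failed, scheduled jobGroups.
resp = requests.get(
    f"{BASE_URL}/v4/jobLibrary",
    headers=HEADERS,
    params={
        "status": "Failed",
        "ranfrom": "schedule",
        "sort": "-createdAt",
        "limit": 25,
    },
)
resp.raise_for_status()
for job_group in resp.json()["data"]:
    print(job_group["id"], job_group["status"])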
limit | integer Default: 25 Maximum number of objects to fetch.
offset | integer Offset after which to start returning objects. For use with limit.
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter.
sort | string Example: sort=-createdAt Defines the sort order for returned objects.
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against.
filter | string Example: filter=my-object Value for filtering objects. See filterFields.
includeCount | boolean If true, the response includes the total number of objects as a count object.
fields | string Example: fields=id;name;description Semicolon-separated list of fields.
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information.
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects.
dateFilter | object For filtering jobGroups by start and end date.
ranfrom | string Filter jobs based on how they were run.
status | string Filter jobs based on their status.
ranfor | string Default: "recipe,plan" Filter jobs based on their type.
runBy | string Filter jobs by the users who have run them. One of ['all', 'currentUser'].
{- "data": [
- {
- "name": "string",
- "description": "string",
- "ranfrom": "ui",
- "ranfor": "recipe",
- "status": "Complete",
- "profilingEnabled": true,
- "runParameterReferenceDate": "2019-08-24T14:15:22Z",
- "snapshot": {
- "id": 1
}, - "wrangledDataset": {
- "id": 1
}, - "flowrun": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
], - "count": 1
}
Count Designer Cloud Powered by Trifacta jobs with special filter capabilities. See listJobLibrary for some examples.
ref: countJobLibrary
limit | integer Default: 25 Maximum number of objects to fetch.
offset | integer Offset after which to start returning objects. For use with limit.
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter.
sort | string Example: sort=-createdAt Defines the sort order for returned objects.
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against.
filter | string Example: filter=my-object Value for filtering objects. See filterFields.
includeCount | boolean If true, the response includes the total number of objects as a count object.
fields | string Example: fields=id;name;description Semicolon-separated list of fields.
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information.
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects.
dateFilter | object For filtering jobGroups by start and end date.
ranfrom | string Filter jobs based on how they were run.
status | string Filter jobs based on their status.
ranfor | string Default: "recipe,plan" Filter jobs based on their type.
runBy | string Filter jobs by the users who have run them. One of ['all', 'currentUser'].
{- "count": 1
}
Get information about the batch jobs within a Designer Cloud Powered by Trifacta job.
ref: getJobsForJobGroup
id required | integer |
{- "data": [
- {
- "id": 1,
- "status": "Complete",
- "jobType": "wrangle",
- "sampleSize": 1,
- "percentComplete": 1,
- "jobGroup": {
- "id": 1
}, - "errorMessage": {
- "id": 1
}, - "lastHeartbeatAt": "2019-08-24T14:15:22Z",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "executionLanguage": "photon",
- "cpJobId": "string",
- "wranglescript": {
- "id": 1
}, - "emrcluster": {
- "id": 1
}
}
], - "count": 1
}
Get the list of publications for the specified jobGroup.
A publication is an export of job results from the platform after they have been initially generated.
id required | integer |
{- "data": [
- {
- "path": [
- "string"
], - "tableName": "string",
- "targetType": "string",
- "action": "create",
- "outputobject": {
- "id": 1
}, - "connection": {
- "id": "21"
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "parameters": {
- "property1": {
- "type": "string",
- "default": null
}, - "property2": {
- "type": "string",
- "default": null
}
}
}
], - "count": 1
}
An object containing a list of scriptLines that can be reused across recipes.
Performs an import of a macro package.
ℹ️ NOTE: You cannot import a macro that was exported from a later version of the product.
✅ TIP: You can paste the response of the exported macro package as the request.
ℹ️ NOTE: Modification of the macro definition is not supported outside of Designer Cloud Powered by Trifacta.
ref: importMacroPackage
type required | string Type of artifact.
kind required | string
hash required | string Hash value used to verify the internal integrity of the macro definition.
data required | object
metadata required | object
{- "type": "string",
- "kind": "string",
- "hash": "string",
- "data": {
- "name": "string",
- "description": "string",
- "signature": [
- {
- "name": "Store_Nbr",
- "type": "column"
}
], - "scriptlines": [
- {
- "hash": "string",
- "task": { }
}
]
}, - "metadata": {
- "lastMigration": "20191024143300",
- "trifactaVersion": "6.8.0+4.20191104073802.8b6217a",
- "exportedAt": "2019-08-24T14:15:22Z",
- "exportedBy": 1,
- "uuid": "6b27eee0-0034-11ea-a378-9dc0586de9fb",
- "edition": "Enterprise"
}
}
{- "id": 1,
- "name": "string",
- "description": "string",
- "createdBy": 1,
- "updatedBy": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "workspaceId": 1
}
Retrieve a package containing the definition of the specified macro. The response body is the contents of the package, which is an importable version of the macro definition.
✅ TIP: The response body can be pasted as the request when you import the macro into a different environment. For more information, see Import Macro Package.
ℹ️ NOTE: Modification of the macro definition is not supported outside of Designer Cloud Powered by Trifacta.
ref: getMacroPackage
id required | integer |
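For example, to save the package for macro 42 to a file (the id and hostname are placeholders):
curl -X GET https://yourworkspace.cloud.trifacta.com/v4/macros/42/package \
-H 'authorization: Bearer <api-token>' \
-o macro-package.json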
{- "type": "string",
- "kind": "string",
- "hash": "string",
- "data": {
- "name": "string",
- "description": "string",
- "signature": [
- {
- "name": "Store_Nbr",
- "type": "column"
}
], - "scriptlines": [
- {
- "hash": "string",
- "task": { }
}
]
}, - "metadata": {
- "lastMigration": "20191024143300",
- "trifactaVersion": "6.8.0+4.20191104073802.8b6217a",
- "exportedAt": "2019-08-24T14:15:22Z",
- "exportedBy": 1,
- "uuid": "6b27eee0-0034-11ea-a378-9dc0586de9fb",
- "edition": "Enterprise"
}
}
{ }
An outputObject is a definition of one or more types of outputs and how they are generated.
If an outputObject already exists for the recipe (flowNodeId) to which you are posting, you must modify the existing object or delete it before posting your new object.
ref: createOutputObject
execution required | string Enum: "photon" "emrSpark" Execution language. Indicates on which engine the job is executed. Can be null/missing for scheduled jobs that fail during the validation phase. |
profiler required | boolean Indicate whether a visual profile should be generated for the job. |
isAdhoc | boolean Indicate whether the output is used for ad-hoc (manual) runs rather than scheduled runs. |
ignoreRecipeErrors | boolean Indicate if recipe errors should be ignored for the jobGroup. |
flowNodeId | integer FlowNode the outputObject should be attached to. (This is also the id of the wrangledDataset). |
writeSettings | Array of objects (writeSettingCreateRequest) [ items ] Optionally, you can include writeSettings when creating the outputObject |
sqlScripts | Array of objects (sqlScriptCreateRequest) [ items ] Optionally, you can include sqlScripts when creating the outputObject |
publications | Array of objects (publicationCreateRequest) [ items ] Optionally, you can include publications when creating the outputObject |
outputObjectSchemaDriftOptions | object (outputObjectSchemaDriftOptionsUpdateRequest) |
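For example, a minimal creation request might look like the following (the flowNodeId and hostname are placeholders):
curl -X POST https://yourworkspace.cloud.trifacta.com/v4/outputObjects \
-H 'authorization: Bearer <api-token>' \
-H 'content-type: application/json' \
-d '{"execution": "photon", "profiler": true, "flowNodeId": 27}'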
{- "execution": "photon",
- "profiler": true,
- "isAdhoc": true,
- "ignoreRecipeErrors": true,
- "flowNodeId": 1,
- "writeSettings": [
- {
- "path": "string",
- "action": "create",
- "format": "csv",
- "compression": "none",
- "header": true,
- "asSingleFile": true,
- "delim": ",",
- "hasQuotes": true,
- "includeMismatches": true,
- "outputObjectId": 1,
- "runParameters": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "variable": {
- "value": "string"
}
}
}
], - "connectionId": "25"
}
], - "sqlScripts": [
- {
- "sqlScript": "string",
- "type": "string",
- "vendor": "string",
- "outputObjectId": "21",
- "connectionId": "21",
- "runParameters": [
- {
- "type": "sql",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "variable": {
- "value": "string"
}
}
}
]
}
], - "publications": [
- {
- "path": [
- "string"
], - "tableName": "string",
- "targetType": "string",
- "action": "create",
- "outputObjectId": 1,
- "connectionId": "21",
- "runParameters": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "variable": {
- "value": "string"
}
}
}
], - "parameters": {
- "property1": {
- "type": "string",
- "default": null
}, - "property2": {
- "type": "string",
- "default": null
}
}
}
], - "outputObjectSchemaDriftOptions": {
- "schemaValidation": "true",
- "stopJobOnErrorsFound": "false"
}
}
{- "execution": "photon",
- "profiler": true,
- "isAdhoc": true,
- "flownode": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "name": "string",
- "description": "string"
}
List existing output objects
ref: listOutputObjects
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with the limit query parameter. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with the filter query parameter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count property |
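For example, to fetch the ten most recently created output objects together with a total count (the hostname is a placeholder):
curl -X GET 'https://yourworkspace.cloud.trifacta.com/v4/outputObjects?limit=10&sort=-createdAt&includeCount=true' \
-H 'authorization: Bearer <api-token>'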
{- "data": [
- {
- "execution": "photon",
- "profiler": true,
- "isAdhoc": true,
- "flownode": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "name": "string",
- "description": "string"
}
], - "count": 1
}
Generate a Python script for the input recipe to the output object. EXPERIMENTAL FEATURE: This feature is intended for demonstration purposes only. In a future release, it may be modified or removed without warning. Do not deploy this endpoint in a production environment.
id required | integer |
orderedColumns required | string Ordered Column Names for the input dataset |
object (cdfToPythonOverrides) |
{- "orderedColumns": "string",
- "overrides": {
- "execution": "photon",
- "profiler": true
}
}
{- "pythonScript": "string"
}
List all the outputs of a Flow.
ref: getFlowOutputs
id required | integer |
{- "data": [
- {
- "execution": "photon",
- "profiler": true,
- "isAdhoc": true,
- "flownode": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "name": "string",
- "description": "string"
}
], - "count": 1
}
Count existing output objects
ref: countOutputObjects
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with the limit query parameter. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with the filter query parameter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count property |
{- "count": 1
}
Get the specified outputObject.
Note that it is possible to include writeSettings and publications that are linked to this outputObject. See Embedding Resources for more information.
/v4/outputObjects/{id}?embed=writeSettings,publications
ref: getOutputObject
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
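For example, using the embed query parameter shown above (the id and hostname are placeholders):
curl -X GET 'https://yourworkspace.cloud.trifacta.com/v4/outputObjects/7?embed=writeSettings,publications' \
-H 'authorization: Bearer <api-token>'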
{- "execution": "photon",
- "profiler": true,
- "isAdhoc": true,
- "flownode": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "name": "string",
- "description": "string"
}
Patch an existing output object
ref: patchOutputObject
id required | integer |
execution | string Enum: "photon" "emrSpark" Execution language. Indicates on which engine the job is executed. Can be null/missing for scheduled jobs that fail during the validation phase. |
profiler | boolean Indicate whether a visual profile should be generated for the job. |
ignoreRecipeErrors | boolean Indicate if recipe errors should be ignored for the jobGroup. |
writeSettings | Array of objects (writeSettingCreateRequest) [ items ] |
sqlScripts | Array of objects (sqlScriptCreateRequest) [ items ] |
publications | Array of objects (publicationCreateRequest) [ items ] |
outputObjectSchemaDriftOptions | object (outputObjectSchemaDriftOptionsUpdateRequest) |
name | string Name of output as it appears in the flow view |
description | string Description of output |
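For example, to rename an output and disable profiling (the id, name, and hostname are placeholders):
curl -X PATCH https://yourworkspace.cloud.trifacta.com/v4/outputObjects/7 \
-H 'authorization: Bearer <api-token>' \
-H 'content-type: application/json' \
-d '{"name": "Weekly revenue output", "profiler": false}'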
{- "execution": "photon",
- "profiler": true,
- "ignoreRecipeErrors": true,
- "writeSettings": [
- {
- "path": "string",
- "action": "create",
- "format": "csv",
- "compression": "none",
- "header": true,
- "asSingleFile": true,
- "delim": ",",
- "hasQuotes": true,
- "includeMismatches": true,
- "outputObjectId": 1,
- "runParameters": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "variable": {
- "value": "string"
}
}
}
], - "connectionId": "25"
}
], - "sqlScripts": [
- {
- "sqlScript": "string",
- "type": "string",
- "vendor": "string",
- "outputObjectId": "21",
- "connectionId": "21",
- "runParameters": [
- {
- "type": "sql",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "variable": {
- "value": "string"
}
}
}
]
}
], - "publications": [
- {
- "path": [
- "string"
], - "tableName": "string",
- "targetType": "string",
- "action": "create",
- "outputObjectId": 1,
- "connectionId": "21",
- "runParameters": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "variable": {
- "value": "string"
}
}
}
], - "parameters": {
- "property1": {
- "type": "string",
- "default": null
}, - "property2": {
- "type": "string",
- "default": null
}
}
}
], - "outputObjectSchemaDriftOptions": {
- "schemaValidation": "true",
- "stopJobOnErrorsFound": "false"
}, - "name": "string",
- "description": "string"
}
{- "id": 1,
- "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Delete an existing output object
ref: deleteOutputObject
id required | integer |
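For example (the id and hostname are placeholders):
curl -X DELETE https://yourworkspace.cloud.trifacta.com/v4/outputObjects/7 \
-H 'authorization: Bearer <api-token>'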
List all the inputs that are linked to this output object. Data sources that are present in referenced flows are also included.
id required | integer |
{- "data": [
- {
- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
], - "count": 1
}
Get information about the currently logged-in user.
ref: getCurrentPerson
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
uuid | string |
workspaceId | string |
includePrivileges | boolean Include the user's maximal privileges and authorization roles |
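For example, to fetch the current user together with their maximal privileges (the hostname is a placeholder):
curl -X GET 'https://yourworkspace.cloud.trifacta.com/v4/people/current?includePrivileges=true' \
-H 'authorization: Bearer <api-token>'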
{- "email": "joe@example.com",
- "isDisabled": false,
- "state": "active",
- "validateEmail": true,
- "validateExportCompliance": true,
- "id": 1,
- "outputHomeDir": "/home-dir/queryResults/joe@example.com",
- "uploadDir": "/uploads",
- "lastLoginTime": "2019-08-24T14:15:22Z",
- "lastStateChange": "2019-08-24T14:15:22Z",
- "maximalPrivileges": [
- {
- "operations": [
- "read"
], - "resourceType": "flow"
}
]
}
Get an existing person
ref: getPerson
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
uuid | string |
workspaceId | string |
includePrivileges | boolean Include the user's maximal privileges and authorization roles |
{- "email": "joe@example.com",
- "isDisabled": false,
- "state": "active",
- "validateEmail": true,
- "validateExportCompliance": true,
- "id": 1,
- "outputHomeDir": "/home-dir/queryResults/joe@example.com",
- "uploadDir": "/uploads",
- "lastLoginTime": "2019-08-24T14:15:22Z",
- "lastStateChange": "2019-08-24T14:15:22Z",
- "maximalPrivileges": [
- {
- "operations": [
- "read"
], - "resourceType": "flow"
}
]
}
List existing people
ref: listPerson
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with the limit query parameter. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with the filter query parameter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count property |
email | string |
state | string |
isDisabled | string |
includePrivileges | boolean Include the user's maximal privileges and authorization roles |
noLimit | string If set to true, no limit is applied to the number of returned objects. |
{- "data": [
- {
- "email": "joe@example.com",
- "isDisabled": false,
- "state": "active",
- "validateEmail": true,
- "validateExportCompliance": true,
- "id": 1,
- "outputHomeDir": "/home-dir/queryResults/joe@example.com",
- "uploadDir": "/uploads",
- "lastLoginTime": "2019-08-24T14:15:22Z",
- "lastStateChange": "2019-08-24T14:15:22Z",
- "maximalPrivileges": [
- {
- "operations": [
- "read"
], - "resourceType": "flow"
}
]
}
], - "count": 1
}
Create a new plan
ref: createPlan
name required | string Display name of the plan. |
description | string User-friendly description for the plan. |
originalPlanId | integer or string |
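For example (the name, description, and hostname are placeholders):
curl -X POST https://yourworkspace.cloud.trifacta.com/v4/plans \
-H 'authorization: Bearer <api-token>' \
-H 'content-type: application/json' \
-d '{"name": "Nightly load", "description": "Runs the ingest flow every night"}'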
{- "name": "string",
- "description": "string",
- "originalPlanId": 1
}
{- "id": 1,
- "name": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "snapshotted": true,
- "originalPlanId": 1,
- "description": "string",
- "planSnapshotRunCount": 1,
- "notificationsEnabled": true,
- "latestPlanSnapshot": { },
- "latestPlanSnapshotRun": { },
- "planNodes": {
- "data": [
- {
- "id": 1,
- "taskType": "flow",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "coordinates": {
- "x": 1,
- "y": 1
}, - "name": "string"
}
]
}
}
List existing plans
ref: listPlans
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with the limit query parameter. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with the filter query parameter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count property |
includeAssociatedPeople | boolean If true, the returned plans will include a list of people with access. |
ownershipFilter | string Filter plans by ownership. Valid values are 'all', 'shared', and 'owned'. |
{- "data": [
- {
- "id": 1,
- "name": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "snapshotted": true,
- "originalPlanId": 1,
- "description": "string",
- "planSnapshotRunCount": 1,
- "notificationsEnabled": true,
- "latestPlanSnapshot": { },
- "latestPlanSnapshotRun": { },
- "planNodes": {
- "data": [
- {
- "id": 1,
- "taskType": "flow",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "coordinates": {
- "x": 1,
- "y": 1
}, - "name": "string"
}
]
}
}
], - "count": 1
}
Run the plan. A new snapshot will be created if required.
If some flows or outputs referenced by the plan tasks have been deleted, the endpoint returns a MissingFlowReferences validation status.
If the plan is valid, it will be queued for execution.
This endpoint returns a planSnapshotRunId that can be used to track the plan execution status using getPlanSnapshotRun.
Quotas:
30 req./user/min, 60 req./workspace/min
ref: runPlan
id required | integer |
x-execution-id | string Example: f9cab740-50b7-11e9-ba15-93c82271a00b Optional header to safely retry the request without accidentally performing the same operation twice. |
planNodeOverrides | Array of objects (planNodeOverride) [ items ] Collection of run parameter overrides that should be applied to flow run parameters of the respective plan node. |
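For example, to run plan 12 with a retry-safe execution id (the ids and hostname are placeholders):
curl -X POST https://yourworkspace.cloud.trifacta.com/v4/plans/12/run \
-H 'authorization: Bearer <api-token>' \
-H 'content-type: application/json' \
-H 'x-execution-id: f9cab740-50b7-11e9-ba15-93c82271a00b' \
-d '{}'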
{- "planNodeOverrides": [
- {
- "handle": "string",
- "overrideKey": "string",
- "value": "string"
}
]
}
{- "validationStatus": "Valid",
- "planSnapshotRunId": 1
}
Get a list of users with whom the plan is shared.
ref: getPlanPermissions
id required | integer |
{- "data": [
- {
- "id": 1,
- "email": "joe@example.com",
- "name": "Joe Guy"
}
]
}
Import the plan and associated flows from the given package. A ZIP file as exported by the export plan endpoint is accepted.
Before you import, you can perform a dry-run to check for errors. See Import plan package - dry run.
This endpoint accepts a multipart/form content type.
Here is how to send the ZIP package using curl:
curl -X POST https://yourworkspace.cloud.trifacta.com/v4/plans/package \
-H 'authorization: Bearer <api-token>' \
-H 'content-type: multipart/form-data' \
-F 'data=@path/to/plan-package.zip'
The response lists the objects that have been created.
Quotas:
20 req./user/min, 40 req./workspace/min
ref: importPlanPackage
folderId | integer |
fromUI | boolean If true, will return the list of imported environment parameters for confirmation if any are referenced in the plan. |
packageContents required | object (importPlanPackageRequestZip) An exported plan zip file. |
environmentParameterMapping | Array of environmentParameterMappingToExistingEnvParam (object) or environmentParameterMappingToManualValue (object) (environmentParameterMapping) [ items ] |
connectionIdMapping | Array of objects (connectionIdMapping) [ items ] |
{- "packageContents": { },
- "environmentParameterMapping": [
- {
- "overrideKey": "myVar",
- "mappedOverrideKey": "myVar"
}
], - "connectionIdMapping": [
- {
- "connectionUuid": "string",
- "mappedConnectionUuid": "string"
}
]
}
{- "flowPackages": [
- {
- "deletedObjects": { },
- "createdObjectMapping": { },
- "importRuleChanges": {
- "object": [
- { }
], - "value": [
- { }
]
}, - "primaryFlowIds": [
- 1
], - "flows": [
- {
- "name": "string",
- "description": "string",
- "folder": {
- "id": 1
}, - "id": 1,
- "defaultOutputDir": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "workspace": {
- "id": 1
}, - "flowState": {
- "isOpened": true,
- "flow": {
- "id": 1
}, - "person": {
- "id": 1
}, - "zoom": 0,
- "offsetX": 0,
- "offsetY": 0
}
}
], - "datasources": [
- {
- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": null,
- "order": null
}
], - "value": {
- "dateRange": {
- "timezone": null,
- "formats": null,
- "last": { }
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
], - "flownodes": [
- {
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "recipe": {
- "id": 1
}, - "activeSample": {
- "id": 1
}, - "wrangled": true
}
], - "flowedges": [
- {
- "inPortId": 1,
- "outPortId": 1,
- "inputFlowNode": {
- "id": 1
}, - "outputFlowNode": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
], - "recipes": [
- {
- "name": "string",
- "description": "string",
- "active": true,
- "nextPortId": 1,
- "currentEdit": {
- "id": 1
}, - "redoLeafEdit": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
], - "outputobjects": [
- {
- "execution": "photon",
- "profiler": true,
- "isAdhoc": true,
- "flownode": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "name": "string",
- "description": "string"
}
], - "webhookflowtasks": [
- { }
], - "release": { }
}
], - "planPackage": {
- "id": 1,
- "name": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "snapshotted": true,
- "originalPlanId": 1,
- "description": "string",
- "planSnapshotRunCount": 1,
- "notificationsEnabled": true,
- "latestPlanSnapshot": { },
- "latestPlanSnapshotRun": { },
- "planNodes": {
- "data": [
- {
- "id": 1,
- "taskType": "flow",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "coordinates": {
- "x": 1,
- "y": 1
}, - "name": "string"
}
]
}
}, - "taskCount": 1
}
Count existing plans
ref: countPlans
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with the limit query parameter. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with the filter query parameter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count property |
ownershipFilter | string Filter plans by ownership. |
{- "count": 1
}
List run parameters of a plan. Parameters are grouped by plan node. Each element in the returned list contains only resources that have run parameters defined.
ref: planRunParameters
id required | integer |
{- "planNodeParameters": [
- {
- "handle": "string",
- "planNodeId": 1,
- "flow": {
- "name": "string",
- "description": "string",
- "folder": {
- "id": 1
}, - "id": 1,
- "defaultOutputDir": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "workspace": {
- "id": 1
}, - "flowState": {
- "isOpened": true,
- "flow": {
- "id": 1
}, - "person": {
- "id": 1
}, - "zoom": 0,
- "offsetX": 0,
- "offsetY": 0
}
}, - "conflicts": [
- null
], - "datasources": {
- "data": [
- {
- "dynamicPath": "string",
- "isSchematized": true,
- "isDynamic": true,
- "isConverted": true,
- "disableTypeInference": true,
- "hasStructuring": true,
- "hasSchemaErrors": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}
}
]
}, - "outputObjects": {
- "data": [
- {
- "execution": "photon",
- "profiler": true,
- "isAdhoc": true,
- "flownode": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "name": "string",
- "description": "string"
}
]
}, - "planOverrides": { }
}
]
}
Read full plan with all its nodes, tasks, and edges.
ref: readFull
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
includeAssociatedPeople | boolean If true, the returned plan will include a list of people with access. |
includeCreatorInfo | boolean If true, the returned plan will include info about the creators of the flows and plan, such as name and email address. |
{- "id": 1,
- "name": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "snapshotted": true,
- "originalPlanId": 1,
- "description": "string",
- "planSnapshotRunCount": 1,
- "notificationsEnabled": true,
- "latestPlanSnapshot": { },
- "latestPlanSnapshotRun": { },
- "planNodes": {
- "data": [
- {
- "id": 1,
- "taskType": "flow",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "coordinates": {
- "x": 1,
- "y": 1
}, - "name": "string"
}
]
}
}
List of all schedules configured in the plan.
ref: getSchedulesForPlan
id required | integer |
{- "data": [
- {
- "name": "string",
- "triggers": [
- {
- "id": 1,
- "timeBased": {
- "cron": {
- "expression": "15 10 * * MON-FRI"
}, - "timezone": "Europe/Berlin"
}
}
], - "tasks": [
- {
- "runFlow": {
- "flowId": 1
}
}
], - "enabled": true,
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "createdBy": 1,
- "updatedBy": 1,
- "creator": {
- "id": 1,
- "email": "joe@example.com",
- "name": "Joe Guy",
- "isDisabled": false,
- "state": "active",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "userAvatar": {
- "dataUrl": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "createdBy": 1,
- "updatedBy": 1
}
}
}
], - "count": 1
}
Retrieve a package containing the definition of the specified plan.
The response body is the contents of the package, which is a ZIP file containing the plan definition.
The plan package can be used to import the plan in another environment. See Import Plan Package for more information.
Quotas:
20 req./user/min, 40 req./workspace/min
ref: getPlanPackage
id required | integer |
comment | string Comment to be displayed when the plan is imported in a deployment package |
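For example, to save the package for plan 12 to a file (the id and hostname are placeholders):
curl -X GET https://yourworkspace.cloud.trifacta.com/v4/plans/12/package \
-H 'authorization: Bearer <api-token>' \
-o plan-package.zip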
Update plan properties, e.g. name and description
ref: updatePlan
id required | integer |
name | string Display name of the plan. |
description | string User-friendly description for the plan. |
notificationsEnabled | boolean Indicate whether notifications will be sent for this plan |
{- "name": "string",
- "description": "string",
- "notificationsEnabled": true
}
{- "id": 1,
- "name": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "snapshotted": true,
- "originalPlanId": 1,
- "description": "string",
- "planSnapshotRunCount": 1,
- "notificationsEnabled": true,
- "latestPlanSnapshot": { },
- "latestPlanSnapshotRun": { },
- "planNodes": {
- "data": [
- {
- "id": 1,
- "taskType": "flow",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "coordinates": {
- "x": 1,
- "y": 1
}, - "name": "string"
}
]
}
}
Delete plan and remove associated schedules.
ref: deletePlan
id required | integer |
A node representing a task in the plan graph.
Create a new plan node
ref: createPlanNode
planId required | integer or string |
taskType required | string Enum: "flow" "http" "storage" "workflow" "ml_project_init" "ml_predict" "ml_retrain" "script_sql" |
name required | string |
coordinates | object Location of the plan node |
task | planFlowTaskCreateRequest (object) or planHTTPTaskCreateRequest (object) or planStorageTaskCreateRequest (object) or planWorkflowTaskCreateRequest (object) |
inPlanNodeIds | Array of integers or strings [ items ] |
outPlanNodeIds | Array of integers or strings [ items ] |
{- "coordinates": {
- "x": 1,
- "y": 1
}, - "planId": 1,
- "taskType": "flow",
- "task": {
- "flowId": 1,
- "flowNodeIds": [
- 1
]
}, - "name": "string",
- "inPlanNodeIds": [
- 1
], - "outPlanNodeIds": [
- 1
]
}
{- "id": 1,
- "taskType": "flow",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "coordinates": {
- "x": 1,
- "y": 1
}, - "name": "string"
}
List run parameters of a plan node. Only resources with run parameters will be included in the response.
id required | integer |
{- "handle": "string",
- "planNodeId": 1,
- "conflicts": [
- null
], - "flow": {
- "name": "string",
- "description": "string",
- "folder": {
- "id": 1
}, - "id": 1,
- "defaultOutputDir": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "workspace": {
- "id": 1
}, - "flowState": {
- "isOpened": true,
- "flow": {
- "id": 1
}, - "person": {
- "id": 1
}, - "zoom": 0,
- "offsetX": 0,
- "offsetY": 0
}
}, - "datasources": {
- "data": [
- {
- "dynamicPath": "string",
- "isSchematized": true,
- "isDynamic": true,
- "isConverted": true,
- "disableTypeInference": true,
- "hasStructuring": true,
- "hasSchemaErrors": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}
}
]
}, - "outputObjects": {
- "data": [
- {
- "execution": "photon",
- "profiler": true,
- "isAdhoc": true,
- "flownode": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "name": "string",
- "description": "string"
}
]
}, - "planOverrides": { }
}
Delete an existing plan node
ref: deletePlanNode
id required | integer |
Used to override the default value of a run parameter in a plan for future executions.
Create a new plan override
ref: createPlanOverride
planNodeId required | integer or string |
overrideKey required | string Key/name used when overriding the value of the variable |
value required | planRunParameterVariableSchema (object) or planRunParameterSelectorSchema (object) |
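For example, to override the variable myVar on plan node 3 (the endpoint path follows the resource name; the ids, value, and hostname are placeholders):
curl -X POST https://yourworkspace.cloud.trifacta.com/v4/planOverrides \
-H 'authorization: Bearer <api-token>' \
-H 'content-type: application/json' \
-d '{"planNodeId": 3, "overrideKey": "myVar", "value": {"variable": {"value": "2024-01-01"}}}'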
{- "planNodeId": 1,
- "overrideKey": "myVar",
- "value": {
- "variable": {
- "value": "string"
}
}
}
{ }
Update an existing plan override
ref: updatePlanOverride
id required | integer |
planNodeId required | integer or string |
overrideKey required | string Key/name used when overriding the value of the variable |
value required | planRunParameterVariableSchema (object) or planRunParameterSelectorSchema (object) |
{- "planNodeId": 1,
- "overrideKey": "myVar",
- "value": {
- "variable": {
- "value": "string"
}
}
}
{ }
An execution of a plan's snapshot state.
Cancel the plan execution.
id required | integer |
{ }
{- "id": 1,
- "status": "Complete",
- "createdAt": "2019-08-24T14:15:22Z",
- "finishedAt": "2019-08-24T14:15:22Z",
- "startedAt": "2019-08-24T14:15:22Z",
- "submittedAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "failedToCancelSomeJobs": true,
- "plan": {
- "id": 1
}, - "nextRun": {
- "id": "string"
}, - "previousRun": {
- "id": "string"
}
}
List existing plan snapshot runs
ref: listPlanSnapshotRuns
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with the limit query parameter. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with the filter query parameter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count property |
status | string Filter plan executions based on their status |
dateFilter | object Object for filtering plan runs by start and end date |
ranfrom | string Filter plan runs based on how they were run |
runBy | string Filter plans by the users who have run them. One of ['all', 'currentUser'] |
{- "data": [
- {
- "id": 1,
- "status": "Complete",
- "createdAt": "2019-08-24T14:15:22Z",
- "finishedAt": "2019-08-24T14:15:22Z",
- "startedAt": "2019-08-24T14:15:22Z",
- "submittedAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "failedToCancelSomeJobs": true,
- "plan": {
- "id": 1
}, - "nextRun": {
- "id": "string"
}, - "previousRun": {
- "id": "string"
}
}
], - "count": 1
}
Count existing plan snapshot runs
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with the limit query parameter. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with the filter query parameter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count property |
status | string Filter plan executions based on their status |
dateFilter | object Object for filtering plan runs by start and end date |
ranfrom | string Filter plan runs based on how they were run |
runBy | string Filter plans by the users who have run them. One of ['all', 'currentUser'] |
{- "count": 1
}
Return a plan snapshot run that contains the current status of a plan execution
ref: getPlanSnapshotRun
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
includeFlowCreatorInfo | string Include info about flow creators such as name and email address. |
{- "id": 1,
- "status": "Complete",
- "createdAt": "2019-08-24T14:15:22Z",
- "finishedAt": "2019-08-24T14:15:22Z",
- "startedAt": "2019-08-24T14:15:22Z",
- "submittedAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "failedToCancelSomeJobs": true,
- "plan": {
- "id": 1
}, - "nextRun": {
- "id": "string"
}, - "previousRun": {
- "id": "string"
}
}
Get the schedule definition that triggered the plan snapshot run.
id required | integer |
{- "name": "string",
- "triggers": [
- {
- "id": 1,
- "timeBased": {
- "cron": {
- "expression": "15 10 * * MON-FRI"
}, - "timezone": "Europe/Berlin"
}
}
], - "tasks": [
- {
- "runFlow": {
- "flowId": 1
}
}
], - "enabled": true,
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "createdBy": 1,
- "updatedBy": 1,
- "creator": {
- "id": 1,
- "email": "joe@example.com",
- "name": "Joe Guy",
- "isDisabled": false,
- "state": "active",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "userAvatar": {
- "dataUrl": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "createdBy": 1,
- "updatedBy": 1
}
}
}
A publication object is used to specify a table-based output and is associated with an outputObject. Settings include the connection to use, path, table type, and write action to apply.
Create a new publication
ref: createPublication
path required | Array of strings path to the location of the table/datasource. |
tableName required | string name of the table |
targetType required | string e.g. |
action required | string Enum: "create" "load" "createAndLoad" "truncateAndLoad" "dropAndLoad" "upsert" Type of writing action to perform with the results
|
outputObjectId | integer outputObject to attach this publication to. |
connectionId | string (connectionIdInfo) Internal identifier of the connection to use when publishing. When connection type is BigQuery, the id is |
runParameters | Array of objects (runParameterDestinationInfo) [ items ] Optional parameters that can be used to parameterize the publication path. |
parameters | object Additional publication parameters specific to each JDBC data source. Example: isDeltaTable=true for Databricks connections to produce Delta Lake Tables |
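For example, a minimal creation request might look like the following (the table name, target type, ids, and hostname are placeholders):
curl -X POST https://yourworkspace.cloud.trifacta.com/v4/publications \
-H 'authorization: Bearer <api-token>' \
-H 'content-type: application/json' \
-d '{"path": ["default"], "tableName": "revenue", "targetType": "postgres", "action": "createAndLoad", "outputObjectId": 7, "connectionId": "21"}'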
{- "path": [
- "string"
], - "tableName": "string",
- "targetType": "string",
- "action": "create",
- "outputObjectId": 1,
- "connectionId": "21",
- "runParameters": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "variable": {
- "value": "string"
}
}
}
], - "parameters": {
- "property1": {
- "type": "string",
- "default": null
}, - "property2": {
- "type": "string",
- "default": null
}
}
}
{- "path": [
- "string"
], - "tableName": "string",
- "targetType": "string",
- "action": "create",
- "outputobject": {
- "id": 1
}, - "connection": {
- "id": "21"
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "parameters": {
- "property1": {
- "type": "string",
- "default": null
}, - "property2": {
- "type": "string",
- "default": null
}
}
}
List existing publications
ref: listPublications
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with the limit query parameter. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with the filter query parameter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count property |
{- "data": [
- {
- "path": [
- "string"
], - "tableName": "string",
- "targetType": "string",
- "action": "create",
- "outputobject": {
- "id": 1
}, - "connection": {
- "id": "21"
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "parameters": {
- "property1": {
- "type": "string",
- "default": null
}, - "property2": {
- "type": "string",
- "default": null
}
}
}
], - "count": 1
}
Count existing publications
ref: countPublications
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with the limit query parameter. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with the filter query parameter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count property |
{- "count": 1
}
Get an existing publication
ref: getPublication
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
{- "path": [
- "string"
], - "tableName": "string",
- "targetType": "string",
- "action": "create",
- "outputobject": {
- "id": 1
}, - "connection": {
- "id": "21"
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "parameters": {
- "property1": {
- "type": "string",
- "default": null
}, - "property2": {
- "type": "string",
- "default": null
}
}
}
Patch an existing publication
ref: patchPublication
id required | integer |
path | Array of strings path to the location of the table/datasource. |
tableName | string name of the table |
targetType | string e.g. |
action | string Enum: "create" "load" "createAndLoad" "truncateAndLoad" "dropAndLoad" "upsert" Type of writing action to perform with the results
|
parameters | object Additional publication parameters specific to each JDBC data source. Example: isDeltaTable=true for Databricks connections to produce Delta Lake Tables |
{- "path": [
- "string"
], - "tableName": "string",
- "targetType": "string",
- "action": "create",
- "parameters": {
- "property1": {
- "type": "string",
- "default": null
}, - "property2": {
- "type": "string",
- "default": null
}
}
}
{- "id": 1,
- "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Delete an existing publication
ref: deletePublication
id required | integer |
Contains information about repeated execution of a flow.
Create a new schedule
ref: createSchedule
name required | string name of the schedule |
triggers required | Array of objects (timeBasedTrigger) [ items ] |
tasks required | Array of runFlowTaskSchema (objects) or Array of runPlanTaskSchema (objects) or Array of runWorkflowTaskSchema (objects) |
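For example, to run flow 1 on weekday mornings (the cron expression, ids, and hostname are placeholders):
curl -X POST https://yourworkspace.cloud.trifacta.com/v4/schedules \
-H 'authorization: Bearer <api-token>' \
-H 'content-type: application/json' \
-d '{"name": "Weekday mornings", "triggers": [{"timeBased": {"cron": {"expression": "15 10 * * MON-FRI"}, "timezone": "Europe/Berlin"}}], "tasks": [{"runFlow": {"flowId": 1}}]}'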
{- "name": "string",
- "triggers": [
- {
- "id": 1,
- "timeBased": {
- "cron": {
- "expression": "15 10 * * MON-FRI"
}, - "timezone": "Europe/Berlin"
}
}
], - "tasks": [
- {
- "runFlow": {
- "flowId": 1
}
}
]
}
{- "name": "string",
- "triggers": [
- {
- "id": 1,
- "timeBased": {
- "cron": {
- "expression": "15 10 * * MON-FRI"
}, - "timezone": "Europe/Berlin"
}
}
], - "tasks": [
- {
- "runFlow": {
- "flowId": 1
}
}
], - "enabled": true,
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "createdBy": 1,
- "updatedBy": 1,
- "creator": {
- "id": 1,
- "email": "joe@example.com",
- "name": "Joe Guy",
- "isDisabled": false,
- "state": "active",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "userAvatar": {
- "dataUrl": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "createdBy": 1,
- "updatedBy": 1
}
}
}
List schedules owned by the current user
ref: listSchedules
filter | string Filter schedules using the attached flow name |
workflowId | string Filter schedules using workflowId |
taskTypeFilter | Array of strings Items Enum: "runFlow" "runPlan" "runWorkflow" Example: taskTypeFilter=runFlow Filter schedules by task types. If not specified, all types are allowed. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with the limit query parameter. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with the filter query parameter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
{- "data": [
- {
- "name": "string",
- "triggers": [
- {
- "id": 1,
- "timeBased": {
- "cron": {
- "expression": "15 10 * * MON-FRI"
}, - "timezone": "Europe/Berlin"
}, - "nextFireDate": "2030-12-03T10:15:30Z"
}
], - "tasks": [
- {
- "runFlow": {
- "flowId": 1,
- "id": 1,
- "name": "string",
- "description": "string",
- "deleted_at": "2019-08-24T14:15:22Z",
- "cpProject": "string",
- "workspaceId": 1,
- "folderId": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "createdBy": 1,
- "creator": {
- "id": 1,
- "email": "joe@example.com",
- "name": "Joe Guy",
- "isDisabled": false,
- "state": "active",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "userAvatar": {
- "dataUrl": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "createdBy": 1,
- "updatedBy": 1
}
}, - "updatedBy": 1
}
}
], - "enabled": true,
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "createdBy": 1,
- "updatedBy": 1,
- "creator": {
- "id": 1,
- "email": "joe@example.com",
- "name": "Joe Guy",
- "isDisabled": false,
- "state": "active",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "userAvatar": {
- "dataUrl": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "createdBy": 1,
- "updatedBy": 1
}
}
}
], - "count": 1
}
Enable a schedule
ref: enableSchedule
id required | integer |
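For example (the id and hostname are placeholders; the request body can be empty):
curl -X POST https://yourworkspace.cloud.trifacta.com/v4/schedules/5/enable \
-H 'authorization: Bearer <api-token>' \
-H 'content-type: application/json' \
-d '{}'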
{ }
{- "name": "string",
- "triggers": [
- {
- "id": 1,
- "timeBased": {
- "cron": {
- "expression": "15 10 * * MON-FRI"
}, - "timezone": "Europe/Berlin"
}
}
], - "tasks": [
- {
- "runFlow": {
- "flowId": 1
}
}
], - "enabled": true,
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "createdBy": 1,
- "updatedBy": 1,
- "creator": {
- "id": 1,
- "email": "joe@example.com",
- "name": "Joe Guy",
- "isDisabled": false,
- "state": "active",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "userAvatar": {
- "dataUrl": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "createdBy": 1,
- "updatedBy": 1
}
}
}
Disable a schedule
ref: disableSchedule
id required | integer |
{ }
{- "name": "string",
- "triggers": [
- {
- "id": 1,
- "timeBased": {
- "cron": {
- "expression": "15 10 * * MON-FRI"
}, - "timezone": "Europe/Berlin"
}
}
], - "tasks": [
- {
- "runFlow": {
- "flowId": 1
}
}
], - "enabled": true,
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "createdBy": 1,
- "updatedBy": 1,
- "creator": {
- "id": 1,
- "email": "joe@example.com",
- "name": "Joe Guy",
- "isDisabled": false,
- "state": "active",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "userAvatar": {
- "dataUrl": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "createdBy": 1,
- "updatedBy": 1
}
}
}
Count schedules owned by the current user
ref: countSchedules
filter | string Filter schedules using the attached flow name |
workflowId | string Filter schedules using workflowId |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with the filter query parameter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
{- "count": 1
}
Fetch a schedule
ref: getSchedule
id required | integer |
{- "name": "string",
- "triggers": [
- {
- "id": 1,
- "timeBased": {
- "cron": {
- "expression": "15 10 * * MON-FRI"
}, - "timezone": "Europe/Berlin"
}, - "nextFireDate": "2030-12-03T10:15:30Z"
}
], - "tasks": [
- {
- "runFlow": {
- "flowId": 1,
- "id": 1,
- "name": "string",
- "description": "string",
- "deleted_at": "2019-08-24T14:15:22Z",
- "cpProject": "string",
- "workspaceId": 1,
- "folderId": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "createdBy": 1,
- "creator": {
- "id": 1,
- "email": "joe@example.com",
- "name": "Joe Guy",
- "isDisabled": false,
- "state": "active",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "userAvatar": {
- "dataUrl": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "createdBy": 1,
- "updatedBy": 1
}
}, - "updatedBy": 1
}
}
], - "enabled": true,
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "createdBy": 1,
- "updatedBy": 1,
- "creator": {
- "id": 1,
- "email": "joe@example.com",
- "name": "Joe Guy",
- "isDisabled": false,
- "state": "active",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "userAvatar": {
- "dataUrl": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "createdBy": 1,
- "updatedBy": 1
}
}
}
Update an existing schedule
ref: updateSchedule
id required | integer |
name | string name of the schedule |
triggers | Array of objects (timeBasedTrigger) [ items ] |
tasks | Array of runFlowTaskSchema (objects) or Array of runPlanTaskSchema (objects) or Array of runWorkflowTaskSchema (objects) |
{- "name": "string",
- "triggers": [
- {
- "id": 1,
- "timeBased": {
- "cron": {
- "expression": "15 10 * * MON-FRI"
}, - "timezone": "Europe/Berlin"
}
}
], - "tasks": [
- {
- "runFlow": {
- "flowId": 1
}
}
]
}
{- "id": 1,
- "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Delete an existing schedule
ref: deleteSchedule
id required | integer |
A sqlScript object is used to specify arbitrary SQL to be run and is associated with an outputObject. Settings include the connection to use and the SQL type (pre/post).
Create a new sql script
ref: createSqlScript
sqlScript required | string String of SQL queries to be executed. |
type required | string Identifier to decide if the SQLs will be executed before or after a job. |
vendor required | string e.g. |
outputObjectId | integer outputObject to attach this sqlScript to. |
connectionId | string (connectionIdInfo) Internal identifier of the connection to use when publishing. When connection type is BigQuery, the id is |
runParameters | Array of objects (runParameterSqlScriptInfo) [ items ] Optional parameters that can be used to parameterize the sqlScript. |
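For example, to truncate a staging table before each job (the SQL, vendor, ids, and hostname are placeholders):
curl -X POST https://yourworkspace.cloud.trifacta.com/v4/sqlScripts \
-H 'authorization: Bearer <api-token>' \
-H 'content-type: application/json' \
-d '{"sqlScript": "TRUNCATE TABLE staging.revenue", "type": "pre", "vendor": "postgres", "outputObjectId": "21", "connectionId": "21"}'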
{- "sqlScript": "string",
- "type": "string",
- "vendor": "string",
- "outputObjectId": "21",
- "connectionId": "21",
- "runParameters": [
- {
- "type": "sql",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "variable": {
- "value": "string"
}
}
}
]
}
{- "sqlScript": "string",
- "type": "string",
- "vendor": "string",
- "outputObjectId": "21",
- "connection": {
- "id": "21"
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
List existing sql scripts
ref: listSqlScripts
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with the limit query parameter. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with the filter query parameter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count property |
{- "data": [
- {
- "sqlScript": "string",
- "type": "string",
- "vendor": "string",
- "outputObjectId": "21",
- "connection": {
- "id": "21"
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
], - "count": 1
}
Count existing sql scripts
ref: countSqlScripts
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with the limit query parameter. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with the filter query parameter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count property |
{- "count": 1
}
Get an existing sql script
ref: getSqlScript
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
{- "sqlScript": "string",
- "type": "string",
- "vendor": "string",
- "outputObjectId": "21",
- "connection": {
- "id": "21"
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
Patch an existing sql script
ref: patchSqlScript
id required | integer |
sqlScript | string String of SQL queries to be executed. |
type | string Identifier that determines whether the SQL statements are executed before or after a job. |
vendor | string e.g. |
connectionId | string (connectionIdInfo) Internal identifier of the connection to use when publishing. When connection type is BigQuery, the id is |
{- "sqlScript": "string",
- "type": "string",
- "vendor": "string",
- "connectionId": "21"
}
{- "id": 1,
- "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Delete an existing sql script
ref: deleteSqlScript
id required | integer |
Webhook tasks allow you to make HTTP calls to external services after job completion in a flow.
Create a new webhook flow task
name required | string Webhook name |
flowId required | integer or string Id of the flow the webhook belongs to |
url required | string Webhook url |
method required | string Enum: "post" "get" "put" "patch" "delete" HTTP method |
triggerEvent required | string Enum: "onJobFailure" "onJobSuccess" "onJobDone" Event that will trigger the webhook |
triggerObject required | string Enum: "any" "some" Indicates which objects will trigger the webhook |
body | string Webhook body |
headers | object Webhook HTTP headers |
secretKey | string Optional secret key used to sign the webhook |
sslVerification | boolean Enable SSL verification |
retryOnFailure | boolean Retry if the status code is not in the 200-299 range |
{- "name": "string",
- "flowId": 1,
- "url": "string",
- "method": "post",
- "triggerEvent": "onJobFailure",
- "triggerObject": "any",
- "body": "string",
- "headers": {
- "property1": "string",
- "property2": "string"
}, - "secretKey": "string",
- "sslVerification": true,
- "retryOnFailure": true
}
{- "id": 1,
- "name": "string",
- "flow": {
- "id": 1
}, - "url": "string",
- "method": "post",
- "triggerEvent": "onJobFailure",
- "triggerObject": "any",
- "flowNodeIds": [
- 1
], - "body": "string",
- "headers": {
- "property1": "string",
- "property2": "string"
}, - "secretKey": "string",
- "sslVerification": true,
- "retryOnFailure": true,
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
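A minimal sketch of creating such a task, assuming a /v4/webhooks endpoint (the exact path is not shown in this section) plus a placeholder host, token, and receiver URL:

# Minimal sketch: notify an external service whenever any job in flow 1 fails.
import requests

BASE_URL = "https://example.com"  # placeholder host
HEADERS = {
    "Authorization": "Bearer <ACCESS_TOKEN>",  # placeholder token
    "Content-Type": "application/json",
    "Accept": "application/json",
}

payload = {
    "name": "notify-on-failure",
    "flowId": 1,
    "url": "https://hooks.example.com/trifacta",  # placeholder receiver
    "method": "post",
    "triggerEvent": "onJobFailure",
    "triggerObject": "any",
    "body": '{"text": "Job failed"}',
    "headers": {"Content-Type": "application/json"},
    "sslVerification": True,
    "retryOnFailure": True,
}

resp = requests.post(f"{BASE_URL}/v4/webhooks", json=payload, headers=HEADERS)
resp.raise_for_status()
print("webhook id:", resp.json()["id"])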
Test a webhook task without running a job.
Quotas:
20 req./user/min, 30 req./workspace/min
ref: testWebhook
url required | string Webhook url |
method required | string Enum: "post" "get" "put" "patch" "delete" HTTP method |
body | string Webhook body |
headers | object Webhook HTTP headers |
secretKey | string Optional secret key used to sign the webhook |
sslVerification | boolean Enable SSL verification |
{- "url": "string",
- "method": "post",
- "body": "string",
- "headers": {
- "property1": "string",
- "property2": "string"
}, - "secretKey": "string",
- "sslVerification": true
}
{- "statusCode": 1,
- "error": { }
}
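A minimal sketch of a dry run against this endpoint, assuming a /v4/webhooks/test path (not confirmed by this section) and a placeholder host and token; mind the quotas above:

# Minimal sketch: dry-run a webhook definition before attaching it to a flow.
import requests

BASE_URL = "https://example.com"  # placeholder host
HEADERS = {
    "Authorization": "Bearer <ACCESS_TOKEN>",  # placeholder token
    "Content-Type": "application/json",
    "Accept": "application/json",
}

payload = {
    "url": "https://hooks.example.com/trifacta",  # placeholder receiver
    "method": "post",
    "body": '{"text": "test"}',
    "sslVerification": True,
}

resp = requests.post(f"{BASE_URL}/v4/webhooks/test", json=payload, headers=HEADERS)
resp.raise_for_status()
result = resp.json()
print("target answered with status", result["statusCode"])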
Get an existing webhook flow task
ref: getWebhookFlowTask
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
{- "id": 1,
- "name": "string",
- "flow": {
- "id": 1
}, - "url": "string",
- "method": "post",
- "triggerEvent": "onJobFailure",
- "triggerObject": "any",
- "flowNodeIds": [
- 1
], - "body": "string",
- "headers": {
- "property1": "string",
- "property2": "string"
}, - "secretKey": "string",
- "sslVerification": true,
- "retryOnFailure": true,
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Delete an existing webhook flow task
id required | integer |
{- "id": 1,
- "name": "string",
- "flow": {
- "id": 1
}, - "url": "string",
- "method": "post",
- "triggerEvent": "onJobFailure",
- "triggerObject": "any",
- "flowNodeIds": [
- 1
], - "body": "string",
- "headers": {
- "property1": "string",
- "property2": "string"
}, - "secretKey": "string",
- "sslVerification": true,
- "retryOnFailure": true,
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
A self-contained, configurable space shared by several users, containing flows, datasets, connections, and other Designer Cloud Powered by Trifacta objects.
Delete Workspace configuration settings override (reset the settings to their initial values).
settings required | Array of strings |
{- "settings": [
- "feature.myFeature"
]
}
{- "numberOfRowsDeleted": 1
}
Delete Workspace configuration settings override (reset the settings to their initial values).
id required | integer |
settings required | Array of strings |
{- "settings": [
- "feature.myFeature"
]
}
{- "numberOfRowsDeleted": 1
}
Get workspace configuration. Settings set to null use the default configuration.
It is possible to filter the configuration to a specific key using the query parameter key:
/v4/workspaces/:id/configuration?key=outputFormats.JSON
[{ "key": "outputFormats.JSON", "value": true }]
key | string |
[
  {
    "key": "feature.feature1",
    "value": 42,
    "schema": {
      "type": "integer",
      "default": 10,
      "description": "some example description"
    }
  },
  {
    "key": "feature.anotherFeature.usingDefaultValue",
    "value": null,
    "schema": {
      "type": "boolean",
      "default": false,
      "description": "some example description"
    }
  }
]
Update the workspace configuration for the specified keys. To reset a configuration value to its default, use the delete endpoint.
Use the getConfigurationSchema endpoint to get the list of editable configuration values.
configuration required | Array of objects (configurationKeyValueSchema) [ items ] |
{- "configuration": [
- {
- "key": "feature.feature1",
- "value": false
}, - {
- "key": "feature.feature2",
- "value": "some value"
}
]
}
[
  true
]
Get workspace configuration. Settings set to null use the default configuration.
It is possible to filter the configuration to a specific key using the query parameter key:
/v4/workspaces/:id/configuration?key=outputFormats.JSON
[{ "key": "outputFormats.JSON", "value": true }]
id required | integer |
key | string |
[
  {
    "key": "feature.feature1",
    "value": 42,
    "schema": {
      "type": "integer",
      "default": 10,
      "description": "some example description"
    }
  },
  {
    "key": "feature.anotherFeature.usingDefaultValue",
    "value": null,
    "schema": {
      "type": "boolean",
      "default": false,
      "description": "some example description"
    }
  }
]
Update the workspace configuration for the specified keys. To reset a configuration value to its default, use the delete endpoint.
Use the getConfigurationSchema endpoint to get the list of editable configuration values.
id required | integer |
configuration required | Array of objects (configurationKeyValueSchema) [ items ] |
{- "configuration": [
- {
- "key": "feature.feature1",
- "value": false
}, - {
- "key": "feature.feature2",
- "value": "some value"
}
]
}
[
  true
]
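Putting the two endpoints together, here is a minimal sketch that reads one key and then overrides others. The GET path matches the example above; the use of PUT for the update, plus the host, token, workspace id, and key names, are assumptions:

# Minimal sketch: read one configuration key for a workspace, then override keys.
import requests

BASE_URL = "https://example.com"  # placeholder host
HEADERS = {
    "Authorization": "Bearer <ACCESS_TOKEN>",  # placeholder token
    "Content-Type": "application/json",
    "Accept": "application/json",
}
WORKSPACE_ID = 7  # placeholder id

# Read: filter to a single key via the `key` query parameter.
resp = requests.get(
    f"{BASE_URL}/v4/workspaces/{WORKSPACE_ID}/configuration",
    params={"key": "outputFormats.JSON"},
    headers=HEADERS,
)
resp.raise_for_status()
print(resp.json())  # e.g. [{"key": "outputFormats.JSON", "value": true}]

# Update: override the listed keys (use the delete endpoint to reset defaults).
payload = {"configuration": [{"key": "feature.feature1", "value": False}]}
resp = requests.put(  # assumed verb for the update endpoint
    f"{BASE_URL}/v4/workspaces/{WORKSPACE_ID}/configuration",
    json=payload,
    headers=HEADERS,
)
resp.raise_for_status()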
Get configuration schema for the specified workspace.
id required | integer |
{- "property1": {
- "type": "string",
- "default": null,
- "allowedValues": [
- "string"
]
}, - "property2": {
- "type": "string",
- "default": null,
- "allowedValues": [
- "string"
]
}
}
Get configuration schema for the current workspace.
{- "property1": {
- "type": "string",
- "default": null,
- "allowedValues": [
- "string"
]
}, - "property2": {
- "type": "string",
- "default": null,
- "allowedValues": [
- "string"
]
}
}
Transfer Designer Cloud Powered by Trifacta assets to another user in the current workspace. For the given workspace, assigns ownership of all the user's contents to another user. This includes flows, datasets, recipes, and connections: essentially any object that can be created and managed through the Designer Cloud Powered by Trifacta UI.
ℹ️ NOTE: This API endpoint does not delete the original user account. To delete the user account, another API call is needed.
ℹ️ NOTE: The asset transfer endpoint cannot be applied to deleted users. You must transfer the assets first before deleting the user.
fromPersonId required | integer or string The id of the person to transfer assets from |
toPersonId required | integer or string The id of the person to transfer assets to |
assets | object Asset IDs that need to be transferred. To specify all assets of a certain type, use "all" instead of an integer array. If the assets payload is not provided, all assets of all types are transferred. |
{- "fromPersonId": 2,
- "toPersonId": 5,
- "assets": {
- "connections": [
- 702,
- 704
], - "datasources": [
- 111,
- 112,
- 113
], - "flows": [
- 201,
- 202
], - "macros": "all",
- "userdefinedfunctions": [
- 310,
- 307,
- 308
], - "plans": [
- 510,
- 512
]
}
}
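A minimal sketch of such a transfer follows. The endpoint path used here is purely hypothetical (this section does not show it), as are the host, token, and ids:

# Minimal sketch: transfer a user's flows and all macros to another user.
import requests

BASE_URL = "https://example.com"  # placeholder host
HEADERS = {
    "Authorization": "Bearer <ACCESS_TOKEN>",  # placeholder token
    "Content-Type": "application/json",
    "Accept": "application/json",
}

payload = {
    "fromPersonId": 2,
    "toPersonId": 5,
    # Omit "assets" entirely to transfer every asset of every type.
    "assets": {"flows": [201, 202], "macros": "all"},
}

# Hypothetical path, for illustration only.
resp = requests.post(
    f"{BASE_URL}/v4/workspaces/current/assetTransfer",
    json=payload,
    headers=HEADERS,
)
resp.raise_for_status()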
Represents the data produced by running a recipe on some input.
ℹ️ NOTE: In the Designer Cloud Powered by Trifacta application UI, the WrangledDataset object is called a recipe.
Create a new wrangled dataset
importedDataset required | object |
flow required | object |
name required | string |
inferredScript | object |
{- "importedDataset": {
- "id": 1
}, - "inferredScript": { },
- "flow": {
- "id": 1
}, - "name": "string"
}
{- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "recipe": {
- "id": 1
}, - "activeSample": {
- "id": 1
}, - "associatedPeople": { },
- "referenceinfo": {
- "id": 1,
- "name": "string",
- "description": "string"
}, - "wrangled": true
}
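A minimal sketch of creating a wrangled dataset (a recipe) on top of an imported dataset, assuming a /v4/wrangledDatasets path and placeholder host, token, and ids:

# Minimal sketch: add a recipe on an imported dataset inside an existing flow.
import requests

BASE_URL = "https://example.com"  # placeholder host
HEADERS = {
    "Authorization": "Bearer <ACCESS_TOKEN>",  # placeholder token
    "Content-Type": "application/json",
    "Accept": "application/json",
}

payload = {
    "importedDataset": {"id": 1},
    "flow": {"id": 1},
    "name": "my recipe",
}

resp = requests.post(f"{BASE_URL}/v4/wrangledDatasets", json=payload, headers=HEADERS)
resp.raise_for_status()
# Per the response example above, the new recipe id is nested under "recipe".
print("recipe id:", resp.json()["recipe"]["id"])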
List existing wrangled datasets
ref: listWrangledDatasets
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the |
filter | string Example: filter=my-object Value for filtering objects. See |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count property |
{- "data": [
- {
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "recipe": {
- "id": 1
}, - "activeSample": {
- "id": 1
}, - "associatedPeople": { },
- "referenceinfo": {
- "id": 1,
- "name": "string",
- "description": "string"
}, - "wrangled": true
}
], - "count": 1
}
Add this wrangled dataset to a flow as a reference.
id required | integer |
flow required | object The flow to add this dataset to. |
{- "flow": {
- "id": 1
}
}
{- "flow": {
- "id": 1
}, - "referencedFlowNode": {
- "id": 1
}, - "activeSample": {
- "id": 1
}, - "wrangled": true
}
Count existing wrangled datasets
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the |
filter | string Example: filter=my-object Value for filtering objects. See |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count property |
{- "count": 1
}
Get an existing wrangled dataset
ref: getWrangledDataset
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
{- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "recipe": {
- "id": 1
}, - "activeSample": {
- "id": 1
}, - "associatedPeople": { },
- "referenceinfo": {
- "id": 1,
- "name": "string",
- "description": "string"
}, - "wrangled": true
}
Update a wrangled dataset. This can mean one of two things: either it updates the flowNode object in our database, or the editable script object.
ref: patchWrangledDataset
id required | integer |
activesampleId | integer Internal identifier of the currently active |
referenceId | integer Internal identifier for referenceInfo, which contains the name and description of the reference object associated with this flow node. This is how the reference dataset will appear when used in other flows. |
sampleLoadLimit | integer If not null, stores the user-selected sample size in MB |
deletedAt | string <date-time> The time this object was deleted. |
{- "activesampleId": 1,
- "referenceId": 1,
- "sampleLoadLimit": 1,
- "deletedAt": "2019-08-24T14:15:22Z"
}
{- "id": 1,
- "updater": {
- "id": 1
}, - "updatedAt": "2019-08-24T14:15:22Z"
}
Get the dataset that is the primary input for this wrangled dataset. This can be either an imported dataset or a wrangled dataset.
ref: getInputDataset
id required | integer |
{- "wrangledDataset": {
- "id": 1
}
}
This action performs a dataset swap for the source of a wrangled dataset, which can be done through the UI.
Update the primary input dataset for the specified wrangled dataset. Each wrangled dataset must have one and only one primary input dataset, which can be an imported or wrangled dataset. If a wrangled dataset from another flow is selected, a reference will be used.
✅ TIP: After you have created a job via API, you can use this API to swap out the source data for the job's dataset. In this manner, you can rapidly re-execute a pre-existing job using fresh data.
ref: updateInputDataset
id required | integer |
wrangledDataset required | object |
{- "wrangledDataset": {
- "id": 1
}
}
{- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "recipe": {
- "id": 1
}, - "activeSample": {
- "id": 1
}, - "associatedPeople": { },
- "referenceinfo": {
- "id": 1,
- "name": "string",
- "description": "string"
}, - "wrangled": true
}
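A minimal sketch of the swap described in the tip above. The /v4/wrangledDatasets/:id/primaryInputDataset path is inferred from the updateInputDataset ref and is an assumption, as are the host, token, and ids:

# Minimal sketch: point wrangled dataset 9 at a different source so an
# existing job definition can be re-run against fresh data.
import requests

BASE_URL = "https://example.com"  # placeholder host
HEADERS = {
    "Authorization": "Bearer <ACCESS_TOKEN>",  # placeholder token
    "Content-Type": "application/json",
    "Accept": "application/json",
}

payload = {"wrangledDataset": {"id": 1}}  # the new primary input

resp = requests.put(  # assumed verb and path for updateInputDataset
    f"{BASE_URL}/v4/wrangledDatasets/9/primaryInputDataset",
    json=payload,
    headers=HEADERS,
)
resp.raise_for_status()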
A writeSetting object defines file-based outputs within an outputObject. Settings include path, format, compression, and delimiters.
To specify multiple outputs, you can include additional writeSetting objects in the request. For example, if you want to generate output to csv and json, you can duplicate the writeSettings object for csv and change the format value in the second one to json, as in the sketch below.
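A minimal sketch of that duplication, assuming a /v4/writeSettings endpoint and placeholder host, token, paths, and outputObject id:

# Minimal sketch: create a csv writeSetting for outputObject 7, then a json
# twin that differs only in format and path.
import requests

BASE_URL = "https://example.com"  # placeholder host
HEADERS = {
    "Authorization": "Bearer <ACCESS_TOKEN>",  # placeholder token
    "Content-Type": "application/json",
    "Accept": "application/json",
}

csv_setting = {
    "path": "/path/to/file.csv",
    "action": "create",
    "format": "csv",
    "outputObjectId": 7,
}
json_setting = {**csv_setting, "format": "json", "path": "/path/to/file.json"}

for setting in (csv_setting, json_setting):
    resp = requests.post(f"{BASE_URL}/v4/writeSettings", json=setting, headers=HEADERS)
    resp.raise_for_status()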
Create a new writesetting
ref: createWriteSetting
path required | string The fully qualified path of the output location to which the results are written. |
action required | string Enum: "create" "append" "overwrite" If the output file or directory exists, you can specify one of the following actions |
format required | string Enum: "csv" "json" "avro" "pqt" "hyper" Output format for the results. Specify one of the supported values. |
compression | string Enum: "none" "gzip" "bzip2" "snappy" For csv and json results, you can optionally compress them using |
header | boolean For csv results with action set to |
asSingleFile | boolean For |
delim | string The delimiter between field values in an output row. Only relevant if the chosen |
hasQuotes | boolean If true, each field in the output is wrapped in double-quotes. |
includeMismatches | boolean If true, write out mismatched values as a string. |
outputObjectId | integer outputObject to attach this writeSetting to. |
runParameters | Array of objects (runParameterDestinationInfo) [ items ] Optional parameters that can be used to parameterize the path |
connectionId | string Internal identifier of the connection to use when writing to a SFTP destination. |
{- "path": "/path/to/file.csv",
- "action": "create",
- "format": "csv",
- "compression": "none",
- "header": true,
- "asSingleFile": true,
- "delim": ",",
- "hasQuotes": true,
- "includeMismatches": true,
- "outputObjectId": 7,
- "runParameters": [
- {
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "variable": {
- "value": "string"
}, - "overrideKey": "myVar"
}
}
], - "connectionId": "5"
}
{- "path": "string",
- "action": "create",
- "format": "csv",
- "compression": "none",
- "header": true,
- "asSingleFile": true,
- "delim": ",",
- "hasQuotes": true,
- "includeMismatches": true,
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "connectionId": "25"
}
List existing write settings
ref: listWriteSettings
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the |
filter | string Example: filter=my-object Value for filtering objects. See |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count property |
{- "data": [
- {
- "path": "string",
- "action": "create",
- "format": "csv",
- "compression": "none",
- "header": true,
- "asSingleFile": true,
- "delim": ",",
- "hasQuotes": true,
- "includeMismatches": true,
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "connectionId": "25"
}
], - "count": 1
}
Count existing write settings
ref: countWriteSettings
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the |
filter | string Example: filter=my-object Value for filtering objects. See |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count property |
{- "count": 1
}
Get an existing write setting
ref: getWriteSetting
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
{- "path": "string",
- "action": "create",
- "format": "csv",
- "compression": "none",
- "header": true,
- "asSingleFile": true,
- "delim": ",",
- "hasQuotes": true,
- "includeMismatches": true,
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "connectionId": "25"
}
Patch an existing write setting
ref: patchWriteSetting
id required | integer |
path | string The fully qualified path of the output location to which the results are written. |
action | string Enum: "create" "append" "overwrite" If the output file or directory exists, you can specify one of the following actions |
format | string Enum: "csv" "json" "avro" "pqt" "hyper" Output format for the results. Specify one of the supported values. |
compression | string Enum: "none" "gzip" "bzip2" "snappy" For csv and json results, you can optionally compress them using |
header | boolean For csv results with action set to |
asSingleFile | boolean For |
delim | string The delimiter between field values in an output row. Only relevant if the chosen |
hasQuotes | boolean If true, each field in the output is wrapped in double-quotes. |
includeMismatches | boolean If true, write out mismatched values as a string. |
connectionId | string Internal identifier of the connection to use when writing to a SFTP destination. |
{- "path": "string",
- "action": "create",
- "format": "csv",
- "compression": "none",
- "header": true,
- "asSingleFile": true,
- "delim": ",",
- "hasQuotes": true,
- "includeMismatches": true,
- "connectionId": "25"
}
{- "id": 1,
- "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Delete an existing write setting
ref: deleteWriteSetting
id required | integer |