To enable programmatic control over its objects, the Designer Cloud Powered by Trifacta Platform supports a range of REST API endpoints across its objects. This section provides an overview of the API design, methods, and supported use cases.
Most of the endpoints accept JSON as input and return JSON responses.
This means that you must usually add the following headers to your request:
Content-type: application/json
Accept: application/json
This reference corresponds to platform version 9.7.0+2203516.20230127074458.3b1de3a3.
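These headers can be attached with any HTTP client; a minimal sketch using Python's standard library, where the host, port, and token are placeholder values:

```python
import urllib.request

# Build (but do not send) a request carrying the usual JSON headers.
# The host, port, and token below are placeholders, not real values.
req = urllib.request.Request(
    "http://example.com:3005/v4/flows",
    headers={
        "Content-type": "application/json",
        "Accept": "application/json",
        "Authorization": "Bearer <token>",
    },
)
```

Calling `urllib.request.urlopen(req)` would actually send the request; HTTP header names are case-insensitive on the wire.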
The term resource refers to a single type of object in the Designer Cloud Powered by Trifacta Platform metadata. The API is organized around its endpoints' corresponding resources.
The name of a resource is typically plural and expressed in camelCase. Example: jobGroups.
Resource names are used as part of endpoint URLs, as well as in API parameters and responses.
The platform supports Create, Read, Update, and Delete operations on most resources. The conventions for these operations and their common parameters are described below.
Some endpoints deviate from these conventions; such exceptions are noted where they occur.
To create a resource, you typically submit an HTTP POST request with the resource's required metadata in the request body. On success, the response returns a 201 Created status code with the resource's metadata, including its internal id, in the response body.
An HTTP GET request can be used to read a single resource or to list resources. A resource's id can be submitted in the request parameters to read that specific resource. On success, the response usually returns a 200 OK status code with the resource's metadata in the response body.
If a GET request does not include a specific resource id, it is treated as a list request. On success, the response usually returns a 200 OK status code with an object containing a list of resources' metadata in the response body.
When reading resources, some common query parameters are usually available. For example:
/v4/jobGroups?limit=100&includeDeleted=true&embed=jobs
Query Parameter | Type | Description |
---|---|---|
embed | string | Comma-separated list of objects to include as part of the response. See Embedding resources. |
includeDeleted | string | If set to true, the response includes deleted objects. |
limit | integer | Maximum number of objects to fetch. The default is usually 25. |
offset | integer | Offset after which to start returning objects. For use with limit query parameter. |
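The parameters in the table combine into a URL like the example above; a small sketch building one with Python's standard library:

```python
from urllib.parse import urlencode

# Reproduce the example list URL from the table above.
params = {"limit": 100, "includeDeleted": "true", "embed": "jobs"}
url = "http://example.com:3005/v4/jobGroups?" + urlencode(params)
```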
Updating a resource requires the resource id and is typically done using an HTTP PUT or PATCH request, with the fields to modify in the request body. On success, the response usually returns a 200 OK status code with minimal information about the modified resource in the response body.
Deleting a resource requires the resource id and is typically executed via an HTTP DELETE request. On success, the response usually returns a 204 No Content status code.
Resource names are plural and expressed in camelCase.
Resource names are consistent between the main URL and URL parameters.
Parameter lists are consistently enveloped in the following manner:
{ "data": [{ ... }] }
Field names are in camelCase and are consistent with the resource name in the URL or with the embed URL parameter.
"creator": { "id": 1 },
"updater": { "id": 2 },
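Because list responses are enveloped under a data key as shown above, client code typically unwraps the envelope before iterating; a minimal sketch with an illustrative payload:

```python
import json

# An illustrative enveloped list response, matching the convention above.
payload = json.loads('{"data": [{"id": 1, "creator": {"id": 1}}, {"id": 2}]}')
ids = [obj["id"] for obj in payload["data"]]
```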
When reading a resource, the platform supports an embed query parameter for most resources, which allows the caller to request associated resources in the response.
Use of this parameter requires knowledge of how different resources are related to each other and is suggested for advanced users only.
In the following example, the sub-jobs of a jobGroup are embedded in the response for jobGroup=1:
http://example.com:3005/v4/jobGroups/1?embed=jobs
If you provide an invalid embedding, you will get an error message. The response contains the list of resources that can be embedded. For example:
http://example.com:3005/v4/jobGroups/1?embed=*
Example error:
{
  "exception": {
    "name": "ValidationFailed",
    "message": "Input validation failed",
    "details": "No association * in flows! Valid associations are creator, updater, snapshots..."
  }
}
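Client code can inspect such an error payload to detect a validation failure; a sketch using the example error above:

```python
import json

# The example error body shown above.
body = '''{
  "exception": {
    "name": "ValidationFailed",
    "message": "Input validation failed",
    "details": "No association * in flows! Valid associations are creator, updater, snapshots..."
  }
}'''
err = json.loads(body)["exception"]
failed_validation = err["name"] == "ValidationFailed"
```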
You can tell the application that you need less data, to improve endpoint performance, by using the fields query parameter. For example:
http://example.com:3005/v4/flows?fields=id;name
The list of fields must be separated by semicolons (;). Note that the application might sometimes return more fields than requested.
You can also use it while embedding resources.
http://example.com:3005/v4/flows?fields=id;name&embed=flownodes(fields=id)
You can limit and sort the embedded resources for some associations. For example:
http://example.com:3005/v4/flows?fields=id&embed=flownodes(limit=1,fields=id,sort=-id)
Note that not all associations support this. An error is returned when it is not possible to limit the number of embedded results.
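Since fields uses semicolons and embed accepts parenthesized options, it is simplest to compose these query strings literally rather than percent-encoding them; a sketch:

```python
# Compose the fields/embed query literally; the semicolon separator in
# `fields` is meant to be sent as-is.
fields = ";".join(["id", "name"])
embed = "flownodes(limit=1,fields=id,sort=-id)"
url = f"http://example.com:3005/v4/flows?fields={fields}&embed={embed}"
```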
The Designer Cloud Powered by Trifacta Platform uses HTTP response codes to indicate the success or failure of an API request.
HTTP Status Code (client errors) | Notes |
---|---|
400 Bad Request | The request was invalid. See the error message in the response body for details. |
403 Forbidden | Incorrect permissions to access the Resource. |
404 Not Found | Resource cannot be found. |
410 Gone | Resource has been previously deleted. |
415 Unsupported Media Type | Incorrect Accept or Content-type header. |
Each request has a request identifier, which can be found in the response headers, in the following form:
x-trifacta-request-id: <myRequestId>
ℹ️ NOTE: If you have an issue with a specific request, please include the x-trifacta-request-id value when you contact support.
✅ TIP: You can use the request identifier value to scan the logs to identify technical details for an issue with a specific request.
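Extracting the identifier from a response's headers is straightforward; in this sketch a plain dict stands in for a real HTTP response object, and the identifier value is the placeholder from above:

```python
# A dict standing in for real response headers; <myRequestId> is the
# placeholder value used in the documentation above.
headers = {
    "Content-Type": "application/json",
    "x-trifacta-request-id": "<myRequestId>",
}
request_id = headers.get("x-trifacta-request-id")
```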
You can use a third-party client, such as curl, HTTPie, Postman, or the Insomnia REST client, to test the Designer Cloud Powered by Trifacta API.
⚠️ When testing the API, bear in mind that you are working with your live production data, not sample data or test data.
Note that you will need to pass an API token with each request.
For example, here is how to run a job with curl:
curl -X POST 'http://example.com:3005/v4/jobGroups' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer <token>' \
-d '{ "wrangledDataset": { "id": "<recipe-id>" } }'
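The same call can be sketched with Python's standard library; `<token>` and `<recipe-id>` are the same placeholders as in the curl example, and the request object is constructed but not sent:

```python
import json
import urllib.request

# Equivalent of the curl call above, constructed but not sent.
# <token> and <recipe-id> are placeholders, as in the curl example.
body = json.dumps({"wrangledDataset": {"id": "<recipe-id>"}}).encode()
req = urllib.request.Request(
    "http://example.com:3005/v4/jobGroups",
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <token>",
    },
    method="POST",
)
```

Calling `urllib.request.urlopen(req)` would submit the job.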
Using a graphical tool such as Postman or Insomnia, it is possible to import the API specification directly.
Note that with Postman, you can also generate code snippets by selecting a request and clicking on the Code button.
ℹ️ NOTE: Each request to the Designer Cloud Powered by Trifacta Platform must include authentication credentials.
API access tokens can be acquired and applied to your requests to obscure sensitive Personally Identifiable Information (PII) and are compliant with common privacy and security standards. These tokens last for a preconfigured time period and can be renewed as needed.
You can create and delete access tokens through the Settings area of the application. With each request, you submit the token as part of the Authorization header.
Authorization: Bearer <tokenValue>
As needed, you can create and use additional tokens. There is no limit to the number of tokens you can create. See Manage API Access Tokens for more information.
Security Scheme Type | HTTP |
---|---|
HTTP Authorization Scheme | bearer |
An object used to provide a simpler and more secure way of accessing the REST API endpoints of the Designer Cloud Powered by Trifacta Platform. Access tokens limit exposure of clear-text authentication values and provide an easy method of managing authentication outside of the browser. See the Authentication section for more information.
Create an API Access Token. See the Authentication section for more information about API Access Tokens.
⚠️ API tokens inherit the API access of the user who creates them. Treat tokens as passwords and keep them in a secure place.
This request requires you to be authenticated.
ref: createApiAccessToken
lifetimeSeconds required | integer Lifetime in seconds for the access token. Set this value to -1 to create a non-expiring token. |
description | string User-friendly description for the access token |
{
  "lifetimeSeconds": -1,
  "description": "API access token description"
}
{
  "tokenValue": "eyJ0b2tlbklkIjoiYmFiOTA4ZjctZGNjMi00OTYyLTg1YmQtYzFlOTZkMGNhY2JkIiwic2VjcmV0IjoiOWIyNjQ5MWJiODM4ZWY0OWE1NzdhYzYxOWEwYTFkNjc4ZmE4NmE5MzBhZWFiZDk3OGRlOTY0ZWI0MDUyODhiOCJ9",
  "tokenInfo": {
    "tokenId": "0bc1d49f-5475-4c62-a0ba-6ad269389ada",
    "description": "API access token description",
    "expiredAt": "2019-08-24T14:15:22Z",
    "createdAt": "2019-08-24T14:15:22Z",
    "lastUsed": null
  }
}
List API Access Tokens of the current user
ref: listApiAccessTokens
{
  "data": [
    {
      "tokenId": "0bc1d49f-5475-4c62-a0ba-6ad269389ada",
      "description": "API access token description",
      "expiredAt": "2019-08-24T14:15:22Z",
      "createdAt": "2019-08-24T14:15:22Z",
      "lastUsed": null
    }
  ],
  "count": 1
}
Get an existing API access token
ref: getApiAccessToken
tokenId required | string Example: 0bc1d49f-5475-4c62-a0ba-6ad269389ada |
{
  "tokenId": "0bc1d49f-5475-4c62-a0ba-6ad269389ada",
  "description": "API access token description",
  "expiredAt": "2019-08-24T14:15:22Z",
  "createdAt": "2019-08-24T14:15:22Z",
  "lastUsed": null
}
Delete the specified access token.
⚠️ If you delete an active access token, you may prevent the user from accessing the platform outside of the Trifacta application.
ref: deleteApiAccessToken
tokenId required | string Example: 0bc1d49f-5475-4c62-a0ba-6ad269389ada |
An object containing information for accessing AWS S3 storage, including details like defaultBucket, credentials, etc.
Create a new AWS config
ref: createAwsConfig
credentialProvider required | string Enum: "default" "temporary" |
defaultBucket | string Default S3 bucket where user can upload and write results |
extraBuckets | Array of strings |
role | string AWS IAM Role, required when credential provider is set to temporary |
key | string AWS key string, required when credential provider is set to default |
secret | string AWS secret string, required when credential provider is set to default |
personId | integer When creating an AWS configuration, an administrator can insert the personId parameter to assign the configuration to the internal identifier for the user. If this parameter is not included, the AWS configuration is assigned to the user who created it. |
workspaceId | integer When creating an AWS configuration, an administrator can insert the workspaceId parameter to assign the configuration to the internal identifier for the workspace. |
{
  "defaultBucket": "bucketName",
  "extraBuckets": [
    "bucket1"
  ],
  "credentialProvider": "default",
  "role": "arn:aws:iam::xxxxxxxxxxxxx:role/sample-role",
  "key": "string",
  "secret": "string",
  "personId": 1,
  "workspaceId": 1
}
{
  "defaultBucket": "bucketName",
  "extraBuckets": [
    "bucket1"
  ],
  "credentialProvider": "default",
  "role": "arn:aws:iam::xxxxxxxxxxxxx:role/sample-role",
  "credential": {
    "id": 1
  },
  "id": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "activeRoleId": 1
}
fields | string Example: fields=id;name;description Semi-colons-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with limit. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects. |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields and filterType. |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
{
  "data": [
    {
      "defaultBucket": "bucketName",
      "extraBuckets": [
        "bucket1"
      ],
      "credentialProvider": "default",
      "role": "arn:aws:iam::xxxxxxxxxxxxx:role/sample-role",
      "credential": {
        "id": 1
      },
      "id": 1,
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "activeRoleId": 1
    }
  ],
  "count": 1
}
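List endpoints such as this one can be paged with the limit and offset parameters described above. A minimal sketch, with fetch_page standing in for a real HTTP call that returns the enveloped payload:

```python
# fetch_page is a stand-in for an HTTP GET with limit/offset parameters;
# here it slices a small in-memory dataset of seven objects.
def fetch_page(offset, limit):
    all_items = [{"id": i} for i in range(1, 8)]
    return {"data": all_items[offset:offset + limit]}

def fetch_all(limit=3):
    items, offset = [], 0
    while True:
        page = fetch_page(offset, limit)["data"]
        items.extend(page)
        if len(page) < limit:  # a short page means we reached the end
            break
        offset += limit
    return items
```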
The request body contains the parameters of the awsConfigs object that you wish to modify. You do not have to include parameters that are not being modified.
The following changes the default bucket for the AWS configuration object.
{ "defaultBucket": "testing2" }
ref: updateAwsConfig
id required | integer |
id | integer unique identifier for this object. |
defaultBucket | string Default S3 bucket where user can upload and write results |
extraBuckets | Array of strings |
credentialProvider | string Enum: "default" "temporary" |
role | string AWS IAM Role, required when credential provider is set to temporary |
key | string AWS key string, required when credential provider is set to default |
secret | string AWS secret string, required when credential provider is set to default |
personId | integer When creating an AWS configuration, an administrator can insert the personId parameter to assign the configuration to the internal identifier for the user. If this parameter is not included, the AWS configuration is assigned to the user who created it. |
workspaceId | integer When creating an AWS configuration, an administrator can insert the workspaceId parameter to assign the configuration to the internal identifier for the workspace. |
{
  "id": 1,
  "defaultBucket": "bucketName",
  "extraBuckets": [
    "bucket1"
  ],
  "credentialProvider": "default",
  "role": "arn:aws:iam::xxxxxxxxxxxxx:role/sample-role",
  "key": "string",
  "secret": "string",
  "personId": 1,
  "workspaceId": 1
}
{
  "defaultBucket": "bucketName",
  "extraBuckets": [
    "bucket1"
  ],
  "credentialProvider": "default",
  "role": "arn:aws:iam::xxxxxxxxxxxxx:role/sample-role",
  "credential": {
    "id": 1
  },
  "id": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "activeRoleId": 1
}
The request body contains the parameters of the awsConfigs object that you wish to modify. You do not have to include parameters that are not being modified.
The following changes the default bucket for the AWS configuration object.
{ "defaultBucket": "testing2" }
ref: patchAwsConfig
id required | integer |
id | integer unique identifier for this object. |
defaultBucket | string Default S3 bucket where user can upload and write results |
extraBuckets | Array of strings |
credentialProvider | string Enum: "default" "temporary" |
role | string AWS IAM Role, required when credential provider is set to temporary |
key | string AWS key string, required when credential provider is set to default |
secret | string AWS secret string, required when credential provider is set to default |
personId | integer When creating an AWS configuration, an administrator can insert the personId parameter to assign the configuration to the internal identifier for the user. If this parameter is not included, the AWS configuration is assigned to the user who created it. |
workspaceId | integer When creating an AWS configuration, an administrator can insert the workspaceId parameter to assign the configuration to the internal identifier for the workspace. |
{
  "id": 1,
  "defaultBucket": "bucketName",
  "extraBuckets": [
    "bucket1"
  ],
  "credentialProvider": "default",
  "role": "arn:aws:iam::xxxxxxxxxxxxx:role/sample-role",
  "key": "string",
  "secret": "string",
  "personId": 1,
  "workspaceId": 1
}
{
  "defaultBucket": "bucketName",
  "extraBuckets": [
    "bucket1"
  ],
  "credentialProvider": "default",
  "role": "arn:aws:iam::xxxxxxxxxxxxx:role/sample-role",
  "credential": {
    "id": 1
  },
  "id": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "activeRoleId": 1
}
An object containing the AWS IAM Role ARN used to authenticate AWS resources when using role-based authentication. This object belongs to an awsConfig.
Create an AWS role. If neither personId nor workspaceId is provided, the role is created for the requesting user.
ℹ️ NOTE: Admin role is required to use this endpoint.
ref: createAwsRole
role required | string |
personId | integer |
workspaceId | integer When creating an AWS role, an administrator can insert the workspaceId parameter to assign the configuration to the internal identifier for the workspace. |
{
  "role": "string",
  "personId": 1,
  "workspaceId": 1
}
{
  "id": 1,
  "awsConfigId": 1,
  "role": "string",
  "createdFrom": "api",
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "deletedAt": "2019-08-24T14:15:22Z"
}
List AWS roles for a user or workspace. If neither personId nor workspaceId is provided, list the roles associated with the requesting user.
ref: listAwsRoles
personId | integer person id |
workspaceId | integer workspace id |
{
  "data": {
    "id": 1,
    "awsConfig": {
      "id": 1
    },
    "role": "string",
    "createdFrom": "api",
    "createdAt": "2019-08-24T14:15:22Z",
    "updatedAt": "2019-08-24T14:15:22Z",
    "deletedAt": "2019-08-24T14:15:22Z"
  }
}
Update an existing AWS role
ℹ️ NOTE: Admin role is required to use this endpoint.
ref: updateAwsRole
id required | integer |
personId | integer |
workspaceId | integer When creating an AWS role, an administrator can insert the workspaceId parameter to assign the configuration to the internal identifier for the workspace. |
role | string |
createdFrom | string Enum: "api" "idp" Indicates how the role was created. |
createdAt | string <date-time> The time this object was first created. |
updatedAt | string <date-time> The time this object was last updated. |
deletedAt | string <date-time> The time this object was deleted. |
{
  "personId": 1,
  "workspaceId": 1,
  "role": "string",
  "createdFrom": "api",
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "deletedAt": "2019-08-24T14:15:22Z"
}
{
  "id": 1,
  "awsConfigId": 1,
  "role": "string",
  "createdFrom": "api",
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "deletedAt": "2019-08-24T14:15:22Z"
}
Delete an existing AWS role
ref: deleteAwsRole
id required | integer |
An object representing Designer Cloud Powered by Trifacta's connection to an external data source. Connections can be used for import, publishing, or both, depending on the connection type.
Create a new connection
ref: createConnection
vendor required | string String identifying the connection's vendor |
vendorName required | string Name of the vendor of the connection |
type required | string Enum: "jdbc" "rest" "remotefile" Type of connection |
credentialType required | string Enum: "basic" "conf" "kerberosDelegate" "azureTokenSso" "kerberosImpersonation" "sshKey" "securityToken" "iamRoleArn" "iamDbUser" "oauth2" "keySecret" "apiKey" "awsKeySecret" "basicWithAppToken" "userWithApiToken" "basicApp" "transactionKey" "password" "apiKeyWithToken" "noAuth" "httpHeaderBasedAuth" "privateApp" "httpQueryBasedAuth" |
name required | string Display name of the connection. |
params required | object This setting is populated with any parameters that are passed to the source during connection and operations. For relational sources, this setting may include the default database and extra load parameters. |
advancedCredentialType | string |
sshTunneling | boolean When set to true, the connection is established through an SSH tunnel. |
ssl | boolean When set to true, the connection uses SSL. |
description | string User-friendly description for the connection. |
disableTypeInference | boolean If set to false, type inference has been disabled for this connection. The default is true. When type inference has been disabled, the Designer Cloud Powered by Trifacta Platform does not apply Designer Cloud Powered by Trifacta types to data when it is imported. |
isGlobal | boolean If set to true, the connection is public and available to all users. NOTE: After a connection has been made public, it cannot be made private again. It must be deleted and recreated. |
credentialsShared | boolean If set to true, the connection credentials are available for use by other users. |
host | string Host of the source |
port | integer Port number for the source |
bucket | string bucket name for the source |
oauth2StateId | string |
Array of basic (object) or conf (object) or kerberosDelegate (object) or azureTokenSso (object) or kerberosImpersonation (object) or sshKey (object) or securityToken (object) or iamRoleArn (object) or iamDbUser (object) or oauth2 (object) or keySecret (object) or apiKey (object) or awsKeySecret (object) or basicWithAppToken (object) or userWithApiToken (object) or basicApp (object) or transactionKey (object) or password (object) or privateApp (object) or apiKeyWithToken (object) or noAuth (object) or httpHeaderBasedAuth (object) or privateApp (object) or httpQueryBasedAuth (object) (credentialsInfo) [ items ] If present, these values are the credentials used to connect to the database. | |
Array of sshTunnelingBasic (object) (advancedCredentialsInfo) [ items ] If present, these values are the credentials used to connect to the database. | |
Array of objects (jdbcRestEndpointsInfo) [ items ] If present, these values are the REST endpoints info required for connection |
{
  "vendor": "oracle",
  "vendorName": "oracle",
  "type": "jdbc",
  "name": "example_oracle_connection",
  "description": "This is an oracle connection",
  "disableTypeInference": false,
  "isGlobal": false,
  "credentialsShared": false,
  "host": "my_oracle_host",
  "port": 1521,
  "params": {
    "service": "my_oracle_service"
  },
  "credentialType": "basic",
  "credentials": [
    {
      "username": "my_oracle_username",
      "password": "my_oracle_password"
    }
  ]
}
{
  "vendor": "oracle",
  "vendorName": "oracle",
  "type": "jdbc",
  "credentialType": "basic",
  "advancedCredentialType": "string",
  "sshTunneling": true,
  "ssl": true,
  "name": "example_oracle_connection",
  "description": "string",
  "disableTypeInference": true,
  "isGlobal": true,
  "credentialsShared": true,
  "host": "example.oracle.test",
  "port": 1521,
  "id": "55",
  "uuid": "f9cab740-50b7-11e9-ba15-93c82271a00b",
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "credentials": [
    {
      "username": "string",
      "password": "string"
    }
  ],
  "advancedCredentials": [
    {
      "sshTunnelingUsername": "string",
      "sshTunnelingPassword": "string"
    }
  ],
  "creator": {
    "id": 1
  },
  "updater": {
    "id": 1
  },
  "workspace": {
    "id": 1
  },
  "params": {
    "database": "dev"
  },
  "endpoints": [
    {
      "tableName": "table1",
      "httpMethod": "GET",
      "endpoint": "/capsules",
      "headers": {
        "Content-Type": "application/json"
      },
      "queryParams": {
        "q": "query-param-example"
      },
      "requestBody": "{\"key1\": \"value1\"}",
      "pagination": {
        "paginationType": "nextPageURL",
        "pageurlpath": "$./data/nextPage"
      },
      "xPath": "$.missions",
      "dataModel": "DOCUMENT"
    }
  ]
}
List existing connections
ref: listConnections
fields | string Example: fields=id;name;description Semi-colons-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with limit. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects. |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields and filterType. |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
sharedRole | string The type of shared role for which to list connections. |
{
  "data": [
    {
      "vendor": "oracle",
      "vendorName": "oracle",
      "type": "jdbc",
      "credentialType": "basic",
      "advancedCredentialType": "string",
      "sshTunneling": true,
      "ssl": true,
      "name": "example_oracle_connection",
      "description": "string",
      "disableTypeInference": true,
      "isGlobal": true,
      "credentialsShared": true,
      "host": "example.oracle.test",
      "port": 1521,
      "id": "55",
      "uuid": "f9cab740-50b7-11e9-ba15-93c82271a00b",
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "credentials": [
        {
          "username": "string",
          "password": "string"
        }
      ],
      "advancedCredentials": [
        {
          "sshTunnelingUsername": "string",
          "sshTunnelingPassword": "string"
        }
      ],
      "creator": {
        "id": 1
      },
      "updater": {
        "id": 1
      },
      "workspace": {
        "id": 1
      },
      "params": {
        "database": "dev"
      },
      "endpoints": [
        {
          "tableName": "table1",
          "httpMethod": "GET",
          "endpoint": "/capsules",
          "headers": {
            "Content-Type": "application/json"
          },
          "queryParams": {
            "q": "query-param-example"
          },
          "requestBody": "{\"key1\": \"value1\"}",
          "pagination": {
            "paginationType": "nextPageURL",
            "pageurlpath": "$./data/nextPage"
          },
          "xPath": "$.missions",
          "dataModel": "DOCUMENT"
        }
      ]
    }
  ],
  "count": 1
}
Performs a dry run of creating the connection, testing it, and then deleting it.
vendor required | string String identifying the connection's vendor |
vendorName required | string Name of the vendor of the connection |
type required | string Enum: "jdbc" "rest" "remotefile" Type of connection |
credentialType required | string Enum: "basic" "conf" "kerberosDelegate" "azureTokenSso" "kerberosImpersonation" "sshKey" "securityToken" "iamRoleArn" "iamDbUser" "oauth2" "keySecret" "apiKey" "awsKeySecret" "basicWithAppToken" "userWithApiToken" "basicApp" "transactionKey" "password" "apiKeyWithToken" "noAuth" "httpHeaderBasedAuth" "privateApp" "httpQueryBasedAuth" |
name required | string Display name of the connection. |
params required | object This setting is populated with any parameters that are passed to the source during connection and operations. For relational sources, this setting may include the default database and extra load parameters. |
advancedCredentialType | string |
sshTunneling | boolean When set to true, the connection is established through an SSH tunnel. |
ssl | boolean When set to true, the connection uses SSL. |
description | string User-friendly description for the connection. |
disableTypeInference | boolean If set to false, type inference has been disabled for this connection. The default is true. When type inference has been disabled, the Designer Cloud Powered by Trifacta Platform does not apply Designer Cloud Powered by Trifacta types to data when it is imported. |
isGlobal | boolean If set to true, the connection is public and available to all users. NOTE: After a connection has been made public, it cannot be made private again. It must be deleted and recreated. |
credentialsShared | boolean If set to true, the connection credentials are available for use by other users. |
host | string Host of the source |
port | integer Port number for the source |
bucket | string bucket name for the source |
oauth2StateId | string |
Array of basic (object) or conf (object) or kerberosDelegate (object) or azureTokenSso (object) or kerberosImpersonation (object) or sshKey (object) or securityToken (object) or iamRoleArn (object) or iamDbUser (object) or oauth2 (object) or keySecret (object) or apiKey (object) or awsKeySecret (object) or basicWithAppToken (object) or userWithApiToken (object) or basicApp (object) or transactionKey (object) or password (object) or privateApp (object) or apiKeyWithToken (object) or noAuth (object) or httpHeaderBasedAuth (object) or privateApp (object) or httpQueryBasedAuth (object) (credentialsInfo) [ items ] If present, these values are the credentials used to connect to the database. | |
Array of sshTunnelingBasic (object) (advancedCredentialsInfo) [ items ] If present, these values are the credentials used to connect to the database. | |
Array of objects (jdbcRestEndpointsInfo) [ items ] If present, these values are the REST endpoints info required for connection |
{
  "vendor": "oracle",
  "vendorName": "oracle",
  "type": "jdbc",
  "credentialType": "basic",
  "advancedCredentialType": "string",
  "sshTunneling": true,
  "ssl": true,
  "name": "example_oracle_connection",
  "description": "string",
  "disableTypeInference": true,
  "isGlobal": true,
  "credentialsShared": true,
  "host": "example.oracle.test",
  "port": 1521,
  "bucket": "3fac-testing",
  "params": {
    "database": "dev"
  },
  "oauth2StateId": "string",
  "credentials": [
    {
      "username": "string",
      "password": "string"
    }
  ],
  "advancedCredentials": [
    {
      "sshTunnelingUsername": "string",
      "sshTunnelingPassword": "string"
    }
  ],
  "endpoints": [
    {
      "tableName": "table1",
      "httpMethod": "GET",
      "endpoint": "/capsules",
      "headers": {
        "Content-Type": "application/json"
      },
      "queryParams": {
        "q": "query-param-example"
      },
      "requestBody": "{\"key1\": \"value1\"}",
      "pagination": {
        "paginationType": "nextPageURL",
        "pageurlpath": "$./data/nextPage"
      },
      "xPath": "$.missions",
      "dataModel": "DOCUMENT"
    }
  ]
}
{
  "result": "string"
}
Count existing connections
ref: countConnections
fields | string Example: fields=id;name;description Semi-colons-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with limit. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects. |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields and filterType. |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
sharedRole | string The type of shared role for which to count connections. |
{
  "count": 1
}
Get an existing connection
ref: getConnection
id required | integer |
fields | string Example: fields=id;name;description Semi-colons-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
{
  "vendor": "oracle",
  "vendorName": "oracle",
  "type": "jdbc",
  "credentialType": "basic",
  "advancedCredentialType": "string",
  "sshTunneling": true,
  "ssl": true,
  "name": "example_oracle_connection",
  "description": "string",
  "disableTypeInference": true,
  "isGlobal": true,
  "credentialsShared": true,
  "host": "example.oracle.test",
  "port": 1521,
  "id": "55",
  "uuid": "f9cab740-50b7-11e9-ba15-93c82271a00b",
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "credentials": [
    {
      "username": "string",
      "password": "string"
    }
  ],
  "advancedCredentials": [
    {
      "sshTunnelingUsername": "string",
      "sshTunnelingPassword": "string"
    }
  ],
  "creator": {
    "id": 1
  },
  "updater": {
    "id": 1
  },
  "workspace": {
    "id": 1
  },
  "params": {
    "database": "dev"
  },
  "endpoints": [
    {
      "tableName": "table1",
      "httpMethod": "GET",
      "endpoint": "/capsules",
      "headers": {
        "Content-Type": "application/json"
      },
      "queryParams": {
        "q": "query-param-example"
      },
      "requestBody": "{\"key1\": \"value1\"}",
      "pagination": {
        "paginationType": "nextPageURL",
        "pageurlpath": "$./data/nextPage"
      },
      "xPath": "$.missions",
      "dataModel": "DOCUMENT"
    }
  ]
}
Update an existing connection
ref: updateConnection
id required | integer |
host | string Host of the source |
port | integer Port number for the source |
ssl | boolean When set to true, the connection uses SSL |
description | string User-friendly description for the connection. |
disableTypeInference | boolean If set to true, type inference is disabled for this connection. By default, type inference is enabled. When type inference is disabled, the Designer Cloud Powered by Trifacta Platform does not apply Designer Cloud Powered by Trifacta types to data when it is imported. |
name | string Display name of the connection. |
params | object This setting is populated with any parameters that are passed to the source during connection and operations. For relational sources, this setting may include the default database and extra load parameters. |
isGlobal | boolean If set to true, the connection is public and available to other users. NOTE: After a connection has been made public, it cannot be made private again. It must be deleted and recreated. |
credentialsShared | boolean If set to true, the connection credentials are shared with other users. |
Array of basic (object) or conf (object) or kerberosDelegate (object) or azureTokenSso (object) or kerberosImpersonation (object) or sshKey (object) or securityToken (object) or iamRoleArn (object) or iamDbUser (object) or oauth2 (object) or keySecret (object) or apiKey (object) or awsKeySecret (object) or basicWithAppToken (object) or userWithApiToken (object) or basicApp (object) or transactionKey (object) or password (object) or privateApp (object) or apiKeyWithToken (object) or noAuth (object) or httpHeaderBasedAuth (object) or privateApp (object) or httpQueryBasedAuth (object) (credentialsInfo) [ items ] If present, these values are the credentials used to connect to the database. | |
Array of sshTunnelingBasic (object) (advancedCredentialsInfo) [ items ] If present, these values are the credentials used to connect to the database. | |
sshTunneling | boolean When set to true, the connection uses SSH tunneling |
credentialType | string Enum: "basic" "conf" "kerberosDelegate" "azureTokenSso" "kerberosImpersonation" "sshKey" "securityToken" "iamRoleArn" "iamDbUser" "oauth2" "keySecret" "apiKey" "awsKeySecret" "basicWithAppToken" "userWithApiToken" "basicApp" "transactionKey" "password" "apiKeyWithToken" "noAuth" "httpHeaderBasedAuth" "privateApp" "httpQueryBasedAuth"
|
advancedCredentialType | string |
oauth2StateId | string |
vendor | string String identifying the connection's vendor |
bucket | string Bucket name for the source |
Array of objects (jdbcRestEndpointsInfo) [ items ] If present, these values are the REST endpoint information required for the connection
{- "host": "example.oracle.test",
- "port": 1521,
- "ssl": true,
- "description": "string",
- "disableTypeInference": true,
- "name": "example_oracle_connection",
- "params": {
- "database": "dev"
}, - "isGlobal": true,
- "credentialsShared": true,
- "credentials": [
- {
- "username": "string",
- "password": "string"
}
], - "advancedCredentials": [
- {
- "sshTunnelingUsername": "string",
- "sshTunnelingPassword": "string"
}
], - "sshTunneling": true,
- "credentialType": "basic",
- "advancedCredentialType": "string",
- "oauth2StateId": "string",
- "vendor": "oracle",
- "bucket": "3fac-testing",
- "endpoints": [
- {
- "tableName": "table1",
- "httpMethod": "GET",
- "endpoint": "/capsules",
- "headers": {
- "Content-Type": "application/json"
}, - "queryParams": {
- "q": "query-param-example"
}, - "requestBody": "{\"key1\": \"value1\"}",
- "pagination": {
- "paginationType": "nextPageURL",
- "pageurlpath": "$./data/nextPage"
}, - "xPath": "$.missions",
- "dataModel": "DOCUMENT"
}
]
}
{- "id": 1,
- "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Delete an existing connection
ref: deleteConnection
id required | integer |
Get the connection status
ref: getConnectionStatus
id required | integer |
{- "result": "string"
}
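The status response wraps a single result string; a minimal sketch of unpacking it in Python (the response value shown is hypothetical):

```python
import json

def parse_connection_status(response_body: str) -> str:
    """Extract the 'result' field from a getConnectionStatus response."""
    return json.loads(response_body)["result"]

# Hypothetical response body; the actual message text depends on the connection.
status = parse_connection_status('{"result": "Connection test successful"}')
print(status)
```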
An internal object representing the relationship between a connection and any person objects with which it is shared.
Create a new connection permission
id required | integer |
required | Array of personObjectWithRole (object) or personIdWithRole (object)[ items ] |
{- "data": [
- {
- "person": {
- "id": 1
}, - "role": "owner",
- "policy": "string"
}
]
}
{- "data": [
- {
- "role": "owner",
- "person": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
], - "count": 1
}
Get existing connection permissions
id required | integer |
{- "data": [
- {
- "name": "string",
- "email": "string",
- "id": 1,
- "connectionPermission": {
- "role": "owner",
- "person": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}, - "isCreatedBy": true
}
], - "count": 1
}
Get an existing connection permission
id required | integer |
aid required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. |
{- "role": "owner",
- "person": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Metadata that controls the behavior of JDBC connectors.
The terminology in the connectivity API is as follows:
The default configuration of each connector has been tuned for optimal performance and standardized type mapping behavior. If you require connector behavior changes, you can leverage the following APIs.
The specified overrides are merged into the current set of overrides for the connector. A new entry is created if no overrides currently exist.
The connector metadata stores a mapping for each Trifacta type to an official JDBC type and a database native type. When Trifacta publishes to a new table, it uses the first type specified in the vendorTypeList. The rest of the types are used when validating the publish action during design time.
As an example, let's override the type mapping behavior for the Postgres connector. By default it publishes Trifacta integers to bigint, but we can make it publish to int instead. Make a GET request to /v4/connectormetadata/postgres to get the current behavior. Locate the section called publishTypeMap and identify the element in the list where trifactaType is INTEGER. We can see that the first element under the corresponding vendorTypeList is bigint.

Since we want Postgres to write to int when creating integer columns in a new table, move that value to the beginning of the vendorTypeList. Send a POST request to /v4/connectormetadata/postgres/overrides with the following body:
ℹ️ NOTE: Overriding the jdbcType is not supported behavior. Please use the same value from the default.
{
"publishMetadata": {
"publishTypeMap": [
{
"vendorTypeList": [
"int",
"bigint",
"int2",
"int4",
"int8",
"smallint",
"serial",
"bigserial",
"text",
"varchar",
"bpchar",
"char",
"character varying",
"character"
],
"jdbcType": 4,
"trifactaType": "INTEGER"
}
]
}
}
Rerun the GET request to ensure the values are reflected.
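The reordering step above can be sketched in Python. The function below takes a publishTypeMap from a GET response, moves the preferred vendor type to the front for a given Trifacta type, and returns a body suitable for the overrides POST. The HTTP calls themselves are omitted, and the sample data is a shortened, hypothetical version of the Postgres metadata:

```python
import copy

def override_vendor_type(publish_type_map, trifacta_type, preferred):
    """Return an overrides body with `preferred` moved to the front of
    vendorTypeList for the entry matching `trifacta_type`.
    The jdbcType is left unchanged, per the note above."""
    result = copy.deepcopy(publish_type_map)
    for entry in result:
        if entry["trifactaType"] == trifacta_type:
            types = entry["vendorTypeList"]
            types.remove(preferred)   # assumes `preferred` is already in the list
            types.insert(0, preferred)
    return {"publishMetadata": {"publishTypeMap": result}}

# Shortened, hypothetical publishTypeMap as returned by the GET request.
current = [{
    "vendorTypeList": ["bigint", "int", "int2"],
    "jdbcType": 4,
    "trifactaType": "INTEGER",
}]
body = override_vendor_type(current, "INTEGER", "int")
print(body["publishMetadata"]["publishTypeMap"][0]["vendorTypeList"])
# -> ['int', 'bigint', 'int2']
```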
The default performance configurations have been tuned to work well with the majority of systems. There are a few parameters that can be tuned if needed:

numberOfConnections: Number of connections that are used to write data in parallel.
batchSize: Number of records written in each database batch.

{
"publishMetadata": {
"performanceParams": {
"batchSize": 10000,
"numberOfConnections": 5
}
}
}
The default performance configurations have been tuned to work well with the majority of systems. One parameter that can be tuned is the database fetchSize. By default it is set to -1, which uses the default specified by the database driver. The following request body can override this value:
{
"runtimeMetadata": {
"importPerformance": {"fetchSize": 1000}
}
}
connector required | string |
object | |
object |
{- "publishMetadata": {
- "publishTypeMap": [
- {
- "vendorTypeList": [
- "int",
- "bigint",
- "int2",
- "int4",
- "int8",
- "smallint",
- "serial",
- "bigserial",
- "text",
- "varchar",
- "bpchar",
- "char",
- "character varying",
- "character"
], - "jdbcType": 4,
- "trifactaType": "INTEGER"
}
]
}
}
Get the metadata overrides for a connector in a given workspace. These overrides are applied to the base configuration for connectivity operations.
connector required | string |
{- "connectionMetadata": {
- "name": "string",
- "displayName": "string",
- "type": "string",
- "category": "relational",
- "status": "supported",
- "credentialTypes": [
- "basic"
], - "operation": "import",
- "connectionParameters": [
- {
- "name": "string",
- "displayName": "string",
- "type": "string",
- "required": true,
- "category": "string",
- "defaultValue": "string"
}
]
}, - "runtimeMetadata": {
- "defaultTypeTreatment": "WHITELIST",
- "typeMap": [
- {
- "vendorType": "string",
- "jdbcType": 1,
- "accessorClass": "string",
- "trifactaType": "Array",
- "classification": "WHITELIST"
}
], - "metadataAccessors": { },
- "pathMetadata": {
- "qualifiedPath": "CATALOG"
}, - "limit": {
- "table": "string",
- "query": "string"
}, - "errorHandlers": { },
- "importPerformance": {
- "fetchSize": 1,
- "disableAutoCommit": true,
- "schemaLimit": 1,
- "ormEnabled": true,
- "unload": {
- "stream": true,
- "cli": {
- "script": "string",
- "format": "string",
- "timeout": 1
}
}
}
}, - "publishMetadata": {
- "publishMethod": "direct",
- "publishTypeMap": [
- {
- "jdbcType": 1,
- "trifactaType": "string",
- "defaultValue": "string",
- "vendorTypeList": [
- "string"
]
}
], - "publishValidation": {
- "enabled": true,
- "maxTableNameLength": 1,
- "maxColumnNameLength": 1,
- "validTableNameRegex": "string",
- "validColNameRegex": "string"
}, - "publishQueries": {
- "createTable": "string",
- "createTempTable": "string",
- "copyTable": "string",
- "dropTable": "string",
- "insertTable": "string",
- "truncateTable": "string",
- "addColumn": "string"
}, - "performanceParams": {
- "batchProcessingEnabled": true,
- "batchLoggingEnabled": true,
- "batchSize": 1,
- "numberOfConnections": 1,
- "commitFrequency": 1,
- "queueSize": 1,
- "maxOfferToQueueRetryCount": 1,
- "maxPollFromQueueRetryCount": 1
}, - "publishInfo": {
- "qualifyingPath": [
- "string"
], - "supportedActions": [
- "create"
], - "supportedProtocols": [
- "string"
], - "externalFileFormats": [
- "pqt"
]
}
}
}
Get the consolidated metadata for a connector in a given workspace. This metadata is used to define connectivity, ingestion, and publishing for the connector.
ref: getConnectorConfig
connector required | string |
{- "connectionMetadata": {
- "name": "string",
- "displayName": "string",
- "type": "string",
- "category": "relational",
- "status": "supported",
- "credentialTypes": [
- "basic"
], - "operation": "import",
- "connectionParameters": [
- {
- "name": "string",
- "displayName": "string",
- "type": "string",
- "required": true,
- "category": "string",
- "defaultValue": "string"
}
]
}, - "runtimeMetadata": {
- "defaultTypeTreatment": "WHITELIST",
- "typeMap": [
- {
- "vendorType": "string",
- "jdbcType": 1,
- "accessorClass": "string",
- "trifactaType": "Array",
- "classification": "WHITELIST"
}
], - "metadataAccessors": { },
- "pathMetadata": {
- "qualifiedPath": "CATALOG"
}, - "limit": {
- "table": "string",
- "query": "string"
}, - "errorHandlers": { },
- "importPerformance": {
- "fetchSize": 1,
- "disableAutoCommit": true,
- "schemaLimit": 1,
- "ormEnabled": true,
- "unload": {
- "stream": true,
- "cli": {
- "script": "string",
- "format": "string",
- "timeout": 1
}
}
}
}, - "publishMetadata": {
- "publishMethod": "direct",
- "publishTypeMap": [
- {
- "jdbcType": 1,
- "trifactaType": "string",
- "defaultValue": "string",
- "vendorTypeList": [
- "string"
]
}
], - "publishValidation": {
- "enabled": true,
- "maxTableNameLength": 1,
- "maxColumnNameLength": 1,
- "validTableNameRegex": "string",
- "validColNameRegex": "string"
}, - "publishQueries": {
- "createTable": "string",
- "createTempTable": "string",
- "copyTable": "string",
- "dropTable": "string",
- "insertTable": "string",
- "truncateTable": "string",
- "addColumn": "string"
}, - "performanceParams": {
- "batchProcessingEnabled": true,
- "batchLoggingEnabled": true,
- "batchSize": 1,
- "numberOfConnections": 1,
- "commitFrequency": 1,
- "queueSize": 1,
- "maxOfferToQueueRetryCount": 1,
- "maxPollFromQueueRetryCount": 1
}, - "publishInfo": {
- "qualifyingPath": [
- "string"
], - "supportedActions": [
- "create"
], - "supportedProtocols": [
- "string"
], - "externalFileFormats": [
- "pqt"
]
}
}
}
Get the default metadata for a connector without applying custom overrides. This metadata is used to define connectivity, ingestion, and publishing for the connector.
ref: getConnectorDefaults
connector required | string |
{- "connectionMetadata": {
- "name": "string",
- "displayName": "string",
- "type": "string",
- "category": "relational",
- "status": "supported",
- "credentialTypes": [
- "basic"
], - "operation": "import",
- "connectionParameters": [
- {
- "name": "string",
- "displayName": "string",
- "type": "string",
- "required": true,
- "category": "string",
- "defaultValue": "string"
}
]
}, - "runtimeMetadata": {
- "defaultTypeTreatment": "WHITELIST",
- "typeMap": [
- {
- "vendorType": "string",
- "jdbcType": 1,
- "accessorClass": "string",
- "trifactaType": "Array",
- "classification": "WHITELIST"
}
], - "metadataAccessors": { },
- "pathMetadata": {
- "qualifiedPath": "CATALOG"
}, - "limit": {
- "table": "string",
- "query": "string"
}, - "errorHandlers": { },
- "importPerformance": {
- "fetchSize": 1,
- "disableAutoCommit": true,
- "schemaLimit": 1,
- "ormEnabled": true,
- "unload": {
- "stream": true,
- "cli": {
- "script": "string",
- "format": "string",
- "timeout": 1
}
}
}
}, - "publishMetadata": {
- "publishMethod": "direct",
- "publishTypeMap": [
- {
- "jdbcType": 1,
- "trifactaType": "string",
- "defaultValue": "string",
- "vendorTypeList": [
- "string"
]
}
], - "publishValidation": {
- "enabled": true,
- "maxTableNameLength": 1,
- "maxColumnNameLength": 1,
- "validTableNameRegex": "string",
- "validColNameRegex": "string"
}, - "publishQueries": {
- "createTable": "string",
- "createTempTable": "string",
- "copyTable": "string",
- "dropTable": "string",
- "insertTable": "string",
- "truncateTable": "string",
- "addColumn": "string"
}, - "performanceParams": {
- "batchProcessingEnabled": true,
- "batchLoggingEnabled": true,
- "batchSize": 1,
- "numberOfConnections": 1,
- "commitFrequency": 1,
- "queueSize": 1,
- "maxOfferToQueueRetryCount": 1,
- "maxPollFromQueueRetryCount": 1
}, - "publishInfo": {
- "qualifyingPath": [
- "string"
], - "supportedActions": [
- "create"
], - "supportedProtocols": [
- "string"
], - "externalFileFormats": [
- "pqt"
]
}
}
}
Get publishing-related information for a connector
ref: getPublishInfo
connector required | string |
{- "qualifyingPath": [
- "string"
], - "supportedActions": [
- "create"
], - "supportedProtocols": [
- "string"
], - "externalFileFormats": [
- "pqt"
]
}
An internal object representing the relationship between a person and an Azure Databricks cluster.
Update Databricks access token for current user.
databricksAccessToken required | string |
{- "databricksAccessToken": "string"
}
{- "message": "string"
}
An admin can update the Databricks access token for the user with id=personId.
ℹ️ NOTE: Admin role is required to use this endpoint.
personId required | integer |
databricksAccessToken required | string |
{- "personId": 1,
- "databricksAccessToken": "string"
}
{- "message": "string"
}
Update Databricks cluster id for current user.
databricksClusterId required | string |
{- "databricksClusterId": "string"
}
{- "databricksClusterId": "string",
- "personId": 1,
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
An admin can update the Databricks cluster for the user with id=personId.
ℹ️ NOTE: Admin role is required to use this endpoint.
databricksClusterId required | string |
personId required | integer |
{- "databricksClusterId": "string",
- "personId": 1
}
{- "databricksClusterId": "string",
- "personId": 1,
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Save Databricks table cluster name for current user.
databricksTableClusterName required | string |
{- "databricksTableClusterName": "string"
}
{- "databricksTableClusterName": "string",
- "personId": 1
}
A versioned set of releases.
A deployment allows you to create a separation between your development and production environments. For example, you can develop flows in a development instance and then import them into a deployment instance, where they are read-only.
You can override file paths or tables when importing flow packages to a deployment instance using updateObjectImportRules and updateValueImportRules.
The Deployment Manager includes the tools to migrate your software between environments, manage releases of it, and separately control access to development and production flows. See the documentation for more details.
Create a new deployment
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
ref: createDeployment
name required | string Display name of the deployment. |
{- "name": "Test Deployment"
}
{- "name": "Test Deployment",
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
List all deployments, including information about the latest release in each deployment.
You can get all releases for a deployment by using embed:

/v4/deployments/{id}?embed=releases
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
ref: listDeployments
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with limit. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against |
filter | string Example: filter=my-object Value for filtering objects. |
includeCount | boolean If true, the total number of objects is included as a count property in the response |
{- "data": [
- {
- "name": "Test Deployment",
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "numReleases": 1,
- "latestRelease": {
- "notes": "string",
- "packageUuid": "f9cab740-50b7-11e9-ba15-93c82271a00b",
- "active": true,
- "deployment": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
}
], - "count": 1
}
Run the primary flow in the active release of the given deployment.
The request body can be empty. You can optionally pass parameters:
{
"runParameters": {
"overrides": {
"data": [{"key": "varRegion", "value": "02"}]
}
}
}
You can also pass Spark options to be used for the job run.
{
"sparkOptions": [
{"key": "spark.executor.memory", "value": "4GB"}
]
}
You can also override each output in the flow using the recipe name.
{
"overrides": {
"my recipe name": {
"profiler": true,
"writesettings": [
{
"path": "<path_to_output_file>",
"action": "create",
"format": "csv",
"compression": "none",
"header": false,
"asSingleFile": false
}
]
}
}
}
An array of jobGroup results is returned. Use the flowRunId if you want to track the status of the deployment run. See Get Flow Run Status for more information.
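A sketch of pulling the flowRun ids out of the jobGroup results, using the response shape documented in this section (the sample response is shortened and hypothetical):

```python
def flow_run_ids(run_response: dict) -> list:
    """Collect the flowRun id from each jobGroup result."""
    return [item["flowRun"]["id"] for item in run_response["data"]]

# Shortened, hypothetical runDeployment response.
response = {"data": [{"id": 1, "flowRun": {"id": 42}}], "count": 1}
print(flow_run_ids(response))  # -> [42]
```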
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
ref: runDeployment
id required | integer |
x-execution-id | string Example: f9cab740-50b7-11e9-ba15-93c82271a00b Optional header to safely retry the request without accidentally performing the same operation twice. If a FlowRun with the same |
object (runParameterOverrides) Allows overriding parameters that are defined in the flow on datasets or outputs. |
Array of objects (outputObjectSparkOptionUpdateRequest) [ items ] | |
overrides | object Overrides for each of the output objects. Use the recipe name to specify the overrides. |
{ }
{- "data": [
- {
- "id": 1,
- "flowRun": {
- "id": 1
}, - "jobs": {
- "data": [
- {
- "id": 1
}
]
}, - "jobGraph": {
- "edges": [
- {
- "source": 1,
- "target": 1
}
], - "vertices": [
- 1
]
}, - "reason": "Job started",
- "sessionId": "f9cab740-50b7-11e9-ba15-93c82271a00b"
}
], - "count": 1
}
Create a release for the specified deployment.
The release is created from a local ZIP file containing the package of the flow exported from the source system.
When importing a release, import-mapping rules are executed. These import rules allow you to replace the file location or the table names of different objects during the import for a deployment. See updateObjectImportRules and updateValueImportRules if you need to update the import rules.
This endpoint accepts a multipart/form-data content type.

Here is how to send the ZIP package using curl:
curl -X POST http://example.com:3005/v4/deployments/:id/releases \
-H 'authorization: Bearer <api-token>' \
-H 'content-type: multipart/form-data' \
-F 'data=@path/to/flow-package.zip'
The response lists the objects that have been created.
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
id required | integer |
folderId | integer |
An exported flow zip file.
{- "deletedObjects": { },
- "createdObjectMapping": { },
- "importRuleChanges": {
- "object": [
- { }
], - "value": [
- { }
]
}, - "primaryFlowIds": [
- 1
], - "flows": [
- {
- "name": "string",
- "description": "string",
- "folder": {
- "id": 1
}, - "id": 1,
- "defaultOutputDir": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "workspace": {
- "id": 1
}, - "flowState": {
- "isOpened": true,
- "flow": {
- "id": 1
}, - "person": {
- "id": 1
}, - "zoom": 0,
- "offsetX": 0,
- "offsetY": 0
}
}
], - "datasources": [
- {
- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
], - "flownodes": [
- {
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "recipe": {
- "id": 1
}, - "activeSample": {
- "id": 1
}, - "wrangled": true
}
], - "flowedges": [
- {
- "inPortId": 1,
- "outPortId": 1,
- "inputFlowNode": {
- "id": 1
}, - "outputFlowNode": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
], - "recipes": [
- {
- "name": "string",
- "description": "string",
- "active": true,
- "nextPortId": 1,
- "currentEdit": {
- "id": 1
}, - "redoLeafEdit": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
], - "outputobjects": [
- {
- "execution": "photon",
- "profiler": true,
- "isAdhoc": true,
- "flownode": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "name": "string",
- "description": "string"
}
], - "webhookflowtasks": [
- { }
], - "release": { }
}
Get the list of releases for the specified deployment
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. |
{- "data": [
- {
- "notes": "string",
- "packageUuid": "f9cab740-50b7-11e9-ba15-93c82271a00b",
- "active": true,
- "deployment": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
], - "count": 1
}
Test importing a flow package: apply all import rules defined for this deployment and return information about which objects would be created.
The same payload as for Import Deployment package is expected.
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
id required | integer |
folderId | integer |
An exported flow zip file.
{- "deletedObjects": { },
- "createdObjectMapping": { },
- "importRuleChanges": {
- "object": [
- { }
], - "value": [
- { }
]
}, - "primaryFlowIds": [
- 1
], - "flows": [
- {
- "name": "string",
- "description": "string",
- "folder": {
- "id": 1
}, - "id": 1,
- "defaultOutputDir": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "workspace": {
- "id": 1
}, - "flowState": {
- "isOpened": true,
- "flow": {
- "id": 1
}, - "person": {
- "id": 1
}, - "zoom": 0,
- "offsetX": 0,
- "offsetY": 0
}
}
], - "datasources": [
- {
- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
], - "flownodes": [
- {
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "recipe": {
- "id": 1
}, - "activeSample": {
- "id": 1
}, - "wrangled": true
}
], - "flowedges": [
- {
- "inPortId": 1,
- "outPortId": 1,
- "inputFlowNode": {
- "id": 1
}, - "outputFlowNode": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
], - "recipes": [
- {
- "name": "string",
- "description": "string",
- "active": true,
- "nextPortId": 1,
- "currentEdit": {
- "id": 1
}, - "redoLeafEdit": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
], - "outputobjects": [
- {
- "execution": "photon",
- "profiler": true,
- "isAdhoc": true,
- "flownode": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "name": "string",
- "description": "string"
}
], - "webhookflowtasks": [
- { }
], - "release": { }
}
Count existing deployments
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
ref: countDeployments
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with limit. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against |
filter | string Example: filter=my-object Value for filtering objects. |
includeCount | boolean If true, the total number of objects is included as a count property in the response |
{- "count": 1
}
Get the specified deployment.
You can get all releases for a deployment by using embed:

/v4/deployments/:id?embed=releases
You can also get the value and object import rules using:
/v4/deployments/:id?embed=valueImportRules,objectImportRules
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
ref: getDeployment
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. |
{- "id": 1,
- "name": "2013 POS",
- "createdAt": "2019-03-27T17:45:14.837Z",
- "updatedAt": "2019-03-27T17:45:14.837Z",
- "releases": {
- "data": [
- {
- "id": 1,
- "notes": "v01",
- "packageUuid": "f9cab740-50b7-11e9-ba15-93c82271a00b",
- "active": null,
- "createdAt": "2019-03-27T17:45:48.345Z",
- "updatedAt": "2019-03-27T17:46:24.675Z",
- "deployment": {
- "id": 1
}, - "creator": {
- "id": 2
}, - "updater": {
- "id": 2
}
}, - {
- "id": 2,
- "notes": "v02",
- "packageUuid": "ff8738c0-50b7-11e9-ba15-93c82271a00b",
- "active": true,
- "createdAt": "2019-03-27T17:46:24.671Z",
- "updatedAt": "2019-03-27T17:46:24.671Z",
- "deployment": {
- "id": 1
}, - "creator": {
- "id": 2
}, - "updater": {
- "id": 2
}
}
]
}, - "creator": {
- "id": 2
}, - "updater": {
- "id": 2
}
}
Update an existing deployment
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
ref: updateDeployment
id required | integer |
name | string Display name of the deployment. |
{- "name": "Test Deployment"
}
{- "id": 1,
- "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Patch an existing deployment
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
ref: patchDeployment
id required | integer |
name | string Display name of the deployment. |
{- "name": "Test Deployment"
}
{- "id": 1,
- "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Delete the specified deployment.
⚠️ Deleting a deployment removes all releases, packages, and flows underneath it. This step cannot be undone.
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
ref: deleteDeployment
id required | integer |
Get active outputs of the specified deployment. When the deployment is run, the listed outputs are generated.
This endpoint is useful if you only want to run a specific job in a deployment, or pass overrides.
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
id required | integer |
{- "data": [
- {
- "outputObjectId": 1,
- "flowNodeId": 1,
- "recipeName": "string"
}
], - "count": 1
}
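For instance, to run only one of a deployment's outputs, you could pick its flow node out of this response by recipe name. A minimal sketch (the helper name and values are invented for illustration; the response shape follows the example above):

```python
def flow_node_for_recipe(outputs_response, recipe_name):
    """Return the flowNodeId for a named recipe from the active-outputs response."""
    for entry in outputs_response["data"]:
        if entry["recipeName"] == recipe_name:
            return entry["flowNodeId"]
    return None

# Using a response shaped like the example above:
resp = {"data": [{"outputObjectId": 1, "flowNodeId": 7, "recipeName": "clean POS"}],
        "count": 1}
flow_node_for_recipe(resp, "clean POS")  # the matching flowNodeId
```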
Create a list of object-based import rules for the specified deployment. Any previous rules applied to the same objects are deleted.
ℹ️ NOTE: Import rules must be applied to individual deployments.
The generated rules apply to all flows that are imported into the deployment after the rules have been created.
The response contains any previously created rules that were deleted as a result of this change.
You can also make replacements in the import package based on value mappings. See updateValueImportRules.
The following JSON array describes replacing the connection specified by the UUID, which is a field on the connection object exported from the original platform instance. This connection reference is replaced by a reference to connection ID 1 in the local platform instance and is applied to any release uploaded into the deployment after the rule has been created:
[
{
"tableName": "connections",
"onCondition": {
"uuid": "d75255f0-a245-11e7-8618-adc1dbb4bed0"
},
"withCondition": {"id": 1}
}
]
This example request includes replacements for multiple connection references.
ℹ️ NOTE: Rules are applied in the listed order. If you are applying multiple rules to the same object in the import package, the second rule must reference the expected changes applied by the first rule.
This type of replacement applies if the imported packages contain sources that are imported through two separate connections:
[
{
"tableName": "connections",
"onCondition": {
"uuid": "d75255f0-a245-11e7-8618-adc1dbb4bed0"
},
"withCondition": {"id": 1}
},
{
"tableName": "connections",
"onCondition": {
"uuid": "d552045e0-c314-22b5-9410-acd1bcd8eea2"
},
"withCondition": {"id": 2}
}
]
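The ordering semantics can be illustrated with a small sketch. This is a simplification for illustration only (the helper name is invented and the platform's actual import logic is internal): each rule is applied in listed order, and a matching row's identifying fields are replaced by the withCondition identifier.

```python
def apply_object_rules(package, rules):
    """Illustrative only: apply object-based import rules in listed order.

    `package` maps table names (e.g. "connections") to lists of row dicts.
    A row matches a rule when every onCondition field is equal; the matched
    fields are then replaced by the withCondition identifier.
    """
    for rule in rules:
        for row in package.get(rule["tableName"], []):
            if all(row.get(k) == v for k, v in rule["onCondition"].items()):
                for k in rule["onCondition"]:
                    row.pop(k, None)
                row.update(rule["withCondition"])
    return package

package = {"connections": [
    {"uuid": "d75255f0-a245-11e7-8618-adc1dbb4bed0"},
    {"uuid": "d552045e0-c314-22b5-9410-acd1bcd8eea2"},
]}
rules = [
    {"tableName": "connections",
     "onCondition": {"uuid": "d75255f0-a245-11e7-8618-adc1dbb4bed0"},
     "withCondition": {"id": 1}},
    {"tableName": "connections",
     "onCondition": {"uuid": "d552045e0-c314-22b5-9410-acd1bcd8eea2"},
     "withCondition": {"id": 2}},
]
apply_object_rules(package, rules)
# the connection references now point at local ids 1 and 2
```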
The response body contains any previously created rules that have been deleted as a result of this update.
If the update does not overwrite any previous rules, then no rules are deleted. So, the response looks like the following:
{"deleted": {"data": []}}
If you submit the request again, the response contains the rule definition of the previous update, which has been deleted. This example applies to the one-rule change listed previously:
{
"deleted": {
"data": [
{
"onCondition": {
"uuid": "d75255f0-a245-11e7-8618-adc1dbb4bed0"
},
"withCondition": {"id": 1},
"id": 1,
"tableName": "connections",
"createdAt": "2019-02-13T23:07:51.720Z",
"updatedAt": "2019-02-13T23:07:51.720Z",
"creator": {"id": 7},
"updater": {"id": 7},
"deployment": {"id": 4}
}
]
}
}
ℹ️ NOTE: You can get the value and object import rules using:
/v4/deployments/:id?embed=valueImportRules,objectImportRules
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
id required | integer |
tableName required | string Name of the table to which the mapping is applied. |
onCondition required | object The matching object identifier and the specified literal or pattern to match. |
withCondition required | object The identifier of the local object to use as the replacement. |
[
  {
    "tableName": "connections",
    "onCondition": { "uuid": "d75255f0-a245-11e7-8618-adc1dbb4bed0" },
    "withCondition": { "id": 1 }
  }
]
{- "deleted": {
- "data": [ ]
}
}
Create a list of value-based import rules for the specified deployment. Any previous rules applied to the same values are deleted.
ℹ️ NOTE: Import rules must be applied to individual deployments.
The generated rules apply to all flows that are imported into the Production instance after the rules have been created.
The response contains any previously created rules that were deleted as a result of this change.
You can also make replacements in the import package based on object references. See updateObjectImportRules.
The following JSON array describes a single replacement rule for the S3 bucket name. In this case, the wrangle-dev bucket name has been replaced by the wrangle-prod bucket name, which means data is pulled in the Production deployment from the appropriate S3 bucket.
ℹ️ NOTE: The executing user of any job must have access to any data source that is remapped in the new instance.
[
{
"type": "s3Bucket",
"on": "wrangle-dev",
"with": "wrangle-prod"
}
]
The following JSON array describes two replacements for the fileLocation values. In this case, rules are applied in succession.
ℹ️ NOTE: Rules are applied in the listed order. If you are applying multiple rules to the same object in the import package, the second rule must reference the expected changes applied by the first rule.
[
{
"type": "fileLocation",
"on": "klamath",
"with": "klondike"
},
{
"type": "fileLocation",
"on": "//dev//",
"with": "/prod/"
}
]
In the above:
The first rule replaces the string literal klamath in the path to the source with the value klondike.
The second rule performs a regular expression match on the string /dev/. Since the match is described using regular expression syntax, the backslashes must be escaped. The replacement value is the literal /prod/.
Match Type | Example Syntax |
---|---|
string literal | {"on":"d75255f0-a245-11e7-8618-adc1dbb4bed0"} |
regular expression | {"on":"/[0-9a-zA-Z]{8}-a245-11e7-8618-adc1dbb4bed0/"} |
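A sketch of how the two match types differ (the helper name is invented; it simply mirrors the slash-delimited convention in the table above):

```python
import re

def value_rule_matches(on, value):
    """Treat an `on` value wrapped in slashes as a regular expression;
    anything else is compared as a string literal. Illustration only."""
    if len(on) >= 2 and on.startswith("/") and on.endswith("/"):
        return re.search(on[1:-1], value) is not None
    return on == value

value_rule_matches("wrangle-dev", "wrangle-dev")        # literal match
value_rule_matches(r"/\/dev\//", "s3://bucket/dev/x")   # regex match on /dev/
```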
This example request includes replacements for a database table and its path (database name) in a relational publication.
ℹ️ NOTE: Rules are applied in the listed order. If you are applying multiple rules to the same object in the import package, the second rule must reference the expected changes applied by the first rule.
This type of replacement applies if the imported packages contain sources that are imported through two separate connections:
[
{
"type": "dbTableName",
"on": "from_table_name",
"with": "to_table_name"
},
{
"type": "dbPath",
"on": "from_path_element",
"with": "to_path_element"
}
]
Type | Description |
---|---|
dbTableName | Replaces the name of the table in the source (on value) with the new table name to use (with value). |
dbPath | Replaces the path to the database in the source (on value) with the new path to use (with value). The value of dbPath is an array, so the replacement rule is applied to each element of the array. In most cases, the number of elements is 1. If your path contains multiple elements, be careful in your use of regular expressions for remapping dbPath values. |
✅ TIP: The on parameter values can be provided as regular expressions.
The response body contains any previously created rules that have been deleted as a result of this update.
{"deleted": {"data": []}}
If you submit the request again, the response contains the rule definition of the previous update, which has been deleted.
{
"deleted": {
"data": [
{
"id": 1,
"type": "s3Bucket",
"on": "wrangle-dev",
"with": "wrangle-prod",
"createdAt": "2019-02-13T23:27:13.351Z",
"updatedAt": "2019-02-13T23:27:13.351Z",
"creator": {"id": 7},
"updater": {"id": 7},
"deployment": {"id": 2}
}
]
}
}
ℹ️ NOTE: You can get the value and object import rules using:
/v4/deployments/:id?embed=valueImportRules,objectImportRules
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
id required | integer |
type required | string Enum: "fileLocation" "s3Bucket" "dbTableName" "dbPath" "host" "userinfo" The type of value import rule. |
on required | string The specified literal or pattern to match. |
with required | string The replacement value or pattern. |
[
  {
    "type": "s3Bucket",
    "on": "wrangle-dev",
    "with": "wrangle-prod"
  }
]
{- "deleted": {
- "data": [ ]
}
}
An internal object representing the state of a recipe at a given point in time.
Get a summary of the history of a given recipe edit. This includes information about the changes involved in each edit along the way, as well as the person who made each edit.
You can obtain the recipe for a given wrangledDataset by using:
GET v4/wrangledDatasets/:id?embed=editablescript
It is then possible to know the current edit id of a recipe by looking at the currentEditId
field of the recipe.
id required | integer |
withNaturalLanguage | boolean |
{- "nextEditId": 1,
- "history": [
- {
- "owner": {
- "id": 1,
- "email": "joe@example.com",
- "name": "Joe Guy"
}, - "date": "2019-08-24T14:15:22Z",
- "editId": 1,
- "changes": [
- {
- "type": "inserted",
- "task": { },
- "portId": 1,
- "id": 1
}
], - "tableNameMap": { }
}
]
}
An internal object representing the AWS Elastic MapReduce (EMR) cluster configured to run Trifacta jobs.
Create a new EMR cluster
ℹ️ NOTE: Admin role is required to use this endpoint.
ref: createEmrCluster
emrClusterId required | string The identifier for the EMR Cluster |
resourceBucket required | string S3 bucket to store Trifacta's libraries, external libraries, and any other resources for Spark execution |
resourcePath | string Path on S3 bucket to store resources for execution on EMR |
region | string The region where the EMR Cluster runs |
{- "emrClusterId": "j-XXXXXXXXXXXXX",
- "resourceBucket": "bucketName",
- "resourcePath": "",
- "region": "us-west-2"
}
{- "emrClusterId": "j-XXXXXXXXXXXXX",
- "resourceBucket": "bucketName",
- "resourcePath": "",
- "region": "us-west-2",
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
List existing EMR clusters
ℹ️ NOTE: Admin role is required to use this endpoint.
ref: listEmrClusters
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order comma-separated list of fields to match the |
filter | string Example: filter=my-object Value for filtering objects. See |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
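The limit and offset parameters support paging through long result lists. A sketch of a paging loop (fetch is a stand-in for an authenticated HTTP GET returning the parsed JSON body; the helper name is invented):

```python
def list_all(fetch, limit=25):
    """Collect every object from a paged list endpoint such as
    GET /v4/emrClusters?limit=...&offset=...  (illustration only)."""
    results, offset = [], 0
    while True:
        page = fetch(limit=limit, offset=offset)["data"]
        results.extend(page)
        if len(page) < limit:  # a short page means there is nothing left
            return results
        offset += limit

# Fake fetch over three clusters with a page size of two:
clusters = [{"id": i} for i in range(1, 4)]
fake_fetch = lambda limit, offset: {"data": clusters[offset:offset + limit]}
list_all(fake_fetch, limit=2)  # returns all three objects
```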
{- "data": [
- {
- "emrClusterId": "j-XXXXXXXXXXXXX",
- "resourceBucket": "bucketName",
- "resourcePath": "",
- "region": "us-west-2",
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
], - "count": 1
}
Count existing EMR clusters
ℹ️ NOTE: Admin role is required to use this endpoint.
ref: countEmrClusters
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order comma-separated list of fields to match the |
filter | string Example: filter=my-object Value for filtering objects. See |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
{- "count": 1
}
Get an existing EMR cluster
ℹ️ NOTE: Admin role is required to use this endpoint.
ref: getEmrCluster
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. |
{- "emrClusterId": "j-XXXXXXXXXXXXX",
- "resourceBucket": "bucketName",
- "resourcePath": "",
- "region": "us-west-2",
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Update an existing EMR cluster
ℹ️ NOTE: Admin role is required to use this endpoint.
ref: updateEmrCluster
id required | integer |
emrClusterId | string The identifier for the EMR Cluster |
resourceBucket | string S3 bucket to store Trifacta's libraries, external libraries, and any other resources for Spark execution |
resourcePath | string Path on S3 bucket to store resources for execution on EMR |
region | string The region where the EMR Cluster runs |
{- "emrClusterId": "j-XXXXXXXXXXXXX",
- "resourceBucket": "bucketName",
- "resourcePath": "",
- "region": "us-west-2"
}
{- "id": 1,
- "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Delete an existing EMR cluster
ℹ️ NOTE: Admin role is required to use this endpoint.
ref: deleteEmrCluster
id required | integer |
Create a new environment parameter to be used in the workspace.
ℹ️ NOTE: Admin role is required to use this endpoint.
overrideKey required | string Key/name used when overriding the value of the variable |
required | overrideValueInfoVariable (object) or overrideValueInfoSelector (object) |
{- "overrideKey": "myVar",
- "value": {
- "variable": {
- "value": "myValue"
}
}
}
{- "id": 1,
- "overrideKey": "myVar",
- "value": {
- "variable": {
- "value": "myValue"
}
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "deleted_at": "2019-08-24T14:15:22Z",
- "usageInfo": {
- "runParameters": 1
}
}
List existing environment parameters
includeUsageInfo | string Include information about where the environment parameter is used. |
filter | string Filter environment parameters using the attached overrideKey |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order comma-separated list of fields to match the |
filter | string Example: filter=my-object Value for filtering objects. See |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
{- "data": [
- {
- "id": 1,
- "overrideKey": "myVar",
- "value": {
- "variable": {
- "value": "myValue"
}
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "deleted_at": "2019-08-24T14:15:22Z",
- "usageInfo": {
- "runParameters": 1
}
}
]
}
Import the environment parameters from the given package.
A ZIP file, as exported by the export environment parameters endpoint, is accepted.
This endpoint accepts a multipart/form-data content type.
Here is how to send the ZIP package using curl:
curl -X POST http://example.com:3005/v4/environmentParameters/package \
-H 'authorization: Bearer <api-token>' \
-H 'content-type: multipart/form-data' \
-F 'data=@path/to/environment-parameters-package.zip'
The response lists the objects that have been created.
ℹ️ NOTE: Admin role is required to use this endpoint.
fromUI | boolean If true, returns the list of imported environment parameters for confirmation. |
{ }
{- "data": [
- {
- "id": 1,
- "overrideKey": "myVar",
- "value": {
- "variable": {
- "value": "myValue"
}
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "deleted_at": "2019-08-24T14:15:22Z",
- "usageInfo": {
- "runParameters": 1
}
}
]
}
Retrieve a package containing the list of environment parameters.
Response body is the contents of the package. Package contents are a ZIPped version of the list of environment parameters.
The environment parameters package can be used to import the environment parameters in another environment.
ℹ️ NOTE: Admin role is required to use this endpoint.
hideSecrets | boolean If included, the secret values will be hidden. |
Get an existing environment parameter
ℹ️ NOTE: Admin role is required to use this endpoint.
id required | integer |
includeUsageInfo | string Include information about where the environment parameter is used. |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. |
{- "id": 1,
- "overrideKey": "myVar",
- "value": {
- "variable": {
- "value": "myValue"
}
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "deleted_at": "2019-08-24T14:15:22Z",
- "usageInfo": {
- "runParameters": 1
}
}
Count existing environment parameters
ℹ️ NOTE: Admin role is required to use this endpoint.
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order comma-separated list of fields to match the |
filter | string Example: filter=my-object Value for filtering objects. See |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
{- "count": 1
}
A container for wrangling logic. Contains imported datasets, recipes, output objects, and references.
Create a new flow with specified name and optional description and target folder.
ℹ️ NOTE: You cannot add datasets to the flow through this endpoint. Moving pre-existing datasets into a flow is not supported in this release. Create the flow first; then, when you create the datasets, associate them with the flow at creation time.
ref: createFlow
name | string Display name of the flow. |
description | string User-friendly description for the flow. |
object Settings for the flow. | |
incrementName | boolean Default: false Increment the flow name if a similar flow name already exists |
folderId | integer Internal identifier for a Flow folder. |
{- "name": "string",
- "description": "string",
- "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "incrementName": false,
- "folderId": 1
}
{- "name": "string",
- "description": "string",
- "folder": {
- "id": 1
}, - "id": 1,
- "defaultOutputDir": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "workspace": {
- "id": 1
}, - "flowState": {
- "isOpened": true,
- "flow": {
- "id": 1
}, - "person": {
- "id": 1
}, - "zoom": 0,
- "offsetX": 0,
- "offsetY": 0
}
}
List existing flows
ref: listFlows
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order comma-separated list of fields to match the |
filter | string Example: filter=my-object Value for filtering objects. See |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
folderId | integer Only show flows from this folder |
flowsFilter | string Which types of flows to list. One of ['all', 'shared', 'owned'] |
{- "data": [
- {
- "name": "string",
- "description": "string",
- "folder": {
- "id": 1
}, - "id": 1,
- "defaultOutputDir": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "workspace": {
- "id": 1
}, - "flowState": {
- "isOpened": true,
- "flow": {
- "id": 1
}, - "person": {
- "id": 1
}, - "zoom": 0,
- "offsetX": 0,
- "offsetY": 0
}
}
], - "count": 1
}
Move a flow to the specified folder
ref: moveFlow
id required | integer |
folderId | integer Internal identifier of the folder to which the flow will be moved |
{- "folderId": 1
}
{- "name": "string",
- "description": "string",
- "folder": {
- "id": 1
}, - "id": 1,
- "defaultOutputDir": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "workspace": {
- "id": 1
}, - "flowState": {
- "isOpened": true,
- "flow": {
- "id": 1
}, - "person": {
- "id": 1
}, - "zoom": 0,
- "offsetX": 0,
- "offsetY": 0
}
}
Import all flows from the given package.
A ZIP file, as exported by the export flow endpoint, is accepted.
Before you import, you can perform a dry run to check for errors. See Import Flow package - Dry run.
This endpoint accepts a multipart/form-data content type.
Here is how to send the ZIP package using curl:
curl -X POST http://example.com:3005/v4/flows/package \
-H 'authorization: Bearer <api-token>' \
-H 'content-type: multipart/form-data' \
-F 'data=@path/to/flow-package.zip'
The response lists the objects that have been created.
ref: importPackage
folderId | integer |
fromUI | boolean If true, returns the list of imported environment parameters for confirmation if any are referenced in the flow. |
overrideJsUdfs | boolean If true, conflicting JS UDFs in the target environment are overridden, which affects all existing flows that reference them. |
File required | object (importFlowPackageRequestZip) An exported flow zip file. |
Array of environmentParameterMappingToExistingEnvParam (object) or environmentParameterMappingToManualValue (object) (environmentParameterMapping) [ items ] | |
Array of objects (connectionIdMapping) [ items ] |
{- "deletedObjects": { },
- "createdObjectMapping": { },
- "importRuleChanges": {
- "object": [
- { }
], - "value": [
- { }
]
}, - "primaryFlowIds": [
- 1
], - "flows": [
- {
- "name": "string",
- "description": "string",
- "folder": {
- "id": 1
}, - "id": 1,
- "defaultOutputDir": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "workspace": {
- "id": 1
}, - "flowState": {
- "isOpened": true,
- "flow": {
- "id": 1
}, - "person": {
- "id": 1
}, - "zoom": 0,
- "offsetX": 0,
- "offsetY": 0
}
}
], - "datasources": [
- {
- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
], - "flownodes": [
- {
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "recipe": {
- "id": 1
}, - "activeSample": {
- "id": 1
}, - "wrangled": true
}
], - "flowedges": [
- {
- "inPortId": 1,
- "outPortId": 1,
- "inputFlowNode": {
- "id": 1
}, - "outputFlowNode": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
], - "recipes": [
- {
- "name": "string",
- "description": "string",
- "active": true,
- "nextPortId": 1,
- "currentEdit": {
- "id": 1
}, - "redoLeafEdit": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
], - "outputobjects": [
- {
- "execution": "photon",
- "profiler": true,
- "isAdhoc": true,
- "flownode": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "name": "string",
- "description": "string"
}
], - "webhookflowtasks": [
- { }
], - "release": { }
}
Test importing a flow package and return information about the objects that would be created.
The same payload as for Import Flow package is expected.
ref: importPackageDryRun
folderId | integer |
File required | object (importFlowPackageRequestZip) An exported flow zip file. |
Array of environmentParameterMappingToExistingEnvParam (object) or environmentParameterMappingToManualValue (object) (environmentParameterMapping) [ items ] | |
Array of objects (connectionIdMapping) [ items ] |
{- "deletedObjects": { },
- "createdObjectMapping": { },
- "importRuleChanges": {
- "object": [
- { }
], - "value": [
- { }
]
}, - "primaryFlowIds": [
- 1
], - "flows": [
- {
- "name": "string",
- "description": "string",
- "folder": {
- "id": 1
}, - "id": 1,
- "defaultOutputDir": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "workspace": {
- "id": 1
}, - "flowState": {
- "isOpened": true,
- "flow": {
- "id": 1
}, - "person": {
- "id": 1
}, - "zoom": 0,
- "offsetX": 0,
- "offsetY": 0
}
}
], - "datasources": [
- {
- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
], - "flownodes": [
- {
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "recipe": {
- "id": 1
}, - "activeSample": {
- "id": 1
}, - "wrangled": true
}
], - "flowedges": [
- {
- "inPortId": 1,
- "outPortId": 1,
- "inputFlowNode": {
- "id": 1
}, - "outputFlowNode": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
], - "recipes": [
- {
- "name": "string",
- "description": "string",
- "active": true,
- "nextPortId": 1,
- "currentEdit": {
- "id": 1
}, - "redoLeafEdit": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
], - "outputobjects": [
- {
- "execution": "photon",
- "profiler": true,
- "isAdhoc": true,
- "flownode": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "name": "string",
- "description": "string"
}
], - "webhookflowtasks": [
- { }
], - "release": { }
}
Create a copy of this flow, as well as all contained recipes.
ref: copyFlow
id required | integer |
name | string name of the new copied flow. |
description | string description of the new copied flow. |
copyDatasources | boolean Default: false If true, data sources are copied (i.e., new imported datasets are created; no data is copied on the file system). Otherwise, the existing imported datasets are reused. |
{- "name": "string",
- "description": "string",
- "copyDatasources": false
}
{- "name": "string",
- "description": "string",
- "folder": {
- "id": 1
}, - "id": 1,
- "defaultOutputDir": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "settings": {
- "optimize": "enabled",
- "optimizers": {
- "columnPruning": "enabled"
}
}, - "workspace": {
- "id": 1
}, - "flowState": {
- "isOpened": true,
- "flow": {
- "id": 1
}, - "person": {
- "id": 1
}, - "zoom": 0,
- "offsetX": 0,
- "offsetY": 0
}
}
Run all adhoc destinations in a flow.
(deprecated) If a scheduleExecutionId is provided, run all scheduled destinations in the flow.
The request body can be empty. You can optionally pass parameters:
{
"runParameters": {
"overrides": {
"data": [{"key": "varRegion", "value": "02"}]
}
}
}
You can also pass Spark options that will be used for the job run.
{
"sparkOptions": [
{"key": "spark.executor.memory", "value": "4GB"}
]
}
You can also pass Databricks options in a flow run, which will be used for all the job runs. These can be further overridden at each recipe level using the overrides block.
{
"databricksOptions": [
{"key": "maxWorkers", "value": 8},
{"key": "poolId", "value": "pool-123456789"},
{"key": "enableLocalDiskEncryption", "value": true}
]
}
Using recipe identifiers, you can specify a subset of outputs in the flow to run. See runJobGroup for more information on specifying wrangledDataset.
{"wrangledDatasetIds": [2, 3]}
You can also override each output in the flow using the recipe name.
{
"overrides": {
"my recipe name": {
"profiler": true,
"writesettings": [
{
"path": "<path_to_output_file>",
"action": "create",
"format": "csv",
"compression": "none",
"header": false,
"asSingleFile": false
}
]
}
}
}
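Taken together, these optional pieces form a single runFlow request body. A minimal Python sketch that assembles the sections shown above (the builder function is illustrative, not part of the API):

```python
import json

def build_run_flow_body(wrangled_dataset_ids=None, run_parameters=None,
                        spark_options=None, databricks_options=None,
                        overrides=None):
    """Assemble a runFlow request body from the optional sections above."""
    body = {}
    if wrangled_dataset_ids:
        body["wrangledDatasetIds"] = wrangled_dataset_ids
    if run_parameters:
        # run parameter overrides use the nested overrides/data shape
        body["runParameters"] = {"overrides": {"data": run_parameters}}
    if spark_options:
        body["sparkOptions"] = spark_options
    if databricks_options:
        body["databricksOptions"] = databricks_options
    if overrides:
        body["overrides"] = overrides
    return body

body = build_run_flow_body(
    wrangled_dataset_ids=[2, 3],
    run_parameters=[{"key": "varRegion", "value": "02"}],
    spark_options=[{"key": "spark.executor.memory", "value": "4GB"}],
)
print(json.dumps(body, indent=2))
```

Sections you omit are simply left out of the body, matching the "request body can be empty" behavior above.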
An array of jobGroup results is returned. Use the flowRunId if you want to track the status of the flow run. See Get Flow Run Status for more information.
ref: runFlow
id required | integer |
runAsync | boolean Uses a queue to run individual jobGroups asynchronously and returns immediately. Default value is false. |
x-execution-id | string Example: f9cab740-50b7-11e9-ba15-93c82271a00b Optional header to safely retry the request without accidentally performing the same operation twice. If a FlowRun with the same x-execution-id already exists, a new one is not started. |
ignoreRecipeErrors | boolean Setting this flag to true will mean the job will run even if there are upstream recipe errors. Setting it to false will cause the request to fail on recipe errors. |
runParameters | object (runParameterOverrides) Allows overriding parameters that are defined in the flow on datasets or outputs, e.g. variables. |
scheduleExecutionId | integer |
sparkOptions | Array of objects (outputObjectSparkOptionUpdateRequest) [ items ] |
 | object (outputObjectSchemaDriftOptionsUpdateRequest) |
databricksOptions | Array of objects (databricksOptionsUpdateRequest) [ items ] |
execution | string Enum: "photon" "spark" "emrSpark" "databricksSpark" Execution language. Indicates on which engine the job is executed. Can be null/missing for scheduled jobs that fail during the validation phase. |
wrangledDatasetIds | Array of integers[ items ] Subset of outputs (identified by identifier of the recipe preceding the output) in this flow to run. When empty or unspecified, all outputs in the flow will be run. |
overrides | object Overrides for each of the output object. Use the recipe name to specify the overrides. |
{ }
{
  "flowRunId": 1,
  "data": [
    {
      "id": 1,
      "flowRun": {
        "id": 1
      },
      "jobs": {
        "data": [
          {
            "id": 1
          }
        ]
      },
      "jobGraph": {
        "edges": [
          {
            "source": 1,
            "target": 1
          }
        ],
        "vertices": [
          1
        ]
      },
      "reason": "Job started",
      "sessionId": "f9cab740-50b7-11e9-ba15-93c82271a00b"
    }
  ]
}
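The x-execution-id header described above can be generated client-side to make retries safe. A hedged sketch of constructing (not sending) the runFlow request with the standard library — the /v4/flows/{id}/run path and Bearer authorization scheme are assumptions based on the URL patterns shown elsewhere in this document:

```python
import json
import urllib.request
import uuid

def build_run_flow_request(base_url, flow_id, token, body):
    """Construct the HTTP request for runFlow; caller sends it with urlopen."""
    req = urllib.request.Request(
        url=f"{base_url}/v4/flows/{flow_id}/run",  # assumed path
        data=json.dumps(body).encode("utf-8"),
        method="POST",
    )
    req.add_header("Content-Type", "application/json")
    req.add_header("Accept", "application/json")
    req.add_header("Authorization", f"Bearer {token}")  # assumed auth scheme
    # A fixed x-execution-id lets you retry this exact request safely:
    # the platform will not start a second FlowRun for the same id.
    req.add_header("x-execution-id", str(uuid.uuid4()))
    return req

req = build_run_flow_request("https://example.com", 42, "my-token",
                             {"wrangledDatasetIds": [2, 3]})
```

To actually run the flow you would pass `req` to `urllib.request.urlopen`, keeping the same x-execution-id across retries.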
Count existing flows
ref: countFlows
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with limit. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
folderId | integer Only show flows from this folder |
flowsFilter | string Which types of flows to count. One of ['all', 'shared', 'owned'] |
{
  "count": 1
}
Get an existing flow
ref: getFlow
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
{
  "name": "string",
  "description": "string",
  "folder": {
    "id": 1
  },
  "id": 1,
  "defaultOutputDir": "string",
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "creator": {
    "id": 1
  },
  "updater": {
    "id": 1
  },
  "settings": {
    "optimize": "enabled",
    "optimizers": {
      "columnPruning": "enabled"
    }
  },
  "workspace": {
    "id": 1
  },
  "flowState": {
    "isOpened": true,
    "flow": {
      "id": 1
    },
    "person": {
      "id": 1
    },
    "zoom": 0,
    "offsetX": 0,
    "offsetY": 0
  }
}
Update an existing flow based on the specified identifier.
ℹ️ NOTE: You cannot add datasets to the flow through this endpoint. Moving pre-existing datasets into a flow is not supported in this release. Create the flow first and then when you create the datasets, associate them with the flow at the time of creation.
ref: updateFlow
id required | integer |
name | string Display name of the flow. |
description | string User-friendly description for the flow. |
settings | object Settings for the flow. |
folderId | integer Internal identifier for a Flow folder. |
{
  "name": "string",
  "description": "string",
  "settings": {
    "optimize": "enabled",
    "optimizers": {
      "columnPruning": "enabled"
    }
  },
  "folderId": 1
}
{
  "id": 1,
  "updater": {
    "id": 1
  },
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z"
}
Update an existing flow based on the specified identifier.
ℹ️ NOTE: You cannot add datasets to the flow through this endpoint. Moving pre-existing datasets into a flow is not supported in this release. Create the flow first and then when you create the datasets, associate them with the flow at the time of creation.
ref: patchFlow
id required | integer |
name | string Display name of the flow. |
description | string User-friendly description for the flow. |
settings | object Settings for the flow. |
folderId | integer Internal identifier for a Flow folder. |
{
  "name": "string",
  "description": "string",
  "settings": {
    "optimize": "enabled",
    "optimizers": {
      "columnPruning": "enabled"
    }
  },
  "folderId": 1
}
{
  "id": 1,
  "updater": {
    "id": 1
  },
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z"
}
Delete an existing flow
ref: deleteFlow
id required | integer |
Validate a flow's outputs for recipe errors. For the given flow, validate recipe errors in all outputs and their dependencies. This API returns a list of all recipes contained in the flow or in referenced flows which will be executed if the flow is run or scheduled. For each returned recipe, the API specifies errors, if any, and the flowId and flowNodeId which contain the recipe.
ref: validateFlow
id required | integer |
{
  "data": [
    {
      "flowNodeId": 1,
      "flowId": 1,
      "flowName": "string",
      "recipeName": "string",
      "errors": [
        {
          "index": 1,
          "errorMessages": [
            "string"
          ]
        }
      ]
    }
  ]
}
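Since validation errors are nested per recipe, a small helper can flatten the response into a list of failing recipes before deciding whether to run the flow. A sketch (the helper name is illustrative):

```python
def failing_recipes(validation):
    """Return (flowId, flowNodeId, recipeName, messages) for recipes with errors."""
    results = []
    for entry in validation.get("data", []):
        messages = [msg
                    for err in entry.get("errors", [])
                    for msg in err.get("errorMessages", [])]
        if messages:
            results.append((entry["flowId"], entry["flowNodeId"],
                            entry["recipeName"], messages))
    return results

# Shaped like the response sample above.
sample = {"data": [{"flowNodeId": 1, "flowId": 1, "flowName": "string",
                    "recipeName": "my recipe",
                    "errors": [{"index": 1, "errorMessages": ["bad step"]}]}]}
print(failing_recipes(sample))
```

An empty result means every recipe that would be executed validated cleanly.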
Retrieve a package containing the definition of the specified flow.
Response body is the contents of the package. Package contents are a ZIPped version of the flow definition.
The flow package can be used to import the flow in another environment. See the Import Flow Package for more information.
ref: getFlowPackage
id required | integer |
comment | string comment to be displayed when flow is imported in a deployment package |
Performs a dry-run of generating a flow package and exporting it, which checks all permissions required to export the package.
Any permissions errors are reported in the response.
ref: getFlowPackageDryRun
id required | integer |
List flows, with special filtering behaviour
ref: listFlowsLibrary
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with limit. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
flowsFilter | string Which types of flows to list. One of ['all', 'shared', 'owned'] |
{
  "data": [
    {
      "name": "string",
      "description": "string",
      "folder": {
        "id": 1
      },
      "id": 1,
      "defaultOutputDir": "string",
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": {
        "id": 1
      },
      "updater": {
        "id": 1
      },
      "settings": {
        "optimize": "enabled",
        "optimizers": {
          "columnPruning": "enabled"
        }
      },
      "workspace": {
        "id": 1
      },
      "flowState": {
        "isOpened": true,
        "flow": {
          "id": 1
        },
        "person": {
          "id": 1
        },
        "zoom": 0,
        "offsetX": 0,
        "offsetY": 0
      }
    }
  ],
  "count": {
    "flow": 1,
    "folder": 1,
    "all": 1
  }
}
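The list parameters above are ordinary query-string parameters. A sketch of building the query string with Python's standard library (the defaults mirror the documented ones; the helper name is illustrative):

```python
from urllib.parse import urlencode

def list_flows_query(**params):
    """Build the query string for a list endpoint from the parameters above."""
    query = {"limit": 25, "offset": 0}  # documented defaults
    query.update({k: v for k, v in params.items() if v is not None})
    return urlencode(query)

qs = list_flows_query(filter="sales", filterType="contains",
                      sort="-createdAt", flowsFilter="owned")
print(qs)
```

Appending `?{qs}` to the endpoint URL yields a request like the `/v4/jobGroups?limit=100&includeDeleted=true&embed=jobs` example in the overview.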
Count flows, with special filtering behaviour
ref: countFlowsLibrary
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with limit. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
flowsFilter | string Which types of flows to count. One of ['all', 'shared', 'owned'] |
{
  "count": {
    "flow": 1,
    "folder": 1,
    "all": 1
  }
}
List parameters and overrides in associated flow
id required | integer |
outputObjectType | string Filter with a specific output object type |
{
  "flowParameters": {
    "data": [
      {
        "value": { },
        "insertionIndices": [
          { }
        ],
        "id": 1,
        "type": "string",
        "createdAt": "2019-08-24T14:15:22Z",
        "updatedAt": "2019-08-24T14:15:22Z",
        "runParameterEdit": { },
        "flow": {
          "id": 1
        },
        "creator": {
          "id": 1
        },
        "updater": {
          "id": 1
        },
        "overrideKey": "string"
      }
    ]
  },
  "overrides": {
    "data": [
      {
        "defaultValues": [
          {
            "variable": {
              "value": "myValue"
            }
          }
        ],
        "value": {
          "variable": {
            "value": "myValue"
          }
        },
        "isUsed": true,
        "overrideKey": "myVar",
        "flowId": 1,
        "id": 1
      }
    ]
  }
}
List all the inputs of a flow. Data sources that are present in referenced flows are also included.
ref: getFlowInputs
id required | integer |
{
  "data": [
    {
      "dynamicPath": "string",
      "isDynamic": false,
      "isConverted": true,
      "disableTypeInference": true,
      "parsingScript": {
        "id": 1
      },
      "storageLocation": {
        "id": 1
      },
      "connection": {
        "id": 1
      },
      "runParameters": {
        "data": [
          {
            "type": "path",
            "overrideKey": "myVar",
            "insertionIndices": [
              {
                "index": 1,
                "order": 1
              }
            ],
            "value": {
              "dateRange": {
                "timezone": "string",
                "formats": [
                  "string"
                ],
                "last": {
                  "unit": "years",
                  "number": 1,
                  "dow": 1
                }
              }
            }
          }
        ]
      },
      "id": 1,
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": {
        "id": 1
      },
      "updater": {
        "id": 1
      },
      "workspace": {
        "id": 1
      },
      "name": "My Dataset",
      "description": "string"
    }
  ],
  "count": 1
}
List all the outputs of a Flow.
ref: getFlowOutputs
id required | integer |
{
  "data": [
    {
      "execution": "photon",
      "profiler": true,
      "isAdhoc": true,
      "flownode": {
        "id": 1
      },
      "id": 1,
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": {
        "id": 1
      },
      "updater": {
        "id": 1
      },
      "name": "string",
      "description": "string"
    }
  ],
  "count": 1
}
Get all flows contained in this folder.
ref: getFlowsForFolder
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with limit. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
flowsFilter | string Which types of flows to list. One of ['all', 'shared', 'owned'] |
{
  "data": [
    {
      "name": "string",
      "description": "string",
      "folder": {
        "id": 1
      },
      "id": 1,
      "defaultOutputDir": "string",
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": {
        "id": 1
      },
      "updater": {
        "id": 1
      },
      "settings": {
        "optimize": "enabled",
        "optimizers": {
          "columnPruning": "enabled"
        }
      },
      "workspace": {
        "id": 1
      },
      "flowState": {
        "isOpened": true,
        "flow": {
          "id": 1
        },
        "person": {
          "id": 1
        },
        "zoom": 0,
        "offsetX": 0,
        "offsetY": 0
      }
    }
  ],
  "count": 1
}
Get the count of flows contained in this folder.
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with limit. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
flowsFilter | string Which types of flows to count. One of ['all', 'shared', 'owned'] |
{
  "count": 1
}
Replace the dataset or the specified wrangled dataset (flow node) in the flow with a new imported or wrangled dataset. This performs the same action as the "Replace" action in the flow UI.
You can get the flow node id (wrangled dataset id) and the imported dataset id from the URL when clicking on a node in the UI.
ref: replaceDatasetInFlow
id required | integer |
flowNodeId required | integer |
newImportedDatasetId required | integer |
{
  "flowNodeId": 1,
  "newImportedDatasetId": 1
}
{
  "newInputNode": {
    "id": 1,
    "scriptId": 1,
    "flowId": 1
  },
  "outputNodeEdges": [
    {
      "id": 1,
      "flowId": 1,
      "inFlowNodeId": 1,
      "outFlowNodeId": 1
    }
  ]
}
Reset dependencies in flow to the pending state
ref: resetDependencies
id required | integer |
name | string Display name of the flow. |
description | string User-friendly description for the flow. |
settings | object Settings for the flow. |
folderId | integer Internal identifier for a Flow folder. |
{
  "name": "string",
  "description": "string",
  "settings": {
    "optimize": "enabled",
    "optimizers": {
      "columnPruning": "enabled"
    }
  },
  "folderId": 1
}
{
  "data": [
    {
      "name": "string",
      "description": "string",
      "approvedDependency": { }
    }
  ]
}
A placeholder for an object in a flow. Can represent an imported dataset, a recipe, or a Reference.
Create edges between nodes
ref: commitEdges
id required | integer |
updateInfo required | object |
{
  "updateInfo": {
    "deleteOrphaned": true,
    "newEdges": [
      {
        "outPortId": 1,
        "inPortId": 1,
        "outFlowNodeId": 1,
        "inFlowNodeId": 1
      }
    ],
    "edgesToRevive": [
      {
        "id": 1
      }
    ],
    "portsToDelete": [
      {
        "id": 1
      }
    ]
  }
}
{
  "data": [
    {
      "inPortId": 1,
      "outPortId": 1,
      "inputFlowNode": {
        "id": 1
      },
      "outputFlowNode": {
        "id": 1
      },
      "flow": {
        "id": 1
      },
      "id": 1,
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": {
        "id": 1
      },
      "updater": {
        "id": 1
      }
    }
  ]
}
Validate a flow node and upstream flow nodes for recipe errors. For the given flow node, validate recipe errors in it and upstream flow nodes. This API returns a list of all upstream recipes which will be executed if the output object attached to this flow node is run. For each returned recipe, the API specifies errors, if any, and the flowId and flowNodeId which contain the recipe.
ref: validateFlowNode
id required | integer |
{
  "data": [
    {
      "flowNodeId": 1,
      "flowId": 1,
      "flowName": "string",
      "recipeName": "string",
      "errors": [
        {
          "index": 1,
          "errorMessages": [
            "string"
          ]
        }
      ]
    }
  ]
}
Get a list of users and groups with which a Flow is shared. Collaborators can add and edit recipes and datasets in this Flow.
ref: getFlowPermissions
id required | integer |
{
  "data": [
    {
      "id": 1,
      "email": "joe@example.com",
      "name": "Joe Guy",
      "flowPermission": {
        "role": "owner",
        "person": {
          "id": 1
        },
        "flow": {
          "id": 1
        },
        "createdAt": "2019-08-24T14:15:22Z",
        "updatedAt": "2019-08-24T14:15:22Z"
      }
    }
  ],
  "count": 1
}
Get an existing flow permission
ref: getFlowPermission
id required | integer |
aid required | integer |
{
  "role": "owner",
  "person": {
    "id": 1
  },
  "flow": {
    "id": 1
  },
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z"
}
Delete an existing flow permission
ref: deleteFlowPermission
id required | integer |
aid required | integer |
An object representing a flow run.
Get an existing flow run
ref: getFlowRun
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
{
  "id": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "scheduleExecutionId": 1,
  "requestId": "string",
  "flow": {
    "id": 1
  }
}
Get the status of a Flow Run. It combines the status of the underlying Job Groups.
ref: getFlowRunStatus
id required | integer |
"Complete"
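A typical pattern is to poll this endpoint after runFlow until a terminal status is reached. A sketch with the HTTP fetch injected as a callable so the polling logic stays testable; terminal status names other than "Complete" are assumptions:

```python
import time

def wait_for_flow_run(get_status, poll_seconds=5, timeout_seconds=600):
    """Poll a zero-argument status fetcher until the flow run finishes.

    get_status is any callable returning the status string, e.g. one that
    GETs the flow run status endpoint. "Failed" and "Canceled" as terminal
    statuses are assumptions; adjust to the statuses your platform returns.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("Complete", "Failed", "Canceled"):
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("flow run did not reach a terminal status in time")

# Stubbed demonstration: statuses a real fetcher might return in sequence.
statuses = iter(["InProgress", "InProgress", "Complete"])
result = wait_for_flow_run(lambda: next(statuses), poll_seconds=0)
print(result)
```

In real use, `get_status` would issue the GET request and extract the status string from the response body.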
Get the list of jobGroups.
ref: getFlowRunJobGroups
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
{
  "data": [
    {
      "name": "string",
      "description": "string",
      "ranfrom": "ui",
      "ranfor": "recipe",
      "status": "Complete",
      "profilingEnabled": true,
      "runParameterReferenceDate": "2019-08-24T14:15:22Z",
      "snapshot": {
        "id": 1
      },
      "wrangledDataset": {
        "id": 1
      },
      "flowrun": {
        "id": 1
      },
      "id": 1,
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": {
        "id": 1
      },
      "updater": {
        "id": 1
      }
    }
  ],
  "count": 1
}
Used to override the default value of a run parameter in a flow.
Create a new flow run parameter override
flowId required | number |
overrideKey required | string key/name used when overriding the value of the variable |
value required | overrideValueInfoVariable (object) or overrideValueInfoSelector (object) |
{
  "overrideKey": "myVar",
  "value": {
    "variable": {
      "value": "myValue"
    }
  },
  "flowId": 0
}
{
  "id": 1,
  "flowId": 1,
  "overrideKey": "string",
  "value": {
    "dateRange": {
      "timezone": "string",
      "formats": [
        "string"
      ],
      "last": {
        "unit": "years",
        "number": 1,
        "dow": 1
      }
    }
  },
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z"
}
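The value field accepts either a variable or a selector shape. Small builders for both, mirroring the request and response samples above (the function names are illustrative, not part of the API):

```python
def variable_override(flow_id, key, value):
    """Request body for an override holding a simple variable value."""
    return {"flowId": flow_id, "overrideKey": key,
            "value": {"variable": {"value": value}}}

def date_range_override(flow_id, key, timezone, formats, unit, number):
    """Request body using the dateRange selector shape from the samples."""
    return {"flowId": flow_id, "overrideKey": key,
            "value": {"dateRange": {"timezone": timezone,
                                    "formats": formats,
                                    "last": {"unit": unit, "number": number}}}}

body = variable_override(7, "myVar", "myValue")
range_body = date_range_override(7, "refDate", "UTC", ["yyyy-MM-dd"],
                                 "years", 1)
```

Either body is POSTed as-is to create the override; only one of the two value shapes is supplied per override.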
Get an existing flow run parameter override
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
{
  "overrideKey": "myVar",
  "value": {
    "variable": {
      "value": "myValue"
    }
  },
  "flow": {
    "id": 1
  },
  "id": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z"
}
Patch an existing flow run parameter override
id required | integer |
overrideKey | string key/name used when overriding the value of the variable |
value | overrideValueInfoVariable (object) or overrideValueInfoSelector (object) |
{
  "overrideKey": "myVar",
  "value": {
    "variable": {
      "value": "myValue"
    }
  }
}
{
  "id": 1,
  "updater": {
    "id": 1
  },
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z"
}
A collection of flows, useful for organization.
Create a new folder
ref: createFolder
name | string Display name of the folder. |
description | string User-friendly description for the folder. |
{
  "name": "string",
  "description": "string"
}
{
  "name": "string",
  "description": "string",
  "id": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "creator": {
    "id": 1
  },
  "updater": {
    "id": 1
  },
  "workspace": {
    "id": 1
  }
}
List existing folders
ref: listFolders
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with limit. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
{
  "data": [
    {
      "name": "string",
      "description": "string",
      "id": 1,
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": {
        "id": 1
      },
      "updater": {
        "id": 1
      },
      "workspace": {
        "id": 1
      }
    }
  ],
  "count": 1
}
Count existing folders
ref: countFolders
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with limit. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
{
  "count": 1
}
Get an existing folder
ref: getFolder
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
{
  "name": "string",
  "description": "string",
  "id": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "creator": {
    "id": 1
  },
  "updater": {
    "id": 1
  },
  "workspace": {
    "id": 1
  }
}
Update an existing folder
ref: updateFolder
id required | integer |
name | string Display name of the folder. |
description | string User-friendly description for the folder. |
{
  "name": "string",
  "description": "string"
}
{
  "id": 1,
  "updater": {
    "id": 1
  },
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z"
}
Patch an existing folder
ref: patchFolder
id required | integer |
name | string Display name of the folder. |
description | string User-friendly description for the folder. |
{
  "name": "string",
  "description": "string"
}
{
  "id": 1,
  "updater": {
    "id": 1
  },
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z"
}
Delete an existing folder
ref: deleteFolder
id required | integer |
Get all flows contained in this folder.
ref: getFlowsForFolder
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with limit. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
flowsFilter | string Which types of flows to list. One of ['all', 'shared', 'owned'] |
{
  "data": [
    {
      "name": "string",
      "description": "string",
      "folder": {
        "id": 1
      },
      "id": 1,
      "defaultOutputDir": "string",
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": {
        "id": 1
      },
      "updater": {
        "id": 1
      },
      "settings": {
        "optimize": "enabled",
        "optimizers": {
          "columnPruning": "enabled"
        }
      },
      "workspace": {
        "id": 1
      },
      "flowState": {
        "isOpened": true,
        "flow": {
          "id": 1
        },
        "person": {
          "id": 1
        },
        "zoom": 0,
        "offsetX": 0,
        "offsetY": 0
      }
    }
  ],
  "count": 1
}
Get the count of flows contained in this folder.
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with limit. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the filter value against. |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
flowsFilter | string Which types of flows to count. One of ['all', 'shared', 'owned'] |
{
  "count": 1
}
An object representing data loaded into Designer Cloud Powered by Trifacta, as well as any structuring that has been applied to it. Imported datasets are the starting point for wrangling, and can be used in multiple flows.
Create an imported dataset from an available resource. The created dataset is owned by the authenticated user.
In general, importing a file is done using the following payload:
{
"uri": "protocol://path-to-file",
"name": "my dataset",
"detectStructure": true
}
See more examples in the Request Samples section.
✅ TIP: When an imported dataset is created via API, it is always imported as an unstructured dataset by default. To import a dataset with the inferred recipe, add detectStructure: true to the payload.
ℹ️ NOTE: Do not create an imported dataset from a file that is being used by another imported dataset. If you delete the newly created imported dataset, the file is removed, and the other dataset is corrupted. Use a new file or make a copy of the first file first.
ℹ️ NOTE: Importing a Microsoft Excel file, or any file that needs to be converted, via the API is not yet supported.
name required | string Display name of the imported dataset. |
uri required | string Dataset URI |
description | string User-friendly description for the imported dataset. |
disableTypeInference | boolean Only applicable to relational sources (e.g. database tables or views). Prevents Designer Cloud Powered by Trifacta type inference from running and inferring types by looking at the first rows of the dataset. |
type | string Indicate the type of dataset. If not specified, the default storage protocol is used. |
isDynamic | boolean Default: false Indicates if the datasource is parameterized. In that case, a dynamicPath should be provided. |
host | string Host for the dataset |
userinfo | string User info for the dataset |
detectStructure | boolean Default: false Indicate if a parsing script should be inferred when importing the dataset. By default, the dataset is imported |
dynamicPath | string Path used when resolving the parameters. It is used when running a job or collecting a sample. It is different from the one used as a storage location which corresponds to the first match. The latter is used when doing a fast preview in the UI. |
encoding | string Default: "UTF-8" Optional dataset encoding. |
sanitizeColumnNames | boolean Default: false Indicate whether the column names in the imported file should be sanitized |
ensureHeader | boolean If provided, forces the first-row header toggle |
Array of objects (runParameterFileBasedInfo) [ items ] Description of the dataset parameters if the dataset is parameterized. |
{
  "uri": "protocol://path-to-file",
  "name": "my dataset",
  "detectStructure": true
}
{- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
Deprecated. Use listDatasetLibrary instead.
ref: listImportedDatasets
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the |
filter | string Example: filter=my-object Value for filtering objects. See |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
{- "data": [
- {
- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
], - "count": 1
}
Add the specified imported dataset to a flow based on its internal identifier.
ℹ️ NOTE: Datasets can be added to flows based on the permissions of the access token used on this endpoint. Datasets can be added to flows that are shared by the user.
id required | integer |
required | object The flow to add this dataset to. |
{
  "flow": {
    "id": 1
  }
}
{
  "flow": {
    "id": 1
  },
  "recipe": {
    "id": 1
  },
  "activeSample": {
    "id": 1
  },
  "wrangled": true
}
Create a copy of an imported dataset
ref: copyDataSource
id required | integer |
name | string name of the copied dataset |
{
  "name": "string"
}
{- "dynamicPath": "string",
- "isSchematized": true,
- "isDynamic": true,
- "isConverted": true,
- "disableTypeInference": true,
- "hasStructuring": true,
- "hasSchemaErrors": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}
}
Fetches and updates the latest schema of a datasource
ref: asyncRefreshSchema
id required | integer |
{ }
{
  "resourceTaskStateId": 1
}
List all the inputs of a flow, including data sources that are present in referenced flows.
ref: getFlowInputs
id required | integer |
{- "data": [
- {
- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
], - "count": 1
}
Deprecated. Use countDatasetLibrary instead.
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the |
filter | string Example: filter=my-object Value for filtering objects. See |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
{- "data": [
- {
- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
], - "count": 1
}
Get the specified imported dataset.
Use the following embedded reference to include, in the response, data about the connection used to acquire the source dataset, if it was created from a custom connection. See embedding resources for more information.
/v4/importedDatasets/{id}?embed=connection
ref: getImportedDataset
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
includeAssociatedSubjects | boolean If includeAssociatedSubjects is true, it will include entitlement associated subjects in the response |
{- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
Modify the specified imported dataset. The name, path, bucket, etc. (e.g., for GCS) can be modified.
ℹ️ NOTE: Samples are not updated for dependent recipes. As a result, recipes continue to show samples of the older data.
id required | integer |
name | string Display name of the imported dataset. |
description | string User-friendly description for the imported dataset. |
disableTypeInference | boolean Only applicable to relational sources (e.g., database tables or views). Prevents Designer Cloud Powered by Trifacta type inference from inferring types by looking at the first rows of the dataset. |
type | string Indicate the type of dataset. If not specified, the default storage protocol is used. |
isDynamic | boolean Default: false Indicate if the datasource is parameterized. In that case, a |
host | string Host for the dataset |
userinfo | string User info for the dataset |
bucket | string The bucket is required if the datasource is stored in a bucket file system. |
raw | string Raw SQL query |
path | string |
dynamicPath | string Path used when resolving the parameters. It is used when running a job or collecting a sample. It is different from the one used as a storage location which corresponds to the first match. The latter is used when doing a fast preview in the UI. |
Array of objects (runParameterInfo) [ items ] | |
dynamicBucket | string |
dynamicHost | string |
dynamicUserInfo | string |
isConverted | boolean Indicate if the imported dataset is converted. This is the case, for example, for Microsoft Excel datasets. |
{- "name": "My Dataset",
- "description": "string",
- "disableTypeInference": true,
- "type": "string",
- "isDynamic": false,
- "host": "string",
- "userinfo": "string",
- "bucket": "string",
- "raw": "SELECT * FROM table",
- "path": "string",
- "dynamicPath": "string",
- "runParameters": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
], - "dynamicBucket": "string",
- "dynamicHost": "string",
- "dynamicUserInfo": "string",
- "isConverted": true
}
{- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
Modify the specified imported dataset. Only the name and description properties should be modified.
ref: patchImportedDataset
id required | integer |
name | string Display name of the imported dataset. |
description | string User-friendly description for the imported dataset. |
{
  "name": "My Dataset",
  "description": "string"
}
{- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
List all the inputs that are linked to this output object, including data sources that are present in referenced flows.
id required | integer |
{- "data": [
- {
- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
], - "count": 1
}
List Designer Cloud Powered by Trifacta datasets.
This can be used to list both imported and reference datasets throughout the system,
as well as recipes in a given flow.
ref: listDatasetLibrary
required | string or Array of strings Which types of datasets to list.
Valid choices are: [ |
ownershipFilter | string Which set of datasets to list.
One of [ |
schematized | boolean If included, filter to only show schematized imported datasets. |
currentFlowId | integer Required for including |
datasourceFlowId | integer When included, filter included datasets to only include those associated to the given flow. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filter | string Example: filter=my-object Value for filtering objects. See |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
flowId | integer When provided, list datasets associated with this flow before other datasets. |
userIdFilter | integer Allows admins to filter datasets based on userId |
includeAssociatedSubjects | boolean If includeAssociatedSubjects is true, it will include entitlements associated subjects in the response |
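A request URL for this endpoint can be sketched as below. The endpoint path and the comma-joined serialization of the type list are assumptions based on the parameter table above, shown for illustration only.

```python
from urllib.parse import quote

def build_dataset_library_url(dataset_types, limit=25, offset=0, filter_value=None):
    """Build a listDatasetLibrary request URL (path assumed for illustration)."""
    params = [
        ("type", ",".join(dataset_types)),  # assumed comma-separated serialization
        ("limit", str(limit)),
        ("offset", str(offset)),
    ]
    if filter_value:
        params.append(("filter", filter_value))
    # Keep commas readable in the query string; encode everything else.
    query = "&".join(f"{k}={quote(v, safe=',')}" for k, v in params)
    return f"/v4/datasetLibrary?{query}"

url = build_dataset_library_url(["imported", "reference"], filter_value="my-object")
```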
{- "data": {
- "type": "datasource",
- "referenceCount": 1,
- "count": 1,
- "importedDataset": {
- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
}, - "count": {
- "imported": 1,
- "reference": 1,
- "recipe": 1,
- "all": 1
}
}
Count Designer Cloud Powered by Trifacta datasets. Gives counts for various types of datasets matching the provided filters.
ref: countDatasetLibrary
ownershipFilter | string Which set of datasets to count.
One of [ |
schematized | boolean If included, filter to only show schematized imported datasets. |
currentFlowId | integer Required for including |
datasourceFlowId | integer When included, filter included datasets to only include those associated to the given flow. |
flowId | integer When provided, count datasets associated with this flow before other datasets. |
string or Array of strings Which types of datasets to list.
Valid choices are: [ | |
filter | string Example: filter=my-object Value for fuzzy-filtering objects. See |
userIdFilter | integer Allows admins to filter datasets based on userId |
{
  "count": {
    "imported": 1,
    "reference": 1,
    "recipe": 1,
    "all": 1
  }
}
Update existing script lines for the datasource.
ref: updateScriptLines
id required | integer |
inferredScript required | object |
currentEditId required | integer |
{
  "inferredScript": { },
  "currentEditId": 1
}
{ }
An internal object encoding the information necessary to run a part of a Designer Cloud Powered by Trifacta jobGroup.
This is called a "Stage" on the Job Results page in the UI.
Get information about the batch jobs within a Designer Cloud Powered by Trifacta job.
ref: getJobsForJobGroup
id required | integer |
{- "data": [
- {
- "id": 1,
- "status": "Complete",
- "jobType": "wrangle",
- "sampleSize": 1,
- "percentComplete": 1,
- "jobGroup": {
- "id": 1
}, - "errorMessage": {
- "id": 1
}, - "lastHeartbeatAt": "2019-08-24T14:15:22Z",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "executionLanguage": "photon",
- "cpJobId": "string",
- "wranglescript": {
- "id": 1
}, - "emrcluster": {
- "id": 1
}
}
], - "count": 1
}
Get Job Status.
ref: getJobStatus
id required | integer |
"Complete"
Create a jobGroup, which launches the specified job as the authenticated user. This performs the same action as clicking on the Run Job button in the application.
The request specification depends on one of the following conditions:
In the last case, you must specify some overrides when running the job. See the example with overrides
for more information.
ℹ️ NOTE: Override values applied to a job are not validated. Invalid overrides may cause your job to fail.
To run a job, you simply specify the recipe identifier (wrangledDataset.id). If the job is successful, all defined outputs are generated, as defined in the outputObject, publications, and writeSettings objects associated with the recipe.
✅ TIP: To identify the wrangledDataset Id, select the recipe icon in the flow view and take the id shown in the URL. e.g. if the URL is
/flows/10?recipe=7
, the wrangledDataset Id is 7
.
{"wrangledDataset": {"id": 7}}
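The request bodies shown above can be built programmatically. The helper below is a sketch (its name is illustrative): the recipe id is the only required field, and the overrides and runParameters sections are merged in only when needed.

```python
import json

def build_run_job_payload(recipe_id, overrides=None, run_parameters=None):
    """Build the JSON body for runJobGroup.

    overrides: optional dict, e.g. {"execution": "spark", "profiler": False}.
    run_parameters: optional list of {"key": ..., "value": ...} overrides.
    """
    payload = {"wrangledDataset": {"id": recipe_id}}
    if overrides:
        payload["overrides"] = overrides
    if run_parameters:
        payload["runParameters"] = {"overrides": {"data": run_parameters}}
    return json.dumps(payload)

# Minimal request: run the recipe with id 7 using its stored output settings.
body = build_run_job_payload(7)
```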
If you must change some outputs or other settings for the specific job, you can insert these changes in the overrides section of the request. In the example below, the running environment, profiling option, and writeSettings for the job are modified for this execution.
{
"wrangledDataset": {"id": 1},
"overrides": {
"execution": "spark",
"profiler": false,
"writesettings": [
{
"path": "<path_to_output_file>",
"action": "create",
"format": "csv",
"compression": "none",
"header": false,
"asSingleFile": false
}
]
}
}
You can also override the Spark options that will be used for the job run:
{
"wrangledDataset": {"id": 1},
"overrides": {
"execution": "spark",
"profiler": true,
"sparkOptions": [
{"key": "spark.executor.cores", "value": "2"},
{"key": "spark.executor.memory", "value": "4GB"}
]
}
}
You can also override the Databricks options that will be used for the job run:
{
"wrangledDataset": {"id": 1},
"overrides": {
"execution": "databricksSpark",
"profiler": true,
"databricksOptions": [
{"key": "maxWorkers", "value": 8},
{"key": "poolId", "value": "pool-123456789"},
{"key": "enableLocalDiskEncryption", "value": true}
]
}
}
If you have created a dataset with parameters, you can specify overrides for parameter values during execution through the APIs. Through this method, you can iterate job executions across all matching sources of a parameterized dataset.
In the example below, the runParameters override has been specified for the country
. In this case, the value "Germany" is inserted for the specified variable as part of the job execution.
{
"wrangledDataset": {"id": 33},
"runParameters": {
"overrides": {
"data": [{"key": "country", "value": "Germany"}]
}
}
}
The response contains a list of jobs which can be used to get a granular status of the JobGroup completion.
The jobGraph
indicates the dependency between each of the jobs.
{
"sessionId": "79276c31-c58c-4e79-ae5e-fed1a25ebca1",
"reason": "JobStarted",
"jobGraph": {
"vertices": [21, 22],
"edges": [{"source": 21, "target": 22}]
},
"id": 9,
"jobs": {"data": [{"id": 21}, {"id": 22}]}
}
When you create a new jobGroup through the APIs, the internal jobGroup identifier is returned in the response. Retain this identifier for future use. You can also acquire the jobGroup identifier from the application. In the Jobs page, the internal identifier for the jobGroup is the value in the left column.
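A common pattern is to retain the returned jobGroup id and poll the status endpoint until the job reaches a terminal state. The sketch below assumes "Complete", "Failed", and "Canceled" are the terminal status strings; `fetch_status` stands in for whatever HTTP call you use against /v4/jobGroups/{id}/status, which returns a bare status string.

```python
import time

# Assumed terminal statuses -- adjust to match your deployment.
TERMINAL_STATUSES = {"Complete", "Failed", "Canceled"}

def wait_for_job_group(fetch_status, job_group_id, interval=5.0, max_attempts=120):
    """Poll until the jobGroup status endpoint returns a terminal status.

    fetch_status: callable taking the jobGroup id and returning the status
    string (the endpoint returns e.g. "Complete").
    """
    for _ in range(max_attempts):
        status = fetch_status(job_group_id)
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(interval)
    raise TimeoutError(f"jobGroup {job_group_id} did not finish in time")

# Usage with a stub that simulates a job finishing on the third poll:
responses = iter(["Pending", "InProgress", "Complete"])
result = wait_for_job_group(lambda _id: next(responses), 9, interval=0)
```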
ref: runJobGroup
x-execution-id | string Example: f9cab740-50b7-11e9-ba15-93c82271a00b Optional header to safely retry the request without accidentally performing the same operation twice. If a JobGroup with the same |
required | object The identifier for the recipe you would like to run. |
forceCacheUpdate | boolean Setting this flag to true will invalidate any cached datasources. This only applies to SQL datasets. |
ignoreRecipeErrors | boolean Default: false Setting this flag to true will mean the job will run even if there are upstream recipe errors. Setting it to false will cause the request to fail on recipe errors. |
testMode | boolean Setting this flag to true will not run the job but just perform some validations. |
object (runParameterOverrides) Allows overriding parameters that are defined in the flow, e.g., on datasets or outputs. | |
workspaceId | integer Internal. Does not need to be specified |
object Allows overriding execution settings that are set on the output object. | |
ranfrom | string Enum: "ui" "schedule" "api" Where the job was executed from. Does not need to be specified when using the API.
|
{
  "wrangledDataset": {
    "id": 7
  }
}
{- "sessionId": "79276c31-c58c-4e79-ae5e-fed1a25ebca1",
- "reason": "JobStarted",
- "jobGraph": {
- "vertices": [
- 21,
- 22
], - "edges": [
- {
- "source": 21,
- "target": 22
}
]
}, - "id": 9,
- "jobs": {
- "data": [
- {
- "id": 21
}, - {
- "id": 22
}
]
}
}
Deprecated. Use listJobLibrary instead.
ref: listJobGroups
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the |
filter | string Example: filter=my-object Value for filtering objects. See |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
flowNodeId | integer |
ranfor | string Default: "recipe,plan" filter jobs based on their type |
{- "data": [
- {
- "name": "string",
- "description": "string",
- "ranfrom": "ui",
- "ranfor": "recipe",
- "status": "Complete",
- "profilingEnabled": true,
- "runParameterReferenceDate": "2019-08-24T14:15:22Z",
- "snapshot": {
- "id": 1
}, - "wrangledDataset": {
- "id": 1
}, - "flowrun": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
], - "count": 1
}
Cancel the execution of a running Designer Cloud Powered by Trifacta jobGroup.
ℹ️ NOTE: If the job has completed, this endpoint does nothing.
ref: cancelJobGroup
id required | integer |
{ }
{
  "jobIds": [
    1
  ],
  "jobgroupId": 1
}
Deprecated. Use countJobLibrary instead.
ref: countJobGroups
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the |
filter | string Example: filter=my-object Value for filtering objects. See |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
flowNodeId | integer |
ranfor | string Default: "recipe,plan" filter jobs based on their type |
{
  "count": 1
}
Get the specified jobGroup.
A job group is a job that is executed from a specific node in a flow. The job group may contain:
It is possible to only get the current status for a jobGroup:
/v4/jobGroups/{id}/status
In that case, the response status would simply be a string:
"Complete"
If you wish to also get the related jobs and wrangledDataset, you can use embed
. See embedding resources for more information.
/v4/jobGroups/{id}?embed=jobs,wrangledDataset
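The status and embed variants above can be combined into a small URL helper. This is a sketch: the comma separator for embed and the semicolon separator for fields follow the examples in this section.

```python
def job_group_url(job_group_id, status_only=False, embed=None, fields=None):
    """Build a getJobGroup URL, optionally for the /status sub-resource."""
    base = f"/v4/jobGroups/{job_group_id}"
    if status_only:
        return f"{base}/status"
    params = []
    if embed:
        params.append("embed=" + ",".join(embed))    # comma-separated list
    if fields:
        params.append("fields=" + ";".join(fields))  # semicolon-separated list
    return base + ("?" + "&".join(params) if params else "")

url = job_group_url(9, embed=["jobs", "wrangledDataset"])
```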
ref: getJobGroup
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. |
{- "name": "string",
- "description": "string",
- "ranfrom": "ui",
- "ranfor": "recipe",
- "status": "Complete",
- "profilingEnabled": true,
- "runParameterReferenceDate": "2019-08-24T14:15:22Z",
- "snapshot": {
- "id": 1
}, - "wrangledDataset": {
- "id": 1
}, - "flowrun": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
id required | integer |
{- "profilerTypeCheckHistograms": {
- "property1": [
- {
- "key": "VALID",
- "count": 1
}
], - "property2": [
- {
- "key": "VALID",
- "count": 1
}
]
}, - "profilerValidValueHistograms": {
- "property1": [
- {
- "min": 0,
- "max": 0,
- "roundMin": 0,
- "roundMax": 0,
- "buckets": [
- {
- "pos": 1,
- "b": 1
}
], - "quartiles": {
- "q1": 0,
- "q2": 0,
- "q3": 0
}
}
], - "property2": [
- {
- "min": 0,
- "max": 0,
- "roundMin": 0,
- "roundMax": 0,
- "buckets": [
- {
- "pos": 1,
- "b": 1
}
], - "quartiles": {
- "q1": 0,
- "q2": 0,
- "q3": 0
}
}
]
}, - "profilerRules": {
- "property1": [
- {
- "id": 1,
- "type": "string",
- "comment": "string",
- "description": "string",
- "status": "pass",
- "updatedAt": "string",
- "passCount": 1,
- "failCount": 1,
- "totalCount": 1
}
], - "property2": [
- {
- "id": 1,
- "type": "string",
- "comment": "string",
- "description": "string",
- "status": "pass",
- "updatedAt": "string",
- "passCount": 1,
- "failCount": 1,
- "totalCount": 1
}
]
}, - "columnTypes": {
- "property1": [
- "string"
], - "property2": [
- "string"
]
}
}
id required | integer |
{- "profilerTypeCheckHistograms": {
- "property1": [
- {
- "key": "VALID",
- "count": 1
}
], - "property2": [
- {
- "key": "VALID",
- "count": 1
}
]
}, - "profilerValidValueHistograms": {
- "property1": [
- {
- "min": 0,
- "max": 0,
- "roundMin": 0,
- "roundMax": 0,
- "buckets": [
- {
- "pos": 1,
- "b": 1
}
], - "quartiles": {
- "q1": 0,
- "q2": 0,
- "q3": 0
}
}
], - "property2": [
- {
- "min": 0,
- "max": 0,
- "roundMin": 0,
- "roundMax": 0,
- "buckets": [
- {
- "pos": 1,
- "b": 1
}
], - "quartiles": {
- "q1": 0,
- "q2": 0,
- "q3": 0
}
}
]
}, - "profilerRules": {
- "property1": [
- {
- "id": 1,
- "type": "string",
- "comment": "string",
- "description": "string",
- "status": "pass",
- "updatedAt": "string",
- "passCount": 1,
- "failCount": 1,
- "totalCount": 1
}
], - "property2": [
- {
- "id": 1,
- "type": "string",
- "comment": "string",
- "description": "string",
- "status": "pass",
- "updatedAt": "string",
- "passCount": 1,
- "failCount": 1,
- "totalCount": 1
}
]
}, - "columnTypes": {
- "property1": [
- "string"
], - "property2": [
- "string"
]
}
}
Get JobGroup Status.
ref: getJobGroupStatus
id required | integer |
"Complete"
Get the job group inputs. Returns the list of datasets used when running this jobGroup.
ref: getJobGroupInputs
id required | integer |
{- "data": [
- {
- "name": "string",
- "inputs": [
- {
- "vendor": "string",
- "databaseConnectString": "string",
- "relationalPath": [
- "string"
], - "table": "string",
- "action": "string",
- "query": [
- "string"
]
}
]
}
]
}
Get the job group outputs. Returns the list of tables and file paths used as output.
ref: getJobGroupOutputs
id required | integer |
{- "files": [
- {
- "uri": "string",
- "fileType": "FILE",
- "isPrimaryOutput": true
}
], - "tables": [
- {
- "vendor": "string",
- "databaseConnectString": "string",
- "relationalPath": [
- "string"
], - "table": "string",
- "action": "string",
- "query": [
- "string"
]
}
]
}
Get the list of all jobGroups accessible to the authenticated user.
Note that it is possible to embed other resources while fetching the jobGroup list. e.g.:
/v4/jobLibrary/?embed=jobs,wrangledDataset
See embedding resources for more information.
It is possible to filter jobGroups based on their status.
Here is how to get all jobGroups with a Failed
status:
/v4/jobLibrary?status=Failed
It is possible to filter only scheduled jobGroups using the following request:
/v4/jobLibrary?ranfrom=schedule
It is also possible to filter the jobGroups based on date. Here is an example:
/v4/jobLibrary?dateFilter[createdAt][gte]=1572994800000&dateFilter[updatedAt][lt]=1581375600000
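The dateFilter bounds are Unix timestamps in milliseconds (as the 13-digit values in the example suggest), so they can be derived from ordinary datetimes. A sketch:

```python
from datetime import datetime, timezone

def to_epoch_ms(dt):
    """Convert an aware datetime to the epoch-millisecond form dateFilter expects."""
    return int(dt.timestamp() * 1000)

start = to_epoch_ms(datetime(2019, 11, 6, tzinfo=timezone.utc))
end = to_epoch_ms(datetime(2020, 2, 11, tzinfo=timezone.utc))
url = f"/v4/jobLibrary?dateFilter[createdAt][gte]={start}&dateFilter[updatedAt][lt]={end}"
```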
ref: listJobLibrary
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the |
filter | string Example: filter=my-object Value for filtering objects. See |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
dateFilter | object for filtering jobgroups by start and end date |
ranfrom | string filter jobs based on how they were run |
status | string filter jobs based on their status |
ranfor | string Default: "recipe,plan" filter jobs based on their type |
runBy | string Filter jobs by the users who have run them. One of ['all', 'currentUser'] |
{- "data": [
- {
- "name": "string",
- "description": "string",
- "ranfrom": "ui",
- "ranfor": "recipe",
- "status": "Complete",
- "profilingEnabled": true,
- "runParameterReferenceDate": "2019-08-24T14:15:22Z",
- "snapshot": {
- "id": 1
}, - "wrangledDataset": {
- "id": 1
}, - "flowrun": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
], - "count": 1
}
Count Designer Cloud Powered by Trifacta jobs with special filter capabilities. See listJobLibrary for some examples.
ref: countJobLibrary
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with `limit`. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with `filter`. |
sort | string Example: sort=-createdAt Defines the sort order for returned objects. |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the `filter` value against. |
filter | string Example: filter=my-object Value for filtering objects. See `filterFields`. |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count property. |
fields | string Example: fields=id;name;description Semicolon-separated list of fields to return. |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
dateFilter | object Filter jobGroups by start and end date. |
ranfrom | string Filter jobs based on how they were run. |
status | string Filter jobs based on their status. |
ranfor | string Default: "recipe,plan" Filter jobs based on their type. |
runBy | string Filter jobs by the users who have run them. One of ['all', 'currentUser']. |
{- "count": 1
}
Get information about the batch jobs within a Designer Cloud Powered by Trifacta job.
ref: getJobsForJobGroup
id required | integer |
{- "data": [
- {
- "id": 1,
- "status": "Complete",
- "jobType": "wrangle",
- "sampleSize": 1,
- "percentComplete": 1,
- "jobGroup": {
- "id": 1
}, - "errorMessage": {
- "id": 1
}, - "lastHeartbeatAt": "2019-08-24T14:15:22Z",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "executionLanguage": "photon",
- "cpJobId": "string",
- "wranglescript": {
- "id": 1
}, - "emrcluster": {
- "id": 1
}
}
], - "count": 1
}
Get job group logs in a ZIP format.
ref: getJobGroupLogs
id required | integer |
maxFileSizeInBytes | integer Max file size of filtered log files in the support bundle; can only be set by admins |
Get list of publications for the specified jobGroup.
A publication is an export of job results from the platform after they have been initially generated.
id required | integer |
{- "data": [
- {
- "path": [
- "string"
], - "tableName": "string",
- "targetType": "string",
- "action": "create",
- "outputobject": {
- "id": 1
}, - "connection": {
- "id": "55"
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "parameters": {
- "property1": {
- "type": "string",
- "default": null
}, - "property2": {
- "type": "string",
- "default": null
}
}
}
], - "count": 1
}
For a specified jobGroup, this endpoint performs an ad-hoc publish of the results to the designated target. Target information is based on the specified connection.
Job results to be published are based on the specified jobGroup. You can specify:
Supported targets:
ref: publishJobGroup
id required | integer |
required | object Internal identifier of the connection to use to write the results. |
path required | Array of strings path to the location of the table/datasource. |
table required | string Name of table in the database to which to write the results. |
action required | string Enum: "create" "load" "createAndLoad" "truncateAndLoad" "dropAndLoad" "upsert" Type of writing action to perform with the results
|
inputFormat required | string Source format of the results. Supported values:
|
{- "connection": {
- "id": 1
}, - "path": [
- "default"
], - "table": "test_table",
- "action": "create",
- "inputFormat": "pqt"
}
{- "jobgroupId": 1,
- "reason": "Job started",
- "sessionId": "f9cab740-50b7-11e9-ba15-93c82271a00b",
- "connection": {
- "id": 1
}, - "path": [
- "string"
], - "table": "string",
- "action": "create",
- "inputFormat": "avro"
}
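Because the `action` enum governs potentially destructive writes (e.g. `dropAndLoad`), it can be worth validating a publish request body before sending it. A sketch (Python; the enum values are the ones documented above, the helper name and placeholder arguments are ours):

```python
# Write actions documented for publishJobGroup.
VALID_ACTIONS = {"create", "load", "createAndLoad", "truncateAndLoad", "dropAndLoad", "upsert"}

def publish_payload(connection_id, path, table, action, input_format):
    """Build a publishJobGroup request body, rejecting unknown write actions."""
    if action not in VALID_ACTIONS:
        raise ValueError(f"unknown publish action: {action}")
    return {
        "connection": {"id": connection_id},
        "path": list(path),
        "table": table,
        "action": action,
        "inputFormat": input_format,
    }

body = publish_payload(1, ["default"], "test_table", "create", "pqt")
```

The returned dict can then be serialized with `json.dumps` and POSTed with the `Content-type: application/json` header described in the overview.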
An object containing a list of scriptLines that can be reused across recipes.
Performs an import of a macro package.
ℹ️ NOTE: You cannot import a macro that was exported from a later version of the product.
✅ TIP: You can paste the response of the exported macro page as the request.
ℹ️ NOTE: Modification of the macro definition is not supported outside of the Designer Cloud Powered by Trifacta.
ref: importMacroPackage
type required | string Type of artifact. This value is always |
kind required | string This value is |
hash required | string Hash value used to verify the internal integrity of the macro definition. |
required | object |
required | object |
{- "type": "string",
- "kind": "string",
- "hash": "string",
- "data": {
- "name": "string",
- "description": "string",
- "signature": [
- {
- "name": "Store_Nbr",
- "type": "column"
}
], - "scriptlines": [
- {
- "hash": "string",
- "task": { }
}
]
}, - "metadata": {
- "lastMigration": "20191024143300",
- "trifactaVersion": "6.8.0+4.20191104073802.8b6217a",
- "exportedAt": "2019-08-24T14:15:22Z",
- "exportedBy": 1,
- "uuid": "6b27eee0-0034-11ea-a378-9dc0586de9fb",
- "edition": "Enterprise"
}
}
{- "id": 1,
- "name": "string",
- "description": "string",
- "createdBy": 1,
- "updatedBy": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "workspaceId": 1
}
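Since the importable package must carry the top-level keys shown in the example, a pre-flight check before POSTing can catch a mangled copy-paste of the exported response. A sketch (Python; key names are taken from the example package above, the helper name is ours):

```python
# Top-level keys of an exported macro package, per the example above.
REQUIRED_KEYS = {"type", "kind", "hash", "data", "metadata"}

def check_macro_package(pkg):
    """Verify an exported macro package has the required top-level structure."""
    missing = REQUIRED_KEYS - pkg.keys()
    if missing:
        raise ValueError(f"macro package is missing keys: {sorted(missing)}")
    if "scriptlines" not in pkg.get("data", {}):
        raise ValueError("macro package data has no scriptlines")
    return True
```

This only checks shape, not the integrity hash itself; the platform validates the hash on import.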
Performs a dry-run import of a macro package.
ℹ️ NOTE: You cannot import a macro that was exported from a later version of the product.
✅ TIP: You can paste the response of the exported macro page as the request.
ℹ️ NOTE: Modification of the macro definition is not supported outside of the Designer Cloud Powered by Trifacta.
id required | integer |
type required | string Type of artifact. This value is always |
kind required | string This value is |
hash required | string Hash value used to verify the internal integrity of the macro definition. |
required | object |
required | object |
{- "type": "string",
- "kind": "string",
- "hash": "string",
- "data": {
- "name": "string",
- "description": "string",
- "signature": [
- {
- "name": "Store_Nbr",
- "type": "column"
}
], - "scriptlines": [
- {
- "hash": "string",
- "task": { }
}
]
}, - "metadata": {
- "lastMigration": "20191024143300",
- "trifactaVersion": "6.8.0+4.20191104073802.8b6217a",
- "exportedAt": "2019-08-24T14:15:22Z",
- "exportedBy": 1,
- "uuid": "6b27eee0-0034-11ea-a378-9dc0586de9fb",
- "edition": "Enterprise"
}
}
{- "name": "string",
- "description": "string",
- "signature": [
- {
- "name": "Store_Nbr",
- "type": "column"
}
], - "scriptlines": [
- {
- "hash": "string",
- "task": { }
}
]
}
Retrieve a package containing the definition of the specified macro. Response body is the contents of the package, which is an importable version of the macro definition.
✅ TIP: The response body can be pasted as the request when you import the macro into a different environment. For more information, see Import Macro Package.
ℹ️ NOTE: Modification of the macro definition is not supported outside of the Designer Cloud Powered by Trifacta.
ref: getMacroPackage
id required | integer |
{- "type": "string",
- "kind": "string",
- "hash": "string",
- "data": {
- "name": "string",
- "description": "string",
- "signature": [
- {
- "name": "Store_Nbr",
- "type": "column"
}
], - "scriptlines": [
- {
- "hash": "string",
- "task": { }
}
]
}, - "metadata": {
- "lastMigration": "20191024143300",
- "trifactaVersion": "6.8.0+4.20191104073802.8b6217a",
- "exportedAt": "2019-08-24T14:15:22Z",
- "exportedBy": 1,
- "uuid": "6b27eee0-0034-11ea-a378-9dc0586de9fb",
- "edition": "Enterprise"
}
}
{ }
Create an OAuth 2.0 client
ℹ️ NOTE: Workspace admin role is required to use this endpoint.
name required | string |
type required | string |
clientId required | string |
clientSecret required | string |
authorizationURL required | string |
tokenUrl required | string |
scopes required | string |
accessTokenExpiresIn | integer |
refreshTokenExpiresIn | integer |
{- "name": "string",
- "type": "string",
- "clientId": "string",
- "clientSecret": "string",
- "authorizationURL": "string",
- "tokenUrl": "string",
- "scopes": "string",
- "accessTokenExpiresIn": 1,
- "refreshTokenExpiresIn": 1
}
{- "oauth2ClientInfo": {
- "oauth2ClientId": 1,
- "name": "string",
- "type": "string",
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}
}
}
Get OAuth 2.0 clients
ℹ️ NOTE: Workspace admin role is required to use this endpoint.
ref: oauth2ClientModels
type required | string |
{- "oauth2ClientModels": [
- "string"
]
}
An outputObject is a definition of one or more types of outputs and how they are generated.
If an outputObject already exists for the recipe (flowNodeId) to which you are posting, you must either modify the existing object or delete it before posting your new object.
ref: createOutputObject
execution required | string Enum: "photon" "spark" "emrSpark" "databricksSpark" Execution language. Indicate on which engine the job was executed. Can be null/missing for scheduled jobs that fail during the validation phase.
|
profiler required | boolean Indicate whether profiling information should be generated for the jobGroup. |
isAdhoc | |
ignoreRecipeErrors | |
flowNodeId | integer FlowNode the outputObject should be attached to. (This is also the id of the wrangledDataset). |
Array of objects (writeSettingCreateRequest) [ items ] Optionally you can include writeSettings while creating the outputObject | |
Array of objects (sqlScriptCreateRequest) [ items ] Optionally you can include sqlScripts while creating the outputObject | |
Array of objects (publicationCreateRequest) [ items ] Optionally you can include publications while creating the outputObject | |
Array of objects (outputObjectSparkOptionUpdateRequest) [ items ] | |
object (outputObjectSchemaDriftOptionsUpdateRequest) |
{- "execution": "photon",
- "profiler": true,
- "isAdhoc": true,
- "ignoreRecipeErrors": true,
- "flowNodeId": 1,
- "writeSettings": [
- {
- "path": "string",
- "action": "create",
- "format": "csv",
- "compression": "none",
- "header": true,
- "asSingleFile": true,
- "delim": ",",
- "hasQuotes": true,
- "includeMismatches": true,
- "outputObjectId": 1,
- "runParameters": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "variable": {
- "value": "string"
}
}
}
], - "connectionId": "25"
}
], - "sqlScripts": [
- {
- "sqlScript": "string",
- "type": "string",
- "vendor": "string",
- "outputObjectId": "21",
- "connectionId": "55",
- "runParameters": [
- {
- "type": "sql",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "variable": {
- "value": "string"
}
}
}
]
}
], - "publications": [
- {
- "path": [
- "string"
], - "tableName": "string",
- "targetType": "string",
- "action": "create",
- "outputObjectId": 1,
- "connectionId": "55",
- "runParameters": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "variable": {
- "value": "string"
}
}
}
], - "parameters": {
- "property1": {
- "type": "string",
- "default": null
}, - "property2": {
- "type": "string",
- "default": null
}
}
}
], - "outputObjectSparkOptions": [
- {
- "key": "string",
- "value": "string"
}
], - "outputObjectSchemaDriftOptions": {
- "schemaValidation": "true",
- "stopJobOnErrorsFound": "false"
}
}
{- "execution": "photon",
- "profiler": true,
- "isAdhoc": true,
- "flownode": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "name": "string",
- "description": "string"
}
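Only a few of the fields above are required; most of the example request is optional nesting. A minimal request body for a single CSV writeSetting can be assembled like this (Python sketch; the helper name, path, and connection id are placeholders, and the writeSettings field names follow the example request above):

```python
def csv_output_object(flow_node_id, path, connection_id=None):
    """Build a minimal createOutputObject body with one CSV writeSetting."""
    write_setting = {
        "path": path,
        "action": "create",
        "format": "csv",
        "header": True,
        "asSingleFile": True,
    }
    if connection_id is not None:
        write_setting["connectionId"] = connection_id
    return {
        "execution": "photon",   # required: engine to run on
        "profiler": False,       # required: whether to generate a profile
        "flowNodeId": flow_node_id,
        "writeSettings": [write_setting],
    }

body = csv_output_object(7, "/out/result.csv", connection_id="25")
```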
List existing output objects
ref: listOutputObjects
fields | string Example: fields=id;name;description Semicolon-separated list of fields to return. |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with `limit`. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with `filter`. |
sort | string Example: sort=-createdAt Defines the sort order for returned objects. |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the `filter` value against. |
filter | string Example: filter=my-object Value for filtering objects. See `filterFields`. |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count property. |
{- "data": [
- {
- "execution": "photon",
- "profiler": true,
- "isAdhoc": true,
- "flownode": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "name": "string",
- "description": "string"
}
], - "count": 1
}
Generate a Python script for the input recipe to the output object. EXPERIMENTAL FEATURE: This feature is intended for demonstration purposes only. In a future release, it can be modified or removed without warning. This endpoint should not be used in a production environment.
id required | integer |
orderedColumns required | string Ordered Column Names for the input dataset |
object (cdfToPythonOverrides) |
{- "orderedColumns": "string",
- "overrides": {
- "execution": "photon",
- "profiler": true
}
}
{- "pythonScript": "string"
}
List all the outputs of a Flow.
ref: getFlowOutputs
id required | integer |
{- "data": [
- {
- "execution": "photon",
- "profiler": true,
- "isAdhoc": true,
- "flownode": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "name": "string",
- "description": "string"
}
], - "count": 1
}
Count existing output objects
ref: countOutputObjects
fields | string Example: fields=id;name;description Semicolon-separated list of fields to return. |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with `limit`. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with `filter`. |
sort | string Example: sort=-createdAt Defines the sort order for returned objects. |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the `filter` value against. |
filter | string Example: filter=my-object Value for filtering objects. See `filterFields`. |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count property. |
{- "count": 1
}
Get the specified outputObject.
Note that it is possible to include writeSettings and publications that are linked to this outputObject. See embedding resources for more information.
/v4/outputObjects/{id}?embed=writeSettings,publications
ref: getOutputObject
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields to return. |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. |
{- "execution": "photon",
- "profiler": true,
- "isAdhoc": true,
- "flownode": {
- "id": 1
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "name": "string",
- "description": "string"
}
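Combining `embed`, `fields`, and the other read parameters into a URL like the one shown above is mostly string bookkeeping. A sketch (Python, standard library only; the parameter names are the ones documented here, the helper name and base URL are placeholders):

```python
from urllib.parse import urlencode

def read_url(base, resource, resource_id=None, embed=None, fields=None, **params):
    """Assemble a v4 read URL with optional embed/fields query parameters."""
    url = f"{base}/v4/{resource}"
    if resource_id is not None:
        url += f"/{resource_id}"
    if embed:
        params["embed"] = ",".join(embed)    # comma-separated associations
    if fields:
        params["fields"] = ";".join(fields)  # semicolon-separated fields
    return url + ("?" + urlencode(params) if params else "")

read_url("https://example.com", "outputObjects", 42,
         embed=["writeSettings", "publications"])
```

`urlencode` percent-encodes the separators (`,` becomes `%2C`, `;` becomes `%3B`), which is equivalent to the literal forms shown in the examples.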
Update an existing output object
ref: updateOutputObject
id required | integer |
execution | string Enum: "photon" "spark" "emrSpark" "databricksSpark" Execution language. Indicate on which engine the job was executed. Can be null/missing for scheduled jobs that fail during the validation phase.
|
profiler | boolean Indicate whether profiling information should be generated for the jobGroup. |
ignoreRecipeErrors | |
Array of objects (writeSettingCreateRequest) [ items ] | |
Array of objects (sqlScriptCreateRequest) [ items ] | |
Array of objects (publicationCreateRequest) [ items ] | |
Array of objects (outputObjectSparkOptionUpdateRequest) [ items ] | |
object (outputObjectSchemaDriftOptionsUpdateRequest) | |
name | string Name of output as it appears in the flow view |
description | string Description of output |
{- "execution": "photon",
- "profiler": true,
- "ignoreRecipeErrors": true,
- "writeSettings": [
- {
- "path": "string",
- "action": "create",
- "format": "csv",
- "compression": "none",
- "header": true,
- "asSingleFile": true,
- "delim": ",",
- "hasQuotes": true,
- "includeMismatches": true,
- "outputObjectId": 1,
- "runParameters": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "variable": {
- "value": "string"
}
}
}
], - "connectionId": "25"
}
], - "sqlScripts": [
- {
- "sqlScript": "string",
- "type": "string",
- "vendor": "string",
- "outputObjectId": "21",
- "connectionId": "55",
- "runParameters": [
- {
- "type": "sql",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "variable": {
- "value": "string"
}
}
}
]
}
], - "publications": [
- {
- "path": [
- "string"
], - "tableName": "string",
- "targetType": "string",
- "action": "create",
- "outputObjectId": 1,
- "connectionId": "55",
- "runParameters": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "variable": {
- "value": "string"
}
}
}
], - "parameters": {
- "property1": {
- "type": "string",
- "default": null
}, - "property2": {
- "type": "string",
- "default": null
}
}
}
], - "outputObjectSparkOptions": [
- {
- "key": "string",
- "value": "string"
}
], - "outputObjectSchemaDriftOptions": {
- "schemaValidation": "true",
- "stopJobOnErrorsFound": "false"
}, - "name": "string",
- "description": "string"
}
{- "id": 1,
- "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Patch an existing output object
ref: patchOutputObject
id required | integer |
execution | string Enum: "photon" "spark" "emrSpark" "databricksSpark" Execution language. Indicate on which engine the job was executed. Can be null/missing for scheduled jobs that fail during the validation phase.
|
profiler | boolean Indicate whether profiling information should be generated for the jobGroup. |
ignoreRecipeErrors | |
Array of objects (writeSettingCreateRequest) [ items ] | |
Array of objects (sqlScriptCreateRequest) [ items ] | |
Array of objects (publicationCreateRequest) [ items ] | |
Array of objects (outputObjectSparkOptionUpdateRequest) [ items ] | |
object (outputObjectSchemaDriftOptionsUpdateRequest) | |
name | string Name of output as it appears in the flow view |
description | string Description of output |
{- "execution": "photon",
- "profiler": true,
- "ignoreRecipeErrors": true,
- "writeSettings": [
- {
- "path": "string",
- "action": "create",
- "format": "csv",
- "compression": "none",
- "header": true,
- "asSingleFile": true,
- "delim": ",",
- "hasQuotes": true,
- "includeMismatches": true,
- "outputObjectId": 1,
- "runParameters": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "variable": {
- "value": "string"
}
}
}
], - "connectionId": "25"
}
], - "sqlScripts": [
- {
- "sqlScript": "string",
- "type": "string",
- "vendor": "string",
- "outputObjectId": "21",
- "connectionId": "55",
- "runParameters": [
- {
- "type": "sql",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "variable": {
- "value": "string"
}
}
}
]
}
], - "publications": [
- {
- "path": [
- "string"
], - "tableName": "string",
- "targetType": "string",
- "action": "create",
- "outputObjectId": 1,
- "connectionId": "55",
- "runParameters": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "variable": {
- "value": "string"
}
}
}
], - "parameters": {
- "property1": {
- "type": "string",
- "default": null
}, - "property2": {
- "type": "string",
- "default": null
}
}
}
], - "outputObjectSparkOptions": [
- {
- "key": "string",
- "value": "string"
}
], - "outputObjectSchemaDriftOptions": {
- "schemaValidation": "true",
- "stopJobOnErrorsFound": "false"
}, - "name": "string",
- "description": "string"
}
{- "id": 1,
- "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Delete an existing output object
ref: deleteOutputObject
id required | integer |
List all the inputs that are linked to this output object. Also includes data sources that are present in referenced flows.
id required | integer |
{- "data": [
- {
- "dynamicPath": "string",
- "isDynamic": false,
- "isConverted": true,
- "disableTypeInference": true,
- "parsingScript": {
- "id": 1
}, - "storageLocation": {
- "id": 1
}, - "connection": {
- "id": 1
}, - "runParameters": {
- "data": [
- {
- "type": "path",
- "overrideKey": "myVar",
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "dateRange": {
- "timezone": "string",
- "formats": [
- "string"
], - "last": {
- "unit": "years",
- "number": 1,
- "dow": 1
}
}
}
}
]
}, - "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "workspace": {
- "id": 1
}, - "name": "My Dataset",
- "description": "string"
}
], - "count": 1
}
Get information about the currently logged-in user.
ref: getCurrentPerson
fields | string Example: fields=id;name;description Semicolon-separated list of fields to return. |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
uuid | string |
workspaceId | string |
includePrivileges | boolean Include the user's maximal privileges and authorization roles |
{- "email": "joe@example.com",
- "isAdmin": true,
- "isDisabled": false,
- "state": "active",
- "id": 1,
- "outputHomeDir": "/home-dir/queryResults/joe@example.com",
- "uploadDir": "/uploads",
- "lastLoginTime": "2019-08-24T14:15:22Z",
- "lastStateChange": "2019-08-24T14:15:22Z",
- "maximalPrivileges": [
- {
- "operations": [
- "read"
], - "resourceType": "flow"
}
]
}
Get an existing person
ref: getPerson
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields to return. |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
uuid | string |
workspaceId | string |
includePrivileges | boolean Include the user's maximal privileges and authorization roles |
{- "email": "joe@example.com",
- "isAdmin": true,
- "isDisabled": false,
- "state": "active",
- "id": 1,
- "outputHomeDir": "/home-dir/queryResults/joe@example.com",
- "uploadDir": "/uploads",
- "lastLoginTime": "2019-08-24T14:15:22Z",
- "lastStateChange": "2019-08-24T14:15:22Z",
- "maximalPrivileges": [
- {
- "operations": [
- "read"
], - "resourceType": "flow"
}
]
}
Update an existing person
ref: updatePerson
id required | integer |
string <email> | |
isAdmin | boolean If true, the user account is an administrator account. This property can only be changed by an admin account. |
isDisabled | boolean If true, the account is disabled. This property can only be changed by an admin account. |
state | string Enum: "active" "hidden" Current state of the user account. This property can only be changed by an admin account.
|
name | string name of the user |
outputHomeDir | string Home directory where the user's generated results are written |
uploadDir | string Path on backend datastore where files uploaded from the user's desktop are stored for use as imported datasets. |
Array of authorizationRoleWithName (object) or authorizationRoleWithTag (object) (authorizationRole) [ items ] List of the roles that this subject has been assigned |
{- "email": "joe@example.com",
- "isAdmin": true,
- "isDisabled": false,
- "state": "active",
- "name": "Joe Guy",
- "outputHomeDir": "/home-dir/queryResults/joe@example.com",
- "uploadDir": "/uploads",
- "authorizationRoles": [
- {
- "policyId": 1,
- "workspaceId": 1,
- "resourceOperations": [
- {
- "operations": [
- "read"
], - "resourceType": "flow",
- "policyTag": "flow_author"
}
], - "updatedAt": "2019-08-24T14:15:22Z",
- "createdAt": "2019-08-24T14:15:22Z",
- "groupCount": 1,
- "userCount": 1,
- "nameLocked": true,
- "privilegeLocked": true,
- "name": "string"
}
]
}
{- "id": 1,
- "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Patch an existing person
ref: patchPerson
id required | integer |
string <email> | |
isAdmin | boolean If true, the user account is an administrator account. This property can only be changed by an admin account. |
isDisabled | boolean If true, the account is disabled. This property can only be changed by an admin account. |
state | string Enum: "active" "hidden" Current state of the user account. This property can only be changed by an admin account.
|
name | string name of the user |
outputHomeDir | string Home directory where the user's generated results are written |
uploadDir | string Path on backend datastore where files uploaded from the user's desktop are stored for use as imported datasets. |
Array of authorizationRoleWithName (object) or authorizationRoleWithTag (object) (authorizationRole) [ items ] List of the roles that this subject has been assigned |
{- "email": "joe@example.com",
- "isAdmin": true,
- "isDisabled": false,
- "state": "active",
- "name": "Joe Guy",
- "outputHomeDir": "/home-dir/queryResults/joe@example.com",
- "uploadDir": "/uploads",
- "authorizationRoles": [
- {
- "policyId": 1,
- "workspaceId": 1,
- "resourceOperations": [
- {
- "operations": [
- "read"
], - "resourceType": "flow",
- "policyTag": "flow_author"
}
], - "updatedAt": "2019-08-24T14:15:22Z",
- "createdAt": "2019-08-24T14:15:22Z",
- "groupCount": 1,
- "userCount": 1,
- "nameLocked": true,
- "privilegeLocked": true,
- "name": "string"
}
]
}
{- "id": 1,
- "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
List existing people
ref: listPerson
fields | string Example: fields=id;name;description Semicolon-separated list of fields to return. |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with `limit`. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with `filter`. |
sort | string Example: sort=-createdAt Defines the sort order for returned objects. |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the `filter` value against. |
filter | string Example: filter=my-object Value for filtering objects. See `filterFields`. |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count property. |
string | |
workspaceId | string Filter the users in a specific workspace. If not set, list users in the current workspace. It is ignored if the user is not an admin user. |
state | string |
isDisabled | string |
includePrivileges | boolean Include the user's maximal privileges and authorization roles |
noLimit | string If set to |
{- "data": [
- {
- "email": "joe@example.com",
- "isAdmin": true,
- "isDisabled": false,
- "state": "active",
- "id": 1,
- "outputHomeDir": "/home-dir/queryResults/joe@example.com",
- "uploadDir": "/uploads",
- "lastLoginTime": "2019-08-24T14:15:22Z",
- "lastStateChange": "2019-08-24T14:15:22Z",
- "maximalPrivileges": [
- {
- "operations": [
- "read"
], - "resourceType": "flow"
}
]
}
], - "count": 1
}
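The `limit`/`offset` pair used by the list endpoints supports straightforward pagination. A sketch of walking any v4 list endpoint page by page (Python; `fetch_page` stands in for whatever HTTP call you use and must return the parsed JSON of one page, and `fake_fetch` below exists only to illustrate):

```python
def iter_all(fetch_page, limit=25):
    """Yield every object from a paginated v4 list endpoint."""
    offset = 0
    while True:
        page = fetch_page(limit=limit, offset=offset)
        data = page.get("data", [])
        yield from data
        if len(data) < limit:  # a short page means we've reached the end
            return
        offset += limit

# fake two-page endpoint, for illustration only
def fake_fetch(limit, offset):
    people = [{"id": i} for i in range(1, 8)]
    return {"data": people[offset:offset + limit]}

ids = [p["id"] for p in iter_all(fake_fetch, limit=5)]  # → [1, 2, 3, 4, 5, 6, 7]
```

Note this termination test assumes the endpoint returns fewer than `limit` objects only on the final page; alternatively, request `includeCount=true` and stop once `offset` reaches the reported count.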
Create a new person
ref: createPerson
email required | string <email> |
accept required | string This property must be set to "accept" to create the user. |
isAdmin | boolean If true, the user account is an administrator account. This property can only be changed by an admin account. |
isDisabled | boolean If true, the account is disabled. This property can only be changed by an admin account. |
name | string name of the user |
password | string User password |
outputHomeDir | string Home directory where the user's generated results are written |
uploadDir | string Path on backend datastore where files uploaded from the user's desktop are stored for use as imported datasets. |
{- "email": "joe@example.com",
- "isAdmin": true,
- "isDisabled": false,
- "name": "Joe Guy",
- "accept": "accept",
- "password": "string",
- "outputHomeDir": "/home-dir/queryResults/joe@example.com",
- "uploadDir": "/uploads"
}
{- "email": "joe@example.com",
- "isAdmin": true,
- "isDisabled": false,
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Count existing people
ref: countPerson
fields | string Example: fields=id;name;description Semicolon-separated list of fields to return. |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with `limit`. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with `filter`. |
sort | string Example: sort=-createdAt Defines the sort order for returned objects. |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the `filter` value against. |
filter | string Example: filter=my-object Value for filtering objects. See `filterFields`. |
includeCount | boolean If includeCount is true, the response includes the total number of objects as a count property. |
isDisabled | string |
state | string |
{
  "count": 1
}
Update the current user's password
ref: updatePassword
oldPassword required | string Old password |
newPassword required | string New password |
{
  "oldPassword": "string",
  "newPassword": "string"
}
Request to reset a user's password.
ℹ️ NOTE: Admin role is required to use this endpoint.
ℹ️ NOTE: This endpoint does not generate an email or perform the reset. You must use the returned reset code to build a reset URL and send it separately to the specific user. The URL must have the following format:
http://example.com:3005/password-reset?email=<email>&code=<AccountResetCode>
URL element | Example value | Description |
---|---|---|
email | joe@example.com | User ID (email address) of the user whose password is to be reset |
AccountResetCode | CD44232791 | Password reset code |
ref: passwordResetRequest
accountId | integer Internal identifier of the user whose password should be reset |
{
  "accountId": 1
}
{
  "code": "string",
  "email": "user@example.com"
}
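Given the `email` and `code` fields of the response, the reset URL can be assembled in the documented format. A small sketch, using the example base URL from the note above:

```python
from urllib.parse import urlencode


def build_reset_url(base, email, code):
    """Assemble the password-reset URL in the documented format."""
    query = urlencode({"email": email, "code": code})
    return base + "/password-reset?" + query


url = build_reset_url("http://example.com:3005", "joe@example.com", "CD44232791")
```

`urlencode` percent-encodes the `@` in the email address, which keeps the query string valid.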
Create a new plan
ref: createPlan
name required | string Display name of the flow. |
description | string User-friendly description for the flow. |
originalPlanId | integer unique identifier for this object. |
{
  "name": "string",
  "description": "string",
  "originalPlanId": 1
}
{
  "id": 1,
  "name": "string",
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "creator": { "id": 1 },
  "updater": { "id": 1 },
  "workspace": { "id": 1 },
  "snapshotted": true,
  "originalPlanId": 1,
  "description": "string",
  "planSnapshotRunCount": 1,
  "notificationsEnabled": true,
  "latestPlanSnapshot": { },
  "latestPlanSnapshotRun": { }
}
List existing plans
ref: listPlans
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with limit. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order comma-separated list of fields to match the filter value against |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If true, the total number of objects is included as a count property in the response |
includeAssociatedPeople | boolean If true, the returned plans will include a list of people with access. |
ownershipFilter | string Filter plans by ownership. Valid values are 'all', 'shared', and 'owned'. |
{
  "data": [
    {
      "id": 1,
      "name": "string",
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": { "id": 1 },
      "updater": { "id": 1 },
      "workspace": { "id": 1 },
      "snapshotted": true,
      "originalPlanId": 1,
      "description": "string",
      "planSnapshotRunCount": 1,
      "notificationsEnabled": true,
      "latestPlanSnapshot": { },
      "latestPlanSnapshotRun": { }
    }
  ],
  "count": 1
}
Run the plan. A new snapshot will be created if required.
If some flows or outputs referenced by the plan tasks have been deleted, a MissingFlowReferences validation status is returned.
If the plan is valid, it will be queued for execution.
This endpoint returns a planSnapshotRunId that can be used to track the plan execution status using getPlanSnapshotRun.
ref: runPlan
id required | integer |
x-execution-id | string Example: f9cab740-50b7-11e9-ba15-93c82271a00b Optional header to safely retry the request without accidentally performing the same operation twice. |
planNodeOverrides | Array of objects (planNodeOverride) [ items ] Collection of run parameter overrides that should be applied to flow run parameters of the respective plan node. |
{
  "planNodeOverrides": [
    {
      "handle": "string",
      "overrideKey": "string",
      "value": "string"
    }
  ]
}
{
  "validationStatus": "Valid",
  "planSnapshotRunId": 1
}
Get a list of users with whom the plan is shared.
ref: getPlanPermissions
id required | integer |
{
  "data": [
    {
      "id": 1,
      "email": "joe@example.com",
      "name": "Joe Guy"
    }
  ]
}
Import the plan and associated flows from the given package. A ZIP file as exported by the export plan endpoint is accepted.
Before you import, you can perform a dry run to check for errors. See Import plan package - dry run.
This endpoint accepts a multipart/form-data content type.
Here is how to send the ZIP package using curl:
curl -X POST http://example.com:3005/v4/plans/package \
-H 'authorization: Bearer <api-token>' \
-H 'content-type: multipart/form-data' \
-F 'data=@path/to/plan-package.zip'
The response lists the objects that have been created.
ref: importPlanPackage
folderId | integer |
fromUI | boolean If true, will return the list of imported environment parameters for confirmation if any are referenced in the plan. |
packageContents required | object (importPlanPackageRequestZip) An exported plan zip file. |
environmentParameterMapping | Array of environmentParameterMappingToExistingEnvParam (object) or environmentParameterMappingToManualValue (object) (environmentParameterMapping) [ items ] |
connectionIdMapping | Array of objects (connectionIdMapping) [ items ] |
{
  "packageContents": { },
  "environmentParameterMapping": [
    {
      "overrideKey": "myVar",
      "mappedOverrideKey": "myVar"
    }
  ],
  "connectionIdMapping": [
    {
      "connectionUuid": "string",
      "mappedConnectionUuid": "string"
    }
  ]
}
{
  "flowPackages": [
    {
      "deletedObjects": { },
      "createdObjectMapping": { },
      "importRuleChanges": {
        "object": [
          { }
        ],
        "value": [
          { }
        ]
      },
      "primaryFlowIds": [
        1
      ],
      "flows": [
        {
          "name": "string",
          "description": "string",
          "folder": { "id": 1 },
          "id": 1,
          "defaultOutputDir": "string",
          "createdAt": "2019-08-24T14:15:22Z",
          "updatedAt": "2019-08-24T14:15:22Z",
          "creator": { "id": 1 },
          "updater": { "id": 1 },
          "settings": {
            "optimize": "enabled",
            "optimizers": { "columnPruning": "enabled" }
          },
          "workspace": { "id": 1 },
          "flowState": {
            "isOpened": true,
            "flow": { "id": 1 },
            "person": { "id": 1 },
            "zoom": 0,
            "offsetX": 0,
            "offsetY": 0
          }
        }
      ],
      "datasources": [
        {
          "dynamicPath": "string",
          "isDynamic": false,
          "isConverted": true,
          "disableTypeInference": true,
          "parsingScript": { "id": 1 },
          "storageLocation": { "id": 1 },
          "connection": { "id": 1 },
          "runParameters": {
            "data": [
              {
                "type": "path",
                "overrideKey": "myVar",
                "insertionIndices": [
                  { "index": null, "order": null }
                ],
                "value": {
                  "dateRange": {
                    "timezone": null,
                    "formats": [ ],
                    "last": { }
                  }
                }
              }
            ]
          },
          "id": 1,
          "createdAt": "2019-08-24T14:15:22Z",
          "updatedAt": "2019-08-24T14:15:22Z",
          "creator": { "id": 1 },
          "updater": { "id": 1 },
          "workspace": { "id": 1 },
          "name": "My Dataset",
          "description": "string"
        }
      ],
      "flownodes": [
        {
          "id": 1,
          "createdAt": "2019-08-24T14:15:22Z",
          "updatedAt": "2019-08-24T14:15:22Z",
          "creator": { "id": 1 },
          "updater": { "id": 1 },
          "flow": { "id": 1 },
          "recipe": { "id": 1 },
          "activeSample": { "id": 1 },
          "wrangled": true
        }
      ],
      "flowedges": [
        {
          "inPortId": 1,
          "outPortId": 1,
          "inputFlowNode": { "id": 1 },
          "outputFlowNode": { "id": 1 },
          "flow": { "id": 1 },
          "id": 1,
          "createdAt": "2019-08-24T14:15:22Z",
          "updatedAt": "2019-08-24T14:15:22Z",
          "creator": { "id": 1 },
          "updater": { "id": 1 }
        }
      ],
      "recipes": [
        {
          "name": "string",
          "description": "string",
          "active": true,
          "nextPortId": 1,
          "currentEdit": { "id": 1 },
          "redoLeafEdit": { "id": 1 },
          "id": 1,
          "createdAt": "2019-08-24T14:15:22Z",
          "updatedAt": "2019-08-24T14:15:22Z",
          "creator": { "id": 1 },
          "updater": { "id": 1 }
        }
      ],
      "outputobjects": [
        {
          "execution": "photon",
          "profiler": true,
          "isAdhoc": true,
          "flownode": { "id": 1 },
          "id": 1,
          "createdAt": "2019-08-24T14:15:22Z",
          "updatedAt": "2019-08-24T14:15:22Z",
          "creator": { "id": 1 },
          "updater": { "id": 1 },
          "name": "string",
          "description": "string"
        }
      ],
      "webhookflowtasks": [
        { }
      ],
      "release": { }
    }
  ],
  "planPackage": {
    "id": 1,
    "name": "string",
    "createdAt": "2019-08-24T14:15:22Z",
    "updatedAt": "2019-08-24T14:15:22Z",
    "creator": { "id": 1 },
    "updater": { "id": 1 },
    "workspace": { "id": 1 },
    "snapshotted": true,
    "originalPlanId": 1,
    "description": "string",
    "planSnapshotRunCount": 1,
    "notificationsEnabled": true,
    "latestPlanSnapshot": { },
    "latestPlanSnapshotRun": { }
  },
  "taskCount": 1
}
Count existing plans
ref: countPlans
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with limit. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order comma-separated list of fields to match the filter value against |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If true, the total number of objects is included as a count property in the response |
ownershipFilter | string Filter plans by ownership. |
{
  "count": 1
}
List run parameters of a plan. Parameters are grouped by plan node. Each element in the returned list only contains resources that have run parameters defined.
ref: planRunParameters
id required | integer |
{
  "planNodeParameters": [
    {
      "handle": "string",
      "planNodeId": 1,
      "flow": {
        "name": "string",
        "description": "string",
        "folder": { "id": 1 },
        "id": 1,
        "defaultOutputDir": "string",
        "createdAt": "2019-08-24T14:15:22Z",
        "updatedAt": "2019-08-24T14:15:22Z",
        "creator": { "id": 1 },
        "updater": { "id": 1 },
        "settings": {
          "optimize": "enabled",
          "optimizers": { "columnPruning": "enabled" }
        },
        "workspace": { "id": 1 },
        "flowState": {
          "isOpened": true,
          "flow": { "id": 1 },
          "person": { "id": 1 },
          "zoom": 0,
          "offsetX": 0,
          "offsetY": 0
        }
      },
      "conflicts": [ null ],
      "datasources": {
        "data": [
          {
            "dynamicPath": "string",
            "isSchematized": true,
            "isDynamic": true,
            "isConverted": true,
            "disableTypeInference": true,
            "hasStructuring": true,
            "hasSchemaErrors": true,
            "parsingScript": { "id": 1 },
            "storageLocation": { "id": 1 },
            "connection": { "id": 1 },
            "id": 1,
            "createdAt": "2019-08-24T14:15:22Z",
            "updatedAt": "2019-08-24T14:15:22Z",
            "creator": { "id": 1 },
            "updater": { "id": 1 },
            "workspace": { "id": 1 }
          }
        ]
      },
      "outputObjects": {
        "data": [
          {
            "execution": "photon",
            "profiler": true,
            "isAdhoc": true,
            "flownode": { "id": 1 },
            "id": 1,
            "createdAt": "2019-08-24T14:15:22Z",
            "updatedAt": "2019-08-24T14:15:22Z",
            "creator": { "id": 1 },
            "updater": { "id": 1 },
            "name": "string",
            "description": "string"
          }
        ]
      },
      "planOverrides": { }
    }
  ]
}
Read full plan with all its nodes, tasks, and edges.
ref: readFull
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
includeAssociatedPeople | boolean If true, the returned plan will include a list of people with access. |
includeCreatorInfo | boolean If true, the returned plan will include info about the creators of the flows and plan, such as name and email address. |
{
  "id": 1,
  "name": "string",
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "creator": { "id": 1 },
  "updater": { "id": 1 },
  "workspace": { "id": 1 },
  "snapshotted": true,
  "originalPlanId": 1,
  "description": "string",
  "planSnapshotRunCount": 1,
  "notificationsEnabled": true,
  "latestPlanSnapshot": { },
  "latestPlanSnapshotRun": { }
}
List of all schedules configured in the plan.
ref: getSchedulesForPlan
id required | integer |
{
  "data": [
    {
      "name": "string",
      "triggers": [
        {
          "timeBased": {
            "cron": { "expression": "string" },
            "timezone": "string"
          }
        }
      ],
      "tasks": [
        {
          "runFlow": { "flowId": 1 }
        }
      ],
      "enabled": true,
      "id": 1,
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "createdBy": 1,
      "updatedBy": 1
    }
  ],
  "count": 1
}
Retrieve a package containing the definition of the specified plan.
The response body is the contents of the package: a zipped version of the plan definition.
The plan package can be used to import the plan in another environment. See Import Plan Package for more information.
ref: getPlanPackage
id required | integer |
comment | string comment to be displayed when plan is imported in a deployment package |
Update plan properties, e.g. name and description
ref: updatePlan
id required | integer |
name | string Display name of the flow. |
description | string User-friendly description for the flow. |
notificationsEnabled | boolean Indicates whether notifications will be sent for this plan |
{
  "name": "string",
  "description": "string",
  "notificationsEnabled": true
}
{
  "id": 1,
  "name": "string",
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "creator": { "id": 1 },
  "updater": { "id": 1 },
  "workspace": { "id": 1 },
  "snapshotted": true,
  "originalPlanId": 1,
  "description": "string",
  "planSnapshotRunCount": 1,
  "notificationsEnabled": true,
  "latestPlanSnapshot": { },
  "latestPlanSnapshotRun": { }
}
Delete plan and remove associated schedules.
ref: deletePlan
id required | integer |
An edge connecting two tasks in the plan graph.
Create a new plan edge
ref: createPlanEdge
planId required | integer unique identifier for this object. |
inPlanNodeId required | integer unique identifier for this object. |
outPlanNodeId required | integer unique identifier for this object. |
statusRule required | string Enum: "success" "failure" "always" |
{
  "planId": 1,
  "inPlanNodeId": 1,
  "outPlanNodeId": 1,
  "statusRule": "success"
}
{
  "id": 1,
  "creator": { "id": 1 },
  "updater": { "id": 1 },
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "statusRule": "success"
}
A node representing a task in the plan graph.
Create a new plan node
ref: createPlanNode
planId required | integer unique identifier for this object. |
taskType required | string Enum: "flow" "http" "storage" "workflow" |
task required | planFlowTaskCreateRequest (object) or planHTTPTaskCreateRequest (object) or planStorageTaskCreateRequest (object) or planWorkflowTaskCreateRequest (object) |
name required | string |
inPlanNodeIds | Array of integers[ items ] |
outPlanNodeIds | Array of integers[ items ] |
coordinates | object Location of the plan node |
{
  "planId": 1,
  "taskType": "flow",
  "task": {
    "flowId": 1,
    "flowNodeIds": [ 1 ]
  },
  "name": "string",
  "inPlanNodeIds": [ 1 ],
  "outPlanNodeIds": [ 1 ],
  "coordinates": { "x": 1, "y": 1 }
}
{
  "id": 1,
  "taskType": "flow",
  "creator": { "id": 1 },
  "updater": { "id": 1 },
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "name": "string",
  "coordinates": { "x": 1, "y": 1 }
}
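The createPlanNode body above can be built with a small helper. This is a sketch for the flow task type only; the field names follow the request example, while the helper name and defaults are illustrative:

```python
import json


def plan_node_payload(plan_id, flow_id, name, x=1, y=1):
    """Build the createPlanNode request body for a flow task."""
    return json.dumps({
        "planId": plan_id,
        "taskType": "flow",           # one of: flow, http, storage, workflow
        "task": {"flowId": flow_id},  # shape depends on taskType
        "name": name,
        "coordinates": {"x": x, "y": y},
    })


payload = plan_node_payload(1, 42, "wrangle step")
```

Edges between nodes can then be added with createPlanEdge, or supplied up front via inPlanNodeIds and outPlanNodeIds.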
List run parameters of a plan node. Only resources with run parameters will be included in the response.
id required | integer |
{
  "handle": "string",
  "planNodeId": 1,
  "conflicts": [ null ],
  "flow": {
    "name": "string",
    "description": "string",
    "folder": { "id": 1 },
    "id": 1,
    "defaultOutputDir": "string",
    "createdAt": "2019-08-24T14:15:22Z",
    "updatedAt": "2019-08-24T14:15:22Z",
    "creator": { "id": 1 },
    "updater": { "id": 1 },
    "settings": {
      "optimize": "enabled",
      "optimizers": { "columnPruning": "enabled" }
    },
    "workspace": { "id": 1 },
    "flowState": {
      "isOpened": true,
      "flow": { "id": 1 },
      "person": { "id": 1 },
      "zoom": 0,
      "offsetX": 0,
      "offsetY": 0
    }
  },
  "datasources": {
    "data": [
      {
        "dynamicPath": "string",
        "isSchematized": true,
        "isDynamic": true,
        "isConverted": true,
        "disableTypeInference": true,
        "hasStructuring": true,
        "hasSchemaErrors": true,
        "parsingScript": { "id": 1 },
        "storageLocation": { "id": 1 },
        "connection": { "id": 1 },
        "id": 1,
        "createdAt": "2019-08-24T14:15:22Z",
        "updatedAt": "2019-08-24T14:15:22Z",
        "creator": { "id": 1 },
        "updater": { "id": 1 },
        "workspace": { "id": 1 }
      }
    ]
  },
  "outputObjects": {
    "data": [
      {
        "execution": "photon",
        "profiler": true,
        "isAdhoc": true,
        "flownode": { "id": 1 },
        "id": 1,
        "createdAt": "2019-08-24T14:15:22Z",
        "updatedAt": "2019-08-24T14:15:22Z",
        "creator": { "id": 1 },
        "updater": { "id": 1 },
        "name": "string",
        "description": "string"
      }
    ]
  },
  "planOverrides": { }
}
Delete an existing plan node
ref: deletePlanNode
id required | integer |
Used to override the default value of a run parameter in a plan for future executions.
Create a new plan override
ref: createPlanOverride
planNodeId required | integer |
overrideKey required | string key/name used when overriding the value of the variable |
value required | planRunParameterVariableSchema (object) or planRunParameterSelectorSchema (object) |
{
  "planNodeId": 1,
  "overrideKey": "myVar",
  "value": {
    "variable": { "value": "string" }
  }
}
{ }
Update an existing plan override
ref: updatePlanOverride
id required | integer |
planNodeId required | integer |
overrideKey required | string key/name used when overriding the value of the variable |
value required | planRunParameterVariableSchema (object) or planRunParameterSelectorSchema (object) |
{
  "planNodeId": 1,
  "overrideKey": "myVar",
  "value": {
    "variable": { "value": "string" }
  }
}
{ }
Used to override the default value of a run parameter in a plan when it is run on a schedule.
Create a new plan schedule override
overrideKey required | string key/name used when overriding the value of the variable |
value required | overrideValueInfoVariable (object) or overrideValueInfoSelector (object) |
scheduleId required | integer ID of the schedule to which these overrides belong |
planNodeId | integer unique identifier for this object. |
planId | integer unique identifier for this object. |
{
  "overrideKey": "myVar",
  "value": {
    "variable": { "value": "myValue" }
  },
  "planNodeId": 1,
  "planId": 1,
  "scheduleId": 1
}
{
  "overrideKey": "myVar",
  "value": {
    "variable": { "value": "myValue" }
  },
  "id": 1,
  "planNodeId": 1,
  "planId": 1,
  "scheduleId": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "deleted_at": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z"
}
Update an existing plan schedule override
id required | integer |
overrideKey required | string key/name used when overriding the value of the variable |
value required | overrideValueInfoVariable (object) or overrideValueInfoSelector (object) |
planNodeId required | integer unique identifier for this object. |
{
  "overrideKey": "myVar",
  "value": {
    "variable": { "value": "myValue" }
  },
  "planNodeId": 1
}
{
  "overrideKey": "myVar",
  "value": {
    "variable": { "value": "myValue" }
  },
  "id": 1,
  "planNodeId": 1,
  "planId": 1,
  "scheduleId": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "deleted_at": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z"
}
An execution of a plan's snapshot state
Cancel the plan execution. If there are any pending jobs, it will also try to cancel them. If some jobs could not be canceled, the failedToCancelSomeJobs value in the response is true.
id required | integer |
{ }
{
  "id": 1,
  "status": "Complete",
  "createdAt": "2019-08-24T14:15:22Z",
  "finishedAt": "2019-08-24T14:15:22Z",
  "startedAt": "2019-08-24T14:15:22Z",
  "submittedAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "failedToCancelSomeJobs": true,
  "plan": { "id": 1 },
  "nextRun": { "id": "string" },
  "previousRun": { "id": "string" }
}
List existing plan snapshot runs
ref: listPlanSnapshotRuns
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with limit. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order comma-separated list of fields to match the filter value against |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If true, the total number of objects is included as a count property in the response |
status | string filter plan executions based on their status |
dateFilter | object for filtering plan runs by start and end date |
ranfrom | string filter plan runs based on how they were run |
runBy | string Filter plans by the users who have run them. One of ['all', 'currentUser'] |
{
  "data": [
    {
      "id": 1,
      "status": "Complete",
      "createdAt": "2019-08-24T14:15:22Z",
      "finishedAt": "2019-08-24T14:15:22Z",
      "startedAt": "2019-08-24T14:15:22Z",
      "submittedAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "failedToCancelSomeJobs": true,
      "plan": { "id": 1 },
      "nextRun": { "id": "string" },
      "previousRun": { "id": "string" }
    }
  ],
  "count": 1
}
Count existing plan snapshot runs
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with limit. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order comma-separated list of fields to match the filter value against |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If true, the total number of objects is included as a count property in the response |
status | string filter plan executions based on their status |
dateFilter | object for filtering plan runs by start and end date |
ranfrom | string filter plan runs based on how they were run |
runBy | string Filter plans by the users who have run them. One of ['all', 'currentUser'] |
{
  "count": 1
}
Return a plan snapshot run that contains the current status of a plan execution
ref: getPlanSnapshotRun
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
includeFlowCreatorInfo | string Include info about flow creators such as name and email address. |
{
  "id": 1,
  "status": "Complete",
  "createdAt": "2019-08-24T14:15:22Z",
  "finishedAt": "2019-08-24T14:15:22Z",
  "startedAt": "2019-08-24T14:15:22Z",
  "submittedAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "failedToCancelSomeJobs": true,
  "plan": { "id": 1 },
  "nextRun": { "id": "string" },
  "previousRun": { "id": "string" }
}
Get the schedule definition that triggered the plan snapshot run.
id required | integer |
{
  "name": "string",
  "triggers": [
    {
      "timeBased": {
        "cron": { "expression": "string" },
        "timezone": "string"
      }
    }
  ],
  "tasks": [
    {
      "runFlow": { "flowId": 1 }
    }
  ],
  "enabled": true,
  "id": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "createdBy": 1,
  "updatedBy": 1
}
A storage modification task to be executed as part of a plan.
Update an existing plan storage task
id required | integer |
path required | string |
type | string |
connectionId | string |
{
  "path": "string",
  "type": "string",
  "connectionId": "string"
}
{
  "id": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z"
}
A publication object is used to specify a table-based output and is associated with an outputObject. Settings include the connection to use, path, table type, and write action to apply.
Create a new publication
ref: createPublication
path required | Array of strings path to the location of the table/datasource. |
tableName required | string name of the table (or of the datasource in case of Tableau) |
targetType required | string e.g. |
action required | string Enum: "create" "load" "createAndLoad" "truncateAndLoad" "dropAndLoad" "upsert" Type of writing action to perform with the results |
outputObjectId | integer outputObject to attach this publication to. |
connectionId | connectionIdString (string) or connectionIdBigQuery (string) |
runParameters | Array of objects (runParameterDestinationInfo) [ items ] Optional parameters that can be used to parameterize the path. |
parameters | object Additional publication parameters specific to each JDBC data source. Example: isDeltaTable=true for Databricks connections to produce Delta Lake Tables |
{
  "path": [ "string" ],
  "tableName": "string",
  "targetType": "string",
  "action": "create",
  "outputObjectId": 1,
  "connectionId": "55",
  "runParameters": [
    {
      "type": "path",
      "overrideKey": "myVar",
      "insertionIndices": [
        { "index": 1, "order": 1 }
      ],
      "value": {
        "variable": { "value": "string" }
      }
    }
  ],
  "parameters": {
    "property1": { "type": "string", "default": null },
    "property2": { "type": "string", "default": null }
  }
}
{
  "path": [ "string" ],
  "tableName": "string",
  "targetType": "string",
  "action": "create",
  "outputobject": { "id": 1 },
  "connection": { "id": "55" },
  "id": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "creator": { "id": 1 },
  "updater": { "id": 1 },
  "parameters": {
    "property1": { "type": "string", "default": null },
    "property2": { "type": "string", "default": null }
  }
}
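Since the write action must be one of the enum values listed above, it is worth validating client-side before sending the request. A sketch of a payload builder; the helper name and example argument values are illustrative only:

```python
import json

# Write actions from the enum documented above.
VALID_ACTIONS = {"create", "load", "createAndLoad", "truncateAndLoad",
                 "dropAndLoad", "upsert"}


def publication_payload(path, table_name, target_type, action,
                        output_object_id, connection_id):
    """Build a createPublication body, rejecting unknown write actions."""
    if action not in VALID_ACTIONS:
        raise ValueError("invalid action: " + action)
    return json.dumps({
        "path": path,
        "tableName": table_name,
        "targetType": target_type,
        "action": action,
        "outputObjectId": output_object_id,
        "connectionId": connection_id,
    })


payload = publication_payload(["default"], "results", "postgres",
                              "createAndLoad", 1, "55")
```

Failing fast on a bad `action` avoids a round trip that the server would reject anyway.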
List existing publications
ref: listPublications
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with limit. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order comma-separated list of fields to match the filter value against |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If true, the total number of objects is included as a count property in the response |
{
  "data": [
    {
      "path": [ "string" ],
      "tableName": "string",
      "targetType": "string",
      "action": "create",
      "outputobject": { "id": 1 },
      "connection": { "id": "55" },
      "id": 1,
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": { "id": 1 },
      "updater": { "id": 1 },
      "parameters": {
        "property1": { "type": "string", "default": null },
        "property2": { "type": "string", "default": null }
      }
    }
  ],
  "count": 1
}
Count existing publications
ref: countPublications
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with limit. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with filter. |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order comma-separated list of fields to match the filter value against |
filter | string Example: filter=my-object Value for filtering objects. See filterFields. |
includeCount | boolean If true, the total number of objects is included as a count property in the response |
{
  "count": 1
}
Get an existing publication
ref: getPublication
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. |
{
  "path": [ "string" ],
  "tableName": "string",
  "targetType": "string",
  "action": "create",
  "outputobject": { "id": 1 },
  "connection": { "id": "55" },
  "id": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "creator": { "id": 1 },
  "updater": { "id": 1 },
  "parameters": {
    "property1": { "type": "string", "default": null },
    "property2": { "type": "string", "default": null }
  }
}
Update an existing publication
ref: updatePublication
id required | integer |
path | Array of strings path to the location of the table/datasource. |
tableName | string name of the table (or of the datasource in case of Tableau) |
targetType | string e.g. |
action | string Enum: "create" "load" "createAndLoad" "truncateAndLoad" "dropAndLoad" "upsert" Type of writing action to perform with the results |
parameters | object Additional publication parameters specific to each JDBC data source. Example: isDeltaTable=true for Databricks connections to produce Delta Lake Tables |
{
  "path": ["string"],
  "tableName": "string",
  "targetType": "string",
  "action": "create",
  "parameters": {
    "property1": { "type": "string", "default": null },
    "property2": { "type": "string", "default": null }
  }
}
{
  "path": ["string"],
  "tableName": "string",
  "targetType": "string",
  "action": "create",
  "outputobject": { "id": 1 },
  "connection": { "id": "55" },
  "id": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "creator": { "id": 1 },
  "updater": { "id": 1 },
  "parameters": {
    "property1": { "type": "string", "default": null },
    "property2": { "type": "string", "default": null }
  }
}
Patch an existing publication
ref: patchPublication
id required | integer |
path | Array of strings Path to the location of the table/datasource. |
tableName | string Name of the table (or of the datasource, in the case of Tableau) |
targetType | string e.g. |
action | string Enum: "create" "load" "createAndLoad" "truncateAndLoad" "dropAndLoad" "upsert" Type of writing action to perform with the results. |
parameters | object Additional publication parameters specific to each JDBC data source. Example: isDeltaTable=true for Databricks connections to produce Delta Lake Tables |
{
  "path": ["string"],
  "tableName": "string",
  "targetType": "string",
  "action": "create",
  "parameters": {
    "property1": { "type": "string", "default": null },
    "property2": { "type": "string", "default": null }
  }
}
{
  "id": 1,
  "updater": { "id": 1 },
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z"
}
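With PATCH, only the fields present in the request body are modified. A stdlib-only Python sketch of such a call; the host, port, and token are placeholders, and the request is only built here, not sent:

```python
import json
import urllib.request

def patch_publication(base_url, publication_id, token, **fields):
    # Send only the fields to change; unspecified fields keep their values.
    return urllib.request.Request(
        f"{base_url}/v4/publications/{publication_id}",
        data=json.dumps(fields).encode(),
        method="PATCH",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
            "Accept": "application/json",
        },
    )

# Build a request that changes only the write action.
req = patch_publication("http://example.com:3005", 1, "<api-token>",
                        action="createAndLoad")
# urllib.request.urlopen(req)  # uncomment to execute
```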
Delete an existing publication
ref: deletePublication
id required | integer |
A specific instance of a flow package that has been imported.
A deployment contains multiple releases among which only one is active.
Create a release for the specified deployment.
The release is created from a local ZIP file containing the flow package exported from the source system.
When importing a release, import-mapping rules are executed. These import rules allow you to replace the file location or the table names of different objects during the import for a deployment. See updateObjectImportRules and updateValueImportRules if you need to update the import rules.
This endpoint accepts a multipart/form-data content type.
Here is how to send the ZIP package using curl:
curl -X POST http://example.com:3005/v4/deployments/:id/releases \
-H 'authorization: Bearer <api-token>' \
-H 'content-type: multipart/form-data' \
-F 'data=@path/to/flow-package.zip'
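An equivalent request can be assembled in Python without third-party libraries. This is a sketch: the URL, token, and deployment id are the same placeholders as in the curl call, and the multipart layout mirrors what curl's -F option produces:

```python
import io
import urllib.request
import uuid

def build_multipart(field_name, filename, payload):
    # Manually assemble a multipart/form-data body equivalent to
    # curl's -F 'data=@path/to/flow-package.zip'.
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(
        f'Content-Disposition: form-data; name="{field_name}"; '
        f'filename="{filename}"\r\n'.encode())
    body.write(b"Content-Type: application/zip\r\n\r\n")
    body.write(payload)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    return boundary, body.getvalue()

boundary, data = build_multipart("data", "flow-package.zip", b"...zip bytes...")
req = urllib.request.Request(
    "http://example.com:3005/v4/deployments/1/releases",
    data=data, method="POST",
    headers={
        "Authorization": "Bearer <api-token>",
        "Content-Type": f"multipart/form-data; boundary={boundary}",
    })
# urllib.request.urlopen(req)  # uncomment to send
```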
The response lists the objects that have been created.
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
id required | integer |
folderId | integer |
An exported flow zip file.
{
  "deletedObjects": {},
  "createdObjectMapping": {},
  "importRuleChanges": {
    "object": [{}],
    "value": [{}]
  },
  "primaryFlowIds": [1],
  "flows": [
    {
      "name": "string",
      "description": "string",
      "folder": { "id": 1 },
      "id": 1,
      "defaultOutputDir": "string",
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": { "id": 1 },
      "updater": { "id": 1 },
      "settings": {
        "optimize": "enabled",
        "optimizers": { "columnPruning": "enabled" }
      },
      "workspace": { "id": 1 },
      "flowState": {
        "isOpened": true,
        "flow": { "id": 1 },
        "person": { "id": 1 },
        "zoom": 0,
        "offsetX": 0,
        "offsetY": 0
      }
    }
  ],
  "datasources": [
    {
      "dynamicPath": "string",
      "isDynamic": false,
      "isConverted": true,
      "disableTypeInference": true,
      "parsingScript": { "id": 1 },
      "storageLocation": { "id": 1 },
      "connection": { "id": 1 },
      "runParameters": {
        "data": [
          {
            "type": "path",
            "overrideKey": "myVar",
            "insertionIndices": [
              { "index": 1, "order": 1 }
            ],
            "value": {
              "dateRange": {
                "timezone": "string",
                "formats": ["string"],
                "last": { "unit": "years", "number": 1, "dow": 1 }
              }
            }
          }
        ]
      },
      "id": 1,
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": { "id": 1 },
      "updater": { "id": 1 },
      "workspace": { "id": 1 },
      "name": "My Dataset",
      "description": "string"
    }
  ],
  "flownodes": [
    {
      "id": 1,
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": { "id": 1 },
      "updater": { "id": 1 },
      "flow": { "id": 1 },
      "recipe": { "id": 1 },
      "activeSample": { "id": 1 },
      "wrangled": true
    }
  ],
  "flowedges": [
    {
      "inPortId": 1,
      "outPortId": 1,
      "inputFlowNode": { "id": 1 },
      "outputFlowNode": { "id": 1 },
      "flow": { "id": 1 },
      "id": 1,
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": { "id": 1 },
      "updater": { "id": 1 }
    }
  ],
  "recipes": [
    {
      "name": "string",
      "description": "string",
      "active": true,
      "nextPortId": 1,
      "currentEdit": { "id": 1 },
      "redoLeafEdit": { "id": 1 },
      "id": 1,
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": { "id": 1 },
      "updater": { "id": 1 }
    }
  ],
  "outputobjects": [
    {
      "execution": "photon",
      "profiler": true,
      "isAdhoc": true,
      "flownode": { "id": 1 },
      "id": 1,
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": { "id": 1 },
      "updater": { "id": 1 },
      "name": "string",
      "description": "string"
    }
  ],
  "webhookflowtasks": [{}],
  "release": {}
}
Get the list of releases for the specified deployment
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields. |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
{
  "data": [
    {
      "notes": "string",
      "packageUuid": "f9cab740-50b7-11e9-ba15-93c82271a00b",
      "active": true,
      "deployment": { "id": 1 },
      "id": 1,
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": { "id": 1 },
      "updater": { "id": 1 }
    }
  ],
  "count": 1
}
Test importing a flow package: apply all import rules that apply to this deployment and return information about the objects that would be created.
The same payload as for Import Deployment package is expected.
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
id required | integer |
folderId | integer |
An exported flow zip file.
{
  "deletedObjects": {},
  "createdObjectMapping": {},
  "importRuleChanges": {
    "object": [{}],
    "value": [{}]
  },
  "primaryFlowIds": [1],
  "flows": [
    {
      "name": "string",
      "description": "string",
      "folder": { "id": 1 },
      "id": 1,
      "defaultOutputDir": "string",
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": { "id": 1 },
      "updater": { "id": 1 },
      "settings": {
        "optimize": "enabled",
        "optimizers": { "columnPruning": "enabled" }
      },
      "workspace": { "id": 1 },
      "flowState": {
        "isOpened": true,
        "flow": { "id": 1 },
        "person": { "id": 1 },
        "zoom": 0,
        "offsetX": 0,
        "offsetY": 0
      }
    }
  ],
  "datasources": [
    {
      "dynamicPath": "string",
      "isDynamic": false,
      "isConverted": true,
      "disableTypeInference": true,
      "parsingScript": { "id": 1 },
      "storageLocation": { "id": 1 },
      "connection": { "id": 1 },
      "runParameters": {
        "data": [
          {
            "type": "path",
            "overrideKey": "myVar",
            "insertionIndices": [
              { "index": 1, "order": 1 }
            ],
            "value": {
              "dateRange": {
                "timezone": "string",
                "formats": ["string"],
                "last": { "unit": "years", "number": 1, "dow": 1 }
              }
            }
          }
        ]
      },
      "id": 1,
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": { "id": 1 },
      "updater": { "id": 1 },
      "workspace": { "id": 1 },
      "name": "My Dataset",
      "description": "string"
    }
  ],
  "flownodes": [
    {
      "id": 1,
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": { "id": 1 },
      "updater": { "id": 1 },
      "flow": { "id": 1 },
      "recipe": { "id": 1 },
      "activeSample": { "id": 1 },
      "wrangled": true
    }
  ],
  "flowedges": [
    {
      "inPortId": 1,
      "outPortId": 1,
      "inputFlowNode": { "id": 1 },
      "outputFlowNode": { "id": 1 },
      "flow": { "id": 1 },
      "id": 1,
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": { "id": 1 },
      "updater": { "id": 1 }
    }
  ],
  "recipes": [
    {
      "name": "string",
      "description": "string",
      "active": true,
      "nextPortId": 1,
      "currentEdit": { "id": 1 },
      "redoLeafEdit": { "id": 1 },
      "id": 1,
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": { "id": 1 },
      "updater": { "id": 1 }
    }
  ],
  "outputobjects": [
    {
      "execution": "photon",
      "profiler": true,
      "isAdhoc": true,
      "flownode": { "id": 1 },
      "id": 1,
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": { "id": 1 },
      "updater": { "id": 1 },
      "name": "string",
      "description": "string"
    }
  ],
  "webhookflowtasks": [{}],
  "release": {}
}
List existing releases
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
ref: listReleases
fields | string Example: fields=id;name;description Semicolon-separated list of fields. |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with `limit`. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with `filter`. |
sort | string Example: sort=-createdAt Defines the sort order for returned objects. |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the `filter` value against. |
filter | string Example: filter=my-object Value for filtering objects. See `filterFields`. |
includeCount | boolean If true, the response includes the total number of objects as a `count` property. |
deploymentId | integer Apply this filter to show only releases matching the given deployment. |
{
  "data": [
    {
      "notes": "string",
      "packageUuid": "f9cab740-50b7-11e9-ba15-93c82271a00b",
      "active": true,
      "deployment": { "id": 1 },
      "id": 1,
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": { "id": 1 },
      "updater": { "id": 1 }
    }
  ],
  "count": 1
}
Count existing releases
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
ref: countReleases
fields | string Example: fields=id;name;description Semicolon-separated list of fields. |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with `limit`. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with `filter`. |
sort | string Example: sort=-createdAt Defines the sort order for returned objects. |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the `filter` value against. |
filter | string Example: filter=my-object Value for filtering objects. See `filterFields`. |
includeCount | boolean If true, the response includes the total number of objects as a `count` property. |
{
  "count": 1
}
Get an existing release
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
ref: getRelease
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields. |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
{
  "notes": "string",
  "packageUuid": "f9cab740-50b7-11e9-ba15-93c82271a00b",
  "active": true,
  "deployment": { "id": 1 },
  "id": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "creator": { "id": 1 },
  "updater": { "id": 1 }
}
Update an existing release
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
ref: updateRelease
id required | integer |
notes | string Display value for notes that you can add to describe the release. |
packageUuid | string <uuid> Unique identifier for the package |
active | boolean If `true`, this release is the active release for its deployment. |
{
  "notes": "string",
  "packageUuid": "f9cab740-50b7-11e9-ba15-93c82271a00b",
  "active": true
}
{
  "id": 1,
  "updater": { "id": 1 },
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z"
}
Update the specified release.
You can use the following example to make the current release the active one for the deployment.
{"active": true}
ℹ️ NOTE: You can have only one active release per deployment. If this release is made active as part of this execution, the currently active release is made inactive.
✅ TIP: You can use this endpoint to deactivate a release, which prevents its jobs from being run. If there is no active release for the deployment, no jobs are run via the deployment job run endpoint. See runDeployment.
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
ref: patchRelease
id required | integer |
notes | string Display value for notes that you can add to describe the release. |
packageUuid | string <uuid> Unique identifier for the package |
active | boolean If `true`, this release is the active release for its deployment. |
{
  "active": true
}
{
  "notes": "string",
  "packageUuid": "f9cab740-50b7-11e9-ba15-93c82271a00b",
  "active": true,
  "deployment": { "id": 1 },
  "id": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "creator": { "id": 1 },
  "updater": { "id": 1 }
}
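Activating a release is then a one-field PATCH. A minimal Python sketch; the host and token are placeholders, and the `/v4/releases/:id` path is inferred from the resource name (patchRelease):

```python
import json
import urllib.request

def activate_release(base_url, release_id, token):
    # PATCH {"active": true} makes this release the deployment's active
    # one; the previously active release is deactivated automatically.
    return urllib.request.Request(
        f"{base_url}/v4/releases/{release_id}",
        data=json.dumps({"active": True}).encode(),
        method="PATCH",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

req = activate_release("http://example.com:3005", 1, "<api-token>")
# urllib.request.urlopen(req)  # uncomment to execute
```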
Delete an existing release
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
ref: deleteRelease
id required | integer |
Retrieve a package containing the definition of the flow for the specified release.
ℹ️ NOTE: Releases pertain to Production instances of the Designer Cloud Powered by Trifacta Platform. For more information, see Overview of Deployment Manager.
ℹ️ NOTE: This method exports flows from a Production instance, which is different from exporting using getFlowPackage, which exports from the Dev instance. Connection identifiers and paths may differ between the two instances. This method is typically used for archiving releases from the Deployment Manager.
The response body is the contents of the package: a ZIP file containing the flow definition.
ℹ️ NOTE: A deployment role or a deployment instance is required to use this endpoint.
ref: getReleasePackage
id required | integer |
Tracks the state of operations on resources. For example, it tracks the state of the refresh_data operation on datasources.
Get the states of resourceTaskState for the given resourceTaskStateIds
resourceTaskStateIds required | Array of integers[ items ] |
{
  "resourceTaskStateIds": [1]
}
[
  {
    "id": 1,
    "resourceIdentifier": 1,
    "taskStage": "string"
  }
]
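These endpoints lend themselves to a simple polling loop: request the task states by id and wait until every task reaches a terminal stage. A sketch with the HTTP call abstracted behind a `fetch` callable; the terminal stage name is deployment-specific, so it is a parameter here rather than a hard-coded value:

```python
import json
import time

def poll_task_states(fetch, state_ids, done_stage, interval=2.0, max_tries=30):
    # Repeatedly fetch resourceTaskState entries until every task reaches
    # `done_stage`. `fetch` is expected to POST the JSON body to the
    # resourceTaskState endpoint and return the decoded response list.
    for _ in range(max_tries):
        states = fetch(json.dumps({"resourceTaskStateIds": list(state_ids)}))
        if all(s["taskStage"] == done_stage for s in states):
            return states
        time.sleep(interval)
    raise TimeoutError("tasks did not reach the expected stage")

# Usage with a stub fetcher standing in for the real HTTP call:
stub = lambda body: [{"id": 1, "resourceIdentifier": 1, "taskStage": "done"}]
states = poll_task_states(stub, [1], "done", interval=0)
```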
Get the resourceTaskIds for the given resources
ref: getResourceTaskIds
resourceIdentifierIds required | Array of integers[ items ] |
{
  "resourceIdentifierIds": [1]
}
[
  {
    "id": 1,
    "resourceIdentifier": 1,
    "taskStage": "string"
  }
]
Create a new run parameter
ref: createRunParameter
type required | string |
required | runParameterDateRange (object) or runParameterVariable (object) or runParameterPattern (object) or runParameterTimestamp (object) or runParameterSelector (object) (runParameterValueSchema) |
required | Array of objects[ items ] |
description | string |
object | |
flowId | integer |
overrideKey | string Key/name used when overriding the value of the variable |
{
  "type": "string",
  "description": "string",
  "insertionIndices": [
    { "index": 1, "order": 1 }
  ],
  "flow": { "id": 1 },
  "flowId": 1,
  "overrideKey": "myVar",
  "value": {
    "dateRange": {
      "timezone": "string",
      "formats": ["string"],
      "last": { "unit": "years", "number": 1, "dow": 1 }
    }
  }
}
{
  "id": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "creator": { "id": 1 },
  "updater": { "id": 1 },
  "runParameterEdit": { "id": 1 },
  "flow": { "id": 1 },
  "importedDataset": { "id": 1, "name": "string" },
  "writeSetting": { "id": 1 },
  "publication": { "id": 1 },
  "sqlscript": { "id": 1 },
  "insertionIndices": [
    { "index": 1, "order": 1 }
  ],
  "value": {
    "dateRange": {
      "timezone": "string",
      "formats": ["string"],
      "last": { "unit": "years", "number": 1, "dow": 1 }
    }
  }
}
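A createRunParameter request body can be built programmatically. A sketch for a dateRange-valued parameter mirroring the request sample above; the timezone and format strings are illustrative values, not defaults:

```python
import json

def date_range_run_parameter(flow_id, override_key, unit, number):
    # Build a createRunParameter body whose value is a dateRange,
    # following the shape of the request sample above.
    return {
        "type": "string",
        "flowId": flow_id,
        "overrideKey": override_key,
        "insertionIndices": [{"index": 1, "order": 1}],
        "value": {
            "dateRange": {
                "timezone": "UTC",            # illustrative
                "formats": ["yyyy-MM-dd"],    # illustrative
                "last": {"unit": unit, "number": number, "dow": 1},
            }
        },
    }

body = json.dumps(date_range_run_parameter(1, "myVar", "years", 1))
```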
Update an existing run parameter
ref: updateRunParameter
id required | integer |
required | runParameterDateRange (object) or runParameterVariable (object) or runParameterPattern (object) or runParameterTimestamp (object) or runParameterSelector (object) (runParameterValueSchema) |
id | integer unique identifier for this object. |
overrideKey | string |
description | string |
object | |
object | |
object | |
object | |
object | |
Array of objects[ items ] |
{
  "id": 1,
  "overrideKey": "string",
  "description": "string",
  "flow": { "id": 1 },
  "importedDataset": { "id": 1, "name": "string" },
  "writeSetting": { "id": 1 },
  "publication": { "id": 1 },
  "sqlscript": { "id": 1 },
  "value": {
    "dateRange": {
      "timezone": "string",
      "formats": ["string"],
      "last": { "unit": "years", "number": 1, "dow": 1 }
    }
  },
  "insertionIndices": [
    { "index": 1, "order": 1 }
  ]
}
{
  "id": 1,
  "updater": { "id": 1 },
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z"
}
Delete an existing run parameter
ref: deleteRunParameter
id required | integer |
Contains information about repeated execution of a flow.
Create a new schedule
ref: createSchedule
name required | string name of the schedule |
required | Array of objects (timeBasedTrigger) [ items ] |
required | Array of runFlowTaskSchema (objects) or Array of runPlanTaskSchema (objects) or Array of runWorkflowTaskSchema (objects) |
{
  "name": "string",
  "triggers": [
    {
      "timeBased": {
        "cron": { "expression": "string" },
        "timezone": "string"
      }
    }
  ],
  "tasks": [
    { "runFlow": { "flowId": 1 } }
  ]
}
{
  "name": "string",
  "triggers": [
    {
      "timeBased": {
        "cron": { "expression": "string" },
        "timezone": "string"
      }
    }
  ],
  "tasks": [
    { "runFlow": { "flowId": 1 } }
  ],
  "enabled": true,
  "id": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "createdBy": 1,
  "updatedBy": 1
}
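A createSchedule body pairs one or more time-based triggers with the tasks to run. A sketch for the common case of one cron trigger running one flow; the cron expression and timezone are illustrative values:

```python
def run_flow_schedule(name, cron_expression, tz, flow_id):
    # Schedule body per the request sample above: a single cron
    # trigger that runs one flow.
    return {
        "name": name,
        "triggers": [
            {
                "timeBased": {
                    "cron": {"expression": cron_expression},
                    "timezone": tz,
                }
            }
        ],
        "tasks": [{"runFlow": {"flowId": flow_id}}],
    }

# Hypothetical nightly schedule (expression format depends on the platform's
# cron dialect; check your deployment's scheduling documentation).
body = run_flow_schedule("nightly", "0 0 2 * * ?", "UTC", 1)
```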
List schedules owned by the current user
ref: listSchedules
filter | string Filter schedules using the attached flow name |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with `limit`. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with `filter`. |
sort | string Example: sort=-createdAt Defines the sort order for returned objects. |
includeCount | boolean If true, the response includes the total number of objects as a `count` property. |
{
  "data": [
    {
      "name": "string",
      "triggers": [
        {
          "timeBased": {
            "cron": { "expression": "string" },
            "timezone": "string"
          }
        }
      ],
      "tasks": [
        {
          "runFlow": {
            "flowId": 1,
            "id": 1,
            "name": "string",
            "description": "string",
            "deleted_at": "string",
            "cpProject": "string",
            "workspaceId": 1,
            "folderId": 1,
            "createdAt": "2019-08-24T14:15:22Z",
            "updatedAt": "2019-08-24T14:15:22Z",
            "createdBy": 1,
            "creator": {},
            "updatedBy": 1
          }
        }
      ],
      "enabled": true,
      "id": 1,
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "createdBy": 1,
      "updatedBy": 1
    }
  ],
  "count": 1
}
Enable a schedule
ref: enableSchedule
id required | integer |
{ }
{
  "name": "string",
  "triggers": [
    {
      "timeBased": {
        "cron": { "expression": "string" },
        "timezone": "string"
      }
    }
  ],
  "tasks": [
    { "runFlow": { "flowId": 1 } }
  ],
  "enabled": true,
  "id": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "createdBy": 1,
  "updatedBy": 1
}
Disable a schedule
ref: disableSchedule
id required | integer |
{ }
{
  "name": "string",
  "triggers": [
    {
      "timeBased": {
        "cron": { "expression": "string" },
        "timezone": "string"
      }
    }
  ],
  "tasks": [
    { "runFlow": { "flowId": 1 } }
  ],
  "enabled": true,
  "id": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "createdBy": 1,
  "updatedBy": 1
}
Count schedules owned by the current user
ref: countSchedules
filter | string Filter schedules using the attached flow name |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with `filter`. |
sort | string Example: sort=-createdAt Defines the sort order for returned objects. |
includeCount | boolean If true, the response includes the total number of objects as a `count` property. |
{
  "count": 1
}
Fetch a schedule
ref: getSchedule
id required | integer |
{
  "name": "string",
  "triggers": [
    {
      "timeBased": {
        "cron": { "expression": "string" },
        "timezone": "string"
      }
    }
  ],
  "tasks": [
    {
      "runFlow": {
        "flowId": 1,
        "id": 1,
        "name": "string",
        "description": "string",
        "deleted_at": "string",
        "cpProject": "string",
        "workspaceId": 1,
        "folderId": 1,
        "createdAt": "2019-08-24T14:15:22Z",
        "updatedAt": "2019-08-24T14:15:22Z",
        "createdBy": 1,
        "creator": {},
        "updatedBy": 1
      }
    }
  ],
  "enabled": true,
  "id": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "createdBy": 1,
  "updatedBy": 1
}
Update an existing schedule
ref: updateSchedule
id required | integer |
name | string name of the schedule |
Array of objects (timeBasedTrigger) [ items ] | |
Array of runFlowTaskSchema (objects) or Array of runPlanTaskSchema (objects) or Array of runWorkflowTaskSchema (objects) |
{
  "name": "string",
  "triggers": [
    {
      "timeBased": {
        "cron": { "expression": "string" },
        "timezone": "string"
      }
    }
  ],
  "tasks": [
    { "runFlow": { "flowId": 1 } }
  ]
}
{
  "id": 1,
  "updater": { "id": 1 },
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z"
}
Delete an existing schedule
ref: deleteSchedule
id required | integer |
A sqlScript object specifies arbitrary SQL to be run and is associated with an outputObject. Settings include the connection to use and the SQL type (pre/post).
Create a new sql script
ref: createSqlScript
sqlScript required | string String of SQL queries to be executed. |
type required | string Identifier that determines whether the SQL is executed before or after a job. |
vendor required | string e.g. |
outputObjectId | integer outputObject to attach this sqlScript to. |
connectionIdString (string) or connectionIdBigQuery (string) | |
Array of objects (runParameterSqlScriptInfo) [ items ] Optional parameters that can be used to parameterize the SQL script. |
{
  "sqlScript": "string",
  "type": "string",
  "vendor": "string",
  "outputObjectId": "21",
  "connectionId": "55",
  "runParameters": [
    {
      "type": "sql",
      "overrideKey": "myVar",
      "insertionIndices": [
        { "index": 1, "order": 1 }
      ],
      "value": {
        "variable": { "value": "string" }
      }
    }
  ]
}
{
  "sqlScript": "string",
  "type": "string",
  "vendor": "string",
  "outputObjectId": "21",
  "connection": { "id": "55" },
  "id": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "creator": { "id": 1 },
  "updater": { "id": 1 }
}
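A createSqlScript body attaches SQL to an outputObject through a connection. A sketch of building the request payload; the type and vendor strings here are illustrative assumptions, so check your connection metadata for the real values:

```python
def sql_script_body(sql, script_type, vendor, output_object_id, connection_id):
    # Pre/post SQL attached to an output object, following the request
    # sample above. script_type and vendor values are deployment-specific.
    return {
        "sqlScript": sql,
        "type": script_type,
        "vendor": vendor,
        "outputObjectId": output_object_id,
        "connectionId": connection_id,
    }

# Hypothetical example: clear a staging table before the job writes results.
body = sql_script_body("DELETE FROM staging;", "pre", "postgres", "21", "55")
```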
List existing sql scripts
ref: listSqlScripts
fields | string Example: fields=id;name;description Semicolon-separated list of fields. |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with `limit`. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with `filter`. |
sort | string Example: sort=-createdAt Defines the sort order for returned objects. |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the `filter` value against. |
filter | string Example: filter=my-object Value for filtering objects. See `filterFields`. |
includeCount | boolean If true, the response includes the total number of objects as a `count` property. |
{
  "data": [
    {
      "sqlScript": "string",
      "type": "string",
      "vendor": "string",
      "outputObjectId": "21",
      "connection": { "id": "55" },
      "id": 1,
      "createdAt": "2019-08-24T14:15:22Z",
      "updatedAt": "2019-08-24T14:15:22Z",
      "creator": { "id": 1 },
      "updater": { "id": 1 }
    }
  ],
  "count": 1
}
Count existing sql scripts
ref: countSqlScripts
fields | string Example: fields=id;name;description Semicolon-separated list of fields. |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with `limit`. |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with `filter`. |
sort | string Example: sort=-createdAt Defines the sort order for returned objects. |
filterFields | string Default: "name" Example: filterFields=id,order Comma-separated list of fields to match the `filter` value against. |
filter | string Example: filter=my-object Value for filtering objects. See `filterFields`. |
includeCount | boolean If true, the response includes the total number of objects as a `count` property. |
{
  "count": 1
}
Get an existing sql script
ref: getSqlScript
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields. |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
includeDeleted | string or Array of strings Whether to include all or some of the nested deleted objects. |
{
  "sqlScript": "string",
  "type": "string",
  "vendor": "string",
  "outputObjectId": "21",
  "connection": { "id": "55" },
  "id": 1,
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "creator": { "id": 1 },
  "updater": { "id": 1 }
}
Patch an existing sql script
ref: patchSqlScript
id required | integer |
sqlScript | string String of SQL queries to be executed. |
type | string Identifier that determines whether the SQL is executed before or after a job. |
vendor | string e.g. |
connectionIdString (string) or connectionIdBigQuery (string) |
{
  "sqlScript": "string",
  "type": "string",
  "vendor": "string",
  "connectionId": "55"
}
{
  "id": 1,
  "updater": { "id": 1 },
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z"
}
Delete an existing sql script
ref: deleteSqlScript
id required | integer |
startTime | string ISO timestamp; only include log events that occurred after this time |
endTime | string ISO timestamp; only include log events that occurred before this time |
sessionId | string Only include log events that belong to this session id |
maxFileSizeInBytes | integer Maximum file size of filtered log files in the support bundle |
An internal object used by Designer Cloud Powered by Trifacta to represent different metrics that track product usage.
List Jobs consumption for the specified time period.
ℹ️ NOTE: Admin role is required to use this endpoint.
ref: listJobsUsage
from required | string Starting date for the usage period. This is either in milliseconds since Unix epoch, or a YYYY-MM-DD formatted date. This value is inclusive. |
to required | string Ending date for the usage period. This is either in milliseconds since Unix epoch, or a YYYY-MM-DD formatted date. This value is inclusive. |
groupBy | string Groups job usage by the given parameter. |
binnedBy | string When supplied, bins usage values by month. |
monthBoundary | string Use in conjunction with `binnedBy`. |
[
  {
    "from": 1,
    "to": 1,
    "jobCount": 1,
    "ranFrom": "string",
    "workspaceId": 1,
    "workspaceName": "string"
  }
]
List vCPU consumption for the specified time period.
ℹ️ NOTE: Admin role is required to use this endpoint.
ref: listComputeUsage
from required | string Starting date for computing usage. This is either in milliseconds since Unix epoch, or a YYYY-MM-DD formatted date. This value is inclusive. |
to required | string Ending date for computing usage. This is either in milliseconds since Unix epoch, or a YYYY-MM-DD formatted date. This value is inclusive. |
groupBy | string Groups compute usage by the given parameter. |
binnedBy | string When supplied, bins usage values by month. |
monthBoundary | string Use in conjunction with `binnedBy`. |
[
  {
    "from": 1,
    "to": 1,
    "amount": 1,
    "executionLanguage": "string",
    "jobCount": 1,
    "workspaceId": 1,
    "workspaceName": "string"
  }
]
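The `from` and `to` values accept either milliseconds since the Unix epoch or YYYY-MM-DD dates. A small helper producing both accepted forms; the endpoint call itself is omitted:

```python
from datetime import date, datetime, timezone

def usage_range(start: date, end: date, as_millis: bool = False):
    # Return (from, to) query values in either accepted format:
    # YYYY-MM-DD strings, or epoch milliseconds at UTC midnight.
    if not as_millis:
        return start.isoformat(), end.isoformat()
    def to_ms(d):
        dt = datetime(d.year, d.month, d.day, tzinfo=timezone.utc)
        return str(int(dt.timestamp() * 1000))
    return to_ms(start), to_ms(end)

frm, to = usage_range(date(2023, 1, 1), date(2023, 1, 31))
```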
Get Usage Report containing job details for the specified period
ℹ️ NOTE: Admin role is required to use this endpoint.
from required | string Starting date for usage. This is either in milliseconds since Unix epoch, or a YYYY-MM-DD formatted date. This value is inclusive. |
to required | string Ending date for usage. This is either in milliseconds since Unix epoch, or a YYYY-MM-DD formatted date. This value is inclusive. |
Get Usage Report containing job details for the specified period
ℹ️ NOTE: Admin role is required to use this endpoint.
from required | string Starting date for usage. This is either in milliseconds since Unix epoch, or a YYYY-MM-DD formatted date. This value is inclusive. |
to required | string Ending date for usage. This is either in milliseconds since Unix epoch, or a YYYY-MM-DD formatted date. This value is inclusive. |
groupBy | string Groups job usage by the given parameter. |
List Designer Cloud Powered by Trifacta usage by different users in the specified time range and project. This can be used to query compute usage over arbitrary time periods.
ℹ️ NOTE: Admin role is required to use this endpoint.
projectId required | string Project id for which to calculate the aggregate consumption. |
from required | string Starting date for computing usage. This is either in milliseconds since Unix epoch, or a YYYY-MM-DD formatted date. This value is inclusive. |
to required | string Ending date for computing usage. This is either in milliseconds since Unix epoch, or a YYYY-MM-DD formatted date. This value is inclusive. |
{- "activeUsersList": [
- {
- "id": 1,
- "name": "string",
- "email": "string",
- "firstSeen": "string",
- "usedProjects": [
- "string"
]
}
], - "numUsersPerMonth": [
- {
- "date": "string",
- "dateLabel": "string",
- "count": 1
}
]
}
List Designer Cloud Powered by Trifacta usage by different users in the specified time range and entitlement. This can be used to query compute usage over arbitrary time periods.
ℹ️ NOTE: Admin role is required to use this endpoint.
entitlementId required | string Entitlement id for which to calculate the aggregate consumption. An entitlement represents an entity which provides a customer means to start using a service. Billing Admins can find entitlement ids as order ids by navigating to https://console.cloud.google.com/marketplace/product/endpoints/cloud-dataprep-editions-v2 and clicking on "Manage orders". |
from required | string Starting date for computing usage. This is either in milliseconds since Unix epoch, or a YYYY-MM-DD formatted date. This value is inclusive. |
to required | string Ending date for computing usage. This is either in milliseconds since Unix epoch, or a YYYY-MM-DD formatted date. This value is inclusive. |
{- "activeUsersList": [
- {
- "id": 1,
- "name": "string",
- "email": "string",
- "firstSeen": "string",
- "usedProjects": [
- "string"
]
}
], - "numSeatsPerMonth": [
- {
- "date": "string",
- "dateLabel": "string",
- "count": 1
}
], - "numUsersPerMonth": [
- {
- "date": "string",
- "dateLabel": "string",
- "count": 1
}
]
}
Webhook tasks allow you to make HTTP calls to external services after jobs complete in a flow.
Create a new webhook flow task
name required | string Webhook name |
flowId required | integer Id of the flow the webhook belongs to |
url required | string Webhook url |
method required | string Enum: "post" "get" "put" "patch" "delete" HTTP method |
triggerEvent required | string Enum: "onJobFailure" "onJobSuccess" "onJobDone" Event that will trigger the webhook |
triggerObject required | string Indicates which objects will trigger the webhook (any or some). |
body | string Webhook body |
headers | object Webhook HTTP headers |
secretKey | string Optional secret key used to sign the webhook |
sslVerification | boolean Enable SSL verification |
retryOnFailure | boolean Retry if the status code is not in the 200-299 range |
{- "name": "string",
- "flowId": 1,
- "url": "string",
- "method": "post",
- "triggerEvent": "onJobFailure",
- "triggerObject": "any",
- "body": "string",
- "headers": {
- "property1": "string",
- "property2": "string"
}, - "secretKey": "string",
- "sslVerification": true,
- "retryOnFailure": true
}
{- "id": 1,
- "name": "string",
- "flow": {
- "id": 1
}, - "url": "string",
- "method": "post",
- "triggerEvent": "onJobFailure",
- "triggerObject": "any",
- "flowNodeIds": [
- 1
], - "body": "string",
- "headers": {
- "property1": "string",
- "property2": "string"
}, - "secretKey": "string",
- "sslVerification": true,
- "retryOnFailure": true,
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
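The optional secretKey above is used to sign the webhook so the receiver can authenticate it. The signing algorithm and header name are not specified in this reference, so the sketch below assumes the common convention of an HMAC-SHA256 hex digest over the raw request body; check the platform documentation for the actual scheme before relying on it:

```python
import hashlib
import hmac

def verify_webhook_signature(secret_key: str, raw_body: bytes, received_sig: str) -> bool:
    """Verify a webhook signature on the receiving end.

    Assumes the platform computes an HMAC-SHA256 hex digest of the raw
    body using secretKey; this convention is an assumption, not taken
    from this reference."""
    expected = hmac.new(secret_key.encode(), raw_body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information during comparison.
    return hmac.compare_digest(expected, received_sig)
```

A receiver would recompute the digest over the bytes exactly as delivered and reject the request on mismatch.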
Test a webhook task without running a job
ref: testWebhook
url required | string Webhook url |
method required | string Enum: "post" "get" "put" "patch" "delete" HTTP method |
body | string Webhook body |
headers | object Webhook HTTP headers |
secretKey | string Optional secret key used to sign the webhook |
sslVerification | boolean Enable SSL verification |
{- "url": "string",
- "method": "post",
- "body": "string",
- "headers": {
- "property1": "string",
- "property2": "string"
}, - "secretKey": "string",
- "sslVerification": true
}
{- "statusCode": 1,
- "error": { }
}
Get an existing webhook flow task
ref: getWebhookFlowTask
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. |
{- "id": 1,
- "name": "string",
- "flow": {
- "id": 1
}, - "url": "string",
- "method": "post",
- "triggerEvent": "onJobFailure",
- "triggerObject": "any",
- "flowNodeIds": [
- 1
], - "body": "string",
- "headers": {
- "property1": "string",
- "property2": "string"
}, - "secretKey": "string",
- "sslVerification": true,
- "retryOnFailure": true,
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Delete an existing webhook flow task
id required | integer |
{- "id": 1,
- "name": "string",
- "flow": {
- "id": 1
}, - "url": "string",
- "method": "post",
- "triggerEvent": "onJobFailure",
- "triggerObject": "any",
- "flowNodeIds": [
- 1
], - "body": "string",
- "headers": {
- "property1": "string",
- "property2": "string"
}, - "secretKey": "string",
- "sslVerification": true,
- "retryOnFailure": true,
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
A self-contained, configurable space shared by several users, containing flows, datasets, connections, and other Designer Cloud Powered by Trifacta objects.
Delete workspace configuration settings overrides (reset the settings to their initial values).
settings required | Array of strings |
{- "settings": [
- "feature.myFeature"
]
}
{- "numberOfRowsDeleted": 1
}
Delete workspace configuration settings overrides (reset the settings to their initial values).
id required | integer |
settings required | Array of strings |
{- "settings": [
- "feature.myFeature"
]
}
{- "numberOfRowsDeleted": 1
}
Get workspace configuration. Settings set to null use the default configuration.
It is possible to filter the configuration to a specific key using the query parameter key:
/v4/workspaces/:id/configuration?key=outputFormats.JSON
[{ "key": "outputFormats.JSON", "value": true }]
key | string |
[- {
- "key": "feature.feature1",
- "value": 42,
- "schema": {
- "type": "integer",
- "default": 10,
- "description": "some example description"
}
}, - {
- "key": "feature.anotherFeature.usingDefaultValue",
- "value": null,
- "schema": {
- "type": "boolean",
- "default": false,
- "description": "some example description"
}
}
]
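As noted above, a setting whose value is null falls back to the default in its schema. A small sketch of resolving the effective value of each entry in a getConfiguration response (the sample entries mirror the response shown above):

```python
def effective_value(entry):
    """Return the value a configuration entry actually resolves to:
    the explicit value if set, otherwise the schema default."""
    if entry["value"] is not None:
        return entry["value"]
    return entry["schema"]["default"]

config = [
    {"key": "feature.feature1", "value": 42,
     "schema": {"type": "integer", "default": 10}},
    {"key": "feature.anotherFeature.usingDefaultValue", "value": None,
     "schema": {"type": "boolean", "default": False}},
]
resolved = {c["key"]: effective_value(c) for c in config}
```

Here the first key resolves to its explicit value 42, while the second falls back to the schema default.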
Update the workspace configuration for the specified keys. To reset a configuration value to its default, use the delete endpoint.
Use the getConfigurationSchema endpoint to get the list of editable configuration values.
required | Array of objects (configurationKeyValueSchema) [ items ] |
{- "configuration": [
- {
- "key": "feature.feature1",
- "value": false
}, - {
- "key": "feature.feature2",
- "value": "some value"
}
]
}
[- true
]
Get workspace configuration. Settings set to null use the default configuration.
It is possible to filter the configuration to a specific key using the query parameter key:
/v4/workspaces/:id/configuration?key=outputFormats.JSON
[{ "key": "outputFormats.JSON", "value": true }]
id required | integer |
key | string |
[- {
- "key": "feature.feature1",
- "value": 42,
- "schema": {
- "type": "integer",
- "default": 10,
- "description": "some example description"
}
}, - {
- "key": "feature.anotherFeature.usingDefaultValue",
- "value": null,
- "schema": {
- "type": "boolean",
- "default": false,
- "description": "some example description"
}
}
]
Update the workspace configuration for the specified keys. To reset a configuration value to its default, use the delete endpoint.
Use the getConfigurationSchema endpoint to get the list of editable configuration values.
id required | integer |
required | Array of objects (configurationKeyValueSchema) [ items ] |
{- "configuration": [
- {
- "key": "feature.feature1",
- "value": false
}, - {
- "key": "feature.feature2",
- "value": "some value"
}
]
}
[- true
]
Get configuration schema for the specified workspace.
id required | integer |
{- "property1": {
- "type": "string",
- "default": null,
- "allowedValues": [
- "string"
]
}, - "property2": {
- "type": "string",
- "default": null,
- "allowedValues": [
- "string"
]
}
}
Get configuration schema for the current workspace.
{- "property1": {
- "type": "string",
- "default": null,
- "allowedValues": [
- "string"
]
}, - "property2": {
- "type": "string",
- "default": null,
- "allowedValues": [
- "string"
]
}
}
Transfer Designer Cloud Powered by Trifacta assets to another user in the current workspace. For the given workspace, assigns ownership of all the user's contents to another user. This includes flows, datasets, recipes, and connections; essentially any object that can be created and managed through the Designer Cloud Powered by Trifacta UI.
ℹ️ NOTE: This API endpoint does not delete the original user account. To delete the user account, another API call is needed.
ℹ️ NOTE: The asset transfer endpoint cannot be applied to deleted users. You must transfer the assets first before deleting the user.
fromPersonId required | integer the id of the person to transfer assets from |
toPersonId required | integer the id of the person to transfer assets to |
object Asset IDs that need to be transferred. To specify all assets of a certain type, use "all" instead of an integer array. If the assets payload is not provided, all assets of all types are transferred. |
{- "fromPersonId": 2,
- "toPersonId": 5,
- "assets": {
- "connections": [
- 702,
- 704
], - "datasources": [
- 111,
- 112,
- 113
], - "flows": [
- 201,
- 202
], - "macros": "all",
- "userdefinedfunctions": [
- 310,
- 307,
- 308
], - "plans": [
- 510,
- 512
]
}
}
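The assets payload above mixes integer-id arrays with the literal string "all". A sketch of validating such a payload before sending it; the asset-type names are taken from the sample request, but the client-side validation itself is an assumption, not documented platform behavior:

```python
# Asset types as they appear in the sample transferAssets request.
ASSET_TYPES = {"connections", "datasources", "flows", "macros",
               "userdefinedfunctions", "plans"}

def build_transfer_body(from_person_id, to_person_id, assets=None):
    """Build a transferAssets request body, checking that each asset entry
    is either the string "all" or a list of integer ids."""
    body = {"fromPersonId": from_person_id, "toPersonId": to_person_id}
    if assets is None:
        return body  # omitting assets transfers all assets of all types
    for name, ids in assets.items():
        if name not in ASSET_TYPES:
            raise ValueError(f"unknown asset type: {name}")
        if ids != "all" and not all(isinstance(i, int) for i in ids):
            raise ValueError(f"{name}: expected 'all' or a list of integer ids")
    body["assets"] = assets
    return body
```

For example, `build_transfer_body(2, 5, {"flows": [201, 202], "macros": "all"})` reproduces the shape of the sample request above.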
Transfer Designer Cloud Powered by Trifacta assets to another user in the workspace. For the given workspace, assigns ownership of all the user's contents to another user. This includes flows, datasets, recipes, and connections; essentially any object that can be created and managed through the Designer Cloud Powered by Trifacta UI.
ℹ️ NOTE: This API endpoint does not delete the original user account. To delete the user account, another API call is needed.
ℹ️ NOTE: The asset transfer endpoint cannot be applied to deleted users. You must transfer the assets first before deleting the user.
id required | integer |
fromPersonId required | integer the id of the person to transfer assets from |
toPersonId required | integer the id of the person to transfer assets to |
object Asset IDs that need to be transferred. To specify all assets of a certain type, use "all" instead of an integer array. If the assets payload is not provided, all assets of all types are transferred. |
{- "fromPersonId": 2,
- "toPersonId": 5,
- "assets": {
- "connections": [
- 702,
- 704
], - "datasources": [
- 111,
- 112,
- 113
], - "flows": [
- 201,
- 202
], - "macros": "all",
- "userdefinedfunctions": [
- 310,
- 307,
- 308
], - "plans": [
- 510,
- 512
]
}
}
Represents the data produced by running a recipe on some input.
ℹ️ NOTE: In the Designer Cloud Powered by Trifacta application UI, the WrangledDataset object is called a recipe.
Create a new wrangled dataset
required | object |
required | object |
name required | string |
inferredScript | object |
{- "importedDataset": {
- "id": 1
}, - "inferredScript": { },
- "flow": {
- "id": 1
}, - "name": "string"
}
{- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "recipe": {
- "id": 1
}, - "activeSample": {
- "id": 1
}, - "associatedPeople": { },
- "referenceinfo": {
- "id": 1,
- "name": "string",
- "description": "string"
}, - "wrangled": true
}
List existing wrangled datasets
ref: listWrangledDatasets
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order comma-separated list of fields to match the |
filter | string Example: filter=my-object Value for filtering objects. See |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
{- "data": [
- {
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "recipe": {
- "id": 1
}, - "activeSample": {
- "id": 1
}, - "associatedPeople": { },
- "referenceinfo": {
- "id": 1,
- "name": "string",
- "description": "string"
}, - "wrangled": true
}
], - "count": 1
}
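The limit and offset parameters above page through list results, and includeCount adds the total as a count field. A sketch of walking every page; the fetch callable here is a stand-in for the actual HTTP request, shown with a fake in-memory backend for illustration:

```python
def iterate_pages(fetch, limit=25):
    """Yield every object from a paginated list endpoint.

    `fetch(limit, offset)` stands in for the HTTP call and must return the
    endpoint's response shape: {"data": [...], "count": <total>}."""
    offset = 0
    while True:
        page = fetch(limit, offset)
        data = page["data"]
        yield from data
        offset += len(data)
        if not data or offset >= page.get("count", offset):
            break

# Fake backend holding 7 objects, standing in for the list endpoint.
items = [{"id": i} for i in range(7)]
def fake_fetch(limit, offset):
    return {"data": items[offset:offset + limit], "count": len(items)}

collected = list(iterate_pages(fake_fetch, limit=3))
```

With limit=3 this issues three requests (offsets 0, 3, 6) and stops once the running offset reaches the reported count.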
Add this wrangled dataset to a flow as a reference.
id required | integer |
required | object The flow to add this dataset to. |
{- "flow": {
- "id": 1
}
}
{- "flow": {
- "id": 1
}, - "referencedFlowNode": {
- "id": 1
}, - "activeSample": {
- "id": 1
}, - "wrangled": true
}
Count existing wrangled datasets
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order comma-separated list of fields to match the |
filter | string Example: filter=my-object Value for filtering objects. See |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
{- "count": 1
}
Get an existing wrangled dataset
ref: getWrangledDataset
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. |
{- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "recipe": {
- "id": 1
}, - "activeSample": {
- "id": 1
}, - "associatedPeople": { },
- "referenceinfo": {
- "id": 1,
- "name": "string",
- "description": "string"
}, - "wrangled": true
}
Update a wrangled dataset. This can mean one of two things: either this updates the flownode object in our database, or the editable script object.
ref: patchWrangledDataset
id required | integer |
activesampleId | integer Internal identifier of the currently active |
referenceId | integer Internal identifier for referenceInfo, which contains the name and description of the reference object associated with this flow node. This is how the reference dataset will appear when used in other flows. |
sampleLoadLimit | integer If not null, stores user selected sample size in MBs |
deletedAt | string <date-time> The time this object was deleted. |
{- "activesampleId": 1,
- "referenceId": 1,
- "sampleLoadLimit": 1,
- "deletedAt": "2019-08-24T14:15:22Z"
}
{- "id": 1,
- "updater": {
- "id": 1
}, - "updatedAt": "2019-08-24T14:15:22Z"
}
Get the dataset that is the primary input for this wrangled dataset. This can be either an imported dataset or a wrangled dataset.
ref: getInputDataset
id required | integer |
{- "wrangledDataset": {
- "id": 1
}
}
This action performs a dataset swap for the source of a wrangled dataset, which can be done through the UI.
Update the primary input dataset for the specified wrangled dataset. Each wrangled dataset must have one and only one primary input dataset, which can be an imported or wrangled dataset. If a wrangled dataset from another flow is selected, a reference will be used.
✅ TIP: After you have created a job via API, you can use this API to swap out the source data for the job's dataset. In this manner, you can rapidly re-execute a pre-existing job using fresh data.
ref: updateInputDataset
id required | integer |
required | object |
{- "wrangledDataset": {
- "id": 1
}
}
{- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "flow": {
- "id": 1
}, - "recipe": {
- "id": 1
}, - "activeSample": {
- "id": 1
}, - "associatedPeople": { },
- "referenceinfo": {
- "id": 1,
- "name": "string",
- "description": "string"
}, - "wrangled": true
}
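The tip above amounts to a single request: point the wrangled dataset at a new primary input and re-run the existing job definition on fresh data. A sketch of building the request body; the request sample shows the wrangledDataset form, and the importedDataset key is an assumption carried over from the create-dataset request shape:

```python
import json

def swap_input_body(dataset_id, imported=True):
    """Body for updateInputDataset: point the wrangled dataset at a new
    primary input, which may be an imported or a wrangled dataset.

    The importedDataset key mirrors the createWrangledDataset request and
    is an assumption; the wrangledDataset form matches the sample above."""
    key = "importedDataset" if imported else "wrangledDataset"
    return {key: {"id": dataset_id}}

payload = json.dumps(swap_input_body(1, imported=False))
```

`payload` here reproduces the sample request body shown above.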
A writeSetting object defines file-based outputs within an outputObject. Settings include path, format, compression, and delimiters.
To specify multiple outputs, you can include additional writeSetting objects in the request.
For example, if you want to generate output to csv and json, you can duplicate the writeSettings object for csv and change the format value in the second one to json.
Create a new writesetting
ref: createWriteSetting
path required | string The fully qualified path to the output location where the results are written. |
action required | string Enum: "create" "append" "overwrite" If the output file or directory exists, you can specify one of the following actions
|
format required | string Enum: "csv" "json" "avro" "pqt" "hyper" Output format for the results. Specify one of the supported values.
|
compression | string Enum: "none" "gzip" "bzip2" "snappy" For csv and json results,
you can optionally compress them using
|
header | boolean For csv results with action set to |
asSingleFile | boolean For |
delim | string The delimiter between field values in an output row. Only relevant if the chosen |
hasQuotes | boolean If true, each field in the output is wrapped in double-quotes. |
includeMismatches | boolean If true, write out mismatched values as a string. |
outputObjectId | integer outputObject to attach this writeSetting to. |
Array of objects (runParameterDestinationInfo) [ items ] Optional parameters that can be used to parameterize the path |
connectionId | string Internal identifier of the connection to use when writing to a SFTP destination. |
{- "path": "/path/to/file.csv",
- "action": "create",
- "format": "csv",
- "compression": "none",
- "header": true,
- "asSingleFile": true,
- "delim": ",",
- "hasQuotes": true,
- "includeMismatches": true,
- "outputObjectId": 7,
- "runParameters": [
- {
- "insertionIndices": [
- {
- "index": 1,
- "order": 1
}
], - "value": {
- "variable": {
- "value": "string"
}, - "overrideKey": "myVar"
}
}
], - "connectionId": "5"
}
{- "path": "string",
- "action": "create",
- "format": "csv",
- "compression": "none",
- "header": true,
- "asSingleFile": true,
- "delim": ",",
- "hasQuotes": true,
- "includeMismatches": true,
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "connectionId": "25"
}
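The csv-plus-json scenario described in the introduction above — duplicate the writeSetting and change only the format — can be sketched as follows (paths and the outputObjectId are illustrative values from the sample request):

```python
import copy

# A csv writeSetting, matching the fields of the sample request above.
csv_setting = {
    "path": "/path/to/file.csv",   # illustrative output path
    "action": "create",
    "format": "csv",
    "compression": "none",
    "header": True,
    "asSingleFile": True,
    "delim": ",",
    "outputObjectId": 7,           # outputObject to attach to (illustrative)
}

# Duplicate the csv writeSetting and switch the second copy to json.
json_setting = copy.deepcopy(csv_setting)
json_setting["format"] = "json"
json_setting["path"] = "/path/to/file.json"

write_settings = [csv_setting, json_setting]
```

Each entry in `write_settings` would then be submitted as its own createWriteSetting request, producing one output per format.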
List existing write settings
ref: listWriteSettings
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order comma-separated list of fields to match the |
filter | string Example: filter=my-object Value for filtering objects. See |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
{- "data": [
- {
- "path": "string",
- "action": "create",
- "format": "csv",
- "compression": "none",
- "header": true,
- "asSingleFile": true,
- "delim": ",",
- "hasQuotes": true,
- "includeMismatches": true,
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "connectionId": "25"
}
], - "count": 1
}
Count existing write settings
ref: countWriteSettings
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. | |
limit | integer Default: 25 Maximum number of objects to fetch. |
offset | integer Offset after which to start returning objects. For use with |
filterType | string Default: "fuzzy" Defines the filter type, one of ["fuzzy", "contains", "exact", "exactIgnoreCase"]. For use with |
sort | string Example: sort=-createdAt Defines sort order for returned objects |
filterFields | string Default: "name" Example: filterFields=id,order comma-separated list of fields to match the |
filter | string Example: filter=my-object Value for filtering objects. See |
includeCount | boolean If includeCount is true, it will include the total number of objects as a count object in the response |
{- "count": 1
}
Get an existing write setting
ref: getWriteSetting
id required | integer |
fields | string Example: fields=id;name;description Semicolon-separated list of fields |
embed | string Example: embed=association.otherAssociation,anotherAssociation Comma-separated list of objects to pull in as part of the response. See Embedding Resources for more information. |
string or Array of strings Whether to include all or some of the nested deleted objects. |
{- "path": "string",
- "action": "create",
- "format": "csv",
- "compression": "none",
- "header": true,
- "asSingleFile": true,
- "delim": ",",
- "hasQuotes": true,
- "includeMismatches": true,
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "connectionId": "25"
}
Update an existing write setting
ref: updateWriteSetting
id required | integer |
path | string The fully qualified path to the output location where the results are written. |
action | string Enum: "create" "append" "overwrite" If the output file or directory exists, you can specify one of the following actions
|
format | string Enum: "csv" "json" "avro" "pqt" "hyper" Output format for the results. Specify one of the supported values.
|
compression | string Enum: "none" "gzip" "bzip2" "snappy" For csv and json results,
you can optionally compress them using
|
header | boolean For csv results with action set to |
asSingleFile | boolean For |
delim | string The delimiter between field values in an output row. Only relevant if the chosen |
hasQuotes | boolean If true, each field in the output is wrapped in double-quotes. |
includeMismatches | boolean If true, write out mismatched values as a string. |
connectionId | string Internal identifier of the connection to use when writing to a SFTP destination. |
{- "path": "string",
- "action": "create",
- "format": "csv",
- "compression": "none",
- "header": true,
- "asSingleFile": true,
- "delim": ",",
- "hasQuotes": true,
- "includeMismatches": true,
- "connectionId": "25"
}
{- "path": "string",
- "action": "create",
- "format": "csv",
- "compression": "none",
- "header": true,
- "asSingleFile": true,
- "delim": ",",
- "hasQuotes": true,
- "includeMismatches": true,
- "id": 1,
- "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z",
- "creator": {
- "id": 1
}, - "updater": {
- "id": 1
}, - "connectionId": "25"
}
Patch an existing write setting
ref: patchWriteSetting
id required | integer |
path | string The fully qualified path to the output location where the results are written. |
action | string Enum: "create" "append" "overwrite" If the output file or directory exists, you can specify one of the following actions
|
format | string Enum: "csv" "json" "avro" "pqt" "hyper" Output format for the results. Specify one of the supported values.
|
compression | string Enum: "none" "gzip" "bzip2" "snappy" For csv and json results,
you can optionally compress them using
|
header | boolean For csv results with action set to |
asSingleFile | boolean For |
delim | string The delimiter between field values in an output row. Only relevant if the chosen |
hasQuotes | boolean If true, each field in the output is wrapped in double-quotes. |
includeMismatches | boolean If true, write out mismatched values as a string. |
connectionId | string Internal identifier of the connection to use when writing to a SFTP destination. |
{- "path": "string",
- "action": "create",
- "format": "csv",
- "compression": "none",
- "header": true,
- "asSingleFile": true,
- "delim": ",",
- "hasQuotes": true,
- "includeMismatches": true,
- "connectionId": "25"
}
{- "id": 1,
- "updater": {
- "id": 1
}, - "createdAt": "2019-08-24T14:15:22Z",
- "updatedAt": "2019-08-24T14:15:22Z"
}
Delete an existing write setting
ref: deleteWriteSetting
id required | integer |