Creates a new job to transform input data, using steps defined in an existing Glue DataBrew recipe.
See also: AWS API Documentation
See ‘aws help’ for descriptions of global parameters.
create-recipe-job
[--dataset-name <value>]
[--encryption-key-arn <value>]
[--encryption-mode <value>]
--name <value>
[--log-subscription <value>]
[--max-capacity <value>]
[--max-retries <value>]
[--outputs <value>]
[--data-catalog-outputs <value>]
[--database-outputs <value>]
[--project-name <value>]
[--recipe-reference <value>]
--role-arn <value>
[--tags <value>]
[--timeout <value>]
[--cli-input-json | --cli-input-yaml]
[--generate-cli-skeleton <value>]
--dataset-name
(string)
The name of the dataset that this job processes.
--encryption-key-arn
(string)
The Amazon Resource Name (ARN) of an encryption key that is used to protect the job.
--encryption-mode
(string)
The encryption mode for the job, which can be one of the following:
SSE-KMS - Server-side encryption with keys managed by KMS.
SSE-S3 - Server-side encryption with keys managed by Amazon S3.
Possible values:
SSE-KMS
SSE-S3
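For example, to protect job output with a customer managed KMS key, the two encryption options might be combined as shown below; the key ARN is a hypothetical placeholder:
--encryption-mode SSE-KMS --encryption-key-arn arn:aws:kms:us-east-1:111122223333:key/example-key-id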
--name
(string)
A unique name for the job. Valid characters are alphanumeric (A-Z, a-z, 0-9), hyphen (-), period (.), and space.
--log-subscription
(string)
Enables or disables Amazon CloudWatch logging for the job. If logging is enabled, CloudWatch writes one log stream for each job run.
Possible values:
ENABLE
DISABLE
--max-capacity
(integer)
The maximum number of nodes that DataBrew can consume when the job processes data.
--max-retries
(integer)
The maximum number of times to retry the job after a job run fails.
--outputs
(list)
One or more artifacts that represent the output from running the job.
(structure)
Represents options that specify how and where in Amazon S3 DataBrew writes the output generated by recipe jobs or profile jobs.
CompressionFormat -> (string)
The compression algorithm used to compress the output text of the job.
Format -> (string)
The data format of the output of the job.
PartitionColumns -> (list)
The names of one or more partition columns for the output of the job.
(string)
Location -> (structure)
The location in Amazon S3 where the job writes its output.
Bucket -> (string)
The Amazon S3 bucket name.
Key -> (string)
The unique name of the object in the bucket.
BucketOwner -> (string)
The Amazon Web Services account ID of the bucket owner.
Overwrite -> (boolean)
A value that, if true, means that any data in the location specified for output is overwritten with new output.
FormatOptions -> (structure)
Represents options that define how DataBrew formats job output files.
Csv -> (structure)
Represents a set of options that define the structure of comma-separated value (CSV) job output.
Delimiter -> (string)
A single character that specifies the delimiter used to create CSV job output.
MaxOutputFiles -> (integer)
Maximum number of files to be generated by the job and written to the output folder. For output partitioned by column(s), the MaxOutputFiles value is the maximum number of files per partition.
Shorthand Syntax:
CompressionFormat=string,Format=string,PartitionColumns=string,string,Location={Bucket=string,Key=string,BucketOwner=string},Overwrite=boolean,FormatOptions={Csv={Delimiter=string}},MaxOutputFiles=integer ...
JSON Syntax:
[
{
"CompressionFormat": "GZIP"|"LZ4"|"SNAPPY"|"BZIP2"|"DEFLATE"|"LZO"|"BROTLI"|"ZSTD"|"ZLIB",
"Format": "CSV"|"JSON"|"PARQUET"|"GLUEPARQUET"|"AVRO"|"ORC"|"XML"|"TABLEAUHYPER",
"PartitionColumns": ["string", ...],
"Location": {
"Bucket": "string",
"Key": "string",
"BucketOwner": "string"
},
"Overwrite": true|false,
"FormatOptions": {
"Csv": {
"Delimiter": "string"
}
},
"MaxOutputFiles": integer
}
...
]
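As an illustration, a single S3 output written as GZIP-compressed CSV might be supplied in shorthand form as follows; the bucket name and key prefix are hypothetical placeholders:
--outputs 'CompressionFormat=GZIP,Format=CSV,Location={Bucket=amzn-s3-demo-bucket,Key=databrew/output/},Overwrite=true,MaxOutputFiles=1'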
--data-catalog-outputs
(list)
One or more artifacts that represent the Glue Data Catalog output from running the job.
(structure)
Represents options that specify how and where in the Glue Data Catalog DataBrew writes the output generated by recipe jobs.
CatalogId -> (string)
The unique identifier of the Amazon Web Services account that holds the Data Catalog that stores the data.
DatabaseName -> (string)
The name of a database in the Data Catalog.
TableName -> (string)
The name of a table in the Data Catalog.
S3Options -> (structure)
Represents options that specify how and where DataBrew writes the Amazon S3 output generated by recipe jobs.
Location -> (structure)
Represents an Amazon S3 location (bucket name and object key) where DataBrew can write output from a job.
Bucket -> (string)
The Amazon S3 bucket name.
Key -> (string)
The unique name of the object in the bucket.
BucketOwner -> (string)
The Amazon Web Services account ID of the bucket owner.
DatabaseOptions -> (structure)
Represents options that specify how and where DataBrew writes the database output generated by recipe jobs.
TempDirectory -> (structure)
Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.
Bucket -> (string)
The Amazon S3 bucket name.
Key -> (string)
The unique name of the object in the bucket.
BucketOwner -> (string)
The Amazon Web Services account ID of the bucket owner.
TableName -> (string)
A prefix for the name of a table DataBrew will create in the database.
Overwrite -> (boolean)
A value that, if true, means that any data in the location specified for output is overwritten with new output. Not supported with DatabaseOptions.
Shorthand Syntax:
CatalogId=string,DatabaseName=string,TableName=string,S3Options={Location={Bucket=string,Key=string,BucketOwner=string}},DatabaseOptions={TempDirectory={Bucket=string,Key=string,BucketOwner=string},TableName=string},Overwrite=boolean ...
JSON Syntax:
[
{
"CatalogId": "string",
"DatabaseName": "string",
"TableName": "string",
"S3Options": {
"Location": {
"Bucket": "string",
"Key": "string",
"BucketOwner": "string"
}
},
"DatabaseOptions": {
"TempDirectory": {
"Bucket": "string",
"Key": "string",
"BucketOwner": "string"
},
"TableName": "string"
},
"Overwrite": true|false
}
...
]
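For instance, writing results to a Data Catalog table backed by an S3 location could be expressed in shorthand like this; the database name, table name, and bucket are hypothetical placeholders:
--data-catalog-outputs 'DatabaseName=sales-db,TableName=cleaned-orders,S3Options={Location={Bucket=amzn-s3-demo-bucket,Key=databrew/catalog-output/}},Overwrite=true'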
--database-outputs
(list)
Represents a list of JDBC database output objects that define the output destinations for a DataBrew recipe job to write to.
(structure)
Represents a JDBC database output object which defines the output destination for a DataBrew recipe job to write into.
GlueConnectionName -> (string)
The Glue connection that stores the connection information for the target database.
DatabaseOptions -> (structure)
Represents options that specify how and where DataBrew writes the database output generated by recipe jobs.
TempDirectory -> (structure)
Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.
Bucket -> (string)
The Amazon S3 bucket name.
Key -> (string)
The unique name of the object in the bucket.
BucketOwner -> (string)
The Amazon Web Services account ID of the bucket owner.
TableName -> (string)
A prefix for the name of a table DataBrew will create in the database.
DatabaseOutputMode -> (string)
The output mode to write into the database. Currently supported option: NEW_TABLE.
Shorthand Syntax:
GlueConnectionName=string,DatabaseOptions={TempDirectory={Bucket=string,Key=string,BucketOwner=string},TableName=string},DatabaseOutputMode=string ...
JSON Syntax:
[
{
"GlueConnectionName": "string",
"DatabaseOptions": {
"TempDirectory": {
"Bucket": "string",
"Key": "string",
"BucketOwner": "string"
},
"TableName": "string"
},
"DatabaseOutputMode": "NEW_TABLE"
}
...
]
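As a sketch, writing to a JDBC target through an existing Glue connection might look like the following; the connection name, bucket, and table-name prefix are hypothetical placeholders:
--database-outputs 'GlueConnectionName=my-jdbc-connection,DatabaseOptions={TempDirectory={Bucket=amzn-s3-demo-bucket,Key=databrew/temp/},TableName=databrew_output_},DatabaseOutputMode=NEW_TABLE'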
--project-name
(string)
Either the name of an existing project, or a combination of a recipe and a dataset to associate with the recipe.
--recipe-reference
(structure)
Represents the name and version of a DataBrew recipe.
Name -> (string)
The name of the recipe.
RecipeVersion -> (string)
The identifier for the version of the recipe.
Shorthand Syntax:
Name=string,RecipeVersion=string
JSON Syntax:
{
"Name": "string",
"RecipeVersion": "string"
}
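For example, to run version 1.1 of a recipe named my-recipe (a hypothetical name), the reference could be given in shorthand as:
--recipe-reference Name=my-recipe,RecipeVersion=1.1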
--role-arn
(string)
The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role to be assumed when DataBrew runs the job.
--tags
(map)
Metadata tags to apply to this job.
key -> (string)
value -> (string)
Shorthand Syntax:
KeyName1=string,KeyName2=string
JSON Syntax:
{"string": "string"
...}
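For example, two tags might be attached in shorthand form; the key names and values are hypothetical:
--tags Department=Analytics,Environment=Test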
--timeout
(integer)
The job’s timeout in minutes. A job that attempts to run longer than this timeout period ends with a status of TIMEOUT.
--cli-input-json | --cli-input-yaml
(string)
Reads arguments from the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, those values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally. This may not be specified along with --cli-input-yaml.
--generate-cli-skeleton
(string)
Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. Similarly, if provided yaml-input it will print a sample input YAML that can be used with --cli-input-yaml. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command.
See ‘aws help’ for descriptions of global parameters.
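As a complete sketch, the following command creates a recipe job that applies an existing recipe to a named dataset and writes CSV output to Amazon S3. The job name, dataset name, recipe name, bucket, account ID, and role ARN are all hypothetical placeholders:
# All names, the account ID, and the role ARN below are hypothetical placeholders.
aws databrew create-recipe-job \
    --name my-recipe-job \
    --dataset-name my-dataset \
    --recipe-reference Name=my-recipe,RecipeVersion=1.1 \
    --role-arn arn:aws:iam::111122223333:role/DataBrewServiceRole \
    --outputs 'Format=CSV,Location={Bucket=amzn-s3-demo-bucket,Key=databrew/output/}' \
    --max-retries 1 \
    --timeout 2880
If the call succeeds, DataBrew returns the name of the newly created job.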