
create-data-set-import-task

Description

Starts a data set import task for a specific application.

See also: AWS API Documentation

See ‘aws help’ for descriptions of global parameters.

Synopsis

  create-data-set-import-task
--application-id <value>
[--client-token <value>]
--import-config <value>
[--cli-input-json | --cli-input-yaml]
[--generate-cli-skeleton <value>]

Options

--application-id (string)

The unique identifier of the application for which you want to import data sets.

--client-token (string)

Unique, case-sensitive identifier you provide to ensure the idempotency of the request to create a data set import. If you do not provide one, the service generates a clientToken when the API call is triggered. The token expires after one hour, so if you retry the API within this timeframe with the same clientToken, you will get the same response. The service also deletes the clientToken after it expires.
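
For example, supplying your own token lets a retried call within that hour return the original response instead of starting a second import. A minimal sketch, where the application ID, token, and file name are placeholders:

aws m2 create-data-set-import-task \
    --application-id <application-id> \
    --client-token 1a2b3c4d-5e6f-7a8b-9c0d-ef1234567890 \
    --import-config file://import-config.json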

--import-config (structure)

The data set import task configuration.

dataSets -> (list)

The data sets.

(structure)

Identifies a specific data set to import from an external location.

dataSet -> (structure)

The data set.

datasetName -> (string)

The logical identifier for a specific data set (in mainframe format).

datasetOrg -> (structure)

The type of data set. Possible values include VSAM, IS, PS, GDG, PO, and UNKNOWN.

gdg -> (structure)

The generation data group of the data set.

limit -> (integer)

The maximum number of generation data sets, up to 255, in a GDG.

rollDisposition -> (string)

The disposition of the data set in the catalog.

vsam -> (structure)

The details of a VSAM data set.

alternateKeys -> (list)

The alternate key definitions, if any. A legacy data set might not have any alternate keys defined, but if such definitions exist, provide them, as some applications make use of them.

(structure)

Defines an alternate key. This value is optional. A legacy data set might not have any alternate keys defined, but if such definitions exist, provide them, as some applications make use of them.

allowDuplicates -> (boolean)

Indicates whether duplicate values are allowed for this alternate key in the given data set.

length -> (integer)

A strictly positive integer value representing the length of the alternate key.

name -> (string)

The name of the alternate key.

offset -> (integer)

A positive integer value representing the offset to mark the start of the alternate key part in the record byte array.

compressed -> (boolean)

Indicates whether indexes for this data set are stored as compressed values. If you have a large data set (typically larger than 100 MB), consider setting this flag to true.

encoding -> (string)

The character set used by the data set. Can be ASCII, EBCDIC, or unknown.

format -> (string)

The record format of the data set.

primaryKey -> (structure)

The primary key of the data set.

length -> (integer)

A strictly positive integer value representing the length of the primary key.

name -> (string)

The name of the primary key.

offset -> (integer)

A positive integer value representing the offset to mark the start of the primary key in the record byte array.

recordLength -> (structure)

The length of a record.

max -> (integer)

The maximum record length. For fixed-length records, the minimum and maximum are the same.

min -> (integer)

The minimum record length.

relativePath -> (string)

The relative location of the data set in the database or file system.

storageType -> (string)

The storage type of the data set: database or file system. For the Micro Focus runtime, database corresponds to datastore and file system corresponds to EFS/FSx. For the Blu Age runtime, file system is not supported, and database corresponds to Blusam.

externalLocation -> (structure)

The location of the data set.

s3Location -> (string)

The URI of the Amazon S3 bucket.

s3Location -> (string)

The Amazon S3 location of the data sets.

JSON Syntax:

{
  "dataSets": [
    {
      "dataSet": {
        "datasetName": "string",
        "datasetOrg": {
          "gdg": {
            "limit": integer,
            "rollDisposition": "string"
          },
          "vsam": {
            "alternateKeys": [
              {
                "allowDuplicates": true|false,
                "length": integer,
                "name": "string",
                "offset": integer
              }
              ...
            ],
            "compressed": true|false,
            "encoding": "string",
            "format": "string",
            "primaryKey": {
              "length": integer,
              "name": "string",
              "offset": integer
            }
          }
        },
        "recordLength": {
          "max": integer,
          "min": integer
        },
        "relativePath": "string",
        "storageType": "string"
      },
      "externalLocation": {
        "s3Location": "string"
      }
    }
    ...
  ],
  "s3Location": "string"
}
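
For illustration, one way to invoke the command is to store a configuration that follows the syntax above in a local file and pass it with the file:// prefix. This is a sketch only: the application ID, data set name, S3 paths, and field values such as format, encoding, and storageType are hypothetical placeholders rather than values confirmed by this page.

aws m2 create-data-set-import-task \
    --application-id <application-id> \
    --import-config file://import-config.json

where import-config.json might contain:

{
  "dataSets": [
    {
      "dataSet": {
        "datasetName": "MYAPP.CUSTOMER.KSDS",
        "datasetOrg": {
          "vsam": {
            "encoding": "EBCDIC",
            "format": "KS",
            "primaryKey": {
              "length": 8,
              "name": "CUSTKEY",
              "offset": 0
            }
          }
        },
        "recordLength": {
          "max": 200,
          "min": 200
        },
        "relativePath": "data",
        "storageType": "Database"
      },
      "externalLocation": {
        "s3Location": "s3://amzn-s3-demo-bucket/import/MYAPP.CUSTOMER.KSDS"
      }
    }
  ]
}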

--cli-input-json | --cli-input-yaml (string) Reads arguments from the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, those values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally. This may not be specified along with --cli-input-yaml.

--generate-cli-skeleton (string) Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. Similarly, if provided yaml-input it will print a sample input YAML that can be used with --cli-input-yaml. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command.
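
For example, a common workflow is to generate an input skeleton, edit it, and feed it back to the command. The file name below is arbitrary:

aws m2 create-data-set-import-task --generate-cli-skeleton input > import-task-input.json
# edit import-task-input.json to fill in your values, then:
aws m2 create-data-set-import-task --cli-input-json file://import-task-input.json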

See ‘aws help’ for descriptions of global parameters.

Output

taskId -> (string)

The task identifier. This operation is asynchronous. Use this identifier with the GetDataSetImportTask operation to obtain the status of this task.
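
Because the import runs asynchronously, you typically poll the task with the get-data-set-import-task command until it completes. A minimal sketch, with placeholder IDs:

aws m2 get-data-set-import-task \
    --application-id <application-id> \
    --task-id <task-id>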