Modifies the specified endpoint.
See also: AWS API Documentation
See ‘aws help’ for descriptions of global parameters.
modify-endpoint
--endpoint-arn <value>
[--endpoint-identifier <value>]
[--endpoint-type <value>]
[--engine-name <value>]
[--username <value>]
[--password <value>]
[--server-name <value>]
[--port <value>]
[--database-name <value>]
[--extra-connection-attributes <value>]
[--certificate-arn <value>]
[--ssl-mode <value>]
[--service-access-role-arn <value>]
[--external-table-definition <value>]
[--dynamo-db-settings <value>]
[--s3-settings <value>]
[--dms-transfer-settings <value>]
[--mongo-db-settings <value>]
[--kinesis-settings <value>]
[--kafka-settings <value>]
[--elasticsearch-settings <value>]
[--neptune-settings <value>]
[--redshift-settings <value>]
[--cli-input-json | --cli-input-yaml]
[--generate-cli-skeleton <value>]
[--cli-auto-prompt <value>]
--endpoint-arn
(string)
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
--endpoint-identifier
(string)
The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can’t end with a hyphen or contain two consecutive hyphens.
--endpoint-type
(string)
The type of endpoint. Valid values are source and target.
Possible values:
source
target
--engine-name
(string)
The type of engine for the endpoint. Valid values, depending on the EndpointType, include "mysql", "oracle", "postgres", "mariadb", "aurora", "aurora-postgresql", "redshift", "s3", "db2", "azuredb", "sybase", "dynamodb", "mongodb", "kinesis", "kafka", "elasticsearch", "documentdb", "sqlserver", and "neptune".
--username
(string)
The user name to be used to log in to the endpoint database.
--password
(string)
The password to be used to log in to the endpoint database.
--server-name
(string)
The name of the server where the endpoint database resides.
--port
(integer)
The port used by the endpoint database.
--database-name
(string)
The name of the endpoint database.
--extra-connection-attributes
(string)
Additional attributes associated with the connection. To reset this parameter, pass the empty string (“”) as an argument.
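For example, the following call (the endpoint ARN is a placeholder) clears any previously set attributes by passing the empty string:
aws dms modify-endpoint \
    --endpoint-arn "arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEARN" \
    --extra-connection-attributes ""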
--certificate-arn
(string)
The Amazon Resource Name (ARN) of the certificate used for SSL connection.
--ssl-mode
(string)
The SSL mode used to connect to the endpoint. The default value is none.
Possible values:
none
require
verify-ca
verify-full
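As an illustrative sketch (both ARNs are placeholders), you might enforce full SSL verification by pairing --ssl-mode with --certificate-arn:
aws dms modify-endpoint \
    --endpoint-arn "arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEARN" \
    --ssl-mode verify-full \
    --certificate-arn "arn:aws:dms:us-east-1:123456789012:cert:EXAMPLECERT"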
--service-access-role-arn
(string)
The Amazon Resource Name (ARN) for the service access role you want to use to modify the endpoint.
--external-table-definition
(string)
The external table definition.
--dynamo-db-settings
(structure)
Settings in JSON format for the target Amazon DynamoDB endpoint. For information about other available settings, see Using Object Mapping to Migrate Data to DynamoDB in the AWS Database Migration Service User Guide.
ServiceAccessRoleArn -> (string)
The Amazon Resource Name (ARN) used by the service access IAM role.
Shorthand Syntax:
ServiceAccessRoleArn=string
JSON Syntax:
{
"ServiceAccessRoleArn": "string"
}
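For example, a minimal call using the shorthand syntax might look like the following (both ARNs are placeholders):
aws dms modify-endpoint \
    --endpoint-arn "arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEARN" \
    --dynamo-db-settings ServiceAccessRoleArn=arn:aws:iam::123456789012:role/example-dms-dynamodb-role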
--s3-settings
(structure)
Settings in JSON format for the target Amazon S3 endpoint. For more information about the available settings, see Extra Connection Attributes When Using Amazon S3 as a Target for AWS DMS in the AWS Database Migration Service User Guide.
ServiceAccessRoleArn -> (string)
The Amazon Resource Name (ARN) used by the service access IAM role.
ExternalTableDefinition -> (string)
The external table definition.
CsvRowDelimiter -> (string)
The delimiter used to separate rows in the source files. The default is a newline (\n).
CsvDelimiter -> (string)
The delimiter used to separate columns in the source files. The default is a comma.
BucketFolder -> (string)
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter isn't specified, then the path used is schema_name/table_name/.
BucketName -> (string)
The name of the S3 bucket.
CompressionType -> (string)
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don’t use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
EncryptionMode -> (string)
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. To use SSE_S3, you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:
s3:CreateBucket
s3:ListBucket
s3:DeleteBucket
s3:GetBucketLocation
s3:GetObject
s3:PutObject
s3:DeleteObject
s3:GetObjectVersion
s3:GetBucketPolicy
s3:PutBucketPolicy
s3:DeleteBucketPolicy
ServerSideEncryptionKmsKeyId -> (string)
If you are using SSE_KMS for the EncryptionMode, provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.
Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
DataFormat -> (string)
The format of the data that you want to use for output. You can choose one of the following:
csv : This is a row-based file format with comma-separated values (.csv).
parquet : Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.
EncodingType -> (string)
The type of encoding you are using:
RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default.
PLAIN doesn't use encoding at all. Values are stored as they are.
PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
DictPageSizeLimit -> (integer)
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of PLAIN. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for .parquet file format only.
RowGroupLength -> (integer)
The number of rows in a row group. A smaller row group size provides faster reads, but as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.
If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).
DataPageSize -> (integer)
The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
ParquetVersion -> (string)
The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0.
EnableStatistics -> (boolean)
A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL, DISTINCT, MAX, and MIN values. This parameter defaults to true. This value is used for .parquet file format only.
IncludeOpForFullLoad -> (boolean)
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
Note
AWS DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
Note
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
CdcInsertsOnly -> (boolean)
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
Note
AWS DMS supports the interaction described preceding between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
TimestampColumnName -> (string)
A value that when nonblank causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
Note
AWS DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.
DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
When the AddColumnName parameter is set to true, DMS also includes a name for the timestamp column that you set with TimestampColumnName.
ParquetTimestampInMillisecond -> (boolean)
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
Note
AWS DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y, AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.
Note
AWS DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
CdcInsertsAndUpdates -> (boolean)
A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false, but when CdcInsertsAndUpdates is set to true or y, INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
Note
AWS DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
Shorthand Syntax:
ServiceAccessRoleArn=string,ExternalTableDefinition=string,CsvRowDelimiter=string,CsvDelimiter=string,BucketFolder=string,BucketName=string,CompressionType=string,EncryptionMode=string,ServerSideEncryptionKmsKeyId=string,DataFormat=string,EncodingType=string,DictPageSizeLimit=integer,RowGroupLength=integer,DataPageSize=integer,ParquetVersion=string,EnableStatistics=boolean,IncludeOpForFullLoad=boolean,CdcInsertsOnly=boolean,TimestampColumnName=string,ParquetTimestampInMillisecond=boolean,CdcInsertsAndUpdates=boolean
JSON Syntax:
{
"ServiceAccessRoleArn": "string",
"ExternalTableDefinition": "string",
"CsvRowDelimiter": "string",
"CsvDelimiter": "string",
"BucketFolder": "string",
"BucketName": "string",
"CompressionType": "none"|"gzip",
"EncryptionMode": "sse-s3"|"sse-kms",
"ServerSideEncryptionKmsKeyId": "string",
"DataFormat": "csv"|"parquet",
"EncodingType": "plain"|"plain-dictionary"|"rle-dictionary",
"DictPageSizeLimit": integer,
"RowGroupLength": integer,
"DataPageSize": integer,
"ParquetVersion": "parquet-1-0"|"parquet-2-0",
"EnableStatistics": true|false,
"IncludeOpForFullLoad": true|false,
"CdcInsertsOnly": true|false,
"TimestampColumnName": "string",
"ParquetTimestampInMillisecond": true|false,
"CdcInsertsAndUpdates": true|false
}
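For example, the following sketch (ARNs, bucket name, and endpoint are placeholders) switches an S3 target to Parquet output with statistics enabled:
aws dms modify-endpoint \
    --endpoint-arn "arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEARN" \
    --s3-settings ServiceAccessRoleArn=arn:aws:iam::123456789012:role/example-s3-role,BucketName=example-bucket,DataFormat=parquet,ParquetVersion=parquet-2-0,EnableStatistics=true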
--dms-transfer-settings
(structure)
The settings in JSON format for the DMS transfer type of source endpoint.
Attributes include the following:
ServiceAccessRoleArn - The AWS Identity and Access Management (IAM) role that has permission to access the Amazon S3 bucket.
BucketName - The name of the S3 bucket to use.
CompressionType - An optional parameter to use GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed.
Shorthand syntax for these settings is as follows:
ServiceAccessRoleArn=string,BucketName=string,CompressionType=string
JSON syntax for these settings is as follows:
{ "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }
ServiceAccessRoleArn -> (string)
The IAM role that has permission to access the Amazon S3 bucket.
BucketName -> (string)
The name of the S3 bucket to use.
Shorthand Syntax:
ServiceAccessRoleArn=string,BucketName=string
JSON Syntax:
{
"ServiceAccessRoleArn": "string",
"BucketName": "string"
}
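A minimal sketch using the shorthand syntax (role ARN, bucket name, and endpoint ARN are placeholders):
aws dms modify-endpoint \
    --endpoint-arn "arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEARN" \
    --dms-transfer-settings ServiceAccessRoleArn=arn:aws:iam::123456789012:role/example-s3-role,BucketName=example-bucket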
--mongo-db-settings
(structure)
Settings in JSON format for the source MongoDB endpoint. For more information about the available settings, see the configuration properties section in Using MongoDB as a Source for AWS Database Migration Service in the AWS Database Migration Service User Guide.
Username -> (string)
The user name you use to access the MongoDB source endpoint.
Password -> (string)
The password for the user account you use to access the MongoDB source endpoint.
ServerName -> (string)
The name of the server on the MongoDB source endpoint.
Port -> (integer)
The port value for the MongoDB source endpoint.
DatabaseName -> (string)
The database name on the MongoDB source endpoint.
AuthType -> (string)
The authentication type you use to access the MongoDB source endpoint.
When set to "no", user name and password parameters are not used and can be empty.
AuthMechanism -> (string)
The authentication mechanism you use to access the MongoDB source endpoint.
For the default value, in MongoDB version 2.x, "default" is "mongodb_cr". For MongoDB version 3.x or later, "default" is "scram_sha_1". This setting isn't used when AuthType is set to "no".
NestingLevel -> (string)
Specifies either document or table mode.
Default value is "none". Specify "none" to use document mode. Specify "one" to use table mode.
ExtractDocId -> (string)
Specifies the document ID. Use this setting when NestingLevel is set to "none".
Default value is "false".
DocsToInvestigate -> (string)
Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to "one".
Must be a positive value greater than 0. Default value is 1000.
AuthSource -> (string)
The MongoDB database name. This setting isn't used when AuthType is set to "no".
The default is "admin".
KmsKeyId -> (string)
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
Shorthand Syntax:
Username=string,Password=string,ServerName=string,Port=integer,DatabaseName=string,AuthType=string,AuthMechanism=string,NestingLevel=string,ExtractDocId=string,DocsToInvestigate=string,AuthSource=string,KmsKeyId=string
JSON Syntax:
{
"Username": "string",
"Password": "string",
"ServerName": "string",
"Port": integer,
"DatabaseName": "string",
"AuthType": "no"|"password",
"AuthMechanism": "default"|"mongodb_cr"|"scram_sha_1",
"NestingLevel": "none"|"one",
"ExtractDocId": "string",
"DocsToInvestigate": "string",
"AuthSource": "string",
"KmsKeyId": "string"
}
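For example, a hypothetical call (server, credentials, and ARN are placeholders) that puts a MongoDB source endpoint into table mode with password authentication:
aws dms modify-endpoint \
    --endpoint-arn "arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEARN" \
    --mongo-db-settings ServerName=mongodb.example.com,Port=27017,DatabaseName=exampledb,AuthType=password,AuthMechanism=scram_sha_1,Username=exampleuser,Password=examplepassword,NestingLevel=one,DocsToInvestigate=1000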
--kinesis-settings
(structure)
Settings in JSON format for the target endpoint for Amazon Kinesis Data Streams. For more information about the available settings, see Using Amazon Kinesis Data Streams as a Target for AWS Database Migration Service in the AWS Database Migration Service User Guide.
StreamArn -> (string)
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
MessageFormat -> (string)
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
ServiceAccessRoleArn -> (string)
The Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role that AWS DMS uses to write to the Kinesis data stream.
IncludeTransactionDetails -> (boolean)
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is False.
IncludePartitionValue -> (boolean)
Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type. The default is False.
PartitionIncludeSchemaTable -> (boolean)
Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only a limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is False.
IncludeTableAlterOperations -> (boolean)
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is False.
IncludeControlDetails -> (boolean)
Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is False.
Shorthand Syntax:
StreamArn=string,MessageFormat=string,ServiceAccessRoleArn=string,IncludeTransactionDetails=boolean,IncludePartitionValue=boolean,PartitionIncludeSchemaTable=boolean,IncludeTableAlterOperations=boolean,IncludeControlDetails=boolean
JSON Syntax:
{
"StreamArn": "string",
"MessageFormat": "json"|"json-unformatted",
"ServiceAccessRoleArn": "string",
"IncludeTransactionDetails": true|false,
"IncludePartitionValue": true|false,
"PartitionIncludeSchemaTable": true|false,
"IncludeTableAlterOperations": true|false,
"IncludeControlDetails": true|false
}
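For example, the following sketch (stream and role ARNs are placeholders) points a Kinesis target at a stream and includes partition values in the output:
aws dms modify-endpoint \
    --endpoint-arn "arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEARN" \
    --kinesis-settings StreamArn=arn:aws:kinesis:us-east-1:123456789012:stream/example-stream,MessageFormat=json,ServiceAccessRoleArn=arn:aws:iam::123456789012:role/example-kinesis-role,IncludePartitionValue=true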
--kafka-settings
(structure)
Settings in JSON format for the target Apache Kafka endpoint. For more information about the available settings, see Using Apache Kafka as a Target for AWS Database Migration Service in the AWS Database Migration Service User Guide.
Broker -> (string)
The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345".
Topic -> (string)
The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.
Shorthand Syntax:
Broker=string,Topic=string
JSON Syntax:
{
"Broker": "string",
"Topic": "string"
}
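For example (broker address, topic, and endpoint ARN are placeholders):
aws dms modify-endpoint \
    --endpoint-arn "arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEARN" \
    --kafka-settings Broker=kafka-broker.example.com:9092,Topic=example-dms-topic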
--elasticsearch-settings
(structure)
Settings in JSON format for the target Elasticsearch endpoint. For more information about the available settings, see Extra Connection Attributes When Using Elasticsearch as a Target for AWS DMS in the AWS Database Migration Service User Guide.
ServiceAccessRoleArn -> (string)
The Amazon Resource Name (ARN) used by the service to access the IAM role.
EndpointUri -> (string)
The endpoint for the Elasticsearch cluster.
FullLoadErrorPercentage -> (integer)
The maximum percentage of records that can fail to be written before a full load operation stops.
ErrorRetryDuration -> (integer)
The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
Shorthand Syntax:
ServiceAccessRoleArn=string,EndpointUri=string,FullLoadErrorPercentage=integer,ErrorRetryDuration=integer
JSON Syntax:
{
"ServiceAccessRoleArn": "string",
"EndpointUri": "string",
"FullLoadErrorPercentage": integer,
"ErrorRetryDuration": integer
}
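For example, a hypothetical call (role ARN, cluster URI, and endpoint ARN are placeholders) that tolerates up to 10 percent failed records and retries for up to 300 seconds:
aws dms modify-endpoint \
    --endpoint-arn "arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEARN" \
    --elasticsearch-settings ServiceAccessRoleArn=arn:aws:iam::123456789012:role/example-es-role,EndpointUri=https://search-example.us-east-1.es.amazonaws.com,FullLoadErrorPercentage=10,ErrorRetryDuration=300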
--neptune-settings
(structure)
Settings in JSON format for the target Amazon Neptune endpoint. For more information about the available settings, see Specifying Endpoint Settings for Amazon Neptune as a Target in the AWS Database Migration Service User Guide.
ServiceAccessRoleArn -> (string)
The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the AWS Database Migration Service User Guide.
S3BucketName -> (string)
The name of the Amazon S3 bucket where AWS DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these .csv files.
S3BucketFolder -> (string)
A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName.
ErrorRetryDuration -> (integer)
The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
MaxFileSize -> (integer)
The maximum size in kilobytes of migrated graph data stored in a .csv file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.
MaxRetryCount -> (integer)
The number of times for AWS DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.
IamAuthEnabled -> (boolean)
If you want AWS Identity and Access Management (IAM) authorization enabled for this endpoint, set this parameter to true. Then attach the appropriate IAM policy document to your service role specified by ServiceAccessRoleArn. The default is false.
Shorthand Syntax:
ServiceAccessRoleArn=string,S3BucketName=string,S3BucketFolder=string,ErrorRetryDuration=integer,MaxFileSize=integer,MaxRetryCount=integer,IamAuthEnabled=boolean
JSON Syntax:
{
"ServiceAccessRoleArn": "string",
"S3BucketName": "string",
"S3BucketFolder": "string",
"ErrorRetryDuration": integer,
"MaxFileSize": integer,
"MaxRetryCount": integer,
"IamAuthEnabled": true|false
}
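For example, a sketch with placeholder names that stages graph data in S3 and enables IAM authorization:
aws dms modify-endpoint \
    --endpoint-arn "arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEARN" \
    --neptune-settings ServiceAccessRoleArn=arn:aws:iam::123456789012:role/example-neptune-role,S3BucketName=example-staging-bucket,S3BucketFolder=neptune-staging,MaxRetryCount=5,IamAuthEnabled=true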
--redshift-settings
(structure)
Provides information that defines an Amazon Redshift endpoint.
AcceptAnyDate -> (boolean)
A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
AfterConnectScript -> (string)
Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
BucketFolder -> (string)
The location where the comma-separated value (.csv) files are stored before being uploaded to the S3 bucket.
BucketName -> (string)
The name of the S3 bucket you want to use.
ConnectionTimeout -> (integer)
A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
DatabaseName -> (string)
The name of the Amazon Redshift data warehouse (service) that you are working with.
DateFormat -> (string)
The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.
If your date and time values use formats different from each other, set this to auto.
EmptyAsNull -> (boolean)
A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false.
EncryptionMode -> (string)
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. To use SSE_S3, create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
FileTransferUploadStreams -> (integer)
The number of threads used to upload a single file. This parameter accepts a value from 1 through 64. It defaults to 10.
LoadTimeout -> (integer)
The amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.
MaxFileSize -> (integer)
The maximum size (in KB) of any .csv file used to transfer data to Amazon Redshift. This accepts a value from 1 through 1,048,576. It defaults to 32,768 KB (32 MB).
Password -> (string)
The password for the user named in the username property.
Port -> (integer)
The port number for Amazon Redshift. The default value is 5439.
RemoveQuotes -> (boolean)
A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false.
ReplaceInvalidChars -> (string)
A list of characters that you want to replace. Use with ReplaceChars.
ReplaceChars -> (string)
A value that specifies to replace the invalid characters specified in ReplaceInvalidChars, substituting the specified characters instead. The default is "?".
ServerName -> (string)
The name of the Amazon Redshift cluster you are using.
ServiceAccessRoleArn -> (string)
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.
ServerSideEncryptionKmsKeyId -> (string)
The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
TimeFormat -> (string)
The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. Using auto recognizes most strings, even some that aren't supported when you use a time format string.
If your date and time values use formats different from each other, set this parameter to auto.
TrimBlanks -> (boolean)
A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false.
TruncateColumns -> (boolean)
A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false.
Username -> (string)
An Amazon Redshift user name for a registered user.
WriteBufferSize -> (integer)
The size of the write buffer to use in rows. Valid values range from 1 through 2,048. The default is 1,024. Use this setting to tune performance.
Shorthand Syntax:
AcceptAnyDate=boolean,AfterConnectScript=string,BucketFolder=string,BucketName=string,ConnectionTimeout=integer,DatabaseName=string,DateFormat=string,EmptyAsNull=boolean,EncryptionMode=string,FileTransferUploadStreams=integer,LoadTimeout=integer,MaxFileSize=integer,Password=string,Port=integer,RemoveQuotes=boolean,ReplaceInvalidChars=string,ReplaceChars=string,ServerName=string,ServiceAccessRoleArn=string,ServerSideEncryptionKmsKeyId=string,TimeFormat=string,TrimBlanks=boolean,TruncateColumns=boolean,Username=string,WriteBufferSize=integer
JSON Syntax:
{
"AcceptAnyDate": true|false,
"AfterConnectScript": "string",
"BucketFolder": "string",
"BucketName": "string",
"ConnectionTimeout": integer,
"DatabaseName": "string",
"DateFormat": "string",
"EmptyAsNull": true|false,
"EncryptionMode": "sse-s3"|"sse-kms",
"FileTransferUploadStreams": integer,
"LoadTimeout": integer,
"MaxFileSize": integer,
"Password": "string",
"Port": integer,
"RemoveQuotes": true|false,
"ReplaceInvalidChars": "string",
"ReplaceChars": "string",
"ServerName": "string",
"ServiceAccessRoleArn": "string",
"ServerSideEncryptionKmsKeyId": "string",
"TimeFormat": "string",
"TrimBlanks": true|false,
"TruncateColumns": true|false,
"Username": "string",
"WriteBufferSize": integer
}
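For example, the following hypothetical call (cluster address, credentials, bucket, and ARNs are placeholders) updates the connection and staging settings of a Redshift target:
aws dms modify-endpoint \
    --endpoint-arn "arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEARN" \
    --redshift-settings ServerName=example-cluster.abc123.us-east-1.redshift.amazonaws.com,Port=5439,DatabaseName=exampledb,Username=exampleuser,Password=examplepassword,BucketName=example-staging-bucket,ServiceAccessRoleArn=arn:aws:iam::123456789012:role/example-redshift-role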
--cli-input-json | --cli-input-yaml
(string)
Reads arguments from the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, those values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally. This may not be specified along with --cli-input-yaml.
--generate-cli-skeleton
(string)
Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. Similarly, if provided yaml-input it will print a sample input YAML that can be used with --cli-input-yaml. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command.
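For example, a common workflow (the file name is illustrative) is to generate an input skeleton, edit it, and pass it back with --cli-input-json:
aws dms modify-endpoint --generate-cli-skeleton input > modify-endpoint.json
# Edit modify-endpoint.json to fill in the endpoint ARN and settings, then:
aws dms modify-endpoint --cli-input-json file://modify-endpoint.json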
--cli-auto-prompt
(boolean)
Automatically prompt for CLI input parameters.
See ‘aws help’ for descriptions of global parameters.
To modify an endpoint
The following modify-endpoint example adds an extra connection attribute to an endpoint.
aws dms modify-endpoint \
--endpoint-arn "arn:aws:dms:us-east-1:123456789012:endpoint:GUVAFG34EECUOJ6QVZ56DAHT3U" \
--extra-connection-attributes "compressionType=GZIP"
Output:
{
"Endpoint": {
"EndpointIdentifier": "src-endpoint",
"EndpointType": "SOURCE",
"EngineName": "s3",
"EngineDisplayName": "Amazon S3",
"ExtraConnectionAttributes": "compressionType=GZIP;csvDelimiter=,;csvRowDelimiter=\\n;",
"Status": "active",
"EndpointArn": "arn:aws:dms:us-east-1:123456789012:endpoint:GUVAFG34EECUOJ6QVZ56DAHT3U",
"SslMode": "none",
"ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/my-s3-access-role",
"S3Settings": {
"ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/my-s3-access-role",
"CsvRowDelimiter": "\\n",
"CsvDelimiter": ",",
"BucketFolder": "",
"BucketName": "",
"CompressionType": "GZIP",
"EnableStatistics": true
}
}
}
For more information, see Working with AWS DMS Endpoints (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Endpoints.html) in the AWS Database Migration Service User Guide.
Endpoint -> (structure)
The modified endpoint.
EndpointIdentifier -> (string)
The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can’t end with a hyphen or contain two consecutive hyphens.
EndpointType -> (string)
The type of endpoint. Valid values are source and target.
EngineName -> (string)
The database engine name. Valid values, depending on the EndpointType, include "mysql", "oracle", "postgres", "mariadb", "aurora", "aurora-postgresql", "redshift", "s3", "db2", "azuredb", "sybase", "dynamodb", "mongodb", "kinesis", "kafka", "elasticsearch", "documentdb", "sqlserver", and "neptune".
EngineDisplayName -> (string)
The expanded name for the engine name. For example, if the EngineName parameter is "aurora," this value would be "Amazon Aurora MySQL."
Username -> (string)
The user name used to connect to the endpoint.
ServerName -> (string)
The name of the server at the endpoint.
Port -> (integer)
The port value used to access the endpoint.
DatabaseName -> (string)
The name of the database at the endpoint.
ExtraConnectionAttributes -> (string)
Additional connection attributes used to connect to the endpoint.
Status -> (string)
The status of the endpoint.
KmsKeyId -> (string)
An AWS KMS key identifier that is used to encrypt the connection parameters for the endpoint.
If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
EndpointArn -> (string)
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
CertificateArn -> (string)
The Amazon Resource Name (ARN) used for SSL connection to the endpoint.
SslMode -> (string)
The SSL mode used to connect to the endpoint. The default value is none.
ServiceAccessRoleArn -> (string)
The Amazon Resource Name (ARN) used by the service access IAM role.
ExternalTableDefinition -> (string)
The external table definition.
ExternalId -> (string)
Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint with a cross-account.
DynamoDbSettings -> (structure)
The settings for the target DynamoDB database. For more information, see the DynamoDBSettings structure.
ServiceAccessRoleArn -> (string)
The Amazon Resource Name (ARN) used by the service access IAM role.
S3Settings -> (structure)
The settings for the S3 target endpoint. For more information, see the S3Settings structure.
ServiceAccessRoleArn -> (string)
The Amazon Resource Name (ARN) used by the service access IAM role.
ExternalTableDefinition -> (string)
The external table definition.
CsvRowDelimiter -> (string)
The delimiter used to separate rows in the source files. The default is a newline (\n).
CsvDelimiter -> (string)
The delimiter used to separate columns in the source files. The default is a comma.
BucketFolder -> (string)
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter isn't specified, then the path used is schema_name/table_name/.
BucketName -> (string)
The name of the S3 bucket.
CompressionType -> (string)
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don’t use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
EncryptionMode -> (string)
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. To use SSE_S3, you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:
s3:CreateBucket
s3:ListBucket
s3:DeleteBucket
s3:GetBucketLocation
s3:GetObject
s3:PutObject
s3:DeleteObject
s3:GetObjectVersion
s3:GetBucketPolicy
s3:PutBucketPolicy
s3:DeleteBucketPolicy
ServerSideEncryptionKmsKeyId -> (string)
If you are using SSE_KMS for the EncryptionMode, provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.
Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
DataFormat -> (string)
The format of the data that you want to use for output. You can choose one of the following:
csv : This is a row-based file format with comma-separated values (.csv).
parquet : Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.
EncodingType -> (string)
The type of encoding you are using:
RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default.
PLAIN doesn't use encoding at all. Values are stored as they are.
PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
DictPageSizeLimit -> (integer)
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of PLAIN. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for .parquet file format only.
RowGroupLength -> (integer)
The number of rows in a row group. A smaller row group size provides faster reads, but as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.
If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).
DataPageSize -> (integer)
The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
ParquetVersion -> (string)
The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0.
EnableStatistics -> (boolean)
A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL, DISTINCT, MAX, and MIN values. This parameter defaults to true. This value is used for .parquet file format only.
IncludeOpForFullLoad -> (boolean)
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
Note
AWS DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
Note
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
CdcInsertsOnly -> (boolean)
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
Note
AWS DMS supports the interaction described preceding between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
TimestampColumnName -> (string)
A value that when nonblank causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
Note
AWS DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.
DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
When the AddColumnName parameter is set to true, DMS also includes a name for the timestamp column that you set with TimestampColumnName.
ParquetTimestampInMillisecond -> (boolean)
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
Note
AWS DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y, AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.
Note
AWS DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
CdcInsertsAndUpdates -> (boolean)
A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false, but when CdcInsertsAndUpdates is set to true or y, INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
Note
AWS DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
DmsTransferSettings -> (structure)
The settings in JSON format for the DMS transfer type of source endpoint.
Possible settings include the following:
ServiceAccessRoleArn - The IAM role that has permission to access the Amazon S3 bucket.
BucketName - The name of the S3 bucket to use.
CompressionType - An optional parameter to use GZIP to compress the target files. Set this value to GZIP to compress the target files. Either set it to NONE (the default) or don't use it to leave the files uncompressed.
Shorthand syntax for these settings is as follows:
ServiceAccessRoleArn=string,BucketName=string,CompressionType=string
JSON syntax for these settings is as follows:
{ "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }
ServiceAccessRoleArn -> (string)
The IAM role that has permission to access the Amazon S3 bucket.
BucketName -> (string)
The name of the S3 bucket to use.
MongoDbSettings -> (structure)
The settings for the MongoDB source endpoint. For more information, see the MongoDbSettings structure.
Username -> (string)
The user name you use to access the MongoDB source endpoint.
Password -> (string)
The password for the user account you use to access the MongoDB source endpoint.
ServerName -> (string)
The name of the server on the MongoDB source endpoint.
Port -> (integer)
The port value for the MongoDB source endpoint.
DatabaseName -> (string)
The database name on the MongoDB source endpoint.
AuthType -> (string)
The authentication type you use to access the MongoDB source endpoint.
When set to "no", user name and password parameters are not used and can be empty.
AuthMechanism -> (string)
The authentication mechanism you use to access the MongoDB source endpoint.
For the default value, in MongoDB version 2.x, "default" is "mongodb_cr". For MongoDB version 3.x or later, "default" is "scram_sha_1". This setting isn't used when AuthType is set to "no".
NestingLevel -> (string)
Specifies either document or table mode.
Default value is "none". Specify "none" to use document mode. Specify "one" to use table mode.
ExtractDocId -> (string)
Specifies the document ID. Use this setting when NestingLevel is set to "none".
Default value is "false".
DocsToInvestigate -> (string)
Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to "one".
Must be a positive value greater than 0. Default value is 1000.
AuthSource -> (string)
The MongoDB database name. This setting isn't used when AuthType is set to "no".
The default is "admin".
KmsKeyId -> (string)
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
KinesisSettings -> (structure)
The settings for the Amazon Kinesis target endpoint. For more information, see the KinesisSettings structure.
StreamArn -> (string)
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
MessageFormat -> (string)
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
ServiceAccessRoleArn -> (string)
The Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role that AWS DMS uses to write to the Kinesis data stream.
IncludeTransactionDetails -> (boolean)
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is False.
IncludePartitionValue -> (boolean)
Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type. The default is False.
PartitionIncludeSchemaTable -> (boolean)
Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only a limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is False.
IncludeTableAlterOperations -> (boolean)
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is False.
IncludeControlDetails -> (boolean)
Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is False.
KafkaSettings -> (structure)
The settings for the Apache Kafka target endpoint. For more information, see the KafkaSettings structure.
Broker -> (string)
The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345".
Topic -> (string)
The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.
ElasticsearchSettings -> (structure)
The settings for the Elasticsearch target endpoint. For more information, see the ElasticsearchSettings structure.
ServiceAccessRoleArn -> (string)
The Amazon Resource Name (ARN) used by the service to access the IAM role.
EndpointUri -> (string)
The endpoint for the Elasticsearch cluster.
FullLoadErrorPercentage -> (integer)
The maximum percentage of records that can fail to be written before a full load operation stops.
ErrorRetryDuration -> (integer)
The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
NeptuneSettings -> (structure)
The settings for the Amazon Neptune target endpoint. For more information, see the NeptuneSettings structure.
ServiceAccessRoleArn -> (string)
The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the AWS Database Migration Service User Guide.
S3BucketName -> (string)
The name of the Amazon S3 bucket where AWS DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these .csv files.
S3BucketFolder -> (string)
A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName.
ErrorRetryDuration -> (integer)
The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
MaxFileSize -> (integer)
The maximum size in kilobytes of migrated graph data stored in a .csv file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.
MaxRetryCount -> (integer)
The number of times for AWS DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.
IamAuthEnabled -> (boolean)
If you want AWS Identity and Access Management (IAM) authorization enabled for this endpoint, set this parameter to true. Then attach the appropriate IAM policy document to your service role specified by ServiceAccessRoleArn. The default is false.
RedshiftSettings -> (structure)
Settings for the Amazon Redshift endpoint.
AcceptAnyDate -> (boolean)
A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
AfterConnectScript -> (string)
Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
BucketFolder -> (string)
The location where the comma-separated value (.csv) files are stored before being uploaded to the S3 bucket.
BucketName -> (string)
The name of the S3 bucket you want to use.
ConnectionTimeout -> (integer)
A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
DatabaseName -> (string)
The name of the Amazon Redshift data warehouse (service) that you are working with.
DateFormat -> (string)
The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.
If your date and time values use formats different from each other, set this to auto.
EmptyAsNull -> (boolean)
A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false.
EncryptionMode -> (string)
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. To use SSE_S3, create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
FileTransferUploadStreams -> (integer)
The number of threads used to upload a single file. This parameter accepts a value from 1 through 64. It defaults to 10.
LoadTimeout -> (integer)
The amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.
MaxFileSize -> (integer)
The maximum size (in KB) of any .csv file used to transfer data to Amazon Redshift. This accepts a value from 1 through 1,048,576. It defaults to 32,768 KB (32 MB).
Password -> (string)
The password for the user named in the username property.
Port -> (integer)
The port number for Amazon Redshift. The default value is 5439.
RemoveQuotes -> (boolean)
A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false.
ReplaceInvalidChars -> (string)
A list of characters that you want to replace. Use with ReplaceChars.
ReplaceChars -> (string)
A value that specifies to replace the invalid characters specified in ReplaceInvalidChars, substituting the specified characters instead. The default is "?".
ServerName -> (string)
The name of the Amazon Redshift cluster you are using.
ServiceAccessRoleArn -> (string)
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.
ServerSideEncryptionKmsKeyId -> (string)
The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
TimeFormat -> (string)
The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. Using auto recognizes most strings, even some that aren't supported when you use a time format string.
If your date and time values use formats different from each other, set this parameter to auto.
TrimBlanks -> (boolean)
A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false.
TruncateColumns -> (boolean)
A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false.
Username -> (string)
An Amazon Redshift user name for a registered user.
WriteBufferSize -> (integer)
The size of the write buffer to use in rows. Valid values range from 1 through 2,048. The default is 1,024. Use this setting to tune performance.