To enhance your DataSync integration for Amazon Redshift, you can configure the meshlet with the directives listed below. For general meshlet configurations, see General Meshlet Configurations for DataSync. A complete sample configuration assembling these directives follows the table.
Directive | Default Value | Description |
---|---|---|
username | | AWS account username. Example: `perspectium: redshift: username: perspectium` |
password | | AWS account password. Example: `perspectium: redshift: password: somePasswordHere` |
database | | AWS database name. Example: `perspectium: redshift: database: database_name` |
schema | | AWS schema name. Example: `perspectium: redshift: schema: schema_name` |
role | | An IAM role set up in AWS that grants you permission to access Amazon S3, in the format arn:partition:service:region:account-id:resource-type/resource-id. The role must have permissions to LIST and GET objects from the Amazon S3 bucket being used. The IAM role ARN (Amazon Resource Name) is a required directive to leverage the COPY command. For more information, see AWS credentials and access permissions. Example: `perspectium: redshift: role: arn:aws:iam::123456789123:role/myRedshiftRole` |
connectionUrl | | The JDBC URL for your Amazon Redshift cluster connection, e.g. jdbc:redshift://endpoint:port/database. Example: `perspectium: redshift: connectionUrl: jdbc:redshift://examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com:5439/dev` |
postInterval | 5 | Interval in minutes for checking dead periods. Every x minutes, the meshlet compares the in-memory collection against the previous x minutes; if they are the same, records are written to file and pushed to Amazon Redshift. Example: `perspectium: redshift: postInterval: 5` |
maxFileSize | | Required. Specifies the maximum size for the temporary files created as the meshlet pushes records to Amazon Redshift. Values over 10000 are capped at 10000 to prevent possible performance and memory issues. A suggested value is 5000. Example: `perspectium: filesubscriber: maxFileSize: 5000` |
customFileName | $table-$randomid | Names files in the format table - random id. File names MUST be unique. Example: `perspectium: filesubscriber: customFileName: $table-$randomid` |
fileDirectory | /files | Directory where the locally created files are written, relative to where the application is running. Example: `perspectium: filesubscriber: fileDirectory: /files` |
deleteFiles | true | Indicates whether to keep or delete locally created CSV files. Deleting has no noticeable performance impact. Example: `perspectium: filesubscriber: deleteFiles: true` |
fileCleanerInterval | 1 | How often (in minutes) the file cleaner job runs to clean up local files created by the meshlet. This job is similar to the ServiceNow Data Cleaner Scheduled Job. For example, a value of 4 runs the job every four minutes. Example: `perspectium: filesubscriber: fileCleanerInterval: 1` |
fileMaxAge | 1 | Any CSV file in the fileDirectory older than fileMaxAge (in minutes) is automatically deleted. Example: `perspectium: filesubscriber: fileMaxAge: 1` |
legacyDbViewId | false | Directive to use the legacy approach for database view tables of concatenating GUID values into a sys_id field. If false, the meshlet uses the pre-constructed encoded sys_id created by ServiceNow. Example: `perspectium: filesubscriber: legacyDbViewId: false` |
accessKey | | Authorized AWS user's access key to the S3 bucket. For more information, see AWS credentials and access permissions. Example: `perspectium: s3: accessKey: AKIAIOSFODNN7EXAMPLE` |
secretKey | | Authorized AWS user's full secret access key to the S3 bucket. For more information, see AWS credentials and access permissions. Example: `perspectium: s3: secretKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY` |
destinationBucket | | The name of the S3 bucket that data will be bulk loaded from. Example: `perspectium: s3: destinationBucket: redshift-bulk-loading` |
region | | The region where the S3 bucket is located. Example: `perspectium: s3: region: us-east-1` |
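Taken together, these directives form the `perspectium` block of the meshlet's YAML configuration file. The sketch below assembles the example values from the table above into a single configuration; the username, password, cluster endpoint, role ARN, access keys, and bucket name are illustrative placeholders, so substitute your own values.

```yaml
# Sample meshlet configuration for Amazon Redshift (illustrative values only).
perspectium:
  redshift:
    username: perspectium
    password: somePasswordHere
    database: database_name
    schema: schema_name
    # IAM role ARN with LIST/GET access to the S3 bucket; required for COPY
    role: arn:aws:iam::123456789123:role/myRedshiftRole
    connectionUrl: jdbc:redshift://examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com:5439/dev
    postInterval: 5
  filesubscriber:
    maxFileSize: 5000            # required; values over 10000 are capped at 10000
    customFileName: $table-$randomid
    fileDirectory: /files
    deleteFiles: true
    fileCleanerInterval: 1
    fileMaxAge: 1
    legacyDbViewId: false
  s3:
    accessKey: AKIAIOSFODNN7EXAMPLE
    secretKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
    destinationBucket: redshift-bulk-loading
    region: us-east-1
```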