To enhance your DataSync integration for Amazon Redshift, you can configure the meshlet with the directives listed below. For the general meshlet configurations, see General Meshlet Configurations for DataSync.


Directive | Default Value | Description
username

AWS account username. 

Code Block
languageyml
perspectium:
	redshift: 
		username: perspectium


password

AWS account password. 

Code Block
languageyml
perspectium:
	redshift: 
		password: somePasswordHere


database


AWS database name.

Code Block
languageyml
perspectium:
	redshift: 
		database: database_name


schema

AWS schema name.

Code Block
languageyml
perspectium:
	redshift: 
		schema: schema_name


role
An IAM role set up in AWS that grants you permission to access Amazon S3, e.g. arn:partition:service:region:account-id:resource-type/resource-id.

The role must have permissions to LIST and GET objects from the Amazon S3 bucket being used. The IAM role ARN (Amazon Resource Name) is a required directive to leverage the COPY command.

For more information, see AWS credentials and access permissions.

Code Block
languageyml
perspectium:
	redshift: 
		role: arn:aws:iam::123456789123:role/myRedshiftRole


connectionUrl

The JDBC URL for your Amazon Redshift cluster connection, e.g. jdbc:redshift://endpoint:port/database.

Code Block
languageyml
perspectium:
	redshift: 
		connectionUrl: jdbc:redshift://examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com:5439/dev


postInterval (default: 5)

How often (in minutes) to check for dead periods. Every x minutes, the meshlet compares the in-memory collection against that of the previous x minutes; if they are the same, records are written to file and pushed to Amazon Redshift.

Code Block
languageyml
perspectium:
	redshift: 
		postInterval: 5


maxFileSize

Required configuration. Specifies the maximum size of the temporary files created as the meshlet pushes records to Amazon Redshift. If a value over 10000 is specified, 10000 will be used instead to prevent possible performance and memory issues. A suggested value is 5000.

Code Block
languageyml
perspectium:
	filesubscriber: 
		maxFileSize: 5000


customFileName (default: $table-$randomid)

Names files using the format table-randomid. File names MUST be unique.

Code Block
languageyml
perspectium:
	filesubscriber: 
		customFileName: $table-$randomid


fileDirectory (default: /files)

Directory where the locally created files are placed, relative to where the application is running.

Code Block
languageyml
perspectium:
	filesubscriber: 
		fileDirectory: /files


deleteFiles (default: true)

Indicates whether to keep or delete the locally created CSV files. Deleting files will not have a noticeable performance impact.

Code Block
languageyml
perspectium:
	filesubscriber: 
		deleteFiles: true


fileCleanerInterval (default: 1)

How often (in minutes) the file cleaner job runs to clean up local files created by the meshlet. This job is similar to the ServiceNow Data Cleaner Scheduled Job. For example, a value of 4 runs the job every four minutes.

Code Block
languageyml
perspectium:
	filesubscriber: 
		fileCleanerInterval: 1


fileMaxAge (default: 1)

Any CSV file in the fileDirectory older than fileMaxAge (in minutes) will be automatically deleted.

Code Block
languageyml
perspectium:
	filesubscriber: 
		fileMaxAge: 1


legacyDbViewId (default: false)

Directive to use the legacy behavior for database view tables of concatenating GUID values into a sys_id field. If false, the meshlet will use the pre-constructed encoded sys_id created by ServiceNow.

Code Block
languageyml
perspectium:
	filesubscriber: 
		legacyDbViewId: false


accessKey

Authorized AWS user's access key to the S3 bucket. 

For more information, see AWS credentials and access permissions.

Code Block
languageyml
perspectium:
	s3: 
		accessKey: AKIAIOSFODNN7EXAMPLE


secretKey

Authorized AWS user's full secret access key to the S3 bucket.

For more information, see AWS credentials and access permissions.

Code Block
languageyml
perspectium:
	s3: 
		secretKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY


destinationBucket

The name of the S3 bucket where data will be bulk loaded from.

Code Block
languageyml
perspectium:
	s3: 
		destinationBucket: redshift-bulk-loading


region

The region the S3 bucket is located in. 

Code Block
languageyml
perspectium:
	s3: 
		region: us-east-1
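
Putting the directives above together, a complete Amazon Redshift meshlet configuration might look like the following sketch. All values are the illustrative placeholders from the examples above; substitute your own credentials, cluster endpoint, and bucket details.

Code Block
languageyml
perspectium:
	redshift:
		username: perspectium
		password: somePasswordHere
		database: database_name
		schema: schema_name
		role: arn:aws:iam::123456789123:role/myRedshiftRole
		connectionUrl: jdbc:redshift://examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com:5439/dev
		postInterval: 5
	filesubscriber:
		maxFileSize: 5000
		customFileName: $table-$randomid
		fileDirectory: /files
		deleteFiles: true
		fileCleanerInterval: 1
		fileMaxAge: 1
		legacyDbViewId: false
	s3:
		accessKey: AKIAIOSFODNN7EXAMPLE
		secretKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
		destinationBucket: redshift-bulk-loading
		region: us-east-1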