

To enhance your DataSync integration for Snowflake, you can configure the Snowflake Bulk Load Meshlet with the directives listed below:

WARNING:

  • If you will be setting up the DataSync Agent to connect to the same database, configure the Agent to use the same column cases. By default, the Snowflake Bulk Load Meshlet uses lowercase for column names. The QUOTED_IDENTIFIERS_IGNORE_CASE setting in Snowflake should also be kept at its default value of false.
  • The meshlet connects to Snowflake with the TIMEZONE session parameter so that all timestamps are saved in UTC. By default, the glide_date_time field is mapped to the TIMESTAMP_LTZ(9) column type, which allows you to query using a local timezone as needed.

To check out the general meshlet configurations, see General Meshlet Configurations for DataSync.

Each directive below is listed with its name, default value, and description, followed by an example application.yml snippet.

maxFileSize

Default value: none (required configuration)

Specifies the maximum size of the temporary files created as the meshlet pushes records to Snowflake. If the input is greater than 10000, 10000 will be used to prevent possible performance and memory issues. A suggested value is 5000.

perspectium:
  filesubscriber:
    maxFileSize: 15000
maxStringSize

Default value: 500

Specifies the maximum string size for the entire collection. The maximum value you can use is 1000; if the input is greater than that value, it will default to 1000.

perspectium:
  filesubscriber:
    maxStringSize: 500

customFileName

Default value: $table-$randomid

Names files using the format table-randomid. File names MUST be unique.

perspectium:
  filesubscriber:
    customFileName: $table-$randomid

fileDirectory

Default value: /files

Directory where locally created files are written, relative to where the application is running.

perspectium:
  filesubscriber:
    fileDirectory: /files

postInterval

Default value: 2

Interval in minutes for checking dead periods. Every x minutes, the meshlet checks whether the in-memory collection is the same as it was x minutes earlier; if so, it writes the records to file and pushes them to Snowflake.

perspectium:
  snowflake:
    postInterval: 2

deleteFiles

Default value: true

Indicates whether locally created CSV files are deleted or kept. Deleting them does not have a noticeable performance impact.

perspectium:
  filesubscriber:
    deleteFiles: true

fileCleanerInterval

Default value: 1

How often (in minutes) the file cleaner job runs to clean up local files created by the meshlet. This job is similar to the ServiceNow Data Cleaner scheduled job. For example, a value of 4 runs the job every four minutes.

perspectium:
  filesubscriber:
    fileCleanerInterval: 4

fileMaxAge

Default value: 1

Any CSV file in the fileDirectory older than fileMaxAge (in minutes) is automatically deleted.

perspectium:
  filesubscriber:
    fileMaxAge: 1
legacyDbViewId

Default value: false

Directive to use the legacy approach for database view tables, where GUID values are concatenated into a sys_id field. If false, the meshlet uses the pre-constructed encoded sys_id created by ServiceNow.

perspectium:
  filesubscriber:
    legacyDbViewId: false
useReceipts

Default value: true

Enables data guarantee.

perspectium:
  message:
    receipts:
      enable: true
receiptOutboundQueue

Default value: none

Queue that receipts are returned to, typically the default queue subscribed to on a ServiceNow instance.

perspectium:
  message:
    receiptOutboundQueue: psp.out.servicenow.dev1234
useConnectionProperties

Default value: false

When enabled, the user, password, and role properties in the yml are no longer used to create the database connection.

perspectium:
  snowflake:
    useConnectionProperties: true

This allows you to add these or other parameters to your connectionUrl to leverage other forms of authentication. For example:

connectionUrl: jdbc:snowflake://hga12424.us-east-1.snowflakecomputing.com/?USER=xxxx&PASSWORD=xxxxx&ROLE=xxxx
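
As a minimal sketch of how the two settings combine (assuming connectionUrl is configured under perspectium.snowflake alongside useConnectionProperties; the account URL and credentials are placeholders):

perspectium:
  snowflake:
    useConnectionProperties: true
    # Credentials are passed in the URL instead of the user/password/role yml properties
    connectionUrl: jdbc:snowflake://hga12424.us-east-1.snowflakecomputing.com/?USER=xxxx&PASSWORD=xxxxx&ROLE=xxxx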

To enable debugging logs for the Snowflake Meshlet, you can configure your connection url in the application.yml with one of the following:

jdbc:snowflake://xxxxxx.snowflakecomputing.com/?TRACING=CONFIG
jdbc:snowflake://xxxxxx.snowflakecomputing.com/?TRACING=FINEST
jdbc:snowflake://xxxxxx.snowflakecomputing.com/?TRACING=ALL

Where ALL will provide the most detailed level of additional logging. See tracing=<string> for more information on the different logging levels available.
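
For example, a sketch of how the tracing parameter might appear in the application.yml (assuming connectionUrl is set under perspectium.snowflake; the account name is a placeholder):

perspectium:
  snowflake:
    # ALL produces the most verbose driver logging; CONFIG is the lightest option above
    connectionUrl: jdbc:snowflake://xxxxxx.snowflakecomputing.com/?TRACING=ALL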

A tmp log file will be created. See How to generate log file on Snowflake connectors for more information.
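
For reference, here is a consolidated application.yml sketch combining several of the directives above (the values are illustrative only, not recommendations):

perspectium:
  filesubscriber:
    maxFileSize: 5000            # suggested value
    maxStringSize: 500
    customFileName: $table-$randomid
    fileDirectory: /files
    deleteFiles: true
    fileCleanerInterval: 1
    fileMaxAge: 1
    legacyDbViewId: false
  snowflake:
    postInterval: 2
  message:
    receiptOutboundQueue: psp.out.servicenow.dev1234
    receipts:
      enable: true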