To enhance your DataSync integration for Snowflake, you can configure the Snowflake Bulk Load Meshlet with the directives below.
For the general meshlet configurations, see General Meshlet Configurations for DataSync.
List of Meshlet Configurations for Snowflake
| Directive | Default Value | Description |
|---|---|---|
| maxFileSize | | Required configuration. Specifies the maximum size of the temporary files created as the meshlet pushes records to Snowflake. Inputs over 10000 default to 10000 to prevent possible performance and memory issues. A suggested value is 5000. |
| maxStringSize | 500 | Specifies the maximum string size for the entire collection. The maximum value is 1000; any larger input defaults to 1000. |
| customFileName | $table-$randomid | Names files in the format table-random id. File names MUST be unique. |
| fileDirectory | /files | Directory where the locally created files are written (relative to where the application is running). |
| postInterval | 2 | How often (in minutes) to check for dead periods. Every x minutes, the in-memory collection is compared with that of the previous x minutes; if they are the same, records are written to file and pushed to Snowflake. |
| deleteFiles | true | Indicates whether to keep or delete the locally created CSV files. Deleting has no noticeable performance impact. |
| fileCleanerInterval | 1 | How often (in minutes) the file cleaner job runs to clean up local files created by the meshlet. This job is similar to the ServiceNow Data Cleaner Scheduled Job. For example, a value of 4 runs the job every four minutes. The default value is 1. |
| fileMaxAge | 1 | Any CSV file in the fileDirectory older than fileMaxAge (in minutes) is automatically deleted. The default value is 1. |
| legacyDbViewId | false | Determines whether to use the legacy approach for database view tables of concatenating GUID values into a sys_id field. If false, the meshlet uses the pre-constructed encoded sys_id created by ServiceNow. |
| useReceipts | true | Enables data guarantee. |
| receiptOutboundQueue | | Queue that receipts are returned to, typically subscribed to by default on a ServiceNow instance. |
| useConnectionProperties | false | When enabled, the user, password, and role properties in the yml are no longer used to create the database connection. This allows you to add these or other parameters to your connectionUrl to leverage other forms of authentication. For example: |
| updateTimeOnInsert | true | Used in conjunction with Add custom fields in Snowflake. If you want the IO update column to be NULL on record inserts, set the value to false. By default, updateTimeOnInsert is true and IO update columns are populated on record inserts. |
| useLocalSchema | false | Forces the Snowflake meshlet to read schemas exclusively from files in local file storage and disables all schema calls to ServiceNow. Any newly shared tables require their table schemas to be added to the local file storage. See Load Schema Files on how to get started. |
| softDelete | false | Adds columns to a database table that are updated when the meshlet receives a message indicating a record is to be deleted. The meshlet updates those columns appropriately and does not delete the record in Snowflake. For more information, see Soft Deletes. |
| quotedColumns | false | Adds quotation marks around column names in the database. |
| purgeTable | | Enable this directive to truncate or delete all the records for a selected table in your database before bulk sharing. |
| purgeTableOption | | When purgeTable is enabled, you can choose which option is issued. The available options are: |
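As a hypothetical illustration of useConnectionProperties (the property names, nesting, and the key-pair authentication parameters shown here are assumptions for illustration, not taken from this page), the connection settings in the application.yml might look like:

```yaml
# Hypothetical application.yml fragment -- layout is an illustrative
# assumption; consult your meshlet's actual configuration reference.
useConnectionProperties: true
# With user/password/role no longer read from the yml, authentication
# parameters are appended to the connectionUrl instead, e.g. key-pair auth:
connectionUrl: jdbc:snowflake://xxxxxx.snowflakecomputing.com/?user=MY_USER&private_key_file=/keys/rsa_key.p8
```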
The following are the required configurations for a Snowflake to ServiceNow integration:
| Directive | Default Value | Description |
|---|---|---|
| tables | | Comma-separated list of tables to be shared, used to prevent querying the entire database. |
| timestampColumn | | Name of the timestamp column to be used for the query. |
| changesOnly | false | When true, only the changed columns appear in the outbound message. When false, all columns appear in the outbound message. |
| pollingInterval | | How often (in milliseconds) to query the Snowflake database. Suggested value: 30000. |
| outboundQueue | | Queue that messages will be shared out to. |
| defaultEncryptionMode | 3 | Encryption method to be used for outbound messages: 1 = TripleDES, 2 = AES 128, 3 = Base64, 4 = AES 256. |
| encryptionKey | | Key to be used for encrypting messages. |
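Pulling the required directives above together, a Snowflake to ServiceNow configuration could be sketched as follows. The flat property layout, table names, and queue name are illustrative assumptions; your application.yml may nest or name these differently.

```yaml
# Hypothetical sketch of the required Snowflake -> ServiceNow directives;
# all values shown are placeholders.
tables: incident,task            # comma-separated; prevents querying the whole database
timestampColumn: SYS_UPDATED_ON  # timestamp column used for the query
changesOnly: false               # include all columns in outbound messages
pollingInterval: 30000           # query Snowflake every 30 seconds
outboundQueue: psp.out.snowflake # hypothetical queue name
defaultEncryptionMode: 3         # 3 = Base64
encryptionKey: my-shared-key     # hypothetical; must match the subscriber's key
```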
Debugging Logs
To enable debugging logs for the Snowflake Meshlet, you can configure your connection url in the application.yml with one of the following:

```
jdbc:snowflake://xxxxxx.snowflakecomputing.com/?TRACING=CONFIG
```

```
jdbc:snowflake://xxxxxx.snowflakecomputing.com/?TRACING=FINEST
```

```
jdbc:snowflake://xxxxxx.snowflakecomputing.com/?TRACING=ALL
```
ALL provides the most detailed level of additional logging. See tracing=<string> for more information on the different logging levels available.
A temporary log file will be created. See How to generate log file on Snowflake connectors.