
To enhance your DataSync integration for Snowflake, you can configure the Snowflake Bulk Load Meshlet with the directives listed below:

WARNING:

  • If you will also be setting up the DataSync Agent to connect to the same database, configure the Agent to use the same column case. By default, the Snowflake Bulk Load Meshlet uses lowercase column names. Also, the QUOTED_IDENTIFIERS_IGNORE_CASE setting in Snowflake should be kept at its default value of false.
  • The meshlet uses the TIMEZONE session parameter when connecting to Snowflake in order to save all timestamps in UTC. By default, the glide_date_time field is mapped to the TIMESTAMP_LTZ(9) column type, which allows you to query using a local timezone as needed.
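You can verify both settings from a Snowflake worksheet. This is a quick diagnostic sketch using Snowflake's standard SHOW PARAMETERS command:

Code Block
languagesql
-- Confirm QUOTED_IDENTIFIERS_IGNORE_CASE is still at its default of false
SHOW PARAMETERS LIKE 'QUOTED_IDENTIFIERS_IGNORE_CASE' IN ACCOUNT;

-- Confirm the TIMEZONE session parameter used by the current session
SHOW PARAMETERS LIKE 'TIMEZONE' IN SESSION;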

To check out the general meshlet configurations, see General Meshlet Configurations for DataSync.





Directive | Default Value | Description

maxFileSize


Required Configuration. Specifies the maximum size of the temporary files created as the meshlet pushes records to Snowflake. If the input is over 10000, 10000 will be used to prevent possible performance and memory issues. A suggested value is 5000.

Code Block
languageyml
perspectium:
	filesubscriber: 
		maxFileSize: 5000


maxStringSize

500

Specifies the maximum string size for the entire collection. The maximum value you can use is 1000; if the input is greater than that value, it will default to 1000.

Code Block
languageyml
perspectium:
	filesubscriber: 
		maxStringSize: 500


customFileName

$table-$randomid

Names files using the format of table name - random ID. File names MUST be unique.

Code Block
languageyml
perspectium:
	filesubscriber: 
		customFileName: $table-$randomid


fileDirectory

/files

Directory where locally created files are written, relative to where the application is running.

Code Block
languageyml
perspectium:
	filesubscriber: 
		fileDirectory: /files


postInterval

2

Interval in minutes for checking dead periods. Every x minutes, the meshlet compares the in-memory collection against its state from x minutes earlier. If they are the same, records are written to file and pushed to Snowflake.

Code Block
languageyml
perspectium:
	snowflake: 
		postInterval: 2


deleteFiles

true

Indicates whether locally created CSV files are kept or deleted. Deleting files does not have a noticeable performance impact.

Code Block
languageyml
perspectium:
	filesubscriber: 
		deleteFiles: true


fileCleanerInterval

1

How often (in minutes) the file cleaner job runs to clean up local files created by the meshlet. This job is similar to the ServiceNow Data Cleaner Scheduled Job. For example, a value of 4 runs the job every four minutes. The default value is 1.

Code Block
languageyml
perspectium:
	filesubscriber: 
		fileCleanerInterval: 4


fileMaxAge

1

Any CSV file in the fileDirectory older than fileMaxAge (in minutes) will be automatically deleted. The default value is 1.

Code Block
languageyml
perspectium:
	filesubscriber: 
		fileMaxAge: 1


legacyDbViewId

false

Directive to use the legacy behavior for database view tables, where GUID values are concatenated into a sys_id field. If false, the meshlet uses the pre-constructed encoded sys_id created by ServiceNow.

Code Block
languageyml
perspectium:
	filesubscriber: 
		legacyDbViewId: false


useReceipts

true

Enable data guarantee.

Code Block
languageyml
perspectium:
	message: 
		receipts:
			enable: true


receiptOutboundQueue

Queue that receipts are returned to; typically the default queue subscribed to on a ServiceNow instance.

Code Block
languageyml
perspectium:
	message: 
		receiptOutboundQueue: psp.out.servicenow.dev1234


useConnectionProperties

false

When enabled, the user, password, and role properties in the yml are no longer used to create the database connection.

Code Block
languageyml
perspectium:
	snowflake: 
		useConnectionProperties: true

This will allow you to add these or other parameters to your connectionUrl to leverage other forms of authentication. For example:  

Code Block
languageyml
connectionUrl: jdbc:snowflake://hga12424.us-east-1.snowflakecomputing.com/?USER=xxxx&PASSWORD=xxxxx&ROLE=xxxx


updateTimeOnInsert

true

Used in conjunction with Add custom fields in Snowflake: if you want the IO update column to be NULL on record inserts, set the value to false.

Code Block
languageyml
perspectium:
	snowflake: 
		updateTimeOnInsert: true

By default, updateTimeOnInsert is true, and IO update columns are populated on record inserts.





Debugging Logs

To enable debugging logs for the Snowflake Meshlet, configure your connection url in the application.yml with one of the following:

Code Block
jdbc:snowflake://xxxxxx.snowflakecomputing.com/?TRACING=CONFIG


Code Block
jdbc:snowflake://xxxxxx.snowflakecomputing.com/?TRACING=FINEST


Code Block
jdbc:snowflake://xxxxxx.snowflakecomputing.com/?TRACING=ALL

ALL provides the most detailed level of additional logging. See tracing=<string> for more information on the different logging levels available.

A temporary log file will be created. See How to generate log file on Snowflake connectors.
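In context, the TRACING parameter is simply appended to the connection url in your application.yml. This is a sketch; the account identifier is a placeholder, and it assumes connectionUrl sits under the snowflake section of your configuration:

Code Block
languageyml
perspectium:
	snowflake: 
		connectionUrl: jdbc:snowflake://xxxxxx.snowflakecomputing.com/?TRACING=ALL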





Add custom fields to Snowflake 

There are currently three types of custom columns supported: IO insert columns, IO update columns, and a string field populated with the ServiceNow instance key. IO insert fields are populated with the timestamp of the record's insert into Snowflake; IO update fields are populated with the insert timestamp and then updated on each subsequent record update in Snowflake.

To add your custom column names in your Snowflake tables, you can configure the databaseConfig.json file by following these steps:

UI Steps


UI Step

Navigate to the directory where you saved your meshlet when installing, and open the databaseConfig.json file.


UI Step

In the databaseConfig.json, the IO columns (insert, update) are grouped together. Each set of columns is a list.

To add your custom column in the IO insert columns, IO update columns, and the key columns, add them in between the brackets: 

Default (no custom columns):

Code Block
{...
	"IODateTime": {
		"insert":[],
		"updated":[]
	},
	"key":[]
}



With custom columns added:

Code Block
{...
	"IODateTime": {
		"insert":["psp_example_insert"],
		"updated":["psp_example_update"]
	},
	"key":["psp_example_key"]
}




To have no custom columns, leave the lists empty.

Example of the databaseConfig.json:


Code Block
{
   "SQLSyntax":{
      "showTables":"SHOW TABLES LIKE %s",
      "selectAll":"SELECT * FROM %s",
      "addColumn":"ALTER TABLE %s ADD COLUMN",
      "primaryKey":"Primary Key(%s)"
   },
   "tableDefinition":{
      "primaryKeys":[
      "sys_id"
      ]
   },
   "types":{
      "integer":"BIGINT",
      "boolean":"BOOLEAN",
      "glide_date":"DATE",
      "glide_date_time":"TIMESTAMP_LTZ(9)",
      "float":"DOUBLE",
      "reference":"VARCHAR",
      "default":"VARCHAR",
      "IODateTime": "TIMESTAMP_LTZ(9)",
      "key": "VARCHAR",
      "delete": "BOOLEAN"
   },
   "IODateTime":{
      "insert":["psp_example_insert"],
      "update":["psp_example_update"]
   },
   "key": ["psp_example_key"],
   "delete": ["IS_DELETE"]
}




UI Step

If you want the IO update column to be NULL on record inserts, set the following in your application.yml:

Code Block
languageyml
perspectium:
	snowflake: 
		updateTimeOnInsert: false

By default, updateTimeOnInsert is true, and IO update columns are populated on record inserts.



Tip:

By default, IO columns behave as they do in ServiceNow: for rows inserted into Snowflake tables with a custom IO update column, the IO update column is populated with the insert timestamp. This behavior can be turned off with the updateTimeOnInsert yml configuration.

If the table does not exist in Snowflake, it will be created with the custom columns when the meshlet runs. If the table already exists, it will be altered to add the custom columns.
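Based on the SQLSyntax templates in the databaseConfig.json above, the alteration the meshlet performs is equivalent to something like the following. This is a sketch; the incident table and the psp_example_* column names are hypothetical examples from this page, not values the meshlet requires. Column types follow the types mapping (IODateTime as TIMESTAMP_LTZ(9), key as VARCHAR):

Code Block
languagesql
-- Table already exists: each custom column is added via the addColumn template
ALTER TABLE incident ADD COLUMN psp_example_insert TIMESTAMP_LTZ(9);
ALTER TABLE incident ADD COLUMN psp_example_update TIMESTAMP_LTZ(9);
ALTER TABLE incident ADD COLUMN psp_example_key VARCHAR;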