To enhance your DataSync integration for Snowflake, you can configure the Snowflake Bulk Load Meshlet with the additional features described below.


Anchor
top
top

Panel
titleWhat's on this page?

Table of Contents
maxLevel2
absoluteUrltrue





Add custom fields to Snowflake 

There are currently three types of custom columns supported: IO insert columns, IO update columns, and a string column populated with the ServiceNow instance key. IO insert columns are populated with the timestamp when the record is inserted into Snowflake. IO update columns are populated with the insert timestamp and then updated each time the record is subsequently updated in Snowflake.
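As an illustration of these semantics, the following sketch models how the insert column is set once while the update column is refreshed on every write. This is not actual meshlet code; the column names are the examples used in the steps below, and the `now` timestamp is passed in explicitly for clarity:

```python
def apply_io_columns(row, now, existing=None, update_time_on_insert=True):
    """Illustrative model of IO column behavior; not actual meshlet code."""
    if existing is None:  # record insert
        row["psp_example_insert"] = now
        # With updateTimeOnInsert enabled, the update column also receives
        # the insert timestamp; otherwise it is left as NULL.
        row["psp_example_update"] = now if update_time_on_insert else None
    else:  # record update: insert timestamp preserved, update timestamp refreshed
        row["psp_example_insert"] = existing["psp_example_insert"]
        row["psp_example_update"] = now
    return row
```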

To add your custom column names in your Snowflake tables, you can configure the databaseConfig.json file by following these steps:

UI Steps
sizesmall


UI Step

Navigate to the directory where you saved your meshlet when installing, and open the databaseConfig.json file.


UI Step

In the databaseConfig.json, the IO columns (insert, update) are grouped together, and each set of columns is a list.

To add custom columns, enter their names between the brackets of the IO insert, IO update, and key lists:

Section


Column
width50%


Code Block
{...
	"IODateTime": {
		"insert":[],
		"updated":[]
	},
	"key":[]
}



Column
width50%


Code Block
{...
	"IODateTime": {
		"insert":["psp_example_insert"],
		"updated":["psp_example_update"]
	},
	"key":["psp_example_key"]
}




To have no custom columns, leave the lists empty.

Expand
titleClick here to see an example of the databaseConfig.json...


Code Block
{
   "SQLSyntax":{
      "showTables":"SHOW TABLES LIKE %s",
      "selectAll":"SELECT * FROM %s",
      "addColumn":"ALTER TABLE %s ADD COLUMN",
      "primaryKey":"Primary Key(%s)"
   },
   "tableDefinition":{
      "primaryKeys":[
      "sys_id"
      ]
   },
   "types":{
      "integer":"BIGINT",
      "boolean":"BOOLEAN",
      "glide_date":"DATE",
      "glide_date_time":"TIMESTAMP_LTZ(9)",
      "float":"DOUBLE",
      "reference":"VARCHAR",
      "default":"VARCHAR",
      "IODateTime": "TIMESTAMP_LTZ(9)",
      "key": "VARCHAR",
      "delete": "BOOLEAN"
   },
   "IODateTime":{
      "insert":["psp_example_insert"],
      "update":["psp_example_update"]
   },
   "key": ["psp_example_key"]
}




UI Step

If you want the IO update column to be NULL on record inserts, set the following in your application.yml:

Code Block
languageyml
perspectium:
	snowflake: 
		updateTimeOnInsert: false

By default, updateTimeOnInsert is true, and IO update columns are populated on record inserts.



UI Text Box
typetip

By default, IO columns behave as they do in ServiceNow: for rows inserted into Snowflake tables with a custom IO update column, the IO update column is populated with the insert timestamp. This behavior can be turned off by setting updateTimeOnInsert to false in the application.yml.

If the table does not exist in Snowflake, it will be created with the custom columns when the meshlet runs. If the table already exists, it will be altered to add the custom columns.
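Concretely, for an existing table the resulting DDL might resemble the following. This is a sketch based on the addColumn syntax and the types mapping shown in the databaseConfig.json example above; the incident table name and the statements themselves are illustrative, not captured from the meshlet:

```sql
-- Illustrative only: existing table altered to add the custom columns
ALTER TABLE incident ADD COLUMN psp_example_insert TIMESTAMP_LTZ(9);
ALTER TABLE incident ADD COLUMN psp_example_update TIMESTAMP_LTZ(9);
ALTER TABLE incident ADD COLUMN psp_example_key VARCHAR;
```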


↑ Go to top of page




Compare tables from ServiceNow to Snowflake 

Table Compare allows you to compare tables from one ServiceNow instance to Snowflake. This is useful because you can see each table's record count, as well as a list of record discrepancies by sys_id between the two tables you're comparing—in other words, you can see which records exist in one table but not the other. To get started with comparing your ServiceNow tables to Snowflake, see Table Compare: ServiceNow to database table compare.  

(info) NOTE: This requires the Helium release or newer of the Perspectium Core update set. 


↑ Go to top of page




Load Schema Files 

This feature allows you to load your ServiceNow table schemas into the meshlet when a connection cannot be made to the originating ServiceNow instance. Schema files are used to create and update tables in Snowflake. Enabling this feature forces the meshlet to read schemas exclusively from local file storage and disables all calls to ServiceNow, so any newly shared tables will require their table schemas to be added to the local file storage.

UI Steps
sizesmall


UI Step

Export the table schemas in your ServiceNow instance. See Download Table Schemas.


UI Step

Unzip the folder that was created from the previous step. Then, put the folder in the static/config directory. 

The final location for each table should look like static/config/schemas/incident.xml.
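To sanity-check the layout, here is a small sketch that computes the expected schema file location for each shared table and reports any that are missing. It assumes the meshlet runs from the directory containing static/; the helper names are hypothetical, not part of the meshlet:

```python
from pathlib import Path

def schema_path(table: str, base: str = "static/config/schemas") -> Path:
    """Expected local schema file location for a shared table."""
    return Path(base) / f"{table}.xml"

def missing_schemas(tables, base: str = "static/config/schemas"):
    """Tables whose schema XML is not yet present in local file storage."""
    return [t for t in tables if not schema_path(t, base).exists()]
```

For example, schema_path("incident") yields static/config/schemas/incident.xml, matching the location described above.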


UI Step

In the application-dev.yml, set the following configurations:

Code Block
languageyml
perspectium:  
	auth:    
		useLocalSchema: true

(info) NOTE: By default or if the configuration is not specified, useLocalSchema is set to false.



↑ Go to top of page




Azure External Storage

To start using Microsoft Azure as an external stage for bulk loading, follow these steps:

UI Steps
sizesmall


UI Step

In the application-dev.yml, set the following configurations:

Directive | Description
url

Azure storage URL. 

Code Block
perspectium:  
	azure:    
		url: azure://pspsnowflaketest.blob.core.windows.net/snowflakedev

Where pspsnowflaketest is the name of the Azure storage account and snowflakedev is the name of the Azure container.

sasToken

Shared Access Signatures (SAS) connection string for Azure Storage. See SAS connection string.

To access the connection string, go to Storage Account > Account Name > Shared Access Signature. Then fill in the required fields and select Generate SAS and connection string. 

Allowed Resource Types:

    • Container
    • Object

Allowed Permissions: 

    • Read
    • Write
    • Delete
    • List
    • Add
    • Create

Code Block
perspectium:  
	azure:    
  		sasToken: ?sv=2020-08-04&ss…..ejl%2BTE%3D


connectionString

Connection string for your Azure storage account. To access it, go to Azure Portal > Storage Account > Access Keys > Show Keys > Connection String.

Code Block
perspectium:  
	azure:    
    	connectionString: DefaultEndpointsProtocol=.....EndpointSuffix=core.windows.net


destinationContainer

Azure container you want to share your data to.

Code Block
perspectium:  
	azure:    
		destinationContainer: snowflakedev


deleteFiles

Whether to delete the temporary staging files from Azure. Setting this to false will leave the staging files in Azure.

Code Block
perspectium:  
	fileSubscriber:    
 		deleteFiles: true
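Putting the directives together, the Azure section of your application-dev.yml might look like the following. The url and destinationContainer values are the illustrative ones from this page, the token and connection string are placeholders, and this sketch assumes the connection string directive is named connectionString:

```yaml
perspectium:
  azure:
    url: azure://pspsnowflaketest.blob.core.windows.net/snowflakedev
    sasToken: <your SAS token>
    connectionString: <your storage account connection string>
    destinationContainer: snowflakedev
  fileSubscriber:
    deleteFiles: true
```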





↑ Go to top of page




Soft Deletes

This feature allows you to add columns to a database table that are updated when the meshlet receives a message indicating a record is to be deleted. The meshlet updates these columns appropriately and does not delete the record in Snowflake.

(info) NOTE: At least one of the following table columns is required for soft deletes. If neither column has been added to the configurations but softDelete is set to true, no delete will occur; this prevents records from being hard-deleted when soft deletes were intended.

UI Steps
sizesmall


UI Step

In the application-dev.yml, set the following configurations:

Code Block
perspectium:    
	snowflake:       
		softDelete: true

(info) NOTE: By default or if the configuration is not specified, softDelete is set to false.


UI Step

One option is to add a boolean column that marks whether a record has been deleted. 

In static/config/databaseConfig.json, add the following:

Code Block
"delete": {
 	"columnName":"is_delete"
 }



UI Step

Another option is to add an IODateTime field to indicate the date and time the record was marked as deleted.

In static/config/databaseConfig.json, add the following:

Code Block
"IODateTime": {
	"insert":["psp_per_insert_dt"],
	"update":["psp_per_update_dt"],
	"delete" :["psp_per_delete_dt"]
}
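The two options are not mutually exclusive; a databaseConfig.json fragment enabling both the boolean flag and the delete timestamp, using the column names from the steps above, would look like:

```json
{
	"delete": {
		"columnName":"is_delete"
	},
	"IODateTime": {
		"insert":["psp_per_insert_dt"],
		"update":["psp_per_update_dt"],
		"delete":["psp_per_delete_dt"]
	}
}
```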




↑ Go to top of page