

To enhance your DataSync integration for Snowflake, you can configure the Snowflake Bulk Load Meshlet with the additional features described below.


What's on this page?

    • Add custom fields to Snowflake
    • Compare tables from ServiceNow to Snowflake
    • Load Schema Files
    • Azure External Storage
    • Soft Deletes

Add custom fields to Snowflake 

Three types of custom columns are currently supported: IO insert columns, IO update columns, and a string (key) column populated with the ServiceNow instance key. IO insert columns are populated with the timestamp of when the record is inserted into Snowflake. IO update columns are also populated with the insert timestamp and are then updated each time the record is subsequently updated in Snowflake.

To add custom column names to your Snowflake tables, configure the databaseConfig.json file by following these steps:

Navigate to the directory where you saved your meshlet when installing, and open the databaseConfig.json file.

In the databaseConfig.json, the IO columns (insert, update) are grouped together. Each set of columns is a list.

To add custom columns, enter their names between the brackets of the IO insert, IO update, and key lists. The first snippet below shows the empty structure; the second shows example column names: 

{...
	"IODateTime": {
		"insert":[],
		"update":[]
	},
	"key":[]
}

{...
	"IODateTime": {
		"insert":["psp_example_insert"],
		"update":["psp_example_update"]
	},
	"key":["psp_example_key"]
}

To have no custom columns, leave the lists empty. A complete databaseConfig.json with custom columns configured looks like this:

{
   "SQLSyntax":{
      "showTables":"SHOW TABLES LIKE %s",
      "selectAll":"SELECT * FROM %s",
      "addColumn":"ALTER TABLE %s ADD COLUMN",
      "primaryKey":"Primary Key(%s)"
   },
   "tableDefinition":{
      "primaryKeys":[
      "sys_id"
      ]
   },
   "types":{
      "integer":"BIGINT",
      "boolean":"BOOLEAN",
      "glide_date":"DATE",
      "glide_date_time":"TIMESTAMP_LTZ(9)",
      "float":"DOUBLE",
      "reference":"VARCHAR",
      "default":"VARCHAR",
      "IODateTime": "TIMESTAMP_LTZ(9)",
      "key": "VARCHAR",
      "delete": "BOOLEAN"
   },
   "IODateTime":{
      "insert":["psp_example_insert"],
      "update":["psp_example_update"]
   },
   "key": ["psp_example_key"]
}

If you want the IO update columns to be NULL on record inserts, set the following in your application.yml:

perspectium:
	snowflake:
		updateTimeOnInsert: false

By default, updateTimeOnInsert is true and IO columns behave as they do in ServiceNow: when a row with a custom IO update column is inserted into Snowflake, the IO update column is populated with the insert timestamp. Set updateTimeOnInsert to false in the yml configuration to turn this behavior off.

If the table does not exist in Snowflake, it will be created with the custom columns when the meshlet runs. If the table already exists, it will be altered to add the custom columns.
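
For illustration, using the addColumn syntax and types from the databaseConfig.json above, the statements run against an existing table would look roughly like the following sketch (the incident table name is just an example, not the meshlet's exact output):

-- Hypothetical example: add the custom IO and key columns to an existing incident table
ALTER TABLE incident ADD COLUMN psp_example_insert TIMESTAMP_LTZ(9);
ALTER TABLE incident ADD COLUMN psp_example_update TIMESTAMP_LTZ(9);
ALTER TABLE incident ADD COLUMN psp_example_key VARCHAR;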

↑ Go to top of page




Compare tables from ServiceNow to Snowflake 

Table Compare allows you to compare tables from one ServiceNow instance to Snowflake. This is useful because you can see each table's record count, as well as a list of record discrepancies by sys_id between the two tables you're comparing—in other words, you can see which records exist in one table but not the other. To get started with comparing your ServiceNow tables to Snowflake, see Table Compare: ServiceNow to database table compare.  

(info) NOTE: This requires the Helium release (or newer) of the Perspectium Core update set. 
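
To give a sense of what the compare reports, the Snowflake side of such a check boils down to queries like the following (a hand-rolled sketch only, not what Table Compare actually runs; it assumes an incident table and a hypothetical sn_sys_ids table loaded with the sys_ids exported from ServiceNow):

-- Record count for the Snowflake copy of the table
SELECT COUNT(*) FROM incident;

-- sys_ids present in Snowflake but missing from the ServiceNow export
SELECT i.sys_id
FROM incident i
LEFT JOIN sn_sys_ids s ON i.sys_id = s.sys_id
WHERE s.sys_id IS NULL;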


↑ Go to top of page




Load Schema Files 

This feature allows you to load your ServiceNow table schemas into the meshlet when a connection cannot be made to the originating ServiceNow instance. Schema files are used to create and update tables in Snowflake. Enabling this feature forces the meshlet to read schemas exclusively from local file storage and disables all schema calls to ServiceNow, so any newly shared tables will require their table schemas to be added to the local file storage.

Export the table schemas in your ServiceNow instance. See Download Table Schemas.

Unzip the folder that was created from the previous step. Then, put the folder in the static/config directory. 

The final location for each table should look like static/config/schemas/incident.xml.
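
For reference, the resulting layout would look something like this (incident.xml comes from the export above; change_request.xml is just an illustrative second table):

static/
└── config/
    ├── databaseConfig.json
    └── schemas/
        ├── incident.xml
        └── change_request.xml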

In the application-dev.yml, set the following configurations:

perspectium:  
	auth:    
		useLocalSchema: true

(info) NOTE: By default or if this configuration is not specified, useLocalSchema is set to false.

↑ Go to top of page




Azure External Storage

To start using Microsoft Azure as an external stage for bulk loading, follow these steps:

Configure bulk loading from Microsoft Azure by following the steps in Configuring an Azure container for loading data.

In the application-dev.yml, set the following configurations. Each directive is listed below with its description and an example:
url

Azure storage URL. 

perspectium:  
	azure:    
		url: azure://pspsnowflaketest.blob.core.windows.net/snowflakedev

Where pspsnowflaketest is the storage account and snowflakedev is the container.

sasString

Shared Access Signatures (SAS) connection string for Azure Storage. See SAS connection string.

To access the connection string, go to Storage Account > Account Name > Shared Access Signature. Then enter the required fields and generate the SAS and connection string. 

Allowed Resource Types:

    • Container
    • Object

Allowed Permissions: 

    • Read
    • Write
    • Delete
    • List
    • Add
    • Create

perspectium:  
	azure:    
  		sasString: ?sv=2020-08-04&ss…..ejl%2BTE%3D
connectionUrl

Connection URL for your Azure. To access the URL, go to Azure Portal > Storage Account > Access Keys > Show Keys > Connection String.

perspectium:  
	azure:    
    	connectionUrl: DefaultEndpointsProtocol=.....EndpointSuffix=core.windows.net
destinationContainer

Container you want to share your data to.

perspectium:  
	azure:    
		destinationContainer: snowflakedev
deleteFiles

Whether files are deleted from Azure after they have been loaded. Set this to false if you want to keep the files stored in Azure.

perspectium:  
	fileSubscriber:    
 		deleteFiles: true
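
Putting the directives together, the Azure-related settings in application-dev.yml would look something like the sketch below (the URL and container values are the examples from above; the SAS and connection strings are placeholders you must replace with your own):

perspectium:
  azure:
    # Storage account pspsnowflaketest and container snowflakedev, as in the url example above
    url: azure://pspsnowflaketest.blob.core.windows.net/snowflakedev
    sasString: <your SAS connection string>
    connectionUrl: <your Azure connection string>
    destinationContainer: snowflakedev
  fileSubscriber:
    deleteFiles: true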

↑ Go to top of page




Soft Deletes

This feature allows you to add one or more columns, a boolean and/or a date time field, that are updated when a delete record is processed. Instead of deleting the record in Snowflake, the meshlet updates these columns accordingly. At least one of the following table columns is required for soft deletes to work; if neither column has been added to the configurations, no delete will occur. This prevents records from being deleted when the intention is to use soft deletes.

In the application-dev.yml, set the following configurations:

perspectium:    
	snowflake:       
		softDelete: true

(info) NOTE: By default, softDelete is set to false.

Add a boolean field to mark that a record has been deleted. 

In static/config/databaseConfig.json, add the following:

"delete": {
 	"columnName":"is_delete"
 }

Add an IODateTime field to record the execution time of the delete.

In static/config/databaseConfig.json, add the following:

"IODateTime": {
	"insert":["psp_per_insert_dt"],
	"update":["psp_per_update_dt"],
	"delete" :["psp_per_delete_dt"]
}
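
Once soft deletes are configured, deleted records remain in the Snowflake table with these columns set. For example, assuming the is_delete column configured above, a query like the following would exclude soft-deleted records (a sketch; the incident table is just an example):

-- Exclude records flagged as soft-deleted
SELECT *
FROM incident
WHERE is_delete IS NULL OR is_delete = FALSE;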

↑ Go to top of page