To enhance your DataSync integration for Snowflake, you can configure the Snowflake meshlet with the directives listed below:

Directive | Default Value | Description

maxFileSize

15000

The maximum number of records (per table) written to each file. Once the meshlet has received this many records for a table, it writes them to a file and immediately pushes all records in that file to Snowflake.

perspectium:
	filesubscriber: 
		maxFileSize: 15000

buildHeader

false

When set to true, each CSV file includes a header row of column names as its first row.

perspectium:
	filesubscriber: 
		buildHeader: false

customFileName

$table-$randomid

Names each file using the format table-randomid. File names MUST be unique.

perspectium:
	filesubscriber: 
		customFileName: $table-$randomid

fileDirectory

/files

Directory where locally created files are written, relative to where the application is running.

perspectium:
	filesubscriber: 
		fileDirectory: /files

postInterval

2

Interval, in minutes, for checking idle periods. Every x minutes, the meshlet compares the in-memory collection against its state from the previous x minutes; if they are the same, it writes the records to a file and pushes them to Snowflake.

perspectium:
	snowflake: 
		postInterval: 2

deleteFiles

true

Whether to delete or keep locally created CSV files after they are pushed. Deleting files has no noticeable performance impact.

perspectium:
	filesubscriber: 
		deleteFiles: true

fileCleanerInterval

4

Interval, in hours, at which a job cleans up local files (similar to the ServiceNow DataCleaner scheduled job).

perspectium:
	filesubscriber: 
		fileCleanerInterval: 4

fileMaxAge

1

Any CSV file in the fileDirectory older than fileMaxAge hours will be automatically deleted.

perspectium:
	filesubscriber: 
		fileMaxAge: 1
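
Putting the directives above together, a complete meshlet configuration might look like the following. This is an illustrative sketch using the default values shown in this table; adjust each setting for your environment.

perspectium:
	filesubscriber:
		maxFileSize: 15000
		buildHeader: false
		customFileName: $table-$randomid
		fileDirectory: /files
		deleteFiles: true
		fileCleanerInterval: 4
		fileMaxAge: 1
	snowflake:
		postInterval: 2

Note that postInterval lives under the snowflake key, while the file-handling directives live under filesubscriber.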