You can configure your DataSync Agent to share data from ServiceNow to a Snowflake database by adding some additional configurations to your agent.xml file.
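As an illustrative sketch only, a Snowflake-oriented subscribe task in agent.xml might look like the fragment below. The element names and values shown are hypothetical placeholders, not the authoritative directive list; verify them against your Agent's configuration reference before use.

```xml
<!-- Hypothetical fragment: element names are illustrative of a subscribe
     task; confirm the exact directives in your Agent's configuration
     reference before use. -->
<subscribe>
    <task>
        <task_name>snowflake_datasync_subscribe</task_name>
        <!-- Snowflake connection details (all values are placeholders) -->
        <database_type>snowflake</database_type>
        <database_server>myaccount.snowflakecomputing.com</database_server>
        <database_user>datasync_user</database_user>
        <database_password>********</database_password>
        <database>psp_repl</database>
    </task>
</subscribe>
```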
To optimize performance, the DataSync Agent uses the Snowflake COPY command to bulk load data from the local file system where the Agent is running. Because Snowflake is a cloud-based database, standard SQL operations (INSERT, UPDATE, etc.) issued on a record-by-record basis perform poorly due to network latency; the COPY command avoids this by loading records in bulk.
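To illustrate the difference, the sketch below shows a bulk load of a staged file versus row-by-row inserts. The stage, file, and table names are hypothetical examples, not the objects the Agent actually creates.

```sql
-- Row-by-row DML (slow over the network: one round trip per record):
-- INSERT INTO incident (sys_id, short_description) VALUES ('...', '...');

-- Bulk load with COPY: stage a local file, then load it in one operation.
-- Stage, file, and table names here are illustrative placeholders.
PUT file:///tmp/incident_batch.csv @my_stage;

COPY INTO incident
  FROM @my_stage/incident_batch.csv
  FILE_FORMAT = (TYPE = CSV);
```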
Prerequisites
- First, you will need to install the DataSync Agent.
- You will also need to create a ServiceNow dynamic share/bulk share.
- Make sure to stop your DataSync Agent before making any Agent configuration changes.
Procedure
To set up your DataSync Agent to share application data to Snowflake, follow these steps:
Navigate to the conf directory in the DataSync Agent installation folder and locate the wrapper.conf file.
Info: The Agent reports both the records it processes (records it reads from the Perspectium Integration Mesh queue and prepares for saving into Snowflake) and the records it pushes into Snowflake (records sent to Snowflake and then copied into the database tables with the COPY command). Because the Agent batches records together into files up to a maximum size before running the COPY command, these two actions are reported separately and do not happen at the same time. For example, the Agent's log file (perspectium.log) will contain entries such as:

```
2024-03-28 23:33:51.049 INFO - PerformanceReportGenerator - PerformanceReport - change_request.bulk=[100, 425560]: 100 messages processed and 425560 bytes processed
2024-03-28 23:37:21.049 INFO - Timer-4 - SnowflakeSQLSubscriber - Scheduled flushing to push records
2024-03-28 23:37:28.144 INFO - Timer-4 - SnowflakeSQLSubscriber - Processed 100 record(s) to table: change_request
```

The PerformanceReportGenerator entries reflect the processing of records from the Integration Mesh queue (reading the messages, decrypting the content, formatting it, and saving it into local temporary files). The SnowflakeSQLSubscriber entries reflect the actual pushing of records into Snowflake (taking the temporary files, uploading them to Snowflake, and running the COPY command to load them into the Snowflake database tables).
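The batch-and-flush behavior described above can be sketched in a few lines of Python. This is an illustrative model only, not the Agent's actual implementation: records are buffered locally until a maximum byte size is reached, and only then flushed as one bulk batch, which is why the "processed" and "pushed" log lines appear at different times.

```python
class BatchFlusher:
    """Illustrative model of batching records up to a max size before a bulk push."""

    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes
        self.buffer: list[bytes] = []       # records staged locally, not yet pushed
        self.buffered_bytes = 0
        self.flushed_batches: list[list[bytes]] = []  # stands in for bulk COPY loads

    def process(self, record: bytes) -> None:
        # "Processed" side: a record is read from the queue and staged locally.
        self.buffer.append(record)
        self.buffered_bytes += len(record)
        if self.buffered_bytes >= self.max_bytes:
            self.flush()

    def flush(self) -> None:
        # "Pushed" side: all staged records go to the database in one bulk operation.
        if self.buffer:
            self.flushed_batches.append(list(self.buffer))
            self.buffer.clear()
            self.buffered_bytes = 0
```

A scheduled timer (like the "Scheduled flushing" log entry) would simply call `flush()` periodically so partially filled batches are not held indefinitely.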