The procedures described in this documentation are best practices for setting up Perspectium DataSync for the typical use cases when sharing data from your ServiceNow instance to target databases and files.
For other ways to get started with DataSync, click here or contact Perspectium Support.
First, request the download link for the Perspectium for ServiceNow application.
Make sure that the device you are installing the DataSync Agent on meets these basic requirements.
You will also need to request the Agent Installer from Perspectium Support.
You will need the following information from Perspectium Support:
To install the Perspectium application for ServiceNow, you need admin access to the ServiceNow instance and the update set provided by Perspectium Support, which will be imported, validated, and committed during installation. You can follow these steps to install it in ServiceNow:
Once installation is finished, you need to set up some initial configurations:
To share data to the Perspectium Integration Mesh (Message Broker Service, or MBS), you need to configure at least one shared queue, since MBS is queue based. The ServiceNow instance shares records to this queue, and the DataSync Agent subscribes to the same queue to receive the records, decrypt the shared content, and issue the SQL statements needed to populate those records into the target database.
The Agent offers a variety of target endpoints it can save data into, with databases being the most common use case. Below are the most common ways to install the Agent for different target endpoints.
The Perspectium DataSync Agent can be installed on both Windows and Linux machines. Before installing, consult this article for the minimum requirements.
The virtual machine where you will be installing the agent needs to have connectivity to:
The Perspectium message bus endpoint, e.g. customername.perspectium.net, via HTTPS port 443.
The target database, via the port the database listens on.
The source ServiceNow instance. This connection is only needed so the Agent can fetch specific ServiceNow table schemas when it needs to create or alter tables in the target database to reflect their ServiceNow structure.
Make sure the necessary network connectivity exists and the needed firewall rules are set properly before you start the installation.
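You can confirm the three connections above from the Agent host before running the installer. A minimal sketch (the hostnames and ports are placeholders; substitute your own endpoints):

```python
import socket

def check_tcp(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder endpoints to verify before installing the Agent:
# check_tcp("customername.perspectium.net", 443)  # Perspectium Integration Mesh
# check_tcp("db.internal.example", 3306)          # target database port
# check_tcp("customer.service-now.com", 443)      # source ServiceNow instance
```

A False result usually points to a missing firewall rule or DNS entry rather than a Perspectium issue.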
Once you start the installation package, perform the following steps:
Accept the Perspectium License Agreement.
Specify the directory where the Agent should be installed.
Specify the installed Java version which you want to use.
For the DataSync type, specify the type of database for which you are installing the Agent.
Next, specify the Perspectium Integration Mesh details as provided to you by Perspectium:
Server - this is your Perspectium Integration Mesh endpoint in the form of customername.perspectium.net.
User - your Integration Mesh user.
Password - your Message Bus password.
Queue name - this is the shared queue you have created in ServiceNow, usually psp.out.replicator.mysharedqueuename.
Encryption/Decryption key - this should match the encryption key specified in ServiceNow under Perspectium >> Properties >> DataSync Properties >> Encryption Key.
Max Reads - the maximum number of reads per connection to the Message Bus; you can leave the default value here.
Polling interval - the interval in seconds between two Agent polls to fetch Message Bus records; you can leave the default value here.
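For reference, these Integration Mesh details end up in the corresponding tags of agent.xml. An illustrative fragment with placeholder values (the tag names follow the sample agent.xml shown later in this article):

```xml
<message_connection password="your_mesh_password" user="your_mesh_user" queue="psp.out.replicator.mysharedqueuename">https://customername.perspectium.net</message_connection>
<max_reads_per_connect>4000</max_reads_per_connect>
<polling_interval>5</polling_interval>
<decryption_key>Must match the ServiceNow Encryption Key</decryption_key>
```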
Next, you need to specify the information about the ServiceNow instance:
Instance - the name of the instance, in the form customer.service-now.com.
User - this must be a ServiceNow system account with at least the “perspectium” role. This user is needed by the Agent to be able to fetch dynamically the table schema for any table being shared to the database so that the Agent can create a similar table schema in the target database.
Password - this user's password.
The next step is to enter the database details:
Server - this is the address of the database.
Port - the port the Agent uses to connect to the database.
User - the username of the system account which is used to connect to the database.
Password - this user's password.
Database - the name of the database that exists or should be created by the Agent to store the replicated ServiceNow data. Note that if the Agent needs to create this database, it needs the CREATE DATABASE (or similar) permission on the database server.
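These database details are written into agent.xml by the installer. A sketch of what that section may look like; the tag names below are an assumption based on typical DataSync Agent configurations, and the agent.xml generated by your installer is authoritative:

```xml
<database_type>mysql</database_type>
<database_server>db.internal.example</database_server>
<database_port>3306</database_port>
<database_user>psp_agent</database_user>
<database_password>db_password_here</database_password>
<database>psp_repl</database>
```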
In the next step, review the specified settings and continue until the installation completes.
You can install the DataSync Agent to share to files in a directory instead of a database. Currently the Agent supports sharing to JSON, XML, or CSV files.
For this type of setup, select “File DataSync (advanced)” as the DataSync Type when installing the Agent and then make some manual configurations in agent.xml. Namely, you need to specify:
A handler to specify the output file type, as one of:
<handler>com.perspectium.replicator.file.XMLFileSubscriber</handler>
<handler>com.perspectium.replicator.file.JSONFileSubscriber</handler>
<handler>com.perspectium.replicator.file.CSVFileSubscriber</handler>
Files directory in the form of <files_directory>Users\Downloads\subscribefiles\</files_directory>.
Usually you will want separate files for the various tables you share out of ServiceNow, with the table name as part of the output file name and a timestamp added to each file so you can track when it was generated. To achieve this, in agent.xml you can specify <file_prefix>$table_$d{yyyyMMdd}_$i</file_prefix>.
If this is not needed and you want records from all tables to be stored in a single file, you can use the <file_name> directive instead, e.g. <file_name>records.json</file_name>.
If your files will be separate for each shared table, you should also specify a file suffix as follows, based on your output type:
<file_suffix>.json</file_suffix>
<file_suffix>.xml</file_suffix>
<file_suffix>.csv</file_suffix>
Specify the maximum size a file can reach before the Agent moves on to writing a new one, in the form <file_max_size>1MB</file_max_size>.
If you are not constantly sharing much data out of ServiceNow, it may take a long time before a file reaches its maximum size and is closed off. You can configure the Agent to close a file that has not reached the maximum size when no new records have arrived for its table within a specified interval, using the following directive: <close_file_interval>180</close_file_interval>, where the integer value is in seconds.
Your agent.xml should look like this:
<?xml version="1.0" encoding="ISO-8859-1" ?>
<config>
    <agent>
        <!-- the following subscribe fragment defines the subscribing class -->
        <!-- and its arguments -->
        <subscribe>
            <task>
                <task_name>test_file_subscriber</task_name>
                <message_connection password="password_here" user="admin" queue="psp.in.meshlet.example">https://<customer>.perspectium.net</message_connection>
                <instance_connection password="Adminadmin1" user="admin">https://<instance>.service-now.com</instance_connection>
                <max_reads_per_connect>1</max_reads_per_connect>
                <polling_interval>3</polling_interval>
                <decryption_key>Example_decryption_key_here</decryption_key>
                <handler>com.perspectium.replicator.file.JSONFileSubscriber</handler>
                <buffered_writes>10</buffered_writes>
                <files_directory>/Users/You/Downloads/Example</files_directory>
                <file_prefix>$table_$d{yyyyMMdd}_$i</file_prefix>
                <file_suffix>.json</file_suffix>
                <file_max_size>50KB</file_max_size>
                <translate_newline>%13</translate_newline>
                <separate_files>table</separate_files>
                <enable_audit_log/>
                <close_file_interval>180</close_file_interval>
            </task>
        </subscribe>
    </agent>
</config>
To install the Agent to share to Amazon S3 buckets, select the “Manual Configuration (advanced)” option for the DataSync Type during installation. Then follow these steps:
Add the .jar files listed here to your Agent’s “extlib” folder.
Open agent.xml and modify the following tags like this:
<task_name>s3_agent_subscribe</task_name>
<handler>com.perspectium.replicator.file.S3Subscriber</handler>
Specify the decryption key:
<decryption_key>The cow jumped over the moon and back</decryption_key>
Specify the following options based on how you want to configure the replication
<access_key> - the access key associated with your AWS account, only when the DataSync Agent is not installed on an EC2 instance
<secret_access_key> - the secret access key associated with your AWS account, only when the DataSync Agent is not installed on an EC2 instance
<use_instance_credentials/> - if you are not using access key or secret access key, then you can use the instance permissions based on the IAM roles of the EC2 instance
<region> - this is the region your S3 bucket resides in
<s3_bucket> - Name of your AWS S3 bucket, including subdirectories if desired, to specify where the records will be uploaded e.g. bucketName/folder1/folder2
<file_format> - json or xml file format
<s3_bucket_encryption/> - an optional setting to use AWS server-side encryption when pushing files to the S3 bucket. Configuring this option has the Agent request that S3 encrypt the data at rest using S3's built-in functionality
Your agent.xml should look like this:
<?xml version="1.0" encoding="UTF-8"?>
<config>
    <agent>
        <share/>
        <subscribe>
            <task>
                <task_name>s3_agent_subscribe</task_name>
                <message_connection password="password" user="user">https://mesh.perspectium.net</message_connection>
                <instance_connection password="password" user="user">https://myinstance.service-now.com</instance_connection>
                <handler>com.perspectium.replicator.file.S3Subscriber</handler>
                <decryption_key>The cow jumped over the moon</decryption_key>
                <access_key>AccessKey</access_key>
                <secret_access_key>SecretAccessKey</secret_access_key>
                <region>us-west-2</region>
                <s3_bucket>examples3bucket</s3_bucket>
                <file_format>json</file_format>
            </task>
        </subscribe>
        <polling_interval>40</polling_interval>
    </agent>
</config>
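If the Agent runs on an EC2 instance and you prefer to rely on the instance's IAM role instead of static credentials, the access key tags are replaced by <use_instance_credentials/> as described above. An illustrative fragment (bucket and region values are placeholders):

```xml
<use_instance_credentials/>
<region>us-west-2</region>
<s3_bucket>examples3bucket/folder1</s3_bucket>
<file_format>json</file_format>
```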
You can also set up your Agent to share .json or .xml files to Azure Blob Storage.
During installation, select the “Manual Configuration (advanced)” option for the DataSync Type. Then follow these steps:
Your agent.xml should look like this:
<?xml version="1.0" encoding="UTF-8"?>
<config>
    <agent>
        <share/>
        <subscribe>
            <task>
                <task_name>azure_blob_storage_agent_subscribe</task_name>
                <message_connection password="password" user="user">https://mesh.perspectium.net</message_connection>
                <instance_connection password="password" user="user">https://myinstance.service-now.com</instance_connection>
                <handler>com.perspectium.replicator.file.AzureBlobStorage</handler>
                <decryption_key>Some decryption key here</decryption_key>
                <abs_container>pspcontainer</abs_container>
                <file_format>json</file_format>
                <connection_string></connection_string>
            </task>
        </subscribe>
        <polling_interval>40</polling_interval>
    </agent>
</config>
The Agent includes optional plugins you can configure to customize how it saves data to the database. This is only applicable to the database use case. The plugin used most often is the IO Datetime plugin.
The IO Datetime plugin lets the DataSync Agent add extra columns to the synchronized tables that record the timestamps of when each record was first inserted into, last updated in, or deleted from the database. Unlike the usual ServiceNow sys_created_on and sys_updated_on fields, which hold the date and time when those transactions happened in ServiceNow, the IO Datetime plugin columns hold the timestamps of the database transactions triggered by the Agent. This plugin can be activated by adding the following lines to agent.xml within the <task> directive:
<!-- Create the columns -->
<dynamic_columns>
    <dynamic_column column_type="93" column_size="32" deletes_only="true">psp_per_delete_dt</dynamic_column>
    <dynamic_column column_type="93" column_size="32" updates_only="true">psp_per_update_dt</dynamic_column>
    <dynamic_column column_type="93" column_size="32" inserts_only="true">psp_per_insert_dt</dynamic_column>
</dynamic_columns>
<!-- Update the columns -->
<plugins>
    <plugin insert_column="psp_per_insert_dt" update_column="psp_per_update_dt" delete_column="psp_per_delete_dt">com.perspectium.replicator.sql.plugin.SQLSubscriberIODateTimePlugin</plugin>
</plugins>
After installation, you can use a dedicated executable to validate that your Agent is installed properly and get early warning of specific issues. Go to the Agent's install directory, navigate to the bin folder, and run validateConfiguration.bat (Windows) or validateConfiguration (Unix). This prints any errors the Agent finds when it tests its configuration. The Agent also performs this validation on startup.
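validateConfiguration is the authoritative check. As a quick supplementary sanity test, you can also verify that agent.xml is well-formed and contains the tags this article configures. A sketch; the tag list below is an assumption based on the sample configurations in this article, not an official schema:

```python
import xml.etree.ElementTree as ET

# Tags this article configures inside <task> (illustrative, not exhaustive).
REQUIRED_TAGS = ["task_name", "message_connection", "handler", "decryption_key"]

def missing_tags(agent_xml_path: str) -> list:
    """Parse agent.xml and report required tags absent from the first <task>."""
    root = ET.parse(agent_xml_path).getroot()
    task = root.find(".//task")
    if task is None:
        return ["task"]
    return [tag for tag in REQUIRED_TAGS if task.find(tag) is None]
```

An empty list means the basic structure is present; it does not confirm the values are correct, which only validateConfiguration (or a live run) can do.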
Usually when in operation the DataSync Agent is left running continuously unattended on a virtual machine. This is best achieved when the Agent is set up to run as a Windows or Linux service. To install the Agent as a Service, execute installService.bat or installService.sh from the agent bin directory. This should create a Perspectium DataSync Agent service that automatically starts when the machine starts.
If the service is not able to start automatically, the best way to get troubleshooting information is to enable the wrapper log:
Once you have completed the installation for both ServiceNow and the DataSync Agent, it is always good to run a quick test by sharing records from ServiceNow and making sure they arrive in the database. Perform the following steps:
1. In ServiceNow go to Perspectium >> Shares >> Bulk Shares >> Create New Bulk Share.
2. Create a new bulk share with only some basic configuration:
3. Go to the VM hosting your DataSync Agent. If the Agent is not yet running as a service, start the service, or start the Agent in the foreground by running Agent.bat (Windows) or the agent script (Unix) in the bin folder.
4. Once the Agent is started it should process the records from the MBS queue. You can verify this in the Agent's logs by going to the logs folder and opening the most recent perspectium.log file. You should see lines in the log similar to the following:
2024-09-26 16:23:38.370 DEBUG - dev50002_subscribe - SQLSubscriber - dev50002_subscribe - processMessage active..., processed 123 messages
5. Check the logs for any errors or warnings indicating issues with message processing or connectivity to the target database. Errors and warnings are indicated as ERROR or WARNING in the logs.
6. If no errors are observed in the log, check your target database to make sure all of the shared records have arrived there successfully.
Once you have verified end to end connectivity and that you are able to share ServiceNow data successfully into the target database, you are ready to configure your shares.
A bulk share is a job that can be triggered manually or based on a schedule to share all or a subset of records from a given table to the DataSync Agent. Bulk shares are usually triggered manually to run an initial share to synchronize all records from a table and then added to run on a schedule to keep the target database in sync by only sharing the delta i.e. the records in the table created or updated after the previous run.
To configure a bulk share, follow these steps: