The procedures described in this documentation are best practices for setting up Perspectium DataSync for the typical use cases of sharing data from your ServiceNow instance to target databases and files.

For other ways to get started with DataSync, click here or contact Perspectium Support.


Prerequisites


  • First, request the download link for the Perspectium for ServiceNow application.

  • Make sure the device you are installing the DataSync Agent on meets these basic requirements.

  • You will also need to request the Agent Installer or meshlet from Perspectium Support.

  • You will need the following information from Perspectium Support:

  • Perspectium Integration Mesh (MBS) server URL
  • Integration Mesh (MBS) username
  • Integration Mesh (MBS) password




ServiceNow Install

To install the Perspectium application for ServiceNow you need admin access to the ServiceNow instance, and you should have received the update set from Perspectium Support, which will be imported, validated and committed during installation. Follow these steps to install in ServiceNow:

  1. Go to System Update Sets >> Retrieved Update Sets and import the update set XML file.
  2. Open the update set and click Preview. Errors or warnings are not expected, but if any appear they need to be reviewed case by case.
  3. Click the Commit Update Set button to commit the update set.
  4. In the ServiceNow Navigator go to the new Perspectium module and open the Finish Install page.
  5. Click on the “Confirm” button and wait for a successful confirmation message.

Set up MBS Credentials and Encryption Keys

Once installation is finished you need to set up some initial configurations:

  1. Go to Perspectium >> Properties >> General Settings.
  2. Enter the Perspectium Message Bus credentials provided to you by Perspectium:
    1. Perspectium Integration Mesh username.
    2. Perspectium Integration Mesh password.
    3. Perspectium Integration Mesh server URL - this is usually in the form https://customername.perspectium.net.
  3. The rest of the settings you can leave at their default values; scroll down to the bottom of the page and click Save.
  4. Go back to Perspectium >> Properties and navigate to the DataSync settings.
  5. Populate the Encryption key entry with an encryption string of your choice. We advise using an encryption key of at least 32 characters to be able to leverage AES256 payload encryption. This encryption key does not need to be shared with Perspectium; it is used to encrypt the shared data in transit while flowing through the Perspectium message bus so it is properly secured.
  6. In the Decryption Key input, specify a decryption string which can be used by the ServiceNow instance to decrypt any inbound messages. In many cases you will use the same key as your encryption key so data can be encrypted and decrypted the same way in both directions.
  7. Scroll down to the bottom of the page and save.

Create Queues

In order to share data to the Perspectium Integration Mesh (Message Broker Service, aka MBS) you need to configure at least one shared queue, since MBS is queue-based. This queue will be used by the ServiceNow instance to share records, while the DataSync Agent will subscribe to the same queue to receive records, decrypt the shared content and issue the necessary SQL queries to populate those records into the target database.

  1. Go to Perspectium >> Shared Queues.
  2. Click the New button to create a new record.
  3. Set the status to Active.
  4. In the Queue type select DataSync from the dropdown.
  5. Enter a name for the queue without spaces, e.g. acmedefault.
  6. The queue user, password and encryption key should auto-populate based on the details entered in the Perspectium Properties.
  7. Save the queue and notice in the record header the full queue name with the automatically added prefix. It should look like psp.out.replicator.acmedefault. You will need this queue name later when configuring the DataSync Agent to subscribe to it.
  8. Click on the Get Queue Status related link to query the Perspectium MBS for the validity of the queue. If the credentials are valid you should get a message indicating either that the queue exists and the credentials are valid, or that the queue does not exist yet because no data has been shared to it but the credentials are valid and the queue will be auto-created once you start sharing records.
  9. Go to Perspectium >> Subscribed Queues. You should notice one queue already existing there with a name of psp.out.servicenow.instancename where instancename is the name of your ServiceNow instance. Subscribed queues are used by ServiceNow to receive messages from MBS. For example, messages are sent to the instance when the DataSync Agent needs to send a response back to ServiceNow. Usually you are fine with just the default subscribed queue and don’t need to add any more subscribed queues.




DataSync Agent Install

Install the Agent

The Agent offers a variety of options for target endpoints the Agent will save data into, with databases being the most common use case. Below are the most common ways to install the Agent when saving to different target endpoints.

Install Agent to Share to a Database

The Perspectium DataSync Agent can be installed on either Windows or Linux machines. Before installing, consult this article for minimum requirements.

The virtual machine where you will be installing the agent needs to have connectivity to:

  1. The Perspectium message bus endpoint, e.g. customername.perspectium.net, via HTTPS on port 443.

  2. The target database via the port this database listens on.

  3. The source ServiceNow instance. This connection is only needed for the Agent to be able to fetch specific ServiceNow table schemas when it needs to create or alter a table schema in the target database to reflect its ServiceNow structure.

Make sure the necessary network connectivity exists and the needed firewall rules are set properly before you start the installation.

Once you start the installation package perform the following steps:

  1. Accept the Perspectium License Agreement.

  2. Specify the directory where the Agent should be installed.

  3. Specify the installed Java version which you want to use.

  4. In the DataSync type specify the type of database for which you are installing the Agent.

  5. Next specify the Perspectium Integration Mesh details as provided to you by Perspectium

    1. Server - this is your Perspectium Integration Mesh endpoint in the form of customername.perspectium.net.

    2. User - your Integration Mesh user.

    3. Password - your Message Bus password.

    4. Queue name - this is the shared queue you have created in ServiceNow, usually psp.out.replicator.mysharedqueuename.

    5. Encryption/Decryption key - this should match the encryption key specified in ServiceNow under Perspectium >> Properties >> DataSync Properties >> Encryption Key.

    6. Max Reads - maximum number of reads per connection to the Message Bus - you can leave the default number here.

    7. Polling interval - the interval in seconds between two agent polls to fetch Message Bus records - you can leave the default number here.

  6. Next you need to specify the information about the ServiceNow instance

    1. Instance - the name of the instance in the form of customer.service-now.com

    2. User - this must be a ServiceNow system account with at least the “perspectium” role. This user is needed by the Agent to be able to fetch dynamically the table schema for any table being shared to the database so that the Agent can create a similar table schema in the target database.

    3. Password - this user's password.

  7. The next step is to enter the database details:

    1. Server - this is the address of the database.

    2. Port - the port the Agent uses to connect to the database.

    3. User - the username of the system account which is used to connect to the database.

    4. Password - this user's password.

    5. Database - the name of the database which is created or should be created by the Agent to store the replicated ServiceNow data. Note that if the Agent needs to create this database it needs the specific CREATE DATABASE or similar permission against the DB server.

  8. On the next step review the specified settings and continue until the installation completes.

Install Agent to Share to Files

You can install the DataSync Agent to share to files in a directory instead of a database. Currently the Agent supports sharing to JSON, XML or CSV files.

For this type of setup you need to select “File DataSync (advanced)” as the DataSync Type when installing the Agent and then make some manual configurations in agent.xml. Namely you need to specify:

  1. Handler - to specify the output file type as one of:

    1. <handler>com.perspectium.replicator.file.XMLFileSubscriber</handler>

    2. <handler>com.perspectium.replicator.file.JSONFileSubscriber</handler>

    3. <handler>com.perspectium.replicator.file.CSVFileSubscriber</handler>

  2. Files directory in the form of <files_directory>Users\Downloads\subscribefiles\</files_directory>.

  3. Usually you will want to create different files for the various tables you share out of ServiceNow by having the table name as part of the output file name, and add a timestamp to each file so you can track when it was generated. To achieve this, in agent.xml you can specify <file_prefix>$table_$d{yyyyMMdd}_$i</file_prefix>.

  4. If this is not needed and you want records from all tables to be stored in a single file, you can use the <file_name> directive instead, like <file_name>records.json</file_name>.

  5. If your files will be separate for each shared table you should also specify a file suffix as follows, based on your type of output:

    1. <file_suffix>.json</file_suffix>

    2. <file_suffix>.xml</file_suffix>

    3. <file_suffix>.csv</file_suffix>

  6. Specify the maximum size of the file before the Agent moves to write to a new one in the form of <file_max_size>1MB</file_max_size>.

  7. If you are not constantly sharing much data out of ServiceNow, it may sometimes take too long before a file reaches its maximum size and is closed off. You can set up the Agent to close a file that has not reached the maximum size when it has not received new records for a table for a specified interval, using the following directive: <close_file_interval>180</close_file_interval>, where the integer value is specified in seconds.

Your agent.xml should look like this:

<?xml version="1.0" encoding="ISO-8859-1" ?>
<config>
    <agent>
        <!-- the following subscribe fragment defines subscribing class -->
        <!-- and its arguments -->      
        <subscribe>
            <task>
                <task_name>test_file_subscriber</task_name>
                <message_connection password="password_here" user="admin" queue="psp.in.meshlet.example">https://<customer>.perspectium.net</message_connection>
                <instance_connection password="Adminadmin1" user="admin">https://<instance>.service-now.com</instance_connection>        	
                <max_reads_per_connect>1</max_reads_per_connect>
                <polling_interval>3</polling_interval>         	
                <decryption_key>Example_decryption_key_here</decryption_key>  
                <handler>com.perspectium.replicator.file.JSONFileSubscriber</handler>
 
                <buffered_writes>10</buffered_writes>          	
                <files_directory>/Users/You/Downloads/Example</files_directory>                         	
                <file_prefix>$table_$d{yyyyMMdd}_$i</file_prefix>
                <file_suffix>.json</file_suffix>
                <file_max_size>50KB</file_max_size>
                <translate_newline>%13</translate_newline>
                <separate_files>table</separate_files>
                <enable_audit_log/>
                <close_file_interval>180</close_file_interval>
            </task>
        </subscribe>
    </agent>
</config>
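
With the configuration above, and assuming $i resolves to an incrementing file index, the Agent would produce one file per shared table in the configured directory, with names along the lines of:

incident_20240926_0.json
cmn_location_20240926_0.json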


Install Agent to Share to Amazon S3 Bucket

To install the Agent to share to Amazon S3 buckets, select the “Manual Configuration (advanced)” option for DataSync Type during installation. Then follow these steps:

  1. Add the .jar files listed here to your Agent’s “extlib” folder.

  2. Open agent.xml and modify the following tags like this:

<task_name>s3_agent_subscribe</task_name>

<handler>com.perspectium.replicator.file.S3Subscriber</handler>

  3. Specify the decryption key:

<decryption_key>The cow jumped over the moon and back</decryption_key>

  4. Specify the following options based on how you want to configure the replication:

<access_key> - the access key associated with your AWS account, only when the DataSync Agent is not installed on an EC2 instance

<secret_access_key> - the secret access key associated with your AWS account, only when the DataSync Agent is not installed on an EC2 instance

<use_instance_credentials/> - if you are not using access key or secret access key, then you can use the instance permissions based on the IAM roles of the EC2 instance

<region> - this is the region your S3 bucket resides in

<s3_bucket> - Name of your AWS S3 bucket, including subdirectories if desired, to specify where the records will be uploaded e.g. bucketName/folder1/folder2

<file_format> - json or xml file format

<s3_bucket_encryption/> - this is an optional setting to use AWS server-side encryption when pushing files to the S3 bucket. Configuring this option will have the Agent request S3 to encrypt the data at rest using S3's built-in functionality

Your agent.xml should look like this:

<?xml version="1.0" encoding="UTF-8"?>
<config>
   <agent>
      <share />
      <subscribe>
         <task>
            <task_name>s3_agent_subscribe</task_name>
            <message_connection password="password" user="user">https://mesh.perspectium.net</message_connection>
            <instance_connection password="password" user="user">https://myinstance.service-now.com</instance_connection>  
            <handler>com.perspectium.replicator.file.S3Subscriber</handler>
            <decryption_key>The cow jumped over the moon</decryption_key>
            <access_key>AccessKey</access_key>
            <secret_access_key>SecretAccessKey</secret_access_key>
            <region>us-west-2</region>
            <s3_bucket>examples3bucket</s3_bucket>
            <file_format>json</file_format>
         </task>
      </subscribe>
      <polling_interval>40</polling_interval>
   </agent>
</config>


Install Agent to Share to Azure Blob Storage

You can also set up your Agent to share .json or .xml files to an Azure Blob Storage container.

During installation, select the “Manual Configuration (advanced)” option for DataSync Type. Then follow these steps:

  1. Open agent.xml and modify the following tags like this:
    1. Set the <task_name> directive to <task_name>azure_blob_storage_agent_subscribe</task_name>.
    2. Set the <file_format> directive to either <file_format>json</file_format> or <file_format>xml</file_format>.
    3. Set <abs_container> to specify the name of your Azure Blob container, including subdirectories if desired, to configure where the records will be uploaded, e.g. container/folder1/folder2.
    4. Set <connection_string> to specify the string used to connect to Azure and upload to the Azure Blob Storage container.

Your agent.xml should look like this:

<?xml version="1.0" encoding="UTF-8"?>
<config>
   <agent>
      <share />
      <subscribe>
         <task>
            <task_name>azure_blob_storage_agent_subscribe</task_name>
            <message_connection password="password" user="user">https://mesh.perspectium.net</message_connection>
            <instance_connection password="password" user="user">https://myinstance.service-now.com</instance_connection>  
            <handler>com.perspectium.replicator.file.AzureBlobStorage</handler>
            <decryption_key>Some decryption key here</decryption_key>            
            <abs_container>pspcontainer</abs_container>            
            <file_format>json</file_format>
            <connection_string></connection_string>
          </task>
      </subscribe>
      <polling_interval>40</polling_interval>
   </agent>
</config>
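
The <connection_string> value is your storage account's connection string, which you can copy from the Azure Portal under the storage account's Access keys section. Azure Storage connection strings generally take this form (AccountName and AccountKey are placeholders for your own values):

DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=yourbase64key;EndpointSuffix=core.windows.net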


Agent Plugins Install

The Agent includes optional plugins you can configure to customize how your Agent saves data to the database. This is only applicable to the database use case. The plugin that is used most often is the IO Datetime plugin.

IO Datetime Plugin

The IO Datetime plugin allows the DataSync Agent to add additional columns to the synchronized tables to record the timestamps of when those records were first inserted into, last updated in, or deleted from the database. As opposed to the usual ServiceNow sys_created_on and sys_updated_on fields, which keep the date and time when those transactions happened in ServiceNow, the IO Datetime plugin columns keep the timestamps of the database transactions triggered by the Agent. This plugin can be activated by adding the following lines to agent.xml within the <task> directive:

<!-- Create the columns -->
<dynamic_columns>    
    <dynamic_column column_type="93" column_size="32" deletes_only="true">psp_per_delete_dt</dynamic_column>
    <dynamic_column column_type="93" column_size="32" updates_only="true">psp_per_update_dt</dynamic_column>
    <dynamic_column column_type="93" column_size="32" inserts_only="true">psp_per_insert_dt</dynamic_column>
</dynamic_columns>
  
<!-- Update the columns -->
<plugins>
    <plugin insert_column="psp_per_insert_dt" update_column="psp_per_update_dt" delete_column="psp_per_delete_dt">com.perspectium.replicator.sql.plugin.SQLSubscriberIODateTimePlugin</plugin>
</plugins>

Validate Agent Configuration

After installation, you can use a dedicated executable to validate whether your Agent is installed properly and get early information about specific issues. Go to the Agent's install directory, navigate to the bin folder and start validateConfiguration.bat (Windows) or validateConfiguration (Linux) to perform this validation, which will print out any errors the Agent sees when it tests its configuration. The Agent also performs this validation on startup.
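
For example, on Linux, assuming an install under /opt/perspectium/agent (adjust the path to your environment):

cd /opt/perspectium/agent/bin
./validateConfiguration

On Windows, run validateConfiguration.bat from the same bin folder.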

Install the Agent to Run as a Service

Usually when in operation the DataSync Agent is left running continuously and unattended on a virtual machine. This is best achieved when the Agent is set up to run as a Windows or Linux service. To install the Agent as a service, execute installService.bat or installService.sh from the Agent's bin directory. This should create a Perspectium DataSync Agent service that automatically starts when the machine starts.

If the service is not able to start automatically, the best place to find troubleshooting information is the wrapper log, which you can enable as follows:

  1. Go to the Agent's conf folder.
  2. Open wrapper.conf in a text editor.
  3. Find the line which says #wrapper.logfile= and just below it uncomment the line to specify where the logs should be written, e.g. ..\logs\wrapper.log (see the sketch after this list).
  4. Try to start the service again and check the log file for possible errors.
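
A minimal sketch of the resulting wrapper.conf lines; the log path shown is an example, use any writable location:

#wrapper.logfile=
# Uncommented copy pointing at the desired log location
wrapper.logfile=..\logs\wrapper.log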




DataSync with Snowflake

Sharing ServiceNow data to Snowflake uses the Perspectium Meshlet for Snowflake application, which serves the same purpose as the Perspectium DataSync Agent but is built to communicate with a Snowflake data warehouse.

Before installation you need to have the following prerequisites:

  1. A Windows or Linux virtual machine to host the meshlet with minimum parameters as per this article.
  2. Meshlet installation files provided to you by Perspectium.
  3. A Snowflake instance with an already created database, warehouse and schema to store the replicated ServiceNow data.
  4. A Snowflake system account with the necessary privileges as per the list below (see the example GRANT statements after this list):

     Object               Privileges
     Virtual Warehouse    USAGE
     Database             USAGE
     Schema*              USAGE, CREATE TABLE, CREATE FILE FORMAT, CREATE STAGE
     Table^               SELECT, INSERT, UPDATE, DELETE, TRUNCATE
     Stage                WRITE
     File Format          USAGE


  5. A ServiceNow system account with at least the perspectium role, which is needed for this account to be able to fetch specific ServiceNow table schemas when the meshlet needs to create or alter a table schema in the target database to reflect the ServiceNow structure.
  6. The virtual machine where you will be installing the meshlet needs to have connectivity to the Perspectium MBS endpoint, e.g. customername.perspectium.net.
  7. The same virtual machine needs to have connectivity to Snowflake.
  8. The same virtual machine needs to have connectivity to the source ServiceNow instance. This connection is only needed for the meshlet to be able to fetch specific ServiceNow table schemas when it needs to create or alter a table schema in the target database.
  9. Make sure the necessary network connectivity exists and the needed firewall rules are set properly.
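
For illustration, the privileges above could be granted with statements along these lines. PSP_ROLE is a hypothetical role name; COMPUTE_WH, TESTING and PUBLIC match the sample application-dev.yml shown further below. Adjust all names to your environment:

-- Hypothetical Snowflake grants for the meshlet's system account role
GRANT USAGE ON WAREHOUSE COMPUTE_WH TO ROLE PSP_ROLE;
GRANT USAGE ON DATABASE TESTING TO ROLE PSP_ROLE;
GRANT USAGE, CREATE TABLE, CREATE FILE FORMAT, CREATE STAGE ON SCHEMA TESTING.PUBLIC TO ROLE PSP_ROLE;
-- Covers existing tables; objects the meshlet creates itself are owned by it
GRANT SELECT, INSERT, UPDATE, DELETE, TRUNCATE ON ALL TABLES IN SCHEMA TESTING.PUBLIC TO ROLE PSP_ROLE;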

Installation Steps

  1. Place the meshlet files provided by Perspectium in the installation folder of the virtual machine
  2. Open config.application-dev.yml in a text editor to configure the specific meshlet properties.
  3. Under the http section provide
    1. url - the Perspectium Message Bus endpoint provided by Perspectium e.g. https://customername.perspectium.net/
    2. username - Perspectium Message Bus username
    3. password - the Message Bus password
  4. Under the snowflake section provide:
    1. username - Snowflake system account username
    2. password - Snowflake system account password
    3. warehouse - Snowflake warehouse 
    4. database - Snowflake database
    5. schema - Snowflake schema
    6. role - role of the user with the necessary permissions as described above
    7. connectionUrl - the Snowflake connection url in the following format jdbc:snowflake://instancename.snowflakecomputing.com/
    8. postInterval - the interval, in minutes, at which the meshlet should post the generated records from the temporary CSV file to the Snowflake database when the file has not reached its maximum size. You can set this property to 1 to indicate 1 minute.
  5. Under filesubscriber:
    1. fileDirectory - the directory within the meshlet install folder where the temporary CSV files with the processed data should be stored before pushing to Snowflake, e.g. /files.
    2. maxFileSize - the maximum number of records to be stored in each CSV file before copying the records to Snowflake. Values between 1000-5000 are recommended.
    3. deleteFiles - a boolean. We recommend setting this to true to delete files once they are processed and avoid running out of space.
    4. fileCleanerInterval - interval in minutes after which processed CSV files should be purged, e.g. 5.
  6. Under message:
    1. inboundQueue - the ServiceNow shared queue to which the meshlet should subscribe, e.g. psp.out.replicator.default.
    2. outboundQueue - the ServiceNow subscribed queue to which the meshlet should send back messages such as acknowledgments or table compare feedback, e.g. psp.out.servicenow.instancename.
    3. errorQueue - the queue to which the meshlet should send error messages if it errors out. For this you can use the ServiceNow default subscribed queue, e.g. psp.out.servicenow.instancename.
    4. defaultEncryptionMode - specify how the messages generated by the meshlet towards ServiceNow should be encrypted. Use one of the following values: 0 (Unencrypted), 1 (TripleDES), 2 (AES128), 3 (Base64), 4 (AES256).
    5. encryptionKey - specify the encryption key used for the ServiceNow shared queue, which will be used by the meshlet to decrypt the payloads.
  7. Under the auth section:
    1. url - ServiceNow instance url e.g. https://instancename.service-now.com/
    2. username - ServiceNow system account username
    3. password - ServiceNow system account password

Your application-dev.yml should look like this:

perspectium:
    http:
      url: https://customername.perspectium.net/
      username: customername/customername
      password: password
    snowflake:
      username: username
      password: password
      warehouse: COMPUTE_WH
      database: TESTING
      schema: PUBLIC
      role: ACCOUNTADMIN
      connectionUrl: jdbc:snowflake://instancename.snowflakecomputing.com
      #Interval in minutes for posting during quiet periods when max file size not reached
      postInterval: 1
      purgeTable: true
      purgeTableOption: truncate
      batchSize: 10
    filesubscriber:
      fileType: csv
      buildHeader: true
      maxFileSize: 5000
      customFileName: $table-$randomid
      #Directory where files will be created
      fileDirectory: /testdir
      deleteFiles: true
      #How often in minutes to clean up files
      fileCleanerInterval: 5
      #Max file age in minutes to be kept
      fileMaxAge: 24
      purge: true
      #Legacy option that concatenates GUID values into a sys_id field. If false, the RecordId attribute is used as sys_id
      legacyDbViewId: false
    message:
      receipts:
        enable: false
      inboundQueue: psp.out.replicator.queuename
      outboundQueue: psp.out.servicenow.instancename
      errorQueue: psp.out.servicenow.instancename
      errorQueuePattern: psp.out.snowflake
      defaultEncryptionMode: 4
      encryptionKey: The cow jumped over the moon and back
    auth:
      url: https://instancename.service-now.com/
      username: pspuser
      password: password
    mapping:
      scheduler:
        reloadMappings: false
    directory: ${application_directory}
server:
    port: 8081




End to End Test

Once you have completed the installation for both ServiceNow and the DataSync Agent/meshlet, it is always good to run a quick test by sharing records from ServiceNow and making sure they arrive in the database. Perform the following steps:

1. In ServiceNow go to Perspectium >> Shares >> Bulk Shares >> Create New Bulk Share.

2. Create a new bulk share with only some basic configuration:

    1. Name - specify a name for the bulk share.
    2. Table name - select a relatively small table like cmn_location.
    3. Sharing Setup - Share the table records only.
    4. Encryption method - AES256.
    5. Target queue - the shared queue you created during installation.
    6. Save the bulk share and click the “Execute Now” button.
    7. Once you execute the Bulk Share you will be taken to the bulk share list.
    8. Open the Bulk Share once again and wait until it finishes. Once finished its status should change to Completed and you should see a number of Records processed greater than 0.
    9. Go to Perspectium >> Outbound >> Outbound Messages to verify outbound payloads are generated and set to Sent status. You should see a number of messages where the topic is replicator and the name is tablename.bulk, such as cmn_location.bulk. This number should match the number of records processed in the Bulk Share. Check the status of those messages to make sure they are in Sent status, which would mean they were already sent to Perspectium MBS.

3. Go to the VM hosting your DataSync Agent/meshlet. If the Agent is not yet running as a service you can start the service or start the Agent in the foreground by double clicking the Agent.bat (Windows) or executing the runConsole.sh (Linux) file in the bin folder. For the meshlet, see here for how to run it as a service.

4. Once the Agent/meshlet is started it should process the records from the MBS queue. You can verify this in the Agent/meshlet logs by going to the logs folder and opening the most recent perspectium.log file. You should see lines in the log similar to the following:

2024-09-26 16:23:38.370 DEBUG - dev50002_subscribe - SQLSubscriber - dev50002_subscribe - processMessage active..., processed 123 messages

5. Check the logs for any errors or warnings indicating issues with message processing or connectivity to the target database. Errors and warnings are indicated as ERROR or WARNING in the logs.

6. If no errors are observed in the log, check your target database to make sure all of the shared records have arrived there successfully.




Start Sharing Data

Once you have verified end to end connectivity and that you are able to share ServiceNow data successfully into the target database, you are ready to configure your shares.

Bulk Shares Configuration

A bulk share is a job that can be triggered manually or on a schedule to share all or a subset of records from a given table to the DataSync Agent. Bulk shares are usually triggered manually to run an initial share that synchronizes all records from a table, and then added to a schedule to keep the target database in sync by only sharing the delta, i.e. the records in the table created or updated after the previous run.

To configure a bulk share follow these steps:

  1. Go to Perspectium >> Shares >> Bulk Shares >> Create New Bulk Share.
  2. Specify a share name in the Name field.
  3. In Table Name select the table you want to share.
  4. In Sharing Setup select whether you want to share only the records from the specific table (Share the table records only), the records from the specific table and all records belonging to any tables extending the current one (Share all table records and child records) or share only the last descending child of a record from the tables extending the current one if there are any extensions (Share the last descending child of each table record). For more information, see Replicating ServiceNow's Parent Child Hierarchy.
  5. Encryption method - select the encryption algorithm which you want to use to encrypt the payload before sharing so that confidential information cannot be read while in transit through Perspectium MBS. It is recommended that you use AES128 or AES256 as a secured way of encrypting data. 
  6. Select the target queue which is the shared queue to which you want to share this data. This target queue should be configured in your agent.xml file for your agent to be able to subscribe and fetch data from it.
  7. From the Related Links list at the bottom you can switch to Standard view to access some more advanced configurations of the bulk share:
  8. Share only selected fields - allows you to specify only certain fields from the source table to be shared. This is useful in case you are only interested in some of the table columns and you do not want to overload your target database with unnecessary data.
  9. Include attachments - to share only attachment records related to the records of the main table you are sharing. Note that you can share the sys_attachment and sys_attachment_doc tables separately via dedicated shares but this option here allows you to easily limit to only share attachments related to the actual records you are sharing and not orphan attachments.
  10. Include journal fields - to share related sys_journal_field records such as comments or work notes. Note that journals along with attachments will be shared to the target database as separate records residing in their respective tables such as sys_journal_field, sys_attachment and sys_attachment_doc and not to the main table you are sharing within this Bulk Share.
  11. Share updates since then - usually you want this checkbox enabled so the Bulk Share runs incrementally, sharing only the delta after its first run instead of the whole table dataset every time it executes. This field is tied to the Last share time field, which holds a timestamp of when this Bulk Share last ran. When you create a Bulk Share, the Last share time field is initially empty, which allows the Bulk Share to share all records from the table. On subsequent runs, with Share updates since then enabled, it will only share records inserted or updated in ServiceNow after the timestamp in Last share time.
  12. Condition - you can specify a filter condition to only share a subset of the records from the table which will match this filter.
  13. Scripting - you can customize your Bulk Share to run specific ServiceNow Javascript before it starts, for each record or once it finishes in case you want to modify or enrich the outgoing payload.
  14. Number of jobs - by default your Bulk Share will run with one job only. In case you are sharing a large table or a table which generates a large amount of delta records you can speed up sharing by selecting a higher number of jobs. This will speed up the generation of outbound messages which are placed in the Perspectium Outbound table. Note that each job here will consume one worker out of your ServiceNow nodes during execution time.
  15. Note that in order to save time, if you are creating Bulk Shares with similar settings you can clone an existing Bulk Share using the Clone bulk configuration related link and make any needed changes on the clone before executing.

Dynamic Shares

As opposed to Bulk Shares, Dynamic Shares trigger in real time when a record is inserted, updated or deleted in a table, to share only this specific record.

To configure a new Dynamic Share follow these steps:

  1. Go to Perspectium >> Shares >> Dynamic Share >> Create New Dynamic Share
  2. Select the type of Dynamic Share, Business Rule or Flow Designer.
    1. Business Rule Dynamic Shares are triggered to create an outbound message by a Business Rule created behind the scenes by the Perspectium application to run on the source table. These Business Rules will trigger for each insert, update or delete operation for a record in the source table based on which operations are activated in the Dynamic Share and given that the record will match the Dynamic Share filter condition.
    2. Flow Designer Dynamic Shares will generate a separate flow which will trigger for the selected operations i.e. insert or update and also for each child table inheriting the selected source table. Note that Flow Designer Dynamic Shares do not trigger on deleting records from the source table.
  3. Check the Active option
  4. Specify the Name of the Share
  5. Specify the source table
  6. In Sharing Setup select whether you want to share only the records from the specific table (Share the table records only), the records from the specific table and all records belonging to any tables extending the current one (Share all table records and child records) or share only the last descending child of a record from the tables extending the current one if there are any extensions (Share the last descending child of each table record). For more information, see Replicating ServiceNow's Parent Child Hierarchy.
  7. Encryption method - select the encryption algorithm which you want to use to encrypt the payload before sharing so that confidential information cannot be read while in transit through the Perspectium Message Bus. It is recommended that you use AES128 or AES256 as a secured way of encrypting data. 
  8. Select the target queue which is the shared queue to which you want to share this data. This target queue should be configured in your agent.xml file for your agent to be able to subscribe and fetch data from it.
  9. From the Related Links list at the bottom you can switch to Standard view to access some more advanced configurations of the Dynamic Share:
  10. Share only selected fields - allows you to specify only certain fields from the source table to be shared. This is useful in case you are only interested in some of the table columns and you do not want to overload your target database with unnecessary data.
  11. Include attachments - to share only attachment records related to the records of the main table you are sharing. Note that you can share the sys_attachment and sys_attachment_doc tables separately via dedicated shares but this option here allows you to easily limit to only share attachments related to the actual records you are sharing and not orphan attachments.
  12. Include journal fields - to share related sys_journal_field records such as comments or work notes. Note that journals along with attachments will be shared to the target database as separate records residing in their respective tables such as sys_journal_field, sys_attachment and sys_attachment_doc and not to the main table you are sharing within this Dynamic Share.
  13. Include referenced field records to share records from other tables referenced by this one. Once this option is enabled you will need to explicitly list any needed referenced fields in the related list which will appear at the bottom of the Dynamic Share.
  14. Create/Update/Delete - select the operations on a given record which will trigger sharing.
  15. Select column updates to share on - this will trigger sharing only if specific column value changes during a transaction. This is useful to limit the volume of records shared when a lot of transactions may happen over time on a record.
  16. Select column updates to ignore - this is similar to the Select column updates to share on but allows you to explicitly list column updates which are to be ignored. Both options will show a related list at the bottom of the Dynamic Share to list the fields in scope.
  17. Interactive only - having this option enabled will only share transactions triggered by ServiceNow end users. Usually you want this option disabled so that transactions generated by integrations, backend scripts and other jobs are also shared.
  18. Condition - you can specify a filter condition to only share a subset of the records from the table which will match this filter.
  19. Scripting - you can customize your Dynamic Share to run specific ServiceNow Javascript before or after sharing a record in case you want to modify or enrich the outgoing payload.

Initial Data Seeding

Initial sharing of your ServiceNow data to the target database is usually done via Bulk Shares. You can configure Bulk Shares for each table you want to seed to the database and have the “Share updates since then” option enabled. This will allow the bulk shares to share all records during the first execution and then automatically start sharing the delta on every next execution.

Bulk Share Schedule

Once the initial seeding is completed you can add your Bulk Shares to a schedule to run them on appropriate time intervals, like every 5 minutes, every hour, every day at a specified time, on specific days or on weekends. This allows data to be shared out on a scheduled basis so you can capture changes since the last time a Bulk Share ran. To create one go to Perspectium >> Shares >> Scheduled Bulk Share >> Create New Scheduled Bulk Share.

  1. Specify a Name for the Schedule e.g. Hourly.
  2. Save the Scheduled Bulk Share.
  3. Enable repeating jobs - specify whether executions of a bulk share are allowed to overlap in case a previous run has not finished before the next one triggers.
  4. Number of bulk shares to run simultaneously - in case you have multiple Bulk Shares added to the schedule, defines how many of those can run at the same time, e.g. 5.
  5. In the Bulk Configurations list click Edit to add one or more Bulk Shares to the schedule so they can trigger once the schedule has reached the time.

Migrating Shares Across Environments

Shares and other configurations such as Scheduled Bulk Shares, Table and Transform Maps can be exported and imported into another ServiceNow environment to avoid manual configuration. To do this, go to Perspectium >> Tools >> Download Share Configurations. Select the objects you want to export and click Download to get their XML definition.

On the target ServiceNow instance where you want to migrate those, go to Perspectium >> Tools >> Import Share Configurations, specify the file to upload and select a queue which exists on the target instance. All imported shares will be linked to this shared queue, since queues differ between environments.




Specific Use Cases

Sharing display value for reference and choice fields

By default when you share a table, reference fields in the table records are shared with their sys_id values. The same happens for choice fields, which share their backend values and not their display values. For example, sharing a Priority choice field shares the backend values 1, 2, 3, etc. and not the display values Critical, High, Medium, etc.

Sometimes, for reporting on top of the target database data, you need the display values without running extensive lookup queries to fetch those from another table. This can be achieved by enabling the Share display values setting: go to Perspectium >> Properties >> DataSync Settings and toggle on the Enable DataSync to add display value fields for reference and choice fields (prefixed with 'dv_' by default) option. This adds one extra column to the target database, prefixed with dv_, containing the display value of the original column.
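
For illustration, a hypothetical query against a replicated incident table after enabling the option (column names assume the default dv_ prefix):

-- Both the backend value and its display value are now available:
SELECT priority, dv_priority FROM incident;
-- priority | dv_priority
-- 1        | Critical
-- 3        | Medium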

Sharing large volumes of data

There are certain configuration changes which can be performed both on ServiceNow and the DataSync Agent to avoid bottlenecks when sharing large volumes of data and allow sharing of large amounts of records to complete in a timely fashion. 

Increase number of Bulk Share jobs

At the Bulk Share level, the jobs of the share can be increased from the default of one to a higher number such as 2/4/8/16. If you are doing this on an existing Bulk Share which has already been executed, you need to clone it to a new Bulk Share. Switch to Standard view from the Related Links button, open the Advanced tab and change the Number of jobs dropdown value.

Increase number of MultiOutput jobs

Once you increase the number of Bulk Share jobs your shares will run faster, but the number of records placed in the Perspectium Outbound table may then become a bottleneck. You can also increase the number of Perspectium MultiOutput processing jobs from the default of one to 2/4/8/16 jobs by going to Perspectium >> Tools >> Create MultiOutput Jobs, selecting a Job Type from the dropdown and selecting a higher number of jobs in the Number of Jobs dropdown.

Meshlet Single Threading

Whereas the DataSync Agent is meant to be multi-threaded (see below), meshlets are intended to be single-threaded microservice applications. To increase the consumption speed into the database, it is suggested that you run multiple meshlets (similar to how people run multiple containers). For more information, contact Perspectium Support.

Agent Multi Threading

If you are sharing a big volume of data from ServiceNow, the DataSync Agent may not be able to process all the records with a single queue in a timely fashion. You can notice such a bottleneck if you open your ServiceNow shared queue and click the Get Queue Status related link. In the Status field you should see a message saying Connection to queue was successful and there are XXX messages. This is the number of messages currently sitting in this queue in MBS waiting to be processed by the Agent. If this number is too high, say above 1 million, the Agent has a huge backlog and may not be able to process those messages quickly.

There are two approaches to make your Agent multi threaded to process this large volume of records quickly:

Use multiple queues

The Agent can be configured to use multiple queues by copying the section within the <task></task> directive in the agent.xml file. Then in the new task rename the <task_name> directive and, inside the <message_connection> directive, specify a different queue from which the Agent should consume, as sketched below.

From the ServiceNow side you need to configure this new queue as a Shared Queue and you need to redirect some of your Bulk or Dynamic Shares to use this new queue.
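
A minimal sketch of agent.xml with two subscribe tasks, assuming a second shared queue named psp.out.replicator.acmesecond has been created in ServiceNow (handler, keys and remaining directives abbreviated):

<subscribe>
    <task>
        <task_name>acme_queue_one_subscribe</task_name>
        <message_connection password="password" user="user" queue="psp.out.replicator.acmedefault">https://customername.perspectium.net</message_connection>
        <!-- ...handler, decryption key and other directives as in your existing task... -->
    </task>
    <task>
        <!-- Copy of the task above with a unique name, consuming from the second queue -->
        <task_name>acme_queue_two_subscribe</task_name>
        <message_connection password="password" user="user" queue="psp.out.replicator.acmesecond">https://customername.perspectium.net</message_connection>
        <!-- ...same remaining directives... -->
    </task>
</subscribe>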

Note that it is not good practice to share data from the same table to different queues, as this may lead to race conditions where records are updated out of order and your target database may not have the most up-to-date versions of the shared records. So if you have multiple shares sharing the same table, make sure they always use the same Shared Queue.

Use child queues

Another approach would be to configure your Shared Queue to have child queues. This way you split the load on this queue across the number of queues specified for it. For this configuration, open the Shared Queue in ServiceNow, go to the Miscellaneous tab and select a higher Number of Queues from the dropdown. This turns all Bulk Shares using this queue into “Conditional” Bulk Shares, meaning a condition is automatically applied to them to split the load across the child queues based on the first character of the sys_id of the record being shared. Given that each record’s sys_id is randomly generated and starts with a hexadecimal character, this guarantees an almost even distribution of records across the child queues.

The Agent also needs to be reconfigured to use child queues so it can process each child queue as a separate queue. This is done by going to the agent.xml file, adding the num_queues attribute to the <message_connection> directive and specifying the number of child queues. For example, if a queue has 4 child queues, it will be configured in agent.xml like:

<message_connection password="password" queue="psp.out.replicator.prodqueue" user="user" num_queues="4">https://customername.perspectium.net</message_connection>

Truncate before share

There are cases where incremental sharing cannot be configured for a given ServiceNow table, for example when the sys_created_on and sys_updated_on columns are not present on the table, or where you cannot continuously keep the table in sync with the target database, such as when deletes cannot be captured in ServiceNow (for example when discovery runs). A good approach in such cases, especially for smaller tables, is to truncate the target table before sharing the full set of records once again. This can be achieved by enabling the Truncate before share option of the Bulk Share.

In addition to this, the <truncate_table>true</truncate_table> directive needs to be added to the DataSync Agent's agent.xml configuration file inside the <task> directive, as sketched below.
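
A minimal sketch of the placement; the task name and the other directives come from your existing configuration:

<task>
    <task_name>your_existing_subscribe_task</task_name>
    <!-- ...existing connection, handler and key directives... -->
    <!-- Truncate the target table before each full share -->
    <truncate_table>true</truncate_table>
</task>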

Support of record deletion

In case you want to keep in sync records deleted from ServiceNow after those were shared via DataSync, you need to use Dynamic Shares with the Delete operation enabled as a trigger. There are three ways a record deletion can trigger an outbound message towards Perspectium MBS, to be further consumed by the Agent to delete the record in the target table:

  1. Normal Business Rule on the source table - this is the most straightforward way, where you have the Delete option enabled on the Dynamic Share, which behind the scenes generates a Business Rule on the source table that fires once a record is deleted.
  2. Use Audit Delete Listener. Sometimes Business Rules may not fire on a table if records are deleted via some background ServiceNow activities. To still capture deletions, you can enable the Use audit delete listener checkbox to fire a Business Rule on the sys_audit_delete table once an audit record is created for the given deletion. Note that this option only appears if the source table is audited for deletions, and the Delete option still needs to be enabled on the Dynamic Share.
  3. Use Delete Recovery Listener. When a ServiceNow record is deleted, this event should also be tracked in the ServiceNow Delete Recovery. This can also be used as a trigger for the Dynamic Share to share the deletion, if the Use Delete Recovery Listener option is enabled. It also requires the Delete option to be enabled on the Dynamic Share.




Table Compare and Synchronize Deletes

Table Compare and Synchronize Deletes are two tools provided with Perspectium DataSync which allow you to remediate discrepancies between ServiceNow and the target database.

Table Compare

In case you see individual records missing from the target database you can run Table Compare which will trigger a job to automatically identify and re-share those records.

  1. Go to Perspectium >> Tools >> Table Compare: ServiceNow to Database >> Compare Records by Table
  2. Select a table for which you want to compare and re-share the missing records. Note that there should be a Bulk Share configured for this table.
  3. Choose the Comparing Setup.
  4. Choose the Bulk Share you want to use to re-share the missing records.
  5. Add any filter conditions for the source table records.
  6. Click the Add button and then the Run Table Compare button.
  7. Wait for the job to finish and check the target table to verify that the missing records were successfully re-shared.

Synchronize Deleted Records

Synchronize Deleted Records will run a job to compare all records in the target database table against the records currently residing in the corresponding ServiceNow table and trigger deletions for any record existing in the target database which is no longer present in ServiceNow because it was already deleted.

  1. Go to Perspectium >> Tools >> Table Compare: ServiceNow to Database >> Synchronize Deleted Records.
  2. Choose a table for which you want to run the job to compare deleted records.
  3. Choose a Bulk Share.
  4. Click the Run Table Compare button.
  5. Wait for the job to finish and check the target table to verify the extra records were deleted.

 




Regular Integration Health Checks

Here is a list of checks you can perform on a regular basis to make sure your DataSync integration is in a healthy state:

  1. Implement monitoring at the server/VM level where the Agent/meshlet is installed, or manually check that:
    1. The machine is not running out of disk space.
    2. The Agent/meshlet service is continuously running.
  2. In ServiceNow you can check
    1. The count of records in the Perspectium Outbound table psp_out_message. If it has grown large (such as several million records in the table), integration performance decreases, which indicates that more Perspectium MultiOutput jobs are needed to sustain the sharing speed.
    2. Check shared and subscribed queue status to see if too many messages are sitting in Perspectium MBS, which would indicate a delay in replication and a potential bottleneck.
    3. Go to Perspectium >> Dashboard, which also shows relevant statistics and health indicators.




Troubleshooting DataSync

When records shared from ServiceNow are not reaching the target database, you first need to identify which part of the pipeline is causing problems. Usually you start by checking ServiceNow.

ServiceNow Troubleshooting Steps

  1. Check if your Bulk or Dynamic Shares are sharing records.
    1. If the missing records are intended to be shared by a Bulk Share, check if it has executed recently.
    2. Try sharing an example record and see if the Bulk Share will share it.
    3. If the missing records are intended to be shared by a Dynamic Share, try to insert, update or delete a record according to the use case and see if an outbound message for this record is generated in the Perspectium Outbound table by going to Perspectium >> Outbound >> Outbound Messages.
    4. If no message is being generated, check whether the Bulk or Dynamic Share filter condition is configured in a way that filters out those records.
  2. Check the Perspectium log in ServiceNow by going to Perspectium >> Troubleshooting >> Logs and look for errors or warnings.
  3. If your share is generating outbound messages, go to the Outbound table under Perspectium >> Outbound >> Outbound Messages, locate the messages and check if they are in Sent status. If the status is Ready, those messages have not yet been sent to MBS. This could be either due to a huge backlog in this table, in which case you may need to increase the number of Perspectium MultiOutput Jobs to speed up sending, or because the Perspectium jobs are inactive.
  4. To check the jobs, go to Perspectium >> All Scheduled Jobs and see if the Perspectium MultiOutput Processing jobs are enabled. You can enable all Perspectium jobs if needed by going to Perspectium >> Start / Stop All Jobs and clicking the Start button.

If the outbound records are in Sent status, you can check the Shared Queue status by going to Perspectium >> Shared Queues and opening the record for the given Shared Queue. Click the Get Queue Status related link. In the Status field you should see a message like Connection to queue was successful and there are X messages. If the number of messages currently sitting in the queue is too high, this means the Agent/meshlet is either not running or still processing records from the queue, and you can expect some delay in the target database.

Agent Troubleshooting Steps

  1. Log in to the machine which hosts the Perspectium DataSync Agent. 
  2. Check if the Agent service is up and running.
  3. Check the Agent's logs located in the logs folder under the Agent's installation directory. Look for log records on ERROR or WARNING level. 
  4. If your Agent appears to be running in a healthy state but you are still missing records, you can enable debug level logging for the Agent to log more details for troubleshooting. Debug logging prints the sys_id of each record the Agent processes, so you can verify whether the missing records are going through the Agent.
  5. To enable debug logging, go to the Agent's conf directory and open the file log4j2.xml in a text editor. Locate the line which says

    <Root level="info">

    and replace it with

    <Root level="debug">

  6. Save the file. 
  7. Restart the Agent to capture the new debug logging configuration.
  8. You can also verify that the Agent subscribes to the same queue which is used by ServiceNow for sharing by checking the queue attribute of the <message_connection> directive in the agent.xml configuration file.

NOTE: Make sure to disable debug logging when you are finished troubleshooting, as debug logging generates large log files quickly since it logs more details than standard (info) logging.