This guide describes how to optimize your DataSync integration for the best performance while avoiding issues where data updates are applied incorrectly or out of order in your database.

This is an advanced feature for those who have heavy loads of data to share out. Use this feature with caution, as it may use more resources on your ServiceNow instance.

To optimize performance while avoiding data loss, sharing is split on the hexadecimal numbering system (16 characters, 0-9 and A-F) used for ServiceNow sys_id values, with records routed to different queues based on those characters. This lets you share data with 2, 4, 8, or 16 Perspectium jobs running in parallel, each job sharing to a different queue.

This improves performance because multiple jobs share data in parallel. Because each job uses a different queue and the split is based on the sys_id's hexadecimal characters, all updates to a particular record always go to the same queue. Those updates are consumed first in, first out from that one queue, so every update to the record is captured, and you avoid records being shared out of order and changes being missed as a result.
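To illustrate the routing idea, the following is a minimal sketch (hypothetical code, not the Perspectium implementation, using made-up sys_id values and a simple character-to-queue assignment that may differ from the product's actual mapping). It shows why partitioning on the first hexadecimal character of a sys_id always sends updates for the same record to the same queue:

// Illustrative sketch only: not Perspectium code. The exact character-to-queue
// assignment Perspectium uses may differ; the point is that the mapping is deterministic.
public class SysIdQueueRouter {

    // numQueues is assumed to be 2, 4, 8 or 16 so the 16 hex characters split evenly
    static String queueFor(String sysId, int numQueues) {
        int firstHexDigit = Character.digit(sysId.charAt(0), 16); // 0-15
        int charsPerQueue = 16 / numQueues;
        int queueNumber = (firstHexDigit / charsPerQueue) + 1;    // same sys_id -> same queue, every time
        return String.format("psp.out.replicator.queue%02d", queueNumber);
    }

    public static void main(String[] args) {
        String recordA = "1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d"; // hypothetical sys_id
        String recordB = "9f3e2d1c0b4a5968778695a4b3c2d1e0"; // hypothetical sys_id

        // The create and every later update of recordA resolve to the same queue,
        // so one consumer thread processes them first in, first out.
        System.out.println(queueFor(recordA, 4)); // psp.out.replicator.queue01
        System.out.println(queueFor(recordA, 4)); // psp.out.replicator.queue01
        System.out.println(queueFor(recordB, 4)); // psp.out.replicator.queue03
    }
}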

For example, if you are splitting the sharing as follows:

One bulk share runs every minute to capture all updates to records whose sys_id starts with 1, and those updates go to psp.out.replicator.queue01

A second bulk share runs every minute to capture all updates to records whose sys_id starts with 2, and those updates go to psp.out.replicator.queue02

Then if the following sequence happens:

  1. A record whose sys_id starts with 1 is created → goes to psp.out.replicator.queue01 (first message in the queue)
  2. A record whose sys_id starts with 2 is created → goes to psp.out.replicator.queue02 (first message in the queue)
  3. The sys_id 1 record is updated → goes to psp.out.replicator.queue01 (second message in the queue)
  4. The sys_id 1 record is updated again → goes to psp.out.replicator.queue01 (third message in the queue)

Since we have two jobs running, these changes are captured in parallel and sent to the two queues for processing by the DataSync Agent. With the DataSync Agent configured properly as described below, with one thread running against each of those two queues, it would consume as follows:

  1. DataSync Agent thread 1, configured to read psp.out.replicator.queue01, reads the first message in that queue (the creation of the sys_id 1 record)
  2. DataSync Agent thread 2, configured to read psp.out.replicator.queue02, reads the first message in that queue (the creation of the sys_id 2 record)
  3. DataSync Agent thread 1 reads the second message in psp.out.replicator.queue01 (the first update to the sys_id 1 record)
  4. DataSync Agent thread 1 reads the third message in psp.out.replicator.queue01 (the second update to the sys_id 1 record)

This ensures all updates are captured and applied in the correct order.


If we instead used the legacy approach of two jobs running and sharing to the same queue, the records would appear in that single queue as follows:

  1. The sys_id 1 record is created → goes to psp.out.replicator.queue (first message in the queue)
  2. The sys_id 2 record is created → goes to psp.out.replicator.queue (second message in the queue)
  3. The sys_id 1 record is updated → goes to psp.out.replicator.queue (third message in the queue)
  4. The sys_id 1 record is updated again → goes to psp.out.replicator.queue (fourth message in the queue)

And if the DataSync Agent also used the legacy approach of two threads processing from the same queue, then in some cases (depending on how fast each thread is performing) the messages would be processed in the following sequence:

  1. DataSync Agent thread 1, configured to read psp.out.replicator.queue, reads the first message and finishes processing it
  2. DataSync Agent thread 2, configured to read psp.out.replicator.queue, reads the second message and finishes processing it
  3. DataSync Agent thread 1 reads the third message but processes it more slowly this time
  4. DataSync Agent thread 2 reads the fourth message and finishes processing it before thread 1 finishes the third message

This causes the database to reflect an incorrect update, because the record ends up showing whichever update finished last (in this case thread 1's third message, the older of the two updates to the sys_id 1 record), so the record displays stale data even though a newer update exists.

So the goal of the configuration below is to optimize performance while at the same time avoiding any data loss.

(info) NOTE: Running multiple jobs in parallel means more ServiceNow scheduled job workers are used. Choose the option that best suits your workload and the data being shared to maintain optimal performance of your instance.



ServiceNow Configuration

Create a shared queue

Create a shared queue using the Number of Queues option, choosing a value that matches the expected workload. This creates child queues so you can share data in parallel to different queues and avoid data loss. See the Number of Queues option for further information on configuring this feature.

Create a bulk share using the created shared queue

Create a bulk share and select the shared queue you created in the previous step. Selecting the previously created shared queue creates child bulk shares to match the number of child queues on the shared queue. This allows multiple bulk shares to run in parallel to different queues, improving performance while keeping updates to a particular record in the correct order to avoid data loss. See Sharing to a queue with the Number of Queues option for further information on configuring bulk shares with this shared queue feature.

Create multiple MultiOutput jobs

Once you've created the shared queue and bulk share, ensure you have enough MultiOutput jobs pushing data to the Perspectium Mesh (MBS) to keep up with the amount of data the bulk share generates. This keeps your outbound table (psp_out_message) from growing so large that it impacts performance on your instance. You can use the Multiple MultiOutput Jobs feature to configure jobs for MultiOutput processing by outbound queue.

Generally, one of the available options (2, 4, 8, or 16) should line up with the number of queues and bulk shares you have running; for example, if you created a shared queue with the Number of Queues option set to 4, select 4 for the number of MultiOutput jobs. However, you can use the All (One Job Per Queue) option if you have more than 16 queues and your instance has enough available jobs to process all of the queues.

(info) NOTE: The All (One Job Per Queue) option can create many jobs if you have many queues on your instance, since it is a 1:1 ratio of jobs created to the number of queues. Ensure your instance has enough jobs available to run these alongside any other jobs from other applications and processes on your instance so as not to affect your instance's overall performance.


Agent Configuration

Once you've configured the Perspectium application in ServiceNow and started sharing data out to queues in the Perspectium Mesh, you'll want to configure your DataSync Agent to consume from those queues. To avoid race conditions where multiple Agent threads read from the same queue and cause updates to be applied out of order (or lost), configure the Agent as follows:

Create a message connection

For the shared queue you created in ServiceNow above, create a <message_connection> with a num_queues attribute matching the value you specified on the shared queue. This creates a thread to consume each child queue along with the base queue.

For example, with the configuration:

<message_connection password="password" queue="psp.out.replicator.prodqueue" user="user" num_queues="4">amqp://instance.perspectium.net</message_connection>

The Agent will create five threads: one to read from the base queue psp.out.replicator.prodqueue, plus a separate thread for each of the four child queues:

psp.out.replicator.prodqueue01

psp.out.replicator.prodqueue02

psp.out.replicator.prodqueue03

psp.out.replicator.prodqueue04

The base queue psp.out.replicator.prodqueue can be used as a "fast lane" for records that need priority when being shared into the database. By default with this optimization setup, the bulk share uses the child queues 01, 02, 03, and 04 to evenly distribute the records being shared out of the table the bulk share runs against in ServiceNow.

Configure the Agent for your other settings

Configure the Agent's other settings (how to connect to the target database, your ServiceNow instance information, etc.) using the configuration options relevant to your setup.




Upgrading from a Legacy Configuration

If you already have your ServiceNow app and DataSync Agent configured and want to upgrade to optimize your setup as above, you will need to upgrade to the latest Perspectium versions that support the above functionality (Perspectium ServiceNow Core Krypton 8.1.0+ and Perspectium DataSync Agent Krypton 8.0.2+) and do the following:

Update your shared queue in ServiceNow

If you already have a shared queue and want to change it to this new setup, open the shared queue, use the Number of Queues option to select a value higher than the default, and save the record. This creates child queues for the new value specified (for example, selecting 4 creates 4 child queues).

Update your ServiceNow bulk shares to use the updated shared queue

With the shared queue updated to have child queues, you will then want to update your completed bulk share that uses this shared queue to also have child bulk shares. If you open a bulk share that uses a shared queue with child queues, you will see the Create child bulk shares for queues related list UI action.

Since you can't directly update completed bulk shares, you can use this option to create child bulk shares to match the child queues. This will then allow you to split out the bulk share to run in parallel to different queues the next time the bulk share is executed.

(info) NOTE: If your bulk share uses a target queue with child queues but does not have child bulk shares, a message will display when the bulk share record is opened, prompting you to use the Create child bulk shares for queues option.

Update your MultiOutput jobs to match your queues and bulk shares

With the shared queue and bulk share using child queues and child bulk shares to share out in parallel, use the Multiple MultiOutput Jobs feature to configure jobs for MultiOutput processing by outbound queue so they match your changes to the shared queue and bulk share; for example, choose the option for 4 MultiOutput jobs if you split the shared queue and bulk share into four children each. This ensures the MultiOutput jobs pushing data to the Perspectium Mesh (MBS) can keep up with the amount of data the bulk share generates, and that your outbound table (psp_out_message) doesn't grow so large that it impacts performance on your instance.

If you decide to create a job for every queue and you have more than 16 queues, note the warning about the All (One Job Per Queue) option in the Create multiple MultiOutput jobs step of the ServiceNow Configuration section above.

Update your message connection in your DataSync Agent configuration file

Update the <message_connection> in your agent.xml configuration file so its num_queues attribute matches the value you specified on the shared queue. If the instances attribute on <task> is set to a value greater than 1, lower it (for example, to 1) so that multiple Agent threads do not end up consuming from the same queue.
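For example, if you set the Number of Queues option on the shared queue to 4, the relevant portion of agent.xml would look something like the following (a minimal sketch with placeholder credentials, queue name, and broker URL, and assuming your <message_connection> sits inside the <task> it belongs to; adjust to match your actual configuration):

<task instances="1">
    ...
    <message_connection password="password" queue="psp.out.replicator.prodqueue" user="user" num_queues="4">amqp://instance.perspectium.net</message_connection>
    ...
</task>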
