

Below are some common issues you may come across while using DataSync for ServiceNow.  Contact support@perspectium.com if your issue is not listed below or you have any other questions.


Verify that your Quebec instance has a properly functioning GlideEncrypter

Log in to your ServiceNow instance with admin privileges.

In the Filter Navigator, type in Scripts - Background.

Run the following script:

// Encrypt a sample string, then decrypt it, to confirm that
// GlideEncrypter works end to end on this instance.
var ge = new GlideEncrypter();
var plainText = "The cow jumped over the moon";
var encrypted = ge.encrypt(plainText);
gs.print("Encrypting: " + plainText + ", and got: " + encrypted);
var decrypted = ge.decrypt(encrypted);
gs.print("Decrypting: " + encrypted + ", and got: " + decrypted);

If the result is successful, you will see output like the following (the encrypted value will vary by instance):

*** Script: Encrypting: The cow jumped over the moon, and got: plzF5fF0yab+qzzglBWoW+co191O2CUx+3l9W2kqQdA=
*** Script: Decrypting: plzF5fF0yab+qzzglBWoW+co191O2CUx+3l9W2kqQdA=, and got: The cow jumped over the moon

If the result is NOT successful, an error with a large stack trace will be displayed in the ServiceNow System Logs.

What if my bulk share shows as "Running" for a long time but I don't see any new outbound messages being created?

Verify if the bulk share scheduled job is actually still running.

First, go to System Scheduler > Scheduled Jobs > Scheduled Jobs and check for a job named Perspectium Replicator Bulk Share <bulk_share_table_name> (Gold and older versions) or Perspectium DataSync Bulk Share <bulk_share_table_name>, where <bulk_share_table_name> is the name of the table selected when creating the bulk share.

You can also look at System Diagnostics > Active Transactions (All Nodes) and see if the job with the aforementioned name shows up there. 

If you don't see the job in either place, then it was most likely terminated: the platform cleans up jobs that run too long in order to conserve resources and prevent memory leaks.

You can look in System Logs > System Log > All to see if there is a log entry that indicates why the bulk share job was terminated.

Another reason the bulk share job may be terminated is because of System Quota Rules.


To prevent a bulk share from being killed, split it into multiple bulk shares with smaller record counts using filter conditions. This way, no single bulk share runs long enough for its transaction to be terminated. For help splitting up your bulk shares into multiple ones, contact support@perspectium.com.
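As one way to split a bulk share, you can partition the records by creation date. The sketch below is plain JavaScript (a hypothetical helper, not part of the Perspectium app) that builds one encoded-query filter condition per year on sys_created_on; each resulting condition would go into its own smaller bulk share:

```javascript
// Build one encoded-query filter per calendar year so that a single large
// bulk share can be split into several smaller ones on sys_created_on.
function buildYearlyFilters(startYear, endYear) {
  var filters = [];
  for (var year = startYear; year <= endYear; year++) {
    filters.push(
      "sys_created_on>=" + year + "-01-01 00:00:00" +
      "^sys_created_on<" + (year + 1) + "-01-01 00:00:00"
    );
  }
  return filters;
}

var filters = buildYearlyFilters(2019, 2021);
// filters[0] === "sys_created_on>=2019-01-01 00:00:00^sys_created_on<2020-01-01 00:00:00"
```

You could split on any field with a reasonably even distribution (creation date, number ranges, assignment group); the point is that each resulting bulk share processes fewer records and finishes before the platform's transaction cleanup kicks in.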

Why is the state of my outbound message stuck on "Ready"?

One possible solution is to verify that the queue you are trying to send the messages to is valid. 

Go to Perspectium > DataSync > Shared Queues and check whether the target queue that the bulk share or dynamic share is using is still active. If it was previously active but no longer is, check the Status field. If the message states that the queue has been deactivated because HTTP POST failed, the queue has invalid credentials (401 error). Enter the correct credentials in the Queue user and Queue user password fields, then click Get Queue Status again to check whether the connection is successful.

If the connection is successful, check Active to start using the queue.

If the connection is still not successful, contact support@perspectium.com.

What if my old outbound messages are not being deleted or I am getting a warning log that I may need to reset my data cleaner rules?

If your outbound messages are not being cleared periodically, you may need to reset the data cleaner rules used to delete sent messages. To do this, go to the u_psp_data_cleaner table and click Reset Data Cleaner Rules.

(warning) WARNING: This will reset the table with the default data cleaner rules. If you have any custom rules or changes, they will be deleted.

For further help, contact support@perspectium.com.

Sharing records from one ServiceNow instance to another using Data Guarantee is unsuccessful?

If your records cannot be shared out from ServiceNow to the DataSync Agent successfully, try the following:

  1. In ServiceNow, navigate to Perspectium > Tools > Receipts.

  2. Check the checkbox next to the Receipt record(s) with a Pending or Error Delivery Status.

  3. In the bottom left-hand corner of the screen, click the Resend Message(s) button. This will create a new receipt record in pending status and set the original receipt record to success status. The new resend receipt will remain in the pending status until it gets an acknowledgement from the agent.

  4. Refresh the page and check the Delivery Status for your Receipt record(s) again.

Why am I receiving data or updates when my subscribe is inactive?

There are two different factors that can be at play if you are encountering this situation.

The first relates to messages being consumed but not acted on. When you create a subscribe configuration, you check the actions you want to accept (Create, Update, Delete). If a message is pushed to your queue for a table that has no configuration set, or the message is for an action you are not accepting, you will still consume the message from the queue but will not act on it. This can be seen in the Inbound Messages tab, where the message's state will read "Skipped".

The second is that you are subscribed to both a table and its parent table. For example, you have created a Subscribe for Incident and another Subscribe for Task (Incident extends Task). ServiceNow handles this hierarchy like a Java class hierarchy: an Incident is also a Task, but a Task isn't necessarily an Incident. ServiceNow also keeps a copy of the record in both tables, with the same fields and sys_id. If at some point you set the Incident Subscribe to inactive but leave the Task Subscribe active, then when someone pushes a Task record update and that record happens to be an Incident, you will consume and act on that message.

To avoid this second case, you can either remove the blanket Subscribe for the parent table or set the parent Subscribe to ignore certain classes within the Filter section.

For example, a condition on the Task Subscribe of Class is not Incident AND Class is not Problem will make you ignore updates to Problem and Incident records.
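The effect of such a class filter can be illustrated in plain JavaScript (this is an illustration of the logic, not the Perspectium implementation): records whose sys_class_name is in the ignored list are skipped by the Task-level Subscribe, leaving them to the child-table Subscribe if one is active.

```javascript
// Classes the Task-level Subscribe should ignore, mirroring a filter
// condition like "Class is not Incident AND Class is not Problem".
var ignoredClasses = ["incident", "problem"];

// Returns true when the Task Subscribe should act on this record.
function shouldProcess(record) {
  return ignoredClasses.indexOf(record.sys_class_name) === -1;
}

shouldProcess({ sys_class_name: "incident" }); // false: left to the Incident Subscribe, if any
shouldProcess({ sys_class_name: "task" });     // true: a plain Task is processed
```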

Why are there duplicate outbound messages?

If duplicate outbound messages are created, check that there are not two Perspectium Replicate business rules on the table, each in a different domain. The same rule in two domains (especially if one is in global and another in a lesser domain) can cause both rules to run and create duplicate Replicator messages when dynamically sharing a record.
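The check can be sketched in plain JavaScript (illustrative data, not a Glide query): given a list of business rules, flag any table where the Perspectium Replicate rule appears in more than one domain.

```javascript
// Find tables where the "Perspectium Replicate" business rule exists in
// more than one domain, which would cause duplicate outbound messages.
function findDuplicatedRules(rules) {
  var byTable = {};
  rules.forEach(function (rule) {
    if (rule.name !== "Perspectium Replicate") return;
    byTable[rule.table] = byTable[rule.table] || [];
    byTable[rule.table].push(rule.domain);
  });
  return Object.keys(byTable).filter(function (table) {
    return byTable[table].length > 1;
  });
}

var duplicated = findDuplicatedRules([
  { name: "Perspectium Replicate", table: "incident", domain: "global" },
  { name: "Perspectium Replicate", table: "incident", domain: "ACME" },
  { name: "Perspectium Replicate", table: "problem", domain: "global" }
]);
// duplicated === ["incident"]
```

In practice you would inspect the business rules (sys_script) list for the table, grouped by domain, rather than run this helper.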

Business rule exception: TypeError: Cannot convert null to an object

If you get the following error:

Exception (TypeError: Cannot convert null to an object. (; line 1)) occured while evaluating 'Condition:
PerspectiumReplicator.isReplicatedTable(current.getTableName(), "share", current.operation().toString())'
in business rule 'Perspectium Replicate' on cmn_location:Test; skipping business rule

This is generally due to recursive Business Rules calls.

If a Business Rule calls current.update(), the Business Rules will run recursively: update, trigger the Business Rules, update, trigger again, and so on, until ServiceNow's recursion handling kills it. Since our dynamic share is triggered by Business Rules, it works for the true, correct Business Rule call but fails on the recursive calls. You will therefore see errors from the failed runs, but the correct call will still replicate the data correctly.

This occurs because, in the Business Rule condition, we check current.operation().toString() to validate that your share configurations are set up correctly and that we aren't replicating erroneously. In a normal Business Rule call, current.operation() returns the operation as expected (insert/update/delete). In a recursive Business Rule call, current.operation() is null, resulting in this error.
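The failure mode can be shown in plain JavaScript (an illustration of the logic, not Glide code): the condition works when operation() returns a string, and throws a TypeError when a recursive call makes it return null.

```javascript
// Simplified stand-in for the business rule condition: calling toString()
// on the result of operation() throws a TypeError when that result is null,
// which is exactly what happens on a recursive business-rule call.
function evaluateOperation(current) {
  return current.operation().toString();
}

var normalCall = { operation: function () { return "update"; } };
evaluateOperation(normalCall); // "update"

var recursiveCall = { operation: function () { return null; } };
// evaluateOperation(recursiveCall); // throws a TypeError
```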

There are two ways you can narrow down on this problem:

  • Type “Debug Business Rule” in the filter navigator and enable the “Debug Business Rule (Details)” to start the ServiceNow trace of Business Rules and repeat the test to trigger these errors (edit and save a record). You may need to be in the form view of the record instead of the list view. Then copy and paste the Business Rule trace at the bottom of the form and send it to support@perspectium.com. This will help confirm that this is the case and narrow down on the rule.

  • Review Business Rules on the table to be shared for any current.insert() or current.update() being called as those will be the ones that would cause the recursive issue.

How do I clear the cache in my instance?

Sometimes after installing the newest Perspectium update set, the ServiceNow instance may still be using the old version of the included Perspectium scripts. To fix this you can add “/cache.do” to the end of your Instance URL like so:

You should be taken to a page that looks like this:

This will clear the instance cache so that it will use the newest versions of the included scripts. Then just use the back-arrow in your browser to return to your instance.

How do I end long running Perspectium background jobs?

Sometimes, the Perspectium scheduled job running in the background (such as the bulk share job to bulk share records) will get stuck running. In these cases, you will want to kill the job so it doesn't affect your instance's performance.

In the example below, we will kill a bulk share scheduled job against the incident table. Bulk share scheduled jobs are named "Perspectium Replicator Bulk Share <table_name>", where <table_name> is the name of the table in the bulk share configuration. Note that the scheduled job's name field is limited to 40 characters, so for a bulk share on the incident table the name will be truncated to "Perspectium Replicator Bulk Share incide".
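The truncation above can be sketched in plain JavaScript, so you know exactly what name to search for (a sketch assuming the 40-character limit described here, not Glide code):

```javascript
// The scheduled job name is the fixed prefix plus the table name,
// truncated to the 40-character limit of the name field.
var NAME_PREFIX = "Perspectium Replicator Bulk Share ";
var MAX_NAME_LENGTH = 40;

function scheduledJobName(tableName) {
  return (NAME_PREFIX + tableName).substring(0, MAX_NAME_LENGTH);
}

scheduledJobName("incident"); // "Perspectium Replicator Bulk Share incide"
```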

Go to User Administration > All Active Transactions and see if the Perspectium job is there. If so, delete it.

Go to System Scheduler > Scheduled Jobs > Scheduled Jobs and see if the Perspectium job is there. If so, delete it. For a bulk share scheduled job, this will be a Run Once trigger type scheduled job.

Go to the sys_cluster_state table and look to see if a node has this job running. You can find that node by filtering the stats field for entries containing the job name.

For example, using the above “Perspectium Replicator Bulk Share incide” job, you can filter on “Bulk” in the stats field:
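The lookup amounts to a substring scan over each node's stats field. A plain-JavaScript sketch (field names system_id and stats assumed from sys_cluster_state; the sample rows are made up):

```javascript
// Given rows from sys_cluster_state, return the IDs of nodes whose
// stats field mentions the stuck job.
function findNodesRunningJob(nodes, jobNameFragment) {
  return nodes
    .filter(function (node) { return node.stats.indexOf(jobNameFragment) !== -1; })
    .map(function (node) { return node.system_id; });
}

var nodes = [
  { system_id: "node1:app1", stats: "Idle threads: 4" },
  { system_id: "node2:app2", stats: "Running: Perspectium Replicator Bulk Share incide" }
];
findNodesRunningJob(nodes, "Bulk"); // ["node2:app2"]
```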

If you find a node that has this job, click on the node to view its details in form view and then click on the “Run script” UI action at the bottom:

In the Run script dialog that appears, enter the following script to run:

// Shut down the node's worker thread pool (terminating the stuck job's
// thread), set the scheduler as the worker startup class, then
// reinitialize the pool so scheduled jobs resume normally.
Packages.com.glide.sys.WorkerThreadManager.get().shutdown();
Packages.com.glide.util.GlidePropertiesDB.set("glide.worker.startup", "com.glide.schedule.GlideScheduler");
Packages.com.glide.sys.WorkerThreadManager.get().init();

If the above steps don't work, please contact ServiceNow support to have them kill the job.

Perspectium Tables Being Cloned

When requesting a clone of a ServiceNow instance, you will need to select the "Exclude audit and log data" option so that Perspectium tables and their data are not cloned:

Though the Perspectium tables are listed in the Exclude Tables list, per the Exclude audit and log data option's description, that option must also be selected for the list to be honored.

Error Processing Shared Queue java.net.SocketException: Broken Pipe

If you are seeing this error in your Perspectium logs, the issue is caused by an improper endpoint URL on the shared queue used for your bulk or dynamic shares. To verify this is the case and resolve the issue, follow these steps:

The message should include the name of the queue that is generating the error. Ex: Error processing shared queue psp.out.yourQueue on http://yourEndpoint/: java.net.SocketException: Broken pipe (Write failed)

Access the configuration of this queue by searching for Shared Queues in the ServiceNow search bar and clicking the Shared Queues link that appears.

Now click on the shared queue that was named in the java.net.SocketException error.

Once there, verify that the Endpoint URL starts with http rather than https. This is the cause of the error.

Change the http to https and update your shared queue.

Once this is done, you should see your outbound messages being sent out. If you do not see their state changing from "ready" to "sent", you can manually start the Multioutput Processing job:

  1. Search for the All Scheduled Jobs link under the Perspectium app in the ServiceNow menu and click that link.

  2. Click on the Perspectium Multioutput Processing job.

  3. Click the Execute Now button.

You should see your messages begin to be sent out. If you are still seeing this error after these steps, please contact Perspectium Support for further assistance.

I am having issues with Subscribe performance

For issues with Subscribe performance in your ServiceNow instance, review the following:

  • Check the Slow Query Logs to see if any of the subscribed tables or Perspectium tables are listed and what queries are being run.

  • Check if auditing was enabled on the Perspectium inbound table (psp_in_message). This table does not come with auditing enabled by default.




Can't find what you're looking for?  

See the FAQ or browse the Perspectium Community Forum.