
Overview

This guide provides a methodical approach to identifying causes of out-of-sync data. It requires a bird's-eye understanding of the DataSync architecture: data flows out of ServiceNow into the Perspectium Mesh, where it sits until the Perspectium DataSync Agent consumes the messages and performs the insert/update/delete against the Target Database. Here is a diagram that shows the DataSync workflow:

This document details how to troubleshoot each of the following components:

  1. ServiceNow (Source)
  2. Perspectium Mesh
  3. DataSync Agent
  4. Database (Target)


Before diving deep from Source to Target, it is best to perform Step 1 to identify problematic tables where discrepancies have been found. Prioritizing the tables then allows us to focus on specific shares and possibly the time frame of an issue. If possible, it is also helpful to find sys_id values that exist in ServiceNow but not in the Database, or in the Database but not in ServiceNow.


Identify tables where there is a discrepancy

Create a spreadsheet that includes the following columns and their values (a count-gathering sketch follows the list):

  • Table Name
  • Count in ServiceNow
  • Count in Database
  • Count difference
  • Date/time of when counts were recorded
  • Flag indicating whether there are more records in the Database or in ServiceNow
  • Examples (if possible): sys_id values with a discrepancy 
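To populate the spreadsheet, counts can be pulled programmatically. Below is a minimal sketch (not a Perspectium tool) that reads record counts from the ServiceNow Aggregate API and from a MySQL target, then writes the comparison to a CSV. The instance URL, credentials, table names, and database driver are all placeholder assumptions to adapt for your environment.

# Minimal sketch for building the discrepancy spreadsheet.
# All connection details below are hypothetical placeholders.
import csv
import datetime
import requests
import pymysql  # assumes a MySQL target; swap in your database driver

INSTANCE = "https://your-instance.service-now.com"  # hypothetical
SN_AUTH = ("sn_user", "sn_password")                # hypothetical
TABLES = ["incident", "change_request"]             # tables from Step 1

db = pymysql.connect(host="localhost", user="root",
                     password="db_password", database="psp_repl")

with open("discrepancies.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Table", "ServiceNow count", "Database count",
                     "Difference", "Recorded at", "More records in"])
    for table in TABLES:
        # ServiceNow Aggregate API returns a record count for the table
        resp = requests.get(f"{INSTANCE}/api/now/stats/{table}",
                            params={"sysparm_count": "true"},
                            auth=SN_AUTH,
                            headers={"Accept": "application/json"})
        sn_count = int(resp.json()["result"]["stats"]["count"])
        with db.cursor() as cur:
            cur.execute(f"SELECT COUNT(*) FROM {table}")
            db_count = cur.fetchone()[0]
        writer.writerow([table, sn_count, db_count, sn_count - db_count,
                         datetime.datetime.now().isoformat(),
                         "ServiceNow" if sn_count > db_count else "Database"])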

Progress through the steps below as we investigate from source to target to identify the root cause of each discrepancy or isolate where the problem may be.

ServiceNow

The Perspectium DataSync Application in the ServiceNow platform simply creates PSP Outbound Message (psp_out_message) records and publishes them to the Perspectium Mesh (MBS). 

The majority of discrepancy issues from a ServiceNow perspective are due to:

  • Dynamic/Bulk Share not creating a psp_out_message record
  • The PSP Outbound Message is not properly created (example: empty value; see Known Issues)
  • Dynamic Share does not capture the final version of the source record
  • ‘Perspectium MultiOutput Processing’ Job(s) are inactive or blocked from running by the platform
  • ‘Perspectium MultiOutput Processing’ Job(s) are unable to communicate with the Perspectium Mesh




Test Dynamic/Bulk Share for table(s) identified in the previous step (ServiceNow)

We want to test the Dynamic/Bulk Share for the problematic table to see if it properly creates the PSP Outbound Message and publishes it to your Perspectium Mesh. If this is successful, we can quickly eliminate the Share configuration in ServiceNow and the connection to the Mesh as potential issues.

Dynamic Share

  • Use Test Record feature or create/update a test record 

Bulk Share

  • Clone the Bulk Share
  • Rename the Bulk Share
  • Set 'Limit number of records shared' to a small value like 5, or set a filter to share a specific set of records
  • Confirm the expected number of psp_out_message records are created with topic=replicator and eventually reach the Sent state (a verification sketch follows)
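Below is a hedged sketch that uses the standard ServiceNow Table API to confirm the test Share created the expected PSP Outbound Messages. The instance URL and credentials are placeholders, and the assumption that the shared table's name is stored in the psp_out_message 'name' field (with delivery status in 'state') should be verified against your instance.

# Sketch: list psp_out_message records produced by a test Share.
# The 'name' and 'state' field names are assumptions; adjust as needed.
import requests

INSTANCE = "https://your-instance.service-now.com"  # hypothetical
AUTH = ("sn_user", "sn_password")                   # hypothetical
TABLE_SHARED = "incident"                           # table the test Share targets

resp = requests.get(
    f"{INSTANCE}/api/now/table/psp_out_message",
    params={
        "sysparm_query": f"topic=replicator^name={TABLE_SHARED}",
        "sysparm_fields": "sys_id,state,sys_created_on",
        "sysparm_limit": "100",
    },
    auth=AUTH,
    headers={"Accept": "application/json"},
)
records = resp.json()["result"]
print(f"{len(records)} psp_out_message records for {TABLE_SHARED}")
for rec in records:
    print(rec["sys_created_on"], rec["state"])  # expect each to reach Sent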


(info) NOTE: The Attributes field of psp_out_message records contains useful information to help determine whether the expected PSP Outbound Messages were created:

Field          Description
set_id         sys_id of the Dynamic/Bulk Share that created the PSP Outbound Message
RecordId       sys_id of the Source record being replicated
cipher         Encryption method used by the Dynamic/Bulk Share
sharedQueue    sys_id of the Shared Queue record

Review the Dynamic Share/Bulk Share/Scheduled Bulk Share configuration for tables found in Step 1 (ServiceNow)

Review the configuration to ensure it is set up to capture everything expected. Here are a couple of things to keep in mind when reviewing the share options:

Dynamic Share


Parent/Child Hierarchy > Child Table Only (Default)

This DEFAULT setting is in effect when the 'Share base table only' and 'Include all child tables' options are NOT selected. It shares all Child Tables one level below the specified Table. This can be misleading, as most people expect a Dynamic Share to be auto-configured to share data from the configured Table itself.

Trigger Conditions Tab > Interactive Only

This option replicates data only when a User performs an insert/update/delete interactively. It will NOT share data if a Background Job or Script performs an operation against the table. Customers may not always be aware of Background Jobs or scripts acting on the Tables they are attempting to sync to a Database.

Filter and Enrichment Tab

Review the conditions and all options in this tab to ensure nothing prevents replication from occurring for an expected record.

Business Rule order and Business Rule when

There is a possibility that the ‘Perspectium Replicate’ Business Rule does not capture the final version of a record. Ensure that the Dynamic Share is configured to share the record after all Business Rules and Workflows have completed.

Duplicate Perspectium Replicate Business Rules

Sometimes a single Dynamic Share can have two or more business rules associated with it, which leads to sharing the source record multiple times. The Dynamic Share Business Rules Dashboard allows you to quickly identify any duplicates, and you can Reset the Dynamic Share Business Rules to re-create all business rules for your active Dynamic Shares so that only a single business rule is associated with each active Dynamic Share.
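As a rough illustration, duplicates can also be spotted outside the Dashboard by querying sys_script via the Table API and counting rules per table. The 'Perspectium Replicate' name prefix and the use of the 'collection' field are assumptions to verify against your instance; the Dashboard remains the supported approach.

# Sketch: count active 'Perspectium Replicate' business rules per table.
# Instance URL, credentials, and the rule-name prefix are assumptions.
import collections
import requests

INSTANCE = "https://your-instance.service-now.com"  # hypothetical
AUTH = ("sn_user", "sn_password")                   # hypothetical

resp = requests.get(
    f"{INSTANCE}/api/now/table/sys_script",
    params={
        "sysparm_query": "nameSTARTSWITHPerspectium Replicate^active=true",
        "sysparm_fields": "sys_id,name,collection",
        "sysparm_limit": "1000",
    },
    auth=AUTH,
    headers={"Accept": "application/json"},
)
rules_per_table = collections.Counter(
    rec["collection"] for rec in resp.json()["result"]
)
for table, count in rules_per_table.items():
    if count > 1:
        print(f"{table}: {count} Perspectium Replicate business rules (duplicates)")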


Bulk Share


Conditions

Review the conditions and all options in this Tab to ensure nothing prevents replication from occurring for an expected record.

Share updates since then

This will only share records that have been inserted or updated since the date/time shown in the 'Last share time' Field. It simply modifies the query against the Base Table to capture records updated since the last execution time. This option is typically used with Scheduled Bulk Shares.

(info) NOTE: There is a Known Issue with this Bulk Share option that is fixed in Fluorine Plus Patch 1.1 and above.
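For illustration only, the effective behavior can be thought of as appending a sys_updated_on filter built from 'Last share time' to the Bulk Share's own condition. The encoded-query form in this sketch is an assumption about the effective filter, not the actual implementation:

# Sketch: approximate effective query for 'Share updates since then'.
last_share_time = "2020-01-15 08:00:00"    # value of 'Last share time'
base_query = "active=true"                 # the Bulk Share's own condition
effective_query = f"{base_query}^sys_updated_on>={last_share_time}"
print(effective_query)  # active=true^sys_updated_on>=2020-01-15 08:00:00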

Scheduled Bulk Shares


Check that the Scheduled Bulk Share(s) are configured to execute as expected and that the associated Bulk Shares are running at those times.

Related Bulk Shares with filters using date/time stamps such as sys_updated_on can lead to data gaps if not properly configured based on the Repeat Interval of the Scheduled Bulk Share. Ensure that the Bulk Share condition is configured to capture records updated since the start time of the Scheduled Bulk Share (see the sketch below).
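Here is a small sketch of how such a gap arises. It assumes a 1-hour Repeat Interval and a related Bulk Share whose filter is anchored to the actual execution time ('updated in the last hour') rather than the scheduled start time; the timestamps are invented for illustration:

# Sketch: a delayed run with a relative date filter leaves a gap.
from datetime import datetime, timedelta

prev_run = datetime(2020, 1, 15, 8, 0)   # previous run, on time
curr_run = datetime(2020, 1, 15, 9, 5)   # current run, 5 minutes late
interval = timedelta(hours=1)            # Repeat Interval

prev_window_end = prev_run               # previous run shared up to 08:00
curr_window_start = curr_run - interval  # 'last hour' filter starts at 08:05
gap = curr_window_start - prev_window_end
print(f"Unshared window: {prev_window_end} to {curr_window_start} ({gap})")
# Anchoring the filter to the scheduled start (09:00) closes the gap.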


(info) NOTE: Please review the KNOWN ISSUES section of this document, as it contains additional information that may prevent Perspectium Shares from functioning properly.

Perspectium logs and debug logging (ServiceNow)

To access Perspectium logs, go to Perspectium > Control and Configuration > Logs.

If there is no explanation as to why a PSP Outbound Message is not being generated for a Share, review the Perspectium Logs. They sometimes capture errors during Share processing, such as problems finding or encrypting the source record. They should also show any communication problems from ServiceNow to the Mesh.

If you can reproduce an issue on demand where a Dynamic or Bulk Share is not sharing expected records, enable debug logging to gather additional information.

Depending on the scenario, it can be enabled at the global level or on a specific share:


Global level

Go to Perspectium > Control and Configuration > Properties, then check the Generate debugging logs option.

(info) NOTE: Do not enable in high-volume Instances such as Production unless advised by Perspectium Support.

Dynamic Share 

Go to Perspectium > Replicator > Dynamic Shares, then check the Advanced checkbox. An Advanced tab will appear. In the Advanced tab, check the Enable debug logging option. 


Bulk Share 

Go to Perspectium > Replicator > Bulk Shares, then check the Advanced checkbox. An Advanced tab will appear. In the Advanced tab, check the Enable debug logging option. 


Global-level debugging should not be used in Production environments unless coordinated with Perspectium Support. Ideally, it should be used in sub-prod environments where you can reproduce a broad issue but cannot yet isolate specific Dynamic/Bulk Shares. However, you can temporarily use per-Share debug logging in a Production Instance if you can single out particular Shares with an issue.

DataSync Agent 

The Perspectium DataSync Agent typically sits within a customer’s firewall/network to ensure control and security of your data. Initial struggles with the installation of the DataSync Agent are often due to communication issues between the Agent and the Perspectium Mesh, the source ServiceNow Instance, or the Database. Once the Agent establishes a connection with each of these, it goes through the following workflow to sync data:

  1. Consume messages from the Target Queue in the Perspectium Mesh
  2. Decrypt and deserialize each message
  3. If configured, fetch the table schema from ServiceNow and alter tables in the Database
  4. Query the Database to determine whether to perform an insert or update/delete (for .bulk messages)
  5. Build the SQL statement
  6. Execute the SQL statement

Typically, the longest delays occur during interactions with the Database (steps 4 and 6).
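For illustration, here is a simplified, self-contained sketch of the insert-or-update decision in steps 4 through 6. It is not the Agent's actual code: real behavior (batching, .bulk handling, deletes, schema changes) is more involved, and the table layout and sqlite3 backend are stand-ins chosen so the example runs anywhere:

# Sketch of steps 4-6: check whether the incoming sys_id exists in the
# target table, then build and execute an INSERT or UPDATE accordingly.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE incident (sys_id TEXT PRIMARY KEY, short_description TEXT)")

def apply_message(msg):
    # Step 4: query the target database for the incoming record's sys_id
    row = db.execute("SELECT 1 FROM incident WHERE sys_id = ?",
                     (msg["sys_id"],)).fetchone()
    # Steps 5-6: build and execute the appropriate SQL statement
    if row is None:
        db.execute("INSERT INTO incident (sys_id, short_description) VALUES (?, ?)",
                   (msg["sys_id"], msg["short_description"]))
    else:
        db.execute("UPDATE incident SET short_description = ? WHERE sys_id = ?",
                   (msg["short_description"], msg["sys_id"]))

apply_message({"sys_id": "abc123", "short_description": "first version"})
apply_message({"sys_id": "abc123", "short_description": "updated version"})
print(db.execute("SELECT * FROM incident").fetchall())  # one row, updated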




Test Connectivity (DataSync Agent) 

If you are seeing a backlog of messages in the Queue, confirm whether the Agent can communicate with the Perspectium Mesh to retrieve the data. The Validate Configuration tool can be used to quickly test the connection between the Agent and the following components, as defined in the agent.xml file:

  • Perspectium Mesh 
  • Your ServiceNow Instance
  • Your Target Database

The agent will not be able to start up successfully if the connection to any of these endpoints is failing. The tool can be executed with the following commands from the Agent’s base install directory:

  • Windows: bin\validateConfiguration.bat (NOTE: Double-clicking will work as well)
  • Linux: bash bin/validateConfiguration

The resulting output is written to the ConfigurationValidationReport.log file found in the bin directory. If the report shows a successful connection to all components, then there are no network-related issues. Example output from executing validateConfiguration:

Agent Information:
Agent version: Fluorine_4.6.6
Agent name: 'psp_replicator_fluorine_466'

Configured Tasks:
        Task: psp_replicator_fluorine_466_subscribe Type: subscribe
               psp_replicator_fluorine_466_subscribe instances: 4

Attempting to connect to the Message Broker Service at: https://training.perspectium.net as user: training/training - SUCCESS
Attempting to connect to the database: psp_repl_466 at host: localhost port: 3306 as user: root - SUCCESS
Attempting to fetch schema from: https://dev63629.service-now.com - SUCCESS
Validation Completed, results: SUCCESS

Report has been saved in file: ConfigurationValidationReport.log

Typically, these connectivity issues are due to network/proxy/firewall configuration. The following ports need to be open, depending on the protocol used in the value of the <message_connection> directive in the agent.xml file:

Protocol    Port        Comment
AMQPS       TCP/5671    Outbound to your Perspectium Mesh (amqps://<mesh_name>.perspectium.net)
AMQP        TCP/5672    Outbound to your Perspectium Mesh (amqp://<mesh_name>.perspectium.net)
HTTPS       TCP/443
