If you have issues installing or configuring the Agent, see the Installation/Configuration troubleshooting page. Below are FAQs and common issues you may come across while running the DataSync Agent and subscribing to records. Contact support@perspectium.com if your issue is not listed below or you have any other questions.


FAQs


Core Application

Does DataSync work with ServiceNow Database Encryption?

ServiceNow provides an option for Database Encryption, where data is encrypted at rest in the database. Since Database Encryption happens at the database layer and the Perspectium application runs at the application layer, by the time we call to get data from ServiceNow, the data is accessible to our application to be shared out. ServiceNow's documentation notes that with Database Encryption you can add another level of encryption by also encrypting at the application layer, which our application supports as well.


DataSync Agent


How do I check the Agent version in Windows?

To check what version of the Agent you are running, use the following command in a command window from the Agent directory:

bin\version.bat

You could also double-click the version.bat file in the Agent's bin directory.

What Java processes does the DataSync Agent run?

The Perspectium DataSync Agent leverages two Java virtual machine instances, or processes, to run.

These Java processes can be started as a service or interactively through the executables in the Agent's bin folder.

Should I configure my DataSync Agent to connect to the Perspectium Mesh via HTTPS or AMQPS?

When you received your Perspectium Mesh credentials, you may have been given two different addresses, such as:

https://your_instance.perspectium.net
amqps://your_instance-amqp.perspectium.net

The choice between these protocols will vary per customer (largely due to firewall rules). ServiceNow does not handle AMQP connections, so do not use an AMQP address within the ServiceNow instance URL for any of your <instance_connection> directives.
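
As a sketch, whichever address you choose becomes the value of the <message_connection> directive in your agent.xml (the credentials and URLs here are placeholders):

<!-- HTTPS (typically over port 443) -->
<message_connection user="XXX" password="XXX">https://your_instance.perspectium.net</message_connection>

<!-- AMQPS (typically over port 5671) -->
<message_connection user="XXX" password="XXX">amqps://your_instance-amqp.perspectium.net</message_connection>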

How does the Agent handle ServiceNow schema changes?

The DataSync Agent handles schema changes in your ServiceNow instance as follows:

  • Columns that are added to a ServiceNow table will be automatically added to the table in the database.

  • When a column's max size is increased, the Agent will automatically increase the column's size to the maximum size for that database. In the case of MySQL, the column will automatically transition to a CLOB data type.

  • If a column's data type is changed from one type to another, the data in that column will be skipped (the record itself will still insert/update all other columns).

How does the SQL agent commit to the database?

The DataSync SQL Agent leverages the default connection commit strategy of the JDBC driver, which for Oracle is auto-commit. The Agent does not explicitly decide when to perform a commit; the JDBC driver makes this decision.

The Agent retrieves messages from the message store in the order they were published, performs the required processing (such as decryption and validation), determines the type of SQL operation required (such as update or insert), and then issues the request to the database. The Agent then evaluates the response and does any further processing required. Once completed, the Agent fetches the next message from the queue.

(info) NOTE: You can configure either multiple tasks to run against a single queue, or multiple instances of a single task to run against a single queue. This is done primarily when throughput of the Agent is an issue. Both of these configurations introduce more than a single consumer of the queue, so the order in which the database transactions occur could differ from the order of the messages within the message store, due to scheduling of the task or thread.

What are the required external libraries for the HDFSFileSubscriber handler?

The HDFSFileSubscriber handler requires the use of the Hadoop external library. You will need the jar files from each of these Hadoop libraries:

  1. Hadoop Annotations

  2. Hadoop Auth

  3. Hadoop Client

  4. Hadoop Common

  5. Hadoop HDFS

  6. Hadoop Mapreduce

  7. Hadoop Yarn

The Maven repository for these files can be found here. Once you have these files, they need to be placed in the extlib folder in the File Replicator agent's directory.
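
As a sketch, the extlib folder might then look like the following (the 3.3.6 version numbers are illustrative, and the Mapreduce and Yarn libraries ship as several client artifacts; use the artifacts and versions that match your Hadoop cluster):

extlib/
    hadoop-annotations-3.3.6.jar
    hadoop-auth-3.3.6.jar
    hadoop-client-3.3.6.jar
    hadoop-common-3.3.6.jar
    hadoop-hdfs-3.3.6.jar
    hadoop-mapreduce-client-core-3.3.6.jar
    hadoop-yarn-client-3.3.6.jar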

How can I improve my DataSync Agent's overall performance?
  • If you find that your DataSync Agent is running slowly due to a large number of SQL statements being processed, you may want to configure the Agent to perform SQL statements in batches.

(info) NOTE: This feature should only be used when you send messages of the same table into one queue. If your queue has messages from different tables (including sys_audit, sys_journal_field, sys_attachment and sys_attachment_doc records), do not enable this feature as it will cause errors in saving records.  To properly use this feature, separate your dynamic and bulk shares to save each type of table record to a different queue.

  • To do this, open the agent.xml file that was created upon installation of your Agent in a text editing application. Within the <task> directive of the agent.xml file, nest the following directives (see the example after this list):
Directive           Description
<batch_update/>     Self-closing tag that configures your Agent to batch SQL UPDATE statements
<batch_insert/>     Self-closing tag that configures your Agent to batch SQL INSERT statements
<max_batch_size>    Number of SQL statements that will trigger a batch SQL statement execution. A suggested larger value is 200. (info) NOTE: By default, this directive's value is set to 10.

  • In your agent.xml, change the <polling_interval> to a lower number
  • In your agent.xml, increase the number of <tasks>
    • (info) NOTE: Increasing the number of tasks will require more memory and more connections to the database. We do not recommend setting tasks to more than 10.
  • In your agent.xml, set the use_basic_consume attribute of <message_connection> to false
  • In ServiceNow, set your bulk share configuration to Insert Only=true
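
As a sketch of the batching configuration described above (the task name, credentials, and URL are placeholders):

<config>
    <agent>
        <subscribe>
            <task>
                <task_name>batch_example</task_name>
                <message_connection user="XXX" password="XXX">your_url</message_connection>
                <batch_update/>
                <batch_insert/>
                <max_batch_size>200</max_batch_size>
                ...
            </task>
        </subscribe>
    </agent>
</config>
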
Handling of long Table and Column names when replicating to the Oracle database

Since Oracle only permits identifier names of 30 characters by default, a table or column name longer than 30 characters will be truncated upon replication. The first 15 characters of the name are kept, followed by a “_”, and then the last 14 characters of the name. The “_” denotes the truncated values between the first 15 and the last 14 characters.

For example, when replicating the table “Workflow Estimated Runtime Configuration” (41 characters total) to an Oracle database, the replicated name will be “u_workflow_esti__configuration”.

(info) NOTE: When replicating tables or columns with similar names (that is, where the first 15 and the last 14 characters of the names are the same), only one of the two will be replicated, since after truncation both names will be identical.

(info) NOTE: If you are using Oracle 12.2+ with the 12.2 JDBC driver, you can set the table and column names to support Oracle 12.2's larger 128-byte limit (prior to 12.2, the limit was 30 bytes, meaning table and column names could only be up to 30 standard characters). See here for further information.

How can I run the Agent without performing the minimum Java requirement check?

Starting with the Iodine 7.0.0 release, the Agent requires Java 11 as the minimum version. By adding the following directive, you can skip the Java version check:

<system_resource_check>false</system_resource_check>

Alternatively, if you want to run the Agent without performing a minimum Java requirement check for testing purposes, see Running DataSync Agent in Test Mode.


Security

Is data at rest in the Perspectium Cloud database encrypted?

Yes, the data at rest inside the Perspectium Cloud database can be secured by encryption for an additional cost. Perspectium uses Amazon Web Services, which allows the encryption of data using the industry-standard AES-256 encryption algorithm. Please refer to the following link for further details on Encrypting Amazon RDS Resources.

Is my connection to the database in the Perspectium Cloud secure?

The connection to our database is highly secure. We can also set it up as a secure SSL connection for an additional cost. Please refer to the following link for further details on Using SSL to Encrypt a Connection to a DB Instance.

Which encryption algorithm does Perspectium use for data encrypted at ServiceNow?

The cipher Perspectium uses is Triple DES by default and Advanced Encryption Standard 128 (AES-128) as an option.

Does Perspectium support 128 bit AES encryption?

Yes, we support 128 bit AES encryption.

How does data get encrypted at rest and in transit?

The data is encrypted within the ServiceNow instance before being transmitted to the Perspectium Cloud using HTTPS. The payload remains encrypted while at rest within the Perspectium Message Bus until it is consumed by either a ServiceNow instance or a DataSync Agent. The former decrypts the data just prior to inserting it into a ServiceNow table; in the latter case, the DataSync Agent decrypts the data just before it is sent to the database server.

Does the Perspectium QA team assess security aspects of the offering during code reviews?

Perspectium performs a weekly infrastructure review which includes security. We also perform a daily code review as part of our sprints which includes security as needed.

What are the security features that Perspectium offers?

All Perspectium data is transmitted using HTTPS and AMQP/AMQPS as secure protocols. An option of having the data encrypted while in Perspectium's Cloud-based database is available. The on-premise DataSync Agent supports a proxy using HTTP/HTTPS to talk to external servers such as those within the Perspectium Cloud environment. VPNs can be set up at an additional cost.

Can the Perspectium Agent connect to ServiceNow through a proxy server?

Yes, as of Agent v3.2.2, you can use a proxy server to connect the Perspectium Agent to ServiceNow. For further details, see Configuring the DataSync Agent to Leverage a Proxy.




Troubleshooting


General Issues

What if I'm seeing the following error: java.lang.OutOfMemoryError: Java heap space?

Open the wrapper.conf file located in the Agent's conf folder and change the following configuration:

#wrapper.java.maxmemory=64

Remove the “#” and set a numeric value higher than 64. This value is the size in MB of the Java heap space the Agent can use. Generally, you would base this value on the memory available on the server where the Agent is running. For example, if the server has 1GB of memory, you can set it to 512MB here:

wrapper.java.maxmemory=512

How do I troubleshoot if the Agent is experiencing slow performance to the database or the Integration Mesh?

Start by enabling the timing feature in the Agent to see the time the Agent takes to get messages from the queue in the Integration Mesh, as well as the time it takes to build statements and execute them against the database. This will help determine where the performance bottlenecks are. Contact support@perspectium.com for more information.

What if I keep seeing timeout errors in my Agent logs?

You can raise the default <connection_request_timeout> by setting it to 120000 (milliseconds). This should give your connection plenty more room to handle all the I/O of large transactions. You would place it within your agent.xml like so:

<config>
    <agent>
        <subscribe>
            <task>
                <task_name>timeout_example</task_name>
                <message_connection connection_request_timeout="120000" user="XXX" password="XXX">your_url</message_connection>
                ...
            </task>
        </subscribe>
    </agent>
</config>

This should be placed on the <message_connection> within the task level of the desired connection. This attribute will only be set for the specified <message_connection>, so if you have separate connections for monitoring or replicating data, they will use the default unless specified.

Another option, if you have firewall access to both your HTTPS and AMQPS connections (https://your_instance.perspectium.net & amqps://your_instance-amqp.perspectium.net), is to try either:

  • Setting your <max_reads_per_connect> to 1 and use the HTTPS connection

  • Setting your <max_reads_per_connect> to 4000 and use the AMQPS connection
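
As a sketch of the first option (the URL and credentials are placeholders, and placement alongside <message_connection> within the <task> directive is assumed):

<max_reads_per_connect>1</max_reads_per_connect>
<message_connection user="XXX" password="XXX">https://your_instance.perspectium.net</message_connection>

For the second option, set <max_reads_per_connect> to 4000 and point <message_connection> at the amqps:// address instead.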

What if sharing records from ServiceNow to the DataSync Agent using Data Guarantee is unsuccessful?

If your records cannot be shared out from ServiceNow to the DataSync Agent successfully, try the following:

  1. In ServiceNow, navigate to Perspectium > Tools > Receipts.

  2. Check the checkbox next to the Receipt record(s) with a Pending or Error Delivery Status.

  3. In the bottom left-hand corner of the screen, click the Resend Message(s) button. This creates a new receipt record in Pending status and sets the original receipt record to Success status. The new resend receipt will remain in Pending status until it receives an acknowledgement from the Agent.

  4. Refresh the page and check the Delivery Status for your Receipt record(s) again.

What if I keep seeing database connection timeout errors in my Agent logs?

You can add a loginTimeout database parameter to the agent.xml configuration file to control the DB connection timeout.

In your agent.xml, under each <task> entry, add <database_parms>loginTimeout=NN</database_parms>, where NN is in seconds.

For example:

 <database_parms>loginTimeout=30</database_parms>

If you already have <database_parms> configured, then append the loginTimeout parameter using:

 <database_parms>integratedSecurity=true;loginTimeout=30</database_parms>
DataSync Agent cannot start as a Windows Service

If you are seeing the following errors: 

“Windows could not start the Perspectium DataSync Agent service on Local Computer. 
Error 1053: The service did not respond to the start or control request in a timely fashion.”
 
“Windows could not start the Perspectium DataSync Agent service on Local Computer. 
Error 2: The system cannot find the file specified.”

This is usually caused by an incorrect setup of Java paths and environment variables. Here are the best-practice instructions to configure and troubleshoot:

  • Check that the Java version installed is supported by the respective Agent version (see DataSync Agent requirements).

  • Verify java runs successfully on the local machine by running on the command line (CMD): 

    java -version

    If you get a positive result with the Java version installed, then Java is installed successfully.

  • Verify that the installService.bat script was executed As Administrator to install the service. If not, uninstall the service and install it again As Administrator.

  • Try reinstalling the Agent from the command line As Administrator, rather than from the Windows UI, if this was not done initially.
    1. Open CMD “As Administrator” 

    2. Go to the folder where the agent install file is located

      cd /folderName/
    3. Enter the following 

      java -jar <installation_file.jar>
    4. Hit Enter

  • Check the wrapper.conf file. If you can run “java -version”, then in wrapper.conf you can simply use wrapper.java.command=java, and the JAVA_HOME variable does not need to be set, as in the example below:

    # Java Application
    #set.JAVA_HOME= 
    wrapper.java.command=java
  • If you still want to use the JAVA_HOME environment variable, you need to make sure that this variable matches the actual Java path up to the /bin folder, both in wrapper.conf and in the Windows Environment Variables (see the example after this list).

  • Check the System Environment Variables Path variable. When selecting “Edit” for the Path variable, you should see a path to the Java /bin folder in the list. If not, add one.
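
As a sketch of the JAVA_HOME approach in wrapper.conf (the install path is hypothetical; adjust it to the actual Java location on your machine):

    # Java Application
    set.JAVA_HOME=C:\Program Files\Java\jdk-11
    wrapper.java.command=%JAVA_HOME%\bin\java
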
Troubleshooting synchronized delete in table compare

If you notice that delete sync is not working properly and you are using MBS3, it may be due to ServiceNow's 32 MB limit on String objects. See "Server side scripting is failing with Error 'String object would exceed maximum permitted size of 33554432'" on the Now Support Portal.

When sending messages to MBS 3.0, you can put any number of messages in each batch and it will create a file for that batch.

When retrieving the messages from MBS 3.0, it will retrieve one file at a time.

You can configure the Agent to send a set amount of messages in each batch so that ServiceNow can process these messages.

To do so, add <message_batch_size>number of messages in each batch</message_batch_size>. The number must be between 1 and 5000.


To see if ServiceNow can handle the number of bytes, use the following formula and see if it is less than 33554432 (ServiceNow's limit):

<number set in <message_batch_size>> * <number of characters in one idList message> * 2

(info) NOTE: The factor of 2 reflects that in JavaScript, which ServiceNow uses, each character in a String object uses 2 bytes.

An idList message can have around 44,422 characters, which may be more or less based on the field values (i.e., attributes, key, name) and JSON formatting. You can take an idList message and use a character counter to count the number of characters.
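
For example, assuming idList messages of roughly 44,422 characters, a <message_batch_size> of 500 gives 500 * 44,422 * 2 = 44,422,000 bytes, which exceeds ServiceNow's 33,554,432-byte limit. In that case the batch size would need to be about 377 or lower, since 33,554,432 / (44,422 * 2) ≈ 377.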


MySQL Specific Issues

Why am I unable to consume messages?

If you are unable to consume messages in your Agent, one possible reason could be the MySQL redo log file size. To increase the size, locate the innodb_log_file_size directive in your MySQL server configuration and increase the limit.
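
As a sketch, in the MySQL server configuration file (e.g. my.cnf; the 512M value is illustrative, and MySQL must be restarted for the change to take effect):

[mysqld]
innodb_log_file_size = 512M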


Microsoft SQL Server Specific Issues

Why is my data not being saved into my specified schema?

The DataSync Agent saves data into the default schema of the database user specified in the agent.xml configuration file. To change the default schema of a user in your SQL Server database, see the DEFAULT_SCHEMA argument.
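
For example, a sketch of changing a user's default schema in SQL Server (the user and schema names are placeholders):

ALTER USER [datasync_user] WITH DEFAULT_SCHEMA = [your_schema];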

(info) NOTE: A database user that has the sysadmin fixed server role will always have a default schema of dbo, so specifying a user with this role will always save data into the dbo schema. Also, any tables previously created in the dbo schema will need to be manually moved to the new schema (i.e., you will need to run the queries in SQL Server directly); otherwise these tables will remain in the dbo schema and continue to be updated there. See here for information on how to create a schema.


What if I'm continuously seeing "NULL" is closed and Database connection is being re-established errors in my log?

If your Agent is continuously showing logs similar to this:

2021-01-01 11:31:35.766 ERROR - psp_agent_subscribe - StatementBuilder - [incident] index: 1 error: org.apache.commons.dbcp2.DelegatingPreparedStatement with address: "NULL" is closed.
2021-01-01 11:31:35.766 WARN - psp_agent_subscribe - TaskDatabase - Database connection is being re-established...

This may be due to database configurations in your SQL Server causing the connection to time out before the Agent is able to finish processing messages, as shown by the "NULL" is closed message and the Agent having to re-establish the connection.

To help troubleshoot if this is the case, do the following:

  1. Execute the query SELECT @@LOCK_TIMEOUT AS [Lock Timeout]; on your database and see what the result is. If the result is -1, there is no timeout set. Otherwise, the number returned is the number of milliseconds that a statement waits on a blocked resource. Adjust the LOCK_TIMEOUT value to be a higher number so the connection doesn't time out as quickly (see the example after this list).
  2. Check the agent.xml configuration file in your Agent's conf folder for any parameters like queryTimeout or anything timeout related in the <database_parms> tag. If there are, remove these timeout parameters or adjust them to be higher. 
  3. Check if you have a databases.xml file in your Agent's conf folder. If so, check if there are any parameters like queryTimeout or anything timeout related in the <database_uri> tag within the section of <database_type>sqlserver</database_type>. If there are, remove these timeout parameters or adjust them to be higher. 
  4. If neither of the above steps fixes the issue, enable more debug logging of the MS SQL JDBC driver and provide these logs to Perspectium Support for further analysis and troubleshooting. 
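
As a sketch of step 1 (note that SET LOCK_TIMEOUT is session-scoped, so it applies only to the connection in which it is run):

SELECT @@LOCK_TIMEOUT AS [Lock Timeout];  -- -1 means no timeout is set
SET LOCK_TIMEOUT 120000;                  -- wait up to 120 seconds on blocked resources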


Oracle Specific Issues

Why am I getting the following error "ORA-04031: unable to allocate 592 bytes (<-byte size may vary) of shared memory ("shared pool","unknown object","sga heap(1,1)","KGLHD")" on my Agent when I'm replicating to an Oracle Database?

The reason you're receiving this error is Oracle's SHARED_POOL_SIZE parameter. Note that when SGA_TARGET is set and the parameter is not specified, the default is 0 (internally determined by the Oracle Database); but if the parameter is specified, your specified value indicates a minimum value for the memory pool.

In the case where a value was set for SGA_TARGET, that would be the value you would need to update rather than the SHARED_POOL_SIZE since by setting SGA_TARGET, you are using automatic SGA management. Hence, there is no need to manually set the value of SHARED_POOL_SIZE because Oracle will internally transfer memory between the SGA components.

In the case where you are more concerned with setting a larger value for SGA_TARGET, you can also set a larger value for SHARED_POOL_SIZE, but that value must be smaller than SGA_TARGET to avoid encountering the following issue:

SGA_TARGET = 1GB
SHARED_POOL_SIZE = no value

You will encounter an issue when the value of SHARED_POOL_SIZE exceeds the value of SGA_TARGET.

It is recommended to set the SGA_TARGET value to a minimum of 5GB. That way, if the SHARED_POOL_SIZE value is 1GB, SGA_TARGET still has at least 4GB available for the other memory components that are concurrently stored within it.
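
As a sketch of the described changes (values follow the recommendation above; SCOPE = SPFILE assumes an SPFILE-managed instance, and the restart mentioned in the note below is still required):

ALTER SYSTEM SET SGA_TARGET = 5G SCOPE = SPFILE;
ALTER SYSTEM SET SHARED_POOL_SIZE = 1G SCOPE = SPFILE;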

(info) NOTE: Be sure to restart the Oracle Database after making the described value changes. For additional information, refer to SHARED_POOL_SIZE or SGA_TARGET.

Why is the NVARCHAR2 column not converting to an NCLOB?

Oracle does not allow data type changes from NVARCHAR2 to NCLOB. To prevent this from happening for new tables, add the <database_column_max_size> directive with a value to your Agent configuration. This allows new columns for new tables to be created as NCLOB if their size exceeds the value in <database_column_max_size>. The default value is 251.
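
As a sketch (the 4000 value is illustrative, and placement within the <task> directive of agent.xml is assumed):

<database_column_max_size>4000</database_column_max_size>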

Another alternative is to go to your ServiceNow instance and to the related table's fields. For any field that may require a large amount of text, set the max length higher than 251.


Foreign Character Issues

What if I'm having issues with multibyte characters and foreign language encryption/decryption?

Do one of the following:

  • If you are expecting multibyte characters from ServiceNow, it is recommended to turn on multibyte encryption within the Perspectium Properties page.

  • If you are running a MySQL Agent, it is recommended to place characterEncoding=UTF-8 within the <database_parms> tag.

  • If you are running a SQL Server Agent on Windows, you must be using at least Agent V3.11.0 and include SendStringParametersAsUnicode=true within the <database_parms> tag.

The format is:

<!-- MySQL multibyte decryption -->
<database_parms>characterEncoding=UTF-8</database_parms>
<!-- SQL Server multibyte decryption (Windows) -->
<!-- Note: Requires Agent V3.11.0 or greater -->
<database_parms>SendStringParametersAsUnicode=true</database_parms>
What if I cannot see foreign characters in my database?

Ensure that your database is using the correct character set. Oracle requires the AL32UTF8 character set. MySQL requires the utf8mb4 or utf8 character set with utf8_general_ci collation.

In MySQL, you can update this by running the following command on the database:

ALTER DATABASE [DBNAME] CHARACTER SET utf8 COLLATE utf8_general_ci;

Also verify that you are using the <byte_padding> option in your agent.xml. To set up byte padding, please see DataSync Agent configurations.


Can't find what you're looking for?  

Contact Perspectium Support or browse the Perspectium Community Forum.