


This page describes troubleshooting configurations for your DataSync Agent. Contact Perspectium Support if you have any questions.


(warning) First, you will need to install a DataSync Agent configured for your use case.

(warning) You will also need to create a ServiceNow dynamic share with an update trigger or create a ServiceNow bulk share.

(warning) Finally, make sure to stop your DataSync Agent before making any Agent configuration changes.








Logging 

To set the logging level at the command line, enter the appropriate command for your Agent version; the level can be set for a certain duration or indefinitely. You can also set the logging level manually.

(info) NOTE: Setting the logging level manually keeps it in effect indefinitely.

It can be helpful to have logging set to FINEST/DEBUG during installation, but it is recommended to have logging set to INFO in production.

Dynamically Setting the Logging Level

By default, the logging level of the DataSync Agent is set to INFO. This setting incurs minimal logging and is a good choice for everyday use.

The logging level can be dynamically changed using the setLogging command. This command is useful during troubleshooting to obtain greater detail of the processing taking place. You may be asked to increase the logging level to finest by using the setLogging command.

The setLogging command has the following syntax:

setLogging LOGGING_LEVEL DURATION

The LOGGING_LEVEL argument is required and must be one of finest, info, or debug.

The DURATION argument specifies how many seconds the change in logging level remains in effect. The default is one hour; setting it to -1 makes the change remain indefinitely. Typically the logging level is changed to finest only during troubleshooting, so the default value ensures that info level logging resumes after an hour.

For example, you can run the following commands from the root Perspectium directory:

# Setting to finest logging for an hour
bin/setLogging finest
 
# Setting to finest logging for 3 hours (10800 seconds) 
bin/setLogging finest 10800
 
# Setting to info logging indefinitely
bin/setLogging info -1 

(info) NOTE: This will stay in effect until the DataSync Agent is restarted or the duration expires.

Manually Setting the Logging Level

You can manually set the logging level by editing a single line of the configuration. Log into the host that the Agent is installed on, navigate to the Agent's home directory, and edit the log4j2.xml file within the conf folder.

Change the property <Root level="debug"> to the desired logging level: info, debug, or finest.
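
For example, to capture the most detail, the Root element would look like the following (a minimal sketch based on the log4j2.xml example later on this page; the AppenderRef name may differ in your file):

<Root level="finest">
    <AppenderRef ref="ASYNC"/>
</Root>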

By default the Agent is configured with info, but as with the setLogging command, this can be changed to different levels to aid in troubleshooting. The levels are the same, with finest providing the most detail and info the least. Save the file and restart the Agent to implement the change. This setting stays in effect through Agent restarts until it is modified back. You can still modify the logging level with the bin/setLogging command above.

Additional Configurations

There are additional changes you can make to alter the logs specific to different databases and libraries used by the Agent to aid in troubleshooting. The file logging.properties within the conf folder contains information and examples for such changes.


↑ Go to top of page




Database Debugging


To enable debugging logs for I/O through the MSSQL JDBC driver, configure the logging.properties file in the Agent's conf folder as follows:

1) If a logging.properties file does not exist in the conf folder, create one with the following content:

# Specify the handler, the handlers will be installed during VM startup.  
handlers=java.util.logging.ConsoleHandler, java.util.logging.FileHandler
  
# Default global logging level.  
.level= OFF  
  
java.util.logging.FileHandler.pattern = ../logs/perspectium.log 
java.util.logging.FileHandler.limit = 5000000  
java.util.logging.FileHandler.count = 20  
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter  
java.util.logging.FileHandler.level = FINEST  
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
java.util.logging.ConsoleHandler.level = FINEST

2) Add the following to the bottom of the logging.properties file:

com.microsoft.sqlserver.jdbc.level = FINEST

For example:

# Specify the handler, the handlers will be installed during VM startup.  
handlers=java.util.logging.ConsoleHandler, java.util.logging.FileHandler
  
# Default global logging level.  
.level= OFF  
  
java.util.logging.FileHandler.pattern = ../logs/perspectium.log 
java.util.logging.FileHandler.limit = 5000000  
java.util.logging.FileHandler.count = 20  
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter  
java.util.logging.FileHandler.level = FINEST  
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
java.util.logging.ConsoleHandler.level = FINEST
  
# Facility specific properties.  
com.microsoft.sqlserver.jdbc.level = FINEST

3) Open the wrapper.conf file in the conf folder and verify you have a wrapper.java.additional.# entry that points to the logging.properties file.

For example, an entry such as:

wrapper.java.additional.1=-Djava.util.logging.config.file=../conf/logging.properties
wrapper.java.additional.2=-Dlog4j.configurationFile=../conf/log4j2.xml
#wrapper.java.additional.3=-Dfile.encoding=UTF-8

If you do not have an entry similar to wrapper.java.additional.1=-Djava.util.logging.config.file=../conf/logging.properties, add one at the end of the wrapper.java.additional configuration section, incrementing to the next unused number and ensuring there is no # in front of the line (a # comments out the line so that it's not active).

In the above example, if there was no entry for wrapper.java.additional.1=-Djava.util.logging.config.file=../conf/logging.properties, we would add it as:

wrapper.java.additional.3=-Djava.util.logging.config.file=../conf/logging.properties

It is added as the third entry since entries 1 and 2 already exist and the existing entry 3 is commented out (not in use).

4) (Windows Only) If the Agent is installed on Windows, open up the agent.bat in the Agent's bin folder and add the -Djava.util.logging.config.file="%~dp0..\conf\logging.properties" argument after the -Dlog4j.configurationFile="%~dp0..\conf\log4j2.xml" argument.

That is, change this block from:

IF [%1]==[maxHeap] (
    SET JAVA_ARGS=-Xmx%2 -Dlog4j.configurationFile="%~dp0..\conf\log4j2.xml" -classpath "%~dp0..\jars\*";"%~dp0..\extlib\*" com.perspectium.replicator.Replicator
) else (
    SET JAVA_ARGS=-Dlog4j.configurationFile="%~dp0..\conf\log4j2.xml" -classpath "%~dp0..\jars\*";"%~dp0..\extlib\*" com.perspectium.replicator.Replicator
)

To This:

IF [%1]==[maxHeap] (
    SET JAVA_ARGS=-Xmx%2 -Dlog4j.configurationFile="%~dp0..\conf\log4j2.xml" -Djava.util.logging.config.file="%~dp0..\conf\logging.properties" -classpath "%~dp0..\jars\*";"%~dp0..\extlib\*" com.perspectium.replicator.Replicator
) else (
    SET JAVA_ARGS=-Dlog4j.configurationFile="%~dp0..\conf\log4j2.xml" -Djava.util.logging.config.file="%~dp0..\conf\logging.properties" -classpath "%~dp0..\jars\*";"%~dp0..\extlib\*" com.perspectium.replicator.Replicator
)

5) Restart the Agent and you should start seeing MSSQL JDBC logs in this format:

FINEST: TDSCommand@281ae0e3 (SQLServerPreparedStatement:111 executeXXX): request complete



To enable additional logging for Snowflake's JDBC driver, you can configure your agent.xml as follows: 

<database_parms Tracing="CONFIG"/>


<database_parms Tracing="FINEST"/>


<database_parms Tracing="ALL"/>

ALL provides the most detailed level of additional logging. See tracing=<string> for more information on the different logging levels available.

A tmp log file will be created. See How to generate log file on Snowflake connectors


To enable tracing for the Snowflake Meshlet, you can configure the connection URL in your application.yml with one of the following: 

jdbc:snowflake://xxxxxx.snowflakecomputing.com/?TRACING=CONFIG


jdbc:snowflake://xxxxxx.snowflakecomputing.com/?TRACING=ALL


jdbc:snowflake://xxxxxx.snowflakecomputing.com/?TRACING=FINEST



To enable debugging logs for MySQL, you can configure your agent.xml with the following: 

<database_parms>characterEncoding=UTF-8&logger=com.mysql.cj.log.StandardLogger&profileSQL=true</database_parms>

See MySQL configuration properties.


To enable debugging logs for Postgres, you can configure your agent.xml with one of the following: 

<database_parms>loggerLevel=TRACE&loggerFile=pgjdbc.log</database_parms> 


<database_parms>loggerLevel=DEBUG&loggerFile=pgjdbc.log</database_parms> 

See Logging using java.util.logging.


To enable debugging logs for I/O through the Oracle JDBC driver, configure logging.properties with the following (as with the MSSQL steps above, make sure wrapper.conf references this file):

oracle.level=ALL
oracle.jdbc.driver.level=ALL
oracle.jdbc.pool.level=ALL
oracle.jdbc.util.level=ALL
oracle.sql.level=ALL
oracle.jdbc.handlers=java.util.logging.FileHandler

java.util.logging.FileHandler.level=ALL
java.util.logging.FileHandler.pattern=%h/jdbc4.log 
java.util.logging.FileHandler.count=1
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
oracle.handlers=java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level=ALL
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter




↑ Go to top of page




Monitoring DataSync Agent logs

The DataSync Agent generates log messages in the logs directory that is created upon installation of the Agent. To view logs for your Agent, open the perspectium.log file within the logs directory. 

Here are some examples of log messages that might appear within your perspectium.log file that indicate potential errors with your DataSync Agent:

Error type: Connection timeout
Sample log message(s):
2019-07-11 11:15:54.064 ERROR - main - Replicator - new connection attempt to target: https://example.perspectium.net as user: example password length: 10 failed: com.perspectium.api.MessageBusException: Send Error: Connect to example.perspectium.net:443 [example.perspectium.net] failed: Connection timed out: connect, GET https://example.net/customerstatus HTTP/1.1
Suggested action: Check your network connection and that your Agent has outbound access through your organization's firewall. Also, check that your <message_connection> URL is correct.

Error type: Invalid credentials
Sample log message(s):
2019-07-11 11:21:35.854 ERROR - main - Replicator - new connection attempt to target: https://ofc.example.net:4443/ as user: admin12 password length: 10 failed: com.perspectium.api.MessageBusException: Send Error: Unexpected response status: 401 reason: , GET https://ofc.example.net:4443//customerstatus HTTP/1.1 (Please check that your queue exists and that your credentials are correct)
Suggested action: Check that your username, password, and queue name are correct.

Error type: Database primary key error
Sample log message(s):
2019-07-11 11:27:49.778 ERROR - main - Scheduler - java.lang.IllegalStateException: Unable to locate the database entry using task defined type: randomsql
Suggested action: Check that the database that you're trying to update exists.

Error type: Database connection error
Sample log message(s):
2019-07-11 11:32:12.745 WARN  - main - TaskDatabase - The TCP/IP connection to the host localhost, port 1411 has failed. Error: "Connection refused: connect. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.".
Suggested action: Check your network connection and that your Agent has outbound access through your organization's firewall. Also, check that your database port and name are correct.

Error type: Database login error
Sample log message(s):
2019-07-11 11:29:43.223 ERROR - main - SQLDataSource - Error: Cannot create PoolableConnectionFactory (Login failed for user 'exampleuser'. ClientConnectionId:98e4b368-e34d-47f8-9019-19a2f6d66715)
Suggested action: Check that your database username and password are correct.

Error type: Invalid decryption key
Sample log message(s):
2019-07-11 11:37:03.935 ERROR - test_temporal_subscribe - SubscriberTask - Subscriber Error: com.perspectium.replicator.SubscribeException: Failed to decrypt message, make sure the shared secret keys match!
Suggested action: Check that your decryption key is correct (i.e. matches the key used to encrypt the data in your source instance such as ServiceNow).

Error type: Invalid agent.xml format
Sample log message(s):
2019-07-11 11:39:25.392 ERROR - main - Replicator - (stderr) [Fatal Error] :27:12: The element type "random" must be terminated by the matching end-tag "</random>".
2019-07-11 11:39:25.392 ERROR - main - Replicator - Error parsing null
2019-07-11 11:41:37.659 ERROR - main - Replicator - message_connection configuration directive is missing!
Suggested action: Validate that your agent.xml file is formatted correctly. Also, check that your agent.xml is configured correctly.


↑ Go to top of page




Timeout Values

It may be necessary to adjust the timeout values that the DataSync Agent uses. You can do this with the connection_request_timeout and connection_timeout attributes of <message_connection>. For example, setting these values to 120000 gives a timeout of 120 seconds.

(info) NOTE: This is only applicable if you are using https for the Agent. 


<?xml version="1.0" encoding="ISO-8859-1" standalone="no"?>
<config>
    <agent>
        <subscribe>
            <task>
                <task_name>timeout_example</task_name>
                <message_connection user="XXXX" password="XXX" connection_timeout="120000" connection_request_timeout="120000">https://example.perspectium.net</message_connection>
                .
                .
                .
            </task>
        </subscribe>
    </agent>
</config>






CLOB Configuration

By default, the DataSync Agent's maximum column size is 251 characters for character string type columns such as NVARCHAR.  When the Agent sees a ServiceNow column defined with more than 251 characters, it will create the column as a CLOB type in the database the Agent is connected to. Columns are created with this length and type to ensure all content is preserved from ServiceNow. 

Since ServiceNow uses its own translations to map the column types it defines for “fields” to the column types of the backend database that powers the platform, these fields don’t always have the exact same length as what’s shown in the ServiceNow UI. When a field is created with fewer than 251 characters in ServiceNow, that field is considered a single-line string, which maps to an NVARCHAR type with the same number of characters in the ServiceNow platform's backend database (generally MySQL). If it’s more than 251 characters, the field is mapped and created as a CLOB type in the backend database. 

As a result, when you define a field in ServiceNow with a type of “String” and a length of 400, because this is mapped to a CLOB type in the backend database you can actually enter a value greater than 400 characters in the field, and it will save properly in ServiceNow both in the UI and in background scripts. When the data is sent out from ServiceNow using the Perspectium DataSync application, this field’s data can contain more than 400 characters. 

If the agent were to create a column in the database with 400 characters, this would result in the data being truncated and loss of content. To prevent this, the Agent creates the field with a larger length and uses a CLOB type that has no length restrictions.
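
For illustration, here is how the default threshold plays out (a sketch only; the exact character string type created varies by target database, and the column definitions below are hypothetical):

-- ServiceNow "String" field of length 160: under the default threshold of 251,
-- so the Agent creates it as a sized character string column
short_description NVARCHAR(160)

-- ServiceNow "String" field of length 400: over the threshold, so the Agent
-- creates it as a CLOB, which has no length restriction
description CLOB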

The Agent provides a way to customize this threshold to a value other than the default of 251. To do this, add the following directive to the agent.xml file saved in your Agent's installation directory:

Directive: <database_column_max_size>

Value: A number to be the maximum before converting to a CLOB. This value cannot be greater than the database maximum size for a character string column (see below).

(info) NOTE: Each Agent-supported database has a maximum value that a text column can be created as.  Any column size greater than the value specified below will be created as a CLOB type.


Database              Maximum Column Size
Oracle                2000
SQL Server (MSSQL)    4000
MySQL*                4000
SAP HANA              5000
Amazon Redshift       65535
Amazon Aurora         65535
IBM DB2               32672
HP Vertica            65000
PostgreSQL            10485760
Snowflake             4000
Sybase                32767

(info) NOTE: MySQL also has table row size limits that should be considered before setting this value too high, as you may get "Row size too large. The maximum row size for the used table type, not counting BLOBs, is 65535. This includes storage overhead, check the manual. You have to change some columns to TEXT or BLOBs" exceptions if you create too many columns as character string types and not CLOB types.



<?xml version="1.0" encoding="ISO-8859-1" standalone="no"?>
<config>
  <agent>
     <max_reads_per_connect>4000</max_reads_per_connect>
     <polling_interval>5</polling_interval>
     <skip_message_set_processing/>
     <share/>
     <subscribe>
        <task instances="1">
           <task_name>test_subscribe</task_name>
           <message_connection password="encrypted:vlOtU71yu8N/EFIJH85SSBtaIt7qEEfvqiqft9VZyYE=" queue="psp.out.replicator.testqueue" user="admin" use_basic_consume="false">https://testperspectium.net</message_connection>
           <instance_connection password="encrypted:vlOtU71yu8N/EFIJH85SSPN9aF0P5/YViVwPEVFcGW4=" user="test.user">https://myinstance.service-now.com</instance_connection>
           <handler>com.perspectium.replicator.sql.SQLSubscriber</handler>
           <decryption_key>This is my decryption key for testing</decryption_key>
           <database_type>mysql</database_type>
           <database_server>localhost</database_server>
           <database_port>3306</database_port>
           <database_user>testuser</database_user>
           <database_password>testpassword</database_password>
           <database_parms>characterEncoding=UTF-8 & useSSL=false</database_parms>
           <database>psp_repl</database>
           <database_column_max_size>4000</database_column_max_size>
        </task>
     </subscribe>
  </agent>
</config>



↑ Go to top of page




Instance Filter

The DataSync Agent will process all data which comes into its queue. This means that by default multiple instances can write to the same queue and the agent will process them all. If you do not want this behavior you can modify your configuration so the agent will only process data from a specific instance and skip the rest.

To set up an instance filter for the DataSync Agent, follow these steps: 

  1. Navigate to the directory where you saved your agent.xml file when installing your DataSync Agent. Then, open your agent.xml file in a text editing application.

  2. Add the <key> directive to the <task> directive of your agent.xml. For example, if you wanted to only process data from the ServiceNow instance dev12345, then you would put:

    <key>dev12345</key>


  3. Save the agent.xml and restart the agent.

 (info) NOTE: If you have multiple tasks in your agent.xml, you should place this tag in each task. Each task can have its own key.


<?xml version="1.0" encoding="ISO-8859-1" standalone="no"?>
<config>
    <agent>
        <subscribe>
            <task>
                <task_name>example_subscribe</task_name>
                <message_connection password="encrypted:XXXX" user="XXXX" queue="psp.out.replicator.example" >amqp://example.perspectium.net</message_connection>
                <instance_connection password="encrypted:XXXX" user="XXXX">http://dev18450.service-now.com</instance_connection>
                <handler>com.perspectium.replicator.sql.SQLSubscriber</handler>
                <decryption_key>The cow jumped over the moon</decryption_key>
                <database_type>mysql</database_type>
                <database_server>MyAddress</database_server>
                <database_port>3306</database_port>
                <database_user>XXXX</database_user>
                <database_password>XXXX</database_password>
                <database>psp_repl</database>
 
                <key>dev18450</key>
            </task>
        </subscribe>
 
        <max_reads_per_connect>2000</max_reads_per_connect>
        <polling_interval>5</polling_interval>
    </agent>
</config>



↑ Go to top of page




Error Directory

Users can specify a log directory for messages that have caused errors. These logs contain the messages' data to help you review and recover records. 

To add an Error Directory, follow these steps:

  1. Navigate to the directory where you saved your agent.xml file when installing your DataSync Agent. Then, open your agent.xml file in a text editing application.

  2. Place the tag <error_directory></error_directory> within the agent.xml. 

    For example: <error_directory>../errors</error_directory> will create a directory named errors at the base Agent directory. See more examples below.

  3. Save the agent.xml and restart the agent.

Examples: 

Task Level 

Place the <error_directory> directive within the <task> brackets, defining it explicitly (or omitting it) for each task. Use this approach if you want to configure different error directories per task / queue.


<?xml version="1.0" encoding="ISO-8859-1" standalone="no"?> 
<config> 
	<agent> 
		<subscribe> 
			<task> 
				<task_name>example_subscribe_foo</task_name> 
				<message_connection user="example" password="XXXX" queue="YYYY">amqps://example.perspectium.net</message_connection> 
				<instance_connection user="foo" password="XXXX">https://foo.service-now.com</instance_connection> 
				<handler>com.perspectium.replicator.sql.SQLSubscriber</handler> 
				.... 
				<error_directory>../errorsFoo</error_directory> 
			</task>   
			<task> 
				<task_name>example_subscribe_bar</task_name> 
				<message_connection user="example" password="XXXX" queue="ZZZZ">amqps://example.perspectium.net</message_connection> 
				<instance_connection user="bar" password="XXXX">https://bar.service-now.com</instance_connection> 
				<handler>com.perspectium.replicator.sql.SQLSubscriber</handler> 
				.... 
				<error_directory>../errorsBar</error_directory> 
			</task> 
		</subscribe>   
		<max_reads_per_connect>2000</max_reads_per_connect> 
		<polling_interval>5</polling_interval> 
	</agent> 
</config>


Agent Level

Place the <error_directory> directive at the <agent> level to have all the tasks inherit this behavior. Use this approach if you just want one single error directory for all your tasks / queues.


<?xml version="1.0" encoding="ISO-8859-1" standalone="no"?> 
<config> 
	<agent> 
		<subscribe> 
			<task> 
				<task_name>example_subscribe_foo</task_name> 
				<message_connection user="example" password="XXXX" queue="YYYY">amqps://example.perspectium.net</message_connection> 
				<instance_connection user="foo" password="XXXX">https://foo.service-now.com</instance_connection> 
				<handler>com.perspectium.replicator.sql.SQLSubscriber</handler> 
				.... 
			</task>   
			<task> 
				<task_name>example_subscribe_bar</task_name> 
				<message_connection user="example" password="XXXX" queue="ZZZZ">amqps://example.perspectium.net</message_connection> 
				<instance_connection user="bar" password="XXXX">https://bar.service-now.com</instance_connection> 
				<handler>com.perspectium.replicator.sql.SQLSubscriber</handler> 
				.... 
			</task> 
		</subscribe>   
	<error_directory>../errors</error_directory> 
	<max_reads_per_connect>2000</max_reads_per_connect> 
	<polling_interval>5</polling_interval> 
	</agent> 
</config>









↑ Go to top of page




Log Timezone

By default, the DataSync Agent will produce logs in the timezone of the server that is running the Agent. You can change the timezone manually by modifying the log4j2.xml file in the conf directory.

To change the timezone in the agent logs, follow these steps:

  1. Navigate to the conf directory of your DataSync Agent installation. Then, open the log4j2.xml file in a text editing application.

  2. The <RollingRandomAccessFile> directive controls the log files the Agent creates. Within this directive find the <PatternLayout> directive and add your preferred timezone to the <Pattern> directive. 

    You can see a list of timezones here under the TZ database name column.


    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration>
        <Appenders>
            <Console name="CONSOLE" target="SYSTEM_OUT">
                <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} %-5p - %t - %c{1} - %m%n"/>
            </Console>
            <RollingRandomAccessFile name="FILE" fileName="logs/perspectium.log"
                    filePattern="logs/perspectium-%d{MM-dd-yyyy}-%i.log.gz">
                <PatternLayout> 
                	<Pattern>%d{yyyy-MM-dd HH:mm:ss.SSS}{America/New_York} %-5p - %t - %c{1} - %m%n</Pattern>
                </PatternLayout>
                <Policies>
                    <TimeBasedTriggeringPolicy />
                    <SizeBasedTriggeringPolicy size="20 MB"/>
                </Policies>
                <DefaultRolloverStrategy max="14"/>
            </RollingRandomAccessFile>
            <Async name="ASYNC">
                <AppenderRef ref="FILE"/>
                <AppenderRef ref="CONSOLE"/>
            </Async>
        </Appenders>
        <Loggers>
            <logger name="org.apache.http" level="error" additivity="false">
                <AppenderRef ref="ASYNC"/>
            </logger>
            <logger name="com.perspectium.db" level="info">
                <AppenderRef ref="ASYNC"/>
            </logger>
            <logger name="org.springframework" level="info">
                <AppenderRef ref="ASYNC"/>
            </logger>
            <Root level="info">
                <AppenderRef ref="ASYNC"/>
            </Root>
        </Loggers>
    </Configuration>



    Notice the <Pattern> directive has the {America/New_York} tag added so logs will use the Eastern timezone ({America/New_York} accounts for both daylight and standard time). In this example we're only updating the timezone of the log files, but if you would like to update the console as well (in the case of running in foreground mode), change <PatternLayout> in the <Console> directive too.

  3. Save the log4j2.xml and restart the agent.


↑ Go to top of page




Use Cache

The <use_cache/> directive tells the Agent NOT to reach out to the ServiceNow instance to get the latest table schemas. It will instead attempt to find the table schemas on the local file system it is installed on. Use this when you want to block the Agent from hitting the ServiceNow instance and have it use the provided schemas instead.

In the agent.xml, you will include the <use_cache/> directive within the <task> brackets like so:

<?xml version="1.0" encoding="ISO-8859-1" standalone="no"?> 
<config> 
	<agent> 
		<subscribe> 
			<task> 
				<task_name>example_subscribe</task_name> 
				<message_connection password="encrypt:XXXX" user="XXXX" queue="..." >amqp://example.perspectium.net</message_connection> 
				<instance_connection password="encrypt:XXXX" user="XXXX">http://example.service-now.com</instance_connection> 
				. . . 
				<use_cache/> 
			</task> 
		</subscribe> 
	</agent> 
</config> 


As for Agent schemas: when the Agent receives a message from the instance “acme”, it will attempt to grab the Perspectium-created schema from the acme instance, authenticating as the user in the <instance_connection>. After successfully grabbing the schema for a specific table, it will place that XML file on the local file system in the bin directory of the Agent. More specifically, it will create a folder in the bin directory named acme.service-now.com.schemas_directory (based on the instance name acme). Using the use_cache directive will have the Agent pull the schemas from there.

If you intend to use this and you do not have this structure already, create this folder and import the table schemas. You can get a ZIP file of the schemas you currently need from your instance by running the Exporting Table Schemas option.

You can also grab them manually by saving the file generated by going to: 

https://acme.service-now.com/task.do?PSP_SCHEMA

adjusting the URL for your ServiceNow instance and repeating for each table (task in this example).
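
For example, the cached schemas for the acme instance would be laid out like this (the table schema file names shown are illustrative):

bin/
    acme.service-now.com.schemas_directory/
        incident.xml
        problem.xml
        ...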


Possible Errors

If you are using this be aware that the replication will fail if:

  1. You do not have the directory for the schemas based off the instance name

  2. You do not have a table schema XML file for the table in the message

  3. You do not have the proper XML structure of this schema

You should also be aware that with this configuration you will no longer get updated schemas in the event of structure changes to a table, or schemas pertaining to a new table.


↑ Go to top of page




Sending Receipts in Intervals

You can configure the DataSync Agent to send receipts back to your ServiceNow instance at regular intervals and when the DataSync Agent is shut down.

The following directive is required for this Agent configuration:

Directive: <receipt_send_interval>

Default Value: 1800

Description: Indicates the number of seconds at which the Agent will regularly send receipts back to the sharing/publishing instance.

(info) NOTE: The default value is 1800 seconds (30 minutes).

To configure the DataSync Agent to send receipts at regular intervals and upon shutdown, follow these steps:

  1. Navigate to the directory where you saved your agent.xml file when installing your DataSync Agent. Then, open your agent.xml file in a text editing application.

  2. Add the <receipt_send_interval> opening and closing tags anywhere within <task> tags.

  3. Add a value within the <receipt_send_interval> tags to indicate the number of seconds that should elapse before the Agent sends receipts back to your sharing/publishing instance.

  4. Save and close the agent.xml file and run the DataSync Agent. The Agent will now send receipts at the interval value you have entered and when the Agent is shut down.
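
For example, a task configured to send receipts every 600 seconds might look like the following (a minimal sketch; connection values are placeholders and the omitted directives follow the other examples on this page):

<?xml version="1.0" encoding="ISO-8859-1" standalone="no"?>
<config>
    <agent>
        <subscribe>
            <task>
                <task_name>receipt_example</task_name>
                <message_connection password="encrypted:XXXX" user="XXXX" queue="psp.out.replicator.example">amqps://example.perspectium.net</message_connection>
                <instance_connection password="encrypted:XXXX" user="XXXX">https://example.service-now.com</instance_connection>
                <handler>com.perspectium.replicator.sql.SQLSubscriber</handler>
                ...
                <receipt_send_interval>600</receipt_send_interval>
            </task>
        </subscribe>
    </agent>
</config>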


↑ Go to top of page




Enable Data Reporting

By default, performance reporting on the DataSync Agent is disabled. However, to enable reporting on DataSync Agent performance data, you can add the <enable_performance_report/> tag within the <agent> tag of the agent.xml file that was created upon initial installation of the DataSync Agent.


<?xml version="1.0" encoding="ISO-8859-1" standalone="no"?>
<config>
   <agent>
      <enable_performance_report/>
      <share/>
      <subscribe>
         <task instances="1">
            <task_name>datasyncforservicenow_to_mysql_subscribe</task_name>
            <message_connection password="encrypted:vlOtU71yu8N/EFIJH85SSBtaIt7qEEfvqiqft9VZyYE=" queue="psp.out.replicator.testqueue" user="admin" use_basic_consume="false">https://<name>.perspectium.net</message_connection>
            <instance_connection password="encrypted:vlOtU71yu8N/EFIJH85SSPN9aF0P5/YViVwPEVFcGW4=" user="test.user">https://myinstance.service-now.com</instance_connection>
            <handler>com.perspectium.replicator.sql.SQLSubscriber</handler>
            <decryption_key>The cow jumped over the moon</decryption_key>
            <database_type>mysql</database_type>
            <database_server>localhost</database_server>
            <database_port>3306</database_port>
            <database_user>user</database_user>
            <database_password>encrypted:vlOtU71yu8N/EFIJH85SSMoilKLTeJHQrNZPJ7c5tFU=</database_password>
            <database_parms>autoReconnect=true & characterEncoding=UTF-8 & useSSL=false</database_parms>
            <database_column_max_size>251</database_column_max_size>
            <database>psp_repl</database>
         </task>
      </subscribe>
   </agent>
</config>



↑ Go to top of page




Skip Reporting

Adding the <skip_report/> directive to your agent.xml will cause the agent to skip sending reporting messages. 

(info) NOTE: Adding this directive will also turn off sending receipts back for Data Guarantee.


<?xml version="1.0" encoding="ISO-8859-1" standalone="no"?>
<config>
   <agent>
      <skip_report/> 
      <share/>
      <subscribe>
         <task instances="1">
            <task_name>datasyncforservicenow_to_mysql_subscribe</task_name>
            <message_connection password="encrypted:vlOtU71yu8N/EFIJH85SSBtaIt7qEEfvqiqft9VZyYE=" queue="psp.out.replicator.testqueue" user="admin" use_basic_consume="false">https://<name>.perspectium.net</message_connection>
            <instance_connection password="encrypted:vlOtU71yu8N/EFIJH85SSPN9aF0P5/YViVwPEVFcGW4=" user="test.user">https://myinstance.service-now.com</instance_connection>
            <handler>com.perspectium.replicator.sql.SQLSubscriber</handler>
            <decryption_key>The cow jumped over the moon</decryption_key>
            <database_type>mysql</database_type>
            <database_server>localhost</database_server>
            <database_port>3306</database_port>
            <database_user>user</database_user>
            <database_password>encrypted:vlOtU71yu8N/EFIJH85SSMoilKLTeJHQrNZPJ7c5tFU=</database_password>
            <database_parms>autoReconnect=true & characterEncoding=UTF-8 & useSSL=false</database_parms>
            <database_column_max_size>251</database_column_max_size>
            <database>psp_repl</database>
         </task>
      </subscribe>
   </agent>
</config>



Some examples of reporting messages:

2019-10-10 08:13:38.534 INFO  MessageManager - bytes:167  first one: {"name":"status","topic":"monitor","attributes":"component_type=replicator_agent","type":"heartbeat","value":"Perspectium-Version: Dubnium_4.2.2","key":"examplekey"}

2019-10-10 08:13:40.764 INFO  MessageManager - bytes:189  first one: {"topic":"monitor","type":"replicator","key":"examplekey","name":"Msg/s : EventSubscriberInternalTest","attributes":"","extra":"","psp_timestamp":"2019-10-10 15:13:10.472","value":"0.00"}

2019-10-10 08:13:41.120 INFO  MessageManager - bytes:185  first one: {"topic":"monitor","type":"replicator","key":"examplekey","name":"Msg/s : EventSubscriberInternal","attributes":"","extra":"","psp_timestamp":"2019-10-10 15:13:12.712","value":"0.00"}

2019-10-10 08:13:42.490 INFO  MessageManager - bytes:186  first one: {"extra":"","name":"Msg/s : PagentSubscriberCUSTOM_A","topic":"monitor","attributes":"","type":"replicator","psp_timestamp":"2019-10-10 15:13:40.467","value":"2.20","key":"examplekey"}

2019-10-10 08:13:43.236 INFO  MessageManager - bytes:178  first one: {"extra":"","name":"Msg/s : PagentSubscriber","topic":"monitor","attributes":"","type":"replicator","psp_timestamp":"2019-10-10 15:13:33.255","value":"0.00","key":"examplekey"}


↑ Go to top of page




Timing

You can enable additional timing configurations within your agent.xml file, which give a more detailed view of how the Agent is spending its time processing. The two configurations give details on:

  1. How long it takes to grab data from the Perspectium Integration Mesh (Queue Timing)

  2. How long it takes to both build the statement and send the statement to the database (DB Timing). DB Timing records include database actions such as inserting, updating, or deleting, as well as timings for querying for records to see if they exist, as is done in the .bulk and .delete scenarios before the actual insert/update/delete. Querying for records is indicated with Query DB IO in the logs, while other database actions are indicated with DB IO.

From this and finer-level logging, you can get further detail on any bottlenecks you may see.

To change timing for the DataSync Agent, follow these steps:

  1. Navigate to the directory where you saved your agent.xml file when installing your DataSync Agent. Then, open your agent.xml file in a text editing application.

  2. For queue timing add in the tag <queue_timing/> to your task. For database timing add in the tag <db_timing/> to your task.

  3. Save your agent.xml and restart your agent



<?xml version="1.0" encoding="ISO-8859-1" standalone="no"?>
<config>
   <agent>
      <subscribe>
         <task>
            <task_name>example_subscribe</task_name>
            <message_connection password="YYYY" user="XXXX">https://example.perspectium.net</message_connection>
            <instance_connection password="YYYY" user="XXXX">https://example.service-now.com</instance_connection>
            <handler>com.perspectium.replicator.sql.SQLSubscriber</handler>
            <decryption_key>The cow jumped over the moon</decryption_key>
            <database_type>mysql</database_type>
            <database_server>localhost</database_server>
            <database_port>3306</database_port>
            <database_user>XXXX</database_user>
            <database_password>YYYY</database_password>
            <database_parms></database_parms>
            <database_column_max_size>251</database_column_max_size>
            <database>test_db</database>
 
            <queue_timing/>
            <db_timing/>
         </task>
      </subscribe>
      <max_reads_per_connect>4000</max_reads_per_connect>
      <polling_interval>5</polling_interval>
      <skip_message_set_processing/>
   </agent>
</config>



Timing Logging 

Queue Timing

You will likely see logs like the following for queue_timing. This shows that the 11 records retrieved here (about 4 KB each) took 262 ms in total, averaging 23 ms each.

2019-01-18 10:42:13.872 INFO  - example_subscribe - SubscriberTask - Queue item retrieved in 0ms size: 4292
2019-01-18 10:42:13.896 INFO  - example_subscribe - SubscriberTask - Queue item retrieved in 0ms size: 4292
2019-01-18 10:42:13.950 INFO  - example_subscribe - SubscriberTask - Queue item retrieved in 0ms size: 4292
2019-01-18 10:42:14.004 INFO  - example_subscribe - SubscriberTask - Queue item retrieved in 1ms size: 4292
2019-01-18 10:42:14.129 INFO  - example_subscribe - SubscriberTask - Queue item retrieved in 102ms size: 0
2019-01-18 10:42:14.129 INFO  - example_subscribe - SubscriberTask - example_subscribe 11 items retrieved from queue: psp.out.replicator.example in 262ms avg: 23ms
2019-01-18 10:42:14.232 INFO  - example_subscribe - SubscriberTask - Queue item retrieved in 102ms size: 0
2019-01-18 10:42:14.232 INFO  - example_subscribe - SubscriberTask - example_subscribe 1 items retrieved from queue: psp.out.replicator.example in 102ms avg: 102ms


DB Timing

You will likely see logs like the following for db_timing. This shows that for the given record the DB IO took 2 ms, with an average of 4 ms so far.
Note: The DB IO is only logged once every 50 records.

2019-01-18 10:44:29.183 INFO  - example_subscribe - SQLCommand - DB IO completed in 2ms - total: 232 avg: 4


Finest Logs

If you enable “FINEST” level logging you can also drill down into the logs to see the time it takes to execute DB commands. When the Agent receives a message it will:

  1. Decrypt & Deserialize it

  2. Determine & Verify the table schema

  3. Query the database to determine if it needs to insert or update

  4. Build the statement to send to the database

  5. Send the statement to the database

For the most part the longest delays are steps 3 and 5, querying the database and updating the database.


Query

Here is an example of the Agent finishing verifying the Table Schema and querying for whether to do an insert or update:

...
2019-01-08 16:44:33.172 FINEST - example_subscribe - SQLRow - Processing column: "state"
2019-01-08 16:44:33.172 FINEST - example_subscribe - SQLCommand - executeQuery calling build...
2019-01-08 16:44:33.172 FINEST - example_subscribe - StatementBuilder - processing row size: 0
2019-01-08 16:44:33.203 FINE  - example_subscribe - SQLCommand - PreparedStatement results: false closed: false
2019-01-08 16:44:33.203 FINE  - example_subscribe - TableAction - Action: insert
2019-01-08 16:44:33.203 FINE  - example_subscribe - TableAction - Action: insert
2019-01-08 16:44:33.203 FINEST - example_subscribe - StatementBuilder - processing row size: 50
...

At 16:44:33.172 it finished verifying the schema and executed the query. At 16:44:33.203 we got the response from the database, so in this case the database took 31 milliseconds to return this value. From this we determined that we have to perform an insert.


Insert/Update

Here is an example of the Agent finishing building the statement to send to the database:

2019-01-08 16:44:33.204 FINE  - example_subscribe - StatementBuilder - Table: "incident" Column: "state" index: 50 type: 2 value: 1
2019-01-08 16:44:33.204 FINEST - example_subscribe - StatementBuilder - SourceColumnName: state ColumnName: "state" SQLType: 4 DBType: 2 column size: 40 table size: 38 value size: 1
2019-01-08 16:44:33.238 FINE  - example_subscribe - SQLCommand - PreparedStatement results: false updateCount: 1 closed: false
2019-01-08 16:44:33.238 FINEST - example_subscribe - MessageBus - recv issued using: Uri: https://example.perspectium.net queue: psp.out.replicator.example

At 16:44:33.204 it finished building the statement and executed the insert. At 16:44:33.238 we got the response from the database, so in this case the database took 34 milliseconds to commit the insert.


↑ Go to top of page




Running DataSync Agent in Test Mode

The DataSync Agent can be run in test mode for testing purposes. Test mode means that the Agent will run without performing minimum requirements checks for client performance. 

(info) NOTE: Only follow the steps to configure test mode below if you want to run the Agent in a QA environment, as we cannot guarantee the Agent will run stably and without performance issues if your environment does not meet the minimum requirements.

To install the DataSync Agent in test mode, follow these steps:

  1. Navigate to the directory where you saved your agent.xml file when installing your DataSync Agent. Then, open your agent.xml file in a text editing application.

  2. Within the <agent> directive, add the self-closing tag <test_mode/>.

  3. Save your agent.xml and restart your agent


<?xml version="1.0" encoding="ISO-8859-1" standalone="no"?>
<config>
   <agent>
      <test_mode/>
      <subscribe>
         <task instances="1">
            <task_name>test_agent_subscribe</task_name>
            <message_connection password="encrypted:XXXX" queue="psp.out.replicator.example" use_basic_consume="true" user="example">amqps://example.perspectium.net</message_connection>
            <instance_connection password="encrypted:XXXX" user="example">https://example.service-now.com</instance_connection>
            <handler>com.perspectium.replicator.sql.SQLSubscriber</handler>
            <decryption_key>The cow jumped over the moon</decryption_key>
            <database_type>mysql</database_type>
            <database_server>localhost</database_server>
            <database_port>3306</database_port>
            <database_user>example</database_user>
            <database_password>encrypted:XXXX</database_password>
            <database_parms>characterEncoding=UTF-8</database_parms>
            <database>psp_repl</database>
         </task>
      </subscribe>
      <max_reads_per_connect>4000</max_reads_per_connect>
      <polling_interval>5</polling_interval>
      <skip_message_set_processing/>
   </agent>
</config>



↑ Go to top of page




Using Basic Consume

The DataSync Agent has a directive to poll for messages from the Integration Mesh that provides higher reliability when faced with connection problems as well as improved throughput when communicating over AMQPS.

This directive should only be used for AMQPS connections and is configured by setting the following attribute on the <message_connection> tag: use_basic_consume="true".


<?xml version="1.0" encoding="ISO-8859-1" standalone="no"?>
<config>
   <agent>
      <subscribe>
          <task>
             <task_name>example_subscribe</task_name>
             <message_connection use_basic_consume="true" user="XXXX" password="XXXX">amqps://example.perspectium.net</message_connection>
             ...
             ...
             ...
          </task>
      </subscribe>
 
      <max_reads_per_connect>4000</max_reads_per_connect>
      <polling_interval>5</polling_interval>
   </agent>
</config>



↑ Go to top of page




Subscriber Task Level Message Filtering

In some situations, you may need a replicator Agent to only process messages containing specific topic, type, or key values. This is especially true when a ServiceNow instance is cloned but the targeted queues of its dynamic and bulk shares have not changed, causing messages that target your replicator Agent to arrive with a different key value. To enable this level of filtering, add specific topic, type, or key directives directly in the subscriber task section of your agent configuration (agent.xml).

The following example will cause the subscriber task to only process messages that contain the key “ven01225”:

<?xml version="1.0" encoding="ISO-8859-1" standalone="no"?>
<config>
    <agent>
        <share/>
        <subscribe>
            <task>
                <task_name>dloo_mac</task_name>
                <message_connection password="xxxx" user="admin">amqp://nvirginia-cluster-2-amqp.perspectium.net</message_connection>
                <instance_connection password="xxxx" user="admin">https://ven01225.service-now.com</instance_connection>
                <key>ven01225</key>
                <handler>com.perspectium.replicator.sql.SQLSubscriber</handler>
                <decryption_key>xxxx</decryption_key>
                <database_type>mysql</database_type>
                <database_server>localhost</database_server>
                <database_port>3306</database_port>
                <database_user>xxxx</database_user>
                <database>psp_repl</database>
                <error_directory>../errors</error_directory>
            </task>
        </subscribe>
        <max_reads_per_connect>4000</max_reads_per_connect>
        <polling_interval>30</polling_interval>
    </agent>
</config>
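
Similarly, you can filter on topic or type by adding those directives in the same place as <key> within the task (a sketch only; the topic and type values below are hypothetical and should match the values in the messages you want processed):

                <topic>replicator</topic>
                <type>incident</type>
                <key>ven01225</key>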



↑ Go to top of page