
Troubleshoot Outages

Skyhigh Cloud Connector behaves as described in the following outage scenarios. For assistance, contact Skyhigh Security Support.

Cloud Connector is Down in Normal Mode

  • All state is persisted. When the process is restarted, Cloud Connector continues to process files.
    • The file whose processing was interrupted is not picked up again automatically; rename it to queue it for reprocessing (see the sketch after this list).
  • Cloud Connector does not process any logs during this period.
  • Cloud Connector cannot receive any Syslog messages from the source.
  • Data cannot be detokenized until Cloud Connector comes up.
  • Skyhigh CASB can be used, but cannot display any detokenized data.
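
The rename step can be done by hand or scripted. The following is a minimal sketch in Java, assuming the interrupted file is still in the Cloud Connector log directory; the path and the _retry suffix are hypothetical, since any rename that produces a new file name causes the file to be queued as fresh input.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class RequeueInterruptedLog {
        public static void main(String[] args) throws IOException {
            // Hypothetical path: point this at the file whose processing
            // was interrupted in your Cloud Connector log directory.
            Path interrupted = Path.of("C:\\shnlp_logs\\interrupted_log.txt");
            // Renaming gives the file a new name, so Cloud Connector treats
            // it as fresh input and queues it for processing again.
            Path renamed = interrupted.resolveSibling(
                    interrupted.getFileName().toString().replace(".txt", "_retry.txt"));
            Files.move(interrupted, renamed);
            System.out.println("Requeued as: " + renamed);
        }
    }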

One Cloud Connector is Down in Active/Active Mode

  • When Active/Active Mode is enabled and one instance of Cloud Connector goes down, the other instances pick up its processing.
  • Once you have restarted the stopped process, a message similar to the following is logged to the file shnlogprocessor-debug.log after the server restart. The message appears because every JVM has a different ID: the lock file is created and owned by the first shnlps process (JVM), so the restarted shnlps process does not own that lock file. This does not interrupt log processing (see the sketch after the log excerpt):
    01 Apr 2016 15:53:57,497 [ERROR] [main] LocksFileProcessLogDao   | Failed to upsert FileProcessLog for: 
    C:\shnlp_logs\Skyhigh_Generated_BC_Logs_20160401071947_4092787272.txt com.shn.common.io.lock.FileLockNotOwnerException: 
    File is not locked by this JVM+thread
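
The error reflects standard JVM file locking. Below is a minimal sketch of that mechanism, not the product's actual implementation; the lock file name is hypothetical.

    import java.io.IOException;
    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;
    import java.nio.channels.OverlappingFileLockException;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class LockOwnershipDemo {
        public static void main(String[] args) throws IOException {
            Path lockFile = Path.of("shnlp.lock"); // hypothetical lock file name
            try (FileChannel channel = FileChannel.open(lockFile,
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
                try {
                    // tryLock returns null when another process owns the lock.
                    FileLock lock = channel.tryLock();
                    if (lock == null) {
                        System.out.println("Lock owned by another JVM; it keeps processing.");
                    } else {
                        System.out.println("This JVM owns the lock.");
                        lock.release();
                    }
                } catch (OverlappingFileLockException e) {
                    // Thrown when a thread in this same JVM already holds the lock.
                    System.out.println("Lock already held within this JVM.");
                }
            }
        }
    }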

Connection to Skyhigh CASB is Down

  1. Registry updates and configuration fail to download.
  2. Because Cloud Connector requires authorization, a connection refused exception occurs.
  3. Cloud Connector continuously checks for access and recovers automatically when the connection is back up (see the sketch after this list).
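
A minimal sketch of that check-and-recover loop, assuming a hypothetical endpoint and a 60-second polling interval (the product's actual endpoint and interval are not stated here):

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class CasbConnectivityCheck {
        public static void main(String[] args) throws InterruptedException {
            // Hypothetical endpoint and interval, for illustration only.
            while (!isReachable("https://casb.example.com")) {
                System.out.println("CASB unreachable; retrying in 60s...");
                Thread.sleep(60_000);
            }
            System.out.println("Connection restored; registry and configuration downloads resume.");
        }

        static boolean isReachable(String endpoint) {
            try {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL(endpoint).openConnection();
                conn.setConnectTimeout(5_000);
                conn.connect();
                return true;
            } catch (IOException e) {
                return false; // covers "connection refused" while the link is down
            }
        }
    }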

Log Collector is Down

  1. Cloud Connector can process logs but fails during event publishing.
  2. Cloud Connector retries the connection multiple times over the next 30 minutes to publish the events.
  3. If the retries are unsuccessful, the events are lost and file processing is marked as failed (see the sketch after this list).
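
A minimal sketch of that retry window, assuming a 60-second interval between attempts; publishEvents() is a hypothetical stand-in for the real publishing call.

    import java.time.Duration;
    import java.time.Instant;

    public class PublishWithRetryWindow {
        public static void main(String[] args) throws InterruptedException {
            // Retry for up to 30 minutes, as described above; the 60-second
            // interval between attempts is an assumption.
            Instant deadline = Instant.now().plus(Duration.ofMinutes(30));
            boolean published = false;
            while (Instant.now().isBefore(deadline)) {
                if (publishEvents()) {
                    published = true;
                    break;
                }
                Thread.sleep(60_000);
            }
            System.out.println(published
                    ? "Events published."
                    : "Retry window exhausted; events lost, file marked as failed.");
        }

        // Hypothetical stand-in for the real call that publishes events
        // to the Log Collector.
        static boolean publishEvents() {
            return false;
        }
    }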