McAfee Enterprise MVISION Cloud

Deploy Connectors

Download and deploy connectors alongside your private applications. You can deploy multiple connectors for redundancy and scaling. When you add an application, you can associate it with several connector groups for high availability. For example, if the VM running one connector fails, your application remains secured and accessible through another running connector.

Before you begin

Skyhigh Security strongly recommends deploying connectors on a Virtual Machine (VM) with Ubuntu 18.x or later, 4 CPUs, 8 GB RAM, and a 50 GB HDD. You can also deploy connectors on Red Hat Enterprise Linux versions 7 and 8. The hostname of the VM is used to set the PoP name in the Skyhigh CASB UI, so it is good practice to keep the hostname under 64 characters.

NOTE: Each connector is associated with a connector group. When you create a connector group, remember to copy the provisioning key it generates; a connector is tied to its connector group through this key. For optimal performance, Skyhigh recommends that you deploy connectors at the location closest to the PoP.

When you are using a firewall, you must set up your firewall to allow the following domains and HTTP(S) ports:

Domains                                    Port  Purpose
myshn.net                                  443   Updates the PoP status in the Skyhigh CASB UI
index.docker.io                            443   Docker Hub container image library, used to pull images and authenticate tokens
registry-1.docker.io
auth.docker.io
production.cloudflare.docker.com
storage.googleapis.com                     443   Storage that holds information about the latest Kubernetes release
k8s.gcr.io                                 443   Main Kubernetes image-serving system, which stores images
cdn.fwupd.org                                    Open-source daemon that manages the installation of firmware updates on Linux systems
api.snapcraft.io                           443   Snap daemon installation
canonical-lgw01.cdn.snapcraftcontent.com
canonical-bos01.cdn.snapcraftcontent.com
security.ubuntu.com                        443   Download and install packages on the (Ubuntu) host as part of connector deployment
azure.archive.ubuntu.com
packages.microsoft.com
changelogs.ubuntu.com
motd.ubuntu.com
iam.mcafee-cloud.com                       443   Register the token or get access for user accounts from the IAM service
us.pa-wgcs.mcafee-cloud.com                443   Create an OpenVPN tunnel with the Private Access Gateway
de.pa-wgcs.mcafee-cloud.com
sg.pa-wgcs.mcafee-cloud.com
wgcs.mcafee-cloud.com:8080                 8080  Endpoint for registering the connector
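Before starting the deployment, you can sanity-check outbound reachability from the VM. The sketch below probes a subset of the allow-list over HTTPS; it assumes curl is installed on the host, and the helper name check_domains is ours, not part of the product.

```shell
# Sketch: probe outbound HTTPS reachability to allow-listed domains.
# check_domains is a hypothetical helper; assumes curl is on the host.
check_domains() {
  for d in "$@"; do
    if curl -s -o /dev/null --connect-timeout 3 "https://$d" 2>/dev/null; then
      echo "OK      $d"
    else
      echo "BLOCKED $d"
    fi
  done
}

check_domains myshn.net index.docker.io storage.googleapis.com k8s.gcr.io
```

Any BLOCKED line points at a domain your firewall still needs to allow (or, for Docker Hub, at one of its companion domains in the table).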

 

  1. On the Skyhigh CASB navigation bar, click the settings icon.
  2. From the drop-down list, click Service Management.
  3. Click Add Service Instance.
  4. Select VMware vCenter.
  5. In the Instance Name field, enter the service instance name.
  6. Click Done.
    Adds the selected service instance.
  7. Under Services on the Service Management page, select the name of the service instance.
  8. Click Setup.
  9. Click Download Deployment Package.
    Downloads the PoPPackage.tar.
  10. Extract the PoPPackage.tar file.
  11. Extract the infrastructure.tar file, and retrieve the infra.sh file from the vCenter folder.
  12. Copy both PoPDeployment.tar and infra.sh to the Ubuntu VM.
    NOTE: Make sure that the VM is set to the UTC timezone.
  13. Configure Domain Name System (DNS) in the host for name resolution.
    NOTE: You can configure a maximum of three DNS name servers in a host.
  14. Execute infra.sh on the VM and provide the following parameters:

sudo bash infra.sh --provision_key="<PROV_KEY>" --gateway=<GATEWAY_IP> --proxy=<PROXY> --no_proxy=<NO_PROXY>

NOTE: The provisioning key is generated when you create a connector group. It is a text string that associates a connector with its connector group. You can use a provisioning key as many times as the maximum number of connectors you specified when creating the connector group.

  • infra.sh invokes the deployment of a connector.
  • <GATEWAY_IP> is the nearest Private Access Gateway, deployed in the following PoPs:
    • US PoP - us.pa-wgcs.mcafee-cloud.com
    • Germany PoP - de.pa-wgcs.mcafee-cloud.com
    • Singapore PoP - sg.pa-wgcs.mcafee-cloud.com
    • London PoP - gb.pa-wgcs.mcafee-cloud.com
    • Brazil PoP - br.pa-wgcs.mcafee-cloud.com

      NOTE: For optimal performance, select the PoP location nearest to where you deploy the connectors.
  • <PROXY> is the address of the proxy server.
  • <NO_PROXY> is the list of domains that bypass the proxy.

     NOTE: Set the <PROXY> and <NO_PROXY> parameters only when your connector uses a proxy server. When you use a proxy, make sure to add corp.nai.org, .internalzone.com, .scur.com, and .corp.mcafee.com to the <NO_PROXY> parameter.
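The documented provisioning keys begin with "eyJ", which is base64 for '{"', i.e. the key is a base64-encoded JSON blob. If you want to confirm you copied the whole key before running infra.sh, you can decode it locally. This is an illustrative sketch only; peek_key is our name, and the key should be treated as a secret.

```shell
# Sketch: decode a provisioning key to confirm it is intact base64 JSON.
# peek_key is a hypothetical helper; treat the real key as a secret.
peek_key() {
  key="$1"
  # Restore '=' padding in case it was stripped when the key was copied.
  while [ $(( ${#key} % 4 )) -ne 0 ]; do key="${key}="; done
  printf '%s' "$key" | base64 -d 2>/dev/null
}

# Demo with a made-up key; prints {"connectorName":"demo","customerId":"1"}
peek_key "$(printf '{"connectorName":"demo","customerId":"1"}' | base64 | tr -d '=\n')"
```

If the decoded output is truncated or not valid JSON, re-copy the key from the connector group page before deploying.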

The following is an example of a sudo command:

sudo bash infra.sh --provision_key="ey.....LTUwRTVCOUE2NTFFNCJ9" --gateway=us.pa-wgcs.mcafee-cloud.com 

Example with a proxy between the connector and the Internet:
sudo bash infra.sh --provision_key="ey.....LTUwRTVCOUE2NTFFNCJ9" --gateway=us.pa-wgcs.mcafee-cloud.com \
     --proxy=http://10.212.24.192:9090 --no_proxy=localhost,.corp.mcafee.com,172.17.0.1,ubuntu,127.0.0.1


sudo bash ./infra.sh --provision_key="<PROV_KEY>" --gateway=<GATEWAY>
where:
<PROV_KEY> = eyJjb25uZWN0b3JOYW1lIjoiWlROQWFscGhhIiwiY3VzdG9tZXJJZCI6IjEjcyLTUwRTVCOUE2NTFFNCJ9
<GATEWAY> = us.pa-wgcs.mcafee-cloud.com
<PROXY> = http://10.212.24.192:9090 (optional)
<NO_PROXY> = localhost,corp.nai.org,.internalzone.com,.scur.com,.corp.mcafee.com,172.17.0.1,ubuntu,127.0.0.1 (optional)
  15. Execute sudo kubectl get pods -n cwpp to check the status of the pods.
    The following is an example of the pod status:

 root@lubuntu-core:~# sudo kubectl get pods -n cwpp
NAME                                READY   STATUS      RESTARTS   AGE
connector-ztna-5454cd865c-6hhdk     1/1     Running     0          6d21h
cwpp-cicd-56d6dcc9b7-dl5cq          1/1     Running     0          6d21h
cwpp-connector-7f8kj                1/1     Running     0          6d21h
cwpp-logging-4xkzx                  1/1     Running     0          6d21h
cwpp-pop-manager-1642047000-jzqxw   0/1     Completed   0          12m
cwpp-pop-manager-1642047300-mvbwz   0/1     Completed   0          7m10s
cwpp-pop-manager-1642047600-fhtmc   0/1     Completed   0          2m10s
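Every pod should report Running (or Completed, for the periodic cwpp-pop-manager jobs). Any other status, such as CrashLoopBackOff, means the deployment needs attention. The sketch below scans a listing like the one above; pods_healthy is our name, not a product command.

```shell
# Sketch: read a `kubectl get pods --no-headers` listing on stdin and report
# any pod whose STATUS is neither Running nor Completed.
# pods_healthy is a hypothetical helper name.
pods_healthy() {
  awk '$3 != "Running" && $3 != "Completed" { print "unhealthy: " $1 " (" $3 ")"; bad = 1 }
       END { exit bad }'
}

# On the VM:  sudo kubectl get pods -n cwpp --no-headers | pods_healthy
# Demo against a captured listing:
printf '%s\n' \
  'connector-ztna-5454cd865c-6hhdk 1/1 Running 0 6d21h' \
  'cwpp-pop-manager-1642047000-jzqxw 0/1 Completed 0 12m' \
  | pods_healthy && echo "all cwpp pods healthy"
```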

After the deployment completes successfully, the connector and PoP Manager images are created on the VM, and your Docker instance runs as a container. You can check the PoP status on the PoP Management page.

Once the connector is deployed, it automatically registers with Skyhigh SSE, generates a certificate, and gets it signed by Skyhigh SSE. The connector then establishes a tunnel with the Private Access Gateway using this signed certificate, and provides secure access to the requested private application through the tunnel.

The connectors are automatically upgraded to the latest available version. This feature is supported only on functional connectors of version v1.0.0.3 and later.

To check the connector version, execute the following command:

sudo kubectl describe pod connector-<connector name> -n cwpp | grep Image 

To find the name of the connector, execute the following command:

sudo kubectl get pods -n cwpp
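The two commands above can be combined: extract the connector pod name from the listing, then feed it to kubectl describe. The sketch below shows the extraction step against a captured line; connector_pod is our name, not a product command.

```shell
# Sketch: pull the connector pod name out of the pod listing.
# connector_pod is a hypothetical helper name.
connector_pod() {
  awk '$1 ~ /^connector-/ { print $1; exit }'
}

# On the VM:
#   pod=$(sudo kubectl get pods -n cwpp --no-headers | connector_pod)
#   sudo kubectl describe pod "$pod" -n cwpp | grep Image
# Demo against a captured line (prints connector-ztna-5454cd865c-6hhdk):
printf 'connector-ztna-5454cd865c-6hhdk 1/1 Running 0 6d21h\n' | connector_pod
```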

Best practice to shut down a VM

It is important to stop the microk8s service before you shut down a VM. Start the microk8s service again after you start the VM.

  1. Execute the following commands to stop the microk8s service:

  root@lubuntu-core:~# kubectl get pods -n cwpp
NAME                                 READY     STATUS     RESTARTS     AGE
connector-ztna-5454cd865c-6hhdk      1/1       Running    0            6d21h
cwpp-cicd-56d6dcc9b7-dl5cq           1/1       Running    0            6d21h
cwpp-connector-7f8kj                 1/1       Running    0            6d21h
cwpp-logging-4xkzx                   1/1       Running    0            6d21h
cwpp-pop-manager-1642047000-jzqxw    0/1       Completed  0            12m
cwpp-pop-manager-1642047300-mvbwz    0/1       Completed  0            7m10s
cwpp-pop-manager-1642047600-fhtmc    0/1       Completed  0            2m10s
root@lubuntu-core:~#

root@lubuntu-core:~# microk8s stop
Stopped

root@lubuntu-core:~# microk8s status
microk8s is not running. Use microk8s inspect for a deeper inspection
  2. Shut down and start the VM, or reboot the VM.

  3. Execute the following commands to start the microk8s service:

  root@lubuntu-core:~# microk8s start
Started.
Enabling pod scheduling
node/lubuntu-core already uncordoned

root@lubuntu-core:~# kubectl get pods -n cwpp
NAME                                 READY     STATUS     RESTARTS     AGE
connector-ztna-5454cd865c-6hhdk      1/1       Running    1            6d21h
cwpp-cicd-56d6dcc9b7-dl5cq           1/1       Running    1            6d21h
cwpp-connector-7f8kj                 1/1       Running    1            6d21h
cwpp-logging-4xkzx                   1/1       Running    1            6d21h
cwpp-pop-manager-1642047000-jzqxw    0/1       Completed  0            14m
cwpp-pop-manager-1642047300-mvbwz    0/1       Completed  0            9m27s
cwpp-pop-manager-1642047600-fhtmc    0/1       Completed  0            4m27s
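After microk8s start, the pods can take a short while to come back. A polling sketch is shown below; the retry budget is an arbitrary choice, and connector_state is our name, not a product command.

```shell
# Sketch: read a pod listing on stdin and print the connector pod's STATUS
# column. connector_state is a hypothetical helper name.
connector_state() {
  awk '$1 ~ /^connector-/ { print $3; exit }'
}

# On the VM, poll until the connector is Running again (the 30 x 5 s retry
# budget here is an arbitrary assumption):
#   for i in $(seq 1 30); do
#     [ "$(sudo kubectl get pods -n cwpp --no-headers | connector_state)" = "Running" ] && break
#     sleep 5
#   done
# Demo against a captured line (prints Running):
printf 'connector-ztna-5454cd865c-6hhdk 1/1 Running 1 6d21h\n' | connector_state
```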

 

Redeploy connectors

Redeploying the connector on the same host helps when the Ubuntu VM was not configured properly, or when the pods (containers) are unhealthy due to an incorrect provisioning key or gateway.

NOTE: Download the deployment package from the service instance again when you receive a "package is expired" message. We strongly recommend deploying connectors on a VM with Ubuntu 18.x or later, 4 CPUs, 8 GB RAM, and a 50 GB HDD.

  1. On the Skyhigh CASB navigation bar, click the settings icon.
  2. From the drop-down list, click PoP Management.
  3. Select the VM that is in the Unhealthy status.
  4. Click Delete in the detail panel.
  5. Execute the following in the VM Command Line Interface (CLI) console:
    1. snap remove microk8s
    2. rm -rf /opt/McAfee
    3. sudo bash infra.sh --provision_key=<PROV_KEY> --gateway=<GATEWAY_IP> --proxy=<PROXY> --no_proxy=<NO_PROXY>
      NOTE: The provisioning key is generated when you create a connector group. It is a text string that associates a connector with its connector group. You can use a provisioning key as many times as the maximum number of connectors you specified when creating the connector group.
    • infra.sh invokes the deployment of a connector.
    • <GATEWAY_IP> is the nearest Private Access Gateway, deployed in the following PoPs:
      • US PoP - us.pa-wgcs.mcafee-cloud.com
      • Germany PoP - de.pa-wgcs.mcafee-cloud.com
      • Singapore PoP - sg.pa-wgcs.mcafee-cloud.com
      • London PoP - gb.pa-wgcs.mcafee-cloud.com
      • Brazil PoP - br.pa-wgcs.mcafee-cloud.com

        NOTE: For optimal performance, select the PoP location nearest to where you deploy the connectors.
    • <PROXY> is the address of the proxy server.
    • <NO_PROXY> is the list of domains that bypass the proxy.
      NOTE: Set the <PROXY> and <NO_PROXY> parameters only when your connector uses a proxy server. When you use a proxy, make sure to add corp.nai.org, .internalzone.com, .scur.com, and .corp.mcafee.com to the <NO_PROXY> parameter.

The following is an example of a sudo command:

sudo bash infra.sh --provision_key="eyJj...CJ9" --gateway=us.pa-wgcs.mcafee-cloud.com

where:
<PROV_KEY> = eyJ...NCJ9
<GATEWAY_IP> = us.pa-wgcs.mcafee-cloud.com

  6. Execute sudo kubectl get pods -n cwpp to check the status of the pods.
    The following is an example of the pod status:

 root@lubuntu-core:~# sudo kubectl get pods -n cwpp
NAME                                READY   STATUS      RESTARTS   AGE
connector-ztna-5454cd865c-6hhdk     1/1     Running     0          6d21h
cwpp-cicd-56d6dcc9b7-dl5cq          1/1     Running     0          6d21h
cwpp-connector-7f8kj                1/1     Running     0          6d21h
cwpp-logging-4xkzx                  1/1     Running     0          6d21h
cwpp-pop-manager-1642047000-jzqxw   0/1     Completed   0          12m
cwpp-pop-manager-1642047300-mvbwz   0/1     Completed   0          7m10s
cwpp-pop-manager-1642047600-fhtmc   0/1     Completed   0          2m10s

After the deployment completes successfully, the connector and PoP Manager images are created on the VM, and your Docker instance runs as a container. You can check the PoP status on the PoP Management page.
