Problem Statement:
Changing the host switch mode from STANDARD to ENS_INTERRUPT via Transport Node Profiles in NSX 4.2.* releases triggers
an immediate change on the hosts, which can result in tens of seconds of downtime per host. This downtime can
adversely impact vSAN and other workloads.


Solution:
To avoid this downtime on a TNC (a vSphere cluster prepped with NSX), follow a host-by-host approach: put a host into
maintenance mode, change its STANDARD mode switches to ENS_INTERRUPT mode, take the host out of maintenance mode, and
repeat the same steps for every host in the cluster. Once the update is done on all hosts, apply a new TNP to the TNC.
The new TNP should be identical to the TNC's old TNP with one difference: every switch whose mode is STANDARD in the
old TNP should have mode ENS_INTERRUPT in the new TNP.
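The host-by-host flow above can be sketched as follows. This is an illustrative outline only; the four callables are hypothetical placeholders for the actual vCenter and NSX operations the script performs.

```python
# Illustrative sketch of the host-by-host flow described above.
# The callables are hypothetical placeholders for real vCenter/NSX API calls.
from typing import Callable, Iterable


def enable_ens_interrupt_cluster(
    hosts: Iterable[str],
    enter_maintenance_mode: Callable[[str], None],
    set_switch_mode_ens_interrupt: Callable[[str], None],
    exit_maintenance_mode: Callable[[str], None],
    apply_new_tnp: Callable[[], None],
) -> None:
    for host in hosts:
        enter_maintenance_mode(host)           # drain workloads off the host first
        set_switch_mode_ens_interrupt(host)    # change STANDARD switches to ENS_INTERRUPT
        exit_maintenance_mode(host)            # bring the host back before moving on
    apply_new_tnp()                            # finally attach the new TNP to the cluster
```

The key point the sketch captures is ordering: only one host is in maintenance mode at a time, and the new TNP is attached only after every host has been updated.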


Guidance for user before running script:
1. For the selected vSphere cluster, make sure that DRS is enabled and set to fully automated mode. This ensures
that hosts in the cluster can enter maintenance mode via the script.
2. If the "tnp_id_to_apply" for a cluster is a TNP ID string that is not used in NSX, the script will
auto-create a new TNP with that ID from the cluster's current TNP. The new TNP will be the same
as the current TNP except that each host-switch with STANDARD mode will change to ENS_INTERRUPT mode
and there will be no high-performance-hostswitch-profile.
If the "tnp_id_to_apply" for a cluster is the ID of an existing TNP in NSX, the existing TNP must
not be the TNP currently attached to the cluster. The existing TNP becomes the cluster's new TNP;
the script applies it to the hosts in the cluster and attaches it to the cluster at the end. This
TNP should be the same as the cluster's current TNP except that each host-switch with STANDARD mode
in the current TNP must have ENS_INTERRUPT mode.
A single TNP can be shared by multiple TNCs (say TNC1, TNC2, TNC3). For clusters sharing the same TNP,
the same "tnp_id_to_apply" can be used; the clusters will then share the new TNP after the
script runs successfully.
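The handling of "tnp_id_to_apply" described above amounts to a simple decision, sketched here as a hypothetical helper (the shipped script's internals may differ):

```python
# Illustrative sketch of how "tnp_id_to_apply" is interpreted, per the rules above.
def decide_tnp_action(tnp_id_to_apply: str, current_tnp_id: str, existing_tnp_ids: set) -> str:
    if tnp_id_to_apply not in existing_tnp_ids:
        # Unused ID: auto-create a new TNP cloned from the cluster's current one,
        # with STANDARD host-switches changed to ENS_INTERRUPT.
        return "auto_create"
    if tnp_id_to_apply == current_tnp_id:
        # An existing TNP must not be the TNP currently attached to the cluster.
        raise ValueError("tnp_id_to_apply must not be the cluster's current TNP")
    # Existing ID: apply it host by host, then attach it to the cluster at the end.
    return "apply_existing"
```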

Script, README file and sample config json file are present in the following directory on NSX appliance.
"/opt/vmware/migration-coordinator-tomcat/bin/uens-adoption/config"

How to run the script:
  python enable_uens.py -f "/var/log/enable_uens_config.json"
or, to save the logs to a file, run the command as follows:
  python enable_uens.py -f "/var/log/enable_uens_config.json" 2>&1 | tee -a /var/log/enable_uens.log
  Note: press the Enter key once if the "Enter NSX password:" prompt is not shown in the console.

In the above command, the config file ("-f") is a mandatory argument, and the file should have all the input values
needed by the script. The script also needs the NSX and vCenter passwords, which can be provided in two different ways:
1. Via the password prompt. If the above command is executed without the environment variables below set, it will
prompt the user to enter the NSX and vCenter passwords via a secure command prompt.
2. Via environment variables set before starting the script. The environment variables that need to be set are
"ENABLE_UENS_SCRIPT_NSX_PASSWORD" and "ENABLE_UENS_SCRIPT_VC_PASSWORD". If the passwords are set in environment
variables before the script is executed, the script does NOT ask for them via the command prompt.
Additionally, "-s" can be added before "-f" in the above command to skip TLS/SSL verification; do this only for
testing or when running in a trusted and secured environment.
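The password lookup described above can be sketched like this; it is an illustrative approximation of the behavior, not the script's actual code. The environment variables are checked first, and the secure prompt is used only as a fallback.

```python
import getpass
import os


# Illustrative sketch: env vars are preferred; a secure prompt is the fallback.
def get_password(env_var: str, prompt: str) -> str:
    value = os.environ.get(env_var)
    if value:
        return value
    return getpass.getpass(prompt)


# Example usage (the env var names are the ones documented above):
# nsx_password = get_password("ENABLE_UENS_SCRIPT_NSX_PASSWORD", "Enter NSX password: ")
# vc_password = get_password("ENABLE_UENS_SCRIPT_VC_PASSWORD", "Enter vCenter password: ")
```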


Config file schema with comments:
{
  "nsx_manager_details": {
    # NSX manager FQDN that matches the SSL certificate name, or IP addr if command option "-s" is used. Mandatory field.
    "ip": "mp1.example.com",
    # Optional field. If not provided, only FQDN value will be used for making API calls.
    "port": 443,
    # Mandatory field.
    "username": "admin"
  },
  "vcenter_details": {
    # vCenter FQDN that matches the SSL certificate name, or IP addr if command option "-s" is used. Mandatory field.
    "ip": "vc1.example.com",
    # vCenter user name. Mandatory field.
    "username": "administrator@vsphere.local"
  },
  "cluster_entry_list": [
    {
      # Cluster name in vCenter.
      "vcenter_cluster_name": "cluster-1",
      # The new TNP ID to apply on the cluster.
      "tnp_id_to_apply": "tnp1-ens-enabled",
      # Set to true if any host in the cluster has high-performance params tuned for NSX edge VMs; default is false.
      "reset_high_performance_params": false,
      # Maximum waiting period (in minutes) for hosts in the cluster to enter maintenance mode. Must be greater than
      # or equal to 2.
      "enter_mm_timeout_minutes": 30
    }
  ]
}
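Note that "#" comments are not valid JSON, so an actual config file must contain only the fields themselves. A minimal example using the values from the schema above:

```json
{
  "nsx_manager_details": {
    "ip": "mp1.example.com",
    "port": 443,
    "username": "admin"
  },
  "vcenter_details": {
    "ip": "vc1.example.com",
    "username": "administrator@vsphere.local"
  },
  "cluster_entry_list": [
    {
      "vcenter_cluster_name": "cluster-1",
      "tnp_id_to_apply": "tnp1-ens-enabled",
      "reset_high_performance_params": false,
      "enter_mm_timeout_minutes": 30
    }
  ]
}
```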



User steps to create new TNP manually:

Get the details of the current TNP
Request : GET 'https://10.161.242.211/api/v1/infra/host-transport-node-profiles/tnp1'
Response :
{
    "host_switch_spec": {
        "host_switches": [
            {
                "host_switch_name": "1-vds-446",
                "host_switch_id": "50 3d b1 5b 7e fd 07 2d-d0 f0 91 de d4 77 98 7c",
                "host_switch_type": "VDS",
                "host_switch_mode": "STANDARD",
                "ecmp_mode": "L3",
                "host_switch_profile_ids": [
                    {
                        "key": "UplinkHostSwitchProfile",
                        "value": "/infra/host-switch-profiles/UPROF1"
                    }
                ],
                "uplinks": [
                    {
                        "vds_uplink_name": "uplink1",
                        "uplink_name": "uplink0"
                    }
                ],
                "is_migrate_pnics": false,
                "ip_assignment_spec": {
                    "ip_pool_id": "/infra/ip-pools/IPPOOL1",
                    "resource_type": "StaticIpPoolSpec"
                },
                "cpu_config": [],
                "transport_zone_endpoints": [
                    {
                        "transport_zone_id": "/infra/sites/default/enforcement-points/default/transport-zones/TZ1",
                        "transport_zone_profile_ids": []
                    }
                ],
                "not_ready": false,
                "portgroup_transport_zone_id": "/infra/sites/default/enforcement-points/default/transport-zones/8f184bd2-cce2-3ce6-80c3-078b1c92957e"
            }
        ],
        "resource_type": "StandardHostSwitchSpec"
    },
    "ignore_overridden_hosts": false,
    "resource_type": "PolicyHostTransportNodeProfile",
    "id": "tnp1",
    "display_name": "tnp1",
    "path": "/infra/host-transport-node-profiles/tnp1",
    "relative_path": "tnp1",
    "parent_path": "/infra",
    "remote_path": "",
    "unique_id": "f0e570bb-1605-4a13-b74a-f820dd4b0018",
    "realization_id": "f0e570bb-1605-4a13-b74a-f820dd4b0018",
    "owner_id": "649bf088-91dd-40f1-9342-de8d3cfdcfb9",
    "marked_for_delete": false,
    "overridden": false,
    "_system_owned": false,
    "_protection": "NOT_PROTECTED",
    "_create_time": 1741085227005,
    "_create_user": "admin",
    "_last_modified_time": 1741085227005,
    "_last_modified_user": "admin",
    "_revision": 0
}

Create a new TNP with the switch mode changed from STANDARD to ENS_INTERRUPT
Request: PUT 'https://10.161.242.211/api/v1/infra/host-transport-node-profiles/tnp1-ens-enabled'
Request body: (copy the response from the above GET call on the old TNP and just change the switch mode from STANDARD to ENS_INTERRUPT)
Example payload:
{
    "host_switch_spec": {
        "host_switches": [
            {
                "host_switch_name": "1-vds-446",
                "host_switch_id": "50 3d b1 5b 7e fd 07 2d-d0 f0 91 de d4 77 98 7c",
                "host_switch_type": "VDS",
                "host_switch_mode": "ENS_INTERRUPT",
                "ecmp_mode": "L3",
                "host_switch_profile_ids": [
                    {
                        "key": "UplinkHostSwitchProfile",
                        "value": "/infra/host-switch-profiles/UPROF1"
                    }
                ],
                "uplinks": [
                    {
                        "vds_uplink_name": "uplink1",
                        "uplink_name": "uplink0"
                    }
                ],
                "is_migrate_pnics": false,
                "ip_assignment_spec": {
                    "ip_pool_id": "/infra/ip-pools/IPPOOL1",
                    "resource_type": "StaticIpPoolSpec"
                },
                "cpu_config": [],
                "transport_zone_endpoints": [
                    {
                        "transport_zone_id": "/infra/sites/default/enforcement-points/default/transport-zones/TZ1",
                        "transport_zone_profile_ids": []
                    }
                ],
                "not_ready": false,
                "portgroup_transport_zone_id": "/infra/sites/default/enforcement-points/default/transport-zones/8f184bd2-cce2-3ce6-80c3-078b1c92957e"
            }
        ],
        "resource_type": "StandardHostSwitchSpec"
    },
    "ignore_overridden_hosts": false,
    "resource_type": "PolicyHostTransportNodeProfile",
    "id": "tnp1-ens-enabled",
    "display_name": "tnp1-ens-enabled",
    "path": "/infra/host-transport-node-profiles/tnp1-ens-enabled",
    "relative_path": "tnp1-ens-enabled",
    "parent_path": "/infra",
    "remote_path": "",
    "unique_id": "54e4ccfe-e2fc-418d-a25f-1e009cc743c9",
    "realization_id": "54e4ccfe-e2fc-418d-a25f-1e009cc743c9",
    "owner_id": "649bf088-91dd-40f1-9342-de8d3cfdcfb9",
    "marked_for_delete": false,
    "overridden": false,
    "_system_owned": false,
    "_protection": "NOT_PROTECTED",
    "_create_time": 1741116060283,
    "_create_user": "admin",
    "_last_modified_time": 1741210678930,
    "_last_modified_user": "admin",
    "_revision": 0
}
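The manual edit above (clone the GET response and flip the switch mode, with the identity fields reflecting the new ID used in the PUT URL) can be sketched as a small Python helper. This is an illustrative sketch, not part of the shipped script:

```python
import copy


# Illustrative sketch: build the new-TNP request body from the old TNP's GET response.
def build_new_tnp_body(old_tnp: dict, new_tnp_id: str) -> dict:
    body = copy.deepcopy(old_tnp)  # leave the original response untouched
    # Flip every STANDARD host-switch to ENS_INTERRUPT.
    for hs in body["host_switch_spec"]["host_switches"]:
        if hs.get("host_switch_mode") == "STANDARD":
            hs["host_switch_mode"] = "ENS_INTERRUPT"
    # Identity fields should match the new TNP ID used in the PUT URL.
    body["id"] = new_tnp_id
    body["display_name"] = new_tnp_id
    body["path"] = "/infra/host-transport-node-profiles/" + new_tnp_id
    body["relative_path"] = new_tnp_id
    return body
```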

To run lint
============================
From nsx root directory:

bazel test --cache_test_results=no //mp/tools/uens-adoption/config:uens-adoption.style
