NSX-T API with vRO and vRA

Posted on August 19, 2022 by admin

Hi, good morning! I invite you to read this new article, which for me is a chance to learn the VMware technology I know the least, have some fun creating a workflow along the way, and help the community build interesting solutions.

As you already know from the title, today I will be talking about NSX-T, its API, vRO and vRA.
But let's start from the beginning.

NSX-T was released back in 2016, but I think most of you started looking at it from version 2.0, i.e. from the end of 2017, and even more so once it became VMware NSX-T Data Center with the 2.2.0 release on 2018-06-05. I personally tried to warm up to this solution from the very beginning, but the time was never right, there were always cooler things to do, and I have to admit that my networking knowledge is not particularly strong.

I started digging deeper into NSX-T last year, when I had to create some workflows for a project. Now, however, I have decided to go through it again from the very beginning. Since my hobby is vRealize Automation, I combined business with pleasure and started creating a workflow that configures NSX-T, to which I can then keep adding more elements.

My workflow lets you build an NSX-T configuration from the Compute Manager all the way to an Edge Node. The entire workflow is based on dynamic REST hosts and on vRO actions linked to specific fields in the form.
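All of the REST calls below run against such a dynamic (transient) REST host. Below is a minimal sketch of how it can be created in vRO scripting, assuming basic authentication with the nsxUrl, nsxUsername and nsxPassword inputs that appear in the bindings throughout this article:

// Create a throw-away REST host pointing at the NSX-T Manager (not persisted in the vRO inventory)
var host = RESTHostManager.createHost("nsxTransientHost");
var transientHost = RESTHostManager.createTransientHostFrom(host);
transientHost.url = nsxUrl;                                   // e.g. https://nsx-manager.lab.local
// Basic authentication with the NSX-T admin credentials
var authParams = ["Shared Session", nsxUsername, nsxPassword];
transientHost.authentication = RESTAuthenticationManager.createAuthentication("Basic", authParams);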

  • Compute Managers

Based on the documentation: a compute manager, for example vCenter Server, is an application that manages resources such as hosts and VMs. So it has to be set up right at the beginning.

In the workflow I included the logic for creating a Compute Manager or selecting one from the list of those already configured in NSX-T.

Workflow Compute

We use the following API to create the Compute Manager:

API: /api/v1/fabric/compute-managers

METHOD: POST

And the following body

var requestBody = {
        "server": vcenterUrl,
        "origin_type" : "vCenter",
        "display_name" : envPrefix+"-"+computeName,
        "credential" : {
            "credential_type" : "UsernamePasswordLoginCredential",
            "username": vcenterUsername,
            "password" : vcenterPassword,
        }
    }
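A sketch of how such a body can be sent, assuming the transient host from the earlier snippet:

// Send the Compute Manager definition to NSX-T and fail loudly on a non-2xx response
var request = transientHost.createRequest("POST", "/api/v1/fabric/compute-managers", JSON.stringify(requestBody));
request.contentType = "application/json";
var response = request.execute();
if (response.statusCode >= 300) {
    throw "Creating the Compute Manager failed: " + response.statusCode + " " + response.contentAsString;
}
System.log("Compute Manager request accepted with status " + response.statusCode);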

The input/output parameters used here are the following:

<in-binding>
  <bind name="nsxUrl" type="string" export-name="nsxUrl"/>
  <bind name="nsxPassword" type="string" export-name="nsxPassword"/>
  <bind name="nsxUsername" type="string" export-name="nsxUsername"/>
  <bind name="vcenterPassword" type="string" export-name="vcenterPassword"/>
  <bind name="computeName" type="string" export-name="computeName"/>
  <bind name="vcenterUsername" type="string" export-name="vcenterUsername"/>
  <bind name="vcenterUrl" type="string" export-name="vcenterUrl"/>
  <bind name="envPrefix" type="string" export-name="envPrefix"/>
</in-binding>
<out-binding/>

The next step is to get the Compute Manager ID, which may or may not be used in later tasks. Here we use the same API, only with the GET method, and the ID is extracted as follows:

results = jsonResponse.results
    for(r in results)
    {
        if(results[r].display_name === cname)
        {
            commputeID = results[r].id
        }
    }
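For context, the jsonResponse used above comes from calling the same endpoint with GET; a minimal sketch, again assuming the transient host from earlier (cname holds the display name we are matching):

// List all Compute Managers and parse the JSON response
var request = transientHost.createRequest("GET", "/api/v1/fabric/compute-managers", null);
request.contentType = "application/json";
var jsonResponse = JSON.parse(request.execute().contentAsString);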

The input/output parameters used here are the following:

<in-binding>
 <bind name="nsxPassword" type="string" export-name="nsxPassword"/>
 <bind name="nsxUrl" type="string" export-name="nsxUrl"/>
 <bind name="nsxUsername" type="string" export-name="nsxUsername"/>
  <bind name="compute" type="string" export-name="compute"/>
 <bind name="computeName" type="string" export-name="computeName"/>
 <bind name="CreateComputeManage" type="boolean" export-name="CreateComputeManage"/>
 <bind name="envPrefix" type="string" export-name="envPrefix"/>
</in-binding>
<out-binding>
 <bind name="commputeID" type="string" export-name="commputeID"/>
</out-binding>

So we can either create a Compute Manager or choose an existing one from the list for the following tasks.

Main screen
Create Compute
Choose from List

An action is attached to the Compute field that reads all Compute Managers from NSX-T and displays them as a list, showing the name as the label and the object ID as the value.
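A sketch of what such an action could look like, assuming a hypothetical helper action (com.vworld.nsx/getTransientNsxHost) that wraps the transient host creation shown earlier; it returns a Properties map with the display name as the key and the NSX-T object ID as the value (the real action may return a different type, depending on how the presentation is wired):

// Action inputs (assumed): nsxUrl, nsxUsername, nsxPassword - strings
var transientHost = System.getModule("com.vworld.nsx").getTransientNsxHost(nsxUrl, nsxUsername, nsxPassword);
var request = transientHost.createRequest("GET", "/api/v1/fabric/compute-managers", null);
request.contentType = "application/json";
var jsonResponse = JSON.parse(request.execute().contentAsString);

// Build the label -> value map for the drop-down
var computeManagers = new Properties();
for each (var cm in jsonResponse.results) {
    computeManagers.put(cm.display_name, cm.id);
}
return computeManagers;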

  • TransportZone

Another element is TransportZone

Again, based on the documentation

Transport zones dictate which hosts and, therefore, which VMs can participate in the use of a particular network. A transport zone does this by limiting the hosts that can “see” a segment — and, therefore, which VMs can be attached to the segment. A transport zone can span one or more host clusters. Also, a transport node can be associated with multiple transport zones.

An NSX-T Data Center environment can contain one or more transport zones based on your requirements. A host can belong to multiple transport zones. A segment can belong to only one transport zone.

NSX-T Data Center does not allow connection of VMs that are in different transport zones in the Layer 2 network. The span of a segment is limited to a transport zone.

The overlay and VLAN transport zones are used by both host transport nodes and NSX Edge nodes. When a host is added to an overlay transport zone, you can configure an N-VDS or a VDS switch on the host. You can only configure an N-VDS switch on NSX Edge transport nodes.

The VLAN transport zone is used by the NSX Edge and host transport nodes for its VLAN uplinks. When an NSX Edge is added to a VLAN transport zone, a VLAN N-VDS is installed on the NSX Edge.

The logic for all the objects is very similar, so I will not go into detail but just show the picture.

Workflow TransportZone

In Create, we use

API: "/policy/api/v1/infra/sites/default/enforcement-points/default/transport-zones/" + envPrefix + "-" + transportZoneName;

METHOD: PATCH

and body

var requestBody = {
        "tz_type": transportZoneType
    }

The variables used here are

<in-binding>
 <bind name="nsxPassword" type="string" export-name="nsxPassword"/>
 <bind name="nsxUrl" type="string" export-name="nsxUrl"/>
 <bind name="nsxUsername" type="string" export-name="nsxUsername"/>
 <bind name="transportZoneName" type="string" export-name="transportZoneName"/>
 <bind name="envPrefix" type="string" export-name="envPrefix"/>
 <bind name="transportZoneType" type="string" export-name="transportZoneType"/>
</in-binding>
<out-binding>
 <bind name="tzoneID" type="string" export-name="tzoneID"/>
</out-binding>

Here, once again, we can create a Transport Zone or use the existing one

Main
Create Transport Zone
Choose TransportZone

The transportZone field uses an action that displays the appropriate transport zones based on their type, which is controlled by the typeVLAN check box.
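A sketch of how that filtering might look, reusing the same hypothetical getTransientNsxHost helper and assuming the policy API reports the type in the tz_type field (values such as VLAN_BACKED or OVERLAY_STANDARD):

// Action inputs (assumed): nsxUrl, nsxUsername, nsxPassword - strings; typeVLAN - boolean
var transientHost = System.getModule("com.vworld.nsx").getTransientNsxHost(nsxUrl, nsxUsername, nsxPassword);
var url = "/policy/api/v1/infra/sites/default/enforcement-points/default/transport-zones";
var request = transientHost.createRequest("GET", url, null);
request.contentType = "application/json";
var jsonResponse = JSON.parse(request.execute().contentAsString);

// Keep only the zones whose type matches the check box
var zones = new Properties();
for each (var tz in jsonResponse.results) {
    var isVlan = (tz.tz_type === "VLAN_BACKED");
    if (isVlan === typeVLAN) {
        zones.put(tz.display_name, tz.id);
    }
}
return zones;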

  • UplinkProfile

Again based on documentation

An uplink is a link from the NSX Edge nodes to the top-of-rack switches or NSX-T Data Center logical switches. A link is from a physical network interface on an NSX Edge node to a switch.

An uplink profile defines policies for the uplinks. The settings defined by uplink profiles can include teaming policies, active and standby links, transport VLAN ID, and MTU setting.
Configuring uplinks for VM appliance-based NSX Edge nodes and Bare Metal NSX Edge transport nodes:

If the Failover teaming policy is configured for an uplink profile, then you can only configure a single active uplink in the teaming policy. Standby uplinks are not supported and must not be configured in the failover teaming policy. If the teaming policy uses more than one uplink (active / standby list), you cannot use the same uplinks in the same or a different uplink profile for a given NSX Edge transport node.
If the Load Balanced Source teaming policy is configured for an uplink profile, then you can either configure uplinks associated to different physical NICs or configure an uplink mapped to a LAG that has two physical NICs on the same N-VDS. The IP address assigned to an uplink endpoint is configurable using IP Assignment for the N-VDS. The number of LAGs that you can actually use depends on the capabilities of the underlying physical environment and the topology of the virtual network. For example, if the physical switch supports up to four ports in an LACP port channel, you can connect up to four physical NICs per host to a LAG.

You must use the Load Balanced Source teaming policy for traffic load balancing.

API

API: "/policy/api/v1/infra/host-switch-profiles/" + envPrefix + "-" + hostUplinkProfileName

METHOD: PATCH

and body

 var requestBody = {
        "display_name": envPrefix+"-"+hostUplinkProfileName,
        "resource_type" : "PolicyUplinkHostSwitchProfile",
        "transport_vlan" : 0,
        "teaming" : {
            "policy" : "FAILOVER_ORDER",
            "active_list":[
                {
                    "uplink_name": envPrefix+"-uplink",
                    "uplink_type": "PNIC"
                }
            ]
        }
    }

In my case it uses a single PNIC and the FAILOVER_ORDER teaming policy. We could add more elements here, but for the needs of the lab this is enough.
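For comparison, a Load Balanced Source variant of the same body could look roughly like this (a sketch only; the two uplink names are illustrative):

var requestBodyLB = {
        "display_name": envPrefix+"-"+hostUplinkProfileName,
        "resource_type" : "PolicyUplinkHostSwitchProfile",
        "transport_vlan" : 0,
        "teaming" : {
            "policy" : "LOADBALANCE_SRCID",     // Load Balanced Source instead of FAILOVER_ORDER
            "active_list":[
                {
                    "uplink_name": envPrefix+"-uplink-1",
                    "uplink_type": "PNIC"
                },
                {
                    "uplink_name": envPrefix+"-uplink-2",
                    "uplink_type": "PNIC"
                }
            ]
        }
    }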

The variables used here are

<in-binding>
 <bind name="nsxPassword" type="string" export-name="nsxPassword"/>
 <bind name="nsxUrl" type="string" export-name="nsxUrl"/>
 <bind name="nsxUsername" type="string" export-name="nsxUsername"/>
 <bind name="envPrefix" type="string" export-name="envPrefix"/>
 <bind name="hostUplinkProfileName" type="string" export-name="hostUplinkProfileName"/>
</in-binding>
<out-binding/>
  • IPPool

I did not find a clear description of this object in the documentation, but to my understanding it is simply an object containing a set of IP addresses that we can use when deploying objects in NSX.

We actually make two API calls when creating a pool.

API-1: "/policy/api/v1/infra/ip-pools/" + envPrefix + "-" + ipPoolName

METHOD: PATCH

with body

var requestBody = {
        "display_name": envPrefix+"-"+ipPoolName,
    }

and

API-2: "/policy/api/v1/infra/ip-pools/" + envPrefix + "-" + ipPoolName + "/ip-subnets/" + envPrefix + "-" + ipSubnetName;

METHOD: PATCH

var requestBody = {
        "resource_type": "IpAddressPoolStaticSubnet",
        "allocation_ranges":[
            {
                "start" : poolStartIP,
                "end" : poolEndIP
            }
        ],
        "cidr" : poolCIDRIP,
        "gateway_ip" : poolGWIP
    }

So first we create the pool and then add a subnet to it, as in the sketch below.
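A sketch of the two calls executed in order, reusing the transient host from earlier; patchJson is just a small illustrative helper, not something taken from the workflow:

// Helper: PATCH a JSON body to a policy API path and fail on a non-2xx response
function patchJson(host, url, body) {
    var req = host.createRequest("PATCH", url, JSON.stringify(body));
    req.contentType = "application/json";
    var resp = req.execute();
    if (resp.statusCode >= 300) {
        throw "PATCH " + url + " failed: " + resp.statusCode + " " + resp.contentAsString;
    }
}

var poolUrl = "/policy/api/v1/infra/ip-pools/" + envPrefix + "-" + ipPoolName;
// 1. Create (or update) the pool itself
patchJson(transientHost, poolUrl, {
    "display_name": envPrefix + "-" + ipPoolName
});
// 2. Add the static subnet with its allocation range, CIDR and gateway
patchJson(transientHost, poolUrl + "/ip-subnets/" + envPrefix + "-" + ipSubnetName, {
    "resource_type": "IpAddressPoolStaticSubnet",
    "allocation_ranges": [ { "start": poolStartIP, "end": poolEndIP } ],
    "cidr": poolCIDRIP,
    "gateway_ip": poolGWIP
});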

Variables for this task

<in-binding>
 <bind name="nsxPassword" type="string" export-name="nsxPassword"/>
 <bind name="nsxUrl" type="string" export-name="nsxUrl"/>
 <bind name="nsxUsername" type="string" export-name="nsxUsername"/>
 <bind name="envPrefix" type="string" export-name="envPrefix"/>
 <bind name="ipPoolName" type="string" export-name="ipPoolName"/>
 <bind name="ipSubnetName" type="string" export-name="ipSubnetName"/>
 <bind name="poolCIDRIP" type="string" export-name="poolCIDRIP"/>
 <bind name="poolEndIP" type="string" export-name="poolEndIP"/>
 <bind name="poolStartIP" type="string" export-name="poolStartIP"/>
 <bind name="poolGWIP" type="string" export-name="poolGWIP"/>
</in-binding>
<out-binding/>
Main
Create IPPool
Choose already Created

  • TransportNodeProfile

From the documentation

A transport node profile is a template to define configuration that is applied to a cluster. It is not applied to prepare standalone hosts. Prepare vCenter Server cluster hosts as transport nodes by applying a transport node profile. Transport node profiles define transport zones, member hosts, N-VDS switch configuration including uplink profile, IP assignment, mapping of physical NICs to uplink virtual interfaces and so on.

A Transport Node Profile is a profile that we attach directly to vCenter and our hosts, so a later step in the workflow attaches exactly such a profile; but before we have anything to attach, we need to create one.

The API used here is:

API: "/policy/api/v1/infra/host-transport-node-profiles/" + envPrefix + "-" + transportNodeProfileName;

METHOD: PATCH

with body

var requestBody = {
        "display_name": envPrefix+"-"+transportNodeProfileName,
        "host_switch_spec": {
            "resource_type": "StandardHostSwitchSpec",
            "host_switches": [{
                "host_switch_mode": "STANDARD",
                "host_switch_type": "NVDS",
                "host_switch_profile_ids": [{
                        "key": "UplinkHostSwitchProfile",
                        "value": "/infra/host-switch-profiles/"+hupID
                    }],
                    "host_switch_name": "nsxHostSwitch",
                    "ip_assignment_spec": {
                        "resource_type": "StaticIpPoolSpec",
                        "ip_pool_id": "/infra/ip-pools/"+poolID
                    },
                    "is_migrate_pnics": "false",
                    "transport_zone_endpoints": [{
                        "transport_zone_id": "/infra/sites/default/enforcement-points/default/transport-zones/"+tzoneID,
                    }]
                }]
        }
    }

Where

  • hupID is the ID of the host uplink profile that was created or selected from the list, which is why the fields on the HostUplinkProfile tab become mandatory when creating this object
  • poolID is the ID of the IP pool that was created or selected from the list, which is why the fields on the IPPools tab become mandatory when creating this object
  • tzoneID is the ID of the Transport Zone that was created or selected from the list, which is why the fields on the TransportZone tab become mandatory when creating this object (see the lookup sketch after this list)
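When an existing object is chosen instead of created, its ID can be resolved by display name. A sketch for hupID, again assuming the transient host from earlier; poolID and tzoneID follow the same pattern against their own policy endpoints:

// Find the host switch (uplink) profile whose display name matches the form input
var request = transientHost.createRequest("GET", "/policy/api/v1/infra/host-switch-profiles", null);
request.contentType = "application/json";
var jsonResponse = JSON.parse(request.execute().contentAsString);

var hupID = "";
for each (var p in jsonResponse.results) {
    if (p.display_name === envPrefix + "-" + hostUplinkProfileName) {
        hupID = p.id;
    }
}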

The other variables used in this task are:

<in-binding>
 <bind name="nsxPassword" type="string" export-name="nsxPassword"/>
 <bind name="nsxUrl" type="string" export-name="nsxUrl"/>
 <bind name="nsxUsername" type="string" export-name="nsxUsername"/>
 <bind name="envPrefix" type="string" export-name="envPrefix"/>
 <bind name="transportNodeProfileName" type="string" export-name="transportNodeProfileName"/>
 <bind name="hupID" type="string" export-name="hupID"/>
 <bind name="poolID" type="string" export-name="poolID"/>
 <bind name="tzoneID" type="string" export-name="tzoneID"/>
</in-binding>
<out-binding/>
Main
Create
Choose
  • Attach TransportNodeProfile

This task looks a bit different because we do not create anything here; we just attach an already existing object to our vCenter.

Workflow

The action that gets the cluster ID uses

API: /api/v1/fabric/compute-collections

METHOD: GET

The functionality we need here is achieved with the following loop:

results = jsonResponse.results
    for(r in results)
    {
        if(results[r].origin_type === "VC_Cluster" && results[r].origin_id === commputeID)
        {
            System.log(results[r].external_id)
            clusterId = results[r].external_id

        }
    }

The linking action uses the following API

API: "/policy/api/v1/infra/sites/default/enforcement-points/default/transport-node-collections/" + envPrefix + "-" + "TNC";

METHOD: PATCH

    var requestBody = {
        "resource_type": "HostTransportNodeCollection",
        "compute_collection_id":clusterId,
        "transport_node_profile_id": "/infra/host-transport-node-profiles/"+tnpID
    }

After this action runs, the vCenter cluster configuration takes place, which may take a while, so I created a script that waits for the configuration to complete. We use:

API: /api/v1/transport-zones/transport-node-status

METHOD: GET

And the waiting function is

while(jsonResponse.result_count <= 0)
    {
        response = request.execute();
        jsonResponse = JSON.parse(response.contentAsString)
        System.log(jsonResponse.result_count)
        System.sleep(15000)
    }
    if(jsonResponse.result_count > 0)
    {
        
        while(jsonResponse.results[0].node_status.host_node_deployment_status !== "INSTALL_SUCCESSFUL")
        {
            response = request.execute();
            jsonResponse = JSON.parse(response.contentAsString)
            System.log(jsonResponse.results[0].node_status.host_node_deployment_status)
            System.sleep(15000)
        }
    }
    if(jsonResponse.results[0].node_status.host_node_deployment_status === "INSTALL_SUCCESSFUL")
    {
        
        while(jsonResponse.results[0].status !== "UP")
        {
            
            response = request.execute();
            jsonResponse = JSON.parse(response.contentAsString)
            System.log(jsonResponse.results[0].status)
            System.sleep(15000)
        }
    }

This function can be improved, but it worked in my lab.
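One possible improvement, sketched below, is a single polling helper with a timeout, so the workflow fails instead of looping forever if the installation gets stuck (it assumes the same request object that the loop above executes):

// Poll the status API until check() is satisfied or the timeout expires
function waitFor(check, description, timeoutMinutes) {
    var deadline = new Date().getTime() + timeoutMinutes * 60 * 1000;
    while (true) {
        var jsonResponse = JSON.parse(request.execute().contentAsString);
        if (check(jsonResponse)) {
            return jsonResponse;
        }
        if (new Date().getTime() > deadline) {
            throw "Timed out waiting for " + description;
        }
        System.log("Waiting for " + description + " ...");
        System.sleep(15000);
    }
}

waitFor(function (r) { return r.result_count > 0; }, "transport node status to appear", 30);
waitFor(function (r) { return r.results[0].node_status.host_node_deployment_status === "INSTALL_SUCCESSFUL"; }, "host installation", 60);
waitFor(function (r) { return r.results[0].status === "UP"; }, "transport node status UP", 30);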

  • EdgeNode

From the documentation
An NSX Edge Node is a transport node that runs the local control plane daemons and forwarding engines implementing the NSX-T data plane. It runs an instance of the NSX-T virtual switch called the NSX Virtual Distributed Switch, or N-VDS. The Edge Nodes are service appliances dedicated to running centralized network services that cannot be distributed to the hypervisors. They can be instantiated as a bare metal appliance or in virtual machine form factor. They are grouped in one or several clusters. Each cluster is representing a pool of capacity.

An NSX Edge can belong to one overlay transport zone and multiple VLAN transport zones. An NSX Edge belongs to at least one VLAN transport zone to provide the uplink access.

The creation process is as follows

Workflow for Edge

The first action works the same as the one described earlier, but we additionally extract domainID

vcenterDomainID = results[r].cm_local_id

The deployment Details action uses the built-in vRO functionality that allows you to pull objects from vCenter

I picked an arbitrary vDS port group, datastore and host, which I achieved with the following:

var vdsPorts = vCenterConnection.getAllDistributedVirtualPortgroups(null)
for each (v in vdsPorts)
{
    if(v.config.defaultPortConfig.uplinkTeamingPolicy.uplinkPortOrder.activeUplinkPort)
    {
        portGroupKey = v.config.key;
    }        

}

var datastores = vCenterConnection.getAllDatastores(null)
for each (d in datastores)
{
    datastorKey = d.id
}
var hosts = vCenterConnection.getAllHostSystems(null)
System.log(hosts)
for each (h in hosts)
{
    hostKey = h.id
}

Here we could add functionality that lets us choose a specific host, datastore and port group on a specific vDS, but I think that belongs in a PROD solution rather than this DEV one.
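If we did want that, selecting a specific datastore by name could look like the sketch below (datastoreName would be an additional, hypothetical input; the host and the port group could be filtered the same way):

// Pick the datastore whose name matches the requested one instead of the last one found
var datastorKey = null;
var datastores = vCenterConnection.getAllDatastores(null);
for each (var d in datastores) {
    if (d.name === datastoreName) {
        datastorKey = d.id;
    }
}
if (datastorKey === null) {
    throw "Datastore " + datastoreName + " not found in vCenter " + vCenterConnection.name;
}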

So, having everything collected, we can move on to the action that creates the Edge Node, which uses the API:

API: "/api/v1/transport-nodes"

METHOD: POST

Here we also have the largest body; I admit I made it easier to build by reverse engineering the call from the GUI.

var requestBody = {
        "display_name": envPrefix+"-"+edgeName,
        "description": "",
        "node_deployment_info": {
            "ip_addresses": [ipAddress],
            "_revision": 0,
            "display_name": envPrefix+"-"+edgeName,
            "resource_type": "EdgeNode",
            "deployment_type": "VIRTUAL_MACHINE",
            "deployment_config": {
                "form_factor": "MEDIUM",
                "vm_deployment_config": {
                    "placement_type": "VsphereDeploymentConfig",
                    "vc_id": clusterId,
                    "compute_id": vcenterDomainID,
                    "storage_id": datastorKey,
                    "host_id": hostKey,
                    "management_network_id": portGroupKey,
                    "management_port_subnets": [{
                        "ip_addresses": [ipAddress],
                        "prefix_length": cidrNetmask
                    }],
                    "default_gateway_addresses": [edgeGW],
                    "reservation_info": {
                        "memory_reservation": {
                            "reservation_percentage": 100
                        },
                        "cpu_reservation": {
                            "reservation_in_shares": "HIGH_PRIORITY",
                            "reservation_in_mhz": 0
                        }
                    },
                    "resource_allocation": {
                        "cpu_count": 0,
                        "memory_allocation_in_mb": 0
                    },
                    "data_network_ids": [portGroupKey]
                },
                "node_user_settings": {
                    "cli_password": nsxPassword,
                    "root_password": nsxPassword,
                    "cli_username": "admin"
                }
            },
            "node_settings": {
                "hostname": envPrefix+"-"+edgeName+"."+searchDomain,
                "search_domains": [searchDomain],
                "ntp_servers": [ntpServer],
                "dns_servers": [dnsServer],
                "enable_ssh": "true",
                "allow_ssh_root_login": "true"
            }
        },
        "host_switch_spec": {
            "resource_type": "StandardHostSwitchSpec",
            "host_switches": [{
                "host_switch_mode": "STANDARD",
                "host_switch_type": "NVDS",
                "host_switch_profile_ids": [{
                    "key": "UplinkHostSwitchProfile",
                    "value": "/infra/host-switch-profiles/"+hupID
                }],
                "host_switch_name": "nsxHostSwitch",
                "pnics": [{
                    "device_name": "fp-eth0",
                    "uplink_name": envPrefix+"-uplink"
                }],
                "ip_assignment_spec": {
                    "resource_type": "StaticIpPoolSpec",
                    "ip_pool_id": poolID
                },
                "transport_zone_endpoints": [{
                    "transport_zone_id": "/infra/sites/default/enforcement-points/default/transport-zones/"+tzoneID

                }]
            }]
        },
        "_revision": 0,
        "tags": [],
        "resource_type": "",
        "_system_owned": false,
        "node_id": ""
    }

We also have the most variables here

<in-binding>
 <bind name="nsxPassword" type="string" export-name="nsxPassword"/>
 <bind name="nsxUrl" type="string" export-name="nsxUrl"/>
 <bind name="nsxUsername" type="string" export-name="nsxUsername"/>
 <bind name="envPrefix" type="string" export-name="envPrefix"/>
 <bind name="edgeName" type="string" export-name="edgeName"/>
 <bind name="datastorKey" type="string" export-name="datastorKey"/>
 <bind name="hostKey" type="string" export-name="hostKey"/>
 <bind name="portGroupKey" type="string" export-name="portGroupKey"/>
 <bind name="ipAddress" type="string" export-name="ipAddress"/>
 <bind name="clusterId" type="string" export-name="clusterId"/>
 <bind name="vcenterDomainID" type="string" export-name="vcenterDomainID"/>
 <bind name="cidrNetmask" type="string" export-name="cidrNetmask"/>
 <bind name="edgeGW" type="string" export-name="edgeGW"/>
 <bind name="searchDomain" type="string" export-name="searchDomain"/>
 <bind name="ntpServer" type="string" export-name="ntpServer"/>
 <bind name="dnsServer" type="string" export-name="dnsServer"/>
 <bind name="poolID" type="string" export-name="poolID"/>
 <bind name="tzoneID" type="string" export-name="tzoneID"/>
 <bind name="hupID" type="string" export-name="hupID"/>
</in-binding>
<out-binding/>

Many objects / variables used in this API call make mandatory fields appear on tabs, which makes it easier to navigate through the entire creation process.

And when all fields are completed, the call is successful and we go to the monitoring action as in the previous stage.

We have two API calls to verify the deployment.

API: /api/v1/transport-nodes/

METHOD: GET

results = jsonResponse.results
    
    for (r in results)
    {
        if(results[r].node_deployment_info.resource_type === "EdgeNode" && results[r].node_deployment_info.display_name === envPrefix+"-"+edgeName)
        {
            nodeID =results[r].node_id
        }
    } 

and

API: "/api/v1/transport-nodes/" + nodeID + "/state"

METHOD: GET

while(jsonResponse.state !== "success")
{
    response = request.execute();
    var jsonResponse = JSON.parse(response.contentAsString)
    System.log(jsonResponse.state)
    System.log(jsonResponse.node_deployment_state.state)
    System.sleep(15000)
}
while(jsonResponse.node_deployment_state.state !== "NODE_READY")
{
    response = request.execute();
    var jsonResponse = JSON.parse(response.contentAsString)
    System.log(jsonResponse.state)
    System.log(jsonResponse.node_deployment_state.state)
    System.sleep(15000)
}

This is where the NSX-T setup ends and this is what it looks like in the environment

Compute
Transport
Uplink
Node Profile
Pools
Subnet
Host Transport Node
Edge Node

In the next article I will describe how to create Segments, Groups, and T0 and T1 Routers, and how to modify the created Edge, because that is also an interesting task. Thank you for making it to the end; I hope I made you a bit curious and maybe showed you how to do something useful in your daily work. See you in the next article, and feel free to comment or contact me if you need help.
