From Zero to a Scalable Application in VCF 9.0: The Complete, Hyper-Detailed Configuration Guide

Posted on August 4, 2025 by admin

Welcome to the most comprehensive guide to configuring the advanced VMware Cloud Foundation 9.0 platform you will find. This article is not a high-level overview—it is a precise, step-by-step guide that will walk you through the entire, complex process of building an environment ready for modern applications. We assume you have a deployed and operational VCF Management Domain.

Our goal is to go from this base installation to a fully functional, scalable, and publicly accessible containerized application. To achieve this, we will perform the following key operations:

  1. Deploy and configure an NSX Edge Cluster with BGP routing.
  2. Install and deeply integrate NSX Advanced Load Balancer (Avi).
  3. Activate a Supervisor Cluster on a vSphere cluster using NSX VPC.
  4. Configure a Tenant (Organization) and its Resources in VCF Automation.
  5. Deploy a Virtual Machine from a Blueprint, demonstrating the power of Infrastructure as Code (IaC).
  6. Deploy a Tanzu Kubernetes (TKG) Cluster via the VCF Automation portal.
  7. Deploy an application on the TKG cluster in two scenarios: manual (NodePort + NAT) and fully automated (LoadBalancer).


Step 1: Building the Network Backbone – NSX Edge Cluster and Tier-0 Gateway

We begin by creating a robust, highly available communication gateway between our software-defined network (SDN) and the physical world.

  • Open the NSX Manager web interface and navigate to System > Fabric > Hosts, then select the Clusters tab.
  • Select the checkbox next to the vworld-c01 cluster.
  • From the ACTIONS menu that appears above the list, choose the Activate NSX on DVPGs option. This action injects NSX kernel modules into the ESXi hypervisors, a fundamental step that enables their participation in the Distributed Firewall (DFW) and routing.
  • Confirm the dialog box warning about the default DFW rule. After a successful operation, the value in the NSX on DVPGs column will change to Yes for your cluster.
  • Switch to the vSphere Client interface. In the Hosts and Clusters view, select the vworld-c01 cluster, then navigate to the Networking tab and the Network Connectivity sub-tab.
  • Click the CONFIGURE NETWORK CONNECTIVITY button.
  • In the wizard that appears, perform the following steps:
    • Gateway Type: Select Generalized Connectivity.
    • Edge Cluster: Enter a name for the new Edge cluster: vcf-edge-cl.
    • First Edge Node Configuration: Click ADD and precisely fill out the form:
      • Edge Node Name (FQDN): vcf-edge-01.vcf.world.lab
      • vSphere Cluster: vworld-c01
      • Data Store: vworld-cl01-ds-vsan
      • Management IP Allocation: Static
      • Port Group: vworld-cl01-vds01-pg-vm-mgmt
      • Management IP: 172.16.70.101/24
      • Management Gateway: 172.16.70.1
      • Edge Node Uplink Mapping: Map fp-eth0 to uplink01 and fp-eth1 to uplink02.
    • Second Edge Node Configuration: Click ADD again and choose the option to clone vcf-edge-01. Change only the unique values:
      • Edge Node Name (FQDN): vcf-edge-02.vcf.world.lab
      • Management IP: 172.16.70.102/24
    • Credentials: Scroll down and enter the passwords for the admin and root users.
  • In the final step of the wizard, configure the Tier-0 Gateway:
    • Gateway Name: vcf-edge-gw
    • High-Availability Mode: Active-Standby
    • Gateway Routing Type: BGP
    • Local Autonomous System Number (ASN): 65001
    • Uplink for vcf-edge-01: Click SET and configure the First Uplink:
      • Gateway Interface VLAN: 80
      • Gateway Interface CIDR: 172.16.80.2/24
      • BGP Peer IP: 172.16.80.1
      • BGP Peer ASN: 65000
    • Uplink for vcf-edge-01: Click SET and configure the Second Uplink:
      • Gateway Interface VLAN: 81
      • Gateway Interface CIDR: 172.16.81.2/24
      • BGP Peer IP: 172.16.81.1
      • BGP Peer ASN: 65000
    • Uplink for vcf-edge-02: Repeat the process for the second node, First Uplink:
      • Gateway Interface VLAN: 80
      • Gateway Interface CIDR: 172.16.80.3/24
      • BGP Peer IP: 172.16.80.1
      • BGP Peer ASN: 65000
    • Uplink for vcf-edge-02: Repeat the process for the second node, Second Uplink:
      • Gateway Interface VLAN: 81
      • Gateway Interface CIDR: 172.16.81.3/24
      • BGP Peer IP: 172.16.81.1
      • BGP Peer ASN: 65000
  • Click NEXT, review the summary, and click DEPLOY.
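
Once the deployment completes, it is worth confirming that the BGP sessions to the physical router are established. A minimal check from the NSX Edge node CLI (assuming SSH access to the Edge nodes as admin; the VRF ID of the Tier-0 service router reported by the first command may differ in your environment):

get logical-routers

vrf 1

get bgp neighbor summary

Both peers (172.16.80.1 and 172.16.81.1) should be listed with the state Established.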

Step 2: Deploying and Integrating NSX Advanced Load Balancer (Avi)

Next, we will deploy and integrate an advanced load balancer.

  • Return to NSX Manager and navigate to System > Appliances > AVI Load Balancer.
  • Click SET VIRTUAL IP and enter the address 172.16.70.120. This “floating” IP will always point to the active leader of the controller cluster. Remember to create an A record in your DNS (e.g., vcf-avi.vcf.world.lab) pointing to this address.
  • Click ADD AVI LOAD BALANCER, then select and upload the controller’s OVA file. In the meantime, create another A record in DNS for the node itself, e.g., vcf-avi-01.vcf.world.lab with the IP 172.16.70.121.
  • In the deployment wizard, provide the following details:
    • Hostname: vcf-avi-01
    • Management IP/Netmask: 172.16.70.121/24
    • Management Gateway: 172.16.70.1
    • DNS Server: 172.16.10.10
    • NTP Server: 172.16.10.1
    • Node size: Large
  • After deployment, open the Avi web interface (https://vcf-avi-01.vcf.world.lab) and log in as admin.
  • Set the Passphrase (used for configuration backup).
  • Navigate to Administration > User Credentials and create two sets of credentials:
    1. Name: vcf-nsx, Type: NSX, provide the login details for NSX Manager.
    2. Name: vcf-vcsa, Type: vCenter, provide the login details for vCenter.
  • In vCenter, under Content Libraries, create a new library named vcf-avi-content. It will be used to store Service Engine images.
  • In the Avi UI, navigate to Infrastructure > Clouds, click CREATE, and select NSX Cloud.
  • Configure the new cloud:
  • General > Name: vcf-nsx-cloud
  • General > Object Name Prefix: vcf-avi
  • NSX > NSX Manager: Provide the FQDN and select the vcf-nsx credentials.
  • NSX > Management Network: Specify the Tier-1 Gateway (avi-t1) and Segment (avi-data-seg). Check Enable VPC Mode.
  • NSX > vCenter Servers: Click ADD, provide the vCenter details, select the vcf-vcsa credentials, and specify the vcf-avi-content library.
  • Save the configuration. The cloud’s status should turn green after a moment.
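
Before moving on, it is also worth confirming that the DNS records created for the controller resolve as expected (a quick check from any management workstation, using the DNS server from the deployment wizard):

nslookup vcf-avi.vcf.world.lab 172.16.10.10

nslookup vcf-avi-01.vcf.world.lab 172.16.10.10

The first should return the cluster VIP 172.16.70.120 and the second the node address 172.16.70.121.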

Step 3: Activating the Supervisor Cluster with NSX VPC

With the network and load balancer ready, we can now enable vSphere with Tanzu.

  • In NSX Manager, under Networking > IP Management > IP Address Pools, create two IP blocks:
    1. Name: vcf-external, CIDR: 172.16.100.0/24, Visibility: External.
    2. Name: vcf-transit-private, CIDR: 10.100.0.0/24, Visibility: Private.
  • In the vSphere Client, under the Virtual Private Clouds menu, right-click the datacenter and select New VPC…. Name it vpc-supervisor and set the Private VPC CIDR to 10.10.20.0/24.
  • Inside vpc-supervisor, on the VPC Subnets tab, click ADD SUBNET:
    • Name: supervisor-public
    • Access Mode: Public
    • Auto allocate…: Yes
    • Subnet size: 64

With all prerequisites in place, start creating the Supervisor:

  • Select the vworld-c01 cluster and from the Actions menu, choose Activate Supervisor.
  • In the activation wizard:
    • Step 1 – Network: Select VCF Networking with VPC.
    • Step 2 – Storage: Select the default vSAN storage policy.
    • Step 3 – Management Network: Configure the network for the Supervisor control plane nodes:
      • Network: supervisor-public
      • IP Address: 172.16.100.66-172.16.100.72
      • Subnet Mask: 255.255.255.192
      • Gateway: 172.16.100.65
      • DNS Servers: 172.16.10.10
      • NTP Servers: 172.16.10.1
      • Private CIDR: 172.26.0.0/16
  • Click FINISH. The platform will deploy and configure all Tanzu components in the background.
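
Once the activation finishes, the Supervisor API becomes reachable on a control plane address from the supervisor-public range configured above (the exact IP is shown in Workload Management). A minimal sanity check, assuming the kubectl vSphere plugin is installed and using 172.16.100.66 and administrator@vsphere.local purely as placeholder values:

kubectl vsphere login --server=172.16.100.66 --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify

kubectl config use-context 172.16.100.66

kubectl get nodes

The Supervisor control plane nodes should report a Ready status.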

Step 4: Configuring a Tenant and Resources in VCF Automation

Now we will prepare the environment for our end-users in the VCF Automation portal.

  • Open the portal’s URL, select the System organization, and log in to the PROVIDER MANAGEMENT panel.
  • On the welcome screen, select Quick Start and click GET STARTED.
  • In the Quick Start wizard, create a new organization:
    • Name: vworld
    • Resources: Select Region Name: vcf-region and Supervisor: vcf-super.
    • Storage: Select the vworld-cl01 – Optimal Datastore Default Policy -RAID1.
  • Click CREATE AND PROVISION ORGANIZATION.
  • After the organization is successfully created, navigate to Infrastructure > Virtual Private Clouds.
  • Delete the default VPC (vcf-region-Default-VPC). (Note: this step is required because the VPC name must comply with RFC 1123, which the default name does not.)
  • Click NEW VPC and configure it:
    • Name: vpc-world
    • Region: vcf-region
    • Private VPC CIDR: 10.20.10.0/20
  • Navigate to Build & Design > Projects > default-project > Namespaces tab.
  • Click NEW NAMESPACE and configure it:
    • Name: vcf-ns-01
    • Namespace Class: medium
    • Region: vcf-region
    • VPC: Select the newly created vpc-world.

Step 5: Infrastructure as Code in Practice – Deploying a VM from a Blueprint

Let’s demonstrate the power of the IaC approach.

  • In the VCF Automation portal, navigate to Build & Design > VM Images.
  • Click UPLOAD, select a local vcf-lin-temp.ova file, specify the vcf-content library, and name the image vcf-lin-temp.
  • Navigate to Build & Design > Blueprint Design.
  • Click NEW FROM > Blank Layout, name the blueprint dummyBlueprint, and assign it to the default-project.
  • In the editor on the right, paste the following YAML code:
formatVersion: 1
inputs: {}
resources:
  CCI_Supervisor_Namespace_1:
    type: CCI.Supervisor.Namespace
    properties:
      name: vworld-nsp-23fsd
      existing: true
  Virtual_Machine_1:
    type: CCI.Supervisor.Resource
    properties:
      context: ${resource.CCI_Supervisor_Namespace_1.id}
      manifest:
        apiVersion: vmoperator.vmware.com/v1alpha3
        kind: VirtualMachine
        metadata:
          name: virtual-machine-1-${env.shortDeploymentId}
        spec:
          className: best-effort-small
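          # Note: imageName must reference the vmi-... identifier of an image available in this
          # environment (here, the uploaded vcf-lin-temp image); the value below is lab-specific.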
          imageName: vmi-dc5d594d2712b39f5
          powerState: PoweredOn
          storageClass: vworld-cl01-optimal-datastore-default-policy-raid1
      wait:
        conditions:
          - type: VirtualMachineCreated
            status: 'True'
      existing: false
  • Click TEST, and after a successful validation, click DEPLOY. Name the deployment dep-0001 and click DEPLOY again.
  • Observe the deployment process. Once finished, you can verify the VM’s creation in both the portal and the vSphere Client.
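
Beyond the portal and the vSphere Client, the VM can also be inspected with kubectl, assuming you have a kubeconfig or CLI context with access to the vworld-nsp-23fsd Supervisor namespace (for example, the vcf CLI context created in Step 7):

kubectl get virtualmachines -n vworld-nsp-23fsd

The virtual-machine-1-<deployment-id> object should appear and report a PoweredOn power state.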

Step 6: Deploying a Tanzu Kubernetes (TKG) Cluster via VCF Automation

Deploying a K8s cluster is reduced to a few clicks.

  • In the VCF Automation portal, navigate to Build & Design > Services > vSphere Kubernetes Service.
  • Click + CREATE, select Default Configuration, click NEXT, and then FINISH.
  • The platform will automatically deploy the TKG cluster. You can observe the creation of virtual machines in the vSphere Client and the appearance of new virtual services in the Avi UI.
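
If you are logged in to the Supervisor with kubectl (see the check at the end of Step 3), you can also follow the rollout from the command line; a rough sketch, assuming the Cluster API resources are visible from your context:

kubectl get clusters -A

kubectl get virtualmachines -A

The new TKG cluster and its node VMs should appear within a few minutes.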

Step 7: Deploying an Application on the TKG Cluster – Two Scenarios

With a ready cluster, we will now deploy an application in two ways.

Scenario A: Deployment with a NodePort Service and Manual NAT Configuration

  1. From the VCF Automation portal, download the kubeconfig file for your cluster.
  2. Create a vcf CLI context for the VCF Automation endpoint, then use kubectl with the downloaded kubeconfig to deploy the application and expose it as a NodePort service (a sketch of the referenced manifest follows these commands):

vcf context create vcf-aut --endpoint https://vcf-aut.vcf.vworld.lab --insecure-skip-tls-verify --type cci --auth-type basic --tenant-name vworld --api-token <api token for user>


kubectl --kubeconfig c:\VCF\kubernetes-cluster-klat-kubeconfig.yaml apply -f c:\VCF\nginx-secure-deployment.yaml

kubectl --kubeconfig c:\VCF\kubernetes-cluster-klat-kubeconfig.yaml get pods

kubectl --kubeconfig c:\VCF\kubernetes-cluster-klat-kubeconfig.yaml expose deployment nginx-secure-deployment --type=NodePort --port=8080
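
The nginx-secure-deployment.yaml manifest referenced above is not reproduced in this post. If you need a starting point, an equivalent file can be generated with kubectl; the image and container port here are assumptions that simply mirror Scenario B:

kubectl create deployment nginx-secure-deployment --image=bitnami/nginx:latest --port=8080 --replicas=3 --dry-run=client -o yaml > c:\VCF\nginx-secure-deployment.yaml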

  3. Check the assigned NodePort and the internal IP address of one of the cluster nodes:
kubectl --kubeconfig c:\VCF\kubernetes-cluster-klat-kubeconfig.yaml get service nginx-secure-deployment

kubectl --kubeconfig c:\VCF\kubernetes-cluster-klat-kubeconfig.yaml get nodes -o wide
  4. In NSX Manager, under the NAT configuration for your VPC, manually create a DNAT rule:
    • Name: kubernetes-nginx-manual
    • Action: DNAT
    • Destination IP: 172.16.100.10 (example public IP)
    • Translated IP: The internal node IP you checked earlier.
    • Destination Port: 30580 (or other assigned NodePort)
    • Translated Port: 30580 (or other assigned NodePort)
    • Protocol: TCP
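
With the DNAT rule in place, the application should answer on the public IP and port from the rule; a quick test using the example values above:

curl http://172.16.100.10:30580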

Scenario B: Deployment with a LoadBalancer Service

  1. Prepare a single manifest file, nginx-hostname-lb.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-index-html-template
  namespace: default
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
    <title>Load Balancer Test</title>
    <meta charset="UTF-8">
    <style>
      body { font-family: sans-serif; background-color: #111; color: #0f0; text-align: center; }
      div { border: 2px solid #0f0; padding: 2rem; margin: 2rem; display: inline-block; }
      h1 { font-size: 2rem; text-shadow: 0 0 5px #0f0; }
    </style>
    </head>
    <body>
      <div>
        <h1>__POD_NAME__</h1>
      </div>
    </body>
    </html>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-hostname-deployment
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-hostname
  template:
    metadata:
      labels:
        app: nginx-hostname
    spec:
      initContainers:
      - name: prepare-html
        image: bitnami/nginx:latest
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        command: ["/bin/sh", "-c", "cp /tmp/template/index.html /opt/bitnami/nginx/html/index.html && sed -i 's/__POD_NAME__/'\"$POD_NAME\"'/g' /opt/bitnami/nginx/html/index.html"]
        volumeMounts:
        - name: nginx-html-volume
          mountPath: /opt/bitnami/nginx/html
        - name: nginx-template-volume
          mountPath: /tmp/template
          
        securityContext:
          runAsNonRoot: true
          runAsUser: 1001
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
          seccompProfile:
            type: "RuntimeDefault"
            
      containers:
      - name: nginx
        image: bitnami/nginx:latest
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: nginx-html-volume
          mountPath: /opt/bitnami/nginx/html
          
        securityContext:
          runAsNonRoot: true
          runAsUser: 1001
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
          seccompProfile:
            type: "RuntimeDefault"
            
      volumes:
      - name: nginx-html-volume
        emptyDir: {}
      - name: nginx-template-volume
        configMap:
          name: nginx-index-html-template
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-hostname-service
  namespace: default
spec:
  type: LoadBalancer
  sessionAffinity: None
  selector:
    app: nginx-hostname
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  2. Deploy the application with a single command:
kubectl --kubeconfig c:\VCF\kubernetes-cluster-klat-kubeconfig.yaml apply -f c:\VCF\nginx-hostname-lb.yaml

  3. Watch as the platform automatically assigns a public IP address:
kubectl --kubeconfig c:\VCF\kubernetes-cluster-klat-kubeconfig.yaml get pods

kubectl --kubeconfig c:\VCF\kubernetes-cluster-klat-kubeconfig.yaml get service -A

After a moment, a public IP will appear in the EXTERNAL-IP column.

  4. Test the load balancing:
curl -s http://172.16.100.11 | findstr "nginx-hostname-deployment" & timeout /t 1 1>nul

The output will show the hostnames of all three pods alternately.
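
The command above is effectively the body of a loop; to repeat it from a Windows command prompt, you could wrap it like this (a minimal sketch, assuming the EXTERNAL-IP from the previous step is 172.16.100.11):

for /l %i in (1,1,10) do @(curl -s http://172.16.100.11 | findstr "nginx-hostname-deployment" & timeout /t 1 1>nul)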

We went through a complete, highly automated process that forms the foundation of a modern private cloud. Starting from edge network configuration, through integration with an advanced load balancer, tenant environment preparation in VCF Automation, showcasing Infrastructure as Code (IaC), and deploying applications in a “cloud-native” manner, we demonstrated how VCF eliminates manual, error-prone tasks. This guide illustrates the entire lifecycle – from infrastructure to application – and proves that VMware Cloud Foundation is a powerful, consistent, and highly automated platform for building and managing a private cloud.
