In the modern era of cloud computing, managing and orchestrating containerized applications efficiently is paramount for businesses aiming to stay ahead in the digital transformation curve. VMware’s Tanzu Kubernetes Grid Multicloud (TKGm) emerges as a robust solution, offering a consistent, secure, and fully integrated Kubernetes environment across on-premises, public cloud, and edge environments. However, as organizations scale their Kubernetes clusters, the necessity for a centralized management platform becomes evident. This is where VMware Tanzu Mission Control (TMC) steps in, providing a unified management interface for consistent operations and security across all your Kubernetes clusters, regardless of where they reside.
The integration of Tanzu Mission Control with Tanzu Kubernetes Grid Multicloud not only simplifies the management of Kubernetes clusters but also enhances visibility, security, and control over your entire Kubernetes infrastructure. In this blog post, we will walk you through the step-by-step process of installing Tanzu Mission Control on Tanzu Kubernetes Grid Multicloud, enabling you to harness the full potential of both platforms for a streamlined Kubernetes management experience.
Now, I will walk you through the installation process step by step. If anything is unclear, please reach out to me. I am also learning this, much like you.
Setting Up the Bootstrap Machine
Before diving into the installation of Tanzu Mission Control on Tanzu Kubernetes Grid Multicloud, the first step is to set up a bootstrap machine. For this tutorial, we'll be using Rocky Linux 9. Deploy it as a virtual machine in your vCenter environment so your setup stays consistent with the steps that follow.
Step 0: Setup DNS
192.168.100.239 tkg-tmc.tanzu.domain.lab
192.168.100.239 alertmanager.tkg-tmc.tanzu.domain.lab
192.168.100.239 auth.tkg-tmc.tanzu.domain.lab
192.168.100.239 blob.tkg-tmc.tanzu.domain.lab
192.168.100.239 console.s3.tkg-tmc.tanzu.domain.lab
192.168.100.239 gts-rest.tkg-tmc.tanzu.domain.lab
192.168.100.239 gts.tkg-tmc.tanzu.domain.lab
192.168.100.239 landing.tkg-tmc.tanzu.domain.lab
192.168.100.239 pinniped-supervisor.tkg-tmc.tanzu.domain.lab
192.168.100.239 prometheus.tkg-tmc.tanzu.domain.lab
192.168.100.239 s3.tkg-tmc.tanzu.domain.lab
192.168.100.239 tmc-local.s3.tkg-tmc.tanzu.domain.lab
192.168.100.231 tkg-keycloak.tanzu.domain.lab
192.168.100.230 tkg-harbor.tanzu.domain.lab
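If you don't run a dedicated DNS server in your lab, one quick way to make these names resolvable from the bootstrap machine is to append them to /etc/hosts (a lab shortcut only; cluster nodes and pods still need proper DNS resolution, so A records on a real DNS server are preferred). A minimal sketch, using the same addresses as above:
# Append the lab records to /etc/hosts on the bootstrap machine (adjust IPs to your environment)
sudo tee -a /etc/hosts << 'EOF'
192.168.100.239 tkg-tmc.tanzu.domain.lab alertmanager.tkg-tmc.tanzu.domain.lab auth.tkg-tmc.tanzu.domain.lab
192.168.100.239 blob.tkg-tmc.tanzu.domain.lab console.s3.tkg-tmc.tanzu.domain.lab gts-rest.tkg-tmc.tanzu.domain.lab
192.168.100.239 gts.tkg-tmc.tanzu.domain.lab landing.tkg-tmc.tanzu.domain.lab pinniped-supervisor.tkg-tmc.tanzu.domain.lab
192.168.100.239 prometheus.tkg-tmc.tanzu.domain.lab s3.tkg-tmc.tanzu.domain.lab tmc-local.s3.tkg-tmc.tanzu.domain.lab
192.168.100.231 tkg-keycloak.tanzu.domain.lab
192.168.100.230 tkg-harbor.tanzu.domain.lab
EOF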
Step 1: Preparing Your Environment
- Download Rocky Linux 9:
- Navigate to the official Rocky Linux download page and download the ISO image suitable for your architecture.
- Install Rocky Linux 9 on Your vCenter:
- Launch your vCenter client and create a new virtual machine.
- Follow the on-screen prompts to select the downloaded Rocky Linux 9 ISO image and complete the installation process.
- Update Your System:
- Once the installation is complete, open a terminal on your Rocky Linux machine.
Run the following command to update your system to the latest package versions:
sudo dnf update -y
Step 2: Setting Up Time Synchronization
Accurate time synchronization is crucial for many system processes. We'll use chronyd as our NTP (Network Time Protocol) client to keep the system time accurate.
- Check Current Time Settings:
timedatectl
Enable NTP:
timedatectl set-ntp on
Install and Configure Chrony:
dnf install chrony -y
systemctl start chronyd
systemctl enable chronyd
vi /etc/chrony.conf
Modify the chrony.conf file as needed, then enable NTP and restart the chronyd service:
timedatectl set-ntp true
systemctl restart chronyd
Step 3: Installing DHCP Server
A DHCP server is essential for assigning IP addresses within your network.
Install DHCP Server
dnf install dhcp-server
vi /etc/dhcp/dhcpd.conf
Configure your DHCP server by editing the dhcpd.conf file with the following content:
default-lease-time 600;
max-lease-time 7200;
authoritative;
subnet 192.168.100.0 netmask 255.255.255.0 {
range 192.168.100.150 192.168.100.170;
range 192.168.100.175 192.168.100.219;
option routers 192.168.100.4;
option domain-name-servers 192.168.100.101;
option domain-name "tanzu.domain.lab";
}
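After saving dhcpd.conf, start and enable the DHCP service (on Rocky Linux the service is named dhcpd):
systemctl enable --now dhcpd
systemctl status dhcpd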
Step 4: Disabling Firewall
For this lab setup, we disable the firewall so that the various components can communicate freely within the network.
systemctl stop firewalld.service
systemctl disable --now firewalld
systemctl mask firewalld
Step 5: Installing Docker
Docker is a platform for developing, shipping, and running applications in containers.
Add Docker Repository:
sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
Install Docker and Related Packages:
sudo dnf install docker-ce docker-ce-cli containerd.io -y
Start Docker Service:
sudo systemctl start docker
(Optional) Add Your User to Docker Group:
sudo usermod -aG docker $USER
Install Docker Compose:
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
Verify Docker and Docker Compose Installation:
docker --version
docker-compose --version
Enable Docker to Start on Boot:
systemctl enable docker
Step 6: Installing kubectl and Tanzu CLI
- Download the Required Packages:
- You can download the required packages from the official VMware Tanzu website or through the links provided in your VMware Tanzu account.
kubectl-linux-v1.25.7+vmware.2.gz
tanzu-cli-bundle-linux-amd64.tar.gz
- Transfer the Packages to Your Bootstrap Machine:
- Use a secure method like SCP (Secure Copy Protocol) to transfer the downloaded files to your bootstrap machine, for example:
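A minimal sketch, assuming the packages were downloaded to your workstation and the bootstrap machine is reachable over SSH (adjust the user, address, and paths to your environment):
scp kubectl-linux-v1.25.7+vmware.2.gz tanzu-cli-bundle-linux-amd64.tar.gz root@<bootstrap-machine-ip>:/root/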
Step 7: Installing Required Tools and Plugins
Now that the necessary packages are on your bootstrap machine, it’s time to install them along with the required plugins.
Install kubectl:
# Decompress the kubectl binary
gzip -d kubectl-linux-v1.25.7+vmware.2.gz
# Make the binary executable
chmod ugo+x kubectl-linux-v1.25.7+vmware.2
# Move the binary to your PATH
mv ./kubectl-linux-v1.25.7+vmware.2 /usr/bin/kubectl
Install Tanzu CLI:
# Decompress the Tanzu CLI bundle
gzip -d tanzu-cli-bundle-linux-amd64.tar.gz
# Extract the Tanzu CLI bundle
tar -xvf tanzu-cli-bundle-linux-amd64.tar
# Change directory to the CLI directory
cd cli
# Move the Tanzu CLI binary to your PATH
mv ./core/v0.29.0/tanzu-core-linux_amd64 /usr/bin/tanzu
Install Tanzu Plugins:
# Sync Tanzu plugins
tanzu plugin sync
# List Tanzu plugins to verify installation
tanzu plugin list
Install Carvel Tools: Carvel provides a set of tools for managing and deploying applications in Kubernetes. The following steps will guide you through installing the Carvel tools ytt, kapp, kbld, and imgpkg.
# Install ytt
gunzip ytt-linux-amd64-v0.43.1+vmware.1.gz
chmod ugo+x ytt-linux-amd64-v0.43.1+vmware.1
mv ./ytt-linux-amd64-v0.43.1+vmware.1 /usr/local/bin/ytt
# Install kapp
gunzip kapp-linux-amd64-v0.53.2+vmware.1.gz
chmod ugo+x kapp-linux-amd64-v0.53.2+vmware.1
mv ./kapp-linux-amd64-v0.53.2+vmware.1 /usr/local/bin/kapp
# Install kbld
gunzip kbld-linux-amd64-v0.35.1+vmware.1.gz
chmod ugo+x kbld-linux-amd64-v0.35.1+vmware.1
mv ./kbld-linux-amd64-v0.35.1+vmware.1 /usr/local/bin/kbld
# Install imgpkg
gunzip imgpkg-linux-amd64-v0.31.1+vmware.1.gz
chmod ugo+x imgpkg-linux-amd64-v0.31.1+vmware.1
mv ./imgpkg-linux-amd64-v0.31.1+vmware.1 /usr/local/bin/imgpkg
Step 8: Setting Up SSH
Secure Shell (SSH) is a protocol used for secure communications between machines. Setting up SSH keys is a best practice that enhances security by allowing password-less logins and secure file transfers. In this step, we will create a new SSH key pair on your bootstrap machine.
# Create a directory to store your SSH keys if it doesn't already exist
mkdir -p ~/.ssh
# Generate a new SSH key pair
ssh-keygen -t rsa -b 4096 -C "admin@vworld.com.pl" -f ~/.ssh/id_rsa -q -N ""
# Display the public key
cat ~/.ssh/id_rsa.pub
Step 9: Setting Up NSX Advanced Load Balancer (ALB)
With our bootstrap machine fully prepared, the next crucial step is to set up the NSX Advanced Load Balancer (ALB) which plays a pivotal role in ensuring high availability and reliability of your Tanzu Kubernetes Grid Multicloud environment.
While I won’t delve into the installation details here, I highly recommend following the official VMware documentation for setting up NSX ALB. The guide is well-structured and provides step-by-step instructions to ensure a smooth setup process. You can find the documentation here.
Step 10: Certificate Generation Guide
This guide will walk you through the process of generating a self-signed certificate authority (CA) and signing server certificates using openssl.
Prerequisites
- Ensure that you have openssl installed on your system. If not, you can usually install it using your system’s package manager.
Steps
1. Generate a Private Key
To start with, we’ll generate a private key for the Certificate Authority (CA):
openssl genrsa -out myCA.key 2048
This command generates a 2048-bit RSA private key and saves it to a file named myCA.key.
2. Create a Self-Signed Certificate
With the private key in place, you can now create a self-signed certificate:
openssl req -x509 -new -nodes -key myCA.key -sha256 -days 1825 -out myCA.pem
This command does the following:
- -x509: Produce a self-signed certificate.
- -new: Indicate that a new certificate request should be generated.
- -nodes: Create a certificate that does not require a passphrase (this option prevents the private key from being encrypted).
- -key myCA.key: Specify the private key to use.
- -sha256: Use the SHA-256 hash algorithm.
- -days 1825: Set the certificate’s validity period to 5 years (1825 days).
- -out myCA.pem: Save the generated certificate to a file named myCA.pem.
After running this command, you'll be prompted to enter some information for the certificate. Here is a sample:
Country Name (2 letter code) [XX]:PL
State or Province Name (full name) []:Silesia
Locality Name (e.g., city) [Default City]:Imielin
Organization Name (e.g., company) [Default Company Ltd]:vWorld
Organizational Unit Name (e.g., section) []:Cloud
Common Name (e.g., your name or your server's hostname) []:vworld.domain.local
Email Address []:admin@vworld.domain.local
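Optionally, inspect the generated CA certificate to confirm the subject and validity period:
openssl x509 -in myCA.pem -noout -subject -dates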
Harbor Certificate Generation Guide
1. Generate a Private Key for Harbor
Begin by generating a 2048-bit RSA private key:
openssl genrsa -out harbor.key 2048
This command will create a private key named harbor.key.
2. Create a Certificate Signing Request (CSR) for Harbor
With the private key generated, you can now create a CSR:
openssl req -new -key harbor.key -out harbor.csr
You’ll be prompted to provide details for the certificate. Enter the relevant information when asked.
3. Creating the harbor.ext Configuration File
To generate certificates with specific extensions, it’s helpful to use a configuration file. Let’s create one named harbor.ext.
Start by opening a new file in your preferred text editor. In this guide, we’ll use vi:
vi harbor.ext
Enter the following content into the file:
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = tkg-harbor.tanzu.domain.lab
DNS.2 = 192.168.100.230
4. Signing the CSR Using Your Own CA
Now that you have both a Certificate Authority (CA) certificate and a configuration file (harbor.ext), the next step is to sign a Certificate Signing Request (CSR). In this case, we’ll sign a CSR named harbor.csr.
Command
To sign the CSR and create a certificate named harbor.crt, execute the following command:
openssl x509 -req -in harbor.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -out harbor.crt -days 1825 -sha256 -extfile harbor.ext
Explanation
Here’s a breakdown of the command:
- -req: This specifies that we’re working with a CSR.
- -in harbor.csr: This is the CSR file that you want to sign.
- -CA myCA.pem: This is your CA’s certificate.
- -CAkey myCA.key: This is your CA’s private key.
- -CAcreateserial: This option creates a serial number file if one does not exist. This serial number will be used in the certificate.
- -out harbor.crt: This specifies the name of the output file for the signed certificate.
- -days 1825: This sets the certificate’s validity period to 5 years (1825 days).
- -sha256: Use the SHA-256 hash algorithm for signing.
- -extfile harbor.ext: This is the configuration file we created earlier, which provides extensions for the certificate.
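To confirm that the signed certificate chains back to your CA and carries the expected SANs, you can check it with openssl:
openssl verify -CAfile myCA.pem harbor.crt
openssl x509 -in harbor.crt -noout -text | grep -A1 'Subject Alternative Name'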
Keycloak Certificate Generation Guide
This guide will walk you through the process of generating and signing a certificate for Keycloak using openssl.
1. Generate a Private Key for Keycloak
Begin by generating a 2048-bit RSA private key:
openssl genrsa -out keycloak.key 2048
This command will create a private key named keycloak.key.
2. Create a Certificate Signing Request (CSR) for Keycloak
With the private key generated, you can now create a CSR:
openssl req -new -key keycloak.key -out keycloak.csr
You’ll be prompted to provide details for the certificate. Enter the relevant information when asked.
3. Create the keycloak.ext Configuration File
This configuration file defines extensions for the certificate:
vi keycloak.ext
Add the following content:
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = tkg-keycloak.tanzu.domain.lab
DNS.2 = 192.168.100.231
4. Sign the CSR using your CA
To sign the keycloak.csr and produce a certificate named keycloak.crt, run the following command:
openssl x509 -req -in keycloak.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -out keycloak.crt -days 1825 -sha256 -extfile keycloak.ext
Step 11: Installing TKG Management and Workload Clusters
With the bootstrap machine and NSX ALB set up, we are now ready to install the TKG management and workload clusters. This can be done either via the User Interface (UI) or Command Line Interface (CLI).
Via UI:
To initiate the UI for setting up the clusters, run the following command on your bootstrap machine:
tanzu management-cluster create --ui --bind 192.168.100.170:8080 --browser none
This command will start the UI server and bind it to the specified IP address and port. You can then access the UI from a web browser to proceed with the setup.
Via CLI:
For CLI-based setup, you would need to create YAML configuration files for both the management and workload clusters. Below are the configurations for both:
Management Cluster Configuration:
Here’s a snippet of the YAML configuration file for the management cluster:
AVI_CA_DATA_B64: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVCRENDQXV5Z0F3SUJBZ0lVUHhRMUMxd0l6OEh4Q0c3OGlpZ3VtZVg0T2x
AVI_CLOUD_NAME: vcsa
AVI_CONTROL_PLANE_HA_PROVIDER: "true"
AVI_CONTROL_PLANE_NETWORK: VM Network
AVI_CONTROL_PLANE_NETWORK_CIDR: 192.168.100.0/24
AVI_CONTROLLER: 192.168.100.173
AVI_DATA_NETWORK: VM Network
AVI_DATA_NETWORK_CIDR: 192.168.100.0/24
AVI_ENABLE: "true"
AVI_LABELS: ""
AVI_MANAGEMENT_CLUSTER_CONTROL_PLANE_VIP_NETWORK_CIDR: 192.168.100.0/24
AVI_MANAGEMENT_CLUSTER_CONTROL_PLANE_VIP_NETWORK_NAME: VM Network
AVI_MANAGEMENT_CLUSTER_SERVICE_ENGINE_GROUP: tkg-se-group
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR: 192.168.100.0/24
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME: VM Network
AVI_PASSWORD: <encoded:Vk13YXJlMSE=>
AVI_SERVICE_ENGINE_GROUP: tkg-se-group
AVI_USERNAME: admin
CLUSTER_ANNOTATIONS: 'description:,location:'
CLUSTER_CIDR: 100.96.0.0/11
CLUSTER_NAME: tkg-mgt
CLUSTER_PLAN: prod
ENABLE_AUDIT_LOGGING: "false"
ENABLE_CEIP_PARTICIPATION: "false"
ENABLE_MHC: "false"
IDENTITY_MANAGEMENT_TYPE: none
INFRASTRUCTURE_PROVIDER: vsphere
LDAP_BIND_DN: ""
LDAP_BIND_PASSWORD: ""
LDAP_GROUP_SEARCH_BASE_DN: ""
LDAP_GROUP_SEARCH_FILTER: ""
LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE: ""
LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn
LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN
LDAP_HOST: ""
LDAP_ROOT_CA_DATA_B64: ""
LDAP_USER_SEARCH_BASE_DN: ""
LDAP_USER_SEARCH_FILTER: ""
LDAP_USER_SEARCH_NAME_ATTRIBUTE: ""
LDAP_USER_SEARCH_USERNAME: userPrincipalName
OIDC_IDENTITY_PROVIDER_CLIENT_ID: ""
OIDC_IDENTITY_PROVIDER_CLIENT_SECRET: ""
OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: ""
OIDC_IDENTITY_PROVIDER_ISSUER_URL: ""
OIDC_IDENTITY_PROVIDER_NAME: ""
OIDC_IDENTITY_PROVIDER_SCOPES: ""
OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: ""
OS_ARCH: amd64
OS_NAME: photon
OS_VERSION: "3"
SERVICE_CIDR: 100.64.0.0/13
TKG_HTTP_PROXY_ENABLED: "false"
DEPLOY_TKG_ON_VSPHERE7: "true"
VSPHERE_CONTROL_PLANE_DISK_GIB: "40"
VSPHERE_CONTROL_PLANE_ENDPOINT: 192.168.100.220
VSPHERE_CONTROL_PLANE_MEM_MIB: "8192"
VSPHERE_CONTROL_PLANE_NUM_CPUS: "2"
VSPHERE_DATACENTER: /Datacenter
VSPHERE_DATASTORE: /Datacenter/datastore/Data-01
VSPHERE_FOLDER: /Datacenter/vm
VSPHERE_INSECURE: "true"
VSPHERE_NETWORK: /Datacenter/network/VM Network
VSPHERE_PASSWORD: <encoded:Vk13YXJlMSE=>
VSPHERE_RESOURCE_POOL: /Datacenter/host/Cluster/Resources
VSPHERE_SERVER: vcsa.vworld.domain.local
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDEOf
VSPHERE_TLS_THUMBPRINT: ""
VSPHERE_USERNAME: Administrator@vsphere.local
VSPHERE_WORKER_DISK_GIB: "40"
VSPHERE_WORKER_MEM_MIB: "8192"
VSPHERE_WORKER_NUM_CPUS: "2"
WORKER_ROLLOUT_STRATEGY: ""
TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: LS0tLS1CRUdJTiBDRVJUS
TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY: false
This configuration includes settings for the NSX ALB, vSphere infrastructure, and other cluster-specific settings.
Workload Cluster Configuration:
Similarly, here’s a snippet of the YAML configuration file for the workload cluster:
AVI_CONTROL_PLANE_HA_PROVIDER: "false"
AVI_LABELS: ""
CLUSTER_ANNOTATIONS: 'description:,location:'
CLUSTER_CIDR: 100.96.0.0/11
CLUSTER_NAME: tkg-tmc
CLUSTER_PLAN: prod
ENABLE_AUDIT_LOGGING: "false"
ENABLE_MHC: "true"
INFRASTRUCTURE_PROVIDER: vsphere
LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE: ""
LDAP_USER_SEARCH_USERNAME: userPrincipalName
OIDC_IDENTITY_PROVIDER_NAME: ""
OS_ARCH: amd64
OS_NAME: photon
OS_VERSION: "3"
SERVICE_CIDR: 100.64.0.0/13
TKG_HTTP_PROXY_ENABLED: "false"
VSPHERE_CONTROL_PLANE_DISK_GIB: "60"
VSPHERE_CONTROL_PLANE_ENDPOINT: 192.168.100.174
VSPHERE_CONTROL_PLANE_MEM_MIB: "8192"
VSPHERE_CONTROL_PLANE_NUM_CPUS: "4"
VSPHERE_DATACENTER: /Datacenter
VSPHERE_DATASTORE: /Datacenter/datastore/Data-01
VSPHERE_FOLDER: /Datacenter/vm
VSPHERE_INSECURE: "true"
VSPHERE_NETWORK: /Datacenter/network/VM Network
VSPHERE_PASSWORD: <encoded:Vk13YXJlMSE=>
VSPHERE_RESOURCE_POOL: /Datacenter/host/Cluster/Resources
VSPHERE_SERVER: vcsa.vworld.domain.local
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa
VSPHERE_TLS_THUMBPRINT: ""
VSPHERE_USERNAME: Administrator@vsphere.local
VSPHERE_WORKER_DISK_GIB: "60"
VSPHERE_WORKER_MEM_MIB: "8192"
VSPHERE_WORKER_NUM_CPUS: "4"
WORKER_ROLLOUT_STRATEGY: ""
TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: LS0tLS1CRUdJTiBDRVJUSU
TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY: false
This configuration also includes settings for the NSX ALB, vSphere infrastructure, and other cluster-specific settings.
With these configurations ready, you can apply them using the tanzu CLI to create your management and workload clusters.
Step 12: Creating the Management and Workload Clusters
Now that we have our configurations ready, it's time to create the management and workload clusters. We'll use the tanzu CLI for this purpose.
Creating the Management Cluster:
To create the management cluster, use the following command, specifying the path to your management cluster configuration file (tkgMGT.yaml):
tanzu management-cluster create tkg-mgt --file /root/tkgMGT.yaml -v 6
In this command:
- tkg-mgt is the name of the management cluster.
- --file /root/tkgMGT.yaml specifies the path to the configuration file.
- -v 6 sets the log level to 6 (verbose) to provide detailed logging during the cluster creation process.
Creating the Workload Cluster:
Similarly, to create the workload cluster, use the following command, specifying the path to your workload cluster configuration file (tkgTMC.yaml):
tanzu cluster create --file /root/tkgTMC.yaml
In this command:
- --file /root/tkgTMC.yaml specifies the path to the configuration file.
These commands will initiate the creation of the respective clusters based on the configurations provided in the YAML files. You can monitor the progress in the terminal, and once the process is complete, your TKG management and workload clusters will be up and running, ready for further configuration and deployment of applications.
When you go through the UI, at the end, you will be able to see that the cluster is deployed, as shown in this image:
Upon executing the command to create the management cluster, the tanzu CLI provides a detailed log of the process. Here's a breakdown of some key steps and messages from the log:
Validating the pre-requisites...
Serving kickstart UI at http://192.168.100.170:8080
E0927 10:17:03.718683 25205 avisession.go:714] CheckControllerStatus is disabled for this session, not going to retry.
E0927 10:17:03.728572 25205 avisession.go:714] CheckControllerStatus is disabled for this session, not going to retry.
E0927 10:17:03.735267 25205 avisession.go:714] CheckControllerStatus is disabled for this session, not going to retry.
Identity Provider not configured. Some authentication features won't work.
Using default value for CONTROL_PLANE_MACHINE_COUNT = 3. Reason: CONTROL_PLANE_MACHINE_COUNT variable is not set
Using default value for WORKER_MACHINE_COUNT = 3. Reason: WORKER_MACHINE_COUNT variable is not set
Validating configuration...
web socket connection established
sending pending 2 logs to UI
Using infrastructure provider vsphere:v1.5.3
Generating cluster configuration...
Setting up bootstrapper...
Bootstrapper created. Kubeconfig: /root/.kube-tkg/tmp/config_uZSDXdct
Warning: unable to find component 'kube_rbac_proxy' under BoM
Installing kapp-controller on bootstrap cluster...
Installing providers on bootstrapper...
Fetching providers
Installing cert-manager Version="v1.10.2"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v1.2.8" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v1.2.8" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v1.2.8" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-vsphere" Version="v1.5.3" TargetNamespace="capv-system"
Installing Provider="infrastructure-ipam-in-cluster" Version="v0.1.0" TargetNamespace="caip-in-cluster-system"
ℹ Updating package repository 'tanzu-management'
ℹ Getting package repository 'tanzu-management'
ℹ Validating provided settings for the package repository
ℹ Creating package repository resource
ℹ Waiting for 'PackageRepository' reconciliation for 'tanzu-management'
ℹ 'PackageRepository' resource install status: Reconciling
ℹ 'PackageRepository' resource install status: ReconcileSucceeded
ℹ Updated package repository 'tanzu-management' in namespace 'tkg-system'
ℹ Installing package 'tkg.tanzu.vmware.com'
ℹ Getting package metadata for 'tkg.tanzu.vmware.com'
ℹ Creating service account 'tkg-pkg-tkg-system-sa'
ℹ Creating cluster admin role 'tkg-pkg-tkg-system-cluster-role'
ℹ Creating cluster role binding 'tkg-pkg-tkg-system-cluster-rolebinding'
ℹ Creating secret 'tkg-pkg-tkg-system-values'
ℹ Creating package resource
ℹ Waiting for 'PackageInstall' reconciliation for 'tkg-pkg'
ℹ 'PackageInstall' resource install status: Reconciling
ℹ 'PackageInstall' resource install status: ReconcileSucceeded
ℹ 'PackageInstall' resource successfully reconciled
ℹ Added installed package 'tkg-pkg'
Installing AKO on bootstrapper...
Using default value for CONTROL_PLANE_MACHINE_COUNT = 3. Reason: CONTROL_PLANE_MACHINE_COUNT variable is not set
Management cluster config file has been generated and stored at: '/root/.config/tanzu/tkg/clusterconfigs/tkg-mgt.yaml'
Checking Tkr v1.25.7---vmware.2-tkg.1 in bootstrap cluster...
Start creating management cluster...
Management cluster control plane is available, means API server is ready to receive requests
Saving management cluster kubeconfig into /root/.kube/config
Installing kapp-controller on management cluster...
cluster control plane is still being initialized: ScalingUp
cluster control plane is still being initialized: WaitingForIPAllocation @ Machine/tkg-mgt-gm2rs-5ggf5
Installing providers on management cluster...
Fetching providers
Installing cert-manager Version="v1.10.2"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v1.2.8" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v1.2.8" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v1.2.8" TargetNamespace="capi-kubeadm-control-plane-system"
I0927 10:32:51.761358 25205 request.go:601] Waited for 1.001945642s due to client-side throttling, not priority and fairness, request: GET:https://192.168.100.200:6443/apis/controlplane.cluster.x-k8s.io/v1alpha4?timeout=30s
Installing Provider="infrastructure-vsphere" Version="v1.5.3" TargetNamespace="capv-system"
Installing Provider="infrastructure-ipam-in-cluster" Version="v0.1.0" TargetNamespace="caip-in-cluster-system"
ℹ Updating package repository 'tanzu-management'
ℹ Getting package repository 'tanzu-management'
ℹ Validating provided settings for the package repository
ℹ Creating package repository resource
ℹ Waiting for 'PackageRepository' reconciliation for 'tanzu-management'
ℹ 'PackageRepository' resource install status: Reconciling
ℹ 'PackageRepository' resource install status: ReconcileSucceeded
ℹ 'PackageRepository' resource successfully reconciled
ℹ Updated package repository 'tanzu-management' in namespace 'tkg-system'
ℹ Installing package 'tkg.tanzu.vmware.com'
ℹ Getting package metadata for 'tkg.tanzu.vmware.com'
ℹ Creating service account 'tkg-pkg-tkg-system-sa'
ℹ Creating cluster admin role 'tkg-pkg-tkg-system-cluster-role'
ℹ Creating cluster role binding 'tkg-pkg-tkg-system-cluster-rolebinding'
ℹ Creating secret 'tkg-pkg-tkg-system-values'
ℹ Creating package resource
ℹ Waiting for 'PackageInstall' reconciliation for 'tkg-pkg'
ℹ 'PackageInstall' resource install status: Reconciling
ℹ 'PackageInstall' resource install status: ReconcileSucceeded
ℹ Added installed package 'tkg-pkg'
Waiting for the management cluster to get ready for move...
Waiting for addons installation...
Applying ClusterBootstrap and its associated resources on management cluster
Moving all Cluster API objects from bootstrap cluster to management cluster...
Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Moving Cluster API objects ClusterClasses=1
Creating objects in the target cluster
Deleting objects from the source cluster
Creating tkg-bom versioned ConfigMaps...
You can now access the management cluster tkg-mgt by running 'kubectl config use-context tkg-mgt-admin@tkg-mgt'
Management cluster created!
You can now create your first workload cluster by running the following:
tanzu cluster create [name] -f [file]
ℹ Checking for required plugins...
ℹ All required plugins are already installed and up-to-date
13. Helm Installation
This guide describes the steps to install Helm on a Rocky Linux 9 system using the official Helm repository.
Prerequisites:
- curl installed on your system.
- Internet connection.
Installation Steps:
Follow the steps below to install Helm:
Download the Helm Installation Script: Download the official Helm 3 installation script from the Helm GitHub repository using curl:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
Make the Script Executable: Change the permissions of the downloaded script to make it executable:
chmod 700 get_helm.sh
Run the Installation Script: Execute the script to install Helm:
./get_helm.sh
This script will automatically download and install the latest version of Helm.
Verify the Installation: After the installation is complete, you can verify that Helm was installed correctly by running:
helm version
14. Deploying Harbor on Tanzu Kubernetes Grid (TKG) Workload Cluster
In this section, we will deploy Harbor, an open-source cloud-native registry that stores, signs, and scans container images for vulnerabilities. Harbor extends the open-source Docker Distribution by adding the functionalities usually required by users such as security, identity, and management. Deploying Harbor on a TKG workload cluster provides a robust and secure environment for managing container images.
Prerequisites:
- A running TKG workload cluster.
- Helm installed on your machine.
- Certificates generated as per the previous sections of this article.
Deployment Steps:
The following steps outline the process of deploying Harbor on a TKG workload cluster. Here's a breakdown of what each command does:
Create Context for tkg-tmc Workload Cluster:
This command retrieves the kubeconfig for the tkg-tmc cluster, allowing you to interact with the cluster using kubectl.
tanzu cluster kubeconfig get tkg-tmc -n default --admin
Switch Context:
This command switches the current context to the tkg-tmc cluster, ensuring subsequent kubectl commands are run against this cluster.
kubectl config use-context tkg-tmc-admin@tkg-tmc
Add Bitnami Repo to Helm:
This command adds the Bitnami repository to Helm, which contains the Harbor chart among others.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
Create Namespace for Harbor:
This command creates a new namespace called harbor where the Harbor deployment will reside.
kubectl create ns harbor
Create Secret for Harbor:
This command creates a Kubernetes secret to store the TLS certificate and key for Harbor.
kubectl create secret tls harbor-cert --key harbor.key --cert harbor.crt -n harbor
Create Custom Values for Harbor:
This command creates a custom values file for the Harbor Helm chart. This file contains configuration values that override the default values in the Harbor chart.
cat > /root/harborValues.yaml << EOF
global:
  imageRegistry: ""
adminPassword: "VMware1!"
ipFamily:
  ipv4:
    enabled: true
externalURL: https://tkg-harbor.tanzu.domain.lab
ingress:
  core:
    controller: default
    hostname: tkg-harbor.tanzu.domain.lab
service:
  type: LoadBalancer
  loadBalancerIP: 192.168.100.230
  http:
    enabled: true
  ports:
    http: 80
    https: 443
    notary: 4443
core:
  tls:
    existingSecret: "harbor-cert"
nginx:
  tls:
    enabled: true
    existingSecret: "harbor-cert"
    commonName: tkg-harbor.tanzu.domain.lab
EOF
Deploy Harbor:
This command deploys Harbor using the Helm chart from the Bitnami repository, using the custom values file created in the previous step.
helm install harbor bitnami/harbor -f harborValues.yaml -n harbor
Verification:
After deploying Harbor, you can verify the deployment by checking the status of the Helm release and the Harbor pods.
# Check the status of the Helm release
helm list -n harbor
# Check the status of the Harbor pods
kubectl get svc,pods -n harbor
You should also be able to access the Harbor UI by navigating to https://tkg-harbor.tanzu.domain.lab in your web browser.
Troubleshooting:
If you encounter any issues during the deployment, here are some common troubleshooting steps:
- Check the logs of the Harbor pods for any errors or warnings:
kubectl logs <pod-name> -n harbor
- Check the events in the harbor namespace for any issues:
kubectl get events -n harbor
- If you encounter issues with the TLS certificate, ensure that the harbor-cert secret was created correctly and contains the correct certificate and key, for example:
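# Confirm the secret exists and holds the tls.crt / tls.key data keys
kubectl get secret harbor-cert -n harbor
kubectl describe secret harbor-cert -n harbor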
15. Extending Persistent Volume Claim (PVC) for Harbor Registry in Tanzu Kubernetes Grid
In a Kubernetes environment, data persistence is managed through Persistent Volumes (PVs) and Persistent Volume Claims (PVCs). Over time, as the data stored in Harbor registry grows, you may need to extend the size of the PVC to accommodate the additional data. In this section, we will guide you through the process of extending the PVC for the Harbor registry in your Tanzu Kubernetes Grid (TKG) environment.
Prerequisites:
- A running Tanzu Kubernetes Grid with Harbor deployed.
- kubectl command-line tool configured to interact with your TKG cluster.
Steps to Extend the PVC:
Identify the PVC: First, list all PVCs in the harbor namespace to identify the PVC associated with the Harbor registry.
kubectl get pvc -n harbor
Check the Current Size of the PVC: Note down the current size of the PVC named harbor-registry.
kubectl get pvc harbor-registry -n harbor -o jsonpath='{.spec.resources.requests.storage}'
Edit the PVC: Use the kubectl edit command to modify the specification of the PVC. This will open the PVC specification in your default text editor.
kubectl edit pvc harbor-registry -n harbor
In the editor, locate the spec.resources.requests.storage field and change its value to the desired size, for example, 20Gi for 20 gigabytes. Save and close the editor.
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: default
  volumeMode: Filesystem
  volumeName: pvc-97eb7763-616f-47d8-9a2d-90f660aa1031
Verify the Extension: After editing the PVC, verify that the new size has been applied.
kubectl get pvc harbor-registry -n harbor -o jsonpath='{.spec.resources.requests.storage}'
Additionally, check the status of the PVC to ensure it's Bound, and review the events for any issues.
kubectl get pvc harbor-registry -n harbor
kubectl describe pvc harbor-registry -n harbor
Notes:
- The ability to extend a PVC depends on the underlying storage class and its configuration. Ensure that the storage class used by the harbor-registry PVC supports volume expansion; you can check this as shown below.
- Volume expansion is a disruptive process for some storage backends. Ensure you understand the implications and have taken necessary precautions, such as backing up data, before proceeding.
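For example, assuming the PVC uses the storage class named default (as shown in the spec above), you can check whether it permits expansion:
kubectl get storageclass default -o jsonpath='{.allowVolumeExpansion}'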
16. Configuring Docker to Trust Harbor Certificate
In this section, we’ll ensure that Docker and your local machine trust the self-signed certificate used by your Harbor instance. This is crucial for secure communication between Docker and Harbor.
For Docker:
- Create Docker’s certificate directory for Harbor:
mkdir -p /etc/docker/certs.d/tkg-harbor.tanzu.domain.lab
- Copy the CA certificate to Docker’s certificate directory:
cp myCA.pem /etc/docker/certs.d/tkg-harbor.tanzu.domain.lab/ca.crt
- Reload the Docker daemon to recognize changes:
systemctl daemon-reload
- Restart the Docker service:
systemctl restart docker
For Local Machine:
- Copy the root CA certificate to your local machine’s trusted store:
cp myCA.pem /etc/pki/ca-trust/source/anchors/myCA.crt
- Update the CA certificates on your machine:
update-ca-trust
- Restart the Docker service:
systemctl restart docker
By following these steps, you’ve configured Docker and your local machine to trust the self-signed certificate used by your Harbor instance. This is crucial for secure interactions between Docker and Harbor.
Testing:
To validate that the configuration is working as expected, you can try logging in to Harbor using Docker:
docker login tkg-harbor.tanzu.domain.lab -u admin -p VMware1!
Replace admin with your Harbor username and VMware1! with your Harbor password.
If the configuration is correct, you should be able to log in to Harbor without any SSL/TLS certificate errors. If there are any issues, double-check the previous steps to ensure that the certificates have been placed in the correct directories and that the Docker service has been restarted.
17. Importing Harbor Certificates into TKGm Management Cluster
After ensuring Docker and your local machine trust the Harbor certificates, the next step is to ensure the Tanzu Kubernetes Grid Management (TKGm) cluster also trusts these certificates.
1. Define the Necessary Variables
Use the following commands to set up your Harbor domain and the base64-encoded CA certificate:
export HARBOR_DOMAIN='tkg-harbor.tanzu.domain.lab'
export HARBOR_CA_CRT=$(cat harbor.crt | base64 -w 0)
2. Prepare the YAML Patch
cat > harbor-ca-patch.yaml << EOF
- op: add
  path: "/spec/topology/variables/-"
  value:
    name: additionalImageRegistries
    value:
    - caCert: $HARBOR_CA_CRT
      host: $HARBOR_DOMAIN
      skipTlsVerify: true
EOF
3. Apply the Patch to the Management and Workload Clusters
tanzu mc kubeconfig get tkg-mgt --admin
- This command retrieves the administrative kubeconfig for the tkg-mgt management cluster using the Tanzu CLI.
- The administrative kubeconfig for the management cluster is set as the current context, allowing subsequent commands to operate on this cluster.
kubectl config use-context tkg-mgt-admin@tkg-mgt
- This command sets the current context for kubectl to the administrative context of the tkg-mgt management cluster. This ensures that subsequent kubectl commands will operate on this specific cluster.
- The kubectl command-line tool is now set to operate on the tkg-mgt management cluster.
kubectl patch cluster tkg-tmc --patch-file harbor-ca-patch.yaml --type json
- This command applies a patch to the tkg-tmc cluster. The patch is defined in the harbor-ca-patch.yaml file we created earlier and ensures that the cluster trusts the Harbor certificates.
- The tkg-tmc cluster configuration is updated to include the Harbor certificate information.
Repeat these steps for the tkg-mgt management cluster:
kubectl patch cluster tkg-mgt --patch-file harbor-ca-patch.yaml --type json -n tkg-system
4. Monitor the Redeployment
watch -n1 tanzu cluster get tkg-tmc
Wait for the workload domain to finish redeploying.
18. Tanzu Mission Control (TMC) Installation Preparation
After the workload domain has been redeployed, it’s time to prepare for the Tanzu Mission Control (TMC) installation. Follow the steps below to ensure a smooth setup:
1. Download the TMC Installation Package:
- Navigate to the VMware Tanzu Mission Control download page (you may need to log in to access this page).
- Locate and download the latest version of the TMC installation package.
2. Prepare a Directory for TMC:
Access your server using a terminal or an SSH client.
Create a directory where you'll store the TMC installation files:
mkdir /tmc
3. Upload the TMC Package to this Directory:
Use a secure file transfer method such as SCP (Secure Copy Protocol) to upload the TMC package to the /tmc directory on your server.
scp path/to/bundle-1.0.1.tar user@your-server:/tmc
4. Unpack the TMC Package:
- Navigate to the /tmc directory:
cd /tmc
Unpack the TMC package:
tar -xvf bundle-1.0.1.tar -C /tmc
5. Verify the Unpacked Files:
Ensure that the files have been unpacked correctly:
ls /tmc
19. Inserting CA Certificate into Kapp Controller
To ensure that the Kapp Controller trusts your custom certificate authority (myCA.pem), you need to insert the certificate into its configuration. Follow this step-by-step guide:
1. Retrieve the Current Kapp Controller Configuration:
First, retrieve the current configuration of the Kapp Controller:
kubectl get kappcontrollerconfig -A
kubectl get kappcontrollerconfig tkg-tmc-kapp-controller-package -o yaml >> kappTMC.yaml
kubectl get kappcontrollerconfig tkg-mgt-kapp-controller-package -n tkg-system -o yaml >> kappMGT.yaml
2. Modify the Kapp Controller Configuration File:
- Open the configuration file using a text editor:
vi kappTMC.yaml
vi kappMGT.yaml
Then, add your CA certificate to the configuration. Replace "PUT CONTENT HERE" with the actual content of your certificate:
spec:
  kappController:
    config:
      caCerts: |
        -----BEGIN CERTIFICATE-----
        PUT CONTENT HERE
        -----END CERTIFICATE-----
    createNamespace: false
    deployment:
3. Apply the Updated Configuration:
Finally, apply the changes you made to the configuration:
kubectl apply -f kappTMC.yaml
kubectl apply -f kappMGT.yaml -n tkg-system
4. Verify the Configuration:
It's a good practice to verify that the changes have been applied correctly:
kubectl get kappcontrollerconfig tkg-tmc-kapp-controller-package -o yaml
20. Pushing TMC Images to Harbor
In this section, we’ll walk through the process of pushing Tanzu Mission Control (TMC) images to your Harbor registry. This is an essential step to ensure that TMC can be deployed using images stored in your private registry.
1. Navigate to TMC Installation Directory:
- First, ensure you are in the directory where the tmc-sm binary resides. This binary is part of the TMC installation package and is used to manage images for TMC.
cd /tmc
2. Create a Project in Harbor:
curl -u "admin:VMware1!" -X POST "https://tkg-harbor.tanzu.domain.lab/api/v2.0/projects" -H "accept: application/json" -H "Content-Type: application/json" -d "{ \"project_name\": \"tmc\", \"public\": true }"
3. Execute the Push Command:
- Utilize the tmc-sm command-line tool to push the images to your Harbor registry. Here's how you do it:
./tmc-sm push-images harbor --project tkg-harbor.tanzu.domain.lab/tmc --username admin --password [YourPasswordHere]
- In this command:
  - push-images instructs tmc-sm to push images.
  - harbor specifies that the images should be pushed to a Harbor registry.
  - --project specifies the project within Harbor where the images should be stored.
  - --username and --password are used to authenticate with the Harbor registry.
Security Note: It’s advisable to avoid using passwords directly in command-line commands, as there’s a risk of them being logged or viewed by unauthorized users. Consider using more secure methods for handling credentials, such as environment variables or secure vaults. For instance, you could export your credentials as environment variables before running the command:
export HARBOR_USERNAME=admin
export HARBOR_PASSWORD=VMware1!
./tmc-sm push-images harbor --project tkg-harbor.tanzu.domain.lab/tmc --username $HARBOR_USERNAME --password $HARBOR_PASSWORD
Upon executing the above command, tmc-sm will start pushing the TMC images to the specified project in your Harbor registry. Ensure that your Harbor registry is accessible and that the credentials provided have the necessary permissions to push images to the specified project.
21. Utilizing imgpkg for Image Transfer
The imgpkg tool is instrumental in managing and transferring container images and OCI image indexes across different container registries. This section explains the command used to copy a bundle of images from the VMware TKG registry to a Harbor-based repository.
Command Syntax:
imgpkg copy --registry-ca-cert-path=/root/harbor.crt -b projects.registry.vmware.com/tkg/packages/standard/repo:v2.2.0_update.2 --to-repo tkg-harbor.tanzu.domain.lab/tmc/498533941640.dkr.ecr.us-west-2.amazonaws.com/packages/standard/repo
Command Explanation:
- imgpkg copy: This is the primary command that initiates the copying process.
- --registry-ca-cert-path=/root/harbor.crt: This flag specifies the path to the CA certificate for the registry, which is crucial for ensuring a secure communication channel with the registry.
- -b projects.registry.vmware.com/tkg/packages/standard/repo:v2.2.0_update.2: The -b flag denotes the "bundle" you wish to copy. In this case, it's pointing to a specific bundle in the VMware TKG registry.
- --to-repo tkg-harbor.tanzu.domain.lab/tmc/498533941640.dkr.ecr.us-west-2.amazonaws.com/packages/standard/repo: This flag designates the destination repository where the bundle will be copied to. It's pointing to a specific repository in your Harbor-based registry.
Execution Outcome:
By executing the provided command, you are facilitating the transfer of a specified bundle from the VMware TKG registry to your Harbor-based repository. This operation is essential for mirroring or backing up critical images or bundles, ensuring they are available in your own Harbor repository for deployment or further distribution.
22. Adding Tanzu Standard Repository to Harbor
Adding a repository to Harbor is a crucial step to ensure that the Tanzu Kubernetes Grid (TKG) can access and deploy the packages contained within the repository. Here’s how to add the Tanzu Standard repository to Harbor:
1. Adding the Repository:
- Utilizing the tanzu CLI, the Tanzu Standard repository is added to Harbor.
tanzu package repository add tanzu-standard --url tkg-harbor.tanzu.domain.lab/tmc/498533941640.dkr.ecr.us-west-2.amazonaws.com/packages/standard/repo:v2.2.0_update.2 --namespace tkg-system
2. Command Breakdown:
- tanzu package repository add tanzu-standard: This command instructs the tanzu CLI to add a package repository named "tanzu-standard".
- --url ...: Specifies the URL where the package repository is located, pointing to the Harbor instance where the Tanzu Standard package repository is stored.
- --namespace tkg-system: Designates the Kubernetes namespace (in this case, tkg-system) where the package repository will reside.
3. Verification:
- It’s crucial to verify the repository was added successfully post-execution.
Harbor GUI Verification:
- Log in to the Harbor user interface and navigate to the specified project/repository location to visually confirm the Tanzu Standard repository’s presence.
CLI Verification:
- Using the tanzu CLI, list the added repositories to confirm:
tanzu package repository list --namespace tkg-system
- If the repository named “tanzu-standard” appears in the list, it indicates the repository has been added successfully.
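You can additionally confirm that the packages shipped in the repository are now visible to the cluster:
tanzu package available list --namespace tkg-system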
23. Installing and Configuring Cert-Manager
Cert-Manager is a native Kubernetes certificate management controller which helps in issuing certificates from various issuing sources. It ensures certificates are valid and up to date, and will renew certificates at a configured time before expiry. Here’s how to install and configure cert-manager in your TKG environment:
1. Namespace Creation for Packages:
- A dedicated namespace is created to house the packages related to cert-manager.
kubectl create ns packages
2. Cert-Manager Installation:
- Utilizing Tanzu CLI to install the cert-manager package.
tanzu package install cert-manager --package cert-manager.tanzu.vmware.com --version 1.10.2+vmware.1-tkg.1 -n packages
3. Generating Base64 Encoded Strings:
- For security, the CA certificates are base64 encoded.
CA_CERT=`base64 -i myCA.pem -w0`
CA_KEY=`base64 -i myCA.key -w0`
4. Creating Issuer Configuration File:
- A configuration file issuer.yaml is created to define the Secret and ClusterIssuer resources.
cat > /root/issuer.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: local-ca
  namespace: cert-manager
data:
  tls.crt: $CA_CERT
  tls.key: $CA_KEY
type: kubernetes.io/tls
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: local-issuer
spec:
  ca:
    secretName: local-ca
EOF
5. Applying the Issuer Configuration:
- The issuer.yaml file is applied to create the necessary Kubernetes resources.
kubectl apply -f issuer.yaml
6. Verifying the ClusterIssuer:
- Ensuring the ClusterIssuer has been created successfully.
kubectl get clusterissuer
7. Verifying the Secret:
- Ensuring the Secret has been created in the cert-manager namespace.
kubectl get secret -n cert-manager
These steps ensure that cert-manager is installed and configured correctly, with a local issuer set up to issue certificates based on your CA certificate and key. This setup is crucial for managing TLS certificates within your TKG environment, ensuring secure communication between services in your Kubernetes clusters.
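If you want to smoke-test the issuer before moving on, you can request a throwaway certificate from it; this is a minimal sketch using hypothetical names (test-cert, test-cert-tls, test.tanzu.domain.lab) that you can delete afterwards:
cat > /root/test-cert.yaml << EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: test-cert          # hypothetical name, used only for this test
  namespace: packages
spec:
  secretName: test-cert-tls
  dnsNames:
  - test.tanzu.domain.lab
  issuerRef:
    name: local-issuer
    kind: ClusterIssuer
EOF
kubectl apply -f /root/test-cert.yaml
kubectl get certificate test-cert -n packages   # READY should turn True once issued
kubectl delete -f /root/test-cert.yaml          # clean up the test objects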
24. Deploy Keycloak
The steps outlined in this section of the article are aimed at deploying Keycloak on a Kubernetes cluster using Helm. Here’s a breakdown of the steps and some additional information:
- Create a Namespace for Keycloak:
  - This step creates a new namespace called keycloak in your Kubernetes cluster.
kubectl create ns keycloak
Create a TLS Secret for Keycloak:
- This step creates a Kubernetes secret to hold the TLS certificate and key for Keycloak.
kubectl create secret tls keycloak-tls --cert=keycloak.crt --key=keycloak.key -n keycloak
Create keycloakValues.yaml file for Helm Configuration:
- This step involves creating a keycloakValues.yaml file to customize the Helm deployment of Keycloak. The file specifies various configuration options including the image registry, admin credentials, TLS settings, and service settings.
vi keycloakValues.yaml
global:
  imageRegistry: ""
auth:
  adminUser: admin
  adminPassword: VMware1!
tls:
  enabled: true
  autoGenerated: false
  existingSecret: "keycloak-tls"
  usePem: true
service:
  type: LoadBalancer
  loadBalancerIP: 192.168.100.231
  http:
    enabled: true
  ports:
    http: "80"
    https: "443"
Install Keycloak using Helm:
- This step installs Keycloak using Helm and the keycloakValues.yaml file you created in the previous step.
helm install keycloak bitnami/keycloak -f keycloakValues.yaml -n keycloak
Verify the Installation of Keycloak:
- This step verifies the installation of Keycloak by listing the pods and services in the keycloak namespace.
kubectl get pods,svc -n keycloak
These steps should result in a running instance of Keycloak in your Kubernetes cluster, configured according to the settings specified in the keycloakValues.yaml file. The TLS secret created in step 2 secures the Keycloak instance, and the service is exposed as a LoadBalancer with the specified IP address.
25. Keycloak Configuration:
- Log into the Administration Console:
  - Use the username and password defined in the keycloakValues.yaml file during the Helm installation to log into the Keycloak Administration Console.
- Creating a New Client Scope:
  - Navigate to Client scopes in the left menu, click Create client scope, name it groups, and select Default under type.
  - After saving, navigate to the Mappers tab of the groups client scope, click Add predefined mapper, search for, and add the predefined groups mapper.
  - Click on groups under the Mapper section, enable Add to userinfo, and click Save.
- Creating Realm Roles:
  - Navigate to Realm roles in the left menu, click Create role, and create two roles named tmc:admin and tmc:member.
- Creating Users:
  - Create two users named admin-user and member-user, and set passwords for each.
- Creating Groups:
  - Create two groups named tmc:admin and tmc:member.
- Assigning Users to Groups and Roles:
  - Add admin-user to the tmc:admin group and map the group to the corresponding tmc:admin realm role.
  - Similarly, add member-user to the tmc:member group and assign the tmc:member realm role.
- Adding a New Client:
  - Navigate to Clients in the left menu, click Create client, choose OpenID Connect as the client type, and provide the client ID and name tmc-sm.
  - Enable client authentication and authorization.
  - Set the home URL to https://pinniped-supervisor.<tmc-dns-zone>/provider/pinniped/callback.
  - Specify the following Valid redirect URIs:
    - https://pinniped-supervisor.<tmc-dns-zone>/provider/pinniped/callback
    - https://pinniped-supervisor.<tmc-dns-zone>
    - https://<tmc-dns-zone>
  - Ensure the groups client scope has been added to the newly created tmc-sm client.
  - Lastly, note down the client secret under the Credentials tab for future reference with TMC Self-Managed.
Notes:
- The steps for creating users, groups, and roles are straightforward, but it’s crucial to follow the naming conventions as specified for integration with TMC Self-Managed.
- The client configuration in Keycloak is crucial for enabling the integration with TMC Self-Managed. Ensure that the URLs are correctly configured to match your environment, especially the <tmc-dns-zone> placeholder.
- The client secret generated in the last step is crucial for configuring the integration with TMC Self-Managed, so it should be stored securely.
26. Tanzu Mission Control Installation Guide
In this final part of the installation guide, you are setting up Tanzu Mission Control (TMC) in your environment. Here’s a breakdown of the steps and some additional information:
Configuration File:
- Create a configuration file named tanzu-values.yaml:
  - This file contains the necessary configuration values for the TMC installation.
harborProject: tkg-harbor.tanzu.domain.lab/tmc
dnsZone: tkg-tmc.tanzu.domain.lab
clusterIssuer: local-issuer
postgres:
  userPassword: VMware1!
  maxConnections: "300"
minio:
  username: root
  password: VMware1!
contourEnvoy:
  serviceType: LoadBalancer
  serviceAnnotations:
    ako.vmware.com/load-balancer-ip: "192.168.100.239"
oidc:
  issuerType: "pinniped"
  issuerURL: https://tkg-keycloak.tanzu.domain.lab/realms/master
  clientID: tmc-sm
  clientSecret: [KeycloakSecret]
trustedCAs:
  custom-ca.pem: |
    -----BEGIN CERTIFICATE-----
    PUT CONTENT HERE
    -----END CERTIFICATE-----
  harbor-ca.pem: |
    -----BEGIN CERTIFICATE-----
    PUT CONTENT HERE
    -----END CERTIFICATE-----
  keycloak-ca.pem: |
    -----BEGIN CERTIFICATE-----
    PUT CONTENT HERE
    -----END CERTIFICATE-----
Create the Namespace:
Set up a dedicated namespace for Tanzu Mission Control:
kubectl create ns tmc-local
Add the Package Repository:
Add the Tanzu Mission Control package repository:
tanzu package repository add tanzu-mission-control-packages --url "tkg-harbor.tanzu.domain.lab/tmc/package-repository:1.0.1" --namespace tmc-local
Install Tanzu Mission Control:
Install Tanzu Mission Control using the provided package:
tanzu package install tanzu-mission-control -p "tmc.tanzu.vmware.com" --version "1.0.1" --values-file tanzu-values.yaml -n tmc-local
After a few minutes, TMC Self-Managed is ready.
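To confirm that everything has reconciled, check the package install status and the pods in the tmc-local namespace:
tanzu package installed get tanzu-mission-control -n tmc-local
kubectl get pods -n tmc-local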
After logging in, you can attach your first cluster.
Log in to Tanzu Mission Control:
- Access the TMC console and log in with your credentials.
Navigate to the Clusters Section:
- Go to the “Clusters” section within TMC.
Add New Cluster:
- Click on “Add Cluster” or a similar button to initiate the process of attaching a new cluster.
Provide Cluster Information:
- Enter the name of your cluster and any other required information.
Retrieve Attachment Command:
- TMC will generate a command for you to run on your cluster. This command will typically look something like:
kubectl create -f "https://tkg-tmc.tanzu.domain.lab/installer?id=e96d7dbe4f655931f7cd7e8a2997382dede2e59d603c816657fbaf04e6196950&source=attach"
Run the Command on Your Cluster:
- Copy the command provided by TMC.
- Access your cluster (e.g., via kubectl).
- Paste and run the command on your cluster, as sketched below.
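A minimal sketch, using the tkg-tmc workload cluster context created earlier and the example installer URL from the previous step (the URL generated for your cluster will differ):
kubectl config use-context tkg-tmc-admin@tkg-tmc
kubectl create -f "https://tkg-tmc.tanzu.domain.lab/installer?id=e96d7dbe4f655931f7cd7e8a2997382dede2e59d603c816657fbaf04e6196950&source=attach"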
Verify Attachment:
- Go back to the TMC console.
- Verify that your cluster is now listed and that TMC is able to retrieve information from your cluster.
Configure Additional Settings (Optional):
- Once your cluster is attached, you may want to configure additional settings, policies, or workspaces in TMC as per your requirements.
In Tanzu Mission Control (TMC), when you want to attach a management cluster (as opposed to a workload cluster), the process is somewhat similar but with a few distinctions. The management cluster is where your Tanzu Kubernetes Grid (TKG) or other Kubernetes management components are running. Attaching a management cluster to TMC allows you to manage not only the cluster itself but also any workload clusters provisioned by it.
Here’s a simplified step-by-step process on how you might go about attaching a management cluster to TMC:
- Log in to Tanzu Mission Control:
- Access the TMC console and log in with your credentials.
- Navigate to the Administration Section:
- Go to the “Administration” section within TMC.
- Add New Management Cluster:
- Click on "Register Management Cluster".
- Provide Cluster Information:
- Enter the name of your management cluster and any other required information.
- Retrieve Attachment Command:
- TMC will generate a command for you to run on your management cluster. This command will typically look something like:
kubectl apply -f "https://tkg-tmc.tanzu.domain.lab/installer?id=fdc8674a3b32844fd790ff52535195c62b72bcd62b83bc6c4b323f8aba1b245a&source=registration&type=tkgm"
Run the Command on Your Management Cluster:
- Copy the command provided by TMC.
- Access your management cluster (e.g., via kubectl).
- Paste and run the command on your management cluster.
kubectl config use-context tkg-mgt-admin@tkg-mgt
kubectl apply -f "https://tkg-tmc.tanzu.domain.lab/installer?id=fdc8674a3b32844fd790ff52535195c62b72bcd62b83bc6c4b323f8aba1b245a&source=registration&type=tkgm"
Verify Attachment:
- Go back to the TMC console.
- Verify that your management cluster is now listed and that TMC is able to retrieve information from your management cluster.
Provision/Manage Workload Clusters:
- With the management cluster attached, you can now use TMC to provision new workload clusters or manage existing workload clusters that were provisioned by this management cluster.
In this article, we explored the significance of efficient Kubernetes management in today’s cloud-centric landscape. We introduced VMware’s Tanzu Kubernetes Grid Multicloud (TKGm) as a potent solution for achieving a seamless, secure, and integrated Kubernetes environment across diverse platforms. As organizations upscale their Kubernetes clusters, the indispensable role of a centralized management platform is highlighted, with VMware Tanzu Mission Control (TMC) emerging as a viable solution. We delved into a detailed walkthrough of installing Tanzu Mission Control on Tanzu Kubernetes Grid Multicloud, aiming to simplify Kubernetes cluster management while enhancing visibility, security, and control over the entire infrastructure. Additionally, we covered essential steps in setting up a bootstrap machine, installing requisite tools, and ensuring secure communications within the network. Through this guide, readers are equipped with the knowledge to harness the combined potential of Tanzu Mission Control and Tanzu Kubernetes Grid Multicloud for a streamlined Kubernetes management experience.
NOW LET’S HAVE FUN
P.S. If you have questions, please reach out to me.