Pranay Shah
Introduction

This WebFOCUS CE demo explores concepts related to securing WebFOCUS, focusing primarily on network traffic. Our objective is to secure both the traffic entering the WebFOCUS setup and the traffic circulating within it. Discover how to fortify your WebFOCUS Container Edition on Kubernetes, ensuring that every byte of your data remains secure, whether it is flowing between components or streaming from browsers to the cluster.

TL;DR

A summary of this article and the videos that accompany it:

- Securing ingress and egress traffic: Strategies for securing both incoming (ingress) and outgoing (egress) network traffic, emphasizing SSL/TLS termination at the Ingress controller for incoming traffic and mTLS/SSL for communication with external services, so data is protected in transit.
- Pods and multi-container Pods: An explanation of Kubernetes Pods and multi-container Pods, emphasizing how tightly coupled containers can share an execution environment and resources, improving operational efficiency and security.
- Service mesh implementation with Linkerd: Deploying the Linkerd service mesh to strengthen security within the WebFOCUS environment. Linkerd simplifies SSL configuration across all components, enabling encrypted internal communication and improving the overall security posture.
- Verification and debugging techniques: Methods for verifying the security measures, including Linkerd's CLI tools and Wireshark traffic inspection, to confirm that data transmission within the cluster is encrypted and protected against potential threats.

Video 1

Understanding WebFOCUS Component Interconnectivity

In typical network configurations, traffic falls into two main categories: East/West traffic between WebFOCUS components within the data center, and North/South traffic entering or exiting the data center. Let's look at a high-level schematic of how these components interact and the paths data takes within and outside the data center.

Overview of interconnectivity of WebFOCUS components

Once you finish deploying WebFOCUS CE, whether on a local cluster or a managed Kubernetes cluster, the resulting configuration resembles the following:

WebFOCUS CE, often described as a "batteries-included" deployment, is self-sufficient and provides all necessary components out of the box, such as the database server, Solr and ZooKeeper for search functionality, and etcd for centralized configuration storage.

In traditional on-premise setups, securing the main WebFOCUS components, such as the application server and reporting server, covers most inbound user traffic - roughly 99% of the major traffic entering WebFOCUS. A similar setup is possible in Kubernetes-based environments, but it is not optimal because it violates Infrastructure as Code (IaC) principles: certificates would have to be embedded in container images at build time. In Kubernetes and other cloud-based deployment practices, it is crucial to promote identical setups consistently from development to production environments. Consequently, certificates intended solely for production environments should not be used in the development or pre-production stages.
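One pattern consistent with this principle is to keep certificates out of container images entirely and supply them per environment as Kubernetes Secrets. A minimal sketch (the secret name and file names here are hypothetical, not part of the WebFOCUS CE setup):

# Supply an environment-specific certificate as a Secret instead of baking it into the image
kubectl create secret tls wf-demo-tls --cert=fullchain.pem --key=privkey.pem -n webfocus

The same manifests can then be promoted unchanged from development to production while each environment provides its own Secret.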
Moreover, certificate management, including renewal, introduces its own complexities. The CNCF community has addressed this challenge, as we will see below.

What is the challenge here?

The challenge lies in securing numerous components, each requiring its own SSL setup. Custom SSL configurations are feasible, but they are time-consuming and cumbersome. This problem has been recognized by the CNCF community and addressed by several of its member projects. One solution is to adopt a service mesh, a concept we will discuss shortly.

Video 2

Securing Ingress and Egress Traffic (North/South traffic)

Before proceeding further, let's discuss securing incoming and outgoing (ingress/egress) traffic, commonly referred to as North/South traffic.

Securing ingress traffic: To safeguard incoming traffic, use an Ingress controller and terminate the SSL certificate there. Alternatively, you can apply mTLS/SSL directly on your cloud load balancer, ensuring encrypted communication from that point onward.

Securing egress traffic: Outgoing traffic from the cluster, such as connections to SMTP servers, data lakes, business/partner databases, or Active Directory, is assumed to be secured by the respective vendors. When WebFOCUS communicates with these external services, it is recommended to initiate mTLS/SSL communication to ensure secure data transmission.

Securing Components with Service Mesh (Linkerd)

Securing components running in the cluster is an optional step, especially in scenarios such as an "air gap" setup where traffic is restricted from entering or leaving the cluster. However, if you want to strengthen security by encrypting traffic across all endpoints, a service mesh such as Linkerd provides a straightforward solution.

How can a service mesh (Linkerd) help? Linkerd simplifies enabling SSL across all endpoints, offering an experience akin to pressing a button or executing a single command. In this demonstration, we will see how Linkerd secures all endpoints and components within the WebFOCUS cluster.

What is a service mesh? A service mesh in Kubernetes is a dedicated infrastructure layer that facilitates communication between microservices within a cluster. It abstracts away the complexities of service-to-service communication, providing features such as load balancing, traffic management, service discovery, and encryption. By deploying a service mesh like Istio or Linkerd, developers can improve observability, reliability, and security without modifying application code. It improves the management and monitoring of microservices architectures, giving better control over network traffic and interactions between services.

What is Linkerd? How does it work? Linkerd is a service mesh for Kubernetes. It makes running services easier and safer by giving you runtime debugging, observability, reliability, and security, all without requiring any changes to your code. By abstracting away the complexity of securing individual endpoints, the Linkerd service mesh provides centralized control and visibility, simplifying the task of securing all endpoints in a Kubernetes cluster.
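As a side note, meshing can also be enabled declaratively by annotating a namespace so that Linkerd's proxy injector adds the sidecar to Pods automatically; this demo instead pipes manifests through the linkerd inject CLI later on. A minimal sketch of the annotation approach, assuming the webfocus namespace:

# Annotate the namespace so new or restarted Pods get the Linkerd proxy injected automatically
kubectl annotate namespace webfocus linkerd.io/inject=enabled

# Restart the workloads so the annotation takes effect (example for StatefulSets)
kubectl rollout restart statefulset -n webfocus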
How it works

Linkerd works by deploying ultralight, transparent "micro-proxies" alongside each service instance; these proxies efficiently manage inbound and outbound traffic for the service. They act as highly instrumented network stacks, seamlessly integrating with the control plane for telemetry and control.

Understanding Pods and multi-container Pods

Kubernetes Pods, the smallest deployable units in Kubernetes, encapsulate one or more containers along with shared resources such as storage volumes and networking interfaces. Pods serve as the basic building blocks of applications, facilitating easy scaling and management within Kubernetes clusters. A multi-container Pod co-locates multiple tightly coupled containers within the same Pod, enabling them to share the same execution environment and resources. This approach promotes efficient communication and coordination between containers, simplifying deployment and management while maintaining a cohesive application architecture.

Adding Linkerd to our Kubernetes cluster

Note: It is recommended that you try this demo on non-production clusters first. The demo uses imperative commands to interact with the cluster, which is convenient for a demo but does not strictly adhere to Infrastructure as Code (IaC) principles. Once you are comfortable with the commands, consider scripting them or following the Linkerd production deployment guidelines at linkerd.io/going-to-production.

Video 3: Eavesdropping on ReportCaster

Before proceeding, let's intercept TCP traffic to confirm that data is currently transmitted in plaintext. We've chosen ReportCaster because it starts quickly, which makes repeated restarts painless. Using tshark, Wireshark's command-line tool, we will intercept incoming messages without rebuilding the ReportCaster container image. Instead, we add a sidecar container named "wireshark" that ships with the tshark tool pre-installed for capturing TCP traffic.

To run tshark, the Pod must start with root privileges (user ID 0). We therefore use a patch command to update the runAsUser field in the security context of the reportcaster StatefulSet in the webfocus namespace, setting the user ID to 0.

# This `kubectl patch` command updates the `runAsUser` field in the security context of the `reportcaster` StatefulSet in the `webfocus` namespace to set the user ID to 0 (root user).
kubectl patch statefulset reportcaster -n webfocus --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/securityContext/runAsUser", "value": 0}]'

Now we can add the sidecar container with another patch command.

# This `kubectl patch` command modifies the `reportcaster` StatefulSet in the `webfocus` namespace by adding a new container named `wireshark` with the specified image.
kubectl patch statefulset reportcaster -n webfocus --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/-", "value": {"name": "wireshark", "image": "cr.l5d.io/linkerd/debug:edge-24.3.2"}}]'

After a few seconds, the reportcaster-0 Pod restarts with both containers - a multi-container Pod exactly as described above.
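For illustration, the relevant part of the patched Pod template now looks roughly like this (a trimmed sketch; the original reportcaster container fields are placeholders, only the pieces added by the patches are exact):

# Sketch of the reportcaster Pod template after the two patches (trimmed for illustration)
spec:
  template:
    spec:
      securityContext:
        runAsUser: 0           # set to root so tshark can capture traffic (demo only)
      containers:
        - name: reportcaster    # existing ReportCaster container (image and settings omitted)
          # ...
        - name: wireshark       # debug sidecar added by the second patch
          image: cr.l5d.io/linkerd/debug:edge-24.3.2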
We can now inspect TCP traffic. The command below runs tshark inside the "wireshark" container; the -f "tcp" filter captures TCP packets and the -x option prints packet payloads in hexadecimal and ASCII format.

# Wait for the Pod to be ready with both containers
kubectl wait --for=condition=ready pod/reportcaster-0 --namespace=webfocus --timeout=60s

Make sure the above command finishes before proceeding to the next step.

# This command executes `tshark` within the `wireshark` container of the `reportcaster-0` pod in the `webfocus` namespace, capturing TCP traffic in hexadecimal format from all interfaces.
kubectl exec -it -n webfocus reportcaster-0 -c wireshark -- tshark -i any -f "tcp" -x

Sample output:

This confirms that data is transmitted in clear text; we will keep this sidecar container running for a while.

Updating WebFOCUS Components with Service Mesh

Important note: Exercise caution while executing the following steps, as they restart the entire WebFOCUS setup, resulting in temporary downtime for your cluster. Some of the steps below also run a few of the Pods as the root user - this is for debug/demo purposes only. Once the debug step is over, revert to running as a non-root user.

1. Install the Linkerd CLI: Use the helper script to install the Linkerd CLI on your Ubuntu system.

# Install linkerd
curl -sL https://run.linkerd.io/install | sh

# Add linkerd to the PATH
export PATH=$PATH:$HOME/.linkerd2/bin

# Check the version
linkerd version

Run a check to see whether Linkerd can be installed in the cluster.

linkerd check --pre

Sample output:

2. Deploy the control plane: Use the following steps to deploy the Linkerd control plane to your Kubernetes cluster (ensure that your kubeconfig has cluster admin privileges).

# Install the CRDs
linkerd install --crds >crds.yaml
kubectl apply -f crds.yaml

# Install linkerd - you might not need the runAsRoot option
linkerd install --set proxyInit.runAsRoot=true >linkerd.yaml
kubectl apply -f linkerd.yaml

3. Optional dashboard installation: Optionally, install the Linkerd dashboard, which provides a visual representation of data plane traffic, enabling you to monitor connections made by each pod within the cluster.

# Install the dashboard (you might have to wait a while)
linkerd viz install >linkerd-viz.yaml
kubectl apply -f linkerd-viz.yaml

4. Dashboard configuration update: Customize the dashboard configuration to allow access via your Fully Qualified Domain Name (FQDN), so it is reachable from your laptop. Additionally, expose port 8084 of the dashboard for external access.

# Update the viz service "web" to allow the FQDN - not just localhost
kubectl get -n linkerd-viz deployments.apps web -o json | sed "s/localhost/wfce02.ibi.systems/g" | kubectl replace -f -

# Expose the dashboard to the outside world (demo only)
kubectl -n linkerd-viz port-forward svc/web 8084:8084 --address 0.0.0.0 &

5. Meshing WebFOCUS components: In WebFOCUS Container Edition (WF-CE), Kubernetes StatefulSets manage most components. Apply the Linkerd proxy (sidecar) to all StatefulSets to ensure secure traffic transmission. Note that this process causes a brief outage of approximately 3 to 4 minutes while the components restart.
# Inject the Linkerd proxy into all WebFOCUS StatefulSet Pods
kubectl get -n webfocus sts -o yaml | linkerd inject - | kubectl apply -f -

# Wait up to 3 minutes for all Pods to restart
kubectl get pods -n webfocus --field-selector=status.phase!=Succeeded,status.phase!=Failed -o name | xargs -I {} kubectl wait --for=condition=ready {} --namespace=webfocus --timeout=180s

Note: If other pods are managed by Deployments (e.g., ibi DSML) and are not covered by the initial command, rerun the command with Deployments instead of StatefulSets.

Verification of Security Measures

1. Using the Linkerd CLI: Use the "viz" command of the Linkerd CLI to confirm that all WebFOCUS components are meshed securely.

linkerd viz -n webfocus edges sts
linkerd viz -n webfocus edges pods

2. Via the Linkerd dashboard: Access the Linkerd dashboard to visually inspect and validate that traffic is meshed as expected.

3. Using Wireshark: For an additional layer of validation, inspect traffic again with Wireshark. With the wireshark sidecar still running inside the ReportCaster Pod, execute the same command as before and confirm that all observed traffic is now encrypted.

# Command that shows all traffic for ReportCaster
kubectl exec -it -n webfocus reportcaster-0 -c wireshark -- tshark -i any -f "tcp" -x

Sample output:

These verification techniques confirm that all WebFOCUS component traffic is safeguarded after the Linkerd meshing is applied.

Securing Communication from Ingress to Application Servers

Video 4

What about Ingress and TLS termination?

Ingress controllers serve as reverse proxies that manage external traffic and can implement mTLS/SSL encryption on their endpoints. This ensures that communication from the user's browser to the Ingress controller is secure and terminates at the controller itself. However, traffic between the Ingress controller and the application server remains unencrypted.

Inspecting clear-text traffic with Wireshark: To inspect this clear-text traffic, we can again use a Wireshark sidecar container, this time leveraging Linkerd's built-in debug sidecar (the additional command switch `--enable-debug-sidecar`). By injecting a debug container alongside the application server, we can eavesdrop on incoming traffic. The process uses Linkerd's helper command to inject the debug sidecar, which includes Wireshark for traffic analysis.

Restricting Wireshark to the Ingress controller IP: To focus the capture solely on traffic originating from the Ingress controller, we first retrieve the Ingress controller's IP address and then use it in the tshark command to filter traffic from that IP.
# Change runAsUser to 'root' so wireshark can run
kubectl patch statefulset appserver -n webfocus --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/securityContext/runAsUser", "value": 0}]'

# Add a wireshark sidecar using the linkerd inject switch --enable-debug-sidecar
kubectl get -n webfocus sts appserver -o yaml | linkerd inject --enable-debug-sidecar - | kubectl apply -f -

# Wait up to 3 minutes for the Pod to come up
kubectl wait --for=condition=ready pod/appserver-0 --namespace=webfocus --timeout=180s

# Get the Ingress controller Pod's IP
INGRESS_CONT_IP=$(kubectl get pod -n ingress-nginx -l app.kubernetes.io/component=controller -o jsonpath='{.items[*].status.podIP}')

# Check what kind of traffic is coming to the App server from the Ingress controller Pod
kubectl exec -it -n webfocus appserver-0 -c linkerd-debug -- tshark -i any -f "src host $INGRESS_CONT_IP" -x

Sample output:

We can see that data is transmitted between the Ingress controller and the App server in clear text.

Meshing the Ingress Controller

To address the clear-text traffic between the Ingress controller and the application server, we need to mesh the Ingress controller deployment as well. Meshing integrates Linkerd with the Ingress controller to establish a secure communication channel. Once meshed, all traffic between the Ingress controller and the application server is encrypted, enhancing overall security.

kubectl get -n ingress-nginx deploy -o yaml | linkerd inject - | kubectl apply -f -

Use the command below to check that the Ingress controller Pod returned successfully.

kubectl rollout status deployment.apps/ingress-nginx-controller -n ingress-nginx --timeout=180s

Inspecting Encrypted Traffic with Wireshark

Now that the Ingress controller Pod is also meshed, running the same tshark command again shows that all traffic between the Ingress controller Pod (reverse proxy) and the application server is encrypted.

INGRESS_CONT_IP=$(kubectl get pod -n ingress-nginx -l app.kubernetes.io/component=controller -o jsonpath='{.items[*].status.podIP}')
kubectl exec -it -n webfocus appserver-0 -c linkerd-debug -- tshark -i any -f "src host $INGRESS_CONT_IP" -x

Sample output:

We can see that the data is now encrypted as it flows between the Ingress controller and the App server Pod.

Using Linkerd's Visualization Command

Linkerd offers a CLI command called "viz" that visualizes the communication between pods across namespaces. By running the "viz" command, we can verify the secure communication between the Ingress controller (in the ingress-nginx namespace) and the application server (appserver-0 in the webfocus namespace), ensuring end-to-end security, as shown in the sketch below.
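For example (a sketch; the exact invocation shown in the video may differ), listing the edges of the meshed Ingress controller deployment reveals its secured connections into the webfocus namespace:

# List edges (secured connections) for the Ingress controller deployment
linkerd viz -n ingress-nginx edges deploy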
Implementing these measures establishes a robust security framework, safeguarding communication at various points within the cluster architecture.

Conclusion and Final Thoughts

Throughout this tutorial, we explored securing WebFOCUS within Kubernetes environments, emphasizing network traffic security and implementing a service mesh with Linkerd. This approach not only simplifies SSL configuration management but also ensures encrypted communication across all components, enhancing the overall security posture. Equipped with these methodologies, we are better prepared to protect our deployments against emerging threats.

Key takeaways include:

- Understanding the importance of securing both ingress and egress traffic to prevent unauthorized access and data breaches.
- Implementing Linkerd as a service mesh to simplify SSL configuration and secure communication across Kubernetes nodes.
- Using the flexibility of Kubernetes Pods to add security measures without hindering functionality.
- The role of encrypted communication channels in safeguarding data in transit within the WebFOCUS environment.

As we wrap up, it's clear that securing WebFOCUS involves a combination of strategic planning, an understanding of Kubernetes' inner workings, and the judicious application of service meshes like Linkerd. The skills and knowledge acquired here should empower you to fortify your deployments, making them resilient against the evolving threats in today's digital landscape.

Looking forward, I encourage you to explore Kubernetes security practices further, delve deeper into service mesh architectures, and continue refining your cybersecurity approach. This tutorial has laid the groundwork, but the journey to comprehensive security in WebFOCUS CE is ongoing. Remember, securing your infrastructure is not a one-time effort but a continuous process of learning, adapting, and implementing best practices.

Cleanup of debugging Pods

We need to clean up the two extra sidecars that we added to two of our StatefulSets - one for reportcaster and the other for appserver.

Remove the wireshark sidecar and the run-as-root setting from the reportcaster StatefulSet:

kubectl patch statefulset reportcaster -n webfocus --type='json' -p='[{"op": "remove", "path": "/spec/template/spec/containers/1"}]'
kubectl patch statefulset reportcaster -n webfocus --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/securityContext/runAsUser", "value": 1000}]'

Remove the debug sidecar from appserver and the run-as-root setting:

kubectl patch statefulset appserver -n webfocus --type=json -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/config.linkerd.io~1enable-debug-sidecar"}]'
kubectl patch statefulset appserver -n webfocus --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/securityContext/runAsUser", "value": 1000}]'
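After applying these patches, you may want to confirm that both StatefulSets roll out cleanly before using the environment again; a minimal check, assuming the workload names used in this demo:

# Wait for the reverted StatefulSets to finish rolling out
kubectl rollout status statefulset/reportcaster -n webfocus --timeout=180s
kubectl rollout status statefulset/appserver -n webfocus --timeout=180s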
----

In this demo, we begin with the default setup of WebFOCUS CE 1.2.0 (WF 9.2) and assign a Fully Qualified Domain Name (FQDN) to the host running this WF-CE setup. We then install an Ingress controller to allow access to the Application Server via the standard port 80 rather than the default port 31080. The video concludes with installing an SSL certificate to secure the Application Server's endpoint with TLS.

High-level steps:

- Begin by deploying the standard configuration of WebFOCUS CE as provided.
- Ensure that the setup is accessible via port 31080, which is the default port.
- Deploy an Ingress controller and create an Ingress resource in the webfocus namespace to allow access over port 80.
- Add a secret containing a TLS/SSL certificate to the webfocus namespace and modify the Ingress resource to use this secret for secure connections.
- Access the WebFOCUS configuration securely over HTTPS (port 443).
- (Optional) Consider deactivating port 31080 to prevent access through the unsecured port.

Out-of-the-box setup

Once the WebFOCUS CE setup finishes deploying all components, you should be able to reach the WF App server on port 31080, for example:

# Use nc to check that the default NodePort is open
nc -zv x.1.10.96 31080

If the above succeeds, you can also access the WebFOCUS App server GUI in a browser at the URL: http://x.1.10.96:31080

Install the NGINX Ingress controller

In the previous topic, we saw that we had to access WebFOCUS using port 31080. What if we want to access it over port 80, or without specifying a port at all? For that, we need to install an Ingress controller in the Kubernetes cluster; in this case, we will use NGINX. Install the Ingress controller with the commands below.

# Label all nodes to allow the Ingress controller to run
kubectl label nodes --all ingress-ready=true

# Install the NGINX Ingress controller, which attaches the controller Pod to ports 80 and 443 on the node
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

# Wait for all Ingress controller pods to come up
kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=90s

After the Ingress controller is running, run the nc command again to check whether port 80 is open:

nc -zv x.1.10.96 80

If the command succeeds, NGINX Ingress is running on port 80. The next step is to create an Ingress object in the webfocus namespace so we can access WebFOCUS on port 80.

From now on, we will access the machine by its FQDN - in this example, wfce02.ibi.systems. We assume you have something similar in your environment; if not, ask your system administrator to configure an FQDN for your VM/machine. In our case, re-running the above check against the FQDN looks like this:

# Use nc to check if port 80 is open now
nc -zv wfce02.ibi.systems 80
>> Connection to wfce02.ibi.systems (x.241.1.29) 80 port [tcp/http] succeeded!

Here we assume the FQDN "wfce02.ibi.systems" points to the correct IP of the machine where WF CE is running (in this case, x.241.1.29). If the nc command succeeds, we are ready for the next step.

Create an Ingress object in the webfocus namespace

Save the text below as a file named "appserver-ingress.yaml"; as you can see, we are now using the FQDN wfce02.ibi.systems to set up the Ingress rules. This file also assumes your WF-CE setup is running in the namespace "webfocus".
Note: make changes as needed before you apply it.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    meta.helm.sh/release-name: appserver
    meta.helm.sh/release-namespace: webfocus
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/app-root: /webfocus
    nginx.ingress.kubernetes.io/client-body-buffer-size: 64k
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/proxy-body-size: 200m
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/session-cookie-change-on-failure: "true"
    nginx.ingress.kubernetes.io/session-cookie-expires: "28800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "28800"
    nginx.ingress.kubernetes.io/session-cookie-name: sticknesscookie
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
  labels:
    app.kubernetes.io/instance: appserver
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: appserver
    app.kubernetes.io/version: "1.0"
    helm.sh/chart: appserver-0.1.0
  name: appserver
  namespace: webfocus
spec:
  rules:
  - host: wfce02.ibi.systems
    http:
      paths:
      - backend:
          service:
            name: appserver
            port:
              name: port8080
        path: /
        pathType: ImplementationSpecific

Apply the above file to the cluster (for example, kubectl apply -f appserver-ingress.yaml). This creates an Ingress rule in the Ingress controller that takes any HTTP request arriving on port 80 with the HTTP Host header set to "wfce02.ibi.systems" and forwards it to the Kubernetes service named "appserver" on port 8080. You should now be able to access the WebFOCUS App server GUI via the URL: http://wfce02.ibi.systems

Securing the endpoint with SSL

As you can see, the above URL uses http://, which is not secure. We want to enable SSL so that we can access the WebFOCUS GUI over URLs starting with https://. For this, we need certificates generated for our FQDN - in this case 'wfce02.ibi.systems'. Typically you will receive two PEM files: one named "privkey.pem" (the key) and the other "fullchain.pem" (the certificate). You can inspect the "fullchain.pem" file to confirm that it was indeed issued for the FQDN you use (a wildcard certificate is also fine); for this you will need the OpenSSL tool installed on your machine.

First, we create a Kubernetes secret from these two files in the same 'webfocus' namespace. Once the secret has been created, the only thing left is to update the Ingress object in the webfocus namespace to use this secret and enable TLS/SSL. Let's update the appserver-ingress.yaml file to use the secret (wfce02-ibi-tls) that we created, adding the TLS lines at the end of the file - see the sketch after this step.
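The exact secret-creation command and TLS lines are not reproduced in the original post; the following is a minimal sketch of what they typically look like, using the names from this example (wfce02-ibi-tls, wfce02.ibi.systems, privkey.pem, fullchain.pem) - adjust them to your environment.

# Create a TLS secret in the webfocus namespace from the certificate and key files
kubectl create secret tls wfce02-ibi-tls --cert=fullchain.pem --key=privkey.pem -n webfocus

# Lines to append to appserver-ingress.yaml under spec: so the Ingress serves TLS
  tls:
  - hosts:
    - wfce02.ibi.systems
    secretName: wfce02-ibi-tls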
Re-apply the file to the cluster - this updates the Ingress object to support SSL (port 443). If all goes as expected, you should now be able to access WebFOCUS over HTTPS: https://wfce02.ibi.systems

(Optional) Disable port 31080

Since we now have a secure way to access the WebFOCUS App server over SSL, we no longer need to reach it over port 31080. At the beginning of this demo, we saw that we could access the WebFOCUS App server GUI over port 31080, but that is now unnecessary because we can use the secure port 443. So it makes sense to disable port 31080; to do that, change the appserver Service (svc) from type NodePort to type ClusterIP, as sketched below.
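The original post does not include the command itself; a minimal sketch of one way to do this, assuming the service is named appserver in the webfocus namespace:

# Change the appserver Service from NodePort to ClusterIP so port 31080 is no longer exposed
kubectl patch svc appserver -n webfocus -p '{"spec": {"type": "ClusterIP"}}'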