
  • Our picks

    • What is ibi Data Intelligence?
      Our platform is a unified data management solution that offers a comprehensive suite of tools for data integration, application integration and data quality improvement, empowering businesses to unlock the full potential of their data.
      What can I accomplish with ibi Data Intelligence?
      We understand that successfully leveraging data requires collaboration between the business and technology (IT) teams within organizations. The platform provides those teams with key features and capabilities, such as:

      Data Integration: Seamlessly connect disparate data sources, facilitate data movement, transformation and consolidation for a unified view.


      Application Integration: Facilitate communication between different applications to orchestrate, automate & streamline information exchange.


      Data Quality Improvement: Implement robust data quality checks & transformations to verify, cleanse & enrich your data to ensure high accuracy and reliability.






      These capabilities give organizations the functions they need to solve a variety of use cases, regardless of industry, including:

      Enterprise Data Integration:  Large enterprises managing diverse data environments need to unify data from multiple sources for comprehensive business intelligence.


      Application Data Exchange: Organizations with multiple interdependent software applications need to ensure real-time data consistency across business applications.


      Data Quality Management: Businesses prioritizing data accuracy for compliance and reporting need to implement data cleansing & validation to ensure data integrity.




      How can I get started with ibi Data Intelligence?
      We recommend reaching out to your ibi account team and connecting with an Account Technology Strategist to review your use case and develop a solution strategy fit for your organization.
    • Data Migrator BULK UPSERT

      Over the last year ibi has made a concerted effort to improve performance for data flows upserting data to target databases.

      The effort has come in the form of updating our adapters for all databases that support merge and upsert. This allows us to support bulk upserts, improving performance significantly.

      In the example flows below, I will show a flow that uses insert/update to do upserts to a database table, and then a flow that uses a bulk upsert, pointing out the improvement in throughput.

      These first two images show a flow loading data from a table in an MS SQL Server database to a PostgreSQL database table using insert/update. This is how upserts were done in Data Migrator before bulk upserts were available.

      As you can see in the log below, the insert/update process took over 35 minutes to upsert more than 2 million rows into the target table.

      The next two images show the configuration of a flow that utilizes bulk upsert. The flow is configured to be optimized, and Bulk Load is used to merge data into the target. This leverages the MERGE capability of the database and is significantly more efficient than insert/update.

      *Note: On the Web Console, Optimize is always set on, so this setting change is unnecessary.
      Both of these flows used the same source table, and the target tables are identical. As you can see in the log below, the bulk upsert loaded over 2 million rows into the table in just over 2 minutes, a significant improvement over insert/update, which took roughly 35 minutes.
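      Under the covers, a bulk upsert leans on the database's MERGE statement (Data Migrator generates the actual SQL for you). A minimal sketch of what such a statement looks like, with hypothetical table and column names:

```sql
-- Hypothetical illustration only: Data Migrator builds the real statement.
-- Rows from the staging (source) table update matching target rows and
-- insert the rest, all in one bulk operation.
MERGE INTO target_table t
USING staging_table s
   ON (t.id = s.id)
WHEN MATCHED THEN
    UPDATE SET t.name = s.name, t.amount = s.amount
WHEN NOT MATCHED THEN
    INSERT (id, name, amount) VALUES (s.id, s.name, s.amount);
```

      A single MERGE avoids the per-row round trips of insert/update, which is where the throughput gain in the logs comes from.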


      As of 9.3 bulk upsert is supported by the following databases:

      MS Azure Synapse Analytics (formerly SQL DW)

      MS SQL Server ODBC/AzureDB

      MS SQL Server OLE DB/AzureDB

      MySQL

      Oracle

      Teradata

      Amazon Redshift

      Apache Hive

      EXASol

      Google BigQuery

      Greenplum DB

      Hyperstage (PG)

      Informix

      MariaDB

      Netezza

      PostgreSQL

      Snowflake

      Sybase

      Vertica

      Salesforce.com
    • WebFOCUS is dedicated to transforming the often-perceived complexity of Data Science and Machine Learning (DSML) into a streamlined, user-friendly experience that enhances efficiency for business users. Our approach involves meticulously refining workflows and upgrading user interfaces, ensuring that even non-technical users can easily access and leverage the powerful DSML features that WebFOCUS offers.

       

      Our UX team is at the forefront of this transformation. They engage in extensive usability testing, incorporate real-time customer feedback, and build detailed user personas that reflect a variety of real-world scenarios. This process helps ensure that our product not only meets but exceeds the diverse needs and expectations of our users. With features like instant insights, natural language query (NLQ), and robust machine learning capabilities, WebFOCUS stands out in the marketplace, providing users with sophisticated tools to extract meaningful insights from their data, thereby enabling informed decision-making.

       

      Insights  -  Insights are at the heart of informed decision-making. With our latest 9.3 release, we have taken significant strides in the way insights are generated and delivered, offering users more relevant, timely, and actionable intelligence. This empowers organizations to strategize effectively, optimizing outcomes based on solid data-driven foundations.

      Natural Language Query (NLQ)  -  The advancement of NLQ technology marks a significant shift in how users interact with data systems. WebFOCUS enhances this interaction by simplifying the user interface to support a natural, conversational exchange. Users can ask complex data-related questions in plain language and receive reports or visualizations as answers, making analytics an intuitive and integral part of their daily decision-making process.

      Predicting Data with Machine Learning  -  Predictive analytics are critical for anticipating future trends and preparing for what's next. WebFOCUS harnesses the power of machine learning to provide predictive insights, allowing users to respond to current conditions and proactively address future challenges. These capabilities enable users to identify and capitalize on opportunities, positioning them well ahead of the competition in a fast-paced market.

       

      While we pride ourselves on these advanced features, we believe that their actual value comes from how accessible and user-friendly they are. Our ongoing commitment to improving WebFOCUS focuses on iterative development based on user feedback and emerging market needs. By constantly enhancing our interface and functionalities, we aim to make DSML not just a tool but a critical, reliable ally in the pursuit of business excellence. Each progressive enhancement helps us move closer to our goal of making sophisticated data analysis a cornerstone of business strategy across industries.

Welcome to the ibi Community

See recent community activity

  • Just when you think you’ve achieved something great, maybe even perfect, the world sometimes throws you a curve. Like many other things today, design processes and product strategies seem to be in a continuous state of evolution. While methodologies and core design principles hold strong, desired outcomes and the definitions of success somehow seem to change.
    A year ago I collaborated with Angie Hildack, ibi PM Lead, on an article titled “Welcome to the new WebFOCUS!” where we set the stage for the great things we had planned for the product. As I look back at the experience vision we described in that article and consider the progress we’ve made since then, I feel we are indeed delivering on that promise, although it has not been without challenges and complexities we’ve needed to overcome. It’s become clear that attempting to solve some of the experience challenges we set out to address ended up introducing new challenges that also had to be met with careful consideration. There are times when solving one problem causes another, and then another, and another. It’s a cycle of design exploration and iteration that is not new in the world of product design, but it is unique and challenging each time it occurs. The goals and the endpoint you strive for sometimes end up being quite different when you actually get there. In life, the destination is often relatively unknown and is entirely dependent on the journey itself. And more often than not, the old saying is absolutely true: it’s the journey that matters, not the destination. Of course, in the world of product design, the destination absolutely matters. But the journey is every bit as rewarding if you look at it the right way.
    So how do we ensure the journey leads to the right destination? It’s important to periodically revisit the process, the goals, and the expected outcomes… and make adjustments accordingly as trends, audiences and needs change. With the increasing importance of designing for all user types, this heightened level of user awareness has naturally impacted the way we think and feel and design, which in turn, has altered our concept of the product design lifecycle. We plan longer release cycles with phases of incremental exploration, validation, and updates which allow more flexibility for course correction and problem solving. This provides us the best opportunity to evolve our design strategy as it unfolds while still keeping the destination, wherever and whatever it may be, in sight. 
    With WebFOCUS and the Hub in particular, we’ve spent the last few years heavily focused on reaching new audiences and leveraging modern technologies. We aim to draw greater performance and outcomes from data and provide improved reporting, visualization and collaboration capabilities for better real-time decision making. This of course is critical to our long-term strategy for growth and continued success. As for the journey… we continue to toe the line between old and new ways of working, traditional and modern ways of thinking, manual and automated workflows, back-end code and front end UI, etc. We’re working hard to connect the past with the future to ensure they complement each other with relative harmony. Can WebFOCUS be everything to everybody? Time will tell. We haven’t reached the destination yet so it’s hard to say with certainty. But we hope to come as close to this as we can. This is the journey we are on… and it is not ours alone. Through ongoing partnerships and collaboration with our customers, we travel this road together as we imagine, and re-imagine, the future of WebFOCUS. Together we will see the forest for the trees.
    We’ve come a long way this past year and continue to add key features of great value as planned. But it’s entirely possible the concept of a final destination may simply not exist. Even so, we’ll continue on this course and we’re glad to be sharing this journey with you. And as the saying goes, “a journey of a thousand miles begins with a single step”. This is very true. This past year was a great first step to realizing our vision and we’re excited about the possibilities in store for the future.
     
  • This article intends to cover what you need to secure your search in Solr and how it's connected to WebFOCUS.
    Sometimes sensitive data is stored in the WebFOCUS Repository or in the WebFOCUS Reporting Server application folders (hold files that are not temporary, for example), and Solr allows you to find almost everything under WebFOCUS. By default, the connection between Solr and WebFOCUS is not encrypted and uses the HTTP protocol. With this article, we want to enhance your security by adding an extra SSL layer to the communications between Solr and WebFOCUS: enabling SSL in Solr and in WebFOCUS, and then connecting the two. You'll find a troubleshooting section and a tips section at the end. I do recommend reading the entire article, and, of course, if you have any doubts or comments about it, we're here to help!
    To enable Solr to work over a secure connection, you’ll need a valid certificate with a fully qualified domain name (FQDN). We’ll follow all the steps to create one on your own and use it to secure and encrypt the connection between the WebFOCUS Client and the Solr service.

    Although I did this under Windows, the same steps apply to Linux. I used WSL (Windows Subsystem for Linux) on my box to perform some of the steps, but that’s not necessary if you have the required components, or if you’ve already been provided with a valid certificate and don’t need to generate a new one.

    The very first step is to create the certificate and the key for your machine. Here’s where I used WSL (CentOS) to run the OpenSSL commands. But if you don’t want to install WSL, the easiest way to get OpenSSL on your Windows box is by installing Git for Windows and running the Git Bash utility that comes with it (it opens a command prompt window in which you can execute Linux commands); PowerShell for Windows also works.

     
    If you have WSL, you may need to install the OpenSSL package using your distribution’s package manager.
    sudo apt install openssl
    Or
    sudo yum install openssl
    Note: You may want to update your repositories and packages before installing it (sudo apt update && sudo apt upgrade / sudo yum update && sudo yum upgrade). Once the package is installed, you can run the OpenSSL command that will generate the key file and the certificate file we will use later.
    To get the FQDN:
    $myFQDN = (Get-WmiObject Win32_ComputerSystem).DNSHostName + "." + (Get-WmiObject Win32_ComputerSystem).Domain
    Write-Host $myFQDN

    And to generate the key and certificate files:
    openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes -keyout keyfile.key -out certificatefile.crt -subj "/C=US/ST=Oregon/L=Portland/O=Company Name/OU=Org/CN=yourfqdn" -addext "subjectAltName=DNS:yourfqdn,DNS:*.tibco.com,IP:10.0.0.1"
    The output will be similar to these screenshots:
    You’ll need to change the values (highlighted in red) to match your company name, organizational unit, country, etc. (not strictly required, you can use the ones in the sample). Still, the most important part is the Common Name (CN), which is the one that needs to match the URL you’ll be using in your browser to access WebFOCUS, and it has to contain the FQDN.
    The key thing here is that Solr expects to have a valid domain to create a real secure connection between environments.
    Note: I’ve also added the entire tibco.com domain as a DNS entry; that way, this will work even if I add my region, like https://machinename.emea.tibco.com
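    Before moving on, you can sanity-check the generated certificate (an optional step, not part of the original procedure) by printing its subject and SAN entries:

```shell
# Optional check: confirm the CN and subjectAltName entries of the new
# certificate match the FQDN you plan to use (requires OpenSSL 1.1.1+).
openssl x509 -in certificatefile.crt -noout -subject -ext subjectAltName
```

    If the CN or SAN doesn’t match the URL you’ll use, regenerate the certificate before continuing.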
    Now that we have created the key and the certificate, we need to store them in a keystore, so the next step is to create one based on them:
    openssl pkcs12 -export -in tibco-pc1t3xv0.crt -inkey tibco-pc1t3xv0.key -out tibco-pc1t3xv0.p12

    You’ll be prompted for a password, so don’t forget which one you’ve selected as it’ll be used later.
    Lastly, we need this recently created certificate to be “trusted” by our environment. This applies when you can’t certify it with a CA (a Certificate Authority such as VeriSign, IdenTrust, DigiCert, Let's Encrypt, GoDaddy, or similar) and don’t already have a certificate issued by one of these authorities, which is what’s recommended for production environments, even more so if they are publicly available.
    Your server has a JDK installed (a requirement for WebFOCUS), so in order to validate the certificate ‘internally’ we’ll use the cacerts keystore that comes with the JDK installation. You can check its contents by executing the following command from the command prompt:
    For Java11: keytool -v -list -cacerts
    For Java 8: keytool -v -list -keystore <path to cacerts>\cacertsfile.ext
    Note: Keytool is a tool also provided with the JDK, so it has to be in the PATH variable of the OS for you to execute it anywhere; if you don’t have it there, you can use the entire path to it to make it work.
    You’ll be prompted for a password; the default one is ‘changeit’ (without quotes). You’ll see a bunch of certificates scrolling on your screen (usually around ~99). If you want to review them, just send that output to a text file you can check later:
    Java11: keytool -v -list -cacerts > C:\Temp\cacerts_content.txt
    Java8: keytool -v -list -keystore <path to cacerts>\cacertsfile.ext > C:\Temp\cacerts_content.txt
    The cacerts file is usually stored under %JAVA_HOME%\lib\security (or $JAVA_HOME/lib/security in Unix/Linux environments), but depending on how you have your environment configured, you may be using the one that comes with WebFOCUS (C:\ibi\WebFOCUS93\jdk) or Tomcat (C:\ibi\tomcat\jdk). By not adding the -keystore parameter, we make sure that we are importing our certificate into the cacerts file that the OS is reading.
    The command to import our certificate into that cacerts file is:
    keytool -import -alias {aliasname} -cacerts -file {certificatename.cer} 
    So you should be using something like:
    Java11: keytool -import -alias mylocalcert -cacerts -file tibco-pc1t3xv0.crt
    Java8:  keytool -import -alias WF8207SSL -keystore <path to cacerts>\cacerts -file <path to crt>\WF8207SSL.crt
    Again, you’ll be prompted for the cacerts password, and it’ll also ask whether you trust the certificate you are importing (obviously you trust it, as you are the one who created it).

    Now our certificate is also trusted, and these files will be used in our WebFOCUS installation. I recommend applying them to Tomcat first, so we’ll be able to access WebFOCUS using SSL; then apply them to Solr, and finally connect the two.

    Strictly speaking, securing the WebFOCUS Client isn’t required in order to secure the connection between the WebFOCUS Client and Solr, but you’ll probably want to do it anyway. Now that we have the certificate, the key, and the keystore, we should be able to make that happen.
    For Tomcat, it is as simple as adding the following block to the server.xml file located under Tomcat’s ‘conf’ folder:
    <!-- IBI SSL Port -->
        <Connector port="8443"
          protocol="org.apache.coyote.http11.Http11NioProtocol"
          maxThreads="150"
          maxPostSize="-1"
          URIEncoding="UTF-8"
          SSLEnabled="true"
          scheme="https"
          secure="true"
          keystoreFile = "C:\ibi\certs\tibco-pc1t3xv0.p12"
          keystoreType = "PKCS12"
          keystorePass = "keystorepasswd"
          ciphers="TLS_RSA_WITH_AES_128_CBC_SHA"
          clientAuth="false"
          sslProtocol="TLS"
          sslEnabledProtocols="TLSv1.2"/>
    <!-- IBI SSL Port END -->
    After that, you just need to restart Tomcat, and you should be able to access via http (if you didn’t disable port 8080) and https.
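    Optionally (this isn’t part of the original steps), if you also want plain-HTTP requests redirected to the SSL port, the standard servlet security constraint can be added to Tomcat’s conf/web.xml; this is a sketch, and it assumes the 8080 connector’s redirectPort attribute points at 8443:

```xml
<!-- Force HTTPS: requests arriving on the plain HTTP connector are
     redirected to the port named in that connector's redirectPort. -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Entire Application</web-resource-name>
    <url-pattern>/*</url-pattern>
  </web-resource-collection>
  <user-data-constraint>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>
```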

     
    Note: If you still don’t get the ‘Certificate is valid’, just keep going; there’s an ‘Other Tips’ section at the end of the article that may help you with this as well.
    If you don’t use the FQDN, you’ll see that the connection appears as insecure (as the URL doesn’t match the certificate) even though it’s using SSL:

     
    If you have any issues at any of the steps we’ve taken, don’t hesitate to write me at pablo.alvarez@cloud.com
    If you are using a different Application Server, you can also ping me or open a Support Ticket case.
    Do let me know if you want me to write another article about certificates by sending me an email requesting it 😉
    Now, we should focus on the Solr configuration. You have a ‘solr.in.*’ file (.sh for Unix/Linux, .cmd for Windows) under C:\ibi\WebFOCUS93\Solr\solr\bin (or ../ibi/WebFOCUS93/Solr/solr/bin if you are using a Linux/Unix environment). Edit it and uncomment the following lines (and make sure the values match the ones for your environment):
    set SOLR_SSL_ENABLED=true
    set SOLR_SSL_KEY_STORE=C:\ibi\certs\local\tibco-pc1t3xv0.p12
    set SOLR_SSL_KEY_STORE_PASSWORD=keystorepasswd
    set SOLR_SSL_TRUST_STORE=C:\ibi\certs\local\tibco-pc1t3xv0.p12
    set SOLR_SSL_TRUST_STORE_PASSWORD=keystorepasswd
    set SOLR_SSL_NEED_CLIENT_AUTH=false
    set SOLR_SSL_WANT_CLIENT_AUTH=false
    set SOLR_SSL_CLIENT_HOSTNAME_VERIFICATION=false
    set SOLR_SSL_CHECK_PEER_NAME=true
    set SOLR_SSL_KEY_STORE_TYPE=PKCS12
    set SOLR_SSL_TRUST_STORE_TYPE=PKCS12
    I also recommend uncommenting the following line and setting it to DEBUG the first time, to get a more detailed log while you work through any issues you could face; once it’s working properly, set it back to the default value (INFO) or even comment it out:
    set SOLR_LOG_LEVEL=DEBUG
    If all the previous steps were performed correctly, you should now be able to restart the Solr service and access the Solr Dashboard via https: https://servername.company.ext:8983/solr
     

    The last configuration step is to tell WebFOCUS that the connection between them is now made over SSL, so you’ll need to go to your WebFOCUS Client Administration Console configuration and change the Solr URL to match the one you used to access the Dashboard:

    And that should be all: you are now connected in an even more secure way and performing secure searches within your data and WebFOCUS:

     
     
    TROUBLESHOOTING
    During the investigation of how to enable SSL (and make it work with WebFOCUS), John Calappi (who was also an important part of this technical article, and whom you can contact as well 😜) and I found some issues. Even though we were able to start Solr with SSL, WebFOCUS wasn’t able to communicate with it, and the message you receive is as follows:

     
    This is why I recommend configuring DEBUG mode on Solr, to try to figure out why it’s not working.
    Our magnify_search.log files from WebFOCUS showed these messages:
    [2024-04-17 12:45:21,648] INFO  main - {} - SolrSearchClientFactory.init() initializing Solr client with: url: https://tibco-pc1t3xv0:8983/solr, username: , collection: ibi-protected
    [2024-04-17 12:45:21,650] INFO  main - {} - createSolrClient(): created SolrClient for url: https://tibco-pc1t3xv0:8983/solr
    [2024-04-17 12:45:21,672] ERROR main - {} - testClient(): error: IOException occurred when talking to server at: https://tibco-pc1t3xv0:8983/solr
    [2024-04-17 12:45:21,673] ERROR main - {} - createSolrClient(): Error testing Solr client
    [2024-04-17 12:45:21,673] ERROR main - {} - Error! Creating SolrClient
     
    This happened even though the Solr service had started and we were able to access the Solr Dashboard using SSL.
    Checking the Solr logs in DEBUG mode showed us that the ‘handshake’ wasn’t being performed. The ‘handshake’ happens when you try to communicate with the endpoint securely; here, the endpoint didn’t trust the client, so it refused to complete the handshake.

    2024-04-08 12:38:40.340 WARN  (main) [   ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for Client@20b9d5d5[provider=null,keyStore=file:///C:/ibi/certs/local/mokochino_pkcs12.jks,trustStore=file:///C:/Program%20Files/Java/jdk-11/lib/security/cacerts]
    2024-04-08 12:38:42.993 DEBUG (qtp1489193907-23) [   ] o.e.j.i.s.SslConnection fill NOT_HANDSHAKING
    2024-04-08 12:38:43.401 DEBUG (qtp1489193907-27) [   ] o.e.j.i.s.SslConnection DecryptedEndPoint@569cafab{l=/10.98.96.15:8983,r=/10.98.96.15:60038,OPEN,fill=-,flush=-,to=414/120000} stored fill exception => javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_unknown
        at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
    javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_unknown
    We got these messages when the certificate wasn’t added to the cacerts file, and also when the keystore we used didn’t have the .key attached to it, so it was considered a ‘public’ certificate (like having a low fence or an open padlock: both are security measures, but used incorrectly, so not secure at all). In order to have a proper handshake between products, the keystore needs to have the required files to accept the connections.
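    A quick way to rule out the second cause is to check that OpenSSL can parse the keystore (a sketch using the example file and password from earlier in this article; the command fails if the file is corrupt or the password is wrong):

```shell
# Verify the keystore parses with the expected password; add -info to see
# whether it holds both the certificate and the private key.
openssl pkcs12 -in tibco-pc1t3xv0.p12 -noout -passin pass:keystorepasswd \
  && echo "keystore OK"
```

    If this fails, recreate the .p12 with the openssl pkcs12 -export command shown earlier, making sure both -in (certificate) and -inkey (key) are supplied.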
    OTHER TIPS
    Make sure that you are adding your certificate to the proper cacerts file; sometimes there are several Java JDK releases installed, and each one has its own cacerts file. For example, when you install WebFOCUS, it sometimes includes its own JDK (C:\ibi\WebFOCUS93\jdk), and even under Tomcat you can have another one (C:\ibi\tomcat\jdk). Linux/UNIX environments don’t usually have them, but Windows does. In that case, I usually delete those and use the mklink command to create symbolic links (similar to the Linux ones, not Windows shortcuts) just in case there’s some hardcoded path pointing there. You could use it as follows:
    mklink /D {destination} {source}
    Sample:
    mklink /D C:\ibi\WebFOCUS93\jdk C:\Progra~1\Java\jdk11              
    It is also recommended to add your new certificate to your client OS, so any browser you use can also trust it on its side. From your Windows OS, double-click your .crt file (you can copy the file from the server, or just its content, as it’s a plain-text file) and you’ll see something like this:

    Click on Install Certificate… and place it under the Trusted Root Certification Authorities:

    After finishing this process, you’ll see that certificate as valid:

    Another option for having SSL in your WebFOCUS installation, which Ben Naphtali also suggested, is to put a load balancer such as NGINX in front of the install and have it use SSL. That way, even if you don’t have SSL configured in WebFOCUS, it can’t be accessed without first going through NGINX in a secure way (NGINX will manage and redirect the requests to WebFOCUS and Solr). NGINX, Apache Web Server using the mod_proxy or mod_jk module, or any load balancer should work for this.
    As said before, let me know if you want me to write another technical article about how to properly install a load balancer in front of your WebFOCUS install to secure it; this will also allow you to have different WebFOCUS Clients to handle high-availability features in production environments.
    You could also enable SSL on the WebFOCUS Reporting Server side, but that part is perfectly described in the Security & Administration manual, so you just need to follow that one to make it work.
    Happy & Secure connections!
    Pablo Alvarez
  • As the world of business intelligence (BI) continues to evolve, companies are constantly seeking more efficient, insightful, and accessible ways to analyze data.  ibi WebFOCUS, a comprehensive BI platform, stands at the forefront of this transformation. Below, we explore the future trends in business intelligence and how ibi WebFOCUS is positioned to meet these emerging demands.
     
    1. Increased Demand for Real-time Data
    In today's fast-paced market environment, the need for real-time data analytics is more critical than ever. Businesses require immediate insights to make quick decisions. WebFOCUS caters to this need by providing instant analytics capabilities, allowing companies to monitor operations and market conditions as they happen, leading to more timely and informed decisions.
     
    2. The Rise of Artificial Intelligence and Machine Learning
    AI and machine learning are becoming integral to business intelligence. These technologies can predict trends, automate tasks, and offer new insights, transforming data into actionable intelligence. WebFOCUS integrates AI capabilities, enabling users to leverage predictive analytics and machine learning to drive business outcomes.
     
    3. Democratization of Data
    The democratization of data means making analytics accessible to non-experts, allowing more people within an organization to make data-driven decisions. WebFOCUS promotes this trend with user-friendly interfaces and customizable dashboards, making it easier for non-technical users to derive insights without deep statistical knowledge.

     
    How WebFOCUS Is Embracing These Trends
     
    IBI's WebFOCUS is a prominent player in the business intelligence domain, continuously evolving to integrate advanced features like Natural Language Querying (NLQ), Instant Insights, and Machine Learning functions. These capabilities position WebFOCUS as a forward-thinking solution in the BI landscape, enhancing user experience and analytical depth. Here's a closer look at how these features empower users and organizations.
    Natural Language Query (NLQ)
    Natural Language Querying is a revolutionary feature that allows users to interact with their data using everyday language. This accessibility significantly lowers the barrier to data analytics, enabling users from various organizational levels to engage with data directly, without needing specialized training in data querying languages such as SQL.
    Instant Insights
    In the age of big data, speed is crucial. Instant Insights is another innovative feature of WebFOCUS that caters to the need for rapid data analysis. This feature automatically generates visualizations based on the underlying data as soon as it is accessed, providing immediate visual insights. Users can quickly identify trends, outliers, and patterns without manually sifting through the data or building visualizations from scratch.
    Machine Learning Functions
    Machine Learning (ML) functions within WebFOCUS elevate its analytics capabilities by offering predictive analytics and pattern recognition that go beyond traditional data analysis techniques. These ML functions can automatically identify complex patterns and predict future trends based on historical data. For businesses, this means not only understanding current data but also forecasting future scenarios, optimizing processes, and personalizing customer interactions based on predictive models.
     
    The integration of NLQ, Instant Insights, and Machine Learning functions into WebFOCUS represents a significant leap towards more intuitive, efficient, and predictive business intelligence tools. As BI technology continues to evolve, WebFOCUS is clearly positioned at the forefront, ready to empower organizations with smarter, faster, and more accessible data insights.
     
  • How To 
    Customize Pages on Demand in WebFOCUS 9.x
    The aim of this document is to explain the process for changing the "Pages on Demand" design.
To use my sample design, please download the attached zip file.
    New Design: 

To change the design of the "Pages on Demand" functionality, the following steps are necessary. A complete guide is attached in PDF format.
     Step 1 - Replacing icons
Previously, files for customization were stored in the following folder: drive:\ibi\WebFOCUSversion\ibi_html\viewer
As of Release 9.0.0, the ibi WebFOCUS system file configuration no longer includes the ibi_html directory located at drive:\ibi\WebFOCUSversion\WebFOCUS, where version is the number of your installed version. If you store customized files in the ibi_html directory, you must back them up before installing or upgrading to WebFOCUS Release 9.0.0 or higher. If you do not take this precaution, you will lose the customized files stored in the ibi_html directory.
    Solution in 9.x:
The entire \ibi_html folder structure has been moved into a .jar file. If you go to drive:\ibi\WebFOCUSversion\webapps\webfocus\WEB-INF\lib, you will find a file called webfocus-ibi-html-version.jar. If you open it with WinRAR or 7-Zip and navigate to META-INF\resources, you will find the old \ibi_html folder, where you can add your custom files.
For on-demand paging, this is the storage location:
    drive:\ibi\WebFOCUSversion\webapps\webfocus\WEB-INF\lib\webfocus-ibi-html-version.jar\META-INF\resources\ibi_html\viewer\
Make a backup of the existing jar file and save it to a new folder outside webapps\webfocus\WEB-INF\lib, for example drive:\ibi\backup.
The ibi_html change is part of a larger shift to replace standalone files in the WebFOCUS installation with packaged files. This is being done to reduce the size of the install and for security reasons, but it requires you to follow these steps.
Be aware that after upgrading, this step must be repeated!
For a sample design, check the files included with my documentation. Just copy and replace the files from the "On Demand Paging Custom Icons" folder into the following folder inside the jar, using 7-Zip or WinRAR: drive:\ibi\WebFOCUSversion\webapps\webfocus\WEB-INF\lib\webfocus-ibi-html-version.jar\META-INF\resources\ibi_html\viewer\
It is also possible to replace the gif files with your own selected icons, but please keep the same dimensions for the new / changed icons.
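If you prefer the command line to 7-Zip or WinRAR, the same backup-and-replace procedure can be sketched as follows. The paths and the icon name are demo stand-ins, not the real installation paths; since a .jar file is an ordinary zip archive, Python's zipfile CLI is used here as a portable stand-in for any zip tool:

```shell
# Demo stand-in for the real jar. In your install, JAR would live at
# drive:\ibi\WebFOCUSversion\webapps\webfocus\WEB-INF\lib\webfocus-ibi-html-version.jar
JAR=webfocus-ibi-html-version.jar
mkdir -p jar-src/META-INF/resources/ibi_html/viewer custom-icons backup
echo "placeholder" > custom-icons/fwdnext.gif          # hypothetical custom icon
(cd jar-src && python3 -m zipfile -c "../$JAR" .)      # build the demo jar

# 1. Back up the original jar outside WEB-INF\lib first
cp "$JAR" backup/

# 2. Unpack the jar (a jar is just a zip archive)
python3 -m zipfile -e "$JAR" work/

# 3. Drop the custom icons into the viewer folder inside the unpacked tree
cp custom-icons/*.gif work/META-INF/resources/ibi_html/viewer/

# 4. Repack the tree into the jar
(cd work && python3 -m zipfile -c "../$JAR" .)

# List the jar contents to confirm the icon is inside
python3 -m zipfile -l "$JAR"
```

After repacking, restart the application server as described in Step 3 so the new icons are picked up.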
     
    Step 2 - Customizing Templates
The original files are stored in the following folder: drive:\ibi\WebFOCUSversion\client\home\etc\prod
    My sample will show you how to customize vcp_page.xml and vcp_page_1.xml - the main files that build the deferred execution screen.
For a sample, check the files included with my documentation, found in "On Demand Paging xml Files".
Please copy the files into the following folder: drive:\ibi\WebFOCUSversion\client\wfc\etc\custom
Within the XML files, parts of the HTML / CSS code have been changed or blocks have been commented out entirely, so that some functions (e.g. Close or Help) are no longer visible in "Pages on Demand".
Feel free to make your own code adjustments within the above-mentioned files as needed.
     
    Step 3 - Restart
    ●       Stop your application server - for example Tomcat
    ●       Clear your App Server work directory - for example drive:\ibi\tomcat\work\Catalina
    ●       Clear your browser cache
    ●       Start your application server - for example Tomcat
     
    Customize Pages on Demand in WebFOCUS 9.x (1).pdf ondemandpaging_9x.zip
  • Introduction 
    This WebFOCUS CE Demo explores concepts related to securing WebFOCUS, primarily focusing on network traffic. Our objective is to ensure the security of incoming traffic and traffic circulating within the WebFOCUS setup.
Discover how to fortify your WebFOCUS Container Edition on Kubernetes, ensuring that every byte of your data remains secure, whether it's flowing between components or streaming from browsers to clusters. Dive in to master the art of safeguarding your analytics!
     
    TL;DR 
TL;DR for this article and its accompanying videos
    ·       Securing Ingress and Egress Traffic: It outlines strategies for securing both incoming (ingress) and outgoing (egress) network traffic, emphasizing the use of SSL/TLS termination at the Ingress controller level for incoming traffic, and mTLS/SSL for secure communication with external services, ensuring data protection during transmission.
    ·       POD and Multi-Container PODs Management: The document explains Kubernetes Pods and the concept of multi-container Pods within the Kubernetes ecosystem, emphasizing their role in facilitating tightly-coupled containers to share execution environments and resources, thus enhancing operational efficiency and security.
    ·       Service Mesh Implementation with Linkerd: The document highlights the deployment of Linkerd, a service mesh, to enhance security within the WebFOCUS environment. Linkerd simplifies SSL configuration across all components, enabling secure, encrypted communication internally and improving the overall security posture of the system.  
    ·       Verification and Debugging Techniques: Various methods for verifying the security measures implemented, including using Linkerd's CLI tools and Wireshark for traffic inspection, are detailed. This ensures that the data transmission within the cluster is encrypted and secure against potential threats.
    Video 1

     
    Understanding WebFOCUS Component Interconnectivity:
    In typical network configurations, traffic can be categorized into two main types: East/West, representing traffic between WebFOCUS components within the data center, and South/North, encompassing traffic entering or exiting the data center.
    Now, let's delve into a high-level schematic diagram illustrating the interactions between these components and the pathways traversed by data within and outside the data center.
    Overview of interconnectivity of WebFOCUS components
    Upon completing the deployment of WebFOCUS CE, whether on a local cluster or a managed Kubernetes cluster, the resulting configuration resembles the following:
     


    WebFOCUS CE, often described as a "battery-included" deployment, is self-sufficient, providing all necessary components out of the box, such as the database server, Solr, Zookeeper for search functionalities, and ETCD for centralized configuration storage.
    In traditional on-premise setups, securing the main components of WebFOCUS, such as the application server and reporting server, suffices to secure most inbound user traffic, offering around 99% coverage for securing major traffic entering WebFOCUS.
    While similar setups are possible in Kubernetes-based environments, they may not be the most optimal solutions as they violate Infrastructure as Code (IAC) principles. This is primarily due to the necessity of embedding certificates within container images during the build process. In Kubernetes and other cloud-based deployment practices, the consistent promotion of identical setups and certificates from development to production environments is crucial. Consequently, certificates intended solely for production environments should not be used in the development or pre-production stages. Moreover, certificate management, including renewal processes, introduces complexities. However, the CNCF community has addressed this challenge.
The challenge lies in securing numerous components, each requiring its own SSL setup. While custom SSL configurations are feasible, they are time-consuming and cumbersome. This challenge has been recognized and addressed by a few of the CNCF community's member projects.
    One solution to this problem is adopting a service mesh, a concept we will discuss shortly.
    Video 2

    Securing Ingress and egress traffic (North/South traffic)
    Before proceeding further, let's discuss securing both incoming and outgoing (ingress/egress) traffic, commonly referred to as North/South traffic.

    Securing Ingress traffic:
    To safeguard incoming traffic, utilizing an Ingress controller and implementing SSL certificate termination is advisable. Alternatively, you can opt to apply mTLS/SSL directly on your cloud load balancer, ensuring encrypted communication onward from that point.

    Securing Egress traffic:
    Outgoing traffic originating from the cluster, such as interactions with SMTP servers, data lakes, business/partner databases, or Active Directory, is assumed to be already secured by their respective vendors. When WebFOCUS communicates with these external services, it's recommended that mTLS/SSL communication be initiated to ensure secure data transmission.

    Securing Components with Service Mesh (Linkerd):
    Securing components running in the cluster 
    Securing components within the cluster is an optional step, especially in scenarios like an "Air Gap" setup where traffic is restricted from entering or leaving the cluster. However, if you aim to enhance security by encrypting traffic across all endpoints, employing a service mesh like Linkerd provides a straightforward solution.
How can a service mesh (Linkerd) help?
    Linkerd simplifies the process of enabling SSL across all endpoints, offering a seamless experience akin to pressing a button or executing a single command. In this demonstration, we'll witness how Linkerd effortlessly secures all endpoints and components within the WebFOCUS Cluster.
    What is Service Mesh? A service mesh in Kubernetes is a dedicated infrastructure layer that facilitates communication between microservices within a cluster. It abstracts away the complexities of service-to-service communication, providing features like load balancing, traffic management, service discovery, and encryption. By deploying a service mesh like Istio or Linkerd, developers can enhance observability, reliability, and security without modifying application code. It improves the management and monitoring of microservices architectures, ensuring better control over network traffic and interactions between services.
    What is Linkerd? How does it work?
    Linkerd is a service mesh for Kubernetes. It makes running services easier and safer by giving you runtime debugging, observability, reliability, and security—all without requiring any changes to your code.
By abstracting away the complexity of securing individual endpoints, the Linkerd service mesh provides centralized control and visibility, simplifying the task of securing all endpoints in a Kubernetes cluster.

    How it works
    Linkerd operates by deploying ultralight, transparent "micro-proxies" alongside each service instance, which efficiently manage inbound and outbound traffic for the service. These proxies act as highly instrumented network stacks, seamlessly integrating with the control plane for telemetry and control.
Understanding PODs and Multi-Container PODs
    Kubernetes Pods, the smallest deployable units in Kubernetes, encapsulate one or more containers along with shared resources such as storage volumes and networking interfaces. Pods serve as the basic building blocks of applications, facilitating easy scaling and management within Kubernetes clusters.
    A multi-container Pod in Kubernetes allows for co-locating multiple tightly-coupled containers within the same Pod, enabling them to share the same execution environment and resources. This approach promotes efficient communication and container coordination, simplifying deployment and management tasks while maintaining a cohesive application architecture.
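As a minimal illustration (the names and images below are hypothetical and not part of the WebFOCUS setup), a two-container Pod manifest looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  # Main application container
  - name: app
    image: nginx:1.25
  # Sidecar sharing the Pod's network and lifecycle
  - name: sidecar
    image: busybox:1.36
    command: ["sleep", "infinity"]
```

Both containers share the Pod's network namespace, which is exactly what the Wireshark sidecar used later in this demo relies on.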
    Adding Linkerd to our Kubernetes cluster :
Note: It's recommended that you try this demo on non-production clusters first. This demo involves imperative commands for interacting with the cluster, which is efficient for demos but doesn't strictly adhere to the Infrastructure as Code (IAC) rule. Once comfortable with the commands, consider scripting them or referring to Linkerd's production deployment guidelines at linkerd.io/going-to-production.

    Video 3 :
    Eavesdropping on ReportCaster 
Before proceeding, let's intercept TCP traffic to confirm that data is transmitted in plaintext. We've chosen ReportCaster due to its quick startup, which facilitates multiple restarts. Leveraging tshark, Wireshark's command-line tool, we'll intercept incoming messages without rebuilding the ReportCaster container image. Instead, we'll opt for a sidecar container named "wireshark," containing the pre-installed 'tshark' tool for capturing TCP traffic.

    To execute 'tshark,' the POD must start with root user privileges (user ID 0). Thus, we'll employ a Patch command to update the runAsUser field in the security context of the reportcaster StatefulSet in the webfocus namespace, setting the user ID to 0.
# This `kubectl patch` command updates the `runAsUser` field in the security context of the `reportcaster` StatefulSet in the `webfocus` namespace to set the user ID to 0 (root user).
kubectl patch statefulset reportcaster -n webfocus --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/securityContext/runAsUser", "value": 0}]'
Now, we can add a sidecar container using this patch command.
# The command uses `kubectl patch` to modify the StatefulSet named `reportcaster` in the `webfocus` namespace by adding a new container named `wireshark` with the specified image.
kubectl patch statefulset reportcaster -n webfocus --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/-", "value": {"name": "wireshark", "image": "cr.l5d.io/linkerd/debug:edge-24.3.2"}}]'
    After a few seconds, we can run this command to inspect TCP traffic. This command runs a "tshark" command in the container "wireshark" 
The command tshark -i any -f "tcp" -x prints the payload data of TCP packets; the -x option prints packet details in hexadecimal and ASCII format.
# Wait for the pod to be ready with both containers
kubectl wait --for=condition=ready pod/reportcaster-0 --namespace=webfocus --timeout=60s
Make sure the above command finishes before proceeding to the next step.
# This command executes `tshark` within the `wireshark` container of the `reportcaster-0` pod in the `webfocus` namespace, capturing TCP traffic in hexadecimal format from all interfaces.
kubectl exec -it -n webfocus reportcaster-0 -c wireshark -- tshark -i any -f "tcp" -x
Sample output:
     
    This confirms that data is transmitted in clear text; we will keep this sidecar container running for a while. 
    Updating WebFOCUS Components with Service Mesh
Important Note: Exercise caution while executing the following steps, as they will restart the entire WebFOCUS setup, resulting in temporary downtime for your cluster. Some of the steps below also include running a few of the PODs as the root user; this is for debug/demo purposes only. Once this debug step is over, revert to running as a non-root user.

     
    1.     Install Linkerd CLI: Utilize a helper script to install the Linkerd CLI on your Ubuntu system seamlessly.
# Install linkerd
curl -sL https://run.linkerd.io/install | sh
# Add linkerd to the PATH
export PATH=$PATH:$HOME/.linkerd2/bin
# Check the version
linkerd version
Run a check to see if Linkerd can be installed in the cluster.
linkerd check --pre
Sample output:
     
    2.     Deploy the Control Plane: Employ the following steps to deploy the Linkerd control plane to your Kubernetes cluster (ensuring that your kubeconfig file has cluster admin privileges).
# Install the CRDs
linkerd install --crds > crds.yaml
kubectl apply -f crds.yaml
# Install linkerd - we might not need the runAsRoot option
linkerd install --set proxyInit.runAsRoot=true > linkerd.yaml
kubectl apply -f linkerd.yaml
3.     Optional Dashboard Installation: Optionally, install the Linkerd dashboard, which provides a visual representation of data plane traffic, enabling you to monitor connections made by each pod within the cluster.
# Install the dashboard (you might have to wait for a while)
linkerd viz install > linkerd-viz.yaml
kubectl apply -f linkerd-viz.yaml
4.     Dashboard Configuration Update: Customize the dashboard configuration to enable access via your Fully Qualified Domain Name (FQDN), allowing accessibility from your laptop. Additionally, expose port 8084 of the dashboard for external access.
# Update the viz service "web" to allow the FQDN - not just localhost
kubectl get -n linkerd-viz deployments.apps web -o json | sed "s/localhost/wfce02.ibi.systems/g" | kubectl replace -f -
# Expose the dashboard to the outside world (only for demo)
kubectl -n linkerd-viz port-forward svc/web 8084:8084 --address 0.0.0.0 &
5.     Meshing WebFOCUS Components: In the WebFOCUS Container Edition (WF-CE), Kubernetes StatefulSets manage most components. Apply the Linkerd proxy (sidecar) to all StatefulSets, ensuring secure traffic transmission. Note that this process will cause a brief outage lasting approximately 3 to 4 minutes while the components are restarted.
# Inject the Linkerd proxy into all WebFOCUS StatefulSet PODs
kubectl get -n webfocus sts -o yaml | linkerd inject - | kubectl apply -f -
# Wait up to 3 min for all PODs to restart
kubectl get pods -n webfocus --field-selector=status.phase!=Succeeded,status.phase!=Failed -o name | xargs -I {} kubectl wait --for=condition=ready {} --namespace=webfocus --timeout=180s
Note: If other pods are managed by Deployments (e.g., ibi DSML) and are not covered by the initial command, rerun the command and select Deployments instead of StatefulSets.
    Verification of Security Measures
    1.     Using Linkerd CLI: Utilize the "viz" command within the Linkerd CLI to confirm that all WebFOCUS components are meshed securely.
linkerd viz -n webfocus edges sts
linkerd viz -n webfocus edges pods
2.     Via Linkerd Dashboard: Access the Linkerd dashboard to inspect and validate the expected meshing of traffic visually.
    3.     Utilizing Wireshark: For an additional layer of validation, inspect traffic again using Wireshark. With Wireshark running within the ReportCaster POD, execute the same command as before, ensuring that all observed traffic is now encrypted.
# Command that will show all traffic for ReportCaster
kubectl exec -it -n webfocus reportcaster-0 -c wireshark -- tshark -i any -f "tcp" -x
Sample output:
     
    These comprehensive verification techniques ensure that all WebFOCUS components' traffic is safeguarded and secure after the Linkerd meshing solution is applied.
    Securing Communication from Ingress to Application Servers
    Video 4
    What about Ingress & TLS termination?
    Ingress controllers serve as reverse proxies that manage external traffic and can implement mTLS/SSL encryption on their endpoints. This ensures that communication from the user's browser to the Ingress controller remains secure and terminates at the controller itself. However, traffic between the Ingress controller and the application server remains unencrypted.

Inspecting Clear-Text Traffic with Wireshark: To inspect the clear-text traffic, we can employ Wireshark sidecar containers (just like we did above), leveraging Linkerd's capabilities (additional command switch `--enable-debug-sidecar`). We can effectively eavesdrop on incoming traffic by injecting a debug container alongside the application server. The process involves using Linkerd's helper command to inject the sidecar debug container, incorporating Wireshark for traffic analysis.
    Restricting Wireshark to Ingress Controller IP: To focus Wireshark's capture solely on traffic originating from the Ingress controller, we first retrieve the IP address of the Ingress controller. We then utilize this IP address to configure the tshark command, filtering traffic specifically from the Ingress controller's IP.
# Change run as user to 'root' so wireshark can run
kubectl patch statefulset appserver -n webfocus --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/securityContext/runAsUser", "value": 0}]'
# Add a wireshark side-car using the linkerd inject switch --enable-debug-sidecar
kubectl get -n webfocus sts appserver -o yaml | linkerd inject --enable-debug-sidecar - | kubectl apply -f -
# Wait 3 min for the POD to come up
kubectl wait --for=condition=ready pod/appserver-0 --namespace=webfocus --timeout=180s
# Get the Ingress Controller POD's IP
INGRESS_CONT_IP=$(kubectl get pod -n ingress-nginx -l app.kubernetes.io/component=controller -o jsonpath='{.items[*].status.podIP}')
# Check what kind of traffic is coming to the App server from the Ingress Controller POD
kubectl exec -it -n webfocus appserver-0 -c linkerd-debug -- tshark -i any -f "src host $INGRESS_CONT_IP" -x
    Sample output : 
     
We can see that data is being transmitted between the Ingress Controller and the App Server in clear text.
    Meshing the Ingress Controller: To address the clear-text traffic between the Ingress controller and the application server, it's essential to mesh the Ingress controller deployment. Meshing involves integrating Linkerd with the Ingress controller to establish a secure communication channel. Once meshed, all traffic between the Ingress controller and the application server becomes encrypted, enhancing overall security.
    kubectl get -n ingress-nginx deploy -o yaml | linkerd inject - | kubectl apply -f -  

Use the command below to verify that the Ingress controller POD came back up successfully.
    kubectl rollout status deployment.apps/ingress-nginx-controller -n ingress-nginx --timeout=180s  

Inspecting Encrypted Traffic with Wireshark: Now that the Ingress controller POD is also meshed, if you run the same tshark command again, you will see that all traffic between the Ingress controller POD (reverse proxy) and the application server is encrypted.
INGRESS_CONT_IP=$(kubectl get pod -n ingress-nginx -l app.kubernetes.io/component=controller -o jsonpath='{.items[*].status.podIP}')
kubectl exec -it -n webfocus appserver-0 -c linkerd-debug -- tshark -i any -f "src host $INGRESS_CONT_IP" -x
    Sample output : 
     
    We can see that the data is now encrypted when it flows between Ingress Controller and App server POD
Using Linkerd's Visualization Command: Linkerd offers a CLI command called "viz," which allows us to visualize the communication between pods across namespaces. By executing the "viz" command, we can verify the secure communication between the Ingress controller (in the ingress-nginx namespace) and the application server (e.g., appserver-0 in the webfocus namespace) and other relevant components, ensuring end-to-end security.
    Implementing these measures establishes a robust security framework, safeguarding communication at various points within the cluster architecture.
    Conclusion and Final Thoughts:
    Throughout this tutorial, we delved into securing WebFOCUS within Kubernetes environments, emphasizing the crucial aspects of network traffic security and implementing a service mesh with Linkerd. This comprehensive approach not only simplifies SSL configuration management but also ensures encrypted communication across all components, enhancing the overall security posture. By equipping ourselves with these methodologies, we are better prepared to protect our deployments against emerging threats, marking a significant step forward in our cybersecurity efforts.
    Key takeaways include:
·       Understanding the importance of securing both ingress and egress traffic to prevent unauthorized access and data breaches.
·       Implementing Linkerd as a service mesh to simplify SSL configuration and secure communication across Kubernetes nodes.
·       Kubernetes Pods' flexibility facilitates the addition of security measures without hindering functionality.
·       The role of encrypted communication channels in safeguarding data in transit within the WebFOCUS environment.
As we wrap up, it's clear that the path to securing WebFOCUS involves a combination of strategic planning, an understanding of Kubernetes' inner workings, and the judicious application of service meshes like Linkerd. The skills and knowledge acquired here should empower you to fortify your deployments, making them resilient against the evolving threats in today's digital landscape.
    Looking forward, I encourage you to explore Kubernetes security practices further, delve deeper into service mesh architectures, and continue refining your cybersecurity approach. This tutorial has laid the groundwork, but the journey to comprehensive security in WebFOCUS CE is ongoing and ever-changing.
    Remember, securing your infrastructure is not a one-time effort but a continuous process of learning, adapting, and implementing the best practices. 
    Cleanup debugging PODs  : 
We need to clean up the two extra sidecars that we added to two of our StatefulSets: one for reportcaster and the other for appserver.
Remove the wireshark sidecar and the run-as-root user setting from the reportcaster pod:
kubectl patch statefulset reportcaster -n webfocus --type='json' -p='[{"op": "remove", "path": "/spec/template/spec/containers/1"}]'
kubectl patch statefulset reportcaster -n webfocus --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/securityContext/runAsUser", "value": 1000}]'
Remove the debug sidecar from appserver and remove the run-as-root user setting:
     
kubectl patch statefulset appserver -n webfocus --type=json -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/config.linkerd.io~1enable-debug-sidecar"}]'
kubectl patch statefulset appserver -n webfocus --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/securityContext/runAsUser", "value": 1000}]'
    ---- 
     
     
     
  • In this demo, we begin with the default setup of WebFOCUS CE 1.2.0 (WF 9.2), and proceed to assign a Fully Qualified Domain Name (FQDN) to the host running this WF-CE setup. We then install an ingress controller to allow access to the Application Server via standard port 80, rather than the default port 31080. The video concludes with installing an SSL Certificate to secure the Application Server's endpoint with TLS.

     
    High-level steps : 
- Begin by deploying the standard configuration of WebFOCUS CE as provided.
- Ensure that the setup is accessible via Port 31080, which is the default port.
- Deploy an Ingress controller and create an Ingress resource within the webfocus namespace to facilitate access over Port 80.
- Incorporate a secret containing a TLS/SSL certificate into the webfocus namespace and modify the Ingress resource to utilize this secret for secure connections.
- Access the WebFOCUS configuration securely over HTTPS (Port 443).
- (Optional) Consider deactivating Port 31080 to prevent access through the unsecured port.

     
    Out-of-the-box setup : 
    Once the WebFOCUS CE setup completes deploying all components - you should be able to access the WF App server using port 31080

If that port is reachable, you can also access the WebFOCUS App server GUI in a browser via the URL: http://x.1.10.96:31080
    Install NGINX ingress controller.
    In the previous topic, we saw we have to access WebFOCUS using port 31080; what if we want to just access it over port 80 or not provide a port at all? 
    For that, we need to install an Ingress controller in the K8s cluster; in this case, we will use NGINX.  
    Let's install the Ingress controller in the kubernetes cluster - you can use the commands below. 
# Label all Nodes to allow the Ingress controller to run
kubectl label nodes --all ingress-ready=true
# Install the NGINX Ingress controller, which will attach the Controller POD to ports 80 and 443 on the Node
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
# Wait for all Ingress controller pods to come up
kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=90s
After the Ingress controller is running, run the nc command again to see whether port 80 is open:
    nc -zv x.1.10.96 80 
    If the above command succeeds, we know NGINX ingress is running fine on port 80 - now the next thing we need to do is create an Ingress Object in the webfocus namespace so we can access WebFOCUS on port 80 
     
    Now onwards, we're going to access the above Machine with its FQDN name - in this example, it is wfce02.ibi.systems  - we assume you have something similar in your case; if not, ask your system admin to configure FQDN for your VM/Machine.
    So, in our case, if I re-run the above command as 
# Use nc to check if port 80 is open now
nc -zv wfce02.ibi.systems 80
>> Connection to wfce02.ibi.systems (x.241.1.29) 80 port [tcp/http] succeeded!
In the above, we assume the FQDN name "wfce02.ibi.systems" points to the correct IP of the machine where WF CE is running (in this case, IP x.241.1.29).
    If the "nc" command returns with success, we are good to go to the next step 
    Create Ingress Object in webfocus namespace 

    Save the text below as an "appserver-ingress.yaml" file; as you can see, we are now using the FQDN of wfce02.ibi.systems to set up Ingress rules.
    This file also assumes your WF-CE setup is running in Namespace "webfocus." 
    Note: make changes as needed before you apply it 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    meta.helm.sh/release-name: appserver
    meta.helm.sh/release-namespace: webfocus
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/app-root: /webfocus
    nginx.ingress.kubernetes.io/client-body-buffer-size: 64k
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/proxy-body-size: 200m
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/session-cookie-change-on-failure: "true"
    nginx.ingress.kubernetes.io/session-cookie-expires: "28800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "28800"
    nginx.ingress.kubernetes.io/session-cookie-name: sticknesscookie
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
  labels:
    app.kubernetes.io/instance: appserver
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: appserver
    app.kubernetes.io/version: "1.0"
    helm.sh/chart: appserver-0.1.0
  name: appserver
  namespace: webfocus
spec:
  rules:
  - host: wfce02.ibi.systems
    http:
      paths:
      - backend:
          service:
            name: appserver
            port:
              name: port8080
        path: /
        pathType: ImplementationSpecific
Apply the above file to the cluster.
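Assuming the manifest was saved as appserver-ingress.yaml, as above, applying it is a single command (this of course requires access to the running cluster):

```shell
# Create/update the Ingress object defined in the manifest
kubectl apply -f appserver-ingress.yaml
```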
This should create an Ingress rule in the Ingress controller that takes any HTTP request incoming on port 80 with the HTTP Host header set to "wfce02.ibi.systems" and forwards it to the Kubernetes service named "appserver" over port 8080.
    Now you should be able to access the WebFOCUS App server GUI via the URL: http://wfce02.ibi.systems 

    Securing endpoint with SSL 
As you can see, the above URL is http://, which is not secure. We want to enable SSL so that we can access the WebFOCUS GUI over HTTPS, i.e., URLs starting with https://.
    For this, we need to get certificates generated for our FQDN - in the above case  'wfce02.ibi.systems'; typically, you will get two PEM files - one named "privkey.pem" and the other "fullchain.pem"
    You can inspect the "fullchain.pem" file to confirm that it was indeed issued for the FQDN you are using (a wildcard certificate is also okay). For this, you will need the OpenSSL tool installed on your machine. 
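For example, one way to print the certificate's subject and Subject Alternative Names with OpenSSL (the -ext option requires OpenSSL 1.1.1 or later; assumes fullchain.pem is in the current directory):

```shell
# Print the subject and the Subject Alternative Names of the certificate
# to confirm it covers your FQDN (e.g. wfce02.ibi.systems or a wildcard).
openssl x509 -in fullchain.pem -noout -subject -ext subjectAltName
```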

    You will need both files—the key file and the certificate file. First, we create a Kubernetes secret from these two files in the same 'webfocus' namespace.  
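A minimal sketch of creating that secret with kubectl, assuming the two PEM files are in the current directory and using the secret name wfce02-ibi-tls referenced later in this walkthrough:

```shell
# Create a TLS secret in the 'webfocus' namespace from the key and
# certificate files; the Ingress will reference it by this name.
kubectl create secret tls wfce02-ibi-tls \
  --key privkey.pem \
  --cert fullchain.pem \
  --namespace webfocus
```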
    Once the secret has been created, the only thing left to do is to update the Ingress object in the webfocus namespace to use this secret to enable TLS/SSL. 
    Now, let's update the appserver-ingress.yaml file to use the secret (wfce02-ibi-tls) that we created above. 

    Add the lines below at the end of the file. 
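The exact lines are not reproduced here; a typical TLS stanza, assuming the secret name wfce02-ibi-tls created above and the host used in the rule, would be:

```yaml
# Lines to append at the end of appserver-ingress.yaml, under spec:
# (as a sibling of rules:). Assumes the secret wfce02-ibi-tls exists.
  tls:
  - hosts:
    - wfce02.ibi.systems
    secretName: wfce02-ibi-tls
```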
    Re-apply this file to the cluster; this updates the Ingress object to support SSL (port 443). 
    If all goes as expected, you should now be able to access WebFOCUS over HTTPS: https://wfce02.ibi.systems
    (Optional) Disable port 31080 
    Since we now have a secure way to access the WebFOCUS App server over SSL, we no longer need to access the App server over port 31080. Edit the service for the App server and change it from a NodePort to a ClusterIP type of service. 

    At the beginning of this demo, we saw that we could access the WebFOCUS App server GUI over port 31080. Now that we can access the App server over secure port 443, that is unnecessary.
    So it makes sense to disable port 31080. For that, we need to change the appserver Service (svc) from type NodePort to ClusterIP; the command below does that.
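One way to make that change is with kubectl patch (kubectl edit works equally well); this sketch assumes the service is named appserver in the webfocus namespace, as above:

```shell
# Change the appserver Service from NodePort to ClusterIP, which
# removes the node-port (31080) exposure.
kubectl patch svc appserver -n webfocus -p '{"spec":{"type":"ClusterIP"}}'
```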

     
     
  • Users may receive an error after executing a procedure or HTML file from ibi WebFOCUS® App Studio when using Google Chrome or Microsoft Edge.
    Web browsers receive updates, and when a new version of the Chrome or Edge browser is available, a new version of the Selenium driver is also required to execute procedures and HTML files from ibi WebFOCUS App Studio.

    Currently, there is no automatic way to update the Chrome and Edge drivers. However, users can manually update these drivers. 
    Resolution
    Make sure that you are on the latest version of Chrome or Edge. To find this information:
    • For Chrome: Click the ellipses on the top-right of the browser window to expand the drop-down menu. Then click Help > About Google Chrome.
    • For Edge: Click the ellipses on the top-right of the browser window to expand the drop-down menu. Then click Help and feedback > About Microsoft Edge.
    Navigate to the following sites to download the WebDriver for Chrome or Edge:
    • For Chrome: https://chromedriver.chromium.org/downloads
    • For Edge: https://developer.microsoft.com/en-us/microsoft-edge/tools/webdriver/
    Select the WebDriver version that corresponds to the browser version that you are on and download the WebDriver. See the following images for an example. 
    In the downloaded zip file, locate the chromedriver.exe file for Chrome or the msedgedriver.exe file for Edge. These are the WebDriver files. Shut down App Studio and copy the WebDriver files to the bin directory of the App Studio installation, for example, <drive>:\ibi\AppStudio90\bin or <drive>:\ibi\AppStudio82\bin. Start App Studio to test the browser setup. Navigate to Application Menu > Output Viewer Settings. Select the desired browser, and then click Test browser setup. If successful, App Studio will open the selected browser and display the message Webdriver test status Success, as shown in the following image. Once this message displays, you have successfully updated the browser's WebDriver.
  • If you are interested in the FOCUS language, whether you are using ibi FOCUS or ibi WebFOCUS products, FOCUS Fridays is a monthly user group that centers around the FOCUS language.

    You can expect basic FOCUS concepts from all parts of the language: TABLE, MODIFY, MATCH, GRAPH, Dialogue Manager, JOIN, MAINTAIN; advanced FOCUS language techniques to enhance functionality and performance in WebFOCUS and FOCUS; and coverage of new features as they appear in the products.  

    These sessions all come with live, freewheeling discussion, demonstrations, and Q&A with FOCUS/WebFOCUS language gurus Walter Blood and Walter Brengel, who together represent more than 70 years of experience. Join us for the basics, to get the techniques you need, and to tell us how you use the language. We love active participation, so join us for FOCUS Fridays!

    Browse the event schedule or watch recorded sessions on demand.
     
     
  • When planning your ibi WebFOCUS upgrade, please consider the most recent release of ibi WebFOCUS. ibi WebFOCUS v9.3 is now available on the ibi Product Download site (edelivery.ibi.com) and includes significant enhancements and innovations across the product. For complete details of all features, see our ibi WebFOCUS Product Documentation. 

    As a valued customer, we want you to benefit from WebFOCUS v9.3 as well as the robust capabilities available via the ibi WebFOCUS Edition product package. So we are offering the ibi WebFOCUS Technology Refresh program, a new service to ensure a smooth transition to this latest, feature-rich release.

    For further information, please visit the ibi WebFOCUS Technology Refresh Program site.