Welcome to the blog of Solomon Nelson, a software technologist by profession. This blog will contain my experiences with information technology and reflections on the management-related topics that interest me.
DISCLAIMER: The views expressed on this blog are my own and do not necessarily reflect the views of my employer.
Identity Management is a mature IT concept and the foundation of good security and solid regulatory compliance. Today, with the paradigm shift toward cloud computing and mobility, businesses need to understand and prepare their IT organizations to associate users' identities and entitlements, and to extend access privileges beyond the traditional corporate perimeter. This change introduces new risks, and it also prompts new questions about ownership and control of digitized information.
However, what do services and applications consider to be the attributes that define a user's or business's unique identity? From a technology standpoint, I see identity products as broadly classified into three pillars based on their feature offerings/solutions:
Mapping these core identity functions/features into the enterprise application architecture space is quite challenging. In reality, our understanding of a business entity's (person/tenant/business) identity is built upon an incomplete set of attributes that the application/service architecture deems sufficient to differentiate one entity from another. This attribute set is generally far from complete, and a wide range of perceptions exists regarding what is acceptable to uniquely define an entity's identity in the application/service domain. Application architecture must accept a level of risk and be willing to offer service on the basis that a business's or user's identity definition is "good enough" for the purpose for which the application/service is going to use it.
Here is my attempt at capturing the high-level identity touch points that matter to the application-architecture design across the enterprise technology domain.
Load balancing enables you to achieve horizontal scalability and higher levels of fault tolerance for applications and SOA Suite web services. Load balancers can be implemented at either Layer 7 (the application layer) or Layer 4 (the transport layer) to achieve high availability and reliability of applications. Layer 7 load balancers provide a user forwarding service and distribute requests based upon data found in application layer protocols (HTTP), while Layer 4 load balancers act upon data found in network and transport layer protocols. In a high-availability environment, both Layer 4 and Layer 7 are used. These load balancers come as software and/or hardware appliances from various vendors (e.g., F5, Brocade, Cisco, Oracle).
Layer 7 load balancers can further distribute requests based on application-specific data such as HTTP headers, cookies, or data within the application message itself, such as the value of a specific parameter. Here, I will discuss how Oracle's Layer 7 software load balancer, Oracle Traffic Director, can be configured in an Exalogic system for Oracle SOA Suite deployments. Oracle Traffic Director (OTD) is a complete software Layer 7 load balancing solution that can facilitate the scalability of your SOA Suite deployment by distributing incoming traffic to multiple servers or virtual machines inside Exalogic. The architecture of Traffic Director enables it to handle large volumes of SOA application traffic with low latency. It can communicate with SOA Suite servers in the back end over Exalogic's InfiniBand fabric. Here is a proposed sample topology to configure OTD for the Oracle SOA Suite VM Template deployed to Exalogic Elastic Cloud. In a production deployment, the XE database will be replaced with a supported database version or Exadata.
Here are the high-level steps in configuring Oracle Traffic Director for a SOA Suite deployment on Exalogic:

[A - Install and Create Traffic Director Instance]
1. Create vServers to host the OTD server instances on the Exalogic compute node - use Exalogic Control to create the vServers (see the Exalogic Elastic Cloud Administrator's Guide, chapter "Creating vServers")
- Create OTD-vServer1 to host the OTD Administration Server [OTDHOST1]
- Create OTD-vServer2 to host the OTD Administration Node [OTDHOST2]
2. Install OTD on shared ZFS storage (say $ORACLE_TRAFFIC_DIRECTOR_HOME) that is accessible to both OTD-vServer1 and OTD-vServer2
3. Create the OTD Administration Server on OTDHOST1
# $ORACLE_TRAFFIC_DIRECTOR_HOME/bin/tadm configure-server --user=otd_admin --server-user=root --instance-home=/u01/app/oracle/config/otd/admin/ --host=OTDHOST1
4. Start the OTD Admin Server on OTDHOST1 as user root
# /u01/app/oracle/config/otd/admin/admin-server/bin/startserv
5. Create an Administration Node on OTDHOST2 as user root - execute the command below to create an administration node and register it with the remote administration server https://OTDHOST1:8989
# $ORACLE_TRAFFIC_DIRECTOR_HOME/bin/tadm configure-server --user=otd_admin --host=OTDHOST1 --port=8989 --admin-node --node-host=OTDHOST2 --node-port=8900 --instance-home=/u01/app/oracle/config/otd/admin/instances
Refer to the Oracle Traffic Director documentation, chapter 5, "Creating the Administration Server and an Administration Node".

[B - Configure Traffic Director to Route Requests to SOA Servers]
In order to start using OTD, you need to create a 'configuration' - a collection of elements that determine the run-time behavior of an OTD instance.
1. Create a configuration named SOAEXA using the OTD administration console
2. Start or restart the OTD instances using the OTD admin console
3. Define OTD virtual servers for the Oracle SOA Suite deployment - create the origin-server pools (e.g., soa-pool, osb-pool) using the administration console
4. Update the host pattern served by the SOAEXA virtual server
5. Deploy the OTD SOAEXA configuration to create an instance of it on the administration node
6. Configure OTD to create virtual server routes for the WebLogic SOA domain managed servers
7. Create 'route conditions' (e.g., osb-route, soa-route) for each of the SOA Suite virtual servers configured
8. Validate access through OTD for the SOA/OSB managed servers
9. Configure the Server Callback URL and OTD URL properties in the SOA Infrastructure - set both the Server and HTTP Server URLs to point to OTD's virtual server (Exalogic InfiniBand address)
10. Configure SSL for OTD by turning on the WebLogic plug-in enabled flag, since the load balancer terminates SSL requests for security purposes
Chapters 7, 8 and 9 of the "Oracle Exalogic Elastic Cloud Enterprise Deployment Guide for Oracle SOA Suite" are particularly useful for carrying out these configurations. Oracle Traffic Director adds a number of key capabilities, such as intelligent routing, compression, and content caching, among others. Refer to the OTD White Paper as well for more details on configuring OTD on Exalogic.
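Most of the step-B operations above can also be scripted with the tadm CLI instead of the admin console. The sketch below is illustrative only: the origin-server host:port values and listener settings are my assumptions, and the exact flags should be verified against tadm's built-in help and the OTD documentation.
# Create the SOAEXA configuration (listener port, server name and origin servers are assumed values)
# $ORACLE_TRAFFIC_DIRECTOR_HOME/bin/tadm create-config --user=otd_admin --host=OTDHOST1 --port=8989 --listener-port=8080 --server-name=soa.example.com --origin-server=soavm1:8001,soavm2:8001 SOAEXA
# Add an origin-server pool for OSB (ports assumed)
# $ORACLE_TRAFFIC_DIRECTOR_HOME/bin/tadm create-origin-server-pool --user=otd_admin --host=OTDHOST1 --port=8989 --config=SOAEXA --type=http --origin-server=osbvm1:8011,osbvm2:8011 osb-pool
# Deploy the configuration to its instances on the administration node
# $ORACLE_TRAFFIC_DIRECTOR_HOME/bin/tadm deploy-config --user=otd_admin --host=OTDHOST1 --port=8989 SOAEXA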
In this post, I will share my experience upgrading a development SOA database from version 11.2.0.1 to 11.2.0.4. This is a prerequisite that many customers planning to take up the next major release of SOA Suite 12c (12.1.3.0.0 - GA mid Y2014) will need to address first. To clarify, I am referring here to the minimal required database version for the SOA Suite 12c infrastructure, i.e. the SOAINFRA database (ver 11.2.0.3), which is certified to function with no database vendor other than Oracle.
The high-level details of the tasks executed are described below. Please consult the relevant Oracle Database Upgrade Guides for more details. (Upgrading to the New Release of Oracle Database)
1 - Run the Database Pre-Upgrade Information Tool
- Connect to the SOA 11g database as a user with SYSDBA privileges to run the Pre-Upgrade tool:
SQL> @$11g_ORACLE_HOME/rdbms/admin/utlu112i.sql
- The Pre-Upgrade Information Tool displays warnings about possible upgrade issues with the database.
- For invalid objects or invalid components, Oracle recommends running the utlrp.sql script before starting the upgrade as a means to minimize the number of invalid objects and components marked with WARNING.
- Take necessary measures based on the Pre-Upgrade tool output; in my case no corrective actions were required.
SOA Database 11.2.0.1 Pre-Upgrade Information Tool Sample Output
2 - Export the SOA Schema
- In my development environment, a schema-mode export of the SOA schema was performed using the Oracle Data Pump Export utility
- Refer to the Database Utilities Guide, Chapter 2: Data Pump Export
2.2 Create an export dump directory on the DB filesystem, say /u02/app/oracle/soa_expdp
2.3 Execute the SQL commands below as SYSDBA:
SQL> CREATE DIRECTORY soadumpdir AS '/u02/app/oracle/soa_expdp';
SQL> GRANT READ, WRITE ON DIRECTORY soadumpdir TO system;
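The export command itself was shown as an embedded snippet in the original post; a minimal Data Pump sketch follows, assuming the SOA schema is named DEV_SOAINFRA (the RCU schema prefix varies per install):
# expdp schemas=DEV_SOAINFRA directory=soadumpdir dumpfile=expschema_soainfra.dmp logfile=expschema_soainfra.log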
- When prompted for credentials, specify sys/<sys_password> as sysdba
- The SOA schema export dump file will be written to the specified location: /u02/app/oracle/soa_expdp/expschema_soainfra.dmp
3 - Install Database 11.2.0.4 Software
- Install the 11.2.0.4 Database software into a new Oracle home to perform an out-of-place upgrade
- Select the option to Install Database software only
4 - Perform Sanity Operations (before upgrade)
4.1 Check Invalid Objects
- Check for Invalid Objects. There should be no invalid objects in SYS and SYSTEM user schema.
SQL> select unique OBJECT_NAME, OBJECT_TYPE, OWNER from DBA_OBJECTS where STATUS='INVALID';
- Recompile invalid objects before the upgrade by running the utlrp.sql script, located in the $11204_ORACLE_HOME/rdbms/admin directory
SQL> @rdbms/admin/utlrp.sql
4.2 Check Duplicate Objects
- Always check for DUPLICATE objects in SYS/SYSTEM
SQL> select OBJECT_NAME, OBJECT_TYPE from DBA_OBJECTS where OBJECT_NAME || OBJECT_TYPE in (select OBJECT_NAME || OBJECT_TYPE from DBA_OBJECTS where OWNER='SYS') and OWNER='SYSTEM' and OBJECT_NAME not in ('AQ$_SCHEDULES_PRIMARY', 'AQ$_SCHEDULES', 'DBMS_REPCAT_AUTH');
- Fix DUPLICATE objects in SYS/SYSTEM BEFORE the upgrade
4.3 Check Invalid Components
- Always check for NON VALID components
SQL> select substr(COMP_ID, 1,10) compid, substr(COMP_NAME,1,24) compname, STATUS, VERSION from DBA_REGISTRY where STATUS<>'VALID';
- Fix all NON VALID components BEFORE upgrade
4.4 Purge Recycle Bin
- If upgrading from 10g or 11g, purge the recyclebin
SQL> purge DBA_RECYCLEBIN;
5 - Upgrade SOA Database Using 11.2.0.4 DB Upgrade Assistant (DBUA)
- Run the DB Upgrade Assistant from the 11.2.0.4 Oracle home
# cd $11.2.0.4_ORACLE_HOME
# ./bin/dbua
- Select the option to back up the database using DBUA (optional step)
- A summary of the steps performed during the database upgrade and the log files will be available at "$11.2.0.4_ORACLE_BASE/cfgtoollogs/dbua/<SID>/upgrade2"
6 - Post Upgrade
- Perform sanity checks for any additional invalid/duplicate objects and components post upgrade
- Add datafiles to the SOAINFRA tablespace as recommended in the SOA 12c Upgrade Guide
- As a good practice, refer to the official Oracle SOA 12c upgrade documentation for the latest updates
Recently, my team received an OVA file (VBX_Image_from_customer.ova) - a customer instance image - to validate a SOA Suite environment upgrade in-house. It would have been much easier to import the appliance into VirtualBox on our local environment (laptop/desktop) and get going. However, we had to host the appliance internally for the cross-development and QA teams, and the only dedicated hardware at our disposal was an Oracle VM Server 3.2 physical machine.
Basically, the problem was two-fold:
a) An OVA file cannot be specified as a disk parameter in the Oracle VM template configuration file vm.cfg
b) VirtualBox cannot be installed on an Oracle VM Server machine (because VirtualBox can't operate under another hypervisor)
In this write-up, I will share the main tasks executed to use the OVA file and create the guest virtual machine on the OVM Server host. An OVA (Open Virtual Appliance) file is nothing more than a TAR archive containing the .OVF and .VMDK files. For those interested, here is a good post by Mike on the different file formats and tools for virtualization.
-- Install the disk image conversion utility
# yum install kvm-qemu-img.x86_64
-- Extract the OVA file contents
# tar -xvf VBX_Image_from_customer.ova
# ls
- VBX_Image_from_customer-disk1.vmdk
- VBX_Image_from_customer.ovf
-- Determine if your version of QEMU supports VMDK Sparse by executing the following command # qemu-img info VBX_Image_from_customer-disk1.vmdk
-- If you get a message like the one below, VMDK Sparse is not supported:
<<qemu-img: Could not open 'VBX_Image_from_customer-disk1.vmdk'>>
-- Convert from vmdk to a raw img if your version of QEMU supports VMDK Sparse
# qemu-img convert -f vmdk -O raw VBX_Image_from_customer-disk1.vmdk developmentSOA.img
-- If the qemu-img option does not work, use the VBoxManage command-line tool that ships with VirtualBox
# VBoxManage clonehd VBX_Image_from_customer-disk1.vmdk --format RAW developmentSOA.img
-- Use the RAW image file in the vm.cfg 'disk' parameter; a sample file is below
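The sample vm.cfg was attached in the original post; below is a minimal sketch for an OVM Server 3.2 (Xen) guest. The repository path, memory, vCPU count and bridge name are placeholders to adjust for your environment:
name = 'developmentSOA'
memory = 4096
vcpus = 2
disk = ['file:/OVS/Repositories/<repo_id>/VirtualMachines/developmentSOA/developmentSOA.img,xvda,w']
vif = ['bridge=xenbr0']
vfb = ['type=vnc,vncunused=1,vnclisten=0.0.0.0']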
OpenStack - there are quite a lot of vendors (including large enterprises) gravitating toward the OpenStack ecosystem. It is a good thing that there is ample competition, which means more options for customers and, hopefully, better business models driving the enterprise cloud market. I look forward to researching it more from a technical standpoint.
In my view, a good starting point to learn more about the different technical features in OpenStack is to experiment with them in a small-scale local environment. My test bed is an Ubuntu 14.04 64-bit Linux system on an 8x Intel Xeon CPU machine with 8 GB of memory and two physical network interface cards. The goal is basically to set up an all-in-one configuration, where all the services, including compute services, are installed on the same node. A controller node is where most of the OpenStack services are configured, and it will be installed on my Ubuntu system.
Here, I will discuss a couple of quick OpenStack development environment setup options:
a) Using stable Git Icehouse repository OR
b) Using Vagrant Box
Option-A: Deploy OpenStack IceHouse using Git repo
1. Create a work directory for the OpenStack project, say $ICE_STACK_DIR
# mkdir /scratch/<user>/icehouse
2. Clone stable/icehouse git repository
- I used the NetBeans IDE to clone the Git repository branch to the workdir location; alternately, run the following command from $ICE_STACK_DIR:
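The embedded command snippet is not reproduced here; the standard clone of the stable branch would be:
# git clone -b stable/icehouse https://github.com/openstack-dev/devstack.git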
3. Modify the DevStack configuration file to override default settings as needed
- localrc was the user-maintained settings file used to configure DevStack; it is deprecated and has been replaced by local.conf. More details are in the DevStack configuration documentation.
Sample local.conf:
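The embedded sample is not reproduced here; a minimal local.conf sketch follows, with placeholder passwords and host IP:
[[local|localrc]]
ADMIN_PASSWORD=<your_password>
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
HOST_IP=<my.eth1.ipv4.address>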
4. Run the DevStack installation script stack.sh
- Read more about the stack.sh script in the official documentation
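To kick off the install (run stack.sh as the non-root stack user; the path below assumes the layout from step 1):
# cd $ICE_STACK_DIR/devstack
# ./stack.sh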
- The default services configured by DevStack are Identity (Keystone), Object Storage (Swift), Image Storage (Glance), Block Storage (Cinder), Compute (Nova), Network (Neutron), Dashboard (Horizon)
- During the install run, I hit errors like the one below:
cp: cannot create regular file '/etc/nova/policy.json': Permission denied
- The resolution was to edit the file work-dir/devstack/lib/nova and change the failing file-access occurrences to 'sudo cp'; e.g., the following changes were made:
sudo cp -p $NOVA_DIR/etc/nova/policy.json $NOVA_CONF_DIR
# Get the sample configuration file in place
sudo cp -p $NOVA_DIR/etc/nova/api-paste.ini $NOVA_CONF_DIR
sudo chown $STACK_USER $NOVA_CONF_DIR
- To give an estimate, deploying DevStack in my environment took 5-6 minutes (after multiple prior failed attempts)
5. Perform basic sanity tests
- Run the test scripts
# cd $ICE_STACK_DIR/devstack/tests
# ./functions.sh
# ./test_config.sh
- Run the exercise scripts
# cd $ICE_STACK_DIR/devstack/exercises
# ./horizon.sh
-- expect to see something like the following message printed on the console if everything goes well with the deployment
.............
+ set +o xtrace
*********************************************************************
SUCCESS: End DevStack Exercise: ./horizon.sh
*********************************************************************
6. Launch OpenStack Horizon Dashboard
- Go to URL http://my.eth1.ipv4.address
- Log on as the default user 'demo' or 'admin' with the password $ADMIN_PASSWORD set in local.conf
Here is a screenshot of the dashboard System Info panel
7. Try creating instances from the Dashboard
- Refer OpenStack Admin Guide for more details on managing the resources and services using the Horizon dashboard
8. Stopping and Restarting DevStack
- To stop all processes that were started by stack.sh:
# cd $ICE_STACK_DIR/devstack
# ./unstack.sh
- To restart DevStack:
# cd $ICE_STACK_DIR/devstack
# ./rejoin-stack.sh
Option-B. Deploy DevStack Icehouse using Vagrant
I found this blog article 'OpenStack Cloud Computing Cookbook' by Kevin very helpful in setting up my local development environment virtual machines.
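As a rough sketch of the Vagrant route (the box name and flow below are my assumptions, not taken from Kevin's article):
# vagrant init ubuntu/trusty64
# vagrant up
# vagrant ssh
-- inside the VM, clone the stable/icehouse devstack repo and run ./stack.sh exactly as in Option A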
Enterprise Architecture (EA) is an interesting topic for those in the IT industry. There are tons of articles on the web that talk about the role of the Enterprise Architect, and how this IT-centric function or responsibility is key to serving the entire business, including strategy and execution well beyond IT and IT solution delivery.
On the other hand, you may come across debates and lots of conflicting opinions on the differences in focus between an Enterprise Architect, a Business Architect, an Information Architect and a Process Architect. However, having spent considerable time in the enterprise software development space, it is my opinion that 'job titles' do not matter. As an enterprise IT professional, I have learnt that sound practices like clearly defining the problem statement, asking the right and relevant questions, focusing on priorities and a sense of humility will take us a long way toward achieving the business/technical goals that serve the wider global community. Well, this is a complex subject and I have in no way figured it all out. The best part is that it is a collaborative learning experience.
In this post, I will share my opinion on what 'Enterprise Architecture' means to me and use the Balanced Scorecard as a conceptual tool for illustrating EA as a means for business-IT alignment. This approach connects the business model to the EA vision and proposes design methods/perspectives to be measured in the Business, Applications, Data and Infrastructure areas.
In this section, I will share my experience profiling the Oracle SOA Suite application server using NetBeans 7.4 from a Windows 7 desktop, where the WLS SOA managed server is running on a remote Enterprise Linux 5 machine. Here are the main steps:
A. Choose Profile > Attach Profiler (Attach Profile icon) from the main menu of Netbeans IDE to open the Attach Profiler dialog box.
Specify the location of the application and the connection method.
B. Before profiling, perform the following steps:
Step 1: Make sure the target application is configured to run using Java 6+.
Step 2: If you have not done so before, create a remote profiling pack (click the link in the IDE message window) for the selected OS and JVM and upload it to the remote system.
- Download the profiling pack file 'profiler-server-linuxamd64.zip' that will be created and copy it to the remote location. (The extracted directory contents of the zip are shown below.)
profiler-server-linuxamd64
 |- bin
 |- lib
 - README
Step 3: If you have not run profiling on the remote system yet, run the <remote>/bin/calibrate-16.sh script first to calibrate the profiler.
You should see something like
profiler-server-linuxamd64]$ ./bin/calibrate-16.sh
Profiler Agent: JNI OnLoad Initializing...
Profiler Agent: JNI OnLoad Initialized successfully
Starting calibration...
Calibration performed successfully
For your reference, obtained results are as follows:
Approximate time in one methodEntry()/methodExit() call pair:
  When getting absolute timestamp only: 2.0325 microseconds
  When getting thread CPU timestamp only: 1.85 microseconds
  When getting both timestamps: 3.84 microseconds
Approximate time in one methodEntry()/methodExit() call pair in sampled instrumentation mode: 0.0425 microseconds
Step 4: Add the following parameter(s) to the SOA application startup script:
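The exact parameter was shown inline in the original post; the standard NetBeans remote-profiling agent argument looks like the sketch below, assuming the profiling pack was extracted to /u01/profiler-server-linuxamd64 and using port 5140 (which matches the agent log further down):
JAVA_OPTIONS="${JAVA_OPTIONS} -agentpath:/u01/profiler-server-linuxamd64/lib/deployed/jdk16/linux-amd64/libprofilerinterface.so=/u01/profiler-server-linuxamd64/lib,5140"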
Step 5: Start the target application. The process will wait for the profiler to connect.
On starting soa_server1, you should see something like this in the server diagnostics log:
.........
Profiler Agent: Waiting for connection on port 5140 (Protocol version: 14)
Step 6: Submit the settings to close the Attach Settings dialog
Step 7: Click the Attach button in the Attach Profiler dialog to connect to the target application server and resume its execution.
................
Profiler Agent: Established connection with the tool
Profiler Agent: Standard session
<Feb 11, 2014 3:04:43 PM PST> <Info> <Security> <BEA-090905> <Disabling the CryptoJ JCE Provider self-integrity check for better startup performance. To enable this check, specify -Dweblogic.security.allowCryptoJDefaultJCEVerification=true.>
<Feb 11, 2014 3:04:43 PM PST> <Info> <Security> <BEA-090906> <Changing the default Random Number Generator in RSA CryptoJ from ECDRBG to FIPS186PRNG. To disable this change, specify -Dweblogic.security.allowCryptoJDefaultPRNG=true.>
<Feb 11, 2014 3:04:43 PM PST> <Info> <WebLogicServer> <BEA-000377> <Starting WebLogic Server with Java HotSpot(TM) 64-Bit Server VM Version 24.45-b08 from Oracle Corporation.>
<Feb 11, 2014 3:04:44 PM PST> <Info> <Management> <BEA-141107> <Version: WebLogic Server 12.1.3.0.0 Wed Dec 11 11:57:28 PST 2013 1567342 >
<Feb 11, 2014 3:04:47 PM PST> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING.>
C. The IDE attaches to the remote JVM, and you can view the profiling data of the remote SOA WLS managed server just as for a local application.
- sample Snapshot of the VM Telemetry view displaying info on Classes loaded and running Threads
You can find more details about NetBeans 7.4 profiling capabilities in the official NetBeans 7.4 documentation chapter "Testing and Profiling Java Application Projects".
When working with Fusion Apps ESS, you may encounter errors during job submissions that have to do with remote rendering of job parameters. ESS (i.e. EssCentralUiApp) utilizes portlet technology to remotely invoke a business view that is implemented as a Schedule Request Submission (SRS) ADF task flow.
In such cases, the ESS server diagnostics logs will contain an exception stack trace like the one below:
Caused by: javax.naming.NameNotFoundException; remaining name 'ESS_SRS_PortletProducerName_<webModuleName>_<hostname>_com'
at oracle.adf.share.jndi.ContextImpl.findObject(ContextImpl.java:647)
at oracle.adf.share.jndi.ContextImpl.lookup(ContextImpl.java:144)
at oracle.adf.share.jndi.ContextImpl.lookup(ContextImpl.java:149)
at javax.naming.InitialContext.lookup(InitialContext.java:392)
at oracle.portlet.client.techimpl.wsrp.WSRPClientUtils.getProducerConnection(WSRPClientUtils.java:118)
at oracle.portlet.client.techimpl.wsrp.WSRPClientUtils.getProducerConnection(WSRPClientUtils.java:98)
======================================================================
oracle.apps.fnd.applcp.srs.model.exception.PortletProducerRegistrationException: Fail to lookup producer id with dynamic registration
at oracle.apps.fnd.applcp.srs.model.srsService.applicationModule.PortletRegistrationServiceAMImpl.lookupPortletProducerIdEx(PortletRegistrationServiceAMImpl.java:187)
at oracle.apps.fnd.applcp.srs.model.srsService.applicationModule.PortletRegistrationServiceAMImpl.lookupPortletProducerId(PortletRegistrationServiceAMImpl.java:133)
at oracle.apps.fnd.applcp.srs.view.backing.ScheduleRequestSwitcher.computeTaskFlowId(ScheduleRequestSwitcher.java:105)
at oracle.apps.fnd.applcp.srs.view.backing.ScheduleRequestSwitcher.<init>(ScheduleRequestSwitcher.java:43)
at sun.reflect.GeneratedConstructorAccessor1056.newInstance(Unknown Source)
=======================================================================
These errors typically indicate one of two things: either the portlet producer registration succeeded but there is a problem on the portlet binding side, or the dynamic portlet producer registration on the EssCentralUiApp side failed and a valid portlet instance id could not be obtained. Usually, you can get past these errors by manually deleting the portlet producer registration entry to force ESS to dynamically register the producer again.
Remember, the portlet producer lookup for dynamic registration is based on a specific name derived from the value of the ESS jobDefinition property 'EXT_PortletContainerWebModule'. You can either follow these steps to delete the producer associated with EssCentralUiApp from Enterprise Manager (EM), or use WLST commands (a sketch follows the steps below).
1. Log in to the EM console of the ESS server associated with the Common Domain
2. Navigate to the WebCenter Service Configuration page as shown
3. Select "portlet producer" and inspect the list of registered portlet producers
4. Look for the offending webmodule entry from the exception stack; you should see only one entry
5. Delete the producer entry
(Sometimes bouncing the server may help the changes take effect.)
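For the WLST route, a sketch of the WebCenter deregistration command is below. The admin URL and server name are illustrative, the producer name should be taken from the exception stack, and this assumes WLST is launched from an Oracle home that includes the WebCenter scripts:
# connect to the CommonDomain admin server first
wls:/offline> connect('weblogic', '<password>', 't3://<adminhost>:7001')
wls:/CommonDomain/serverConfig> deregisterWSRPProducer(appName='EssCentralUiApp', producerName='ESS_SRS_PortletProducerName_<webModuleName>_<hostname>_com', server='ess_server1')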
In other cases, you may hit errors caused by a stale WebService connection entry that needs to be removed - exceptions like:
.........
[ess_server1] [ERROR] [] [oracle.apps.fnd.applcp.srs] ....[APP: EssCentralUiApp] .. [APPS_SESSION_ID: F145330E505C6A13E0436810C10A43B7] Register Producer failed due to errors. @ Connection exists: @ ESS_SRS_PortletProducerName_<webModuleName>_<hostname>_com-wsconn
at oracle.apps.fnd.applcp.srs.model.srsService.applicationModule.PortletRegistrationServiceAMImpl.lookupPortletProducerIdEx(PortletRegistrationServiceAMImpl.java:187)
at oracle.apps.fnd.applcp.srs.model.srsService.applicationModule.PortletRegistrationServiceAMImpl.lookupPortletProducerId(PortletRegistrationServiceAMImpl.java:133)
at oracle.apps.fnd.applcp.srs.view.backing.ScheduleRequestSwitcher.computeTaskFlowId(ScheduleRequestSwitcher.java:105)
at oracle.apps.fnd.applcp.srs.view.backing.ScheduleRequestSwitcher.<init>(ScheduleRequestSwitcher.java:43)
at sun.reflect.GeneratedConstructorAccessor1056.newInstance(Unknown Source)
Here, you can go through the System MBean Browser in EM to delete the portlet producer WS connection entry.
1. Log in to EM in the CommonDomain and launch the System MBean Browser.
2. Navigate to the "ADFConnections" MBean:
Application Defined MBeans -> oracle.adf.share.connections -> Server: ess_server1 -> Application: EssCentralUiApp -> ADFConnections -> ADFConnections
3. Expand the "WebServiceConnection" node and verify that the stale connection 'ESS_SRS_PortletProducerName_<webModuleName>_<hostname>_com-wsconn' exists
4. Navigate back to the "Operations" tab and click the 'removeConnection' operation
After deleting the WS Connection entry, you may also want to delete the portlet producer registration entry as an extra measure.
For more information, refer to the Fusion Applications Administrator's Guide chapter on "Troubleshooting Oracle WebCenter Portlets".
I thought I would share some information on how the ESS submission interface is designed to accept job parameters as user input.
The ESS Global Submission UI is the primary interface for business users to submit job requests. The central submission interface is hooked up to the Navigator menu in the Fusion Applications UI Shell [Navigator -> Tools -> Schedule Processes]. The extended functionality of the ESS Global Submission UI allows users to submit job requests across the CRM/HCM/FSCM domains in the Fusion Apps (FA) topology.
The ESS CentralUi application deployed to the FA Common Domain acts as the portlet consumer and dynamically registers portlet producers at runtime. In the FA context, these portlet producers are basically remote web applications (deployed across the Financials/CRM/HCM domain WebLogic managed servers) that host the parameters ViewObject (VO) to facilitate the collection of user input as parameters for ESS job request runtime execution.
The Oracle Portlet Bridge exposes JSF applications and task flows as JSR 168 portlets. In Oracle Fusion Applications, portlets are WSRP portlets. The purpose of the Web Services for Remote Portlets (WSRP) protocol is to provide a web services standard that allows for the visual "plug-n-play" of remote running portlets from disparate sources.
Just as a web application relies on the servlet container to invoke custom-developed servlets, a portal application relies on the portal container (i.e. the Oracle WebCenter implementation), which uses the portlet API to invoke portlets. The ESS CentralUi application contacts the portlet producers that provide the portlets (i.e. the ESS SRS task flow with the parameters VO exposed as a portlet by the portlet bridge) to be rendered on the ESS job submission page. The figure below illustrates the basic architecture of the ESS consumer web application EssCentralUiApp's interaction with the portlet producers.
Note: The portlet producer look-up for dynamic registration is based on a specific name derived from the value of the ESS jobDefinition property 'EXT_PortletContainerWebModule'.
Refer to the official Oracle WebCenter documentation for more details on the portlet and portlet bridge implementation.