Sunday, October 1, 2017

Implementing OAuth2 flow in RESTful APIs/Services using Google Authorization Server

A RESTful API exposes a service's business value, so protecting the data served through RESTful endpoints should always be a high priority. Aside from the use of TLS/HTTPS, the most important layer of RESTful API security centers on authentication. This article focuses on OAuth2, which can perform pseudo-authentication through delegation.

OAuth is a standard for delegating authorization to resources and is a common mechanism many API developers/providers use to accomplish user authentication. However, OAuth is not an authentication protocol; its core functionality is delegating a user's consent to access the protected resource / data / business logic. 'Delegated access' means the user is not present on the connection between the client and the resource being accessed. OAuth.net is a great reference to learn more about the topic.

This page describes how to implement OAuth 2.0 basic flow within a Dropwizard RESTful application with Google as the Authorization Server.

At a high level, you follow four steps:
  1. Obtain OAuth 2.0 credentials from the Google API Console.
  2. Register the client application user with Google, sign the user in and obtain an access token.
  3. The REST client includes the access token it receives from Google in every request it sends to the Dropwizard application.
  4. The Dropwizard OAuth authentication implementation validates the access token and if valid, serves the request to protected resources.
The following sequence diagram captures the OAuth2 flow interaction between the REST Client, RESTful application (Resource Server) and the OAuth2 Server (Google Authorization Servers). 

In this sample OAuth implementation, the authorization server is Google and is separate from the resource server which is a Dropwizard application. 

1. Obtain OAuth 2.0 Credentials

An OAuth Client-ID is needed to use OAuth 2.0 in your application.

To begin, obtain OAuth 2.0 credentials, such as a client ID known to both Google and your application, from the Google API Console. Select the appropriate application type for your project and follow the instructions in Setting up Google OAuth 2.0 to create the OAuth client ID. The Client-ID will be used by the application when requesting an OAuth 2.0 access token.


2. Register the Client application User and obtain Access Token

Using the Client-ID from step-1, simulate a client-side OAuth 2.0 flow.

a) First, get the Authorization-Code.
Simply add your OAuth Client-ID credential to a URI and get the authorization code by sending the HTTPS request through your web browser. This authorization flow supplies a local redirect URI to handle responses from Google's authorization server.

For example, send the URI request to Google's OAuth 2.0 server:
      https://accounts.google.com/o/oauth2/v2/auth?
                             scope=profile&
                             response_type=code&
                             redirect_uri=urn:ietf:wg:oauth:2.0:oob&
                             client_id=<<ENTER-CLIENT-ID>>

This Google OAuth 2.0 endpoint handles active session lookup, authenticates the user, and obtains user consent. Log in to your Google account and copy the output, i.e., the Authorization-Code.

b) Next, obtain the Access Token using the authorization-code
Use the authorization-code to get the access token (returned in JSON format). The manner in which your application receives the authorization response depends on the redirect-URI scheme that it uses.

Exchange the authorization code for refresh and access tokens by calling the https://www.googleapis.com/oauth2/v4/token endpoint with the code, client_id, client_secret, redirect_uri, and grant_type parameters. For example, where the credential's Application type is "Other", you can click the Download JSON button to get the client_secret.json file containing the client secret.
      https://www.googleapis.com/oauth2/v4/token?
                  code=<enter-authorization_code>&
                  client_id=<enter-client_id>&
                  client_secret=<enter-client_secret>&
                  redirect_uri=urn:ietf:wg:oauth:2.0:oob&
                  grant_type=authorization_code
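
Note that Google's token endpoint expects these parameters form-encoded in the body of an HTTPS POST request rather than on the query string. For illustration, here is a minimal JDK-only Java sketch of the exchange (the angle-bracket placeholders mirror the parameters above and real values must be URL-encoded; this is a sketch, not production code):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class TokenExchange {
    public static void main(String[] args) throws Exception {
        // Form-encoded body carrying the same parameters shown above
        String body = "code=<enter-authorization_code>"
                + "&client_id=<enter-client_id>"
                + "&client_secret=<enter-client_secret>"
                + "&redirect_uri=urn:ietf:wg:oauth:2.0:oob"
                + "&grant_type=authorization_code";

        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://www.googleapis.com/oauth2/v4/token").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // Print the JSON token response returned by Google
        try (Scanner s = new Scanner(conn.getInputStream(), "UTF-8").useDelimiter("\\A")) {
            System.out.println(s.hasNext() ? s.next() : "");
        }
    }
}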
 
Google responds to this request by returning a JSON object that contains a short-lived access token and a refresh token.
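
The response has roughly the following shape (values below are placeholders; a scope field may also be present):

{
  "access_token": "<access-token-value>",
  "expires_in": 3600,
  "refresh_token": "<refresh-token-value>",
  "token_type": "Bearer"
}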

Follow the Google instructions (steps 2-5) in Obtaining OAuth 2.0 access tokens for more details.


3. Include the Access Token in every request to fetch protected REST Resource:-

After your application obtains an access token, you can use the token to make calls to the REST endpoints. The client application must include the access token it receives from Google in every request it sends to the resource server, which in this example is a Dropwizard application.

For example, to invoke the API via the curl command line, include the access token either as an access_token query parameter or in an Authorization: Bearer HTTP header.

     curl -H "Authorization: Bearer <access_token>" https://<my-apis-domain>/<rest-endpoint>

4. Validate OAuth2 Access Token:-

Finally, it is important for the REST/Dropwizard application to implement a mechanism for validating the access token, as it otherwise has no way of differentiating between a valid token and an attack token. This risk can be mitigated by using the authorization code flow and accepting tokens only directly from the authorization server's token endpoint.

To set up OAuth 2.0 authentication within Dropwizard, it is best to follow the instructions in the Dropwizard Authentication tutorial. The key point is that an AuthFilter needs to be created and registered as a Jersey provider. The AuthFilter will then be applied to every request sent to the server before the request is dispatched to a resource method. The filter extracts the access token from a request and asks the Authenticator (i.e. GoogleAuthenticator in this example) to authenticate the user and return a principal object.
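
For illustration, a minimal sketch of this wiring inside the Application's run() method might look like the following (AuthDynamicFeature, OAuthCredentialAuthFilter, and AuthValueFactoryProvider come from the dropwizard-auth module; the GoogleAuthenticator constructor arguments here are assumptions for this example):

// Register an OAuth AuthFilter so every request carrying an
// "Authorization: Bearer <token>" header is authenticated before dispatch.
environment.jersey().register(new AuthDynamicFeature(
        new OAuthCredentialAuthFilter.Builder<Principal>()
                .setAuthenticator(new GoogleAuthenticator(client, clientId))
                .setPrefix("Bearer")
                .buildAuthFilter()));
// Let @Auth-annotated resource method parameters receive the Principal.
environment.jersey().register(new AuthValueFactoryProvider.Binder<>(Principal.class));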

Here is sample code (for demonstration only, not to be used in production) for a Dropwizard OAuth2 Google Authenticator.
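
A minimal sketch, assuming the token is verified against Google's tokeninfo endpoint and the audience ("aud") claim is checked against this application's client ID (the class shape and constructor arguments are illustrative, not a fixed API):

import java.security.Principal;
import java.util.Map;
import java.util.Optional;

import javax.ws.rs.client.Client;
import javax.ws.rs.core.GenericType;
import javax.ws.rs.core.Response;

import io.dropwizard.auth.AuthenticationException;
import io.dropwizard.auth.Authenticator;

public class GoogleAuthenticator implements Authenticator<String, Principal> {

    private static final String TOKENINFO_URL =
            "https://www.googleapis.com/oauth2/v3/tokeninfo";

    private final Client client;    // JAX-RS client (e.g. built with Dropwizard's JerseyClientBuilder)
    private final String clientId;  // OAuth 2.0 Client-ID obtained in step 1

    public GoogleAuthenticator(Client client, String clientId) {
        this.client = client;
        this.clientId = clientId;
    }

    @Override
    public Optional<Principal> authenticate(String accessToken) throws AuthenticationException {
        // Ask Google to describe the token; a non-200 response means the token
        // is expired, revoked, or malformed.
        Response response = client.target(TOKENINFO_URL)
                .queryParam("access_token", accessToken)
                .request()
                .get();
        if (response.getStatus() != 200) {
            return Optional.empty();
        }
        Map<String, Object> tokenInfo =
                response.readEntity(new GenericType<Map<String, Object>>() {});
        // Reject tokens that were issued to a different application.
        if (!clientId.equals(tokenInfo.get("aud"))) {
            return Optional.empty();
        }
        // Expose the Google account id ("sub" claim) as the principal name.
        String subject = String.valueOf(tokenInfo.get("sub"));
        return Optional.of(() -> subject);
    }
}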
   
The Dropwizard OAuth2 implementation validates the access token and if valid, serves the request to protected resources.
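
For completeness, a protected resource could then look like the following sketch (the resource path and class name are hypothetical):

import java.security.Principal;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import io.dropwizard.auth.Auth;

@Path("/hello")
@Produces(MediaType.APPLICATION_JSON)
public class HelloResource {

    // The @Auth parameter is populated only after the AuthFilter and
    // GoogleAuthenticator have accepted the Bearer token.
    @GET
    public String hello(@Auth Principal user) {
        return "hello, " + user.getName();
    }
}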

Sunday, August 24, 2014

Identity Touch Points - Applications Architecture Perspective

Identity Management is a mature IT concept and is the foundation of good security and solid regulatory compliance. Today, with the paradigm shift toward cloud computing and mobility, businesses need to prepare their IT organizations to associate users' identities and entitlements and to extend access privileges beyond the traditional corporate perimeter. This change introduces new risks, and it also prompts new questions about ownership and control of digitized information.

However, what attributes do services/applications consider to define a user's or business's unique identity? From a technology standpoint, identity products can be broadly classified into 3 pillars based on their feature offerings/solutions:


1) Identity Governance
User/Entitlement provisioning, Password/Profile management, Self-service Access/Roles catalog management

2) Access Management
Single sign-on, Authorization, Authentication

3) Identity Platform Services
Policy Compliance and Enforcement, Security Stores, Replication, Synchronization

Mapping these core identity functions/features into the enterprise application architecture space is quite challenging. In reality, our understanding of a business entity's (person/tenant/business) identity is built upon an incomplete set of attributes that the application/service architecture deems sufficient to differentiate one entity from another. But this attribute set is generally far from complete, and a wide range of perceptions exist regarding what is acceptable to uniquely define an entity's identity in the application/service domain. Application architecture must accept a level of risk and be willing to offer service on the basis that a business/user's identity definition is "good enough" for the purpose for which the application/service is going to use it.

Here is my attempt at capturing the high-level identity touch points that matter to the application-architecture design across the enterprise technology domain. 








Saturday, July 19, 2014

Configuring Oracle Traffic Director Enterprise Load-Balancer for SOA Suite Deployment on Exalogic

Load balancing enables you to achieve horizontal scalability and higher levels of fault tolerance for applications and SOA Suite web services. Load balancers can be implemented at either Layer 7 (application layer) or Layer 4 (transport layer) to achieve high availability and reliability of applications. Layer-7 load balancers provide a user-forwarding service and distribute requests based upon data found in application-layer protocols (HTTP), while Layer-4 load balancers act upon data found in network- and transport-layer protocols. In a high-availability environment, both Layer 4 and Layer 7 are used. These load balancers come as software and/or hardware appliances from various vendors (viz. F5, Brocade, Cisco, Oracle).

Layer-7 load balancers can further distribute requests based on application-specific data such as HTTP headers, cookies, or data within the application message itself, such as the value of a specific parameter. Here, I will discuss how Oracle's Layer-7 software load balancer, Oracle Traffic Director, can be configured in an Exalogic system for Oracle SOA Suite deployments.


Oracle Traffic Director (OTD) is a complete software Layer-7 load balancing solution that can facilitate the scalability of your SOA Suite deployment by distributing incoming traffic to multiple servers or virtual machines inside Exalogic.  The architecture of Traffic Director enables it to handle large volumes of SOA application traffic with low latency.  It can communicate with SOA Suite servers in the back end over Exalogic's InfiniBand fabric. Here is a proposed sample topology to configure OTD for the Oracle SOA Suite VM Template deployed to Exalogic Elastic Cloud. In a production deployment, the XE database will be replaced with a supported database version or Exadata.


Here are the high-level steps in configuring the Oracle Traffic Director for SOA Suite deployment on Exalogic:

[A - Install and Create Traffic Director Instance]

1. Create vServers to host OTD server instances on the Exalogic Compute Node
    - use Exalogic Control to create vServers (see Exalogic Elastic Cloud Administrator's Guide - Chapter Creating vServers )
    - Create OTD-vServer1 to host the OTD Administration Server  [OTDHOST1]
    - Create OTD-vServer2 to host the OTD Administration Node   [OTDHOST2]
 
2. Install OTD on a shared ZFS storage (say $ORACLE_TRAFFIC_DIRECTOR_HOME) that is accessible to both OTD-vServer1 & OTD-vServer2

3. Create the OTD Administration Server on OTDHOST1

# $ORACLE_TRAFFIC_DIRECTOR_HOME/bin/tadm configure-server --user=otd_admin --server-user=root --instance-home=/u01/app/oracle/config/otd/admin/ --host=OTDHOST1

4. Start the OTD Admin Server on OTDHOST1 as user root

# /u01/app/oracle/config/otd/admin/admin-server/bin/startserv

5. Create an Administration Node on OTDHOST2 as user root
   -  Execute the command below to create an administration node and register it with the remote administration server: https://OTDHOST1:8989  

  # $ORACLE_TRAFFIC_DIRECTOR_HOME/bin/tadm configure-server --user=otd_admin --host=OTDHOST1 --port=8989 --admin-node --node-host=OTDHOST2 --node-port=8900 --instance-home=/u01/app/oracle/config/otd/admin/instances
   
Refer to the Oracle Traffic Director documentation, Chapter 5, Creating the Administration Server and an Administration Node.

[B - Configure Traffic Director to Route Requests to SOA Servers]

In order to start using OTD, you need to create a 'configuration'. A 'configuration' is a collection of elements that determine the run-time behavior of an OTD instance.

1. Create a configuration named SOAEXA by using the OTD administration console
2. Start or restart OTD instances using the OTD admin Console
3. Define OTD Virtual Servers for the Oracle SOA Suite deployment
- create the following origin-server pools using the administration console, e.g. soa-pool, osb-pool
4. Update the Host Pattern Served by the SOAEXA Virtual Server
5. Deploy the OTD SOAEXA configuration to create an instance of it on the administration node
6. Configure OTD to Create Virtual Server Routes for the WebLogic SOA domain Managed Servers   
7. Create 'Route Conditions' (e.g. osb-route, soa-route) for each of the SOA Suite virtual servers configured
8. Validate Access through OTD for the SOA/OSB managed servers
9. Configure Server Callback URL and OTD URL properties in the SOA Infrastructure
   - set both the Server and HTTP Server URLs to point to OTD's virtual server (Exalogic InfiniBand address)
10. Configure SSL for OTD by turning on the WebLogic plug-in enabled flag, since the load balancer terminates SSL requests for security purposes

Chapters 7, 8, and 9 of the "Oracle Exalogic Elastic Cloud Enterprise Deployment Guide for Oracle SOA Suite" are particularly useful for carrying out these configurations.

Oracle Traffic Director adds a number of key capabilities, such as intelligent routing, compression, and content caching, among others. Refer to the OTD white paper as well for more details on configuring OTD on Exalogic.


Sunday, June 22, 2014

Upgrading Oracle SOA Suite Database to 11.2.0.3+

In this post, I will share my experience upgrading a development SOA database from version 11.2.0.1 to 11.2.0.4. This is a prerequisite that many customers will need to address first if they plan to adopt the next major release of SOA Suite 12c (12.1.3.0.0 - GA mid Y2014). To clarify, I am referring here to the minimum required version for the SOA Suite 12c infrastructure, i.e. the SOAINFRA database (ver 11.2.0.3), which is certified to function with no database vendor other than Oracle.

The high-level details of the tasks executed are described below. Please consult the relevant Oracle Database Upgrade Guides for more details. (Upgrading to the New Release of  Oracle Database)

1 - Run the Database Pre-Upgrade Information Tool 


- Connect to the SOA 11g database as a user with SYSDBA privileges to run the Pre-Upgrade tool:
        SQL> @$11g_ORACLE_HOME/rdbms/admin/utlu112i.sql

- The Pre-Upgrade Information Tool displays warnings about possible upgrade issues with the database.

- For invalid objects or invalid components, Oracle recommends running the utlrp.sql script before starting the upgrade as a means to minimize the number of invalid objects and components marked with WARNING.
- Take necessary measures based on the Pre-Upgrade tool output; in my case no corrective actions were required.

SOA Database 11.2.0.1 Pre-Upgrade Information Tool Sample Output 

2 - Backup SOA Schemas (in 11.2.0.1)


2.1 Take necessary SOA schema backups

- in my development environment, a schema-mode export of the SOA schema was performed using Oracle Data Pump Export utility
- Refer Database Utilities Guide Chapter 2: Data Pump Export

2.2  Create export dump directory on the DB filesystem, say /u02/app/oracle/soa_expdp

2.3  Execute below SQL commands as SYSDBA:
  SQL> CREATE DIRECTORY soadumpdir AS '/u02/app/oracle/soa_expdp';

  SQL> GRANT READ, WRITE ON DIRECTORY soadumpdir TO system;

2.4  Run Data Pump export utility
   # cd $11g_ORACLE_HOME
   # ./bin/expdp schemas=SOADEV_SOAINFRA directory=soadumpdir \
        dumpfile=expschema_soainfra.dmp logfile=expschema_soainfra.log

(where 'SOADEV' is the SOAINFRA schema prefix)

-  on prompt for credentials specify sys/<sys_password> as sysdba
-  SOA schema export dump file will be copied over to location specified  /u02/app/oracle/soa_expdp/expschema_soainfra.dmp

3 - Install Database 11.2.0.4 Software

-  Install 11.2.0.4 Database software on a new Oracle home to perform an out-of-place upgrade
-  Select the option to Install Database software only

4 - Perform Sanity Operations (before upgrade)

4.1 Check Invalid Objects
- Check for invalid objects. There should be no invalid objects in the SYS and SYSTEM schemas.

   SQL> select unique OBJECT_NAME, OBJECT_TYPE, OWNER from DBA_OBJECTS where STATUS='INVALID';
- Recompile invalid objects before the upgrade by running the utlrp.sql script, located in the $11204_ORACLE_HOME/rdbms/admin directory

   SQL> @rdbms/admin/utlrp.sql
4.2 Check Duplicate Objects

- Always check for DUPLICATE objects in SYS/SYSTEM

   SQL> select OBJECT_NAME, OBJECT_TYPE from DBA_OBJECTS where OBJECT_NAME || OBJECT_TYPE in (select OBJECT_NAME || OBJECT_TYPE from DBA_OBJECTS where OWNER='SYS') and OWNER='SYSTEM' and OBJECT_NAME not in ('AQ$_SCHEDULES_PRIMARY', 'AQ$_SCHEDULES', 'DBMS_REPCAT_AUTH');

- Fix DUPLICATE objects in SYS/SYSTEM BEFORE upgrade

4.3 Check Invalid Components


- Always check for NON VALID components

SQL> select substr(COMP_ID, 1,10) compid, substr(COMP_NAME,1,24) compname, STATUS, VERSION from DBA_REGISTRY where STATUS<>'VALID';

- Fix all NON VALID components BEFORE upgrade

4.4 Purge Recycle Bin


- If upgrading from 10g or 11g, purge the recyclebin

SQL> purge DBA_RECYCLEBIN;

5 - Upgrade SOA Database Using 11.2.0.4 DB Upgrade Assistant (DBUA)

- Run the DB Upgrade Assistant from the 11.2.0.4 Oracle home

   # cd $11.2.0.4_ORACLE_HOME
   # ./bin/dbua

- Select the option to back up the database using DBUA (optional step)
- A summary of the steps performed during the database upgrade and the log files will be available at "$11.2.0.4_ORACLE_BASE/cfgtoollogs/dbua/<SID>/upgrade2"

6 - Post Upgrade

- Perform sanity checks for any additional invalid/duplicate objects and components post upgrade
- Add datafiles to the SOAINFRA tablespace as recommended in the SOA 12c Upgrade Guide. As a good practice, refer to the official Oracle SOA 12c Upgrade documentation for the latest updates

Monday, June 9, 2014

Converting OVA file for use with Oracle VM Server Xen Commands

Recently, my team received an OVA file (VBX_Image_from_customer.ova) - a customer instance image to validate a SOA Suite environment upgrade in-house. It would have been much easier to import the appliance into VirtualBox on our local env (laptop/desktop) and get going. However, we had to host the appliance internally for cross-development and QA teams, and the only dedicated hardware at our disposal was an Oracle VM Server 3.2 physical machine.

Basically, the problem was two-fold:
a) OVA file cannot be specified as a disk parameter in the Oracle VM template configuration vm.cfg
b) VirtualBox cannot be installed on an Oracle VM Server machine (because VirtualBox can't operate under another hypervisor)

In this write-up, I will share the main tasks executed to use the ova file and create the guest virtual machine on the OVM Server host.   The OVA (Open Virtual Appliance) file is nothing more than a TAR archive, containing the .OVF and .VMDK files. For those interested, here is a good post by Mike on the different file formats and tools for virtualization.

-- Install disk image conversion utility
# yum install kvm-qemu-img.x86_64

-- Extract the OVA file contents
# tar -xvf VBX_Image_from_customer.ova
# ls
    - VBX_Image_from_customer-disk1.vmdk
    - VBX_Image_from_customer.ovf

-- Determine if your version of QEMU supports VMDK Sparse by executing the following command
# qemu-img info VBX_Image_from_customer-disk1.vmdk

-- if you get a message like below then VMDK Sparse is not supported
<<qemu-img: Could not open 'VBX_Image_from_customer-disk1.vmdk'>>

-- Convert from vmdk to raw img if your version of QEMU supports VMDK Sparse
# qemu-img convert -f vmdk -O raw VBX_Image_from_customer-disk1.vmdk developmentSOA.img

-- Otherwise, use the VBoxManage command-line tool that ships with VirtualBox
# VBoxManage clonehd VBX_Image_from_customer-disk1.vmdk --format RAW developmentSOA.img

-- Use the RAW image file in the vm.cfg 'disk' parameter; sample file below

bootloader = '/usr/bin/pygrub'
device_model = '/usr/lib/xen/bin/qemu-dm'
disk = ['file:<path-to-OVS-repository>/developmentSOA.img,hda,w']
memory = '8192'
maxmem = '8192'
OVM_simple_name = 'MyCompany SOA V2'
name = 'SOA_V2_MYCOMP'
OVM_os_type = 'Oracle Linux 5'
vcpus = 4
uuid = 'e405f7ea-80bb-4a14-97b2-cf969077e25a'
on_crash = 'restart'
on_reboot = 'restart'
keymap = 'en-us'
vnc = 1
vncconsole = 1
vnclisten = '127.0.0.1'
vncpasswd = ''
vncunused = 1
vif = ['bridge=xenbr0']
timer_mode = 2
expose_host_uuid = 1

-- Create VM guest instance using Xen command
# xm create vm.cfg -c

You should be all set to start up the customer guest virtual machine.

Tuesday, May 27, 2014

Quick Setup of OpenStack Icehouse Development Env on Ubuntu14

OpenStack - there are quite a lot of vendors (including large enterprises) gravitating around the OpenStack ecosystem. It is a good thing that there is ample competition, which means more options for customers and hopefully better business models driving the enterprise cloud market. I look forward to researching it more from a technical standpoint.

In my view, a good starting point to learn about the different technical features in OpenStack is to experiment with them in a small-scale local environment. My test bed is an Ubuntu 14.04 64-bit Linux system on an 8x Intel Xeon CPU machine, with 8G memory and two physical network interface cards. The goal is to set up an all-in-one configuration, where all the services, including compute services, are installed on the same node. A controller node is where most of the OpenStack services are configured, and it will be installed on my Ubuntu system.

Here, I will discuss a couple of quick OpenStack development environment setup options:
    a) Using stable Git Icehouse repository
                OR
    b) Using Vagrant Box

Option-A:  Deploy OpenStack IceHouse using Git repo

1. Create work directory for OpenStack project, say $ICE_STACK_DIR
 
    # mkdir /scratch/<user>/icehouse  

2. Clone stable/icehouse git repository

   - I used the NetBeans IDE to clone the Git repository branch to the work-dir location; alternatively, run the following command from $ICE_STACK_DIR
 
    # git clone -b stable/icehouse https://github.com/openstack-dev/devstack.git
   
3. Modify Devstack configuration file to override default settings as needed

  - localrc is a user-maintained settings file used to configure DevStack. It is deprecated and has been replaced by local.conf. More details here

  Sample local.conf: 
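
  A minimal example is shown below (illustrative values only - set HOST_IP to your host's address and choose your own passwords and service token):

  [[local|localrc]]
  HOST_IP=<my.eth1.ipv4.address>
  ADMIN_PASSWORD=<choose-a-password>
  DATABASE_PASSWORD=$ADMIN_PASSWORD
  RABBIT_PASSWORD=$ADMIN_PASSWORD
  SERVICE_PASSWORD=$ADMIN_PASSWORD
  SERVICE_TOKEN=<choose-a-token>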



4. Install DevStack as a non-root user

   # cd $ICE_STACK_DIR/devstack
   # ./stack.sh

  - Read more about the stack.sh script in the official documentation

 - The default services configured by DevStack are Identity (Keystone), Object Storage (Swift), Image Storage (Glance), Block Storage (Cinder), Compute (Nova), Network (Neutron), Dashboard (Horizon)

  - During the install run, I hit errors like the one below:
        cp: cannot create regular file '/etc/nova/policy.json': Permission denied
     
    the resolution was basically to edit the file work-dir/devstack/lib/nova and change the failing file-access occurrences to 'sudo cp'; e.g., the following changes were made

        sudo cp -p $NOVA_DIR/etc/nova/policy.json $NOVA_CONF_DIR

        # Get the sample configuration file in place
        sudo cp -p $NOVA_DIR/etc/nova/api-paste.ini $NOVA_CONF_DIR
        sudo chown $STACK_USER $NOVA_CONF_DIR

   - To give an estimate, deploying DevStack in my environment took between 5 and 6 minutes (after multiple prior failed attempts)

5. Perform basic sanity tests

   - Run the test scripts
         # cd $ICE_STACK_DIR/devstack/tests
         # ./functions.sh
         # ./test_config.sh
   
   - Run the exercise scripts
        # cd $ICE_STACK_DIR/devstack/exercises
   
       # ./horizon.sh 
   
        -- expect to see something like the following message printed on the console if everything goes well with the deployment
        .............
        + set +o xtrace
        *********************************************************************
        SUCCESS: End DevStack Exercise: ./horizon.sh
        *********************************************************************

6. Launch OpenStack Horizon Dashboard

   - Go to URL http://my.eth1.ipv4.address
   - Log on as the default user 'demo' or 'admin' with the password $ADMIN_PASSWORD set in local.conf

   Here is a screenshot of the dashboard System Info panel



7. Try creating instances from the Dashboard

   - Refer OpenStack Admin Guide for more details on managing the resources and services using the Horizon dashboard    

8.  Stopping and Restarting DevStack

   - To stop all processes that were started by stack.sh
        # cd $ICE_STACK_DIR/devstack
        # ./unstack.sh
   
   - To restart DevStack
        # cd $ICE_STACK_DIR/devstack
        # ./rejoin-stack.sh

Option-B. Deploy DevStack Icehouse using Vagrant

I found this blog article 'OpenStack Cloud Computing Cookbook' by Kevin very helpful in setting up my local development environment virtual machines.

Installed the following:

1. VirtualBox 4.3.10
2. Vagrant 1.4.3
3. Vagrant Cachier plugin    
     
and then followed the instructions as-is; they just work as documented.

Hoping to share my experiments with OpenStack as I learn more ....

Saturday, April 5, 2014

Enterprise Architecture Balanced Scorecard

Enterprise Architecture (EA) is an interesting topic for those in the IT industry. There are tons of articles on the web that talk about the role of the Enterprise Architect, and how this IT-centric function or responsibility is key to serving the entire business, including strategy and execution well beyond IT and IT solution delivery.

On the other hand, you may come across debates and lots of conflicting opinions on the differences in focus between an Enterprise Architect, a Business Architect, an Information Architect, and a Process Architect. However, having spent considerable time in the enterprise software development space, it is my opinion that 'job titles' do not matter. As an Enterprise IT professional, I have learned that sound practices like clearly defining the problem statement, asking the right/relevant questions, focusing on priorities, and a sense of humility will take us a long way in helping achieve the business/technical goals to serve the wider global community. Well, this is a complex subject and I have in no way figured it all out. The best part is that it is a collaborative learning experience.

In this post, I will share my opinion on what 'Enterprise Architecture' means to me and use the Balanced Scorecard as a conceptual tool for illustrating Enterprise Architecture (EA) as a means of Business-IT alignment. This approach connects the business model to the EA vision and proposes design methods/perspectives to be measured in the Business, Applications, Data, and Infrastructure areas.