Wednesday, May 30, 2012

A look at Android Architecture


Google’s Android documentation (http://developer.android.com/guide/basics/what-is-android.html) has come a long way. It now includes a helpful Android architecture diagram that shows the major components of the Android OS.

Here is a summary of ten notable aspects of the architecture that every developer should be aware of:

  1. Android is built on a portable Linux platform that provides a level of hardware abstraction
  2. Android applications run as separate Linux processes, with security delegated to the underlying Linux system
  3. The native libraries are written in C/C++ and provide necessary services to the Android application layer
  4. Bionic, the standard C library, was rewritten to be license-friendly and purpose-built for small, battery-powered devices
  5. The core class libraries are derived from Apache Harmony, a modular Java runtime with class libraries
  6. Android relies on the Linux kernel for memory management, power management, and networking features
  7. The standard Java VM was replaced with the Dalvik Virtual Machine to focus strictly on mobile devices and avoid licensing issues
  8. The Dalvik VM executes files in the Dalvik Executable (.dex) format, which is optimized for a minimal memory footprint
  9. SQLite, a full-featured relational database engine, is available to all applications
  10. 3D graphics are based on the OpenGL ES 1.0 specification

Sunday, April 29, 2012

Android SDKs - Cupcake to Ice Cream Sandwich


My journey with mobile’s leading development platform, Android, started back in early 2011 with my first Android app (a weekend project) in the marketplace, ‘Tennis Grand-Slam Champions’. Ever since, I have kept tabs on the Android platform and followed its feature offerings as time permits.

In the mobile development world, release cycles are considerably shorter than in traditional enterprise application development. Android’s first stable version, ‘Cupcake’, was released in April 2009, and within a couple of years many new features have evolved through ‘Ice Cream Sandwich’ (Android SDK 4). Éclair (v2.0) was better optimized than Donut (v1.6) to take advantage of the hardware for faster processing. Froyo (v2.2) brought speed improvements with the integration of Chrome’s V8 JavaScript engine and JIT optimization, and Bluetooth and Wi-Fi enhancements made their way into the networking layer.

Gingerbread (v2.3) was a major release that introduced a considerable number of new features, including new UI themes, a redesigned keyboard, new copy-and-paste functionality, improved power management, NFC (Near Field Communication), support for VoIP/SIP calls, a new Camera application for accessing multiple cameras, and support for extra-large screens. Ice Cream Sandwich (v4) is much sleeker than Gingerbread, and Google designed a font completely from the ground up, which looks exceptional. Another major change is that a lot of the SQL handling was moved from the native layer to the Java layer. I am sure there are lots of articles on the web that give a quick comparison of the feature sets bundled in every SDK release.

iOS leads the way with quality apps, Android leads in market share, and Windows Phone has not yet caught up with the mass adoption of its leading competitors. Android and iOS each offer their own clear take on the mobile OS experience, and I personally feel that consumers and business users stand to benefit from this healthy competition. Staying tuned for what Android comes up with next...

Saturday, March 24, 2012

Java Management Extensions and Weblogic Server MBeans


What I like best about the Java Management Extensions (JMX) framework is that it allows developers of applications, services, or devices to make their products manageable in a standard way, without having to understand or invest in complex management systems.

JMX has been around as a core Java API for a long time and is the foundation for everything you can do to manage the Oracle Weblogic Server. The JMX architecture in the context of Weblogic Server, as I see it, is shown in the figure below:

Application components designed with their management interface in mind can typically be written as MBeans. WLS MBeans come in three types: domain, server, and application-level MBeans. You can instrument your applications deployed to WLS by providing one or more management beans. For example, the DomainRuntimeMBean provides a federated view of all of the running JVMs in a Weblogic administrative domain. All Weblogic Server MBeans can be organized into one of two general types, Runtime MBeans or Configuration MBeans, based on whether the MBean monitors the runtime state of a server or configures servers and JMX-manageable resources.
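As a minimal sketch of that instrumentation, the code below defines a Standard MBean and registers it in the JVM's platform MBean server for simplicity (inside WLS you would target the WLS Runtime MBean server instead); the `OrderCache` names are hypothetical, not part of any WLS API:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Management interface: by Standard MBean convention its name must be the
// implementation class name plus the "MBean" suffix.
interface OrderCacheMBean {
    int getHitCount();   // exposed as the read-only attribute "HitCount"
    void reset();        // exposed as a management operation
}

class OrderCache implements OrderCacheMBean {
    private int hits;
    public void recordHit() { hits++; }
    public int getHitCount() { return hits; }
    public void reset() { hits = 0; }
}

class Demo {
    public static void main(String[] args) throws Exception {
        // Register the MBean under a unique ObjectName.
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("com.example:type=OrderCache");
        OrderCache cache = new OrderCache();
        mbs.registerMBean(cache, name);

        cache.recordHit();
        // Read the attribute back through the MBean server, as a JMX client would.
        System.out.println(mbs.getAttribute(name, "HitCount")); // prints 1
    }
}
```

A monitoring client never touches the `OrderCache` instance directly; it sees only the attributes and operations declared on the MBean interface.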

WLS MBean Server acts as a container for MBeans. The MBean servers available in a Weblogic domain are:
  1. Runtime MBeanServer: the MBean server available from any Weblogic Server process, containing both Weblogic and user MBeans. Each server in the domain hosts an instance of this MBean server.
  2. Domain Runtime MBeanServer: provides federated access to MBeans for domain-wide services. Collocated with the WLS domain admin server process, this server also acts as a single point of access for MBeans that reside on Managed Servers. So, as long as a Managed Server is up, its MBeans can be accessed through the Domain Runtime MBeanServer.
  3. Edit MBeanServer: collocated with the WLS domain admin server process, it provides access to pending configuration MBeans and operations that control the configuration of a Weblogic Server domain. No application MBeans can be registered in this MBeanServer.
The JMX API enables remote management of resources by using JMX technology-based connectors (JMX connectors). Adaptors and connectors make all MBean server operations available to a remote management application. Fusion Applications Control (commonly known as the EM Console) is the primary administration and management interface. Developers can programmatically monitor Weblogic Server by using the JMX interface directly in a Java program, or by writing scripts using a tool such as WLST.
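To illustrate the connector mechanism with plain JDK classes, this sketch exposes the local platform MBean server over an RMI connector and then reads an attribute back through a remote-style client connection. This is a generic JMX example, not WLS-specific: connecting to an actual Weblogic MBean server would instead use WLS JNDI names and client libraries, and the port number here is arbitrary.

```java
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

class JmxRemoteDemo {
    public static void main(String[] args) throws Exception {
        // Server side: publish the platform MBean server through an RMI connector.
        LocateRegistry.createRegistry(9999);
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/demo");
        JMXConnectorServer server = JMXConnectorServerFactory
                .newJMXConnectorServer(url, null,
                        ManagementFactory.getPlatformMBeanServer());
        server.start();

        // Client side: connect and query a standard JVM MBean attribute.
        JMXConnector connector = JMXConnectorFactory.connect(url);
        MBeanServerConnection conn = connector.getMBeanServerConnection();
        Object vendor = conn.getAttribute(
                new ObjectName("java.lang:type=Runtime"), "VmVendor");
        System.out.println("VM vendor via JMX: " + vendor);

        connector.close();
        server.stop();
    }
}
```

The same `MBeanServerConnection` interface is what tools like JConsole use under the covers, which is why any correctly registered MBean shows up there automatically.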

Here is a good blog article on managing WLS using Jconsole: http://blogs.oracle.com/WebLogicServer/entry/managing_weblogic_servers_with

Please refer to the Oracle Weblogic Server documentation for the latest on WLS MBeans.


Monday, February 27, 2012

An evaluation of Content Management Software Solutions


Content management systems (CMS) have evolved considerably in recent years. With so many available, all with different features, it can easily be an overwhelming task for any business or web designer to choose the right fit for their project.

I took it upon myself to evaluate CMS solutions that would be ideal for a small organization, such as a church. From my research of web articles, blogs, and support forums, the message I get is that a fully featured CMS that is stable, has a clear roadmap for future development and feature enhancements, and comes with a commitment to support and performance is probably the best way to go. Here is the tabulated evaluation matrix of some of the major elements that were important to me: cost, feature set, and hosting options.

              Table: Content Management Software Evaluation (January, 2012)

Sunday, January 8, 2012

Oracle ESS LifeCycle


My first post on Oracle ESS briefly introduced the high-level functional aspects of the framework. Here, I will discuss the ESS request execution processing stages, a key concept to understand when working with ESS.

The lifecycle of an ESS job execution begins when the client submits a job request and ends when the server responds with the execution state. The lifecycle consists of three main phases: PreProcess, Execute, and PostProcess.

During the PreProcess phase, several actions can take place: setting request properties for the work/output/log directories, verification of the ESS request file directory, creating an application session (a lightweight session object in the Fusion Apps context), initializing database connections, applying the requested NLS settings, and loading the environment properties file for spawned execution. The PreProcessor runs prior to every job request execution, and when a request is restarted after pausing.

The Execute phase is where the actual job program runs. The program logic can be implemented in any of the supported ESS job types, i.e., Java, C, PL/SQL, Host, Perl, SQL, BIP, etc. For example, Java job logic runs in the context of the Java EE ESS application hosting the job metadata. PL/SQL program logic, on the other hand, written as procedures, is executed as an Oracle RDBMS Scheduler job procedure (owned by the ESS runtime schema) after the ESS wrapper procedure does some initialization and setup work. In general, for any job type, the post-process handler is called only after the ESS mid-tier is notified that the request executable has ended.

PostProcessing is performed only if the overall processing of the ESS request gets as far as the Execute stage and that stage completes with success or warning. Post-processing is not done if pre-processing or the executable fails with an error, or if the request is cancelled prior to post-processing. The main purpose of the PostProcessor is to carry out general cleanup tasks and other actions that reflect the final state of the request before it can be deemed fully complete. Post-processing runs after the job has completed its execution to perform defined actions, such as notification (using Oracle UMS) and storing the request log/output files in the content repository (i.e., the Oracle UCM Server).

Finally, here are some important points to remember with respect to the ESS request lifecycle:
  • If the pre-process handler returns an error status, the business logic for that request will not be executed for that execution attempt. The request will normally transition to an ERROR state, from which it may be retried if RETRIES are configured. It is important to note that the pre-processor is not guaranteed to run in the same thread as the executable.
  • If post-processing is attempted and returns an error, the overall request state is set to WARNING. Post-processing will only occur after the request has reached a terminal state.
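The three-phase flow and the state transitions described above can be condensed into a small sketch. To be clear, this is purely illustrative pseudologic in Java; the class, method, and state names are my own invention and not the actual ESS API:

```java
import java.util.function.Supplier;

// Illustrative terminal states, loosely mirroring the ESS states discussed above.
enum RequestState { SUCCEEDED, WARNING, ERROR }

class EssRequestSketch {
    // Runs the three lifecycle phases in order, applying the rules above:
    // a failed PreProcess skips the executable; PostProcess runs only after
    // a non-error execution; a PostProcess failure downgrades to WARNING.
    static RequestState run(Supplier<Boolean> preProcess,
                            Supplier<RequestState> execute,
                            Supplier<Boolean> postProcess) {
        // PreProcess: on error, business logic never runs for this attempt.
        if (!preProcess.get()) return RequestState.ERROR;

        // Execute: the actual job program (Java, PL/SQL, Host, ...).
        RequestState result = execute.get();
        if (result == RequestState.ERROR) return RequestState.ERROR;

        // PostProcess: cleanup, notification, log/output archiving.
        if (!postProcess.get()) return RequestState.WARNING;
        return result;
    }

    public static void main(String[] args) {
        System.out.println(run(() -> true,
                               () -> RequestState.SUCCEEDED,
                               () -> true)); // prints SUCCEEDED
    }
}
```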

Please refer to the Oracle ESS Developer's Guide for more details and the latest information.

Monday, December 26, 2011

MapReduce and Grid Computing


There is a lot of interest and discussion around Hadoop MapReduce success stories these days, with the likes of Amazon, Yahoo, Facebook, and Google advocating and adopting the framework for their production systems. I got curious to understand the core concept behind the MapReduce framework and what makes it so well suited for distributed processing of large data sets.

Reading some interesting articles online, I understand that the MapReduce framework at its core is a combination of two functions, map() and reduce(). The map function knows exactly where it should go to process the data, i.e., the computation happens on the distributed nodes in a completely parallel manner. The reduce function, on the other hand, operates on the sorted intermediate results of the mappers from each computing node and performs a function on that list. Both the input and the output of the map/reduce tasks are stored in a file system, for example the proprietary Google File System (GFS), the Hadoop Distributed File System (HDFS), or something else. Typically, the compute nodes (MapReduce framework) and the storage nodes (HDFS) are co-located and run on the same set of machines, based on the assumption that remote data can only be accessed with low bandwidth and high latency. This configuration allows the framework to effectively schedule tasks on the nodes where the data is already present, resulting in very high aggregate bandwidth across the cluster. So, is this an extension of the architectural approach to storage grid computing?
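The map-then-reduce idea is easy to see in miniature. The toy word count below is a single-JVM sketch using Java streams, not Hadoop code: map emits a word per input token, the grouping step plays the role of the shuffle/sort, and reduce sums the counts per key.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

class WordCountSketch {
    static Map<String, Long> wordCount(List<String> lines) {
        return lines.stream()
                // map phase: split each input record into individual words
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\s+")))
                .filter(w -> !w.isEmpty())
                // shuffle + reduce phase: group identical keys, sum their counts
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        List<String> input = Arrays.asList("the quick brown fox", "the lazy dog");
        // "the" appears twice; every other word once
        System.out.println(wordCount(input));
    }
}
```

In real MapReduce the same two steps run across many machines, with the framework handling partitioning, sorting, and moving the intermediate (word, count) pairs between mapper and reducer nodes.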

The idea of grid computing arose from the need to solve highly parallel computational problems that were beyond the processing capability of any single computer. Oracle has been offering its version of grid technology since 2000. The database grid, representing the approach taken with Oracle Database, deploys code redundantly on multiple servers (or nodes), which break up the workload based on an optimized scheme and execute tasks in parallel against common data resources. If any node fails, its work is taken up by a surviving node to ensure high availability. Simply put, the RAC database grid architecture assigns computing tasks to computing resources, and it assigns data to storage resources in a way that enables such resources to be easily added or removed, with the flexibility for tasks and data to be moved as needed.

My take-away points from a computing perspective: Both MapReduce and Oracle RAC computing environments harness the processing power of multiple interconnected computers and are promising technologies to invest in (depending on the business case) for solving data-intensive and resource-intensive computing problems. A key premise of MapReduce-style computing systems is that there is insufficient network bandwidth to move the data to the computation, so the computation must move to the data instead. The key differentiator, or limitation, I observed (at the time of blogging) is high availability. Unlike a RAC database transaction processing system, MapReduce/HDFS-style computing systems do not provide high availability, as the HDFS name node is a single point of failure.

I am looking forward to more advancement in these technologies, at an affordable cost, for addressing the growing data-intensive computing requirements of today’s business economy.

Tuesday, November 29, 2011

A peek into Sustainability Balanced Scorecard for Enterprise


“Sustainability” is a term I was familiar with, but my recent read of Paul Hawken’s “The Ecology of Commerce: A Declaration of Sustainability” helped me realize that it is a much bigger and more important concept. Two important things (or the 2Rs of sustainability) caught my attention:
  1. Wise use of economic and natural Resources 
  2. Respect for people and other living things
A Balanced Scorecard, as I understand it, is a management tool for communicating the enterprise strategy for execution. Pursuing sustainability goals may not be the top priority for most businesses, but I believe a strategy-based balanced scorecard system aligned with the principles of the sustainability ‘Triple Bottom Line’ will offer corporations a way to accomplish social and environmental goals while integrating them fully with financial performance and competitive advantage.

In an effort to understand how the ‘Sustainability’ theme can be described through each of the four perspectives of the balanced scorecard, I created a sustainability balanced scorecard as a six-step process.


In case you are wondering, the above six-step sustainability strategic-planning-through-execution process was created using MS Powerpoint. Creating a pretty balanced scorecard picture like this is easy; monitoring the impact of corporate initiatives and measuring performance on a timely and regular basis is where the challenge lies. So, are there any software tools/solutions that will help management teams discover the power of enterprise performance management (EPM) to improve transparency, insight, and decision-making?

Acquiring the right technology is key to improved enterprise planning, and spreadsheets have long been the most commonly used tool to support business intelligence and EPM processes. Oracle’s Hyperion Performance Scorecard is one solution I am aware of that provides a flexible approach to developing scorecards, supporting recognized scorecarding methodologies and industry benchmarks. You can read more about Oracle’s EPM solution here.