Sunday, January 17, 2016

AdroitLogic and the UltraESB are 6 years old!

The 19th of January 2016 marks six years since the UltraESB was first announced publicly; it also marks six years of success for AdroitLogic!

The last year was an exciting one, with our team more than doubling in size. As a side effect, we've now run out of space and are actively looking for alternate locations to move into, with one of the possibilities being to purchase our own property and build our office the way we want.

On the technical front, we developed the Enterprise Middleware Framework (EMW) last year and deployed it in production, along with IMonitor, an enhanced management console offering detailed statistics and management capabilities on an Elasticsearch back-end. Our core products now ship with Ansible-based scripts for automated installation and configuration, including the configuration of large clusters, making deployments easier for our customers. These enhancements will be made available to more customers this year, and detailed information about the EMW Framework, IMonitor and the Ansible scripts will be made publicly available.

The APIDirector and the AS2Gateway have also kept gaining traction over the past year, and we've now started investing in our Integration Platform as a Service endeavor, which is based on containers. This will be the key product introduced during the coming year, along with our next major update and a secret project, currently code-named 'X'.

We plan to start more aggressive marketing and sales activities, with enhanced websites and documentation, and to introduce a new Customer Portal that assists our customers with more valuable content and up-to-date information. It will certainly be an exciting year ahead!

Monday, October 5, 2015

Understanding Oracle Query Performance Tuning and using Indexes for optimization

Some time back we had a suspected issue with the performance of an Oracle query. Although the root cause turned out to be unrelated to the initial suspicion, we investigated the query and were able to further improve its performance based on the analysis performed.

To list the indexes already existing on a table, you can log in as SYSDBA and execute the following. From a Linux workstation, this would mean:

[user@host ~]$ sudo su - oracle
-bash-4.1$ sqlplus / as sysdba


From within SQLPlus, execute:
set pages 999
set lin 999

break on table_name skip 2

column table_name  format a25
column index_name  format a25
column column_name format a25

select
   table_name,
   index_name,
   column_name
from
   dba_ind_columns
where
   table_owner='APS_ESB'
order by
   table_name,
   column_position;



The output will list each table with its indexes and the indexed columns:

TABLE_NAME          INDEX_NAME            COLUMN_NAME
------------------- --------------------- -------------------------
...
DELIVERY_INFO       DELIVERY_INFO_PK      MSG_REF

...

By default, Oracle only creates an index to enforce the primary key (and any unique constraints) of a table; any other index has to be created explicitly.

To understand how a query is analyzed and executed by Oracle, prefix the SQL query you want to analyze with "EXPLAIN PLAN FOR". Then, after SQL*Plus outputs "Explained.", issue the query "SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());" as shown below:

SQL> EXPLAIN PLAN FOR SELECT MSG_REF FROM SCHEMA.PERF_DELIVERY_INFO WHERE ST_STAGE IN (4, 6) AND ((ST_SUBSYS0_STATE  < 2 AND (ST_SUBSYS0_UPDATE_TIME  IS NULL OR ST_SUBSYS0_UPDATE_TIME  < TO_TIMESTAMP ('19-Mar-15 10:03:10.123000', 'DD-Mon-RR HH24:MI:SS.FF'))) OR (ST_SUBSYS1_STATE  < 2 AND (ST_SUBSYS1_UPDATE_TIME  IS NULL OR ST_SUBSYS1_UPDATE_TIME  < TO_TIMESTAMP ('19-Mar-15 10:03:10.123000', 'DD-Mon-RR HH24:MI:SS.FF'))) OR (ST_SUBSYS2_STATE  < 2 AND (ST_SUBSYS2_UPDATE_TIME  IS NULL OR ST_SUBSYS2_UPDATE_TIME  < TO_TIMESTAMP ('19-Mar-15 10:03:10.123000', 'DD-Mon-RR HH24:MI:SS.FF'))) OR (ST_SUBSYS3_STATE  < 2 AND (ST_SUBSYS3_UPDATE_TIME  IS NULL OR ST_SUBSYS3_UPDATE_TIME  < TO_TIMESTAMP ('19-Mar-15 10:03:10.123000', 'DD-Mon-RR HH24:MI:SS.FF'))) OR (ST_SUBSYS4_STATE  < 2 AND (ST_SUBSYS4_UPDATE_TIME  IS NULL OR ST_SUBSYS4_UPDATE_TIME  < TO_TIMESTAMP ('19-Mar-15 10:03:10.123000', 'DD-Mon-RR HH24:MI:SS.FF'))) OR (ST_SUBSYS8_STATE   < 2 AND (ST_SUBSYS8_UPDATE_TIME   IS NULL OR ST_SUBSYS8_UPDATE_TIME   < TO_TIMESTAMP ('19-Mar-15 10:03:10.123000', 'DD-Mon-RR HH24:MI:SS.FF'))) OR (ST_SUBSYS9_STATE   < 2 AND (ST_SUBSYS9_UPDATE_TIME   IS NULL OR ST_SUBSYS9_UPDATE_TIME   < TO_TIMESTAMP ('19-Mar-15 10:03:10.123000', 'DD-Mon-RR HH24:MI:SS.FF'))) OR (ST_SUBSYS10_STATE   < 2 AND (ST_SUBSYS10_UPDATE_TIME   IS NULL OR ST_SUBSYS10_UPDATE_TIME   < TO_TIMESTAMP ('19-Mar-15 10:03:10.123000', 'DD-Mon-RR HH24:MI:SS.FF'))) OR (ST_SUBSYS11_STATE   < 2 AND (ST_SUBSYS11_UPDATE_TIME   IS NULL OR ST_SUBSYS11_UPDATE_TIME   < TO_TIMESTAMP ('19-Mar-15 10:03:10.123000', 'DD-Mon-RR HH24:MI:SS.FF'))) OR (ST_SUBSYS5_STATE < 2 AND (ST_SUBSYS5_UPDATE_TIME IS NULL OR ST_SUBSYS5_UPDATE_TIME < TO_TIMESTAMP ('19-Mar-15 10:03:10.123000', 'DD-Mon-RR HH24:MI:SS.FF'))) OR (ST_SUBSYS6_STATE < 2 AND (ST_SUBSYS6_UPDATE_TIME IS NULL OR ST_SUBSYS6_UPDATE_TIME < TO_TIMESTAMP ('19-Mar-15 10:03:10.123000', 'DD-Mon-RR HH24:MI:SS.FF'))) OR (ST_SUBSYS7_STATE  < 2 AND (ST_SUBSYS7_UPDATE_TIME  IS NULL OR ST_SUBSYS7_UPDATE_TIME  < TO_TIMESTAMP ('19-Mar-15 10:03:10.123000', 'DD-Mon-RR HH24:MI:SS.FF')))) AND ROWNUM < 101 ORDER BY LAST_RECV_TIME;

Explained.

SQL> SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1365086006

---------------------------------------------------------------------------------------------------
| Id  | Operation            | Name               | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |                    |   100 |  6400 |       |  2622   (2)| 00:00:32 |
|   1 |  SORT ORDER BY       |                    |   100 |  6400 |   968K|  2622   (2)| 00:00:32 |
|*  2 |   COUNT STOPKEY      |                    |       |       |       |            |          |
|*  3 |    TABLE ACCESS FULL | PERF_DELIVERY_INFO |  7763 |   485K|       |  2499   (2)| 00:00:30 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(ROWNUM<101)
   3 - filter(("ST_STAGE"=4 OR "ST_STAGE"=6) AND ("ST_SUBSYS0_STATE"<2 AND
          ("ST_SUBSYS0_UPDATE_TIME" IS NULL OR "ST_SUBSYS0_UPDATE_TIME"<TO_TIMESTAMP('19-Mar-15
          10:03:10.123000','DD-Mon-RR HH24:MI:SS.FF')) OR "ST_SUBSYS1_STATE"<2 AND
          ("ST_SUBSYS1_UPDATE_TIME" IS NULL OR "ST_SUBSYS1_UPDATE_TIME"<TO_TIMESTAMP('19-Mar-15
          10:03:10.123000','DD-Mon-RR HH24:MI:SS.FF')) OR "ST_SUBSYS2_STATE"<2 AND
          ("ST_SUBSYS2_UPDATE_TIME" IS NULL OR "ST_SUBSYS2_UPDATE_TIME"<TO_TIMESTAMP('19-Mar-15
          10:03:10.123000','DD-Mon-RR HH24:MI:SS.FF')) OR "ST_SUBSYS3_STATE"<2 AND
          ("ST_SUBSYS3_UPDATE_TIME" IS NULL OR "ST_SUBSYS3_UPDATE_TIME"<TO_TIMESTAMP('19-Mar-15
          10:03:10.123000','DD-Mon-RR HH24:MI:SS.FF')) OR "ST_SUBSYS4_STATE"<2 AND
          ("ST_SUBSYS4_UPDATE_TIME" IS NULL OR "ST_SUBSYS4_UPDATE_TIME"<TO_TIMESTAMP('19-Mar-15
          10:03:10.123000','DD-Mon-RR HH24:MI:SS.FF')) OR "ST_SUBSYS8_STATE"<2 AND
          ("ST_SUBSYS8_UPDATE_TIME" IS NULL OR "ST_SUBSYS8_UPDATE_TIME"<TO_TIMESTAMP('19-Mar-15
          10:03:10.123000','DD-Mon-RR HH24:MI:SS.FF')) OR "ST_SUBSYS9_STATE"<2 AND
          ("ST_SUBSYS9_UPDATE_TIME" IS NULL OR "ST_SUBSYS9_UPDATE_TIME"<TO_TIMESTAMP('19-Mar-15
          10:03:10.123000','DD-Mon-RR HH24:MI:SS.FF')) OR "ST_SUBSYS10_STATE"<2 AND
          ("ST_SUBSYS10_UPDATE_TIME" IS NULL OR "ST_SUBSYS10_UPDATE_TIME"<TO_TIMESTAMP('19-Mar-15
          10:03:10.123000','DD-Mon-RR HH24:MI:SS.FF')) OR "ST_SUBSYS11_STATE"<2 AND
          ("ST_SUBSYS11_UPDATE_TIME" IS NULL OR "ST_SUBSYS11_UPDATE_TIME"<TO_TIMESTAMP('19-Mar-15
          10:03:10.123000','DD-Mon-RR HH24:MI:SS.FF')) OR "ST_SUBSYS5_STATE"<2 AND
          ("ST_SUBSYS5_UPDATE_TIME" IS NULL OR "ST_SUBSYS5_UPDATE_TIME"<TO_TIMESTAMP('19-Mar-15
          10:03:10.123000','DD-Mon-RR HH24:MI:SS.FF')) OR "ST_SUBSYS6_STATE"<2 AND
          ("ST_SUBSYS6_UPDATE_TIME" IS NULL OR "ST_SUBSYS6_UPDATE_TIME"<TO_TIMESTAMP('19-Mar-15
          10:03:10.123000','DD-Mon-RR HH24:MI:SS.FF')) OR "ST_SUBSYS7_STATE"<2 AND
          ("ST_SUBSYS7_UPDATE_TIME" IS NULL OR "ST_SUBSYS7_UPDATE_TIME"<TO_TIMESTAMP('19-Mar-15
          10:03:10.123000','DD-Mon-RR HH24:MI:SS.FF'))))



As we can see above, a full table scan will occur if there is no index covering the columns on which rows are filtered. The remedy is to create an index over the columns of significance, as sketched below.
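Since the exact index definition depends on the query, the statement below is only a minimal sketch: the index name PERF_DELIVERY_INFO_IDX matches the plan shown next, but the column list is an assumption for illustration and should be chosen to cover the columns used by the query's filters, ORDER BY and select list.

-- Hypothetical covering index; for the index-only plan below it would need to
-- include every column the query references (the remaining ST_SUBSYS1..11 state
-- and update-time columns are left out here only to keep the sketch short)
CREATE INDEX SCHEMA.PERF_DELIVERY_INFO_IDX
   ON SCHEMA.PERF_DELIVERY_INFO
      (ST_STAGE, LAST_RECV_TIME, MSG_REF,
       ST_SUBSYS0_STATE, ST_SUBSYS0_UPDATE_TIME);

After creating an index over the columns of significance, we can analyze the performance again with "EXPLAIN PLAN FOR .." and the same query now executes with: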

--------------------------------------------------------------------------------------------
| Id  | Operation         | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                        |   100 |  6400 |    10   (0)| 00:00:01 |
|*  1 |  COUNT STOPKEY    |                        |       |       |            |          |
|*  2 |   INDEX FULL SCAN | PERF_DELIVERY_INFO_IDX |  7763 |   485K|    10   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------


As we can see, a simple index created to correctly support a query can improve its performance many times over. In the example above, the full table scan was replaced by a scan over the index alone, which is much more efficient in both CPU usage and time.

Tuesday, September 22, 2015

Ansible installation on a Red Hat Enterprise Linux (RHEL) server with RPM packages

Ansible is a great tool for DevOps automation. At AdroitLogic, we recently initiated a drive to create Ansible-based installation scripts for all of our enterprise product versions. Now an UltraESB cluster can be set up in a matter of minutes by configuring the inventory for the different components that make up the system. A post on this will come shortly!

However, today we addressed the issue of installing Ansible on a RHEL machine without Internet access, which is a requirement for one of our customers. Unfortunately, the Ansible installation instructions do not include steps on how to install the product from raw RPMs on a hardened system without Internet access.

1. Install the YUM download plugin

[root@emw01 ~]# yum install yum-plugin-downloadonly

2. Make a directory to "capture" the packages

[root@emw01 ~]# mkdir /tmp/ansible


3. Proceed with the YUM install, using the following options

[root@emw01 ~]# yum install --downloadonly --downloaddir=/tmp/ansible/ ansible

4. The above step will download all of the required libraries and components to the directory /tmp/ansible. You can now copy these files onto your target system and install each of the packages as listed below; an alternative that installs all of the RPMs in a single transaction is shown after the list. Since some of the components depend on others, you may need to tweak the order of the packages if you are installing a different version of Ansible, or on a RHEL version other than 6.6.

rpm -ivh libyaml-0.1.3-4.el6_6.x86_64.rpm
rpm -ivh PyYAML-3.10-3.1.el6.x86_64.rpm
rpm -ivh python-babel-0.9.4-5.1.el6.noarch.rpm
rpm -ivh python-crypto-2.0.1-22.el6.x86_64.rpm
rpm -ivh python-crypto2.6-2.6.1-2.el6.x86_64.rpm
rpm -ivh python-pyasn1-0.0.12a-1.el6.noarch.rpm
rpm -ivh python-paramiko-1.7.5-2.1.el6.noarch.rpm
rpm -ivh python-setuptools-0.6.10-3.el6.noarch.rpm
rpm -ivh python-simplejson-2.0.9-3.1.el6.x86_64.rpm
rpm -ivh python-jinja2-2.2.1-2.el6_5.x86_64.rpm
rpm -ivh python-httplib2-0.7.7-1.el6.noarch.rpm
rpm -ivh python-keyczar-0.71c-1.el6.noarch.rpm
rpm -ivh ansible-1.9.2-1.el6.noarch.rpm
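Alternatively, since all of the downloaded RPMs end up in a single directory, you can install them in one transaction and let rpm work out the dependency order by itself. This is only a sketch, assuming the files were copied to /tmp/ansible on the target host (the host name in the prompt is hypothetical):

[root@target ~]# cd /tmp/ansible        # 'target' is a hypothetical host name
[root@target ansible]# rpm -ivh *.rpm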

Tuesday, August 18, 2015

Connecting to local Coherence instance with WKA instead of Multicast

I had been trying to set up a test environment for a local Coherence cache implementation for some time, and the information found via Google was not that helpful in understanding why the client application (in this case the UltraESB) was not connecting to the already created Coherence cluster using WKA (well-known addresses), and was instead trying to create its own cluster with multicast.

The system properties I passed to the JVM included the following, but as I found out with my limited knowledge of Coherence, the issue was not with these.

The key lesson was that having the system properties correct was not enough for WKA to function properly!
wrapper.java.additional.13=-Dtangosol.coherence.cluster=adrt-cluster
wrapper.java.additional.14=-Dtangosol.coherence.management=all
wrapper.java.additional.15=-Dtangosol.coherence.cacheconfig=coherence/xxxx-cache-config.xml
wrapper.java.additional.16=-Dtangosol.coherence.localhost=127.0.0.1
wrapper.java.additional.17=-Dtangosol.coherence.localport=15000
wrapper.java.additional.18=-Dtangosol.coherence.wka.address=127.0.0.1
wrapper.java.additional.19=-Dtangosol.coherence.wka.port=15000
wrapper.java.additional.20=-Dtangosol.coherence.ttl=0
wrapper.java.additional.21=-Dtangosol.coherence.distributed.localstorage=false

The solution was to create a new file, "tangosol-coherence-override.xml", and place it on the classpath (make sure it is at the start, so it gets picked up before any other copies, if any). A sketch of how the classpath can be arranged for this follows the file contents below.

<?xml version='1.0'?>
<coherence  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
            xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
            xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">    

  <cluster-config>
    <unicast-listener>
      <well-known-addresses>
        <socket-address id="1">
          <address>127.0.0.1</address>
          <port>15000</port>
        </socket-address>
      </well-known-addresses>
      <address>127.0.0.1</address>
      <port>15000</port>
    </unicast-listener>
  </cluster-config>

  <logging-config>
    <severity-level system-property="tangosol.coherence.log.level">9</severity-level>
    <character-limit system-property="tangosol.coherence.log.limit">0</character-limit>
  </logging-config>
</coherence>
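Since the system properties above are passed through the Java Service Wrapper, the same wrapper configuration can also be used to place the directory holding the override file at the front of the classpath. This is only a minimal sketch; the entry numbers and paths here are hypothetical and must match your actual installation layout.

# Hypothetical wrapper.conf classpath entries; the directory containing
# tangosol-coherence-override.xml must appear before any other entry
wrapper.java.classpath.1=conf/coherence
wrapper.java.classpath.2=lib/*.jar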

If your client "connects" to the WKA cluster, you will see the following, where you can notice that it has joined an already existing cluster created by a different process:

WellKnownAddressList(Size=1,
  WKA{Address=127.0.0.1, Port=15000}
  )
MasterMemberSet(
  ThisMember=Member(Id=3, Timestamp=2015-08-18 12:24:32.153, Address=127.0.0.1:15002, MachineId=60314, Location=site:,machine:localhost,process:5634)
  OldestMember=Member(Id=1, Timestamp=2015-08-18 12:24:07.652, Address=127.0.0.1:15000, MachineId=60314, Location=site:,machine:localhost,process:5366, Role=CoherenceServer)

If it did not connect with WKA and instead created its own multicast group, you would see the following, where you can also notice that the client you just started is the only member of that new cluster:

Group{Address=224.12.1.0, Port=12100, TTL=0}
MasterMemberSet(
  ThisMember=Member(Id=1, Timestamp=2015-08-18 10:25:08.389, Address=127.0.0.1:15002, MachineId=60314, Location=site:,machine:localhost,process:4278)
  OldestMember=Member(Id=1, Timestamp=2015-08-18 10:25:08.389, Address=127.0.0.1:15002, MachineId=60314, Location=site:,machine:localhost,process:4278)

Sunday, January 18, 2015

How the UltraESB and AdroitLogic was born..

The UltraESB and AdroitLogic were born 5 years ago today! In my personal blog, I have captured some of the history behind starting this up!



The "personal" blog of Asankha Perera: How the UltraESB and AdroitLogic was born..

Tuesday, November 11, 2014

AdroitLogic Recognized in DZone’s 2014 Guide to Enterprise Integration



We are very excited to be recognized as a featured vendor in DZone’s 2014 Guide to Enterprise Integration, a premium resource focused on enterprise integration and API management trends, strategies, and tools. The guide includes topic introductions, expert opinions, best practices, and solution comparisons. 


Readers of the guide will get an overview of enterprise integration and learn about obstacles that developers are facing to create seamless integration. Topics covered by the guide include:
  • The role of message queues, middleware, and ESBs in the enterprise.
  • Decomposition patterns for breaking down monolithic architecture.
  • A model for understanding the maturity level of REST APIs.
  • A forecast of how building a large project with multiple integrations might look in the future.
DZone’s Enterprise Integration guide also offers key insights into integration and API management practices through a survey of 500+ developers and experts, allowing readers to learn trends from practitioners in the technology professional community. Additionally, the guide’s solutions directory compares different API management platforms, integration suites, ESBs, message queues, and integration frameworks to help readers wisely choose the solutions they need.


About DZone
DZone provides expert research and learning communities for developers, tech professionals, and smart people everywhere. DZone has been a trusted, global source of content for over 15 years.

Thursday, April 10, 2014

Build better software, and the world will still beat a path to your door!

Yesterday I was pleasantly surprised to read an article in the WSJ reporting that Atlassian was recently valued at $3.3B, and had sold $150M worth of its equity in a secondary sale, mostly for the benefit of some of its long-time employees.

I first got to know about Atlassian after using the JIRA issue tracker, which they had kindly made available to the Apache Software Foundation projects. It was a great product to use, and every developer I knew who had used it simply loved it and would promote it without any hesitation wherever they went.

After leaving full-time employment in late 2008, I bought my first couple of JIRA licenses to support some of the customers I was helping as a freelance consultant. Another great thing about Atlassian is that they help smaller companies, at just $10 apiece for most of these really cool products under their Starter Program. Even that $10 goes to the Room to Read charity, and the Starter licenses have already contributed more than $3M to charity!

After starting my own entrepreneurial career a few years back, I started to admire Atlassian even more. They actually bootstrapped that great company. I've read so many articles about them, and listened to the talk 'Art of the BootStrap' by the co-founders, where they share many great insights about the bootstrapping process and the rise of Atlassian. Like GitHub, they didn't go looking for funding, but instead focused on building great products that would sell themselves, through the great experiences users had and the resulting feedback and word-of-mouth referrals. When GitHub bootstrapped, they Optimized for Happiness, since they were happier building things of value than writing business plans with make-believe numbers. This also allowed them to throw away things like financial projections, hard deadlines, ineffective executives that make investors feel safe, and everything else that hindered employees from building amazing products.

Something even more interesting about Atlassian is that they do not employ any sales folks, and with the money they save, the company invests heavily in research and development. Farquhar states that "Fifteen years ago, as long as you had the best distribution you would win", and that "It didn’t matter whether Oracle was worse than SAP. These days, people are making decisions based on how good the products are" - which is really true. This is the same sentiment I read in the ReadWrite.com article "The Reasons Businesses Use Open Source Are Changing Faster Than You Realize".

Large enterprises now realize that good Open Source products have great quality behind them, even though they may cost significantly less than the competition. And the ability to have the source code and modify it means that the user can extend a product even if the vendor developing or supporting it does not want to do that for them. I'll leave you to read through the slides from "2014 - The future of Open Source", and possibly the webcast of the panel discussion with Michael Skok et al. Today, 8 out of 10 choose Open Source for Quality.

Atlassian too allows any of its licensed users to download the source code - if they are interested in it. Although I've never had to do that, I feel privileged to have this option available to me if I ever need it. I'm sure many of you would have used quite expensive commercial software from very large companies that we all know of. Although these companies spent possibly millions of dollars, that does not make their software bug free. Many years back, I was at a client to install a leading RDBMS from an unopened box containing the software the client had purchased. However, to my surprise I found that to install the database version in that unopened box, I first had to get a service pack applied :) I seriously think that having access to the source code - even for closed, proprietary software - is a great thing, since many end-users can innovate faster than some of the large companies can enhance their own products. Open source projects attract features and great ideas from the many enterprise architects who use them, and these ideas turn out to be great features to sell to future customers, who are seriously happy to see such features, as they too value them very much.

Last year, a recent Fortune #1 company with ~$450 billion in revenue selected the UltraESB, even though many commercial ESBs as well as open source alternatives, all of which they could easily afford, existed on the market. This came after an extensive analysis of our technical strengths, product stability and code quality, all of which turned up great results. Like Atlassian, we do not have a sales team either, only a really strong engineering and support team and a great product that simply finds its way around, including to the top of the Fortune list of companies. We never approached any of our current customers; instead, they all found us. It was like what Emerson said: "Build a better mousetrap, and the world will beat a path to your door".

Earlier this year, we had a similar POC where we were shortlisted against a competitor with well over a hundred million dollars in funding. Again, we were selected on technical merit, and also because we had handled the customer relationship better than our competitor's sales team could. When a potential customer talks to us, they talk directly to the people who wrote the code, installed it at many enterprises around the world, and have helped many clients before with similar, real problems. Quite obviously, no sales team can beat that - especially when the customer understands technology.

We have exciting times ahead of us now, and a great list of customers already utilizing our software in production, in addition to the recent Fortune #1.

So later this year, we will be looking forward to talking to those who believe in us, our journey so far, and our potential to take on the world. Fortunately, we will have the freedom to make a wise and informed selection, as up until now, we've certainly optimized for happiness and for passion!