Sunday 2 December 2007

Why Did Commons Cache Die?

There have been a number of attempts to create a uniform caching API in the last couple of years. None has succeeded.

A fairly well-known one is JCache, the Java caching API defined by JSR 107. JSR 107 has been “in progress” for a few years now…

Another failed initiative was the attempt to create a “common” Java cache API: the Commons Cache project was declared end-of-life after failing to ramp up.

The failure to create a standard cache API would not have struck me as odd, and I wouldn't have spent my free CPU cycles on it, if not for the cache selection process in banks, for instance. It strikes me that maybe there is a need for such an API. Banks looking to select a data-grid solution usually go through a very similar process. More than that, they each design, implement, and run their own “very specific” test harness. At the end of the day, those harnesses all look alike and exercise the exact same methods and semantics.
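To make the question concrete, here is a minimal sketch of what such a uniform cache API might look like in Java. The interface name and methods are purely illustrative assumptions on my part, not taken from JSR 107 or any vendor product; the point is that a bank's test harness could be written once against an interface like this and run unchanged against each candidate implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical uniform cache API -- the names below are illustrative,
// not from JSR 107 or any vendor product.
interface Cache<K, V> {
    V get(K key);                        // returns null if absent
    void put(K key, V value);
    V remove(K key);                     // returns the removed value, or null
    boolean containsKey(K key);
    int size();
}

// A trivial in-memory implementation: just enough to show that a
// vendor-neutral test harness could target the interface alone.
class InMemoryCache<K, V> implements Cache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();

    public V get(K key)               { return store.get(key); }
    public void put(K key, V value)   { store.put(key, value); }
    public V remove(K key)            { return store.remove(key); }
    public boolean containsKey(K key) { return store.containsKey(key); }
    public int size()                 { return store.size(); }
}

public class CacheSketch {
    public static void main(String[] args) {
        Cache<String, String> cache = new InMemoryCache<>();
        cache.put("trade-42", "EUR/USD");
        System.out.println(cache.get("trade-42"));  // prints EUR/USD
        System.out.println(cache.size());           // prints 1
    }
}
```

Each vendor would supply its own implementation of the interface; the harness and its semantics would stay fixed across the evaluation.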

Would a uniform cache API help these banks execute their exercise and ease their selection process?

Moreover, is such an API needed in our industry?

I have my views and thoughts, but I'm very interested in hearing yours before posting mine. Feel free to leave a comment or email me at guy dot sayar at gmail dot com.



Anonymous said...

A standard Java API is fine when you need to use it in a single language environment (Java). Most large organizations develop and deploy in heterogeneous environments (.Net, C++, Java, Perl ...) and use the cache from all of them. Hence a basic requirement will be an API that is interoperable across all these environments. For this reason, the JCP is the wrong organization to standardize a caching API.

Anonymous said...

Although I agree on the need to create a unified "caching API", I think there is a reason why the projects you've mentioned have failed – it is simply not practical to unify this across the variety of applications out there. In addition, I assume you will agree that caching is only part of the end-to-end solution, and something the banks you've mentioned can really benefit from is a common API that encompasses both the data handling (what you call "cache") and the execution model.

Anonymous said...

Guy this is an interesting question.

My personal view is that it is part of a maturity cycle, i.e. if you look at the existing caching products out there, you will see that even though they all provide similar functionality, their implementations and APIs are very different. For example, we (GigaSpaces) are based on the JavaSpaces API and provide API facades around it to support Map/JDBC and POJOs; Coherence is based on Map, as is GemStone; Terracotta has its own *no API* model using JVM bytecode enhancement. As Kamran mentioned, there is also the language factor (.Net/C++), and even in that regard all the solutions differ quite significantly.

For a standard to evolve, certain enabling factors need to be in place:

1. Convergence of functionality (and API).
2. Clear definition of the core set of requirements.
3. Market demand (mainly from users).

A simple test of where we stand on all those indicators will show that there has been good progress in the past two years, but we're not there yet.

Note that this is a complex area with lots of tradeoffs, and that is why the maturity cycle is relatively long. It took a long time for the SQL and JDBC standards to emerge; similarly, the cycle through which the EJB and later the JPA specs emerged shows that standardizing data-management semantics is quite complex, and as in those previous cases, we sometimes have to go through the failed attempts to figure out the *right* way of doing things.

I expect that we will see a second iteration of the standardization effort (not necessarily as a JSR) around mid-2008.

As it relates to GigaSpaces: we are ready to help and support that effort as much as needed, and we have even started discussions in that area. I'll share my view on how this process should be done, based on past experience, some other time.

Random Thoughts said...

In a nutshell, I think the answer is "yes", it would benefit organisations, but I have my doubts whether this will ever happen.

As Nati points out, many vendors currently have different implementations at the API level, and there would need to be some real co-operation to come up with a unified standard that all vendors support.

There are normally two ways this type of standard arises - either co-operation or domination - and currently neither exists.

Many organisations spend a lot of cycles implementing a cache interface themselves, which we as vendors end up binding to. I always have my doubts about this approach because latency is often key for the organisation, and such an abstraction does not guarantee the best latency figures from each product vendor.
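As a sketch of the binding described above (all names here are hypothetical, not any real organisation's or vendor's API): the organisation defines its own cache interface, and the vendor supplies an adapter from its Map-based client to that interface. Every call goes through an extra layer of indirection, which is where the latency concern comes from:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-house cache interface, as organisations often define one.
interface OrgCache {
    Object fetch(String key);
    void store(String key, Object value);
}

// A vendor binding: adapting a Map-based product API (a plain Map here
// stands in for the vendor's client object) to the in-house interface.
// Each fetch/store delegates through one extra call layer.
class VendorMapBinding implements OrgCache {
    private final Map<String, Object> vendorMap;

    VendorMapBinding(Map<String, Object> vendorMap) {
        this.vendorMap = vendorMap;
    }

    public Object fetch(String key)             { return vendorMap.get(key); }
    public void store(String key, Object value) { vendorMap.put(key, value); }
}
```

The indirection is cheap in this toy form, but an in-house interface can also hide vendor-specific fast paths (batching, locality hints), which is why the abstraction may cost latency in practice.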

Perhaps the best way for this to happen would be for the end-user organisations to get together and define a unified cache interface that they would all adhere to; vendors would then have to support it by default, and it might become a de facto standard.

Anonymous said...

I agree with Kamran that this standard must be valid across a variety of languages.
It will be a standard only if the organization that manages it defines it in an abstract way (as OASIS defines SOA).
So the first step is to define the standard, then create the development kits, and not the other way around.
As I see it, most standards are created by market demand, responded to first by the small vendors, which are then followed by the large vendors that are able to invest in proper documentation, integration, and marketing efforts.
The big question is what step we are currently at...

By the way, there is some progress at Jakarta: