JAOO 2005 blog

Impressions from the JAOO 2005 conference in Aarhus, Denmark

Location: London, United Kingdom

Saturday, May 20, 2006

Test

This is a test post

Friday, September 30, 2005

SOA workshop

Interesting workshop by Frank Schussman of Siemens on SOA and the design process involved in creating these architectures. He discussed an actual case study where they built an SOA architecture to support very high throughput. This was a great all-day session, though some of the exposition of SOA concepts was a bit RPC-like and not document-centric enough (phrases like 'remote APIs of services' had crept into the presentation on services). It was interesting to see some of the technology decisions they ended up making - like rejecting EJBs and rolling their own container technology based on the lightweight OSGi model used in embedded systems. Interestingly, they also had a completely decentralised design with no central point of co-ordination for processes etc., which meant they didn't stress their database too much.
I had a long chat with Frank about where it is better to keep asynchrony in a service design. He had a slightly purist view, in my opinion: asynchrony should be inside service implementations and built into the design. My view was that most SOA-based projects have to deal with a large number of legacy systems with inherently synchronous characteristics (especially given the way web services have been implemented in the last 3 years), and in these situations it is better to get asynchronous behaviour through middleware or an intermediate queueing system, such that the consumer of a synchronous service can still behave asynchronously without the service implementation needing to change. He thinks this is not ideal - but the real world isn't :)
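To make the middleware-based option concrete, here is a minimal sketch in plain Java of the shape I have in mind - the LegacyService interface and the callback are hypothetical stand-ins, and in a real system the queue would live in middleware (JMS, MSMQ etc.) rather than in-process:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.function.Consumer;

    // The consumer hands requests to a queue and carries on; a worker drains
    // the queue and makes the blocking call to the unchanged legacy service.
    public class AsyncFacade {
        interface LegacyService { String process(String request); } // synchronous

        private final BlockingQueue<String> requests = new LinkedBlockingQueue<>();
        private final ExecutorService worker = Executors.newSingleThreadExecutor();

        public AsyncFacade(LegacyService service, Consumer<String> onReply) {
            worker.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        String request = requests.take();         // wait for work
                        onReply.accept(service.process(request)); // blocking call happens here
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }

        // Returns immediately; the reply arrives later via the callback.
        public void submit(String request) { requests.add(request); }
    }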

Yet Another Agile Model

Sat through a talk on yet another new agile concept - 'lean software'. Mary Poppendieck talked through the history of lean manufacturing, focussing on Toyota as a case study, and attempted to draw parallels with the software engineering process. The link looks pretty tenuous to say the least, but there were some interesting insights - e.g. the equivalent of inventory in car production is any software development in progress (any system not yet in production), and the key concept becomes minimising this 'waste'. There was also a principle about taking decisions as late as possible, which seemed a bit counterintuitive as a blanket axiom - surely it depends, at the very least, on the number of potential downstream decisions resulting from a given decision. As with all these things, the biggest problem is that they are presented at these conferences with the implicit message that they are well tried and tested techniques, widely used and applied, with no real data to back that up. In any case, most of the underlying message here was basically no different from general agile techniques like Scrum and XP.
As an aside, it is phenomenal how much buzzword generation actually goes on at breakfasts and lunches during software conferences. From the would-you-believe-it department, here are two of the gems that I overheard in serious conversation:
- Service Oriented CMMI
- Process Focused Agile
!!

Speculative locking

An interesting talk from the folks at Azul Systems, who talked through optimizations in parallel processing and locking scenarios (mostly using Java, but I guess just as easily applicable elsewhere). Taking the simple example of a hashtable with data contention between threads, they demonstrated the performance benefits they could get by taking advantage of data collision detection in hardware, as well as optimizations in their JVM that allow them to (as I understood it) do the equivalent of optimistic locking with rollbacks for data collisions between threads. So they ultimately end up locking a lot less, since a lot of potentially 'synchronized' data structures don't actually suffer data collisions in practice - just as a large relational database table doesn't need to be locked if only a few rows need to be updated, which is effectively what synchronized blocks on data structures actually end up enforcing.
They have an example of how this works at http://www.jaoo.dk/articles/syncadv.jsp including results.
Very interesting, though clearly this currently needs specialised hardware and the Azul JVM to take advantage of it. I wonder if we will see similar advancements in .NET. There are already, for example, a hashtable and a synchronized hashtable implementation within .NET, but looking at the sample implementation in Rotor, it doesn't look like they are doing anything particularly non-trivial there - though of course the production implementation could well be different. Their assessment was that widespread availability of low-end hardware supporting this was at least 2 years away. All in all, it was interesting to see some fairly hard-core engineering in action at a software conference.
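As a rough software analogue of the idea (not Azul's actual mechanism, which lives in hardware and their JVM), here is a sketch of proceeding optimistically and retrying only when a collision is actually detected:

    import java.util.concurrent.atomic.AtomicLong;

    // Instead of taking a lock pessimistically, read and compute speculatively,
    // then commit with compareAndSet; a failed CAS is the detected collision,
    // and the "rollback" is simply retrying the loop. Under low contention the
    // retry almost never happens - the case the Azul talk was exploiting.
    public class OptimisticCounter {
        private final AtomicLong value = new AtomicLong();

        public long increment() {
            while (true) {
                long current = value.get();      // speculative read, no lock held
                long next = current + 1;
                if (value.compareAndSet(current, next)) {
                    return next;                 // no collision: the update commits
                }
                // collision detected: another thread got there first; retry
            }
        }
    }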

Wednesday, September 28, 2005

Interesting approach to agent-based solutions for high-performance systems

Just sat through a session on high-performance architecture by Arvindra Sehmi, Chief Architect for Microsoft EMEA. It was basically an update of an older presentation on FABRIQ that I had seen at TechEd 2004 in Amsterdam, and was mainly about how to use techniques from queueing theory to construct agent-based configurations, where nodes are wired into networks to solve a particular problem. This was positioned for problems requiring very high throughput (he showed case studies where they achieved circa 1000 msgs/sec). Key mathematical concepts underlying this make the scalability characteristics of these agent networks predictable.
While the talk was interesting, I still have a problem with the positioning of this mini framework versus using a full-fledged orchestration engine and services to knit together networks of autonomous units of processing (i.e. how is this any different from basic SOA with an orchestration engine and a queueing transport?). I did ask Arvindra after the talk but wasn't quite satisfied with the rather generic response. Anyway, the other performance-enhancing technique they used, which I found very interesting, was that they only used one-way messaging and no request-response patterns at all - relying on WS-Addressing fault targets to determine where to put a message in error. All very good - but where does compensation logic, which may be fairly complex, live then? I think from a pragmatic perspective there must be a way of using these design principles on top of generic SOA-based concepts of autonomous services and orchestration.
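For what it's worth, the one-way style is easy to sketch in isolation - all names below are hypothetical, with the faultTo address playing the role that WS-Addressing fault targets played in the talk:

    // Each message carries a fault-target address. A failing node forwards the
    // message there instead of sending a response back to the caller, so every
    // hop stays one-way - faults travel forward just like ordinary messages.
    public class OneWayNode {
        record Message(String payload, String faultTo) {}

        interface Transport { void send(String address, Message m); }

        private final Transport transport;

        public OneWayNode(Transport transport) { this.transport = transport; }

        public void handle(Message m) {
            try {
                process(m.payload());              // this node's unit of work
            } catch (Exception e) {
                transport.send(m.faultTo(), m);    // no reply channel: route to the fault target
            }
        }

        private void process(String payload) { /* node-specific work */ }
    }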

Domain Specific Languages

This year JAOO has a whole track on Domain Specific Languages, a trend that is still at an embryonic stage in the software industry but promises to catch on rapidly. The work at Intentional Software (Charles Simonyi's new gig), JetBrains' Meta Programming System and Microsoft's Software Factories is the first stage of a promising wave.
I attended a session by the Microsoft guys demonstrating the set of working DSL modeling tools that will ship with Visual Studio 2005. Very impressive stuff, and what is cool is that they have been able to create multiple views (design, deployment etc.) as projections of an abstract model of generic concepts, which allows them to keep real-time synchronisation between code and the graphical views of a model. They also showed off a Language Workbench, again available for designing new DSLs, and a sample application by a company that had created a DSL for a set of workflow-based applications for the insurance industry - impressive to see it all working. Wonder when Microsoft will start applying this panacea of software factories to make its own product ship cycles more predictable :)
Also attended a session by Martin Fowler of Thoughtworks, which was basically a summary of his article at http://www.martinfowler.com/articles/languageWorkbench.html. The session was too short to come away with any blinding insights, but the point that stuck in my mind was his belief that DSLs are primarily a means to convey configuration information in different and more user-friendly ways. I'm not sure I agree with that, as I believe the configuration-based and code-based categorisations of logic are on the way to merging - particularly clear in the area of BPM and orchestrations, which can be seen either as configuration metadata or as a programming language in its own right.
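A toy example of why I think the boundary is thin - the workflow below (all names hypothetical) could equally be expressed as an XML deployment descriptor or, as here, as an internal DSL in ordinary type-checked Java:

    import java.util.ArrayList;
    import java.util.List;

    // A fluent builder that reads like configuration but compiles like a program.
    public class Workflow {
        private final String name;
        private final List<String> steps = new ArrayList<>();

        private Workflow(String name) { this.name = name; }

        public static Workflow named(String name) { return new Workflow(name); }

        public Workflow step(String id) { steps.add(id); return this; }

        public static void main(String[] args) {
            // The same information an orchestration engine would read as metadata.
            Workflow claim = Workflow.named("insurance-claim")
                    .step("capture")
                    .step("assess")
                    .step("settle");
            System.out.println(claim.name + ": " + claim.steps);
        }
    }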

Tuesday, September 27, 2005

State of Java EE sessions

Just sat through a couple of sessions on new features in Java EE and on where the Java enterprise process is going at the moment. Much of this was detail, but two things struck me:
- It was interesting to see the cross-pollination taking place between Java and C# as languages. C# in its latest incarnation took a leaf out of Java's book and now supports generics, while Java now has support for annotations (called attributes in .NET). At last, making a Java-based web service is as simple as putting a @WebService annotation on a class - something the C# language designers got right from the start (see the sketch after this list). There are some sessions taking place on longer-term features that people foresee in next-generation languages - particularly around integrated query semantics becoming a first-class language construct. A lot of innovation in this area has come out of the Microsoft Research labs and I hope to get a better sense of it in the next few sessions. Having played around with C-Omega (the pre-beta preview of the concepts) and caught up on some of the material on the LINQ project that was presented at the PDC by Anders Hejlsberg and Don Box recently, I hope the Java community follows the lead here. While these features are more a programmer's tool and not likely to fundamentally change software architecture, they should at least reduce the impedance mismatch between data usage and storage, and make code smaller and less buggy.
- I hadn't realised how much of a threat the Java community perceives from the scripting-language stack - Ruby, Python etc. The 'Java EE/EJB is too complex' lobby is more vocal than I had expected, and the adoption of simple frameworks like Ruby on Rails to tackle lower-complexity web-based projects seems to be increasing. There are Java versions of these scripting languages and frameworks out there (Groovy and Trails). A stat that amazed me: in one test scenario, the configuration for a Spring-based Java EE application required more lines of code (I assume this really means XML - deployment descriptors etc.) than the total amount of code in an equivalent Ruby on Rails application. And it was slower too! Whilst this is just anecdotal and a one-off example, it does focus attention on where the Java community sees its priorities lying.
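Here is roughly what the annotated style referred to above looks like - a minimal JSR 181-style endpoint (the service and method names are just for illustration):

    import javax.jws.WebMethod;
    import javax.jws.WebService;

    // One annotation turns a plain Java class into a web service endpoint,
    // much as attributes have done in .NET since v1.0.
    @WebService
    public class QuoteService {

        @WebMethod
        public double quote(String symbol) {
            return 42.0; // stub implementation for illustration
        }
    }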
Bottom line: in order to make Java EE flourish, the focus of the community seems to be on ease of use. There is a clear drive towards removing the overhead of over-specific configuration and promoting new language constructs like annotations. There are also moves towards API simplification generally, as well as trying to get a POJO persistence framework standardised. I'm not sure a particular persistence framework logically belongs in an enterprise development stack, as different situations require different architectural decisions here, but we shall watch with interest to see where this goes. As stated above, my personal preference is to move towards making data access a first-class language construct, such that operations on data become as simple as manipulating variables and types without the developer needing to specify a mapping to a relational or XML store explicitly. Becoming part of the language, with the associated semantic richness, would remove the endless ping-pong of frameworks and committees that seems to bog down much progress in this space. Otherwise, ten years from now we will still be talking about the next big O/R mapping technology.

Smarting at the 'agile' buzzword deluge

Given that JAOO was where the whole agile movement was formally launched, it is only fair that a flurry of new software development terms is unleashed this week. The latest, Smart (described on the tin as "beyond agile"), was covered in the keynote by Ivar Jacobson (as in 'the' Ivar Jacobson), who described the thinking behind it. Basically, the idea is not to rely on tacit knowledge as agile processes do, because clearly that doesn't scale. So Smart processes aim to have as big a process as necessary, but to improve delivery such that only a relevant and lightweight version is delivered to the user as needed. This is done by embedding the logic in so-called "intelligent agents" that somehow make this possible. Sounds pretty good in theory, but I'm afraid that's all the 1-hour keynote allowed, so it's hard to assess how much of this is actually deliverable in practice today. Jacobson has a company (Jaczone) with a product called WayPointer which helps companies implement this - apparently it has been very well received by India-based outsourcers looking for something beyond CMMI that helps them build better software and not just 'better' processes. Definitely something to watch. Also, he mentioned that in his experience, offshore companies - specifically the ones based out of India - are the group keenest to adopt agile methods.
The main issue with these things remains the real-world readiness of the concepts as productive, workable tools, and that I found hard to gauge: whilst the content was all common sense, it was very abstract - a specific case study of where this is actually being used would have helped. The flood of new 'agile' terms promises to assume Katrina-esque proportions here, as we are shortly to be hit by yet another - 'lean' software development, on which more later...

Monday, September 26, 2005

Interesting Estimation methodology

Just sat through a very interesting talk by Jan Pries-Heje from the IT University of Copenhagen.
I liked the model they proposed - traditional use-case-points-based estimation combined with what they call the "successive method". The slightly oddly named successive method is simply this:

- Try to quantify uncertainty for every use case by making a range of estimates: most likely, optimistic and pessimistic.
- Use a beta distribution to estimate a mean and standard deviation for the individual use cases and then for the combined project.
- Focus on the use cases with the highest variance (i.e. maximum uncertainty) and drill down further.

They have a 'stopping' heuristic that serves as a guideline for when to stop drilling down.

The estimation looks pretty similar to the PERT process - I'm not sure the formulae are exactly the same, but it's pretty much PERT.

The crucial thing here is that it allows the quantification of uncertainty, which is not often done in project estimation. This is especially important for vendors who submit fixed bids based on estimates: the method spews out a range of uncertainty rather than just one number (the Estimate). That allows the bidder either to decide that the project is too risky to bid for, or to use the uncertainty (the variance) to set the risk premium on their costing and make sure they are sufficiently compensated for the risk. The key is that risk is quantified and factored into the estimate, and that is almost without qualification a good thing.
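For the curious, the arithmetic is easy to sketch. Assuming the standard PERT formulas (the talk's may differ in detail): per use case, mean = (o + 4m + p) / 6 and standard deviation = (p - o) / 6; across the project, means add and variances add. The numbers below are made up:

    // Per use case: mean = (o + 4m + p) / 6, sd = (p - o) / 6 (standard PERT).
    // Per project: sum the means, sum the variances, square-root the total.
    public class SuccessiveEstimate {
        record UseCase(String name, double optimistic, double likely, double pessimistic) {
            double mean()     { return (optimistic + 4 * likely + pessimistic) / 6; }
            double stdDev()   { return (pessimistic - optimistic) / 6; }
            double variance() { return stdDev() * stdDev(); }
        }

        public static void main(String[] args) {
            UseCase[] cases = {
                new UseCase("login",    2, 3, 5),
                new UseCase("checkout", 5, 8, 20),  // widest range: drill down here first
                new UseCase("reports",  3, 4, 6),
            };
            double mean = 0, variance = 0;
            for (UseCase u : cases) {
                mean += u.mean();
                variance += u.variance();
                System.out.printf("%-8s mean=%.1f sd=%.2f%n", u.name(), u.mean(), u.stdDev());
            }
            // The output is a range, not a single number - which is the whole point.
            System.out.printf("project  mean=%.1f sd=%.2f%n", mean, Math.sqrt(variance));
        }
    }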

COCOMO - The return of

Well, apparently this old chestnut is still alive and kicking. I decided to attend a session on estimation using COCOMO-II just as a refresher, and two things are pretty obvious:

- The somnolent qualities of Coco(a) in all its forms have a well-deserved reputation.
- As an estimation methodology for projects - this is doomed to eternal irrelevance.

There are many things not quite right about COCOMO for estimating today's projects, but the glaring reason for its inapplicability is this: these days, most IT projects that really need estimating (or, put another way, where the opportunity cost of a bad estimate is high) have to do with integration. And at no stage does COCOMO concern itself with the complexities of multi-system integration, interface incompatibility and risk, parallel paths, differing confidence levels across projects, etc. With the increasing trend towards service-oriented architecture, message-based integration and EAI projects, this is unfortunately the real-world landscape where most of us have to estimate these days. Whilst COCOMO-II is a laudable attempt to come to terms with the iterative nature of modern projects and address some of the deficiencies in the original 1981 model (which assumes a waterfall approach), the overwhelming trend towards more integration-based software construction has been missed somewhere (at least to my knowledge). I think that severely limits the effectiveness of the method.

As a means of getting a second opinion it's no worse than anything else out there, but is the overhead really worth it? It reminds me of some parallels in finance theory. The anecdote of someone beating the market by making decisions by throwing darts at a copy of the Wall Street Journal is well known - that approach has been shown to beat the best academics using complex mathematical models on historical data to predict the market. Elements of a modern software project are not very different from the dynamics of a marketplace, with conflicts of interest, zero-sum situations, and people and projects with diametrically opposite views vying for individual gain. I suspect a simple, multi-expert, assessment-based triangulation technique (i.e. ask three reasonable people who know the landscape for estimates based on firmed-up requirements, then triangulate towards a solution) will beat any COCOMO estimate. Obviously this is gut feel more than anything else.

Orchestration Patterns

Sitting through an interesting talk by Dragos Manolescu of Thoughtworks on orchestration patterns. He nails it with his observation that this really is the heart of a proper SOA, but that the industry in general is still suffering from the "SOA in 21 days" delusion, which keeps the focus predominantly on web-service-ifying existing systems as opposed to stepping back and trying to get the business-process productivity benefit that SOA done 'properly' promises, at any rate.

It was a nice little overview of the landscape of orchestration-related patterns (there is more material on orchestrationpatterns.com, so I won't rehash it here).
One interesting point was the way he positioned BizTalk against IBM WebSphere Business Integration Modeler, bringing out Microsoft's pretty amateurish entry in the BPM-tooling-for-BAs space - tooling aimed at the non-technical analyst who can quickly model and run what-if scenarios. BizTalk 2004, for all the PR around ease of use, is really meant to be used by pretty low-level techies (who can look a port binding in the face and not flinch), and whilst there are sops to BAs, they are pretty much limited to Excel-based macros that frankly I have yet to see anyone using (not unlike that other chestnut, HWS). We shall see what BTS 2006 brings!!

All in all a useful talk, and I just wish the industry would wake up and start focussing more on BPM and orchestration rather than empty platitudes about SOA, which, remarkably, still seem to have currency in the trenches.

Joys of scripting

Good session by Dave Thomas - a great overview of the range of scripting technologies and languages, or rather non-statically compiled and typed languages as I prefer to see them (not used or stressed often enough for my liking at mainstream software engineering events). They are a perfectly reasonable and very productive way to build software quickly and efficiently, unencumbered by the constraints imposed by heavyweight language and environment frameworks. There are issues around database access, security and scalability for enterprise systems with these languages that I hope to discuss with the speakers later in the conference. It's pretty much the next evolution of the wave started by Perl, and has led to the rise of a new bunch of languages that are cleaner, easier, less obscure and less inbred than Perl - which I hope will die a violent and preferably unnatural death within 5 years (hopefully in a head-on crash with XSLT). The last thing we need is an obscurantist language and culture that makes software far harder to write and maintain than it really needs to be. I like the look of Ruby and Python - they have some of the ease of use and instant "get it"-ness factor that dBase and FoxPro once had. Very interesting to see a demo of a blogging engine as a Ruby application (with Ruby on Rails and MySQL) being built from the ground up.
That said, the development and app-construction experience in Ruby as reflected in the demo left me a bit underwhelmed. Maybe it is because they picked a fairly complex app to build, and for that Ruby on Rails wasn't quite the seamless, clean and simple environment that the PR releases predicted. Whatever the reason, it was nothing particularly special, and arguably clunkier than standard frameworks like VS2005/C#/ASP.NET and some of the Java dev environments (notably BEA Workshop). This wasn't positioned so much as a demo of the language as a plug for the Ruby on Rails dev environment, which I have to say "wasn't all that".

JAOO Keynote

Keynote by Simon Phipps, Chief Open Source Officer, Sun Microsystems
One of the more disappointing keynotes to a software engineering conference that I have had the opportunity to attend. It was, allegedly, about the "Zen of Free" - a sort of pseudo-paean to open source. I went into it a sceptic, and despite some reasonable points being highlighted, the presentation and content were less visionary and persuasive than they could have been - or maybe there really isn't much of a story here anyway. Two key points stood out:
- The value in open source is not so much the community as the gatekeepers - the übermensch programmers who vet new contributions and ensure quality. Good point, and it raises the inevitable questions: where do you get this pool of developers from, and what mechanisms does the community model provide for ensuring that the quality of the gatekeepers is consistent over time? My view is that ultimately, good and innovative software gets developed by good developers - their collaboration model notwithstanding.
- Sun is putting half its money where its mouth is on open source - so a lot of rah-rah about open-sourcing Solaris. But they won't open-source Java (i.e. the Java runtime environment). Go figure.
An hour's worth of standard half-baked references to how open source and the community process are now the only way to develop good software (really!!), and gratuitous potshots at Microsoft and SCO Unix later, it ended with what is going on at Sun at the moment. All in all, a pretty flat talk.

JAOO 2005 in Aarhus, Denmark

I will post my thoughts as I attend the various sessions throughout the conference. First impressions: a much smaller, more academic and independent feel to it (compared to TechEd, JavaOne etc.).

Part of this is also about finding out whether this online-blogging-from-a-conference deal works.