
Sunday, December 6, 2009

What is the best programming language? Who cares?

Looking at some recent articles in the blogo/twitto/webo-sphere, I was amazed by the gap between the programming-language nomenklatura and real-world usage...
Today, people are learning Objective-C because of the gold rush: build an iPhone application and you will be rich. What about Scala, supposedly the best object-oriented programming language on earth? Well, who cares?
So, today, the best programming languages are the ones used on the most-used platforms.
Ruby is being implemented on every platform (even SAP is doing it ;) ). Java is being reduced (Google App Engine) or enhanced (Scala, Clojure). And what about .NET? Well, Microsoft as usual is doing its own thing on its own platform. Like Apple, except Microsoft does not have a good mobile platform yet.
It is funny for me to see people eager to develop in Objective-C, with one of the worst development environments on earth... But you know, gold can make people do strange things...
Anyway, if you want to learn a great language, look at Scala... the future of Java... or wait for Ruby to be implemented on the iPhone...

Sunday, November 22, 2009

CVS Pharmacy - Real pain, no gain

I was amazed by what I experienced in a CVS Pharmacy store near my hotel in Bloomington, Minnesota. I had terrible sinusitis and went to a CVS Pharmacy to ask for some medicine.

The first thing I found funny is that the pharmacy sells cigarettes and cigars.

I went directly to the back of the store, where the pharmacists are. Two women in white overalls were talking behind their computers and ignored me for about a minute. Then suddenly one of them came over to me (well, at least they noticed me). I explained my issue.

The woman serving me turned to the other employee: "Can we sell Sudafed with a passport?" The second woman said yes. Great!

Then the problems began. "It's not gonna work." Then came the "it's not my fault, it's a software and regulation issue." The woman was not able to enter my French passport information into her software and so could not sell me the right drug. She asked the second person for help, who told her, more or less, "don't ask me, I don't know and I don't have time." She was alone with the software...

CVS Pharmacy's software was not able to sell me a drug because I do not have an address in the USA! INCREDIBLE! The woman asked me if I could use the identity and address of somebody in the USA. Well, no: it was Saturday and I was staying at the hotel near the pharmacy.

So she decided to give up rather than find a solution. Amazing! She offered me another drug, one available over the counter (OTC) that can be sold without any passport. She said:
- "It's not exactly the same, but some people find it good too."
- Then I asked: "What is the difference between the two boxes of Sudafed (the one over the counter and the one behind the counter)?"
- "It's not the same drug" ...
- "Yes, I know, but which substance is missing, and what is the impact on what I feel today (fire in my head)?"
- "It's not the same drug" ...
- And, to close the discussion: "Hopefully you will get better tomorrow."

Do you believe that? "Hopefully you will get better tomorrow." You understand what it means: I do not care about your pain, my computer is not letting me sell you the drug, please leave the store and I hope you will not suffer too much. I was not looking for empathy, but this sentence killed me.

I will not spend time describing how long it took me to pay: my corporate credit card did not work at CVS, but worked fine at Pizza Hut (the restaurant in front of the pharmacy).

CVS Pharmacy is more a shop than a real pharmacy. They clearly benefit from a regulated environment where the notion of service does not exist, and they hide their inefficiency behind healthcare regulation. I was ready to pay 50 dollars for the right drug (and again, nothing dangerous here), but they were not able to sell it to me, and the whole experience was a real pain.

If you are a traveler without a US address or ID, do not go to a CVS Pharmacy; they will not help. The people there are not passionate about their job or about helping people; they just sell boxes, barcode scanner in hand.
I really missed the French pharmacist near my home yesterday...

Friday, November 13, 2009

SonarJ 5.0.3 Connected to Sonar

SonarJ 5.0.3 was released, along with the first version of their plugin for Sonar. The SonarJ plugin is the first Sonar plugin that lets you check the architectural and structural aspects of your project. These aspects have the biggest impact on the testability, maintainability and comprehensibility of your code.
Now you can document your code architecture and push the result into your quality tool, running on your continuous build platform. And it's free for small projects.

Doing SOAP and REST services without changing code

I do not know what to think about Netrifex from Proxisoft ...
Netrifex adds web services to existing Java applications using a point-and-click browser interface.
As stated on their website the product enables users to:

  • Create web service APIs in a fraction of the time and cost required by traditional programming methods.
  • Add SOAP and REST services to applications without modifying their code.
  • Start, stop, add, modify, and delete web services without shutting down or disrupting production applications.
  • Create and administer web services through a simple point-and-click user interface. Common use-cases require no programming.
  • Generate web service interfaces automatically for applications built from common frameworks such as Apache Struts.
  • Implement stateful web services that are aware of user sessions and other types of application state.
  • Web-service-enable applications without source code (e.g., third party products, libraries). Netrifex does not need to read or re-compile source code.

Netrifex adds web services to stand-alone Java programs and to applications running in Java EE containers. Netrifex works with Java 1.4.2 and higher. Windows, Linux, and Mac OS X are supported. Supported containers include Apache Tomcat, Oracle WebLogic Server, WebSphere, and JBoss.
Licensing is on a per-CPU basis.
The question is: why use such a tool? Any idea?

Thursday, November 12, 2009

2009 - Data Integration War in the cloud

The war is raging on Data Integration and data services. Two products are clearly changing the landscape: Informatica platform V9 and Pervasive DataCloud 2.

Informatica V9

To quote the company, Informatica 9 "uniquely delivers a comprehensive platform by combining products in six categories: enterprise data integration, data quality, B2B data exchange, application information lifecycle management, complex event processing and cloud computing data integration," and can be deployed "on-premise or in the internet cloud."

I will not repeat how many benefits a company can get from Informatica V9. It was really built with user needs in mind (we had intense discussions on current and future needs) and is, for me, clearly a step ahead of the competitors. You can read this excellent post for a deeper dive.
Disclaimer: I have been involved in Informatica V9 discussions with the company for about a year now, and my company uses Informatica.

Pervasive DataCloud 2

To quote the company "Pervasive DataCloud 2 is a secure and reliable on-demand services platform fully powered by Amazon Web Services. It's for developers who need to rapidly create Data Integration, Application Integration, Analytics and other data-intensive services."
The offer seems impressive and also provides a platform, but on demand. I was especially interested in the DataRush offer, which combines the power of cloud computing with that of data management.

Data Management for non IT people ...

Several tools are emerging to empower users on their laptops to do data manipulation without needing a back-office application. I call this new trend BI on the desktop.

Ormetis - Power to the user with no programming
The first one that really impressed me was Ormetis. Ormetis enables business users to quickly analyze, combine and transform multiple data sources to produce a coherent result without the help of IT.
Such a transformation process is called a Scenario in Ormetis and is automatically recorded while you manipulate your data. The advantages are twofold: you get a complete audit trail for free, and you can instantly recreate results by replaying the Scenario whenever you get new input data. Ormetis does not provide any charting solution (yet?). Another advantage of Ormetis is related to data and security governance: you can prove that no data value was changed in a scenario, and you can also show what changes were made in any scenario (which can also be replayed if needed).
Ormetis is positioned in a niche, but it can bring a lot of value if you consider the time non-IT people spend trying to merge files coming from text, Excel, etc.
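Ormetis' internals are not public, but the "record once, replay on new data" idea can be sketched very simply: a scenario is just an ordered list of named transformation steps, and replaying it on a fresh data set yields both the result and an audit trail. All names and data below are invented for the illustration:

```python
def record_scenario():
    """A recorded scenario: an ordered list of named transformation steps."""
    return [
        ("keep_columns", lambda rows: [{k: r[k] for k in ("name", "amount")} for r in rows]),
        ("filter_positive", lambda rows: [r for r in rows if r["amount"] > 0]),
        ("sort_by_amount", lambda rows: sorted(rows, key=lambda r: r["amount"], reverse=True)),
    ]

def replay(scenario, rows):
    """Replay every recorded step on a fresh data set, keeping an audit trail."""
    audit = []
    for name, step in scenario:
        rows = step(rows)
        audit.append((name, len(rows)))  # which step ran, and how many rows survived it
    return rows, audit

# New month, new input file: just replay the same scenario.
march = [{"name": "a", "amount": 5, "x": 1}, {"name": "b", "amount": -2, "x": 2}]
scenario = record_scenario()
result, audit = replay(scenario, march)
```

The audit list is the point: it shows, for any run, exactly which transformations were applied and in what order, which is what makes the approach attractive to audit teams.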

I also looked at QlikView personal edition and Lyza. Below, you will find a high level comparison.

Ormetis vs. QlikView
Ormetis and QlikView don't compete with each other; they actually complement each other really well.
QlikView is primarily reporting software (Business Intelligence, BI), while Ormetis is primarily data transformation software (ETL). BI tools are very good at presenting data (reports, graphs, dashboards, etc.), but you need to get the data properly structured in the first place.
In any business, a fraction of the data is very well structured (usually stored in data warehouses/data marts and managed by the IT department), while the vast majority of it (sometimes up to 90%) is floating around in Excel spreadsheets, text files, etc.
This is where Ormetis really shines: it enables business users to transform multiple sets of data (with different structures) into a single, coherent data set that they can then use immediately. All of that without the help of IT.

Ormetis versus Lyza (from Lyzasoft)

Ormetis and Lyza have similar positioning. They both promote user autonomy (from IT), a complete audit trail, on-the-fly analytical capabilities and an innovative user interface.
Still, there are some fundamental differences in philosophy as well as in technical architecture. Lyza tends to be an all-in-one solution ranging from data preparation to reporting and analysis, while Ormetis focuses only on the data transformation part (which includes analytical capabilities, in order to make decisions about how best to transform the data).
From a technical point of view, Lyza embeds a relational database (MySQL) and relies on every single piece of data being organized in a tabular way (columns and rows). While this is certainly true of some data, it is clearly not the case for the majority of it (data spread across multiple files, multidimensional data, complex spreadsheets, etc.).
Compared to Ormetis, Lyza lacks OLAP capabilities (groups), automatic detection of text/CSV file structures (Ormetis patent pending), resilience to structural changes (column order, column delimiter, etc.), complete support for Unicode (non-Latin alphabets) and regional settings (date and number formats).
Because Lyza relies on a traditional relational database (rather than doing full in-memory transformation), performance degrades rapidly (disk access rather than memory access) on a standard desktop machine. For instance, some quick benchmarks show Ormetis to be 30 to 50 times faster than Lyza for files bigger than 100,000 rows or columns.

Ormetis is indeed a good investment! Millions of rows/columns is probably the point where things start to break down in Lyza, while this is Ormetis' comfort zone. Ormetis is also recommended for audit teams, thanks to its ability to show the content of a scenario and its impact on the data.
Ormetis doesn't have reporting capabilities (besides saving in Excel or XML format and handing the job over to Excel), but BI tools can't do any good reporting without good data. This is why combining it with other dedicated tools, like QlikView, is ideal. Recently, Paul Clayton from Microsoft also posted on how to use Ormetis with Excel; look here.
Lyza is also a good tool, a more all-in-one solution, but it requires a heavier desktop install and does not provide the audit trail I was looking for.

Let me know your thoughts.

Sunday, November 8, 2009

Cloud Computing - evolution of IT to explain the benefits and challenges of cloud computing

As shown in this post from the cloud provider Kaavo, the evolution of IT can be used to explain the benefits and challenges of cloud computing, and to show why and how we got there.
You can read the full post here.

SOA - Service categorization

Dear reader(s)

I was invited to publish a guest post on SOA governance, about how you could categorize your services. Take a look at the Art of Software Reuse blog.

Thursday, November 5, 2009

New articles and publication

Here is a recap of the latest publications or articles I was involved in:
  1. IS Rating: French article here and English article here.
    The official web site is here. Check the spreadsheet and let us know if it works for your organization.
  2. The article I wrote on Cloud Computing (InfoQ): here
  3. I answered some SOA questions in a virtual panel published on InfoQ, here.
Enjoy reading them and feel free to comment.

Document, Document, Document

Documenting an architecture and checking that the code follows the defined architecture patterns is not an easy task. Presented below are two tools that helped us do it.

Documenting (and assessing) Java code architecture
I like SonarJ as a tool to help solution architects assess and validate an application's architecture. It is free if your application is not too big (up to 500 classes)!
The new release, version 5.0.2, comes with new metrics, and the Maven plugin can now generate SonarJ system files out of Maven POMs.

Documenting EAI/ESB projects

PIKE Electronic started close cooperation with TIBCO in the early 2000s. They provide a tool called makeDoc, recognized as an official analysis tool for TIBCO's BW, BE and iProcess products.
They now claim to be extending their offer towards Oracle's ESB and BEA products, and are testing makeDoc for webMethods with Software AG before launching it on the market.
Will cross-EAI/ESB analysis then be possible? Let's wait and see.

Saturday, October 31, 2009

IS Rating group launched

The IS Rating working group has officially launched. You can see the first article (in French) in Solutions and Logiciels,

or you can read it in English on InfoQ.

Tuesday, October 27, 2009

Google Wave and conversations, instead of Gmail and emails

Lots of people are asking: what is Google Wave, and why is it so important?
I think a short example is worth a thousand explanations. Take a look at how we might be planning business trips in the very near future: Google and the travel experts at Lonely Planet have teamed up to create Trippy. It's a combination of Google's Wave messaging platform, Google Maps and Lonely Planet profiles and reviews. And the code is here...

The static notion of email is being replaced by a more dynamic notion: the conversation. A conversation keeps track, over time, of every interaction realized within it. That can be an email reply, a video or document attachment, or a query to specific applications. A conversation is recorded and can be replayed. Conversations can be private or public, text-only or multimedia, and may or may not support a social interface.

In fact, within a conversation you are creating the so-called "my personal workflow for this particular task, with these particular people/applications".
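The idea above can be modeled in a few lines: a conversation is a recorded log of heterogeneous events (replies, attachments, application queries) that can be replayed in order, much like Wave's playback feature. This is only an illustrative sketch, not Wave's actual data model; the event kinds and payloads are invented:

```python
class Conversation:
    """A conversation as a recorded, replayable log of heterogeneous events."""

    def __init__(self, private=True):
        self.private = private
        self.events = []  # (kind, payload) tuples, kept in arrival order

    def add(self, kind, payload):
        """Record one interaction: a reply, an attachment, an app query..."""
        self.events.append((kind, payload))

    def replay(self):
        """Return every recorded interaction in order, as playback would."""
        return list(self.events)

# A public trip-planning conversation mixing several kinds of interaction.
trip = Conversation(private=False)
trip.add("reply", "How about Lisbon in May?")
trip.add("attachment", "hotel-options.pdf")
trip.add("app-query", "flights CDG->LIS")
```

Replaying `trip` yields the three events in the order they happened, which is exactly what makes a conversation auditable in a way a pile of emails is not.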

You can see, for example, how SAP is selling its ERP and offering "agility" with Google Wave (see https://wiki.sdn.sap.com/wiki/display/EmTech/Google%20Wave or http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/15618).

Since it is difficult to get a Google Wave invite, you can sign up for Nurphy for testing.

Or you can wait for Mozilla's new tool for managing conversations, called Raindrop. While most conversation aggregators are concerned with harnessing your river of data, Mozilla is breaking it down into manageable raindrops. Raindrop's mission is to "make it enjoyable to participate in conversations from people you care about, whether the conversations are in email, on twitter, a friend's blog or as part of a social networking site." Essentially, Raindrop cuts out the noise and pulls in the information that is actually of interest.

Conversations can be used to manage and track any interaction with your clients or users. It will be used for CRM, contact center, customer support, dynamic collaboration, governance, finance (traceability), legal, etc.

Today it is like the telephone at its beginning: if you are the only one who has it, you have nobody to call. It's the same for Google Wave. That's why its value does not seem so obvious today.

Eurocloud Membership

I asked by email to join Eurocloud... I have not received any answer yet... I wonder if this association is truly open...

A Decade of SOA: Where are we, Where are we Going?

I was interviewed, along with some other people, in a virtual panel about SOA by Jean-Jacques Dubray for InfoQ. You can see the results here.

Tuesday, October 20, 2009

Applications for Travellers

The market for traveler applications is evolving quickly. Most of them offer a free version and enhance it with real-time services.

My preferred tool is still WorldMate by Mobimate (I prefer the Gold version, which enables real-time access to flight data and alternative flights).

I did use TripIt for some time, but my main need is not sharing: it is getting all the information I need on time, and being able to react when needed.

The new kid on the block is TripCase, created by Sabre. It does not support my BlackBerry, so I cannot give an opinion on it. Anyway, it is free and can provide flight alerts, so you may want to try it.

Amadeus is also working on a Mobile Travel Wallet but the service is still in beta.

Air France-KLM also offers a social community tool for its travelers, called Bluenity. I was not really seduced by the product, since I need to travel on very different carriers.

AMAZON Cloud - AWS Autoscaling Beta

Amazon is still leading the pack of cloud providers and is now proposing an auto-scaling service in beta.

"Auto Scaling allows you to automatically scale your Amazon EC2 capacity up or down according to conditions you define. With Auto Scaling, you can ensure that the number of Amazon EC2 instances you’re using scales up seamlessly during demand spikes to maintain performance, and scales down automatically during demand lulls to minimize costs. Auto Scaling is particularly well suited for applications that experience hourly, daily, or weekly variability in usage. Auto Scaling is enabled by Amazon CloudWatch and available at no additional charge beyond Amazon CloudWatch fees."

The API can be downloaded here; the main commands are the following:

  • as-create-launch-config command to create a Launch Configuration for your Auto Scaling Group. A Launch Configuration captures the parameters necessary to launch new Amazon EC2 instances.
  • as-create-auto-scaling-group command to create an Auto Scaling Group. An Auto Scaling Group is a collection of Amazon EC2 instances to which you want to apply certain scaling conditions.
  • as-create-or-update-trigger command to define the conditions under which you want to add or remove Amazon EC2 instances within the Auto Scaling Group. You can define conditions based on any metric that Amazon CloudWatch collects. Examples of metrics on which you can set conditions include average CPU utilization, network activity or disk utilization.

Auto Scaling tracks when your conditions have been met and automatically takes the corresponding scaling action on your behalf.
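The trigger logic described above boils down to comparing a CloudWatch-style metric against two thresholds and adjusting the group size within its bounds. The sketch below is not the AWS API — just an illustration of that decision, with hypothetical threshold values:

```python
def evaluate_trigger(avg_cpu, current, min_size=1, max_size=8,
                     upper=70.0, lower=30.0, step=1):
    """Return the new instance count for a group, given its average CPU %.

    Mirrors the idea of as-create-or-update-trigger: a condition on a
    metric, bounded by the group's min/max size.
    """
    if avg_cpu > upper and current < max_size:
        return current + step   # scale out during a demand spike
    if avg_cpu < lower and current > min_size:
        return current - step   # scale in during a lull
    return current              # condition not met: no action
```

Under a spike (85% CPU on 3 instances) the group grows; in a lull (10% CPU) it shrinks; at either bound it simply stays put, which is what keeps cost and performance balanced automatically.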

Better still, Auto Scaling groups can span multiple Availability Zones. With multi-AZ Auto Scaling groups, Amazon provides a way to keep a balanced group of EC2 instances spread across multiple Availability Zones for high availability, managed as a single entity.

Saturday, October 17, 2009

The Milk and the Cloud

Spending some time looking for cloud computing suppliers, I was amazed that no European company caught my eye. Do people realize that the current giants' fight to dominate cloud computing will increase competition and innovation for the companies and countries involved?

Cloud infrastructure (i.e., datacenters) is being deployed in Europe (and Asia-Pacific) by US companies, of course, but slowly. Imagine having a product for around a year without any challenger. That's where we are, and market share will then be hard to shift. Can you imagine the impact on e-commerce, too? Not yet, but it will come fast enough to make people understand the risk of not reacting now.

The cloud is changing the rules of the game. But our students are not trained to work on the cloud. Our companies do not understand what the cloud is, or do not want to put their data in North America. You can stay social with Facebook, discuss on Twitter, stay in contact with your friends on LinkedIn and read your email with Google. For free, of course. In my economics courses, I learned that nothing is ever free: somebody has to pay. So where is the EU Commission when it comes to enabling European services like the ones I mentioned? Where are the European banks and investors? Where are the French banks and investors? Well... today, with a good project, you can get 5 to 10 million euros. Not enough.

I heard some time ago that Orange will provide cloud services, maybe, one day. Why not a joint venture? A cable operator, together with some ISPs, hardware companies and IT consulting firms, could be a quick win.

The cloud will be used by companies to reduce internal costs. Outsourcing will rise, and jobs will surely be lost in Europe. Globalization again makes cloud datacenters more interesting than internal ones — not for the cost, but for the flexibility they provide!

At least in Europe, and especially in France, we produce great milk. Milk producers are complaining about low prices and requesting subsidies. Who will complain for the cloud?

Monday, October 12, 2009

JeeWiz is now Open Source

JeeWiz implements what its author calls "pragmatic MDA". JeeWiz is a system generator that you can use on a wide range of tasks, and it is now open source.

An integrated generator for Hibernate/Spring/JSF/Trinidad systems is provided, plus starter examples for many other generation tasks.

More details on the JeeWiz approach are available, both for business and project managers and for Java developers and architects.
The technology sounds good, but they seem to need some help developing it.

Sunday, October 11, 2009

Scrum doesn’t do anything

An excellent blog post on Scrum, written by Tobias Mayer. "Scrum doesn't do anything" is the perfect definition of what Scrum is. Highly recommended.

Saturday, October 10, 2009

Simon - Java monitoring replacing Jamon?

Recently a new Java monitoring kid appeared on the block: Java Simon.
Simon claims to be the successor of JAMon. If you want to read a deep evaluation of Simon vs. JAMon, then click here.
It is still less powerful than JVM dynamic introspection tools (like CA Wily Introscope), but useful nonetheless.

Ippon Seminar on Open Source Portals

A very good article from Touilleur Express on portals, following the seminar held by IPPON on the subject (slides available on the company's blog).
I added a comment to complement his vision.
Happy reading!

Semantic ...

I do not think the semantic web pushed by W3C will ever go mainstream.

I think social links will be used as the basic semantic level. You can already use OpenSocial or other well-defined and supported APIs. This is what I call the first level of semantics (level 1).

Then will come dynamic filtering (level 2). Filtering is the flip side of search: when you search, you want to obtain a maximum number of results, some highly pertinent, some not at all.
Filtering is used to find only the pertinent links or documents. No more, no less.
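A toy example of that distinction: where search would rank everything by relevance, a filter keeps only the documents that pass every predicate — no more, no less. The documents, tags and predicates below are all made up for the illustration:

```python
docs = [
    {"title": "SOA governance", "tags": {"soa", "governance"}, "year": 2009},
    {"title": "Cloud pricing",  "tags": {"cloud"},             "year": 2008},
    {"title": "SOA and cloud",  "tags": {"soa", "cloud"},      "year": 2009},
]

def filter_docs(docs, predicates):
    """Keep only the documents that satisfy every predicate."""
    return [d for d in docs if all(p(d) for p in predicates)]

# Only SOA-tagged documents from 2009 are pertinent here.
pertinent = filter_docs(docs, [
    lambda d: "soa" in d["tags"],
    lambda d: d["year"] == 2009,
])
```

Two of the three documents survive; the cloud-pricing one is simply never shown, which is the point of filtering over searching.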

Dynamic filtering and categorization should drastically improve the way we do requirements management and business analysis. They will also be used to define a more "semantically defined" enterprise architecture.

Level 3 will come with machines able to think... not for today, even if some expert systems are now very powerful.

What will be very important is to see how people and services create value, i.e. money, from a huge mass of "free" data. Today, Facebook and Twitter (level 1, since I create my social network myself) and Google's notion of Wave (level 2, more filtering and dynamic categorization) are leading the way. New services recently released on the iPhone showing augmented-reality capabilities are also going in that direction.

Another important aspect of these services is that they are "global", meaning they can cross all boundaries (real, virtual, 2D, 3D). So a Google Wave can be connected to an ERP (see the SAP Web 2.0 demo), and a real person can be connected to an avatar and teleport to the right virtual world (while still sitting in a meeting room for a virtual meeting!). And of course, money and ways to pay for these services will be available everywhere, whatever the device used (mobile phone, PC/Mac, netbook, virtual world, real world).

Now, think about these technologies being available to all of us in our day-to-day lives. Imagine how the enterprise you work in could integrate these new services and opportunities: no need for a real factory or warehouse, access to free data, and nearly free computing power (cloud computing).

The future will be interesting...

Friday, October 9, 2009

Revolution: Open ERP in SaaS self service mode

Open ERP is one of the most appreciated open-source management software packages. Open ERP has released its new service offer: Odoo, the on-demand ERP solution. This offer is a revolution in the ERP market for small and medium enterprises.

With Odoo, you can get ready-to-use, complete enterprise management software in a few clicks. The subscription to Odoo is free; you pay at the end of the month, and only if you are satisfied. With Odoo, you pay only for what you really use, at €0.60 per hour, and the first 60 hours of use per month are free.
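On my reading of the offer (the first 60 hours each month free, then €0.60 per extra hour), the monthly bill works out like this:

```python
def monthly_cost(hours_used, free_hours=60, rate_eur=0.60):
    """Cost in euros for one month of use under the pay-per-use scheme."""
    billable = max(0, hours_used - free_hours)  # only hours beyond the free quota are billed
    return round(billable * rate_eur, 2)
```

So a light user staying under 60 hours pays nothing, while someone using the system 100 hours in a month would pay 40 × €0.60 = €24 — still remarkable for a complete ERP.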

Get more information on the website: http://www.odoo.com.

AJAX application optimization tool

Doloto is an AJAX application optimization tool, especially useful for large and complex Web 2.0 applications that contain a lot of code, such as Bing Maps, Hotmail, etc. Doloto analyzes AJAX application workloads and automatically performs code splitting of existing large Web 2.0 applications. After being processed by Doloto, an application will initially transfer only the portion of code necessary for application initialization.

More info here.

Thursday, October 8, 2009

SOA Management - Service Monitoring

Looking for low-cost tools to do SOA monitoring, I did not find many possible solutions.

These are the ones I found:
Do you know others?

Enterprise Sign On Engine (ESOE)

The Enterprise Sign On Engine (ESOE) allows an enterprise to achieve integrated identity management, single sign on, authorization, federation and accountability for resource access across multiple platforms and technology stacks. ESOE was built at the Queensland University of Technology and open sourced to foster continued development in the community.

ESOE is built with open standards from OASIS, such as SAML 2.0 and XACML 2.0, to provide implementers with the greatest possible flexibility.

Do you have any experience using it?

TOGAF and ABRD on EPF ...

Based on the Eclipse Process Framework (EPF), an open source project managed by the Eclipse Foundation, you can now use:
  • the TOGAF Customizer, which contains all the content of TOGAF 9 in a structured and editable form, including guidelines, concepts, and checklists, as well as detailed work breakdown structures for the framework's new and improved Architecture Development Method (ADM);
  • ILOG ABRD, which provides a well-documented and structured approach for developing rule-based applications. ABRD allows organizations to avoid ad-hoc processes and the significant time and effort of creating their own best practices.

Sunday, October 4, 2009

Performance Testing from the cloud

If you want to simulate a really heavy load on your web site, you might wonder: why not use the cloud? You now have several options, so use them.

Open Source

Pay to play

I would not be surprised to see some of the other players in the performance-testing space start offering similar services. As usual, load-test pricing is not easy to forecast...

Source: PerformanceEngineer.com

Saturday, September 26, 2009

Fourth SOA Forum organised by CIO/Le Monde Informatique

I'm invited to participate in a panel at the 4th SOA Forum, held in Paris on October 6.
"IT Agility, through SOA and Cloud" — good subject, isn't it?
Organized by CIO / Le Monde Informatique. If you are interested, you can register online and see the program here.

Friday, September 25, 2009

Emo Labs - speakers are dead?

Emo Labs' "invisible" speaker systems, made out of clear, thin sheets of plastic, can be overlaid on TV screens, where they vibrate to produce sound. "Unbelievably, the sound is actually sharp, crisp, and clear."

Tuesday, September 22, 2009

Some AWS services are now available in Europe

Some AWS services are now available in Europe:
  • Amazon SimpleDB - Highly available and scalable, low/no administration structured data storage.
  • Amazon CloudWatch - Monitoring for the AWS cloud, starting with providing resource consumption (CPU utilization, network traffic, and disk I/O) for EC2 instances.
  • Elastic Load Balancing - Traffic distribution across multiple EC2 instances.
  • Auto Scaling - Automated scaling of EC2 instances based on rules that you define.
For more information, check here.

Monday, September 14, 2009

JBehave Web 2.0

JBehave is a Java-based framework designed to encourage collaboration between Developers, QAs, BAs, business stakeholders and any other team members through automated but human-readable scenarios.

The main features of JBehave Web 2.0 include:

  • a Web Runner webapp that allows any scenario to be run via a simple web interface. More info
  • a web view of the Stepdocs generated for the Steps used in scenarios
  • Selenium support to help run web-based scenarios. More info.
JBehave Web 2.0 is available for download here.
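As an illustration of what "human-readable scenarios" means, here is a minimal scenario in JBehave's Given/When/Then style; the trading domain and step wording are invented for the example, not taken from JBehave's own samples:

```
Scenario: Alert is raised when a stock trades above the threshold

Given a stock with symbol STK1 and a threshold of 15.0
When the stock is traded at 16.0
Then the alert status should be ON
```

A business analyst can read and write this text directly, while developers bind each line to an automated step behind the scenes — that is the collaboration JBehave is after.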

OpenPTK - User Provisioning Toolkit API

I just discovered the OpenPTK project (Open Provisioning ToolKit).
It is an open-source user provisioning toolkit exposing APIs, web services, HTML taglibs and JSR-168 portlets, with user self-service and administration examples.
The architecture supports several pluggable back-end services, including Sun's Identity Manager, Sun's Access Manager and LDAPv3.
Thank you, Sun.

Gartner's Top 10 EA Pitfalls

Gartner identified 10 EA pitfalls here.
  1. The Wrong Lead Architect
  2. Insufficient Stakeholder Understanding and Support
  3. Not Engaging the Business People
  4. Doing Only Technical Domain-Level Architecture
  5. Doing Current-State EA First
  6. The EA Group Does Most of the Architecting
  7. Not Measuring and Not Communicating the Impact
  8. Architecting the ‘Boxes’ Only
  9. Not Establishing Effective EA Governance Early
  10. Not Spending Enough Time on Communications
I would add the following:
  • Not providing different deliverables for different stakeholders
  • Not documenting decisions and the rationale behind them
  • Focusing on designing/building an EA framework and implementing it in a tool
  • Attaching the EA team to the CIO (IT only), instead of to the CEO and/or the audit team
  • Not being part of Application Portfolio Management
  • Not providing technical standards, negotiated with procurement, that projects can afford
  • Not integrating the infrastructure, procurement and application teams into the EA team

Friday, September 4, 2009

How to combine the existing EA frameworks in a single standard framework

I'm commenting on a post by a great EA blogger named Adrian Grigoriu concerning "How to combine the existing EA frameworks in a single standard framework?". The discussion is hosted in a LinkedIn forum here.

Thursday, August 20, 2009

Java VisualVM Blogging Contest Results

The Java VisualVM Blogging Contest was designed by Sun to encourage developers to share their experience with VisualVM, available in two distributions: VisualVM available at visualvm.dev.java.net and Java VisualVM available as a JDK tool in Sun JDK distributions starting from JDK 6 update 7. Results were published recently and are shown below.

First Place Winners:

Jan Smuk — VisualVM - tool for profiling Java applications
Kiev Gama — VisualVM OSGi plugin
Rejeev Divakaran — Analyzing Memory Leak in Java Applications using VisualVM

Second Place Winners:

Di Jiang — VisualVM, Sliver Bullet for Troubleshooting
Dominic Mitchell — Heap Dump Analysis
Dustin Marx — Thread Analysis with VisualVM
Jeff Foster — JVisualVM and Clojure
Kristian Rink — jvisualvm: analyzing NetBeans and beyond…
Matthew Passell — VisualVM and Cutting Method Calls by Over 1000x
Pavan Kumar Srinivasan — Visual VM …Saved My Day
Mohammed Sanaulla — Monitoring and Profiling using VisualVM-1
Robert Baumgartner — How to use JConsole, JVisualVM or VisualVM with Oracle Application Server
Sebastian Pietrowski — VisualVm performance tuning tool

Third Place Winners:

Jonathan Demers — Solve java.lang.OutOfMemoryError: Java heap space
Leelabai P — VisualVM, Java's own monitoring, profiling and performance analysis tool
Sotohiro Terashima — Stop Jetty Server using "Java VisualVM" and "TASKKILL"
Sridhar Kasturi — Explore VisualVM

Tuesday, August 18, 2009

ETL on Demand - Informatica PowerCenter Cloud Edition

Informatica PowerCenter Cloud Edition is available on Amazon's Elastic Cloud Compute (EC2) for $24.95 per hour. The company said the hourly rate is aimed at companies with only a few applications deployed in EC2 and which need only occasional data integration services.
By hosting data integration software on EC2, customers can take advantage of Amazon's vast infrastructure, according to Chris Boorman, Informatica's head of marketing. When integrating data from one EC2-based application to another, he said, it only makes sense to put the data integration software on EC2 as well.
This is great news ...

Monday, August 17, 2009

Stribe - Adding cloud collaboration tool

Dear all

Great news. I was selected to be a beta tester of Stribe ... As you should see at the bottom of the blog page, a new ribbon is available. You can create an account and better interact with me and the readers of this blog.
I hope this will create more and more value around this blog.
Try it ... Adopt it ... Use it ...

Tibco BusinessWork ActiveMatrix for dummies ...

I had a hard time understanding the Tibco BusinessWorks ActiveMatrix (Tibco BW AMX for short) proposal. Thanks to Thomas Been from Tibco France, things are a little clearer now.

Which Tibco BW do you want?

With version 5.7 of the tool you now have the choice between two options:
  • AMX foundation based = Tibco BW AMX = BusinessWorks services are deployed in Tibco AMX foundation. The administration tool is then Tibco AMX Administrator.
  • TRA based = Tibco BW Standalone = BusinessWorks services are deployed in TRA. The administration tool is then Tibco Administrator.

Governance Cockpit = TIBCO AMX Administrator

TIBCO AMX Administrator provides configuration, deployment and monitoring services. This creates a single cockpit (common environment) for administering the SOA platform, the runtime components and the runtime governance (UDDI directory, policy management, SLA management).

TIBCO AMX Administrator hosts and offers access to AMX governance services like:
  • AMX Runtime UDDI Server
  • AMX Policy Manager
  • AMX Service Performance Manager
But it also enriches the offer with:
  • Different views for managing the service lifecycle. The same service deployed on several nodes will have all its metrics aggregated and the different configurations will be easily available.
  • Shared Resources will be declared and managed in one place.
  • Logging will be common to all AMX services. The common logging framework then offers unified logging for a service deployed on several nodes (if the grid is activated)
TIBCO AMX Administrator was designed to be flexible and is a kind of service container. More services will be added in the future for different types of stakeholders (not only administrators).

SLA Management = Service Performance Manager (SPM)

AMX Service Performance Manager is a tool to manage SLAs. With it you can:
  • Define SLAs and rules around them (violation rules, and also service usage conditions)
  • Define actions to be executed when rules are triggered (re-deploy a service on another node, etc.).
SPM works with both Tibco BW AMX and Tibco BW Standalone, and can deploy services in either Tibco BW AMX (on different nodes) or Tibco BW Standalone (launching another instance of the TRA).
Deploying BW services in AMX Foundation is included for free in any BW client license. But this does not include SPM, which requires new license(s).
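To make the SLA idea concrete, here is a minimal sketch of a rule engine in the SPM spirit (the names, the metric/threshold/action shape and the redeploy message are my assumptions, not TIBCO's API):

```python
# Sketch: an SLA rule pairs a metric threshold with an action fired on violation.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SlaRule:
    metric: str                            # e.g. an average response time, in ms
    threshold: float                       # violation when the sample exceeds this
    action: Callable[[str, float], None]   # what to do on violation

def evaluate(rules: List[SlaRule], samples: Dict[str, float]) -> List[str]:
    """Fire the action of every rule whose metric exceeds its threshold."""
    violated = []
    for rule in rules:
        value = samples.get(rule.metric)
        if value is not None and value > rule.threshold:
            rule.action(rule.metric, value)
            violated.append(rule.metric)
    return violated

alerts = []
rules = [SlaRule("order.responseTime", 500.0,
                 lambda m, v: alerts.append(f"redeploy: {m}={v}ms"))]
print(evaluate(rules, {"order.responseTime": 730.0}))  # ['order.responseTime']
```

In SPM the actions are operational (redeploying on another node, launching another TRA instance); the point is only that SLA management is rules plus triggered actions, not dashboards alone.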

How to move from Tibco BW Standalone to Tibco BW AMX?

Two options:
  • Archive level. Simply re-deploy, without changing the archive, in AMX foundation. The Tibco engineering team took care of compatibility. In that case, TIBCO AMX Administrator provides the same archive information as TIBCO Administrator.
  • SCA migration. You have to open the archive in Tibco BusinessStudio and make some modifications to create a valid SCA diagram. Then you can benefit from cross-cutting AMX functionality such as:

    • Shared Resources usage
    • Total Visibility in TIBCO AMX Administrator, especially in the service view
    • Hosted services Invocation on the Grid (if grid is used)
    • Common logging framework usage
    • Direct integration with SPM and PM
    • Note that the archive cannot contain processes that are "not services" (files, etc.).

Recommended Upgrade Process

1/ Migrate the infrastructure first and deploy previously built Tibco service archives on it.
2/ Train your teams on SCA and re-package your Tibco projects as SCA components.
3/ Think internal grid, or go to the cloud with Tibco Silver.

Monday, July 20, 2009

End User Performance Mgt - More client Tool

It seems that everybody wants to release their own browser plug-in or tool for end-user client performance management.

My Space MSFast

MSFast is a browser plug-in that helps developers improve their code performance by capturing and measuring possible bottlenecks on their web pages. MSFast currently supports only Internet Explorer 6 and up.
  • Measure the CPU hit and memory footprint of your pages as they render on the client’s browser
  • Review screen shots of the page while it renders
  • Review the rendered HTML on each point of the page’s lifecycle
  • Measure and show estimates of the time it takes to render each section of the page in different connection speeds
  • Validate the content of your page against a set of proven “best practice” rules of web development
  • Review downloaded files and show download time estimation on different bandwidths

Sunday, July 19, 2009

To ESB or not ESB

Reading this post from the creator of Mule, I could not agree more. In my team we had so many discussions about what an ESB is and whether our current Tibco BusinessWorks platform is an ESB or not.
The main issue I face is always the same. I had to face it when doing integration with SAP and having to fight with the SAP team to make them use Tibco BusinessWorks and the SAP adapter instead of SAP's built-in point-to-point solution (that was before NetWeaver).

How can you justify an ESB to a team that has managed its own world for years? They were used to being a master data source, and they knew that all messages would have to go to them first before being published to other systems. So why do you need an ESB? Just send them the XML they want (they are the king): no need to route the message. Direct connection. Point to point.

You can try every sound explanation; in the end, it is always: my solution works, it is not expensive (people are used to it), and I do not understand why you want to add an extra step. This sometimes leads to fights between the ERP competency center or application development team and the integration competency center.

Again, using an ESB is in some cases an obvious choice; in others, it is more a governance and political issue than a technology choice. Big silos created their technology fortresses with only one door: use it or die. The ESB is the white knight trying to tear down all the fortresses to build an (infor)nation.

IT can learn from history ;)

I liked this post - Lucky 13

While doing your software architect job, you should follow 13 rules called Lucky 13. Here they are, copied verbatim from an excellent post here:
  1. Be Lazy: Do not reinvent wheel, also what ever I create, it should be reusable within time and resource constraints -- From Object Oriented Principles
  2. 6 Wives and 2 Husbands Principle: 6 Wives – What, When, Why, Who, Where and To Whom. 2 Husbands – How and How many/much -- From 6 Sigma and Lean
  3. One plus One is Eleven: When two heads work together their synergetic output is more than arithmetic summation -- Extreme Programming
  4. Democracy is good but Veto system is required: In case of dispute there must be a authority to take decision -- Political Science
  5. One is not enough: If there is only one way of achieving goal/target, more grey matter is required -- War Theory
  6. Nothing is future proof: No one can predict future only guess. Today’s systems is tomorrow’s legacy -- Experience
  7. Organization hierarchy governs visibility: As persons move in Organization/Project hierarchy has more visibility of overall picture -- Organizational Theory
  8. Learn Daily: The day you do not learn some thing, deduct that that day from your experience in resume -- Experience
  9. Business has Money and veto power: Architecture might be superb but if there is no money and business requirement then it is not a workable solution -- Experience
  10. Process’ absence as well as presence has its own burden: No or little process invites chaos while excessive processes brings red tape -- Process and Control Theory
  11. Time and will are pre-requisites: To active a target with given constrains Time and will power are pre-requisites apart from resources -- Time Management and Psychology
  12. Perfection is an illusion: For worldly challenges good enough solutions are sufficient -- Philosophy
  13. Be an architect not consultant: Consultant is like Seagull. He flies high, zero on some thing good, take that good thing, create some disturbance, leave shit behind and fly way -- Experience

Saturday, July 11, 2009

How to be a Good Enterprise Architect?

I would like to share some lessons learned concerning the job of Enterprise Architect.
  1. Create the team charter. Describe what your team's role is and what your main objectives are. This charter will never be read by anybody, like this blog, but you need to do it.
  2. Communicate. You must begin by creating your own communication channels; for me, at least an intranet web site, a wiki and an EA internal newsletter. In general, since you will not have any budget for it, it is a good way to learn HTML and web site design best practices.
  3. Create enterprise architecture principles and technical standards. This will give everybody a clear framework to work with. All technical standards should be defined, if possible, with Procurement and Legal, in order to facilitate the work of project teams. Also create an exception process, and document all rationale for exceptions. This will of course create an army of enemies: developers (you never select the tools they want), project managers (with your standards, their projects take more time and cost more) and also suppliers (lobbying projects and VPs to use the exception process).
  4. Document everything. Every technical decision should be documented (use a wiki and make it short), every meeting should have minutes, and all technical documents should be produced (use an ad hoc EA tool or standard EA document templates). For example, for each project you should provide: Service Level Agreement, non-functional requirements, architecture description, security architecture and an impact analysis of your project on the IT landscape. In order to avoid any issue, make sure that documentation is a required task in ANY development process used. In general, you will get millions of documents in different formats (Microsoft Visio, Excel, PowerPoint with circles and squares) and you will get the famous sentences: "I'm so busy I do not have time to document" or "Between documentation and features, I prefer features".
  5. Define quality rules and put in place the tools to assess them. This is true for architecture, code, integration, data, etc. You need to be able to build a dashboard for each application or key business process. Quality is not part of the performance evaluation of a project manager, so be ready to lose battles. More than that, bad quality code keeps technical debt high, leading to ever more maintenance, cost and resources.
  6. Use two EA frameworks. Use TOGAF 9 to run your EA shop and create an adapted EA framework for making the company EA (with a clear modeling guide). Create your framework independently from the tools; be portable. Begin small, with all the data you can get and not all the data you want to get (having to see empty sets of information is depressing). If you're working in North America, you may have to use a specific government or defense EA framework. Using your own framework, adapted to your company's maturity, EA KPIs and concerns, you will be considered an alien, excluded from EA groups, and you will never be able to get EA certifications.
  7. Use a unique EA tool within the company. This is the most important thing to do to ease your life: use a unique, centralized EA tool. Implement your frameworks within this tool. You will then spend thousands of euros configuring, using and deploying a tool that nobody will use (except yourself). The only thing people will care about is the web site export of your EA tool content, and the colors and icons used will lead to great debates.
  8. Ask good questions, make good recommendations. Follow your line, do your job sincerely, stay polite, ask the right questions, follow the standards and make the right recommendations. Then, if decisions are not going in what you think is the right direction, document the possible issues and the risks. Be ready to be treated like a traitor, to be excluded from discussions, to be invited to meetings where everything is already decided and where people just want your sign-off, to get business-provided solutions instead of questions. In crisis mode, Keep It Simple Stupid (KISS) is the law and nobody wants you to look bright. Several years later, you will have to cope with the errors made anyway, since people making bad decisions are very keen on leaving the company.
  9. Build a team and mentor your troops. The lead architect should build a team (direct reports or not) and mentor as much as possible. You do not need to be better than all your architects in all subjects. You just need to trust your team and ensure proper discussion of all technical subjects. Challenging each other's decisions is also a good way to ensure that you take into account all possible views on a single problem. This is a long and painful task; be ready to change your mind when you're wrong, and eventually to pay for some beers if you've lost some bets.
  10. Be innovative at the technical but also the business level. Try to understand the organizational patterns in your company and push some innovative solutions when possible. Innovation does not mean testing everything that is new!
  11. Keep your team up to date with technology and business trends. The best way to do it is to send the EAs to selected conferences, offer regular training and invite them to buy and READ a couple of books every year. Try also to free some percentage of their work time to enable thinking, research and testing. Be careful: some EAs will be very keen on accepting external meetings and spending their time doing marketecture (marketing and architecture).
  12. Bridge the gap with Business and IT Ops. The EA architect is the best person to bridge the gap between different groups, since by nature their scope crosses the silos. In particular, try to define commonly created deliverables and meetings in order to ensure proper collaboration and synchronization. It is also a good way to get the knowledge you need on the advancement of each project.
  13. Test everything in your environment (cultural, technical). Never trust the brochure.
  14. Build your network. The EA needs to be connected to at least the key technical people in the company, the key company suppliers and the key business people. Finance, legal and procurement are also important people to talk with. I also recommend going to see how what you're building is used by your field employees and your clients. If you can afford it, try to get a subscription to Forrester, Gartner or other analyst services. User groups are also a very good opportunity to discuss with your peers. That's why having an EA tool is also a good investment!
I hope this will help you doing your job and understanding that being an Enterprise Architect is great and sometimes "dangerous" for your health.

Tuesday, July 7, 2009

SOA for real

I'm in IT and I will be honored to let the business build the company SOA program.
Let consulting companies help you work on your business processes, do your PowerPoint slides or BPMN diagrams, make the right ROI spreadsheets. As everybody says, SOA is about strategy, business vision, blah blah blah... Then, at some point, you will come back to reality, your IS, and you will discover that you cannot do your SOA in less than several years.

In the meantime, you have to keep the IT shop rolling and try to make it more flexible, agile, elastic. An IT overhaul is a much more complicated process than what consulting companies say. So let's ignore the philosophical debate about the birth and death of SOA.
Let's move on to my basic, learned-from-the-ground recommendations.

Organization maturity is key to build your program

I keep thinking that to begin doing SOA you need to meet some pre-conditions:
  1. Your organization should be mature enough. For me at least CMMi level 3.
  2. You need to know what your business value chains are and where you earn money
  3. You need to know how you will do billing and chargeback. Whatever you do, at some point you will get the question: why do I have to pay for all the others? So having a clear billing plan is part of the governance (and good for financial audit).
  4. You need to know how to negotiate, implement or verify Service Level Agreements.
  5. You need to be able to cope with the organizational changes and make all teams stop thinking product (silo) and start thinking services (offered, required).
If you do not meet those pre-conditions, no worries, you can still do a good job. But do not talk about SOA. Use the term IT overhaul, or Service Based Architecture, or IT/Business Shared Services.
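On pre-condition 3, here is the smallest possible illustration of usage-based chargeback (the proportional-split model is an assumption; pick whatever model your finance team will sign off on):

```python
# Sketch: split a shared service's monthly cost across consumers by call volume.
def chargeback(total_cost, calls_by_consumer):
    """Return each consumer's share of total_cost, proportional to its calls."""
    total_calls = sum(calls_by_consumer.values())
    return {consumer: round(total_cost * calls / total_calls, 2)
            for consumer, calls in calls_by_consumer.items()}

print(chargeback(9000.0, {"sales": 600_000, "support": 300_000}))
# {'sales': 6000.0, 'support': 3000.0}
```

The hard part is not the arithmetic; it is agreeing on the metering (calls, data volume, peak capacity reserved) and publishing it before the first bill arrives.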

Always begin with data quality and categorization

The first step is to categorize, create, gather, sometimes aggregate, cleanse and organize your data. For each category of data define the business rules to be applied and their lifecycle.

If you have legacy systems holding data, you may also have to think of implementing a mapping service to cross-link your data silos. This is true for all the categories of data listed below. This can be done statically using an ETL or dynamically using an EII solution. This mapping service should be available as a Web Service, of course.
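A minimal sketch of such a mapping service (the class, the system names and the in-memory table are hypothetical; a real one would sit behind a Web Service and a database):

```python
# Sketch: a cross-reference service linking the identifiers two silos use
# for the same business entity.
class IdMappingService:
    def __init__(self):
        # (source_system, source_id) -> {target_system: target_id}
        self._table = {}

    def link(self, system_a, id_a, system_b, id_b):
        """Record that id_a in system_a and id_b in system_b are the same entity."""
        self._table.setdefault((system_a, id_a), {})[system_b] = id_b
        self._table.setdefault((system_b, id_b), {})[system_a] = id_a

    def resolve(self, system, local_id, target_system):
        """Return the target system's id for the same entity, or None."""
        return self._table.get((system, local_id), {}).get(target_system)

svc = IdMappingService()
svc.link("SAP", "0004711", "CRM", "C-1042")
print(svc.resolve("SAP", "0004711", "CRM"))  # C-1042
```

Whether the table is fed statically by an ETL or resolved on the fly by an EII product, the contract stays the same: give me your local id, get back your neighbor's.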

Note that some tools were built especially to solve data management issues, in particular the semantic mismatch between data. I was really impressed by Progress DataXtend SI (in fact they offer a full suite of tools) and I still love the Informatica Platform. Try not to build your own; it is not cost-effective in the long run, though it can serve as a tactical solution between migration phases.

Try to categorize data in a way that will benefit your Service vision. For example:
  • Master data - cleanse and federate (or centralize) these data. Create management rules for them and define master data stewardship. One good product on the market to help you do that is EBX.Platform from Orchestra Networks.
  • Identity data - Identity and access management will be key in your Service approach. So define as soon as possible how you will cope with security compliance and security processes. Here we can recommend OpenSSO from Sun (especially the Web Service federation module). For Microsoft shops, as usual, you will find everything you need in their portfolio (here). The main standards to use are WS-Security, WS-Federation (Microsoft) and SAML v2 (everyone except Microsoft for now, but this will change). The most advanced users will also use OpenID, OAuth and WS-Addressing.
  • Transactional data - Should reuse master data and identity data to create traceable and coherent transactions. If data coherence cannot be enforced, then some reconciliation processes should be put in place.
  • BI data - In general fed by an ETL and stored in a big denormalized DBMS. All business-related data should finish their life here.
  • Configuration data - Always forgotten ... Remember that you want to offer services to your multiple clients. So for each of your clients, you will have to store some attributes to specialize the services. Where do you store this information?
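For the configuration data question, here is the smallest possible sketch of per-client service attributes with fallback to defaults (the attribute names are made up):

```python
# Sketch: per-client service configuration merged over shared defaults.
DEFAULTS = {"currency": "EUR", "invoice.format": "PDF"}
CLIENT_OVERRIDES = {
    "acme": {"currency": "USD"},   # acme overrides one attribute only
}

def config_for(client):
    """Merge a client's overrides on top of the service defaults."""
    return {**DEFAULTS, **CLIENT_OVERRIDES.get(client, {})}

print(config_for("acme"))   # {'currency': 'USD', 'invoice.format': 'PDF'}
print(config_for("other"))  # falls back entirely to the defaults
```

Wherever you store it (LDAP, a database, the registry), the point is that this layer must exist and be owned by someone; otherwise every service hard-codes its clients' specifics.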

Unlock your data silo

When you do SOA (or whatever name you use), you first need to free the data from their current application silos. My approach was to ask each product or application team to build what I called "basic services".

Here, they have to define the business objects to be exchanged (using their own technical format, but if possible reusing the same business terminology to ease semantic mapping later). I love to let developers do their job with the tools they like. I only require SLAs to be met, so you can use any technology you want.

I particularly like the new Sun Java Metro, Apache CXF, Spring WS and SOAFaces in the J2SE world, Symfony in the PHP world and WCF in the Microsoft world. Anyway, every server-based Java tool vendor offers its own SOA suite (WSO2, JBoss, OW2, IBM, Oracle, Sopera).

In order to avoid the Bazaar without creating a Cathedral, you need to create some basic architecture and technical recommendations at design time. You also need a tool to validate them; we used Parasoft SOAtest. We decided to create rules at the XML and WSDL layer. That's why we forbade the REST style for external communications. But again, it is a question of maturity. Create your standards, and find a way to validate them automatically.

Here are some examples of rules we are looking for:
  • The naming convention associated with the document/literal wrapped pattern should be used.
  • All the operations and their input and output parameters have to be documented using the documentation element. The documentation has to describe what the operation does, its typical use cases, and its pre- and post-conditions.
  • All data type definitions have to use references to external schemas; the import element has to be used for that. Services that use the same data types should use the same schema.
  • Errors have to be reported to the client using the standard SOAP fault message. The faultcode element has to include the type of error: Technical or Business.
    The possible errors that can be returned by an operation have to be documented in the WSDL file.
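As an illustration of automating such checks (a toy sketch, nothing like a full WSDL policy tool), here is a validator for the documentation rule using only the standard library:

```python
# Sketch: flag portType operations in a WSDL document that lack a
# <documentation> child element.
import xml.etree.ElementTree as ET

WSDL_NS = "{http://schemas.xmlsoap.org/wsdl/}"

def undocumented_operations(wsdl_xml):
    """Return names of portType operations with no documentation element."""
    root = ET.fromstring(wsdl_xml)
    missing = []
    for port_type in root.iter(WSDL_NS + "portType"):
        for op in port_type.findall(WSDL_NS + "operation"):
            if op.find(WSDL_NS + "documentation") is None:
                missing.append(op.get("name"))
    return missing

sample = """<definitions xmlns="http://schemas.xmlsoap.org/wsdl/" name="Orders">
  <portType name="OrderPort">
    <operation name="getOrder">
      <documentation>Returns one order by id.</documentation>
    </operation>
    <operation name="cancelOrder"/>
  </portType>
</definitions>"""

print(undocumented_operations(sample))  # ['cancelOrder']
```

Hook a script like this into the build and the standard enforces itself, instead of living only in a document nobody reads.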
So, during this first installment, we understood that depending on organization maturity, SOA is the right term or not. Then we looked at how to prepare the company for SOA, beginning with data: their categorization and their packaging as basic services. The objective here is to offer data as a service.

I will continue to describe my journey into SOA in several coming posts. Stay tuned.

Wednesday, July 1, 2009

How Do We Measure High Availability?


I would like to react to a post on Forrester blog, asking the following important question: How Do We Measure High Availability?.

These are the main issues we faced in my company:
  1. Explain and show the business how to define service levels. The business always wants 99.9% availability with 24*7 support. But when they discover the price of building the infrastructure to support the SLA, and of measuring it, they finally reduce the SLA to the minimum. Yet if the application goes down, they will scream, spreading emails to all the VPs, having forgotten by then their decision not to invest in the infrastructure. I'm still struggling with this one.
  2. Make the business SLA enforceable. Legal contracts with infrastructure suppliers are of particular importance. Penalties are really difficult to apply, and getting any money back is hard. So do not go to a big supplier if you are not big; penalties will not frighten any big company in that area.
  3. Do not forget the network. Cheap Content Delivery Networks exist today that will let you begin small and optimize availability at the "Internet cloud" level.
  4. Choose between synthetic monitoring (replay the same scenario every x minutes to simulate user actions on the application) vs. real monitoring (real data captured and dashboards created using a passive appliance on the network).
  5. Availability should also be related to business value chains: the number of clients lost or unable to access the application is highly important. If the global availability is low due to an FTP server used once a month, who cares? Availability should be mapped to hard dollars (euros).
  6. HA is difficult to set up since it involves both development and operations teams. Operations will look at server/network availability and deduce the application's. But all servers can be up and the application stalled; application availability is different from server availability. So who is responsible for what? It creates issues for defining tools, processes and reporting ...
  7. Measuring internally is different from measuring externally ...
  8. Do you prefer on-the-cloud or on-premise solutions? If you do not have CAPEX, move to rental solutions and OPEX.
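To anchor point 1 in numbers, here is the plain arithmetic behind availability targets (no product involved; just the math I show the business):

```python
# Sketch: translate an availability target into allowed downtime, and an
# outage log into a measured availability figure.
def allowed_downtime_minutes(target, days=365):
    """Minutes of downtime per period permitted by an availability target."""
    return days * 24 * 60 * (1 - target)

def measured_availability(outage_minutes, days=30):
    """Fraction of the period the service was actually up."""
    total = days * 24 * 60
    return (total - outage_minutes) / total

# "Three nines" still allows about 525 minutes (almost 9 hours) a year:
print(round(allowed_downtime_minutes(0.999), 1))       # 525.6
print(round(measured_availability(43.2, days=30), 4))  # 0.999
```

Showing the business that each extra nine divides the allowed downtime by ten, and multiplies the infrastructure bill accordingly, is the fastest way to get a realistic SLA.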
Hope this will help.

Thursday, June 25, 2009

Google on Web Performance

I spent some time looking for everything I could around web performance and had never discovered the dedicated Google web site. You can find everything you need: tutorials, videos, articles and a list of great software to use.
Never mind ....

Wednesday, June 24, 2009

MIT Technology Journal on Cloud

All you need to know concerning cloud in several posts. Look at the July/August 2009 MIT Technology Journal.

Value proposition of EA

A very good post was made recently concerning "A Value Proposition for Enterprise Architecture". It was also commented on in InfoQ. It describes the issues very well, but does not really look at the root causes.
In this post I will provide my contribution to the debate ...

The main issue with EA in North America is related to the lack of experience and confidence in what we call in Europe urbanization (or city planning). In France, there is no need to debate this; it is part of any big enough IT organization.

Architecture is part of our culture (Bazaar and Cathedral). North American people are more "pragmatic"; they hate to plan three years ahead, even for a virtual target. They rely on short-term adjustments and agile development. I have never heard of agile architecture ... but merely of resilient IT architecture (the name of my blog).

What's the difference between the architecture of New York and Chicago, or New York and Washington? The first was built with a pragmatic view (each road has a number; it is easy to grow and efficient to find your way); the other two were planned before being built and show a clear organization of the city (by French architects ;)).

In North America, you talk about flexibility, quick wins, low-hanging fruit. That's great in the short term, or in a dynamic industry where companies live and die quickly. EMEA is more oriented towards planning and organizing (everybody knows our bureaucracy), thinking long term. That's why I recommend having multi-cultural EA teams when possible.

It is also why TOGAF is well adapted to North America and will never work in its current form in Europe: there, the process is more important than the way you organize the city. It is not enough, and it covers mainly the IT side of EA.

Finally, the last issue I see, the most important one, is that the EA team is not independent. It is attached either to the business, the CFO or the CIO. If you want to play the role of the man in the middle, thinking globally and acting locally, you need to be independent. Of course you need your teams to be part of projects (not all projects, but the most important ones based on the business value chains) to be able to follow what is going on (Business and IT alignment). The EA team should be attached directly to the CEO.

Tuesday, June 23, 2009

Aptimize Latency Simulator - Open Source

I always struggled to make non-IT people understand latency. With this small add-in (working only for IE, that's bad!) you can simulate the effects of network latency.

As the developers said: "Simple and easy to use – designed for “non-experts”, developers, operations and business people to quickly see how fast or slow their website will be over the Internet or across the WAN."

Download the Aptimize Latency Simulator
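The point the simulator makes can also be shown with a back-of-the-envelope model (my simplification, not how the plug-in works: load time is roughly one round trip per request plus transfer time; real browsers parallelize requests):

```python
# Sketch: why latency, not bandwidth, dominates page load time on a WAN.
def page_load_seconds(requests, rtt_ms, total_kb, bandwidth_kbps):
    """Naive model: one round trip per request, plus raw transfer time."""
    round_trip_time = requests * rtt_ms / 1000.0      # seconds spent waiting
    transfer_time = total_kb * 8 / bandwidth_kbps     # seconds spent transferring
    return round_trip_time + transfer_time

# Same page (40 requests, 500 KB) on a 2 Mbit/s line, LAN vs transatlantic WAN:
print(round(page_load_seconds(40, 2, 500, 2000), 2))    # 2.08
print(round(page_load_seconds(40, 150, 500, 2000), 2))  # 8.0
```

Same bandwidth, same page, four times slower: that is usually the moment non-IT people stop asking for "a bigger pipe".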

Monday, June 22, 2009

Domain Specific Language to automate deployment

I found some Domain Specific Language to automate deployment and ensure quality and security. Could be an alternative way to fill the gap between dev and ops.

Governance - Where is my code coming from?

With PCI compliance, I have to ask all teams to look at all the libraries, frameworks, etc. we are currently using in our code. The objective is to validate that we do not violate copyright. Of course this should not be done just once; we need to validate each build. So I was looking for tools. And you know what, I found some.
  • HP Fossology (open source): used to track and monitor the use of open source software within an organization. The main functionality available at the moment is license detection; more features will be added in the near future. HP FossBazaar is a community platform to discuss best practices related to the governance of FOSS.

  • Black Duck (leader on this market): Three products are available within a unified framework: Black Duck Code Center, Export and Protex.
    • Code Center supports the front-end of the development process where developers search for and select open source components, as well as the ongoing monitoring of the components in use.
    • Protex and Export are used on the back end of the process when code needs to be validated before it is deployed.
    • The foundation of the Black Duck Suite is the Black Duck KnowledgeBase.

  • Protecode: Protecode offers a full range of products and services to help organizations properly manage their software IP. They claim to have solutions that detect, identify, record and report on all of the IP attributes of any software repository:

    • Enterprise IP Analyzer™ - analyzes and identifies all code in a directory, producing customizable reports identifying all IP attributes and potential violations.
    • Developer IP Assistant™ - is an Eclipse or Microsoft Visual Studio plug-in, operating unobtrusively on a developer’s workstation, detecting in real time all code that is brought into the development environment.
    • Build IP Analyzer™ - analyzes all code that is consumed as part of a build creating a detailed report on all components that were used in the final product, ensuring there are no violations against enterprise policies.
    • Protecode IP Audit Service™ - is a software due diligence service that provides expert analysis and reporting of an enterprise code portfolio. It establishes the Intellectual Property (IP) attributes of existing code and is effective and accurate in preparation for mergers & acquisitions or commercial transactions.

  • OpenLogic: OpenLogic provides software and services that enable enterprises to safely acquire, support, and control open source software in order to reduce potential risks and maximize the value of open source. OpenLogic Exchange (OLEX) is a free web site that provides on-demand access to over 130,000 open source packages, including the OpenLogic Certified Library of hundreds of packages that have been certified for use in the enterprise. OLEX enables companies to find, research, and download hundreds of certified open source packages on demand.

  • Sun License Tool (open source): a utility that helps analyze the copyright headers in your sources
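To make the principle concrete, here is a minimal sketch of the license-detection idea behind these tools: scan source file headers for well-known license markers. Real products go much further (code fingerprinting against large knowledge bases); the marker strings and file extensions below are illustrative assumptions.

```python
import os
import tempfile

# Illustrative markers only; real tools match code fingerprints, not strings.
LICENSE_MARKERS = {
    "GPL": "GNU General Public License",
    "Apache-2.0": "Apache License, Version 2.0",
    "MIT": "Permission is hereby granted, free of charge",
}

def detect_licenses(root):
    """Walk a source tree; map license ids to files whose header mentions them."""
    findings = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith((".java", ".cs", ".py", ".c", ".h")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as f:
                head = f.read(4096)  # license headers live at the top
            for license_id, marker in LICENSE_MARKERS.items():
                if marker in head:
                    findings.setdefault(license_id, []).append(path)
    return findings

# Demo on a throwaway directory containing one Apache-licensed file.
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, "Foo.java"), "w") as f:
    f.write("/* Licensed under the Apache License, Version 2.0 */\nclass Foo {}\n")
findings = detect_licenses(demo_dir)
```

Run as part of each build, even a crude scan like this catches the obvious cases before a commercial tool does the deep analysis.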

Sun's Next-generation SOA integration platform

I've been following this project for a year now, and I'm really impressed by the results obtained so far. I hope Oracle will be intelligent enough to leverage this work and the team around it.

Lots of new features are now available in OpenESB v3 (Project Fuji) Milestone 6:
  • Felix Runtime upgraded to version 1.8.0
  • Enhanced Enterprise Integration Patterns
  • New / Enhanced Service Types
    • S3 - (new) supports deployment to the Amazon S3 cloud environment
    • Java - (new) supports POJOs as services
    • REST - (enhanced) now supports SSL connections
  • GlassFish v3 Support: Fuji server can run on the GlassFish v3 OSGi runtime
  • Fuji Command Line Interface (CLI)
  • Web UI Enhancements
  • NetBeans IDE Enhancements
You can see a nice demo application that showcases some of the new things in Milestones 5 and 6. But the best is to try it!

Top 25 security coding errors

A must-read, available online or as a PDF.

"Experts from more than 30 US and international cybersecurity organizations jointly released the consensus list of the 25 most dangerous programming errors that lead to security bugs and that enable cyber espionage and cyber crime. Shockingly, most of these errors are not well understood by programmers; their avoidance is not widely taught by computer science programs; and their presence is frequently not tested by organizations developing software for sale."

Until now, most guidance focused on the 'vulnerabilities' that result from programming errors. This is helpful. The Top 25, however, focuses on the actual programming errors, made by developers, that create the vulnerabilities. As important, the Top 25 web site provides detailed and authoritative information on mitigation. "Now, with the Top 25, we can spend less time working with police after the house has been robbed and instead focus on getting locks on the doors before it happens." said Paul Kurtz, a principal author of the US National Strategy to Secure Cyberspace and executive director of the Software Assurance Forum for Excellence in Code (SAFECode).
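To show the kind of error on the list, here is an illustration (in Python with sqlite3, my choice of language, not the list's) of one recurring entry, improper neutralization of SQL input, together with its standard mitigation:

```python
# SQL injection: the vulnerable pattern vs. the parameterized fix.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # BAD: string concatenation lets crafted input rewrite the query
    return conn.execute(
        "SELECT role FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(name):
    # GOOD: parameterized query, the input always stays data
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

# The classic attack string leaks every row through the unsafe path
# and nothing through the safe one:
leaked = find_user_unsafe("' OR '1'='1")
safe = find_user_safe("' OR '1'='1")
```

The mitigation costs nothing at development time, which is exactly the point the Top 25 authors make: these are locks on doors, not forensics after the robbery.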

Open source integration products

The integration market is still alive ... I discovered some interesting open source products.

Sunday, June 21, 2009

SOAP over JMS W3C spec - never too late

The SOAP over Java Message Service 1.0 specification defines how SOAP should bind to a messaging system that supports the Java Message Service (JMS). Binding is specified for both SOAP 1.1 and SOAP 1.2, using the SOAP 1.2 Protocol Binding Framework.
It is never too late ...

Cloud Impacts on Software Design

When I see the evolution of computing, moving from the personal computer to small devices (iPhone, netbooks, etc.), and the evolution of software, from single applications to social, near real-time mashup applications, it is rather clear that to face millions of possible users, cloud computing will be more and more used.
Cloud computing promises a dynamic behavior enabling the infrastructure supporting your application to scale up, but also, and that is just as important, to scale down (ideal if you have volume peaks only once a month or once a year).
The dark side of the story: nobody knows how much it can cost... Costing models and comparisons of the various players have been great subjects for journals and blogs recently. Offers are difficult to compare, the platforms are different and evolving quickly, and of course, nothing is really free.
The most important impact for software architects is that the cloud supplier's cost model shapes the architecture to build. In order to reduce the operational cost, you may have to adapt the architecture. Some people already talk about Software Design By Explicit Cost Model.
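As a sketch of what designing against an explicit cost model means in practice, here is a toy pay-per-use tariff (all prices invented, not those of any real provider) showing how an architecture that scales down off-peak changes the bill:

```python
# Hypothetical tariff; every number is invented for illustration.
TARIFF = {
    "instance_hour": 0.10,    # $ per instance-hour
    "gb_out": 0.15,           # $ per GB transferred out
    "storage_gb_month": 0.12, # $ per GB-month stored
}

def monthly_cost(instance_hours, gb_out, storage_gb):
    """Price one month of usage under the toy tariff."""
    return (instance_hours * TARIFF["instance_hour"]
            + gb_out * TARIFF["gb_out"]
            + storage_gb * TARIFF["storage_gb_month"])

# Always-on server vs. an elastic design running 40% of the hours:
always_on = monthly_cost(instance_hours=720, gb_out=100, storage_gb=50)
elastic = monthly_cost(instance_hours=720 * 0.4, gb_out=100, storage_gb=50)
```

Once the cost function is explicit, architecture decisions (cache to cut transfer, batch to cut instance-hours) become line items you can optimize, which is the whole point of the approach.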
That's why I forecast the market to be split into four:
  • New entrants will use cloud infrastructure and platform as a service in order to reduce their fixed cost at the beginning and adapt more or less dynamically based on their success.
  • Companies with legacy code already will try to use less disruptive technologies like Azure for Microsoft dotnet users (and also now PHP!), Tibco Silver for people ready to encapsulate their code in SCA, or Heroku for people developing in Ruby, etc.
  • Companies ready to be locked in technologically in order to gain time on integration with their major applications, but still willing to benefit from the cloud. The best examples are Salesforce.com and Google App Engine. I'm sure that SAP and Oracle will follow soon.
  • People requesting high computing power for their business will use cloud computing to facilitate grid architecture implementation
Anyway, we all have to understand that cloud has a very positive impact on financial reporting and accounting (to understand why, read this excellent article from William A. Sempf). So, once again, IT may be forced to use cloud ...

To go further read:

Open source load testing tools not widely known

I discovered some tools I was not aware of, so they may be helpful for you:
  1. Pylot is an open source tool which runs HTTP load tests for testing performance and scalability of web services. It generates concurrent load (HTTP requests), verifies server responses, and produces reports with metrics. Test suites are executed and monitored from a GUI or shell/console.
  2. Tsung is an open-source multi-protocol distributed load testing tool. It can be used to stress HTTP, WebDAV, SOAP, PostgreSQL, MySQL, LDAP and Jabber/XMPP servers. HTTP reports are generated during the tests.
  3. Siege is an http regression testing and benchmarking utility. It was designed to let web developers measure the performance of their code under duress, to see how it will stand up to load on the internet. It supports basic authentication, cookies, HTTP and HTTPS protocols. Siege was written on GNU/Linux and has been successfully ported to AIX, BSD, HP-UX and Solaris.
Of course you can also use the best-known ones: Apache JMeter and The Grinder.
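At their core, all of these tools do the same thing: fire concurrent HTTP requests and collect response times. A minimal self-contained sketch of that loop in Python, pointed at a local stub server so it runs anywhere (a real test would target your own site and report percentiles, not just raw timings):

```python
import threading
import time
import urllib.request
from http.server import HTTPServer, BaseHTTPRequestHandler

class Stub(BaseHTTPRequestHandler):
    """Tiny local server standing in for the site under test."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass  # keep the console quiet

server = HTTPServer(("127.0.0.1", 0), Stub)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_port

timings = []
lock = threading.Lock()

def worker(requests_per_worker):
    """One simulated user issuing sequential requests."""
    for _ in range(requests_per_worker):
        start = time.time()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        with lock:
            timings.append(time.time() - start)

# 4 concurrent users, 5 requests each = 20 measured responses.
threads = [threading.Thread(target=worker, args=(5,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
server.shutdown()
```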

Microsoft, as usual, provides some very useful tools:

Microsoft offers 2 tools for stress testing IIS servers:

Finally, BrowserMob, created by Patrick Lightbody (an avid open source contributor who founded OpenQA, created Selenium Remote Control, and co-created Struts 2), is not an open source tool, but offers load testing in the cloud at a very affordable price (pay as you go).

Dotnet Tools

Free continuous integration plug-ins for dotnet code are available in Hudson! If you need a first tutorial, you can go here. Hudson now supports Team Foundation Server and FxCop.

Static code analysis tools are also available:
  • StyleCop (free): Whereas FxCop evaluates design guidelines against intermediate code, StyleCop evaluates the style of C# source code. Style guidelines are rules that specify how source code should be formatted.
  • StyleFix provides a GUI interface to selectively exclude/include files for StyleCop
  • CodeIt.Right ($250 per user license): CodeIt.Right's biggest benefit is the automatic code refactoring within Visual Studio. From the results screen you can check which violations to fix and then click the Correct Checked button.
Finally, here are some interesting tools and resources I found on the web concerning dotnet performance optimisation:

Lyza - Free Desktop BI for Dummies

This software is free, based on Java, and really simple to use.
It provides just what I needed to aggregate several sources of data easily (files, DBMS, Excel, etc.). http://www.lyzasoft.com/
Great job ...

Saturday, June 20, 2009

End User Application Performance Tools again

After my post on tools for testing RIA applications, I would like to share the new list of tools I tried for finding performance issues on the web.

Visual Round Trip Analyzer

The first set of tools comes from Microsoft. They are free, and easy to install and use. VRTA (Visual Round Trip Analyzer) requires Microsoft Network Monitor 3.x to work. In fact VRTA abstracts the use of Netmon, so the user does not need to know the details of Netmon but can simply click to start/stop the capture.
VRTA has three primary features:
  1. A main chart which displays http traffic in 3 dimensions,
  2. An All Files view that shows critical measurements for each file loading, and
  3. An Analysis report that indicates which file transfers are exceptions to best practice rules.
This article explains the basics of using it.


Fiddler

Fiddler version 2 is a Web Debugging Proxy which logs all HTTP(S) traffic between your computer and the Internet. Fiddler allows you to inspect all HTTP(S) traffic, set breakpoints, and "fiddle" with incoming or outgoing data. Fiddler includes a powerful event-based scripting subsystem, and can be extended using any .NET language.
This tool is beginning to have a number of interesting add-ons, like Watcher (security testing) and neXpert (to identify common web performance issues).


Firebug

Firebug integrates with Firefox to put a wealth of web development tools at your fingertips while you browse. You can edit, debug, and monitor CSS, HTML, and JavaScript live in any web page. You already know YSlow or PageSpeed, but check out these two new ones:
  • FirebugCodeCoverage is a benchmarking Firebug extension, inspired by Selenium IDE, for determining the percentage of your code executed over a period of time, known as code coverage. This is typically measured during automated testing to see how thoroughly the test cases exercise your code (with higher percentages being your goal).
  • SenSEO is a Firebug extension that analyzes a web page and indicates how well it is doing in terms of search engine optimization (SEO). The extension checks for correct use of meta tags, the presence of a title, headings, and other relevant criteria for optimal search engine optimization.

AOL Pagetest

You can also use Pagetest from AOL.

Free Tools for Monitoring Your Site’s Uptime

You can find here a short version of the article provided by the excellent Six Revisions web site. In this article, you will find free and useful monitoring tools to help you know when your website or web application becomes unavailable. In general, the advanced sites offer the service with some important limitations.

Here is the list of the advanced ones:
Here is the list of the basic ones:
You should be able to find one that fits your needs!
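The basic loop behind most of these monitors is simple: poll the site at an interval, record up/down, and compute availability. A sketch in Python, probing a stand-in function rather than a real URL so the example is self-contained; a real monitor would issue an HTTP request and treat any error or non-200 status as "down":

```python
import time

def check(probe):
    """Run one probe; any exception or non-200 status counts as 'down'."""
    try:
        return probe() == 200
    except Exception:
        return False

def availability(probe, samples, interval_s=0.0):
    """Poll `samples` times at `interval_s` and return uptime percentage."""
    results = []
    for _ in range(samples):
        results.append(check(probe))
        time.sleep(interval_s)
    return 100.0 * sum(results) / len(results)

# Hypothetical flaky service: up for 3 polls out of 4.
responses = iter([200, 200, 500, 200])
uptime = availability(lambda: next(responses), samples=4)
```

The free services differ mostly in how often they poll, from how many locations, and how they alert you, not in this core loop.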

The Browser and the cloud

First installment:
Last week one of the VPs of my company, using a MacBook with Safari (not compliant with corporate standards, which are HP PCs with Windows XP and IE 6 only), complained that one of our "business services" sending confirmation emails with an HTML message inside had an ugly layout. It was not a requirement when building the app, but now it is.

Second installment:
You connect to a cool web site and then, you get a message like:
"This application requires Microsoft Internet Explorer 6.0 or higher version"
My browser should be able to post the following message in response: "I understand your point; however, as a customer, I like to use the browser I want".

We are in 2009 and some software suppliers still do not get the disastrous effect of the lock-in anti-pattern.

When evaluating or using the cloud, do not forget to check whether the vendor is trying to lock you in with a particular browser, OS or technology. When you design an SOA service, do not forget the long tail of browsers in the world (begin by testing the ones of your VPs ;)

For me, good "cloud" oriented software should run in the browser. If a SaaS application needs external components or add-ins to be installed, then it's not really cloud anymore. If I get headaches deploying a SaaS solution by managing all possible platforms in my company, I lose the main advantages of SaaS.