Jun 17 2009
 

Recently I’ve been working on a REST API for reporting workflow status information in Alfresco.  After getting some of the functionality nailed down, it really bothered me that I wasn’t able to use Test Driven Development (TDD) in the process.  So I went looking, and I found quite a few open source tools in the wild that looked like good prospects for acceptance testing the REST APIs I was working on.  It was time for a SMACKDOWN! OOOOOOOH YEAAAAAAAH!

Contender #1 – Selenium

I had heard of Selenium in the past and had wanted to tinker with it for a long time, so I tried this one out first.  Note that Selenium is really just the “brand name”; there are actually several inter-related offerings here.  The first is Selenium IDE, which comes as a Firefox add-on and is pretty awesome.  Using this tool, you can basically record your tests and play them back.  This is very easy to try out – literally within minutes I had recorded my first tests.
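Incidentally, the tests Selenium IDE records are just HTML tables of commands (“Selenese”).  A recorded check against one of these REST URLs would look roughly like this sketch (the commands shown are from memory, and the exact rows depend on what you record):

```xml
<!-- Each row is a command / target / value triple -->
<table>
  <tr><td>open</td><td>/alfresco/service/api/workflow/status/user/admin.json</td><td></td></tr>
  <tr><td>verifyTextPresent</td><td>wf:reviewTask</td><td></td></tr>
</table>
```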

That was very cool, but I needed to test result sets with dynamic data, so I had to take a look at Selenium-RC, which has APIs that let you use your favorite programming language: Java, C#, Perl, PHP, Python, or Ruby.  The good news with this tool is that it actually uses a real browser to do its testing.  That’s also the bad news.  For each and every test, a new Firefox instance was launched, which would certainly take a while as the test suite grew larger.  The main advantage I saw with this tool is that it would be great for testing JavaScript-heavy web applications for cross-browser compatibility.  In fact, Selenium-RC is leveraged by Selenium Grid, which lets you test across browsers and operating systems.  I dig it, but I just have some simple REST APIs to test, so the whole Selenium suite is overkill for me.

Reporting of results with Selenium-RC would ultimately have to go through JUnit’s reporting mechanisms, which are pretty decent as I recall, but setting that up is an extra step in your Ant build file.
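For what it’s worth, that extra step amounts to something like the following (target and directory names here are made up for illustration): run the tests with Ant’s junit task using the XML formatter, then aggregate the results into an HTML report with junitreport:

```xml
<!-- Hypothetical target; adjust classpath refs and directories to your build -->
<target name="test-rest" depends="compile-tests">
    <junit printsummary="yes" haltonfailure="no">
        <classpath refid="test.classpath"/>
        <formatter type="xml"/>
        <batchtest todir="build/test-results">
            <fileset dir="test/src" includes="**/*Test.java"/>
        </batchtest>
    </junit>
    <junitreport todir="build/test-results">
        <fileset dir="build/test-results" includes="TEST-*.xml"/>
        <report format="frames" todir="build/test-report"/>
    </junitreport>
</target>
```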

Oh, one other note.  You can use Selenium IDE to record your tests and export them as a Selenium-RC Java class.  Pretty cool, but the implementation felt quick and dirty as I recall – something about the class name I gave it and the source that was generated was ‘off’.

Contender #2 – HtmlUnit

This is basically just a Java API that makes it easy to extract information from web pages programmatically.  Because I knew that several other tools were built with HtmlUnit at their core, I didn’t spend any time investigating the possibility of using it directly, which was a smart decision.

Contender #3 – JWebUnit

This one is cool in concept.  It’s basically an abstraction over Selenium and HtmlUnit.  If you want to test with HtmlUnit most of the time for speed, but switch to Selenium for a cross-browser sanity check from time to time, JWebUnit could be your answer.  I spent maybe an hour setting it up in my environment and writing a test with it.  It didn’t handle HTTP basic authentication right out of the box (when using a URL like “http://admin:admin@localhost:8080/alfresco/my/rest/service.json”), whereas Selenium did handle such a URL properly.  I did look (just now) to see if it handles HTTP basic authentication some other way, and it looks like it does via the API – see WebTestCase.getTestContext() and TestContext.setAuthorization().

Regardless, the bottom line is that this is a Java-based API that I would have to use to program all of my tests, and the ability to switch on “Selenium mode” isn’t very compelling for testing REST APIs.  Therefore, this one doesn’t add much value over using HtmlUnit directly for this use case.

Contender #4 – Canoo WebTest

My first reaction regarding Canoo was “Oh man, I don’t like that it uses Ant so heavily”.  For reasons I won’t go into here, I’m using Ant (not Maven) as my build tool for this project, so ultimately that’s not a deal breaker.  That said, getting up and running with Canoo was pretty awesome.  The instructions say to put the WebTest bin directory in your path, which generally bothers me, but I did it anyway.  Then there’s a way to generate a project skeleton (a very Maven-esque thing to do), which I did, and by running their shell script within the generated test project’s directory, I was off and running.  I then created my own test files (in Ant-based XML with custom Canoo tasks), plugged them into the main build script, and BOOYAH!  I was off to the races.

The execution of the tests is pretty fast, certainly faster than Selenium, and result reporting is tight:

[Screenshot: Canoo WebTest HTML report]

Canoo WebTest also has the advantage that adding new tests is a declarative exercise – no programming and compilation required.  HTTP Basic authentication is handled nicely via simple attributes on the <invoke> step.  Here’s an example:

        <webtest name="Check end date capability for assigned tasks">
            <invoke url="http://localhost:8080/alfresco/service/api/workflow/status/user/admin.json?endDate=2009-05-23"
                    description="Admin with end date 2009-05-23"
                    username="admin"
                    password="admin"/>
            <verifyText text='{"description":"Review","priority":2,"due":null,"properties":null,"percent":0,"completed":null,"status":"Not Yet Started","duration":null,"created":"2009-05-23 23:59:59.0","name":"wf:reviewTask"}'/>
            <not>
                <verifyText text='{"description":"Adhoc Task","priority":2,"due":null,"properties":null,"percent":0,"completed":null,"status":"Not Yet Started","duration":null,"created":"2009-05-24 00:00:00.0","name":"wf:adhocTask"}'/>
            </not>
        </webtest>

The Winner – Canoo WebTest

Just to spell it out clearly: Canoo is my tool of choice for REST API testing, due to ease of use, speed of execution, good reporting, and easy handling of HTTP basic authentication per test.  If/when I move to a Maven build system, it looks like that’s alright by Canoo, since they have a Maven plugin.

Other Alternatives

Other possibilities for folks out there are:

  • Celerity – Ruby based testing framework.  Not for me since I’m not a Ruby wonk.
  • JSFUnit – Specifically geared towards testing JSF applications, which is not the case here.
  • WebDriver – Similar to Selenium, and in fact being rolled into Selenium according to the FAQ.  As such I didn’t look at this for longer than 5 minutes.
Sep 15 2008
 

Last week I began working for Alfresco Software, as I previously announced.  During that first week, I learned about Document Management, amongst other things (like the Spring Framework for example).  The end result: I wanted to kick myself.  It really would have been nice to have Alfresco’s Document Management solution in place when I was working on Gestalt/Accenture’s CMMI level 3 compliant Agile software delivery method!

Our process for defining processes was basically this:

  1. Draft the process
  2. Pilot it (and make revisions based on what was learned)
  3. Approve it
  4. Deploy it

Of course, there were several sub-steps within those processes, and they required version control, auditing, and moving documents to different folders at certain times (a document workflow).  At the time, we used SharePoint as best we could to manage all this.  It handled version control and auditing, but it had two shortcomings as I recall.  First, there was no automated way to baseline a set of documents as part of a release candidate (such as you can do with CVS or Subversion tagging).  Second, the moving of documents was all manual, every step of the way.  This doesn’t sound like much, but as I recall we had six or seven folders in the workflow, and we could have used some automation when doing round-robin peer reviews within our team.  And the deployment of these assets was no trivial matter; I remember it took me almost a whole day to learn how to deploy a set of process assets, and then to deploy a set of them for the first time.

So as I went through “Getting Started With Document Management”, I was shaking my head the whole time.  It is so easy to create content rules and workflow rules.  Instead of manually moving documents from folder to folder, a workflow could have been set up to do that automatically.  Instead of manually notifying a teammate that it’s their turn in the round-robin peer review chain, the workflow could have done that for us.  And best of all, we could have easily set up a templatized space that could have been used for all of the processes and associated documents that we delivered over the course of more than two years.  Finally, because Alfresco is open source and standards based, we could have extended the platform to automate our specific processes for deploying process assets.

Considering the number of documents we handled, the amount of reviews, the number of gates in the process, and the number of people involved, I have no doubt that if we used Alfresco we would have saved a lot of time and therefore money as we defined, piloted, approved, and deployed new Agile processes across the company.

So yeah, document management software is a great thing.  I only wish I knew about it years ago.

Sep 08 2008
 

Today is my first day working with Alfresco, which I am very excited about!  Let me tell you why.

Over the weekend I read an old article by Peter Drucker called “Managing Oneself”.  In it he basically says that knowledge workers should know their strengths, how they perform best, and what their values are.  Then you can make well-informed decisions regarding where you belong and what you can contribute.  I found this article very interesting in light of my recent job search.  When I started out, I knew what my strengths were.  First, I have deep technical experience as a software engineer, having spent eight of the last ten years writing software and doing all of the things associated with it (see my profile on LinkedIn for details). Second, I’m very well versed in Agile process design, modeling, implementation, and deployment, having spent two years working with a great team on developing the processes for an Agile software development methodology that was also CMMI level 3 compliant.  That methodology is well on its way to becoming THE official Agile delivery method for Accenture. Third, I have been an active ScrumMaster since January of this year, and thus have competency with Agile project management (several people have told me I do a very good job of it).  Finally, I’ve always had a thirst for learning, have a commitment to delivering quality work products, and work very well as part of a team.

I perform best under deadlines. I learn best by doing, and second best by reading.  I believe that I work best as part of a small organization.  I value integrity, family, learning and growth, excellence, and service.  In the workplace, that means doing the right thing, doing it well, doing it transparently, serving the customer, and always learning and growing.  That’s the kind of environment that I want, and I believe I have found it with Alfresco.

Alfresco is an open source software company that delivers enterprise content management software.  In my new role, as I understand it, I will be doing some pre-sales work, identifying how Alfresco software can help deliver on potential customers’ needs.  I’ll also be doing some architecture and design work for customers and partners, once they’ve made a decision to use Alfresco’s software.  Finally, I’ll be doing some development, contributing back to the open source products that Alfresco offers.  In this role, I believe I can leverage my strengths as a software engineer, as a planner (the process work I did required lots of planning), and as a project manager.  I’ll get to learn and hopefully master the domain of enterprise content management.  I believe the team is a very good one, based on my interviews, a person I know who works there (Hi Jess!), and Matt Asay, whose blog I have been reading on and off since the beginning of this year.  Not to mention the successful nature of the business!

So today I begin a new journey with a new team, and I’m very excited to get started!  I hope to define exactly what I will contribute over the coming weeks while learning about enterprise content management.  If anyone has suggestions or ideas on how I can quickly come up to speed, please comment here and let me know!

Jun 16 2008
 

I don’t pretend to know what the future of the Social Web holds, but I have some ideas about the markup language that will power much of it.  First though, let me recap the short history of Social Web Markup.

About a year ago Facebook launched a full application platform for their social network, consisting of a rich set of social APIs, many of which were wrapped with easy-to-use tags called FBML (FaceBook Markup Language).  Since then, a few things have happened.  First, Bebo opened up their social network with their own markup language, which they called SNML (Social Network Markup Language); it was mostly the same as the FBML collection, though it included some tags only offered by the Bebo platform.  More recently, Ringside Networks released beta versions of their Social Application Server, which supports many of the FBML tags, plus a few only available on the Ringside platform.

This is all well and good, since tag libraries for specific social networks definitely enable social application developers and designers to create rich social applications quickly.  What is unfortunate about the current situation is that these tag libraries are closed in that they are only supported by the platforms that offer them (though the open source Ringside Social Application Server supports many FBML tags).  This means that social application developers would have to rewrite portions of their applications in order to deploy to multiple social networks.  I’m reminded of the early days of the J2EE application server market, when each vendor offered their own tag libraries in an effort to differentiate their platforms from each other.  In the end though, most of those tag libraries did many of the same things via different syntax, and ultimately JSR-52 was established and the JSTL (JavaServer Pages Standard Tag Library) was produced. Now all J2EE application server vendors support JSTL.
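To make the “same things, different syntax” point concrete, here are a couple of common FBML tags as I remember them (treat the attribute details as approximate); SNML and the Ringside tags cover much of the same ground under their own names:

```xml
<!-- FBML: render a user's profile picture and linked name -->
<fb:profile-pic uid="12345" size="square"/>
<fb:name uid="12345" useyou="false"/>
```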

I don’t necessarily see the same course of events unfolding in the social web space, but perhaps there will be some similarities.  First of all, I’m inclined to believe that social application developers will want to be able to write a social application once and run it anywhere. In order for that to happen, those developers would have to code using standard APIs and tags, which is an argument for a standard for a social tag library.  Alternatively, because Ringside offers the ability to render social tags via widgets, I can see a whole community emerging around social tag development, which would in turn enable the rendering of those tags via widgets anywhere across the web.

What do you think?  Are you a social application developer?  Do you want to be able to write once and run anywhere?  If so, how do you see social tags evolving?

Jun 11 2008
 

Last week Jonathan Otto, author of the Run Voomaxer Facebook application, and Eric Pascarello, highly acclaimed author of Ajax in Action and JavaScript: Your Visual Blueprint for Building Dynamic Web Pages, joined the Ringside team and participated in some great discussions with the team.  In one of those discussions, Rich Friedman gave an overview of various open source licenses and what they mean, including GPL, LGPL, BSD, and others.

[Embedded video: free video streaming by Ustream]

May 28 2008
 

Last week I started fiddling with Ustream. At first it was just a cool new technology that I was trying out, but it has evolved into a daily part of our lives at Ringside Networks. We’ve been streaming live video for about a week, and it has been working out very well. The team members that are remote (we have 3-4 depending on the day) have been privy to the office conversation that they have always missed. Since we started streaming, those remote teammates have been clamoring for better cameras, and more of them.

You can view our live stream here. Note that there are portions of the day that are very boring. For example, at the moment, anyone that is tuned in will be watching my face as I type this blog entry and listening to Weezer. However, Twitter is a great tool for notifications. Whenever an interesting discussion is going on, I’ve made a habit of turning the camera outward towards my colleagues and the white board and logging a message on Twitter with a link to our live audio/video stream. Follow me on Twitter if you’d like to get these updates.

The most interesting part of our use of this technology is that it pairs nicely with our open source development model. Developers out there using our software can watch and listen while we discuss how to resolve a bug, how we will prioritize our work for the next beta release (every two weeks), or just get a feel for where we stand on a daily basis (our daily stand-up meetings are at 2:30pm EST). Even better than watching live, though, the community will have the ability to participate through the chat window in the Ustream interface. For our remote team members, we’ve been using Skype to bring them into the live discussion. As users of our software, you could potentially have the same level of access.

To me, live streaming takes open source development to the next level of openness and provides an engaging experience that will ultimately result in better software and faster solution delivery due to the availability of this rich communication medium.

By the way, we also tried Stickam for a day, which offers group video chat capabilities. Our experience has been that Stickam’s availability is not as good as Ustream’s. Also, Stickam’s user interface wasn’t very intuitive or descriptive. We have tried on two separate occasions to coordinate three video streams in the same session without success. Regardless, this service has great potential, and I look forward to improvements that are surely coming.