The word count example explained at http://static.springsource.org/spring-hadoop/docs/current/reference/html/batch-wordcount.html didn’t run for me.

I followed these steps:

1. Imported and compiled the source (bundled with the spring-data-hadoop distribution) in the Spring Tool Suite IDE on a Windows box.
2. Exported the executable jar with all required dependencies using the ‘installApp’ option (Run As -> Gradle Build -> installApp) in the IDE.
3. Copied ‘build/install/batch-wordcount’ to the Linux Hadoop cluster.
4. Executed the sample with ‘./build/install/wordcount/bin/wordcount classpath:/launch-context.xml job1’.

However, the execution failed with a ClassNotFoundException for the class org.apache.hadoop.examples.WordCount$TokenizerMapper.
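A ClassNotFoundException like this usually means the jar containing the mapper class was on the client classpath but was never shipped to the cluster. One way to address that with Spring for Apache Hadoop is to point the job definition at the jar holding the classes so the framework submits it along with the job. Below is a sketch of the relevant piece of a launch-context.xml; the jar location and the input/output paths are placeholders of my own, not the sample’s actual values:

```xml
<!-- Sketch only: paths are placeholders, not the sample's real values. -->
<hdp:job id="wordcountJob"
         input-path="/wordcount/input"
         output-path="/wordcount/output"
         mapper="org.apache.hadoop.examples.WordCount.TokenizerMapper"
         reducer="org.apache.hadoop.examples.WordCount.IntSumReducer"
         jar="file:/path/to/hadoop-examples.jar"/>
```

With the `jar` attribute set, the task JVMs on the cluster nodes can load TokenizerMapper instead of failing at class-loading time.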

In a previous blog post we learned about enterprise integration patterns (EIPs). In this post we will look at the Apache Camel framework, which realizes those patterns.

About Camel:

Apache Camel is an open-source project that is almost 5 years old and has a large community of users. At the heart of the framework is an engine that does the job of mediation, routing messages from one system to another.
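To make the routing idea concrete, here is a minimal route sketched in Camel’s Spring XML DSL (this assumes camel-core and camel-spring on the classpath, and the directory names are made up): it tells the engine to pick up files from one folder and deliver them to another.

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://camel.apache.org/schema/spring
                           http://camel.apache.org/schema/spring/camel-spring.xsd">

  <camelContext xmlns="http://camel.apache.org/schema/spring">
    <route>
      <!-- poll files from the 'inbox' directory (made-up path) -->
      <from uri="file:data/inbox"/>
      <!-- deliver them to 'outbox'; the engine mediates between the endpoints -->
      <to uri="file:data/outbox"/>
    </route>
  </camelContext>

</beans>
```

Swapping the endpoint URIs (file, jms, http, and so on) is all it takes to route between different kinds of systems, which is what makes the engine a natural fit for the EIPs discussed earlier.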

Here are a couple of issues I encountered when I started experimenting with Spring for Apache Hadoop.

One: the Hadoop job I was running was not appearing on the Map/Reduce Administration console or the JobTracker interface.
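A common reason a job never shows up on the console is that it silently runs in local mode (LocalJobRunner) because the client configuration does not point at the cluster. With Spring for Apache Hadoop the cluster endpoints can be set in the configuration element; here is a sketch, where the host name and ports are assumptions for an old-style (pre-YARN) cluster, not values from my setup:

```xml
<hdp:configuration>
  <!-- placeholder host/ports; substitute your NameNode and JobTracker -->
  fs.default.name=hdfs://master:9000
  mapred.job.tracker=master:9001
</hdp:configuration>
```

Once `mapred.job.tracker` resolves to the real JobTracker, submitted jobs should appear on its web interface.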

And the other: I was trying to run the job from the Spring Tool Suite (STS) IDE on a Windows machine, whereas the Hadoop cluster was on Linux machines. Permission issues (AccessControlException) prevented job execution in this mode.
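For the cross-platform permission problem, one workaround I’m aware of (on clusters using simple authentication, i.e. no Kerberos) is to have the client identify itself as a user with write access on HDFS via the HADOOP_USER_NAME environment variable. A sketch, where ‘hadoop’ is an assumed user name:

```shell
# On simple-auth clusters Hadoop derives the client identity from this
# variable; 'hadoop' is an assumed user that owns the HDFS target dirs.
export HADOOP_USER_NAME=hadoop

# Then launch the job as before (shown commented out here):
# ./build/install/wordcount/bin/wordcount classpath:/launch-context.xml job1
```

This avoids the AccessControlException caused by the Windows account name not existing on the Linux-side HDFS.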