If an issue is observed in production, two major aspects drive the solution: first, how quickly you can analyze the root cause, and second, how quickly you can fix the issue. The story starts with root-cause analysis; until you find the root cause, you cannot even think about a solution. Can you?

Now let’s think about an actual production environment. The application may be deployed across multiple JVMs. For the developer or support team, it is very tedious to log in to each box, download the log files, and analyze them to find the root cause of a problem.

Splunk can rid you of this tedious task. Splunk is a product that turns machine data into valuable insights. It can index application logs at a centralized location and provides a rich user interface on top of the indexed logs. With this interface you can search for the data patterns you are interested in. Splunk is an agent–server platform, where agents collect and index machine data from various sources in real time.



Licensing Aspects

Splunk charges its customers based on how many gigabytes of data are collected and indexed per day. When you download it for the first time, you get all of the Enterprise features of Splunk for 60 days and can index up to 500 megabytes of data per day.

Features
  • Fast data search and analysis 
  • Facilitates custom dashboards
  • Graphical representation
  • Access Control
  • Monitor and Alert
  • Distributed Search
  • Reports

Do you want to play with Splunk?

If yes, you can follow the pretty simple step-by-step installation instructions from here. First, try installing it as the ‘Local System User’. Once you install it and log in to Splunk Web, you will see the page below:


Click the ‘Add Data’ link.


Click the ‘A file or directory of files’ link and provide your log file location. Once you provide the location and the data is saved successfully, you will see the page below:


Now you are ready to search: click the ‘Start Searching’ link. In the search box you can enter a data pattern to search for in the log files. You can save your search pattern and results with the actions under the ‘Save’ button, and you can create reports and alerts using the ‘Create’ button.
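For example, a search like the following (the sourcetype and field names are illustrative, not from the original post) counts error events per host, which is a typical first step in root-cause analysis:

```
sourcetype=app_logs ("ERROR" OR "Exception") | stats count by host | sort -count
```

The part before the first pipe is a plain keyword filter over the indexed events; each piped command then transforms the matching events, here aggregating and sorting them.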



Isn't it easy and interesting? I was really impressed when I tried it in my local environment and explored the different features.

Alternative

There are many open-source tools on the market that also provide centralized logging. For more detail, refer to this link.


You might be looking for an option to profile (capture methods' execution times) your Spring application. Spring provides different ways to profile an application. Profiling should be treated as a separate concern, and Spring AOP offers an easy way to separate it. With Spring AOP you can profile your methods without making any changes to the actual classes.

You just need to perform a few simple steps to configure your application for profiling with Spring AOP:

In the application context file, add the `<context:load-time-weaver/>` tag to enable load-time weaving.

With this configuration, AspectJ's load-time weaver is registered with the current class loader.
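The cross-cutting idea behind AOP profiling, intercepting a call and timing it without touching the target class, can be sketched without Spring at all using a plain JDK dynamic proxy. The interface and class names below are illustrative, not from the original post:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Illustrative business interface and implementation.
interface OrderService {
    String placeOrder(String item);
}

class OrderServiceImpl implements OrderService {
    public String placeOrder(String item) {
        return "placed:" + item;
    }
}

public class ProfilingDemo {
    // Wraps any interface-based target so that every call is timed.
    @SuppressWarnings("unchecked")
    static <T> T profiled(T target, Class<T> iface) {
        InvocationHandler handler = (proxy, method, args) -> {
            long start = System.nanoTime();
            try {
                return method.invoke(target, args);
            } finally {
                long micros = (System.nanoTime() - start) / 1_000;
                System.out.println(method.getName() + " took " + micros + " us");
            }
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[]{iface}, handler);
    }

    public static void main(String[] args) {
        // The caller sees only the interface; timing happens transparently.
        OrderService service = profiled(new OrderServiceImpl(), OrderService.class);
        System.out.println(service.placeOrder("book"));
    }
}
```

Spring AOP with load-time weaving achieves the same separation more powerfully (it can also advise classes, not just interfaces), but the mechanism of "advice around a method call" is the same.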

Why is Locking Required?

When two concurrent users try to update the same database row simultaneously, there is a real chance of losing data integrity. Locking comes into the picture to prevent conflicting updates and ensure data integrity.

Types of Locking

There are two types of locking: optimistic and pessimistic. In this post, optimistic locking is described with an example.

Optimistic Locking: no database lock is taken up front. Instead, the update is validated at transaction commit, typically by checking a version or timestamp column; the commit fails if another transaction changed the row in the meantime.
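In JPA/Hibernate this is what a `@Version` column gives you. The commit-time check itself can be sketched in plain Java with an atomic version counter (the class and field names below are illustrative):

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of optimistic concurrency: an update succeeds only if the
// row's version is still the one the writer originally read.
public class OptimisticRow {
    private final AtomicLong version = new AtomicLong(0);
    private volatile String value = "initial";

    public long readVersion() { return version.get(); }
    public String readValue() { return value; }

    // Returns true if the commit succeeded; false means another writer
    // committed first and the caller should re-read and retry, much like
    // handling an OptimisticLockException in JPA.
    public boolean commit(long versionRead, String newValue) {
        // Atomically bump the version only if it is unchanged since our read.
        if (version.compareAndSet(versionRead, versionRead + 1)) {
            value = newValue;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        OptimisticRow row = new OptimisticRow();
        long v = row.readVersion();          // two writers read version 0
        boolean first = row.commit(v, "A");  // first commit wins
        boolean second = row.commit(v, "B"); // stale version: commit rejected
        System.out.println(first + " " + second + " " + row.readValue());
    }
}
```

Note that no lock is ever held while reading; the conflict is detected only at the moment of the write, which is exactly why this strategy works well when conflicts are rare.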

I came across a problem in a coding contest. In this post, I would like to share my approach to solving it. I am certainly not claiming to have invented something new, nor am I trying to reinvent the wheel; I simply describe the end-to-end approach I followed. It was a great brainstorming exercise.

Jenkins is an open-source tool that provides continuous integration services for software development. If you want more detail about Jenkins and its history, I would suggest referring to this link. This post will help you install and configure Jenkins and create jobs that trigger Maven builds. Setting up Jenkins is not rocket science; in this post I condense the installation and configuration steps.

Peer code review is an important activity for finding and fixing mistakes overlooked during development. It improves both software quality and developers' skills. Although it is a good process for quality improvement, it becomes tedious if you have to share files and send review comments over email, organize formal meetings, and coordinate with peers in different time zones.

If you use Hudson as your continuous integration server, and you feel lazy about opening Hudson explicitly to check build status or reading Hudson build-status mails, there is an option to monitor Hudson builds and perform build activities from within the Eclipse IDE itself. This post describes installing, configuring, and using Hudson in Eclipse.

As a developer or architect, you often need to draw sequence diagrams to demonstrate or document functionality. Of course, doing this manually takes considerable time. Just think: what if the sequence diagram were generated automatically, free of cost? Your reaction would be ‘wow, this is great’. But the next question is ‘how’.

If you are using JPA 2.0 with Hibernate and want to do audit logging from the middleware itself, I believe you have landed in exactly the right place. You can try audit logging in your local environment by following this post.

Required JPA/Hibernate Maven Dependencies

JPA Configuration in Spring Application Context File

For JPA/Hibernate, you need to configure the entity manager, transaction manager, data source, and JPA vendor in your ‘applicationContext.xml’ file.
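A minimal sketch of such a configuration might look like the following; the bean ids follow Spring conventions, but the package name, driver, and connection URL are purely illustrative:

```xml
<bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <property name="driverClassName" value="org.h2.Driver"/>
    <property name="url" value="jdbc:h2:mem:auditdb"/>
</bean>

<bean id="entityManagerFactory"
      class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="dataSource" ref="dataSource"/>
    <property name="packagesToScan" value="com.example.model"/>
    <property name="jpaVendorAdapter">
        <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"/>
    </property>
</bean>

<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
    <property name="entityManagerFactory" ref="entityManagerFactory"/>
</bean>

<tx:annotation-driven transaction-manager="transactionManager"/>
```

With this wiring in place, `@Transactional` service methods obtain an entity manager bound to the current transaction, which is where audit-logging hooks (e.g. entity listeners) can participate.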


A few months back we had a debate about using Camel vs. an enterprise service bus (ESB) in a new project. I was on the Camel side; I found it hard to justify an ESB just for integration and service chaining. In this blog post, based on my understanding, I will try to summarize when to use what.

Background

An existing piece of software with several independent modules needs to be redesigned.

It is almost a decade old and hasn't been through many design changes. Business services are built on top of those modules.

The last time our team worked with Esper for complex event processing, it was version 3.4.0. One requirement we envisaged was that EPL statements be externalized into configuration files rather than kept in code. So we came up with an XML configuration file where one can configure EPLs, provide certain attributes (epl-name, enable/disable, etc.), and associate them with a listener.
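The original file format is not reproduced here, but a sketch of such a file could look like this; the schema is entirely illustrative of our own custom format (it is not part of Esper itself), while the embedded statement is ordinary EPL:

```xml
<epl-statements>
    <epl-statement epl-name="HighAvgTemperature" enabled="true"
                   listener="com.example.cep.HighTemperatureListener">
        select avg(temperature) from SensorEvent.win:time(60 sec)
    </epl-statement>
</epl-statements>
```

At startup, a loader reads this file, registers each enabled statement with the Esper engine, and attaches the configured listener class, so EPL changes no longer require a code change.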

More than a year back, during some research related to CEP, I came across Storm, which was "touted" as a CEP engine, and it was very difficult to come to terms with these assertions. Storm and S4 had just entered the market; for me, having some prior experience with Esper, there was no comparison between Esper and Storm/S4.

Reviewing Storm a year later (release 0.8.2), it seems very mature and popular, as well as well equipped to be used as a CEP engine.

From day one of working with Storm, I was mistaken about the spout's modus operandi. I believed spouts could both pull and push data from their sources. But when I was about to implement a push-styled spout, I stumbled over a few challenges.

I wanted to build a Thrift-based spout onto which different event sources could push data. I found one such implementation, storm-scribe (https://github.com/stormprocessor/storm-scribe), whose wiki describes it as push-styled.
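Since Storm's `nextTuple()` is pull-based, the usual workaround for a push-styled source is to buffer pushed events in a queue that the spout drains. The bridge itself can be sketched in plain Java; the class and method names are illustrative, and in a real spout `drain()` would be called from `nextTuple()`:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Bridges push-styled producers to a pull-styled consumer,
// the way a spout's nextTuple() expects to be driven.
public class PushToPullBridge {
    private final BlockingQueue<String> buffer = new LinkedBlockingQueue<>(10_000);

    // Called by external event sources (e.g. a Thrift handler thread).
    public boolean push(String event) {
        return buffer.offer(event); // non-blocking; returns false when full
    }

    // Called by the pull side; returns null when nothing is pending,
    // just as nextTuple() simply emits nothing when there is no data.
    public String drain() {
        return buffer.poll();
    }

    public static void main(String[] args) {
        PushToPullBridge bridge = new PushToPullBridge();
        bridge.push("event-1");
        bridge.push("event-2");
        System.out.println(bridge.drain()); // event-1
        System.out.println(bridge.drain()); // event-2
        System.out.println(bridge.drain()); // null: queue empty
    }
}
```

The bounded queue is the important design choice: if producers push faster than the topology drains, events are rejected (or could be made to block) instead of exhausting the worker's memory.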

Yesterday our Cassandra development cluster broke down. Mahendra reported that executing any statement in cassandra-cli failed with a weird 'schema disagreement error' on the console. I googled, my usual way of being :), and found this FAQ on the Cassandra wiki, which clearly describes the cause of the problem:

Prior to Cassandra 1.1 and 1.2, Cassandra schema updates assume that schema changes are done one-at-a-time.

We faced a weird issue with Hector: one of the APIs we developed to read data from Cassandra crashed when we tried integrating the DAO layer with other app dependencies. We were shocked and had no clue what went wrong, even though the code we had developed was thoroughly tested.

Cassandra composites, as discussed in the Datastax blog, influenced us to adopt composite modelling in one of our pilot projects. We used Hector APIs as the client library for this assignment. Below is a column family example that uses a composite comparator type.
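As an illustrative cassandra-cli sketch (the column family name and component types are examples, not the ones from our project), a composite comparator is declared like this:

```
create column family Timeline
    with comparator = 'CompositeType(UTF8Type, LongType)'
    and key_validation_class = 'UTF8Type'
    and default_validation_class = 'UTF8Type';
```

Each column name in such a family is then a two-part composite (a string followed by a long), and columns sort first by the string component and then by the long, which is what makes composites useful for time-series-style slicing.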

Prologue

Rich Internet applications (RIAs) have led to tremendous acceptance of web applications. Along with this, instead of HTML travelling back and forth, XML is interchanged to communicate information. Before the advent of JSON, XML was considered the de facto interchange language of the web.

Further, with the growing acceptance of web services in the community, XML became a very popular style of data interchange.

Here I am writing to cover the difference between orphanRemoval=true and CascadeType.REMOVE in JPA 2.0 with Hibernate.
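As a minimal sketch (the entity and field names are illustrative), the two options are declared as follows. CascadeType.REMOVE deletes children only when the parent entity itself is removed, whereas orphanRemoval=true additionally deletes a child that is merely disconnected from the parent's collection:

```java
@Entity
class PurchaseOrder {
    @Id Long id;

    // Children are deleted only when the PurchaseOrder itself is removed.
    @OneToMany(mappedBy = "order", cascade = CascadeType.REMOVE)
    List<LineItem> cascadedItems;

    // Children are also deleted when simply removed from this collection.
    @OneToMany(mappedBy = "order", orphanRemoval = true)
    List<LineItem> ownedItems;
}
```

So with orphanRemoval, `order.getOwnedItems().remove(item)` is enough to delete the row at flush time; with CascadeType.REMOVE alone, the disconnected child would survive in the database.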