Splunk 5 offers fault-tolerant indexing for mission-critical applications

October 30, 2012
The new version of the Splunk machine-data search engine comes with a distributed indexing technology that could cut storage costs for customers running the software as a high-availability service.

"The data that is being collected in Splunk is becoming more mission critical," said Sanjay Mehta, Splunk vice president of product marketing, explaining the need for distributed indexing.

Splunk 5 can also generate reports more quickly than its predecessor, the company claims, and comes with new tools to link the software to third-party programs.

The Splunk search engine was designed to collect and index data generated by machines, such as log files from servers and routers. Administrators can use such data to troubleshoot problems and ensure smooth operations. The company has also pitched Splunk as a tool for business managers to collect and analyze operational intelligence.

This is the first version of Splunk to use a new indexing technology that incorporates replication into its routine operations. The software stores multiple copies of its index, which it uses to answer user queries, across different servers. If one server goes down, indexing continues on the remaining servers, and when the downed server comes back online, it is brought up to date with the new information. Users querying Splunk can get answers from any operational server, which increases the reliability of the service.
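In Splunk's clustering model, this behavior is driven by settings in each instance's server.conf file. The following is a minimal illustrative sketch rather than anything from Splunk's announcement: the hostnames, shared key, and the choice of three copies with two searchable are placeholder assumptions.

    # Cluster master (server.conf): coordinates replication across peers.
    # replication_factor = total copies of each index bucket (assumed value);
    # search_factor = how many of those copies stay in a searchable state.
    [clustering]
    mode = master
    replication_factor = 3
    search_factor = 2
    pass4SymmKey = changeme

    # Indexing peer (server.conf): receives incoming data and replicated
    # bucket copies from other peers over the replication port.
    [replication_port://9887]

    [clustering]
    mode = slave
    master_uri = https://cluster-master.example.com:8089
    pass4SymmKey = changeme

    # Search head (server.conf): the query tier. It asks the master which
    # peers hold searchable copies and fans queries out to them.
    [clustering]
    mode = searchhead
    master_uri = https://cluster-master.example.com:8089
    pass4SymmKey = changeme

Keeping the search factor lower than the replication factor is the usual trade-off here: non-searchable copies consume less disk, at the cost of extra work to make them searchable if a peer holding a searchable copy fails.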

"The index data is replicated as it is streaming into Splunk. You can make as many copies as you need," Mehta said. "We have a distributed architecture, so the query tier determines where to fulfill the queries."