Vertica

Archive for April, 2013

Taking a Moonshot at Big Data Analytics for Everyone

HP Vertica is very excited about Monday’s announcement of the HP Moonshot system.

Why? Because we believe that the combination of the HP Vertica Analytics Platform running on HP Moonshot Servers offers a truly game-changing value proposition for a variety of customers and for new segments of the market.

Moonshot is, simply put, a groundbreaking system that customers can rapidly deploy, scale, and manage, with dramatically reduced space and energy requirements. While traditional IT services that support business functions will continue to run on general-purpose server infrastructure, specialized workloads require a new computing platform that can deliver innovative solutions to market at unprecedented speed and scale.


We’ve already successfully tested the HP Vertica Analytics Platform on HP Moonshot Servers and achieved performance comparable to traditional Big Data analytics hardware across certain performance ranges. For a large segment of the market, that is more than sufficient to handle their Big Data analytics loads, while offering significant potential savings in cost, space, and energy.

Running Vertica on Moonshot is yet another proof point of the value provided by HP’s Information Optimization solutions, and a great example of the innovation that makes us so excited to be part of the greater OneHP.

To learn more about HP Project Moonshot, visit http://www.hp.com/go/moonshot

Comparing Pattern Mining on a Billion Records with HP Vertica and Hadoop

Pattern mining can help analysts discover hidden structures in data. Pattern mining has many applications—from retail and marketing to security management. For example, from a supermarket data set, you may be able to predict whether customers who buy Lay’s potato chips are likely to buy a certain brand of beer. Similarly, from network log data, you may determine groups of Web sites that are visited together or perform event analysis for security enforcement. In this blog post, we will show you how the HP Vertica Analytics Platform can efficiently find frequent patterns in very large data sets.

A pattern mining algorithm

Frequent patterns are items that occur often in a data set. After finding frequent patterns, analysts can use methods such as association rule mining to discover rules in the data. A classic example of an association (from Wikipedia) is that supermarket customers who buy diapers also tend to buy beer. While there are many frequent pattern mining algorithms in the literature, we will use the FP-growth algorithm. FP-growth is considered efficient because it performs fewer database scans and does not require candidate set generation [1].

Instead of describing FP-growth in detail, we list the main steps from a practitioner’s perspective. We need to perform the following steps to obtain frequent patterns using FP-growth:

  1. Create transactions of items
  2. Count occurrence of item sets
  3. Sort item sets according to their occurrence
  4. Remove infrequent items
  5. Scan DB and build FP-tree
  6. Recursively grow frequent item sets

Let’s use an example to illustrate these steps. We will assume that our data set is a Web proxy log from a corporate network that, among other things, has IP address and Web sites visited as fields. Our goal is to find patterns such as Web sites that are visited together. After step 1, we obtain the set of transactions shown in Table 1. Each transaction lists the Web sites visited from each IP address. After steps 2 and 3, we get Table 2, which has items sorted by their frequencies. If an item is considered frequent only when it occurs more than three times, then in step 4 we discard cnn and yahoo from the table. In step 5 we use the pruned table to create an FP-tree (Figure 1). Finally, in step 6 we grow frequent patterns. The final output is shown in Table 3. The output, for example, shows that many users tend to visit both the Web sites of HP and Amazon.
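To make steps 2–4 concrete, here is a minimal Python sketch over made-up transactions in the spirit of Table 1. The data, site names, and threshold are illustrative only, not the ones from our experiments:

```python
from collections import Counter

# Toy stand-in for Table 1: Web sites visited per IP (hypothetical data).
transactions = [
    ["hp", "amazon", "cnn"],
    ["hp", "amazon"],
    ["hp", "amazon", "yahoo"],
    ["hp", "google"],
    ["google", "amazon"],
    ["hp", "google", "amazon"],
    ["google"],
]

MIN_SUPPORT = 3  # an item is frequent only if it occurs more than 3 times

# Steps 2-3: count item occurrences and sort by frequency (Table 2 analogue).
counts = Counter(item for tr in transactions for item in tr)
sorted_items = counts.most_common()

# Step 4: drop infrequent items, then order each transaction by descending
# frequency -- the order in which items will be inserted into the FP-tree.
frequent = {item for item, n in counts.items() if n > MIN_SUPPORT}
pruned = [
    sorted((i for i in tr if i in frequent), key=lambda i: -counts[i])
    for tr in transactions
]
```

With this toy data, cnn and yahoo fall below the threshold and are pruned, mirroring step 4 of the example above.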

Table 1: Sites Visited

Table 2: Sorted Items

Figure 1: FP-tree

Table 3: Final output of frequent patterns
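Step 5 builds the FP-tree by inserting each pruned, frequency-sorted transaction into a prefix tree, so that transactions sharing a prefix share nodes. Here is a minimal Python sketch of that construction; the class and function names are our own, for illustration:

```python
class FPNode:
    """A node in the FP-tree: an item, its count, and links to children."""
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}

def build_fp_tree(transactions):
    # Step 5: insert each pruned, frequency-sorted transaction; shared
    # prefixes share nodes, so the tree compresses the data set.
    root = FPNode(None, None)
    for tr in transactions:
        node = root
        for item in tr:
            if item not in node.children:
                node.children[item] = FPNode(item, node)
            node = node.children[item]
            node.count += 1
    return root

# Hypothetical pruned transactions (most frequent item first).
tree = build_fp_tree([["hp", "amazon"], ["hp", "amazon"], ["hp"], ["hp", "amazon"]])
```

Step 6 then mines the tree recursively: for each item, the paths from its nodes up to the root form a conditional data set, from which a smaller conditional FP-tree is built, and so on.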

Parallel pattern mining on the HP Vertica Analytics Platform

Despite the efficiency of the FP-growth algorithm, a single-threaded, sequential implementation can take a very long time on large data sets. Fortunately, we can rewrite the algorithm using SQL and HP Vertica user-defined functions (UDFs), and let the HP Vertica Analytics Platform parallelize the implementation. The main challenge is mapping the algorithm to SQL statements and removing dependencies between UDF invocations so that they can run independently and in parallel. Below are the statements we used in the HP Vertica Analytics Platform, continuing the Web proxy log example introduced earlier.

  1. Create transaction of items
    • SELECT DISTINCT srcIP, hostname INTO uniqueSipHn FROM networkLog;
  2. Count frequency of occurrence of each host name
    • SELECT hostname, count(*) AS frequency INTO hnCnt FROM uniqueSipHn GROUP BY hostname;
  3. List host names visited by each IP and also the frequency of each host name.
    • SELECT a.srcIP, b.hostName, b.frequency INTO sipHnCnt FROM uniqueSipHn a INNER JOIN hnCnt b ON a.hostName=b.hostName;
  4. Build conditional transactions. Assume an item is frequent if it occurs more than 20,000 times.
    • SELECT t1.hostName, t1.srcIP, t2.hostName AS condItem INTO condTr FROM sipHnCnt t1 JOIN sipHnCnt t2 ON (t1.srcIP=t2.srcIP) AND (t1.frequency>20000 AND t2.frequency>20000) AND ((t2.frequency>t1.frequency) OR (t2.frequency=t1.frequency AND t2.hostName>t1.hostName));
  5. Generate patterns in parallel using UDF.
    • SELECT FPGrowth(srcIP, condItem, 20000) OVER(PARTITION BY hostName ORDER BY srcIP) INTO frequentItems FROM condTr;
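The key to parallelism here is step 4: for every frequent item, the self-join emits the co-occurring items that rank strictly higher (by frequency, with host name as a tie-breaker). Each item’s conditional transactions can then be mined independently, which is what the PARTITION BY hostName clause in step 5 exploits. Below is a rough Python analogue of that join over toy data; the names are illustrative, and the 20,000-occurrence threshold is omitted for brevity:

```python
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical per-IP transactions, as produced by step 1.
transactions = {
    "10.0.0.1": ["hp", "amazon", "google"],
    "10.0.0.2": ["hp", "amazon"],
    "10.0.0.3": ["hp", "google"],
}

counts = Counter(i for tr in transactions.values() for i in tr)

def rank(item):
    # Order by global frequency; host name breaks ties, as in the SQL join.
    return (counts[item], item)

# Step 4 analogue: for each pair of items seen together, credit the
# lower-ranked item with the higher-ranked one. Every key of cond_tr is
# an independent mining task -- the unit of parallelism in step 5.
cond_tr = defaultdict(list)
for src_ip, tr in transactions.items():
    for a, b in combinations(tr, 2):
        lo, hi = sorted((a, b), key=rank)
        cond_tr[lo].append((src_ip, hi))
```

Because no conditional transaction for one item depends on another item’s partition, the UDF instances have no cross-dependencies and can run on all nodes at once.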

The real test: a billion records, and, of course, Hadoop

Now that we know how to implement parallel frequent pattern mining in the HP Vertica Analytics Platform, let’s see how the implementation performs on a large data set. Our input data is a few days’ worth of Web proxy logs. The log file is 330 GB in size and contains a billion records, each with 22 fields. For comparison, we use Mahout’s implementation of parallel frequent pattern mining (Cloudera Hadoop 2.0 and mahout-0.7). We wrote a MapReduce program to create transactions from the log (step 1 of the algorithm). Our test bed consists of 12 HP ProLiant servers, each with 12 cores, 96 GB RAM, and a 128 GB SSD.

Figure 2 depicts our results. On 4 servers, the HP Vertica Analytics Platform completes the end-to-end pattern mining in under 140 seconds. Hadoop takes 1,250 seconds (about 21 minutes)—approximately 9x longer than the HP Vertica Analytics Platform. As we increase the number of servers to 12, both the HP Vertica Analytics Platform and Hadoop take less time to complete. However, unlike Hadoop, the HP Vertica Analytics Platform scales close to linearly in this setup.

Are you searching for patterns in your data set? Want a fast and easy-to-use data analytics platform? Evaluate the HP Vertica Community Edition today.


[1] Jiawei Han, Jian Pei, and Yiwen Yin. Mining Frequent Patterns without Candidate Generation. SIGMOD 2000.
