Archive for the ‘Vertica 7’ Category

Database Designer in HP Vertica 7

With HP Vertica 7, you can use Database Designer through Management Console. As in previous releases, you can still run Database Designer from Admin Tools, but the Management Console integration offers an additional, easy-to-use way to create a database design.

Database Designer optimizes query performance and minimizes the disk storage that the database uses. It does this by analyzing your logical schema, sample data, and, optionally, your sample queries. Then, Database Designer creates a physical schema design (a set of projections) that can be deployed automatically or manually.

Check out the following demo to get started with the new Database Designer feature in Management Console.

* When using this new feature, remember that to create a design, you must be a DBADMIN user or have the DBDUSER role assigned to you, with write access to the tables in your schema.
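
As a rough sketch of that prerequisite in vsql (the user name, schema, and table here are hypothetical, not part of the original post):

    -- Grant the Database Designer role to a non-DBADMIN user and make it
    -- active by default (hypothetical user name):
    GRANT DBDUSER TO mc_designer;
    ALTER USER mc_designer DEFAULT ROLE DBDUSER;

    -- The user also needs write access to the tables in the design schema:
    GRANT USAGE ON SCHEMA store TO mc_designer;
    GRANT ALL ON TABLE store.store_sales TO mc_designer;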


For more information, visit www.vertica.com/documentation.

Viewing Query Plans and Profile Data in Management Console 7

With HP Vertica 7.0, Management Console offers a new way to visualize your query plans and get profile information about your queries. Using Management Console’s Query Plan Visualizer, you can run EXPLAIN on a query and get a visual representation of the lowest-cost query plan. Management Console highlights and links to key information in the output, so you can spot issues at a glance. When you profile a query, Management Console provides a graphical view of the resources used when HP Vertica executed the query.
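
For readers who know the command-line route: the visualizer takes the same statement you would otherwise prefix with EXPLAIN in vsql. A sketch (the tables here are hypothetical):

    -- EXPLAIN prints the optimizer's lowest-cost plan without running the query:
    EXPLAIN
    SELECT c.customer_name, SUM(f.sales_amount)
    FROM fact_sales f
    JOIN dim_customer c USING (customer_key)
    GROUP BY c.customer_name;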

Our new video tutorial walks you through using Management Console to view query plans and profile data. See the video below, and make sure to check out the other video tutorials we offer here.


You can also read more about the feature in this blog post: Visualizing Your Query Plan with Management Console 7.

The ConvergedSystems 300 video


Yesterday, we posted a new video to the site featuring our very own product marketing manager Steve Sarsfield. The video breaks down the key advantages of the new ConvergedSystems 300 as an all-in-one platform for your big data analytics needs. Check out the video above, or go to the main ConvergedSystems 300 page to learn more.

HP Vertica Tutorials: You Asked, We Listened.

Over recent months, we’ve heard our community request short, instructional videos and tutorials to help them learn more about the rich and powerful features of the HP Vertica Analytics Platform.

Well, we heard you, and we’ve developed and posted some initial videos to help you maximize your investment in HP Vertica. These new videos highlight new features in HP Vertica 7 (“Crane”). Among the videos we’ve posted are:

  • A two-part series on the HP Vertica Connector for HCatalog: Part 1 provides an overview, and Part 2 includes a demonstration
  • A demonstration of the power of HP Vertica Flex Zone
  • A tutorial on how to run Database Designer in Management Console 7
  • A five-part series that demonstrates how to set up HP Vertica 7 with Amazon Web Services
  • An introduction to HP Vertica 7 Fault Groups

You can see these and all video tutorials here. Here’s a sample:


Stay tuned in the weeks ahead. We’ll be posting new videos that highlight new features in Management Console, show how to use Fault Groups to set up large clusters, and more.

We’d love to hear more from you! If you have any suggestions or ideas for topics for future videos, let us know. You can post your ideas on our forum at community.vertica.com, or you can send them to vertica-docfeedback@hp.com.

We’re committed to your success! Check back soon to see what’s new in HP Vertica Tutorials!

Gartner Magic Quadrant Released – HP Vertica Enters the Leaders Quadrant

The Gartner Magic Quadrant has long been recognized as critical research that organizations rely on to weigh, evaluate, and ultimately select vendors as the infrastructure for their IT initiatives.

Yesterday, Gartner released the 2014 Gartner Magic Quadrant for Data Warehouse and Database Management Systems. We are very proud to announce that the HP Vertica Analytics Platform has entered the Leaders Quadrant, gaining both in execution and in its ability to fulfill our vision of storing, exploring, and serving data and insights to thousands of organizations.

We encourage you to read this complimentary report as you consider HP Vertica to handle your most extreme Big Data analytics initiatives. And we especially want to thank all of our innovative customers, who push us each and every day to build the best, most scalable, and most open analytics platform on the planet.

Want to get started with HP Vertica? Download our Community Edition – it’s free up to 1 TB with no time limit.

Visualizing Your Query Plan with Management Console 7

You run your newly crafted query and patiently wait for the results to appear on the terminal. You stare at your clock, waiting. One minute, two minutes, then five, then ten. Your heart sinks. Why is it taking so long? The query should be done by now, you tell yourself. You built your projections to optimize the joins, and you’re sure there is enough memory to avoid spilling to disk. Still, you start to doubt yourself, so you decide to check.

You decide to run EXPLAIN to see if there’s anything obvious that the optimizer did incorrectly. You open a separate vsql window and run EXPLAIN. You can see that there’s a hash join at Path ID 3; that’s not good. You wonder: why isn’t this a merge join? You could have sworn you were joining on sorted columns. You’d better check the sort order on the columns for your projections. What’s the query for that, again? Well, since that may not be the bottleneck anyway, you decide to check the profile information for the query. You try to remember: which table stores profile information? EXECUTION_ENGINE_PROFILES, or QUERY_PLAN_PROFILES? Which columns? You’ll probably have to select all of them and see which ones you need.

And once you do find the columns you need, you may realize that trying to understand vsql profile-metric output is not how you want to spend your afternoon.
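
For the record, the manual route looks something like this in vsql (a sketch; the query and tables are hypothetical, and the IDs come from the hint that PROFILE prints):

    -- PROFILE executes the query and prints a hint containing the
    -- transaction_id and statement_id to use below:
    PROFILE SELECT c.customer_name, SUM(f.sales_amount)
    FROM fact_sales f
    JOIN dim_customer c USING (customer_key)
    GROUP BY c.customer_name;

    -- Then dig the per-operator counters out of the profiling table:
    SELECT path_id, operator_name, counter_name, counter_value
    FROM v_monitor.execution_engine_profiles
    WHERE transaction_id = 45035996273704986   -- from the PROFILE hint
      AND statement_id = 1
    ORDER BY path_id, operator_name, counter_name;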

But that doesn’t mean you are forever doomed to wade through dense text to get your answers…

Welcome to the Management Console Query Plan Visualizer!

In HP Vertica Analytics Platform 7, Management Console (MC) offers a simple interface, the Query Plan Visualizer, for getting plan or profile information about your query. The Query Plan Visualizer provides a graphical view of the details of your query, allowing you to quickly identify and understand problem areas.

[Screenshot: the Query Plan Visualizer in Management Console]

Let’s examine the same query mentioned previously using MC’s Query Plan Visualizer. Just paste in the query text and click Explain Plan. The results are shown here:

[Screenshot: EXPLAIN plan output in the Query Plan Visualizer]

MC’s EXPLAIN output maintains the structure of the plan, and also highlights important information such as “No Statistics,” while linking to relevant metadata for the projections used and columns materialized. For example, we can see that Path ID 3 is a hash join, but now we can actually find out why.

[Screenshots: projection and column metadata linked from the EXPLAIN output]

So now we know why there was a hash join instead of a merge join. But how do we see how the query was actually executed? We can get the profile metrics for the query using either of these methods:

  1. We can click “Profile Query.” MC then executes the query and displays profile information for it once it completes. However, our query takes a while to run. This option may take a bit of time…
  2. We can also examine past query activity via the Activity tab located at the bottom of the screen. Using the Activity tab, we can retrieve profile information for queries we’ve already run.

In this case, we’ll choose the second option.

To do so:

    1. Go to the Activity tab.
    2. Select Queries from the dropdown menu for the chart type. This chart graphs the number of queries run over time.

Because we know our query ran recently, we’ll see it at the right side of the graph. Clicking that location brings up a table of query activity from the past few minutes. Sorting the queries by Elapsed brings our long-running query to the top.

[Screenshot: query activity table on the Activity tab]

Clicking Explain/Profile on the far right of the table brings us back to the Query Plan Visualizer page and requests the profile information from the HP Vertica database.

[Screenshot: collapsed profile view in the Query Plan Visualizer]

The screen above shows a collapsed view of the profile information, which hides projection and column information. Metric information for each path appears to the right of the plan. Five metrics are measured for each path: disk usage, memory usage, data sent, data received, and time spent. Each blue bar represents a path’s usage of a metric relative to all the other paths. For example, in the Time column, the row for Path ID 3 has the largest blue bar, filled to about 35%; this means that, of all the paths, Path ID 3 took about 35% of the total execution time. Now we can easily see that it was indeed our hash join that took the most time. We can also see that the disk read on Path ID 6 was responsible for a significant portion of the execution time.
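
These bars summarize the same per-path metrics exposed in the QUERY_PLAN_PROFILES system table. A rough vsql equivalent of what MC is drawing (the IDs are placeholders):

    -- One row per plan path, with the five metrics MC renders as bars:
    SELECT path_id, running_time, memory_allocated_bytes,
           read_from_disk_bytes, sent_bytes, received_bytes
    FROM v_monitor.query_plan_profiles
    WHERE transaction_id = 45035996273704986   -- placeholder IDs
      AND statement_id = 1
    ORDER BY path_id;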

So what about that pie chart? The pie chart shows how long the query took in each of its phases. A query goes through multiple phases before it completes, and ideally it spends most of its time in the execution phase, because the other phases should finish relatively quickly. So if your pie chart is mostly green, that’s good. Think of the chart as a sanity check that validates whether your query spent most of its time where it should.

Additionally, if you want to track the progress of a long-running query, you can profile it with Enable Monitoring checked. With monitoring enabled, the counter values on the right-hand side update at the interval you set, and they show how much each value has increased or decreased since the previous update. So rather than waiting for profiling to complete before you can see profile metrics, you get the latest information about which paths are currently being processed, refreshed at your chosen interval.

[Screenshot: profiling with Enable Monitoring checked]

By removing the need to know the specific queries required for getting profile information, and by making relevant data (projection metadata, query events) just a click away, the MC Query Plan Visualizer can greatly simplify the process of getting and understanding profiling information. If you’re still using a pre-7.0 version of MC, be sure to upgrade to Vertica 7.0 and give this a whirl :-)

Avoiding the OLAP Cliff for Count Distinct Queries in Vertica

Since the early days of data warehousing and the heyday of Ralph Kimball, data warehouse practitioners have recognized pre-computed aggregates to be “the single most effective tool the data warehouse designer has to improve performance” (1). However, there was then, and continues to be, a gaping hole in the dimensional modeling approach concerning distinct aggregates, and in particular what to do about COUNT(DISTINCT x).

Let’s say that you want to count the number of distinct users who visit your website each day, each week, and each month. You might try to solve this problem with pre-computed aggregates. However, the number of distinct users who visited each week cannot be computed from the number of distinct users who visited each day, because some users may have visited your website on more than one day in the same week. Since you can’t roll a distinct aggregate up, and you can’t incrementally maintain it, you’re pretty much stuck computing what you need from the detail data.
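
A tiny example makes the trap concrete. The sketch below assumes a hypothetical visits(user_id, visit_date) table:

    -- Per-day distinct visitors: easy to precompute and store.
    SELECT visit_date, COUNT(DISTINCT user_id) AS daily_users
    FROM visits
    GROUP BY visit_date;

    -- The weekly number is NOT the sum of the daily numbers: a user who
    -- visits on both Monday and Tuesday would be counted twice. You have
    -- to go back to the detail rows:
    SELECT DATE_TRUNC('week', visit_date) AS week_start,
           COUNT(DISTINCT user_id) AS weekly_users
    FROM visits
    GROUP BY DATE_TRUNC('week', visit_date);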

Before HP Vertica, this had always been the Achilles’ heel of OLAP: operations ran 10x to 1000x faster if you had a pre-computed aggregate (which looks fantastic in a demo). If, however, you asked a question that depended on an aggregate that you hadn’t pre-computed, or one that could not be rolled up, you fell off the “OLAP cliff” and had to compute the answer from the detail data, which could take a long time. Query performance for OLAP queries was highly variable and appeared erratic.

HP Vertica’s columnar MPP database architecture changed most of that. By operating directly on a compressed columnar representation of the data, combined with the ability to massively parallelize the computation, most aggregations could be computed in real time without pre-materializing the aggregates. However, computing COUNT(DISTINCT x) could still be an expensive, memory-intensive operation, even on HP Vertica’s massively parallel architecture. Computing a distinct aggregate on Vertica’s MPP architecture can be broken into these phases:

  • computing a partial aggregate per distinct group on each node
  • redistributing/collecting the partial aggregates on the same node and aggregating again per distinct group
  • sending the results to the initiator node.

HP Vertica has been highly optimized for computing COUNT DISTINCT, but in some cases the computation can still require a great deal of memory and data movement. Since COUNT DISTINCT can be expensive to compute and cannot be rolled up, some people have called this the “COUNT DISTINCT Pain”.

HP Vertica 7.0.0 introduces a new family of aggregate functions designed to alleviate this pain when exact results are not required:

  • APPROXIMATE_COUNT_DISTINCT()
  • APPROXIMATE_COUNT_DISTINCT_SYNOPSIS()
  • APPROXIMATE_COUNT_DISTINCT_OF_SYNOPSIS()

APPROXIMATE_COUNT_DISTINCT(x) is the direct equivalent of COUNT(DISTINCT x) and, by default, is accurate to within 1% of the value returned by COUNT(DISTINCT x), 97% of the time. You can request more or less accuracy than the default with an optional second argument. Whereas COUNT(DISTINCT x) requires a relatively large amount of memory per aggregate, APPROXIMATE_COUNT_DISTINCT(x) needs only:

  • about 1,500 bytes of memory per aggregate for 5% accuracy 97% of the time (typically within 2%), or
  • about 50K bytes of memory per aggregate for the default 1% accuracy 97% of the time.
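
In practice, the swap is just a function name. A sketch, again using the hypothetical visits table from the earlier example:

    -- Exact, but memory-hungry per aggregate:
    SELECT COUNT(DISTINCT user_id) FROM visits;

    -- Approximate: within 1% of the exact answer 97% of the time, by default:
    SELECT APPROXIMATE_COUNT_DISTINCT(user_id) FROM visits;

    -- The optional second argument sets the error tolerance (here 5%),
    -- trading accuracy for memory:
    SELECT APPROXIMATE_COUNT_DISTINCT(user_id, 5) FROM visits;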

Furthermore, there is no need to make the partial aggregates distinct before sending the data to the initiator node, as required by COUNT(DISTINCT x).

In a performance experiment on a 1 TB TPC-H dataset, using a single-node developer’s laptop, an ungrouped COUNT DISTINCT on (l_orderkey, l_partkey) required 211 seconds. Using APPROXIMATE_COUNT_DISTINCT, the same computation took just 4.02 seconds, about 52 times faster. In other cases, where the number of distinct values was small, COUNT DISTINCT and APPROXIMATE_COUNT_DISTINCT were equally fast. And in some cases where HP Vertica’s COUNT DISTINCT optimizations kick in, COUNT DISTINCT can be faster. So, while your mileage may vary, there are cases where APPROXIMATE_COUNT_DISTINCT is clearly a lot faster.

But it gets better, because unlike COUNT(DISTINCT x), APPROXIMATE_COUNT_DISTINCT rolls up the same way SUM(x) and COUNT(x) roll up. By materializing the internal “synopsis” used by APPROXIMATE_COUNT_DISTINCT, you can roll it up later while preserving the full accuracy of APPROXIMATE_COUNT_DISTINCT(). On the same 1 TB TPC-H dataset, precomputing APPROXIMATE_COUNT_DISTINCT_SYNOPSIS on (l_orderkey, l_partkey), grouping by a low-cardinality column, and materializing the result with CREATE TABLE AS SELECT took about 30 seconds. Rolling the precomputed aggregates up with APPROXIMATE_COUNT_DISTINCT_OF_SYNOPSIS() took just 64.7 milliseconds, more than 3200x faster than running COUNT DISTINCT against the detail data.

To illustrate, let’s suppose that you’re a political insider using the Pulse innovation package from HP Vertica. HP Vertica Pulse enables you to analyze the sentiment expressed in a tweet. You want to be notified, in real time, when the number of distinct people who post a tweet with a negative sentiment about one of several political topics exceeds N in any one-week period. Instead of constantly running COUNT DISTINCT on the most recent week’s worth of tweets, you could compute and save an APPROXIMATE_COUNT_DISTINCT synopsis once per hour, and then run a relatively fast query that combines the pre-materialized synopses with a real-time synopsis computed from the most recent partial hour. Remember that this would not work with regular per-hour COUNT DISTINCT results, because any individual who posted multiple tweets in the same week would be counted multiple times. The remarkable thing is that double counting does not occur when you aggregate APPROXIMATE_COUNT_DISTINCT synopses. To allow for the possibility of a false-negative signal, you could adjust the alert threshold downward and, if it triggers, compute an exact COUNT DISTINCT. However, in this case, the accuracy of APPROXIMATE_COUNT_DISTINCT is much higher than the accuracy of the sentiment classifications, so the measure of interest is intrinsically subjective and approximate anyway.

To compute and save an approximate count distinct synopsis, use the APPROXIMATE_COUNT_DISTINCT_SYNOPSIS() aggregate function. To roll up a set of pre-computed synopses, use the APPROXIMATE_COUNT_DISTINCT_OF_SYNOPSIS() aggregate function. That’s all there is to it.
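
Putting the two together for the daily/weekly example from earlier (a sketch; the visits table and column names are hypothetical):

    -- Precompute and materialize one synopsis per day:
    CREATE TABLE daily_user_synopses AS
    SELECT visit_date,
           APPROXIMATE_COUNT_DISTINCT_SYNOPSIS(user_id) AS synopsis
    FROM visits
    GROUP BY visit_date;

    -- Roll the daily synopses up to weekly approximate distinct counts,
    -- without rescanning the detail data and without double counting:
    SELECT DATE_TRUNC('week', visit_date) AS week_start,
           APPROXIMATE_COUNT_DISTINCT_OF_SYNOPSIS(synopsis) AS weekly_users
    FROM daily_user_synopses
    GROUP BY DATE_TRUNC('week', visit_date);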

At the time of this writing, HP Vertica is the first and only SQL database to provide approximate count distinct with user-controllable accuracy and to support the rollup of approximate count distinct aggregates. Using the HP Vertica MPP column store, you can avert the OLAP cliff, and using pre-computed synopses, you can avoid the COUNT DISTINCT pain. For the first time since the dawn of data warehousing, you can compute and incrementally maintain pre-computed aggregates for count distinct with controllable accuracy, and roll these aggregates up in an OLAP framework.

———–

(1) Ralph Kimball, The Data Warehouse Toolkit, p. 190.
