Archive for the ‘Workload management’ Category

What’s New in Dragline (7.1.0): Resource Pool Routing

Resource Pool Routing from Vertica Systems on Vimeo.

HP Vertica 7.1.x introduces new features that allow you to dynamically reroute queries to secondary resource pools.

Use the new CASCADE TO parameter when creating or altering a resource pool to designate a secondary resource pool to which queries can cascade. If a query exceeds the RUNTIMECAP of the pool on which it started running, instead of causing an error, it cascades to the designated secondary pool (typically one with a larger RUNTIMECAP) and continues executing there.
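As a sketch of how this fits together (the pool names, caps, and the user below are illustrative, not from the post), a cascading setup might look like:

```sql
-- Queries start on fast_pool; anything exceeding its 1-minute
-- RUNTIMECAP cascades to long_pool instead of erroring out.
-- The target pool must exist before CASCADE TO can reference it.
CREATE RESOURCE POOL long_pool
    RUNTIMECAP '1 hour';

CREATE RESOURCE POOL fast_pool
    RUNTIMECAP '1 minute'
    CASCADE TO long_pool;

-- Hypothetical user whose queries begin on fast_pool.
ALTER USER report_user RESOURCE POOL fast_pool;
```

Note that report_user needs no grant on long_pool; the cascade happens without explicit permission on the secondary pool.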

Because grant privileges are not considered on secondary pools, you can use this functionality to designate secondary resource pools to which user queries can cascade, without giving users explicit permission to run queries on those pools.

Check out this video to learn more about dynamically rerouting queries.
For more information, see the documentation.

Workload Management Metrics – A Golden Triangle

Modern databases are often required to process many different kinds of workloads, ranging from short tactical queries, to medium-complexity ad-hoc queries, to long-running batch ETL jobs, to extremely complex data-mining jobs (see my previous blog on workload classification for more information). DBAs must ensure that all concurrent workloads, along with their respective Service Level Agreements (SLAs), can co-exist well with each other while maximizing a system’s overall performance.

So what is concurrency? Why should a customer care about concurrency?

Concurrency is a term used to describe having multiple jobs running in an overlapping time interval in a system. It doesn’t necessarily mean that they are, or ever will be, running at the same instant. Concurrency is synonymous with multi-tasking, and it is fundamentally different from parallelism, which is a common point of confusion. Parallelism represents a state in which two or more jobs are running at the exact same instant. The simplest example might be a single-CPU computer. On such a computer you can, in theory, run multiple jobs by context-switching between them. This gives the user the illusion of parallelism, as if multiple jobs were running on the single CPU at the same time. However, if you take a snapshot at any given instant, you’ll find there is one and only one job running. In contrast, actual parallel processing is enabled by multiple working units (e.g., multiple CPUs/cores in a modern database server such as the HP DL380p). Because Vertica is an MPP columnar database and an inherently multi-threaded application, it can take advantage of this multi-CPU/core server architecture to process queries in both a concurrent and a parallel manner.

Most customers do not usually care about concurrency directly. Rather, they have a specific requirement to execute a certain workload in a database governed by a set of throughput and response-time (latency) objectives. Throughput (TP) is the number of queries/jobs that a database can perform in a unit of time, and it is the most commonly used metric for measuring a database’s performance. Response time (or latency) is the sum of queuing time and runtime; as such, it depends on both concurrency (a proxy for overall system load) and query performance (the inverse of runtime).

For a given workload, the three metrics: throughput (TP), concurrency, and performance are related to each other by the simple equation:
Throughput = Concurrency * Performance

Knowing any two of these three metrics, you can derive the third. This relationship can be visually illustrated by the following Workload Management Metrics Triangle:


Concurrency is often NOT a direct customer requirement because it depends on query performance and the throughput SLA. Customer requirements usually take a form like this: “We need to process 10K queries in one hour with an average response time of 1 min or less.” So throughput (TP) is often the metric the customer is interested in, and concurrency is a “derived” metric.

Let’s consider a hypothetical customer POC requirement of processing 1,200 queries in one minute (i.e., 20 queries per second), and assume that there are two competing systems, X and Y.

On System X, executing such a workload would require a concurrency level of 40, with an average query runtime of 2 s.

On System Y, assuming an average query runtime of 100 ms, executing the same workload requires a concurrency level of only 2 (because 20/s = 2 × 1/100 ms).
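The numbers above follow directly from rearranging the triangle relationship as Concurrency = Throughput × Runtime. As a quick sanity check (plain scalar arithmetic, runnable against any SQL engine):

```sql
-- 1,200 queries per minute = 20 queries/sec of required throughput.
-- Concurrency = throughput (queries/sec) * average runtime (sec).
SELECT 1200 / 60.0 AS required_qps,      -- 20
       20 * 2.0    AS concurrency_sys_x, -- 40 (2 s average runtime)
       20 * 0.1    AS concurrency_sys_y; -- 2  (100 ms average runtime)
```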

What does this mean for the customer? Clearly System Y with its superior query processing capability needs far less concurrency to satisfy the SLA than System X and hence it is a better platform (from a purely technical perspective).

To summarize, for a given throughput (TP) SLA, the better the query/job performance, the less concurrency is needed. Less concurrency generally means lower, more efficient resource usage and better overall system performance (since there will be more spare system resources to process other workloads). The goal of any workload performance tuning exercise should never be to increase concurrency. Instead, it should focus on minimizing a query’s resource usage, improving its performance, and applying the lowest possible concurrency level that satisfies a customer’s throughput (TP) and response-time (latency) requirements.

Po Hong is a senior pre-sales engineer in HP Vertica’s Corporate Systems Engineering (CSE) group with a broad range of experience in various relational databases such as Vertica, Neoview, Teradata and Oracle.

A Method for Vertica Workload Classification


Modern analytic databases such as HP Vertica often need to process a myriad of workloads ranging from the simplest primary-key lookup to complex analytical queries that include dozens of large tables and joins between them. Different types of load jobs (such as batch type ETL jobs and near real-time trickle loads) keep the data up-to-date in an enterprise data warehouse (EDW). Therefore, an enterprise class database like HP Vertica must have a robust yet easy-to-use mixed-workload management capability.

The Resource Manager

HP Vertica manages complex workloads using the Resource Manager. With the Resource Manager, you manage resource pools, which are pre-defined subsets of system resources with an associated queue. HP Vertica is preconfigured with a set of built-in resource pools that allocate resources to different request types. The General pool allows for a certain concurrency level based on the RAM and cores in the machines.

HP Vertica provides a sophisticated resource management scheme that allows diverse, concurrent workloads to run efficiently on the database. For basic operations, the built-in general pool is usually sufficient. However, you can customize this pool to handle specific workload requirements.
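For example, you can inspect the built-in pools through the system catalog and then adjust the general pool (the value below is purely illustrative; verify the catalog column names against your Vertica version):

```sql
-- Review the current settings of all resource pools.
SELECT name, memorysize, maxmemorysize, plannedconcurrency
FROM v_catalog.resource_pools;

-- Tune the general pool's target concurrency (illustrative value).
ALTER RESOURCE POOL general PLANNEDCONCURRENCY 8;
```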

In more complex situations, you can also define new resource pools that can be configured to limit memory usage, concurrency, and query priority. You can even optionally restrict each database user to use a specific resource pool to control memory consumption of their requests.
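A sketch of such a setup, with a hypothetical pool and user (all names and limits here are illustrative, not recommendations):

```sql
-- A dedicated pool for ad-hoc users, capped in memory and concurrency.
CREATE RESOURCE POOL adhoc_pool
    MEMORYSIZE '4G'
    MAXMEMORYSIZE '8G'
    PLANNEDCONCURRENCY 4
    MAXCONCURRENCY 8
    PRIORITY 10;

-- Restrict a user to that pool so their queries draw from its budget.
ALTER USER analyst1 RESOURCE POOL adhoc_pool;
```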

Understanding and Classifying Workloads

Before you start thinking about resource pools and workload optimization in HP Vertica, you must first develop a solid understanding of the customer’s workloads and know how to properly classify them.

What should you use as a classification criterion? You could pick apart a complex query, studying its structure in detail, counting and adding up the number of tables, joins, and aggregate functions, and the number and types of derived tables and analytical functions, to come up with some weighted score. However, this kind of approach is extremely tedious and subjective and, as a result, is not a practical option.

What if we use the standalone runtime of a query as the criterion? This method is also problematic because a query that runs in one minute while using up 80% of a system’s resources should obviously not be in the same category as another query that runs in the same amount of time (one minute) but uses < 0.1% of the resources.

In HP Vertica, the best proxy for query complexity is its memory usage. Being a modern MPP columnar database, HP Vertica is rarely, if ever, I/O bound. HP Vertica is also less likely to hit a CPU bottleneck because of the tremendous power and speed of modern multi-core CPUs. Therefore, the most common resource bottleneck in a production HP Vertica cluster running a complex mixed workload is memory. Because of this, the HP Vertica Resource Manager focuses on establishing equitable memory allocation among different workloads or pools. This ensures that no resource pool is starved of memory, even in the worst-case scenario of full system load.

Determining Memory Requirements for a Query

If you can quickly determine how much memory a query requires per node, then you can use that value to classify an HP Vertica query (or any other job). Based on extensive real-world performance tuning experience gained from working with some of HP Vertica’s biggest and most demanding customers, I have found the following classification rules to be very easy to use and effective:

Small: <200 MB
Medium: 200-800 MB
Large: >800 MB
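One way to apply these buckets after the fact is to classify per-query memory from the monitoring tables. The sketch below assumes v_monitor.resource_acquisitions and its memory_inuse_kb column as the source of per-query memory; verify both against your version’s system tables:

```sql
-- Bucket queries into the Small/Medium/Large classes above
-- based on per-node memory acquired (KB -> MB).
SELECT transaction_id,
       pool_name,
       memory_inuse_kb / 1024.0 AS memory_mb,
       CASE
           WHEN memory_inuse_kb <  200 * 1024 THEN 'Small'
           WHEN memory_inuse_kb <= 800 * 1024 THEN 'Medium'
           ELSE 'Large'
       END AS workload_class
FROM v_monitor.resource_acquisitions;
```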

How can you quickly determine a query’s memory requirement? It turns out that HP Vertica has a convenient profiling option (similar to EXPLAIN). You can use the PROFILE statement to get the total memory required for the query (among other things). As a best practice, set up a small, dedicated profiling pool for this purpose, as shown in the following example:
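A setup along these lines should work (the pool name, settings, and table are illustrative; the deliberately tiny MEMORYSIZE forces borrowing from the general pool):

```sql
-- A tiny dedicated pool for profiling: its own MEMORYSIZE is nearly zero,
-- so any real memory a profiled query needs is borrowed from general,
-- which makes the reported figure reflect what the query actually needs.
CREATE RESOURCE POOL profiling_pool
    MEMORYSIZE '1K'
    PRIORITY 80;

SET SESSION RESOURCE_POOL = profiling_pool;

-- PROFILE reports, among other things, the total memory required per node.
PROFILE SELECT count(*) FROM my_big_table;
```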


Creating a specific profiling pool forces your query to borrow from the default general pool for any extra memory that it needs to execute. If you use the general pool instead (a common mistake), then depending on the pool settings, the reserved memory may be more than the query actually needs. Under certain circumstances, HP Vertica could be “fooled” into reporting the reserved memory rather than the actual allocated/used memory, which would skew your results.

For more information on the Resource Manager, see Managing Workloads in the HP Vertica documentation set.

