Workload Analyzer – from past history to future performance!

by CJ Sun

One of the routine yet challenging tasks for a database administrator (DBA) is to ensure that the database is in a “healthy” state – from the viewpoints of smooth operations and optimal performance. In most databases, performance tuning is a huge time sink because of the myriad changes in the application or database deployment that could impact performance:

  • Data/Schema changes: loading more data, adding/dropping tables
  • Workload changes: queries, concurrency pattern changes
  • System configuration parameter changes
  • Cluster/Hardware changes: adding new nodes, storage/memory/network upgrade
  • User/role management: new users, new role/privilege grants

At HP Vertica, our design philosophy has been to make database tuning a science rather than the black art it often is! Easier said than done, of course, but we’ve been working on this from day one! From the outset, we made sure that physical design can be automated using the Database Designer, with which data can be deployed with optimal query performance, a minimized storage footprint, and the desired data redundancy level. With Vertica 6, we’ve taken this approach to a new level with the Workload Analyzer (WLA), which makes health checks and tuning a breeze! By analyzing historical performance-related events and activities, the WLA can locate where performance degradation occurred, analyze the root cause, evaluate the current system status, and thus recommend tuning actions for the DBA to rectify the problem.

The workflow is illustrated as follows:

Can you guess what’s going on? Vertica collects every ounce of data conceivable about itself as it is running, using a component called the Data Collector. Surprise! This is a Big Data problem. Then we use the world’s best analytics platform (i.e., Vertica) to analyze this historical data, combined with the current system status (catalog objects, locks, sessions, data layout, etc.), to identify problems and recommend solutions.
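If you want to peek at what the Data Collector gathers, you can query its system tables directly. The sketch below assumes the `v_monitor.data_collector` system table, which lists collected components in recent Vertica releases; exact table and column names may vary by version.

```sql
-- List the components the Data Collector is tracking, along with the
-- v_monitor table that exposes each component's retained history.
-- (Table and column names may differ across Vertica versions.)
SELECT component, table_name, description
FROM v_monitor.data_collector
ORDER BY component;
```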

Here are just some of the problem types that the WLA can detect from the data collected by the Data Collector:

  • stale data statistics causing a sub-optimal query plan
  • any operation that spilled data to disk
  • outdated physical design compared to current workload
  • incorrect system parameter settings
  • mis-configured system services
  • potential security risks
  • data skew and fragmentation problems
  • excess resource usage

And here are some tuning recommendations you may see:

  • set configuration parameter
  • invoke database designer to design projections and drop unused ones
  • reconfigure background system services
  • adjust table partitioning and projection segmentation
  • enforce password policy on privileged user

The ultimate power to accept the recommendations is, of course, left to the DBA, who can review them and choose whether and when to apply them.

Using the Workload Analyzer

The Workload Analyzer is an online tool that can be run against a live database. It has been integrated into the new Management Console (MC) since v6.0, and it is scheduled to refresh daily by default. The DBA can monitor the tuning recommendations in the MC and take the recommended actions at an appropriate time.

In addition to the MC, the Workload Analyzer can be invoked manually from the command line, where users can specify the two-dimensional scope of the analysis: (1) whether it is system-wide or limited to a specific schema or table, and (2) how far back the WLA should look in the system event history. The command looks like the following:

select analyze_workload('scope', ['since_time']);
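For example, a system-wide analysis and a table-scoped analysis since a fixed timestamp would look like the following. The schema and table names are hypothetical, and the empty-string scope meaning “whole database” reflects the documented ANALYZE_WORKLOAD convention; check your version’s documentation to confirm.

```sql
-- Analyze the entire database (an empty scope string means system-wide).
SELECT ANALYZE_WORKLOAD('');

-- Analyze only the hypothetical table public.sales, considering events
-- recorded in the system history since June 1, 2012.
SELECT ANALYZE_WORKLOAD('public.sales', '2012-06-01 00:00:00');
```

Each call returns the tuning recommendations produced for the requested scope, which the DBA can then review and apply as described above.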

Configuring and Extending the WLA

Best of all, the Workload Analyzer is configurable and extensible! The WLA includes a rule-based framework that converts observations into tuning recommendations. Since rules work independently of each other, they can be individually enabled or disabled at any time to control the WLA’s scope.

WLA rules can be configured using one or more parameters, and a parameter’s value can be changed to tailor each rule to a particular customer. For example, the rule detecting data skew has parameters for minimum table size and data skew factor threshold, which can be adjusted for customers with different data skew tolerances. In the rule monitoring CPU utilization, the parameters for CPU usage threshold and duration can be set lower after a hardware upgrade. These parameter changes can be made on the fly, without requiring a cluster restart or server upgrade.

In Vertica 6.0, the system offers 19 built-in rules. In addition to these system-defined rules, the WLA also supports user-defined rules to extend its tuning capability. Users can define their own rules to meet their business requirements. For example, a customer may create a rule that monitors queries from a particular client and raises a flag when query runtime exceeds a certain threshold. These user-defined rules can be added to or dropped from the system, and enabled or disabled, at will.

There you have it – what better proof point for Vertica than using it to analyze data about itself! The Workload Analyzer offers self-diagnosis capabilities to detect performance degradation from workload history, and intelligently recommends tuning actions to the DBA.  It even detects some types of security vulnerabilities. Do you have WLA rules to share with our community?
