Is Big Data Giving You Grief? Part 3: Bargaining

“Can’t we work with our current technologies (and vendors)? But they cost too much!”
Continuing the five-part series about the stages of big-data grief that organizations experience, this segment focuses on the moment when organizations first confront the reality of the challenges and opportunities presented by big data and start to work their way forward…with bargaining.
Coping with a missed opportunity often brings some introspection. And with that comes the need to explore what-ifs that may provide a way forward. Here are some of the more common what-ifs that organizations explore during this phase.

What if I go to my current vendor? They’re sure to have some great technology. That’ll fix the problem.
This is a perfectly fine path of inquiry to explore. The only issue, as mentioned in my previous post (Part 2: Anger), is that vendors have a tendency to re-label their technology to suit a desirable market, so their offerings may not actually be suited to big data needs. And spending time and effort exploring these technologies to verify this can distract you and keep you from moving forward.

Also, vendors may have a business that relies on high-margin technology or services priced for a time before the big data explosion. So the economics of their technology may suit them, but not the organization in need: your company. For example, if I need to store a petabyte of data in a data warehouse, I might require a cluster of several hundred nodes. If my current vendor charges a hundred thousand US dollars per node, that isn't economically feasible, since I can now find alternatives that are purpose-built for large-scale database processing and are priced at 1/5th or 1/10th of that (or less!).

What if I hire some smart people? They’ll bring skills and insight. They’ll fix the problem.
Like the question above, this is a perfectly reasonable question to ask. But hiring bright people with the perfect skills can be very difficult today – the talent pool for big data is slim, and the hiring for these folks is highly competitive. Furthermore, hiring from outside doesn’t bring in the context of the business. In almost every business, there are nuances to the products, culture, market, and so forth that have a meaningful impact on the business. Hired guns, no matter how skilled, often lack this context.

Also, just bringing in new people doesn’t necessarily mean that your organization’s technology will suit them. Most analytic professionals develop their way of operating (their “game plan”) early in their career, and often prefer a particular set of technologies. It’s likely your new hires will want to introduce technologies they’re familiar with, and that can introduce additional complexity. A classic example is hiring a data science team that has spent the last decade analyzing data with the SAS system. If the organization doesn’t already use SAS, the new team will likely press to introduce it, and that may conflict with how the organization approaches analytics.

What if I download this cool open source software? I hear that stuff is magic, so that’ll fix the problem.
Unlike the first two what-ifs, this one should be approached with great caution! As mentioned in my previous post, open source software has something of a unique tendency to be associated with vague, broad, exaggerated, and often contradictory claims of functionality. This brings to mind a classic bit of satire by the Saturday Night Live crew, first aired in 1976: “New Shimmer is both a floor wax and a dessert topping!” The easy mistake to make here is for the technology team to rush forward, install the new stuff and start to experiment with it to the exclusion of all else. Six months (and several million dollars of staff time) later, the sunk cost in the open source option is so huge that it becomes a fait accompli. Careers would be damaged if the team admitted that it just wasted six months proving that the technology does not do what it claims, so it becomes the default choice.

What if I do what everybody else is doing? Crowds have wisdom, so that’ll fix the problem.
The risk with this thinking is similar to that posed by open source. This often goes hand-in-hand with hiring big data smarts – companies often bring in people from the outside and pay them to do what they’ve done elsewhere. It can definitely accelerate a big data program. But it can also guarantee that the efforts are more of a me-too duplication of something the rest of the industry has already done rather than true innovation. And while this may be suited for some businesses, the big money in big data is in being the first to derive new insights.

These are all perfectly acceptable questions that come up as organizations begin to acknowledge, for the first time, the reality of big data. But this isn’t the end of the discussion by any means. It’s important not to get so enamored with exploring one or two of the options above that you fail to follow through on the “grief” process. The natural next step is to be intimidated by the scale of the challenge, which serves as an important reality check. I’ll cover this in the next segment: depression. So stay tuned!

Next up, Depression: “The problem is too big. How can we possibly tackle it?”

Physical Design Automation in the HP Vertica Analytic Database

Automatic physical database design is a challenging task. Different customers have different requirements and expectations, bounded by their resource constraints. To deal with these challenges in HP Vertica, we adopt a customizable approach that allows users to tailor their designs for specific scenarios and applications. To meet these varied requirements, a physical database design tool must allow its users to trade off query performance and storage footprint for different applications.

In this blog, we present a technical overview of the Database Designer (DBD), a customizable physical design tool that primarily operates under three design policies:

  • Load-optimized: DBD proposes the minimum required set of super projections (containing all columns) that permit fast load and deliver the required fault tolerance.
  • Query-optimized: DBD may propose additional (possibly non-super) projections so that all workload queries are fully optimized.
  • Balanced: DBD proposes projections until additional projections no longer bring sufficient benefit in query optimization.

These options let users trade off query performance against storage footprint while taking update costs into account. The policies indirectly control the number of projections proposed, to achieve the desired balance among query performance, storage, and load constraints; a toy sketch of these stopping rules follows.
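
To make the three policies concrete, here is a minimal Python sketch, not DBD's actual code, that treats each policy as a stopping rule over a list of candidate projections. The candidate benefit/cost numbers and the `min_marginal_benefit` threshold are invented for the example.

```python
def propose_projections(candidates, policy, min_marginal_benefit=5.0):
    """Return the candidates a given design policy would keep.

    candidates: (query_benefit, storage_cost) pairs, as the optimizer's
    cost and benefit model might score them (toy numbers here).
    """
    if policy == "LOAD":
        # Load-optimized: only the mandatory superprojections are created
        # (not modeled here), so no query-specific candidates are kept.
        return []
    if policy == "QUERY":
        # Query-optimized: keep proposing until every workload query is
        # fully optimized, i.e. accept every candidate with any benefit.
        return [c for c in candidates if c[0] > 0]
    # Balanced: stop once the next projection's benefit no longer
    # justifies its storage and load cost.
    kept = []
    for benefit, cost in sorted(candidates, reverse=True):
        if benefit < min_marginal_benefit * cost:
            break
        kept.append((benefit, cost))
    return kept

candidates = [(40.0, 1.0), (25.0, 1.0), (8.0, 2.0), (1.0, 3.0)]
for policy in ("LOAD", "QUERY", "BALANCED"):
    print(policy, propose_projections(candidates, policy))
```
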
In real-world environments, query workloads often evolve over time. A projection that was helpful in the past may not be relevant today and could be wasting space or slowing down loads. This space could instead be reused to create new projections that optimize current workloads. To cater to such workload changes, DBD operates in two different modes:

  • Comprehensive: DBD creates an entirely new physical design that optimizes for the current workload, retaining parts of the existing design that are still beneficial and dropping parts that are not.
  • Incremental: Customers can optionally create additional projections that optimize new queries without disturbing the existing physical design. Incremental mode is appropriate when workloads have not changed significantly. With no input queries, DBD optimizes purely for storage and load purposes. A small sketch contrasting the two modes follows this list.
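
The toy Python sketch below contrasts how the two modes treat an existing design. The `covers` test and the one-projection-per-query proposer are simplistic stand-ins for the optimizer-driven logic, not DBD internals.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Projection:
    table: str
    columns: tuple  # leading sort/column list (toy)

def covers(proj, table, query_cols):
    """Toy stand-in for 'this query is already well optimized'."""
    return proj.table == table and query_cols == proj.columns[:len(query_cols)]

def propose(queries):
    """Toy proposer: one projection per remaining query."""
    return [Projection(t, cols) for t, cols in queries]

def run_design(existing, workload, mode):
    if mode == "incremental":
        # Leave the existing design untouched; only add projections for
        # queries no current projection already optimizes.
        remaining = [q for q in workload
                     if not any(covers(p, *q) for p in existing)]
        return list(existing) + propose(remaining)
    # Comprehensive: design for the whole workload from scratch, then keep
    # only the existing projections that still benefit some query.
    new_design = propose(workload)
    retained = [p for p in existing
                if any(covers(p, *q) for q in workload) and p not in new_design]
    return new_design + retained

workload = [("orders", ("order_date",)), ("orders", ("customer_id", "order_date"))]
existing = [Projection("orders", ("order_date",)), Projection("orders", ("region",))]
print(run_design(existing, workload, "incremental"))
print(run_design(existing, workload, "comprehensive"))
```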

[Figure: DBD comprehensive mode]

The key challenges in projection design are picking appropriate column sets, sort orders, cluster data distributions, and column encodings that optimize query performance while reducing space overhead and allowing faster recovery. DBD proceeds in two major sequential phases.

During the query optimization phase, DBD chooses projection columns, sort orders, and cluster distributions (segmentation) that optimize query performance. It enumerates candidate projections after extracting interesting column subsets from the query workload: predicate, join, group-by, order-by, and aggregate columns. Run-length encoding (RLE) is given special preference for columns appearing early in the sort order, because it benefits both query performance and storage. DBD then invokes the query optimizer for each workload query, presenting the candidate projections as choices. The optimizer evaluates query plans over all candidates, progressively narrowing the set until a stopping condition (based on the design policy) is reached. Query and table filters are applied during this process: queries that are already sufficiently optimized by the chosen projections are dropped, as are tables that have reached the target number of projections set by the design policy. Because DBD uses the optimizer’s cost and benefit model directly, it stays synchronized with the optimizer as the optimizer evolves over time.
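
Below is a simplified, self-contained Python sketch of this narrowing loop. The cost function, the "good enough" threshold, and the candidate enumeration are invented for illustration; in DBD the real query optimizer's cost and benefit model plays this role.

```python
from itertools import permutations

def toy_cost(query_cols, sort_order):
    """Made-up cost model: lower is better; each leading sort column the
    query can use (think RLE-friendly predicate columns) halves the cost."""
    cost = 100.0
    for q, s in zip(query_cols, sort_order):
        if q != s:
            break
        cost /= 2
    return cost

def narrow_candidates(workload, interesting_columns, good_enough=60.0):
    """Enumerate candidate sort orders from interesting column subsets, then
    repeatedly keep the best candidate and drop ('query filter') the queries
    it already optimizes sufficiently."""
    candidates = [p for r in (1, 2)
                  for p in permutations(interesting_columns, r)]
    remaining = list(workload)
    chosen = []
    while remaining and candidates:
        best = min(candidates,
                   key=lambda c: sum(toy_cost(q, c) for q in remaining))
        chosen.append(best)
        candidates.remove(best)
        remaining = [q for q in remaining if toy_cost(q, best) > good_enough]
    return chosen

workload = [("customer_id", "order_date"), ("order_date",), ("region",)]
print(narrow_candidates(workload, ["customer_id", "order_date", "region"]))
```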

[Figure: DBD input parameters]

During the storage optimization phase, DBD finds the best non-RLE column encoding schemes that achieve the smallest storage footprint for the designed projections, via a series of empirical encoding experiments on sample data. In addition, DBD creates the required number of buddy projections, which contain the same data but are distributed differently across the cluster, making the design tolerant of node-down scenarios. When a node is down, buddy projections are used to source the data that resided on that node. In HP Vertica, identical buddy projections (with the same sort orders and column encodings) enable faster recovery by allowing direct copies of their physical storage structures, and DBD automatically produces such designs.
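
The idea of an empirical encoding experiment can be illustrated with a toy Python example, not Vertica's storage formats: encode a column sample several ways, measure the result, and keep the smallest. Here `zlib` stands in for a generic block compressor, and the byte-size accounting is deliberately crude.

```python
import zlib

def rle_size(values):
    """Naive run-length encoding: one (value, count) pair per run."""
    runs = 1 + sum(1 for a, b in zip(values, values[1:]) if a != b)
    return runs * 2 * 8                # assume 8 bytes per value and per count

def plain_size(values):
    return len(values) * 8             # 8 bytes per value, no compression

def block_size(values):
    raw = ",".join(map(str, values)).encode()
    return len(zlib.compress(raw))     # zlib as a stand-in block compressor

def pick_encoding(sample):
    experiments = {"RLE": rle_size, "NONE": plain_size, "BLOCK": block_size}
    sizes = {name: fn(sample) for name, fn in experiments.items()}
    return min(sizes, key=sizes.get), sizes

# A low-cardinality column that is sorted (e.g., early in the sort order)
# compresses extremely well with RLE; a high-cardinality column does not.
region_sorted = ["US"] * 5000 + ["EU"] * 3000 + ["APJ"] * 2000
user_ids = [f"user_{i}" for i in range(10000)]
print(pick_encoding(region_sorted)[0])   # RLE wins here
print(pick_encoding(user_ids)[0])        # block compression wins here
```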

When DBD is invoked with an input set of workload queries, the queries are parsed and useful query metadata is extracted (e.g., the predicate, group-by, order-by, aggregate, and join columns). Design proceeds in iterations. In each iteration, one new projection is proposed for each table under design. Once an iteration is done, queries that are optimized by the newly proposed projections are removed, and the remaining queries serve as input to the next iteration. If a design table has reached its target number of projections (decided by the design policy), it is excluded from future iterations so that no more projections are proposed for it. This process repeats until no design tables or design queries remain to propose projections for.
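
The iteration scheme can be sketched as follows in toy Python, not DBD code; the "propose the widest remaining column set" heuristic and the per-table budget of two projections are placeholders for the real policy-driven logic.

```python
def design_iterations(queries_by_table, projections_per_table=2):
    """queries_by_table maps table -> list of frozensets of query columns."""
    design = {t: [] for t in queries_by_table}
    remaining = {t: list(qs) for t, qs in queries_by_table.items()}
    while any(remaining.values()):
        for table, queries in remaining.items():
            if not queries:
                continue
            if len(design[table]) >= projections_per_table:
                # Table filter: this table has reached its projection budget.
                remaining[table] = []
                continue
            # Toy proposal: a projection on the widest remaining column set.
            proposal = max(queries, key=len)
            design[table].append(proposal)
            # Query filter: drop queries the new projection now covers.
            remaining[table] = [q for q in queries if not q <= proposal]
    return design

workload = {
    "orders": [frozenset({"order_date"}),
               frozenset({"order_date", "region"}),
               frozenset({"customer_id"})],
    "users":  [frozenset({"user_id"})],
}
print(design_iterations(workload))
```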

To form the complete search space for enumerating projections, we identify the following design features in a projection definition:

  • Feature 1: Sort order
  • Feature 2: Segmentation
  • Feature 3: Column encoding schemes
  • Feature 4: Column sets (select columns)

We enumerate choices for features 1 and 2 above and use the optimizer’s cost and benefit model to compare and evaluate them (during the query optimization phase). The choices made for features 3 and 4 typically do not affect query performance significantly, so the winners decided by the cost and benefit model are then extended to full projections by filling out the choices for features 3 and 4, which have a large impact on load performance and storage (during the storage optimization phase).
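
A toy rendering of this two-stage split is shown below in illustrative Python only; the cost numbers, column names, and the "keep the top two" cutoff are invented. Features 1 and 2 are enumerated and ranked first, and only the winners are extended with features 3 and 4.

```python
from itertools import permutations, product

def toy_cost(sort_order, segmentation):
    """Made-up cost model standing in for the optimizer's: reward sorting on
    'order_date' first and segmenting on the high-cardinality 'customer_id'."""
    cost = 100
    if sort_order and sort_order[0] == "order_date":
        cost -= 40
    if segmentation == "hash(customer_id)":
        cost -= 20
    return cost

columns = ["order_date", "customer_id", "region"]
segmentations = ["hash(customer_id)", "unsegmented"]

# Stage 1 (query optimization phase): enumerate and rank combinations of
# feature 1 (sort order) and feature 2 (segmentation).
candidates = list(product(permutations(columns, 2), segmentations))
winners = sorted(candidates, key=lambda c: toy_cost(*c))[:2]

# Stage 2 (storage optimization phase): extend the winners with feature 3
# (encodings) and feature 4 (the full column set); these choices mostly
# affect storage and load rather than the ranking above.
projections = [{"sort_order": so, "segmentation": seg, "columns": columns,
                "encodings": {so[0]: "RLE"}}   # RLE for the leading sort column
               for so, seg in winners]
for p in projections:
    print(p)
```
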
In summary, the HP Vertica Database Designer is a customizable physical database design tool whose configurable input parameters allow users to trade off query performance, storage footprint, fault tolerance, and recovery time to meet their requirements, and optionally to override individual design features.
