In last week’s blog, I listed the top five ways I’ve seen organizations struggle when conducting Vertica evaluations. This week, I’d like to discuss the best practices that drive good Vertica evaluations. What’s a “good” evaluation? For us at HP, it’s one that produces results allowing a company to make a technology decision that maximizes the value of the investment to the business. For our team, it’s not about convincing organizations to buy something they don’t want or need. It’s about tying an investment in our technology to a tangible business outcome.
That said, here are my top five ways to run a great Vertica evaluation.
Best Practice 1: Think Outside the Box
Having spent the early years of my career working with databases from Oracle and Microsoft, I developed a set of core beliefs about how databases worked…and about how they could work. So when I branched out and started working with newer database technologies, my first efforts focused on very conventional data warehousing patterns – rigorous pre-design of a somewhat normalized star or snowflake schema; designing around long loads and longer-running queries; thinking in terms of row-level transactionality; and so forth.
I had to unlearn a lot of these preconceptions to put the newer technologies to effective use. Case in point – for many years the star/snowflake schema has been the go-to design for data marts and warehouses. It turns out the design was really driven by two separate needs:
- The notion of “master data”: dimensions that have been scrubbed.
- The performance characteristics of row-based databases when applied to data warehouse use cases.

Layered on top of those needs is industry dogma: a snowflake schema is “just the way you do it,” because legacy databases just aren’t fast enough for anything else. As a result, many IT shops believe that entire categories of business questions fall into the “I’m sorry Dave, I can’t do that” bucket – they simply wouldn’t run.
For big data analytics, some of these beliefs need to be unlearned. Schema is now a flexible thing that can be defined as needed, even at query runtime. And technologies like Vertica incorporate a number of analytic extensions that make it possible to answer a laundry list of business questions that were previously difficult or impossible.
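To make that concrete, here’s a minimal sketch of schema-on-demand using Vertica flex tables. The table name, file path, and field names (clickstream, /data/clicks.json, session_id) are hypothetical stand-ins for your own data:

```sql
-- Create a flex table with no predefined columns; the schema is
-- derived from the data itself rather than designed up front.
CREATE FLEX TABLE clickstream();

-- Load raw JSON directly; each record's keys become queryable
-- virtual columns. (Path and data shape are illustrative.)
COPY clickstream FROM '/data/clicks.json' PARSER fjsonparser();

-- Query fields as if they were ordinary columns, with no
-- star/snowflake design decided in advance.
SELECT session_id, COUNT(*) AS page_views
FROM clickstream
GROUP BY session_id
ORDER BY page_views DESC
LIMIT 10;
```

The point isn’t the specific syntax; it’s that the schema conversation can happen after the data is loaded, not months before.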
So when preparing to evaluate Vertica, think outside the box. For specific insights on this, read on!
Best Practice 2: Test What You Need to Test
I always recommend that businesses work with us to identify three types of evaluation criteria: those that need to be verified by tests in the evaluation, those that can be verified in other ways such as references, and those that are merely “nice to have”. This approach helps the evaluation in a number of ways. First, it distills out the tests of core importance. Second, it helps the account team spend time on the things that matter most. Finally, it minimizes the time it takes to complete the evaluation.
To return to my first point about thinking outside the box – our team does these evaluations every day, whereas most businesses run one every few years. It’s tough to be good at something you don’t do often. We can help identify good tests to run, as well as best practices for getting everything done smoothly. So don’t hesitate to ask our team for help when it’s time to build your test plan.
Best Practice 3: Test What the Business Cares About
This is the corollary to my point last week about not relying on purely technology-defined success criteria. In my years in the IT trenches, I saw many technology investments fail to deliver the desired business outcome, often because the business was not involved in the technology selection process. The way to fix that is to involve business stakeholders in the evaluation: identify use cases that are timely and relevant (and for which there’s data), so that when the evaluation is done, business stakeholders can be confident they’re going to get what they need – and the IT team knows it can deliver. This is a very powerful way to mitigate a number of risks.
This is another way in which we can help. We’ve got experts who understand analytic use cases and industry particulars, and who can facilitate the discovery of business-relevant evaluation tests. And we’re happy to work with companies to do this.
Best Practice 4: A Pilot is Not Production
When technology teams don’t conduct frequent evaluations, the inclination is to treat an evaluation like a production implementation, with rigid processes and a fixed schedule. Evaluations can be run this way, but it often results in what I call “use case myopia” – the focus narrows to testing the technology against goals of incremental improvement. Sometimes that’s appropriate, but when selecting technology for big data analytics, it can miss the mark. It might well benefit the business to build a database so that analysts get their reports more quickly, but focusing on that test alone can miss the fact that new data technology enables business questions that were previously impossible.
For example, I’ve worked with multiple organizations whose first test in an evaluation was simply whether they could run an existing report more quickly. But after a bit of conversation, we identified analytic use cases the company hadn’t even considered, because they were so used to older technology being too slow or too hard to use. And these use cases were transformational – entirely new capabilities like fraud detection in seconds instead of days, real-time A/B testing of every feature, real-time application optimization, and behavioral targeting. The list goes on.
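As a rough illustration of why “seconds instead of days” becomes plausible, here’s the kind of transaction-velocity check that sits behind simple fraud detection, written as a single analytic query. The transactions table, its columns, and the 60-second threshold are all hypothetical:

```sql
-- Surface cards whose consecutive transactions occur implausibly
-- close together in time: a simple fraud "velocity check".
SELECT card_id, prev_time, txn_time, txn_time - prev_time AS gap
FROM (
    SELECT card_id,
           txn_time,
           -- Time of the previous transaction on the same card
           LAG(txn_time) OVER (PARTITION BY card_id
                               ORDER BY txn_time) AS prev_time
    FROM transactions
) t
WHERE txn_time - prev_time < INTERVAL '60 seconds'
ORDER BY gap;
```

On a row store sized for nightly reporting, a scan like this over every transaction was a batch job; on a columnar analytic engine it can run interactively, which is the difference the use cases above depend on.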
Best Practice 5: Think “Partnership”
I saved my personal favorite for last. Having been on both the purchaser and vendor sides of the table, I’ve seen the different ways businesses can approach technology purchases. But the most consistently successful approach I’ve seen is when an organization partners with strategic vendors. This transforms a technology evaluation from a test of nuts and bolts to a test of whether you can build what you need.
Partnering has some requirements, though. First, make sure the vendor brings enough to the table to warrant a strategic partnership. In the big data space, plenty of vendors want to be strategic but lack either the business or the technology capabilities to really deliver on the promise. Second, a partnership requires a measure of transparency and trust. This allows your vendor to help you in ways you might not have thought possible. For example, we at HP can bring the capabilities of one of the largest, most established technology vendors in the world to the table. In the big data space, that means we can help companies leverage things like deep linking, pattern recognition, breakthrough hardware designs, and much more. And as your partner, we’ll help you sort through it all.
In an evaluation, this means we can help an organization think outside the box – building a good test plan around business-relevant use cases – and make the evaluation a good one.
“Wherever you go, there you are” –Yogi Berra
IT teams very often operate under many constraints – budget, time, people, know-how, and so forth. In that context, an evaluation can represent a lot of work. I’ve watched businesses work through as many as six separate technology evaluations to make a big data platform choice – a considerable investment of time and money. I’ve found that when a company works closely with us during the evaluation process, it goes more quickly and with less investment of their time. So if your organization is about to embark on the big data journey and needs to think about evaluations, we can help. Click here to arrange a conversation with one of our folks and learn more.