Vertica

Archive for the ‘Data Scientists’ Category

Is Big Data Giving You Grief? Part 1: Denial

My father passed away recently, and so I’ve found myself in the midst of a cycle of grief. And, in thinking about good blog topics, I realized that many of the organizations I’ve worked with over the years have gone through something very much like grief as they’ve come to confront big data challenges…and the stages they go through even map pretty cleanly to the five stages of grief! So this series was born.

So it’ll focus on the five stages of grief: denial, anger, bargaining, depression, and acceptance. I’ll explore the ways in which organizations experience each of these phases when confronting the challenges of big data, and also present strategies for coping with these challenges and coming to terms with big data grief.

Part One: Denial

“We don’t have a big data problem. Our Oracle DBA says so.”

Big data is a stealth tsunami – it has snuck up on many businesses and markets worldwide. As a result, they often believe initially that they don't need to change. In other words, they are in denial. In this post, I'll discuss various forms of denial and recommend strategies for moving forward.

Here are the three types of organizational “denial” that we’ve seen most frequently:

They don’t know what they’re missing

Typically, these organizations are aware that there's now much more data available to them, but don't see how it represents an opportunity for their business. Organizations may have listened to vendors, who often focus their message on use cases they want to sell into – which may not be the problem a business needs to solve. But it's also common for an organization to settle into its comfort zone; the business is doing just fine and the competition doesn't seem to be gaining any serious ground. So, the reasoning goes, why change?

The truth is that, as much as those of us who work with big data every day feel there's always a huge opportunity in it, for many organizations it's just not that important yet. They might know that every day, tens of thousands of people tweet about their brand, but they haven't yet recognized the influence these tweets can have on their business. And they may not have any inkling that those tweets can be signals of intent – intent to purchase, intent to churn, etc.

They don’t think it’s worth doing

Organizations in denial may also question whether dealing with big data is worth doing. An organization might already be paying a technology vendor $1 million or more per year for technology…and this to handle just a few terabytes of data. When the team looks at a request to suddenly deal with multiple petabytes of data, it automatically assumes that the costs would be prohibitive and shuts down that line of thinking. This attitude often goes hand-in-hand with the first item…after all, if it’s outrageously expensive to even consider a big data initiative, it seems there’s no point in researching it further since it can’t possibly provide a strong return on investment.

Somebody is in the bunker

While the prior two items pertained largely to management decisions based on return on investment for a big data project, this one is different. Early in my career I learned to program in the SAS analysis platform. As I pursued this at several different firms, I observed that organizations tended to build a team of SAS gurus who held the keys to this somewhat exotic kingdom. Key business data existed only in SAS datasets, which were difficult to access from other systems. Also, programming in SAS required a specialized skillset that only a few possessed. Key business logic – predictive models, data transformations, business metric calculations, and so on – was locked away in a large library of SAS programs. I've spoken with more than one organization that has told me they've got a hundred thousand (or more!) SAS datasets, and several times that many SAS programs, floating around the business…many of which contain key business logic and information. As a result, the SAS team often held a good position in the organizational food chain, and its members were well paid.

One day, folks began to discover that they could download other tools that did very similar things, didn’t care where the data resided, cost a fraction of SAS, and required less exotic programming skills.

Can you see where this is going?

I also spent some years as an Oracle DBA and database architect, and witnessed very similar situations. It's not uncommon – especially given how disruptive big data technologies can be – to see teams go "into the bunker" and be very reluctant to change. Why would they volunteer to give up their position, influence and perks? And so we now find ourselves at the intersection of information technology and a classic change-management challenge.

Moving forward past denial

For an organization, working through the denial stage can seem daunting, but it’s very do-able. Here are some recommendations to get started:

Be prepared to throw out old assumptions. The world is rapidly becoming a much more instrumented place, so there are possibilities today that literally didn’t exist ten years ago. The same will be true in another ten years (or less). This represents both opportunity and competitive threat. Not only might your current competitors leverage data in new ways, but entirely new classes of products may appear quickly that will change everything. For example, consider the sudden emergence in recent years of smartphones, tablets, Facebook, and Uber. In their respective domains, they’ve caused entire industries to churn. So it’s important to cast a broad net in terms of looking for big data projects to deliver value for your business.

Big data means not having to say "no."  I've worked with numerous organizations that have had to maintain a high-cost infrastructure for so long that they're used to saying "no" when they're approached for a new project. And they add an exclamation point ("no!") when they're approached with a big data project. Newer technologies and delivery models offer the chance to put much more in the hands of users. So, while saying no may sometimes be inevitable, it no longer needs to be an automatic response. When it comes to an organization's IT culture, be ready to challenge the common wisdom about team organization, project evaluation and service delivery. The old models – the IT service desk, the dedicated analyst/BI team, organizing a technology team into technology-centric silos such as the DBA team, and so on – may no longer be a fit.

Big data is in the eye of the beholder. Just because vendors love to talk about Twitter (and I'm guilty of that too) doesn't mean that Twitter is relevant to your business. Maybe you manufacture a hundred pieces of very complex equipment every year and sell them to a handful of very large companies. In this case, it's probably best not to worry overmuch about tweets. You might have a very different big data problem. For instance, you may need to evaluate data from your last generation of devices, which had ten sensors each generating ten rows of data per second. And you know that the next generation will have ten thousand sensors generating a hundred rows per second each – so very soon it'll be necessary to cope with around ten thousand times as much data (or more – the new sensors may provide a lot more information than the older ones). And if the device goes awry, your customer might lose a $100 million manufacturing run. So don't dismiss the possibilities in big data just because your vendor doesn't talk about your business. Push vendors to help you solve your problems; the ones worth partnering with will work with you to do this.
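To make the scale of that jump concrete, here is the back-of-the-envelope arithmetic behind the "ten thousand times" figure, using the hypothetical device counts above:

```python
# Quick arithmetic for the hypothetical devices described above.
current_rate = 10 * 10       # 10 sensors x 10 rows/sec each   = 100 rows/sec per device
next_rate = 10000 * 100      # 10,000 sensors x 100 rows/sec each = 1,000,000 rows/sec per device

print(next_rate // current_rate)  # 10000 -- a ten-thousand-fold increase, before
                                  # accounting for wider rows from richer sensors
```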

Data expertise is a good thing. Just because you might not need ten Oracle DBAs in the new world doesn't mean that you should lay eight of them off. The folks who have been working intimately with the data in the bunker often have very deep knowledge of it. They frequently can retool and, in fact, find themselves having a lot more fun delivering insights and helping the business. It may be important to re-think the role of the "data gurus" in the new world. In fact, I'd contend that this is where you may find some of your best data scientists.

While organizational denial is a tough place to be when it comes to big data, it happens often. And many are able to move past it. Sometimes voluntarily, and sometimes not – as I’ll describe in the next installment.  So stay tuned!

Next up:

Anger: “We missed our numbers last quarter because we have a big data problem! What the heck are we going to do about it?”

The Automagic Pixie

The “De-mythification” Series

Part 4: The Automagic Pixie

Au∙to∙mag∙ic: (Of a usually complicated technical or computer process) done, operating, or happening in a way that is hidden from or not understood by the user, and in that sense, apparently “magical”

[Source: Dictionary.com]

In previous installments of this series, I de-bunked some of the more common myths around big data analytics. In this final installment, I’ll address one of the most pervasive and costly myths: that there exists an easy button that organizations can press to automagically solve their big data problems. I’ll provide some insights as to how this myth has come about, and recommend strategies for dealing with the real challenges inherent in big data analytics.

Like the single-solution elf, this easy-button idea is born of the desire of many vendors to simplify their message. The big data marketplace is new enough that all the distinct types of needs haven't yet become entirely clear – which makes it tough to formulate a targeted message. Remember in the late 1990s when various web vendors were all selling "e-commerce" or "narrowcasting" or "recontextualization"? Today most people are clear on the utility of the first two, while the third is recognized for what it was at the time – unhelpful marketing fluff. I worked with a few of these firms, and watched as the businesses tried to position product for a need which hadn't yet been very well defined by the marketplace. The typical response by the business was to keep it simple – just push the easy button and our technology will do it for you.

I was at my second startup in 2001 (an e-commerce provider using what we would refer to today as a SaaS model) when I encountered the unfortunate aftermath of this approach. I sat down at my desk on the first day of the job, and was promptly approached by the VP of Engineering, who informed me that our largest customer was about to cancel its contract – we’d been trying to upgrade the customer for weeks, during which time their e-commerce system was down. Although they’d informed the customer that the upgrade was a push-button process, it wasn’t. In fact, at the time I started there, the team was starting to believe that an upgrade would be impossible and that they should propose re-implementing the customer from scratch. By any standard, this would be a fail.

Over the next 72 hours, I migrated the customer’s data and got them up and running.   It was a Pyrrhic victory at best – the customer cancelled anyhow, and the startup went out of business a few months later.

The moral of the story? No, it's not to keep serious data geeks on staff to do automagical migrations. The lesson here is that when it comes to data-driven applications – including analytics – the "too good to be true" easy button almost always is. Today, the big data marketplace is full of great-sounding messages such as "up and running in minutes" or "data scientist in a box".

“Push a button and deploy a big data infrastructure in minutes to grind through that ten petabytes of data sitting on your SAN!”

“Automatically derive predictive models that used to take the data science team weeks in mere seconds! (…and then fire the expensive data scientists)!”

Don’t these sound great?

The truth is, as usual, more nuanced. One key point I like to make with organizations is that big data analytics, like most technology practices, involves different tasks. And those tasks generally require different tools. To illustrate this for business stakeholders, I usually resort to the metaphor of building a house. We don't build a house with just a hammer, or just a screwdriver. In fact, it requires a variety of tools – each of which is suited to a different task. A brad nailer for trim. A circular saw for cutting. A framing hammer for framing. And so on. And in the world of engineering, a house is a relatively simple thing to construct. A big data infrastructure is considerably more complex. So it's reasonable to assume that an organization building this infrastructure would need a rich set of tools and technologies to meet the different needs.

Now that we've clarified this, we can get to the question behind the question. When someone asks me, "Why can't we have an easy button to build and deploy analytics?", what they're really asking is, "How can I use technological advances to build and deploy analytics faster, better and cheaper?"

Ahh, now that’s an actionable question!

In the information technology industry, we’ve been blessed (some would argue cursed) by the nature of computing. For decades now we’ve been able to count on continually increasing capacity and efficiency. So while processors continue to grow more powerful, they also consume less power. As the power requirements for a given unit of processing become low enough, it is suddenly possible to design computing devices which run on “ambient” energy from light, heat, motion, etc. This has opened up a very broad set of possibilities to instrument the world in ways never before seen – resulting in dramatic growth of machine-readable data. This data explosion has led to continued opportunity and innovation across the big data marketplace. Imagine if each year, a homebuilder could purchase a saw which could cut twice as much wood with a battery half the size. What would that mean for the homebuilder? How about the vendor of the saw? That’s roughly analogous to what we all face in big data.

And while we won’t find one “easy button”, it’s very likely that we can find a tool for a given analytic task which is significantly better than one that was built in the past. A database that operates well at petabyte scale, with performance characteristics that make it practical to use. A distributed filesystem whose economics make it a useful place to store virtually unlimited amounts of data until you need it. An engine capable of extracting machine-readable structured information from media. And so on. Once my colleagues and I have debunked the myth of the automagic pixie, we can have a productive conversation to identify the tools and technologies that map cleanly to the needs of an organization and can offer meaningful improvements in their analytical capability.

I hope readers have found this series useful. In my years in this space, I’ve learned that in order to move forward with effective technology selection, sometimes we have to begin by taking a step backward and undoing misconceptions. And there are plenty! So stay tuned.

The Single-Solution Elf

The “De-mythification” Series

Part 3: The Single-Solution Elf

In this part of the de-mythification series, I’ll address another common misconception in the big data marketplace: that there exists a single piece of technology that will solve all big data problems. Whereas the first two entries in this series focused on market needs, this will focus more on the vendor side of things in terms of how big data has driven technology development, and give some practical guidance on how an organization can better align their needs with their technology purchases.

Big Data is the Tail Wagging the Vendor

Big data is in the process of flipping certain technology markets upside-down. Ten or so years ago, vendors of databases, ETL, data analysis tools, and the like could all focus on building technology for discrete needs with an evolutionary eye – incremental advances and improvements. That's all changed very quickly as the world has become much more instrumented. Smartphones are a great example. Pre-smartphone, the data stream from an individual throughout the day might consist of a handful of call-detail records and a few phone status records – maybe a few kilobytes of data at most. The smartphone changed that. Today a smartphone user may generate megabytes, or even gigabytes, of data in a single day from the phone, the network, the OS, email, applications, etc. Multiply that across a variety of devices, instruments, applications and systems, and the result is a slice of what we commonly refer to as "Big Data".

Most of the commentary on big data has focused on the impact to organizations. But vendors have been, in many cases, blindsided. With technology designed for orders of magnitude less data, sales teams accustomed to competing against a short list of well-established competitors, marketing messages focused on clearly identified use cases, and product pricing and packaging oriented towards a mature, slow-growth market, many have struggled to adapt and keep up.

Vendors have responded with updated product taglines (and product packaging) which often read like this:

“End-to-end package for big data storage, acquisition and analysis”

“A single platform for all your big data needs”

“Store and analyze everything”

Don’t these sound great?

But simple messages like these mask the reality that there are distinct activities that comprise big data analytics, that these activities come with different technology requirements, and that much of today's technology was born in a very different time – so the likelihood of there being a single tool that does everything well is quite low. Let's start with the analytic lifecycle, depicted in the figure below, and discuss the ways it has driven the state of the technology.

[Figure: the analytic lifecycle]

This depicts the various phases of an analytic lifecycle, from the creation and acquisition of data, through exploration and structuring, to analysis and modeling, to putting the information to work. These phases often require very different things from technology. Take the example of acquiring and storing large volumes of data with varying structure. Batch performance is often important here, as is cost to scale. Somewhat less important is ease of use – load jobs tend to change at a lower rate than user queries, especially when the data is in a document-like format (e.g. JSON). By contrast, the development of a predictive model requires a highly interactive technology which combines high performance with a rich analytic toolkit. So batch use will be minimal, while ease of use is key.
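As a rough illustration of that contrast, here is a minimal sketch – assuming the open-source vertica_python driver plus hypothetical connection details, table names, and columns – of what the two access patterns tend to look like in practice:

```python
# A minimal sketch contrasting the two lifecycle phases discussed above.
# The vertica_python driver, connection details, and the raw_events table
# are assumptions; substitute whatever client and schema you actually use.
import vertica_python

CONN = dict(host="analytics-db", port=5433, user="dbadmin",
            password="example", database="analytics")

def nightly_batch_load():
    """Acquisition/storage: throughput and cost-to-scale dominate; the job
    changes rarely, so some tuning and operational complexity is acceptable."""
    conn = vertica_python.connect(**CONN)
    try:
        cur = conn.cursor()
        # Exact COPY/parser options vary by Vertica version and table type;
        # the point is that this statement is tuned once and run on a schedule.
        cur.execute("COPY raw_events FROM '/data/events.json' PARSER fjsonparser()")
        conn.commit()
    finally:
        conn.close()

def ad_hoc_exploration():
    """Analysis/modeling: interactivity and ease of use dominate; queries like
    this one are written and rewritten constantly by analysts."""
    conn = vertica_python.connect(**CONN)
    try:
        cur = conn.cursor()
        cur.execute("""
            SELECT device_id, COUNT(*) AS events, AVG(reading) AS avg_reading
            FROM raw_events
            GROUP BY device_id
            ORDER BY events DESC
            LIMIT 20
        """)
        return cur.fetchall()
    finally:
        conn.close()
```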

Historically, many of the technologies required for big data analytics were built as stand-alone technologies: a database, a data mining tool, an ETL tool, etc. Because of this lineage, the time and effort required to re-engineer these tools to work effectively together as a single technology, with orders of magnitude more data, can be significant.

Regardless of how a vendor packages technology, organizations must ask themselves one question: what do we really need to solve our business problems? When it comes time to start identifying a technology portfolio to address big data challenges, I always recommend that customers start by putting things in terms of what they really need. This is surprisingly uncommon, because many organizations have grown accustomed to vendor messaging which is focused on what the vendor wants to sell as opposed to what the customer needs to buy. It may seem like a subtle distinction, but it can make all the difference between a successful project and a very expensive set of technology sitting on the shelf unused.

I recommend engaging in a thoughtful dialog with vendors to assess not only what you need today, but to explore things you might find helpful which you haven’t thought of yet. A good vendor will help you in this process. As part of this exercise, it’s important to avoid getting hung up on the notion that there’s one single piece of technology that will solve all your problems: the single solution elf.

Once my colleagues and I dispel the single solution myth, we can then have a meaningful dialog with an organization and focus on the real goal: finding the best way to solve their problems with a technology portfolio which is sustainable and agile.

I've been asked more than once, "Why can't there be a single solution? Things would be so much easier that way." That's a great question, which I'll address in my next blog post as I discuss some common-sense perspectives on what technology should – and shouldn't – do for you.

Next up: The Automagic Pixie

 

The Real-Time Unicorn

The “De-mythification” Series

Part 1: The Real-Time Unicorn

This is part one of what I call the "de-mythification" series, wherein I'll aim to clear up some of the more widespread myths in the big data marketplace.

In the first of this multi-part series, I’ll address one of the most common myths my colleagues and I have to confront in the Big Data marketplace today: the notion of “real-time” data visibility. Whether it’s real-time analytics or real-time data, the same misconception always seems to come up. So I figured I’d address this, define what “real-time” really means, and provide readers some advice on how to approach this topic in a productive way.

First of all, let’s establish the theoretical definition of “real-time” data visibility. In the purest interpretation, it means that as some data is generated – say, a row of log data in an Apache web server – the data would immediately be queryable. What does that imply? Well, we’d have to parse the row into something readable by a query engine – so some program would have to ingest the row, parse the row, characterize it in terms of metadata, and understand enough about the data in that row to determine a decent machine-level plan for querying it. Now since all our systems are limited by that pesky “speed of light” thing, we can’t move data any faster than that – considerably slower in fact. So even if we only need to move the data through the internal wires of the same computer where the data is generated, it would take measurable time to get the row ready for query. And let’s not forget the time required for the CPU to actually perform the operations on the data. It may be nanoseconds, milliseconds, or longer, but in any event it’s a non-zero amount of time.

So “real-time” never, ever means real-time, despite marketing myths to the contrary.

There are two exceptions to this – slowing things down inside the machine, or technology which queries a stream of data as it flows by (typically called complex event processing, or CEP). With regard to the first option: let's say we wanted to make data queryable as soon as the row is generated. We could make the flow from the logger to the query engine part of one synchronous process, so the weblog row wouldn't actually be written until it was also processed and ready for query. Those of you who administer web and application infrastructures are probably getting gray hair just reading this, as you can imagine the performance impact on a web application. So, in the real world, this is a non-starter. The other option – CEP – is exotic and typically very expensive, and while it will tell you what's happening at the current moment, it's not designed to build analytic models. It's largely used to put those models to work in a real-time application such as currency arbitrage.

So, given all this, what’s a good working definition of “real-time” in the world of big data analytics?

Most organizations define it this way: “As fast as it can be done providing a correct answer and not torpedoing the rest of the infrastructure or the technology budget”.

Once everyone gets comfortable with that definition, then we can discuss the real goal: reducing the time to useful visibility of the data to an optimal minimum. This might mean a few seconds, it might mean a few minutes, or it might mean hours or longer. In fact, for years now I've found that once we get the IT department comfortable with the practical definition of real-time, it invariably turns out that the CEO/CMO/CFO/etc. really meant exactly that when they said they needed real-time visibility into the data. So, in other words, when the CEO said "real-time", she meant "within fifteen minutes" or something along those lines.
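To make the practical definition concrete, here is a toy micro-batch loader in plain Python; the landing directory and the 60-second cadence are arbitrary assumptions, and the load step is stubbed out, but it shows why "real-time" in practice means "visible within roughly one batch interval plus load time" rather than instantaneously:

```python
# A toy micro-batch loader: new log files become "visible" once per interval.
import glob
import json
import time

BATCH_INTERVAL_SECS = 60                  # the agreed "real-time" target, not zero
LANDING_DIR = "/var/log/app/incoming"     # hypothetical drop zone for new log files

def load_batch(paths):
    """Parse newly arrived files; a real pipeline would bulk-load them here."""
    rows = []
    for path in paths:
        with open(path) as f:
            rows.extend(json.loads(line) for line in f if line.strip())
    return len(rows)

seen = set()
while True:
    started = time.time()
    new_files = [p for p in glob.glob(LANDING_DIR + "/*.json") if p not in seen]
    if new_files:
        loaded = load_batch(new_files)
        seen.update(new_files)
        # Worst-case time-to-visibility is roughly one interval plus this load time.
        print("%d rows visible after %.1fs of processing" % (loaded, time.time() - started))
    time.sleep(max(0, BATCH_INTERVAL_SECS - (time.time() - started)))
```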

This then becomes a realistic goal we can work towards in terms of engineering product, field deployment, customer production work, etc. Ironically, chasing the real-time unicorn can actually impede efforts to develop high-speed data flows by forcing the team to pursue unrealistic targets for which, at the end of the day, there is no quantifiable business value.

So when organizations say they need "real-time" visibility into the data, I recommend not walking away from that conversation until you fully understand just what that phrase means, and then using that understanding as the guiding principle in technology selection and design.

I hope readers found this helpful! In the remaining segments of this series, I’ll address other areas of confusion in the Big Data marketplace. So stay tuned!

Next up: The Unstructured Leprechaun

 

A Sneak Peek of HP Vertica Pulse, Harnessing the Volume and Velocity of Social Media Data

The Web provides us with myriad ways to express opinion and interest – from social sites such as Twitter and Facebook, to blogs and community forums, to product reviews on ecommerce sites, and many more. As a result, customers have significant influence in shaping the perceptions of brands and products. The challenge for those who manage these brands and products is to understand, in an automated way and in as close to real time as possible, what people are talking about and how they feel about those topics, so that they can better understand and respond to their community.

HP Vertica Pulse – now in private beta – is HP's scalable, in-database answer to the problem of harnessing the volume and velocity of social media data. Executed through a single line of SQL, HP Vertica Pulse enables you to extract "attributes," or the aspects of a brand, product, service, or event that your users and customers are talking about, and to assign a sentiment score to each of these attributes, so that you can track your community's perception of the aspects of your business it cares about. Understand whether your customers are looking for a particular feature, whether they react to a facet of your product or service as you anticipated, or whether they are suddenly encountering problems. See how their perceptions change over time.
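As an illustration of the kind of question this output makes easy to ask, here is a small sketch of a trend query over Pulse results. The pulse_results table and its columns are hypothetical stand-ins for whatever schema your deployment writes to – not the actual Pulse API – and the example assumes the open-source vertica_python driver:

```python
# Track average sentiment per attribute per day from a hypothetical results table.
import vertica_python

conn = vertica_python.connect(host="vertica-host", port=5433, user="dbadmin",
                              password="example", database="analytics")
try:
    cur = conn.cursor()
    cur.execute("""
        SELECT attribute,
               DATE_TRUNC('day', captured_at) AS day,
               AVG(sentiment_score)           AS avg_sentiment,
               COUNT(*)                       AS mentions
        FROM pulse_results
        GROUP BY attribute, DATE_TRUNC('day', captured_at)
        ORDER BY day, mentions DESC
    """)
    for attribute, day, avg_sentiment, mentions in cur.fetchall():
        print(day, attribute, round(avg_sentiment, 2), mentions)
finally:
    conn.close()
```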

We used HP Vertica Pulse at HP Discover to capture attendee sentiment. We captured tweets related to HP Discover, each tweeter's screen name, and the timestamp. We ran the tweets through HP Vertica Pulse and visualized the results in Tableau. The whole effort was up and running in just a few hours. A few highlights:



 

  • We watched the interest of the crowd change over time. With each successive keynote, we saw new initiatives and people appear in the word cloud. Meg Whitman received a lot of press, as did HP’s New Style of IT and HAVEn. So did Kevin Bacon, who participated in Meg’s keynote.
  • HP Vertica Pulse surfaced news in the data analytics world. In a trial run using tweets related to data analytics, we saw "Walmart" – not a common name in the world of analytics – appear in the word cloud. A quick drilldown in Tableau revealed that Walmart had recently purchased the data analytics company Inkiru.
  • We captured the most prolific Tweeters. We could expand on this data to include influencer scores and reach out to the most influential posters.
  • We captured sentiment on all of the tweets. In a friendly forum like HP Discover, we expect the majority of the tweets to be neutral or positive in nature.

HP Vertica Pulse is the result of an ongoing collaboration with HP Labs and is built on Labs' Live Customer Intelligence (LCI) technology. The Labs team has already had great success with LCI, as evidenced in part by their Awards Meter application. HP Vertica has also built a social media connector that loads tweets of interest directly from Twitter into the HP Vertica Analytics Platform, allowing you to start understanding your community right away.

HP Vertica Pulse is yet another example of how you can use the HP Vertica Analytics Platform to bring analytics to the data, speeding analysis time and saving the effort of transferring your data to an external system. Because HP Vertica Pulse is in-database, you can store your text data and the associated sentiment alongside sales, demographic, and other business data. HP Vertica Pulse will help you to harness the voice of your customer so that you can better serve them.

We are now accepting applications to trial Pulse in our private beta. To participate, contact me, Geeta Aggarwal, at gaggarwal@vertica.com.

BDOC – Big Data on Campus

I had a great time speaking at the MIT Sloan Sports Analytics Conference yesterday, and perhaps the most gratifying part of doing a panel in front of a packed house was how many students were in the audience. Having been a bit of a ‘stats geek’ during my college years, I can assure you that such an event, even with a sports theme, would never have drawn such an audience back then.

It was even more gratifying to read this weekend's Wall Street Journal, with the article "Data Crunchers Now The Cool Kids on Campus." Clearly this is a terrific time to be studying – and teaching – statistics and Big Data. To quote the article:

The explosive growth in data available to businesses and researchers has brought a surge in demand for people able to interpret and apply the vast new swaths of information, from the analysis of high-resolution medical images to improving the results of Internet search engines.

Schools have rushed to keep pace, offering college-level courses to high-school students, while colleges are teaching intro stats in packed lecture halls and expanding statistics departments when the budget allows.

 

Of course, Big Data training is not just for college students, and at HP Vertica we are working on programs to train both professionals and students, in conjunction with our colleagues in the HP ExpertOne program. We invite those interested in learning more to contact us – including educational institutions interested in adding Big Data training to their curriculum.

Recapping the HP Vertica Boston Meet-Up

This week, some of our Boston-area HP Vertica users joined our team at the HP Vertica office in Cambridge, MA. Over some drinks and great food, we had the honor of hearing from HP Vertica power user Michal Klos, followed by Andrew Rollins of Localytics. Both Michal and Andrew offered valuable insight into how their businesses use the HP Vertica Analytics Platform on Amazon Web Services (AWS).

Michal runs HP Vertica in the cloud, hosted on AWS. The highlight of Michal's presentation was a live demonstration of a Python script using Fabric (a Python library and command-line tool) and Boto (a Python interface to AWS) that executed code to quickly set up and deploy a Vertica cluster in AWS. Launching HP Vertica Analytics Platform nodes in AWS eliminates the need to acquire hardware and allows for an extremely speedy deployment. Michal was very complimentary of the enhancements to our AWS capabilities in the recently released version 6.1 of the HP Vertica software.

[Photo: Michal Klos demonstration]
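For readers who weren't there, here is a minimal sketch in the spirit of that demo (not a reproduction of it), using boto 2.x and Fabric 1.x. The AMI ID, key pair, security group, instance type, and install command are all placeholder assumptions, and Michal's actual script was considerably more complete:

```python
# Launch EC2 instances with boto, then run install steps on them with Fabric.
import os
import time
import boto.ec2
from fabric.api import env, sudo, execute

REGION, AMI_ID, KEY_NAME = "us-east-1", "ami-xxxxxxxx", "my-keypair"
NODE_COUNT = 3

def launch_nodes():
    """Ask EC2 for the instances that will become the Vertica cluster."""
    conn = boto.ec2.connect_to_region(REGION)   # uses your configured AWS credentials
    reservation = conn.run_instances(AMI_ID, min_count=NODE_COUNT,
                                     max_count=NODE_COUNT, key_name=KEY_NAME,
                                     instance_type="m1.large",
                                     security_groups=["vertica-cluster"])
    instances = reservation.instances
    while any(i.update() != "running" for i in instances):
        time.sleep(10)                           # wait for every node to come up
    return [i.public_dns_name for i in instances]

def install_vertica():
    """Runs on each host via Fabric; the RPM path is a placeholder."""
    sudo("rpm -Uvh /tmp/vertica-x.y.z.rpm")      # assumes the package was staged already

if __name__ == "__main__":
    env.user = "ec2-user"
    env.key_filename = os.path.expanduser("~/.ssh/my-keypair.pem")
    env.hosts = launch_nodes()
    execute(install_vertica)
    # From here, Vertica's own install/admin tooling would create the database
    # across env.hosts; the details vary by version, so they're omitted here.
```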

Following Michal's demonstration, Andrew took the floor to talk about how Localytics uses the HP Vertica Analytics Platform to analyze user behavior in mobile and tablet apps. With HP Vertica, Localytics gives its customers access to granular detail in real time. Localytics caters to its clients by launching a dedicated node in the cloud for each customer. With the HP Vertica Analytics Platform powering their data in AWS, customers can start gathering insights almost immediately.

Our engineers then took the stage to serve as a panel for questions from the floor. It’s not often that our engineers get the opportunity to answer questions from customers and interested BI professionals in an open forum discussion. Everyone took full advantage of the occasion, asking a number of questions about upcoming features and current use cases.  In addition, our engineers were able to highlight a number of new features from the 6.1 release that the users in attendance may not have been taking advantage of yet.

Meet-ups serve as a fantastic catalyst for users and future users to interact with each other, share best practices, and have valuable conversations with different members of the HP Vertica team. We reiterate our thanks to Michal and Andrew, and to all those who joined us at our offices – thank you for an excellent meet-up!

Don't miss another valuable opportunity to hear from fellow HP Vertica user Chris Wegrzyn of the Democratic National Committee in our January 24th webinar at 1 PM EST. We will discuss how the HP Vertica Analytics Platform revolutionized the way a presidential campaign is run. Register now!
