Vertica Blog

OpenText Vertica 23.3 – the Smarter Data Lakehouse

Posted July 31, 2023 by Paige Roberts, Vertica Open Source Relations Manager


Unveiling the Most Recent Version of the Vertica Grafana Data Source Plugin

With over 380K downloads, the Vertica Grafana Data Source plugin just got an upgrade! The plugin was migrated from the deprecated Grafana toolkit to align with Grafana's new Create-Plugin tool. This accelerates plugin development with a modern build setup that requires no additional configuration. Additionally, the Vertica SQL Go driver received an...

Setting Session Authorization to Troubleshoot

There are scenarios in which a dbadmin may want to run queries as another user to troubleshoot or test. You can use SET SESSION AUTHORIZATION to impersonate another user and run queries. Let's understand this with an example. Here we create a user named test and a resource pool named userpool, and make this the default...
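
A minimal sketch of that setup, using the user test and pool userpool from the excerpt; the password and the queried table are illustrative, not from the original post:

-- As dbadmin: create the test user and resource pool, and make the pool the user's default
CREATE USER test IDENTIFIED BY 'Test_pwd1';
CREATE RESOURCE POOL userpool MEMORYSIZE '1G';
ALTER USER test RESOURCE POOL userpool;

-- Impersonate the user, run the troubleshooting query, then switch back
SET SESSION AUTHORIZATION test;
SELECT CURRENT_USER();                 -- returns test
SELECT COUNT(*) FROM public.orders;    -- runs under test's privileges and pool
SET SESSION AUTHORIZATION dbadmin;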

Adding Nodes to Fault Groups

This blog post was authored by Sarah Lemaire. Suppose you are adding new cluster nodes to your Vertica database. You want to add those nodes to particular fault groups without having to restart your Vertica database. The following steps use the example of a database with five racks and fault groups, with 9 Vertica nodes...
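
A rough sketch of the non-disruptive approach, assuming illustrative fault group and node names:

-- Create a fault group for a rack and add an existing node to it,
-- all without restarting the database
CREATE FAULT GROUP rack_1;
ALTER FAULT GROUP rack_1 ADD NODE v_vmart_node0010;

-- Check the current fault group membership
SELECT * FROM v_catalog.fault_groups;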

Analytic Queries in Vertica

This blog post was authored by Soniya Shah. Analytic functions handle complex analysis and reporting tasks. Here are some example use cases for Vertica analytic functions: • Rank the longest standing customers in a particular state • Calculate the moving average of retail volume over a specific time • Find the highest score among all...
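
For instance, the first two use cases above map onto window functions along these lines (table and column names are illustrative):

-- Rank the longest-standing customers within each state
SELECT customer_name, customer_state,
       RANK() OVER (PARTITION BY customer_state ORDER BY customer_since) AS tenure_rank
FROM customer_dimension;

-- Moving average of retail volume over the current and three preceding days
SELECT sale_date, retail_volume,
       AVG(retail_volume) OVER (ORDER BY sale_date
                                ROWS BETWEEN 3 PRECEDING AND CURRENT ROW) AS moving_avg
FROM daily_sales;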

Integrating with Apache Spark

The Vertica Connector for Apache Spark is a fast parallel connector that allows you to use Apache Spark for pre-processing data. Apache Spark is an open-source, general-purpose cluster-computing framework. The Spark framework is based on Resilient Distributed Datasets (RDDs), which are logical collections of data partitioned across machines. For more information, see the Apache...


Working with Joins

This blog post was authored by Soniya Shah. Vertica supports a variety of join types. This post discusses the following joins: • Inner joins • Left, right, and full outer joins • Natural joins • Cross joins In Vertica, we refer to the tables participating in the join as left or right. The left table...
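
For example, an inner join, a left outer join, and a cross join between a left table (orders or stores) and a right table (customers or products) might look like this; the table and column names are illustrative:

-- Inner join: only orders that have a matching customer
SELECT o.order_id, c.customer_name
FROM orders o
INNER JOIN customers c ON o.customer_id = c.customer_id;

-- Left outer join: all orders, with NULLs where no customer matches
SELECT o.order_id, c.customer_name
FROM orders o
LEFT OUTER JOIN customers c ON o.customer_id = c.customer_id;

-- Cross join: every combination of rows from the two tables
SELECT s.store_id, p.product_id
FROM stores s
CROSS JOIN products p;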

Time Series Analytics

This blog post was authored by Soniya Shah. Time series analytics is a powerful Vertica tool that evaluates the values of a given set of variables over time and groups those values into a window based on a time interval for analysis and aggregation. Time series analytics is useful when you want to analyze discrete...
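
A small sketch of the TIMESERIES clause this analysis builds on, with illustrative table and column names:

-- Slice tick data into one-minute windows per symbol and interpolate
-- the last known bid in each slice
SELECT slice_time, symbol,
       TS_LAST_VALUE(bid IGNORE NULLS) AS last_bid
FROM tick_store
TIMESERIES slice_time AS '1 minute' OVER (PARTITION BY symbol ORDER BY ts);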

Building a Secure Vertica Environment

This blog post was authored by Soniya Shah. Vertica has a client-server architecture, in which applications on the client access the Vertica cluster through drivers such as ODBC, JDBC, OLE DB, and ADO.NET. This post discusses secure client-to-server communications, authenticating access to Vertica, and administrator access, pairing each method with its Vertica options, starting with Authentication: validate user credentials...
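
As one hedged example on the authentication side, a client authentication record can be created and granted to a user; the record name, method, and network range below are illustrative:

-- Require hash (password) authentication for clients on a given network
CREATE AUTHENTICATION v_hash_auth METHOD 'hash' HOST '10.0.0.0/8';
GRANT AUTHENTICATION v_hash_auth TO app_user;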

Vertica Presentation at the db tech showcase Tokyo 2017

On September 5th, Kanako Obayashi from the Vertica Best Practices team presented at the db tech showcase Tokyo 2017, one of the largest database events in Japan. Kanako's presentation covered Vertica advanced analytics, including machine learning and geospatial analysis. More than 50 people attended. Kanako began her session by noting that more...

What’s New in Vertica 8.1.1: Flex Parser Updates

This blog post was authored by Soniya Shah. Vertica 8.1.1 introduces an optional parameter to the FCSVPARSER function. FCSVPARSER specifies how to load data into Vertica from a CSV data source. The new parameter allows you to define or override the column names used for data loaded from a CSV data source....
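
A sketch of how the parser is typically invoked; the flex table, file path, and column names are illustrative, and header_names is assumed to be the parameter in question:

-- Load a headerless CSV file into a flex table, supplying the column names
CREATE FLEX TABLE csv_staging();
COPY csv_staging FROM '/data/sales.csv'
PARSER fcsvparser(header='false', header_names='sale_date,store_id,amount');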

Compute Engine or Analytical Data Mart for Distributed Machine Learning? Vertica Explains How to Choose

This blog post was authored by Sarah Lemaire. On Tuesday, August 22, The Boston Vertica User Group hosted a late-summer Meetup to talk to attendees about compute engines and data mart applications, and the advantages and disadvantages of both solutions. In the cozy rustic-industrial atmosphere of Commonwealth Market and Restaurant, decorated with recycled wood pallets,...

MERGE Statement with Filters

This blog post was authored by Soniya Shah. Vertica 8.1 introduced new functionality for the MERGE statement. In this post, we discuss the new ability to apply filter conditions to the INSERT and UPDATE clauses of a MERGE statement. The MERGE operation allows users to join the target table to another table, a...
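
A hedged sketch of a filtered MERGE of that kind, with illustrative table and column names:

MERGE INTO target_sales t
USING daily_sales d ON t.sale_id = d.sale_id
WHEN MATCHED AND d.amount > 0 THEN
    UPDATE SET amount = d.amount, sale_date = d.sale_date
WHEN NOT MATCHED AND d.amount > 0 THEN
    INSERT (sale_id, amount, sale_date) VALUES (d.sale_id, d.amount, d.sale_date);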

Do you need a database or a query engine?

This blog post was authored by Steve Sarsfield. As we travel through life, we are constantly assessing our choices. Should you eat that salad, or opt for the burger? Should you marry your partner or seek greener pastures elsewhere? All of us do these assessments in both our personal and business lives. However, it may...

What’s New in Vertica 8.1.1: Catalog Memory Improvements

This blog post was authored by Soniya Shah. In Vertica 8.1.1, we introduce a performance improvement that reduces catalog memory usage for users with a large number of NULL values in tables. The improvement affects all string data types, including BINARY, VARBINARY, LONG VARBINARY, CHAR, VARCHAR and LONG VARCHAR. The improvement scales with the data...