Updated 1/11/2022

Important: Before Upgrading

Vertica for SQL on Hadoop Storage Limit

Vertica for SQL on Hadoop is licensed per node, on an unlimited number of central processing units or CPUs and an unlimited number of users. Vertica for SQL on Hadoop is for deployment on Hadoop nodes. Includes 1 TB of Vertica ROS formatted data on HDFS.

Starting with Vertica 10.1, if you are using Vertica for SQL on Hadoop, you cannot load more than 1 TB of ROS data into your Vertica database. If you were unaware of this limitation and currently have more than 1 TB of ROS data in your database, before upgrading to Vertica 10.1 or higher please make the necessary adjustments to stay below the 1 TB limit, or contact our sales team to explore other licensing options.

Identify and Remove Unsupported Projections

With version 9.2, Vertica has removed support for pre-join and range segmentation projections. If a table's only superprojection is one of these projection types, the projection is regarded as unsafe.Before upgrading to 9.2 or higher, you are strongly urged to check your database for unsupported projections. If the upgrade encounters these projections, it is liable to fail. You must then revert to the previous installation.

Solution: Run the pre-upgrade script

Vertica has provided a pre-upgrade script that examines your current database and sends to standard output its analysis and recommendations. The script identifies and lists any unsupported projections. If the script finds projection buddies with different SELECT and ORDER BY clauses, it generates a deploy script. Run this script to remedy projections so they comply with system K-safety.

https://www.vertica.com/pre-upgrade-script/

For more information, see Fixing Unsafe Buddy Projections in the Vertica documentation.

Contents

About Vertica Release Notes

Vertica 11.0.2

Vertica 11.0.1

Vertica 11.0.0

About Vertica Release Notes

The Release Notes contain the latest information on new features, changes, fixes, and known issues in Vertica 11.0.x.

They also contain information about issues resolved in:

Downloading Major and Minor Releases, and Service Packs

The Premium Edition of Vertica is available for download at https://support.microfocus.com/downloads/swgrp.html.

The Community Edition of Vertica is available for download at https://www.vertica.com/download/vertica/community-edition.

The documentation is available at https://www.vertica.com/docs/11.0.x/HTML/index.htm.

Downloading Hotfixes

Hotfixes are available to Premium Edition customers only. Contact Vertica support for information on downloading hotfixes.

New in Vertica 11.0.2

See the Vertica 11.0.1 New Features Guide for a complete list of additions and changes introduced in this release.

Client Drivers

JDBC: Configurable Network, Node, and Socket Timeouts

To supplement the existing timeout parameter LoginTimeout, which sets the time limit for the client to log in to the database, you can now configure the following timeouts with the following JDBC connection properties:

Configuration

Extended Support for User-Level Parameters

Most session-level configuration parameters can also now be set on individual users.

Containers and Kubernetes

Managed Kubernetes Services Support

Vertica supports managed Kubernetes services on Amazon Elastic Kubernetes Service (EKS).

Expanded Communal Storage Support

Previously, Vertica only supported communal storage on Amazon Web Services (AWS) S3 or S3-compatible storage locations. Vertica expanded its support to include the following communal storage locations:

VerticaDB Operator Available on OperatorHub.io

The VerticaDB operator is available on OperatorHub.io, a registry that provides access to Kubernetes operators to simplify adoption.

Custom Volume Mount for Vertica Server Container

Mount a custom volume in the Vertica server container filesystem for tasks that require that data persists between pod life cycles.

vlogger Image

Vertica offers the vlogger image to deploy a sidecar utility container for logging. The vlogger sends logs from vertica.log to stdout on the host node for log aggregation.

For implementation details, see Creating a Custom Resource.

Hybrid Kubernetes Cluster

If you have an Eon database, you can run hosts separate from the database and within Kubernetes. This architecture is useful in scenarios where you want to:

Database Management

New Database Parameter: AccessPolicyManagementSuperuserOnly

The new AccessPolicyManagementSuperuserOnly parameter controls whether the superuser has exclusive privileges to manage access policies.

Data Types

Complex Types in Native Tables

Vertica supports using heterogeneous complex types in native (ROS) tables. Both native and external tables can have columns of complex types, including nested complex types up to the maximum nesting depth of 100. You can use the ROW (struct), ARRAY, and SET types in native and external tables. Maps in data can be represented as ARRAY of ROW. Selected parsers support loading data with complex types, currently Parquet and ORC.

Eon Mode

Tuple Mover Mergeout Can Run on Any Node

Previously, the Tuple Mover only ran on nodes that were the primary subscriber to a shard. Now, the primary subscriber plans mergeout operations. It chooses a node (which can be itself, or another primary or secondary node) to execute the mergeout operation. Any primary or secondary node that has its TM resource pool's MAXMEMORYSIZE and MEMORYSIZE settings set to greater than 0 is eligible to run the Tuple Mover. If you do not want a secondary subcluster to execute mergeout operations, change these resource pool settings at the subcluster level.

Letting any node in the database run the Tuple Mover helps spread the overhead of performing mergeout.

Read-Only Mode on Loss of Quorum or Primary Shard Coverage

Previously, an Eon database would react to the loss of quorum or primary shard coverage by shutting down to prevent potential data corruption. Now, the database does not shut down in response to either of these events. Instead, it goes into read-only mode. In this mode, queries that to not affect the global catalog can still run on subclusters that have shard coverage. DDL and DML statements that would change the global catalog fail with an error message.

Read-only mode lets users still query the database while you resolve the issues with the failed nodes. Once you restart the down nodes and they recover, Vertica reforms the cluster. Then nodes resubscribe to shards and the database returns to its normal operation.

Geospatial Analytics

New ST_GeomFromGeoJSON Function

The new ST_GeomFromGeoJSON function accepts a GeoJSON geometry representation object and returns a GEOMETRY object. Load GeoJSON data into Vertica to execute spatial functions, such as determining the distance between two geometries.

Loading Data

Ability to Access S3 Buckets Configured as Requester Pays

You can now configure Vertica to access data on S3 buckets that are configured as Requester Pays buckets—that is, buckets where the requester pays the cost of accessing data on the bucket. In this case, the bucket owner only pays to store the data. This feature can be enabled for individual buckets by setting the S3BucketConfig parameter, or for all buckets by setting the S3RequesterPays parameter.

Machine Learning

Support for Random-forest PMML Models

You can now import random-forest PMML models. In PMML, a random-forest model is an ensemble of TreeModels called a multiple model.

Specify Multiple Percentiles When Calling APPROXIMATE_PERCENTILE

You can now specify multiple percentiles in the form of an array of floats when calling the APPROXIMATE_PERCENTILE function.

Stored Procedures

Support for GEOMETRY and GEOGRAPHY

PL/vSQL now supports the data types GEOMETRY and GEOGRAPHY.

ALTER PROCEDURE

You can now alter existing procedures with ALTER PROCEDURE.

Deprecated and Removed in Vertica 11.0.2

Vertica retires product functionality in two phases:

Deprecated

No Vertica functionality was deprecated in this release.

Removed

The following functionality was removed:

FunctionalityNotes
Spark connector V1 
System table WOS_CONTAINER_STORAGE 
WOS-related columns in system tables  

Resolved in 11.0.2-2

Updated 01/11/2022

Issue KeyComponentDescription
VER-80157 UI - Management Console

Security vulnerability CVE-2021-45105 was found in earlier versions of the Apache log4j library used by the MC. The library has been updated to resolve this issue.

VER-80168Kafka IntegrationSecurity vulnerabilities CVE-2021-45105 and CVE-2021-44832 were found in earlier versions of the log4j library used by the Vertica/Apache Kafka integration. The library has been updated to resolve this issue.
VER-80242 UI - Management ConsoleSecurity vulnerability CVE-2021-44832 was found in earlier versions of the Apache log4j library used by the MC. The library has been updated to resolve this issue.
VER-80252Execution EngineCertain queries with ill-formed predicates that contained a large number of fields caused Vertica to run out of memory while trying to return an error message. This issue has been resolved: now, Vertica can build and return error messages of the type operator-not-found, regardless of length.

Resolved in 11.0.2

Updated 12/23/2021

This release addresses the issues below.

Issue KeyComponentDescription
VER-50526Data Removal - Delete, Purge, Partitioning, EON

Eon mode now supports optimized delete and merge for unsegmented projections.

VER-77427Client Drivers - JDBCPreviously, JDBC would check $JAVA_HOME/security for the truststore and the default password was an empty string. Now, if a truststore path (trustStore) is not specified, JDBC first checks the original location ($JAVA_HOME/security/*), then checks the default JVM truststore at $JAVA_HOME/lib/security/*. If a truststore password (trustStorePassword) is not specified, it uses the password "changeit."
VER-78338Execution EngineSystem table query_consumption calculated peak_memory_kb by summing peak memory consumption from all nodes. This issue has been resolved: query_consumption now calculates peak_memory_kb by finding the maximum of peak memory that was requested among all nodes.
VER-78671Execution EngineQuery predicates were reordered at runtime in a way that caused queries to fail if they compared numeric values (integer or float) to non-numeric strings such as 'abc’. This issue has been resolved.
VER-78833OptimizerThe optimizer occasionally chose a regular projection even when it was directed to use available LAP/Top-K projections. This issue has been resolved.
VER-79001 Admin Tools

Calling "command_host -c start" or "restart_node" to start an EncryptSpreadComm-enabled database now gives a more useful error message, instructing users to call start_db instead.

In general, to start or stop an EncryptSpreadComm-enabled database, users should call start_db or stop_db.

VER-79057 Backup/DR The hash function changed in Vertica releases <= 11.0. As a result, backup manifest file digests that are generated when backing up earlier releases (< 11) do not match the new snapshot manifest file for the same object. This issue has ben resolved: in cases like this, Vertica now ignores digest mismatches.
VER-79085 Admin Tools, Database Designer Core Admintools rebalance_data with ksafe=1 returned an error when database K-safety was set to 0. This issue has been resolved.
VER-79135 Client Drivers - JDBC

Previously, changing the session timezone on the server did not affect the results returned when querying a timestamp. You could access the timestamp by calling getAdjustedTimestamp() on the result set object, but this was not functioning properly in binary transfer mode.

This issue has been resolved. In text and binary transfer modes, calling getAdjustedTimestamp() on a result set object returned by the server that contains a timestamp now properly returns the timestamp based on the timezone session parameter.

VER-79141 Tuple Mover

Users can now control configuration parameter MaxDVROSPerContainer knob and set it to any value >1. The new formula is:

max(2, (MaxDVROSPerContainer+1)/2)

A two-level strata helps avoid excessive DVMergeouts: DVs with fewer deleted rows than (TBL.nrows/MaxDVROSPerStratum), where maxDVROSPerStratun is max(2, (MaxDVROSPerContainer+1)/2), are placed at stratum 0; if the number of these DVs exceeds MaxDVROSPerStratum, they are merged together. As before, larger DVs at stratum 1 are not merged.

VER-79146EON, ResourceManager Subcluster level resource pool creation now supports specifying cpuaffinityset and cpuaffinitymode.
VER-79180 UI - Management Console Previously, the feedback feature had an issue uploading feedback information. The default behavior was changed, and now the feature sends information by email.
VER-79236 Tuple Mover In previous releases, the DVMergeout plan read storage files during plan compilation stage to determine the offset and length of each column in the storage files. Accessing this metadata incurred S3 API calls. This issue has been resolved: the tuple mover now calculates column offsets and length of each without accessing the storage files.
VER-79259 FlexTable In rare cases, MapToString used to return null when the underlying VMap was not null. This was an issue with displaying VMaps but no data loss happened. The issue was fixed.
VER-79349 Data Export, S3 When connecting to S3 using https, the S3EXPORT function failed to authenticate the server cerficiate unless the aws_ca_bundle parameter was set. This issue has been resolved: the system CA bundle is now used by default.
VER-79350OptimizerJoins with an interpolated predicate did not recognize equivalency between date/time data types that specified and omitted precision--for example data types TIME(6) and TIME. The issue has been resolved.
VER-79383 Execution Engine On rare occasions, a Vertica database crashed when query plans with filter operators were canceled. This issue has been resolved.
VER-79475 HadoopThe hdfs_cluster_config_check function failed on SSL enabled hdfs clusters, now this is fixed.
VER-79510Admin ToolsPreviously, Vertica used the same catalog path base and data path base for nodes and admintools. Now, admintools uses the data base path as set in admintools.conf, as distinct from the catalog path base.
VER-79513 Optimizer Under certain conditions, flaws were found in the logic that checked for duplicate keys. This issue has been resolved.
VER-79548 Scrutinize In release 11.0, the Vertica agent used a new method to transfer files through different nodes, but this change prevented scrutinize from sending zip files. This issue has been resolved.
VER-79562 DepotIf you called copy_partitions_to_table on two tables with the same pinning policies, and the target table had no projection data, the Vertica database server crashed. This issue has been resolved.
VER-79742 Security Changes to LDAPLinkURL and LDAPLinkSearchBase orphaned LDAPLinked users. This issue has been resolved: users are no longer orphaned if the new URL or search base contains the same set of users, and previously orphaned users are un-orphaned.
VER-79820 Backup/DR During backup, vbr sends queries to vsql and reads the results from each query. If the vsql output was very long and comprised multiple lines that ended with newline characters, vbr mistakenly interpreted the newline character to indicate the end of output from the current query and stopped reading. As a result, when vbr sent the next query to vsql, it read the unread output from the earlier query as belonging to the current query. This issue has been resolved: vbr now correctly detects the end of each query result and reads it accordingly.

Known issues in 11.0.2

Updated 12/23/21

Issue KeyComponentDescription
VER-78711Execution Engine

Vertica does not fully support direct comparisons between VARCHAR and NUMERIC data types. For example, given the following table definition and data:

=> CREATE TABLE r1 (a varchar(12), b numeric(8,2));

=> SELECT * FROM r1;

a | b

-----+------

1 | 1.00

2 | 2.00

abc | 3.00

(3 rows)

Vertica views the following query as a VARCHAR=NUMERIC comparison, and fails when it encounters a string such as ‘abc’.

=> SELECT * FROM r1 WHERE a = b;

ERROR 2826: Could not convert "abc" from column r1.a to a float8

Workaround: Coerce the predicate operand to a float:

=> SELECT * FROM r1 WHERE a = b::float;

a | b

---+------

1 | 1.00

2 | 2.00

(2 rows)

New in Vertica 11.0.1

See the Vertica 11.0.1 New Features Guide for a complete list of additions and changes introduced in this release.

admintools

start_db Now Accepts a List of Hosts to Start

The start_db tool now accepts a list of hosts to start the database using the -s option. This option lets you start just the primary nodes in an Eon Mode database. See Starting the Database.

Client Drivers

ADO.NET and ODBC: Hostname-based Load Balancing

You can now load balance workloads from the ADO.NET and ODBC client drivers by resolving a single hostname to multiple IP addresses. When you specify the hostname for a connection, the client driver automatically randomizes the IP address to which it resolves. For details, see Load Balancing in ADO.NET and Load Balancing in ODBC.

Containers and Kubernetes

Admission Controller is Included with the Operator Helm Chart

The admission controller is included with the VerticaDB operator Helm chart. For details, see Containerized Vertica.

Custom Certificates for Admission Controller Webhook

Previously, you had to use cert-manager to generate and manage TLS certificates for the admission controller webhook. Now, you have the option to use custom certificates to encrypt webhook communications.

When you install the VerticaDB operator Helm chart, you can provide a PEM-encoded certificate authority (CA) bundle, and TLS key and certificates.

For an overview of TLS and the admission controller webhook, see Containerized Vertica on Kubernetes. For implementation details, see Installing the Helm Charts and Helm Chart Parameters.

Mount Custom TLS Certificates in Container

You can mount multiple custom TLS certificates to secure internal and external communications for your custom resource. Each certificate is mounted in the Vertica server container filesystem. The operator replaces updated certificates and reschedules pods when you add or a delete an existing certificate.

For an overview, see Containerized Vertica on Kubernetes. For implementation details, see Creating a Custom Resource.

Authenticate to Any S3-Compatible Storage Location

Previously, TLS restrictions on the custom resource only permitted HTTPS connections to Amazon Web Services (AWS) S3 communal storage. Now, you can mount a self-signed certificate authority (CA) bundle to authenticate any S3-compatible connection to your custom resource.

For an overview, see Containerized Vertica on Kubernetes. For implementation details, see Creating a Custom Resource.

Custom Volume Mount for Sidecar

To persist data between life cycles for a sidecar utility container, you can create custom volumes and mount them in the sidecar container filesystem. You can use any Kubernetes volume type for the custom volume.

For an overview, see Containerized Vertica on Kubernetes. For implementation details, see Creating a Custom Resource.

Upgrade Vertica Automatically

The operator automates Vertica server version upgrades for custom resources. For details, see Upgrading Vertica on Kubernetes.

Database Management

SHARED Storage Locations Are Shared on All Nodes

A SHARED storage location is now shared by all database nodes. Previously, it was possible to use a shared location on specific nodes only. If a database being upgraded contains shared storage locations that are not present on all nodes, those locations will be added to the missing nodes as part of the upgrade.

SHARED DATA and SHARED DATA,TEMP storage locations have been deprecated.

For details, see Managing Storage Locations.

Eon Mode

Queuing Pinned Objects for Download

By default, pinned objects are queued for download from communal storage as needed to execute a query or DML operation. SET_DEPOT_PIN_POLICY functions now support a new Boolean argument that, if set to true, overrides this behavior and immediately queues newly pinned objects for download.

For details, see Pinning Depot Objects.

Start Database With Just Primary Nodes

You can now start an Eon Mode database using just the primary nodes. This feature lets you start a subset of the hosts in the database. The admintools start_db tool now accepts a list of primary nodes using the -s option. See Start Just the Primary Nodes in an Eon Mode Database.

Installation and Upgrade

Upgrading Does Not Replace UDxs

Upgrading no longer re-installs all UDxes. If a UDx was already installed, the upgrade script uses the IF NOT EXISTS directive to avoid recreating it. This means that if you originally installed a UDx as unfenced (the default is fenced), this status does not change after upgrade. For details, see Loading UDxs.

Licensing and Auditing

Community Edition License Changes

As part of the user agreement for the Community Edition (CE) license, you agree to allow Vertica to collect limited, non-personally identifying information about your use of Vertica. See Community Edition License.

Loading Data

CSV Parsers: ENCLOSED and ESCAPE May Have the Same Values

The FCSVPARSER and default parser (DELIMITED (Parser)) have parameters to specify escape and enclose characters. Previously, these values could not be the same. This restriction has been lifted.

Machine Learning

PMML Updates

Extended Model Support

You can now import TreeModels. TreeModels are powerful models used for both classification and regression problems.

New PMML Subtags

Vertica now supports the following tags and subtags:

For a full list of supported subtags, see PMML Features and Attributes.

Profiling

Extended Support for LABEL Hint

The following statements now support the LABEL hint:

Security and Authentication

TLS CONFIGURATION: data_channel

You can now configure internode encryption with the data_channel TLS CONFIGURATION. This setting was originally controlled by the now-deprecated security parameter DataSSLParams.

If DataSSLParams is properly configured, its configuration is automatically ported to data_channel on upgrade.

Minimum TLS Version: 1.2

Clients must support TLS 1.2 or greater to connect to Vertica if TLS is enabled.

Non-Superuser Access Policy Management

Non-superusers can now manage access policies on and copy access policies from tables that they own. Previously, only superusers could manage access policies.

SQL Functions and Statements

CONTAINS and ARRAY_FIND Support Complex Types

The CONTAINS and ARRAY_FIND functions now support elements of complex types. The element being searched for must have the same schema, which might require explicit casts.

Queuing Pinned Objects for Download

By default, pinned objects are queued for download from communal storage as needed to execute a query or DML operation. SET_DEPOT_PIN_POLICY functions now support a new Boolean argument that, if set to true, overrides this behavior and immediately queues newly pinned objects for download.

Syntax updates apply to the following functions:

For details, see Pinning Depot Objects.

Extended Support for LABEL Hint

The following statements now support the LABEL hint:

New Function MATCH_COLUMNS

MATCH_COLUMNS returns all columns in queried tables that match the specified pattern. MATCH_COLUMNS is specified as an element in a SELECT list.

CREATE FUNCTION Statements Support IF NOT EXISTS

You can use IF NOT EXISTS with the CREATE FUNCTION Statements to avoid replacing an existing function. You might want to use this in upgrade or test scripts that require, and therefore load, UDxs. By using IF NOT EXISTS, you preserve the original definition, which might have changed the fenced status from the default.

Stored Procedures: Support for OR REPLACE and IF NOT EXISTS

CREATE PROCEDURE (Stored) now supports OR REPLACE and IF NOT EXISTS.

External Procedures: Support for IF NOT EXISTS

CREATE PROCEDURE (External) now supports IF NOT EXISTS.

Stored Procedures

PL/vSQL: EXCEPTION_CONTEXT for GET STACKED DIAGNOSTICS

You can now retrieve information about the call stack an exception with EXCEPTION_CONTEXT. For details, see Errors and Diagnostics.

RUNTIMECAP

You can now set the maximum runtime of stored procedures with the session parameter RUNTIMECAP.

Supported Platforms

FIPS mode on SUSE Linux Enterprise Server 15 SP2

Vertica has been tested in FIPS mode using OpenSSL 1.1.1.g on SUSE Linux Enterprise Server 15 SP2. Vertica supports FIPS mode on FIPS-compliant operating system versions that are equal to or higher than the tested version.

For additional details, see FIPS 140-2 Supported Platforms.

System Tables

Non-superuser Access Policy Management

Non-superusers can now view access policies on the tables that they own with ACCESS_POLICY. Non-superusers must first be granted SELECT privileges on the table to use this feature.

User-Defined Extensions

Library Management Does Not Require Superuser Privilege

Creating, altering, or dropping a library previously required the Superuser privilege. Superusers can now grant the UDXDEVELOPER role to developers or maintainers of UDx libraries. Users with this privilege can create libraries and alter or drop libraries that they created.

GRANT (Library) now supports an explicit DROP grant, to allow users with the UDXDEVELOPER role to drop libraries that they did not create.

For more information, see Loading UDxs.

Deprecated and Removed in 11.0.1

Vertica retires product functionality in two phases:

Deprecated

The following Vertica functionality was deprecated and will be retired in future versions:

FunctionalityNotes

Shared DATA and DATA,TEMP storage locations

See CREATE LOCATION.
Admission Controller Webhook imageThe admission controller webhook is included in the VerticaDB operator image. For details, see VerticaDB Operator.
Admission Controller Helm chartThe admission controller is included in the verticadb-operator Helm chart. For details, see Installing the Helm Charts.

Removed

The following functionality was removed:

FunctionalityNotes
admin_tools -t config_nodes 
DBD meta-function DESIGNER_SET_ANALYZE_CORRELATIONS_MODE 

Resolved in 11.0.1-3

Updated 12/16/2021

Issue KeyComponentDescription
VER-79881 Tuple Mover

Users can now control configuration parameter MaxDVROSPerContainer knob and set it to any value >1. The new formula is:

max(2, (MaxDVROSPerContainer+1)/2)

A two-level strata helps avoid excessive DVMergeouts: DVs with fewer deleted rows than (TBL.nrows/MaxDVROSPerStratum), where maxDVROSPerStratum is max(2, (MaxDVROSPerContainer+1)/2), are placed at stratum 0; if the number of these DVs exceeds MaxDVROSPerStratum, they are merged together. As before, larger DVs at stratum 1 are not merged.

VER-79886ScrutinizeIn release 11.0, the Vertica agent used a new method to transfer files through different nodes, but this change prevented scrutinize from sending zip files. This issue has been resolved.
VER-79910SecurityChanges to LDAPLinkURL and LDAPLinkSearchBase orphaned LDAPLinked users. This issue has been resolved: users are no longer orphaned if the new URL or search base contains the same set of users, and previously orphaned users are un-orphaned.
VER-80080Kafka IntegrationThis release updates the Kafka integration’s Log4j library. The updated library addresses the CVE-2021-44228 and CVE-2021-45046 security vulnerabilities found in earlier versions.
VER-80081UI - Management ConsoleThis release updates the Management Console’s Log4j library. The updated library addresses the CVE-2021-44228 and CVE-2021-45046 security vulnerabilities found in earlier versions.

Resolved in 11.0.1-2

Updated 11/24/2021

This release addresses the issues below.

Issue KeyComponentDescription
VER-79278UI - Management ConsoleThe KPI dropdown on the Manage page did not open correctly, and the Activity Page displayed as unauthorized. This issue has been resolved.
VER-79587Backup/DRThe hash function changed in Vertica releases <= 11.0. As a result, backup manifest file digests that are generated when backing up earlier releases (< 11) do not match the new snapshot manifest file for the same object. This issue has ben resolved: in cases like this, Vertica now ignores digest mismatches.
VER-79621OptimizerThe optimizer occasionally chose a regular projection even when it was directed to use available LAP/Top-K projections. This issue has been resolved.
VER-79667FlexTableIn rare cases, MapToString returned null when the underlying VMap was not null. This was an issue with displaying VMaps but no data loss happened. This issue has been resolved.
VER-79743HadoopPreviously, hdfs_cluster_config_check would fail on SSL enabled hdfs clusters. This issue has been resolved.
VER-79780Data Removal - Delete, Purge, PartitioningEon mode now supports optimized delete and merge for unsegmented projections.

Resolved in 11.0.1-1

Updated 11/10/2021

This release addresses the issues below.

Issue KeyComponentDescription
VER-79417Tuple MoverIn previous releases, the DVMergeout plan read storage files during plan compilation stage to determine the offset and length of each column in the storage files. Accessing this metadata incurred S3 API calls. This issue has been resolved: the tuple mover now calculates column offsets and length of each without accessing the storage files.
VER-79516Execution EngineOn rare occasions, a Vertica database crashed when query plans with filter operators were canceled. This issue has been resolved.
VER-79525Data Export, S3When connecting to S3 using https, the S3EXPORT function failed to authenticate the server cerficiate unless the aws_ca_bundle parameter was set. This issue has been resolved: the system CA bundle is now used by default.
VER-79535Admin Tools, Database Designer Front-End/UIAdmintools rebalance_data with ksafe=1 returned an error when database K-safety was set to 0. This issue has been resolved.
VER-79540Admin Tools, Database Designer CoreAdmintools rebalance_data with ksafe=1 returned an error when database K-safety was set to 0. This issue has been resolved.
VER-79581Admin ToolsPreviously, Vertica used the same catalog path base and data path base for nodes and admintools. Now, admintools uses the data base path as set in admintools.conf, as distinct from the catalog path base.

Resolved in 11.0.1

Updated 10/25/2021

This release addresses the issues below.

Issue KeyComponentDescription
VER-67295 Security LDAPLink now properly handles nested groups.
VER-68210 UI - Management Console MC failed to import a database that contained a table named "users" in the public schema. This issue has been resolved.
VER-75768 Data Removal - Delete, Purge, Partitioning Users could remove a storage location that contained temporary table data with drop_location . This issue has been resolved: if a storage location contains temporary data, drop_location now returns an error and hint.
VER-75794 Data Removal - Delete, Purge, Partitioning Calling meta-function CALENDAR_HIERARCHY_DAY with the active_years and active_month arguments set to 0 can result in considerable I/O. When yoiu do so now, the function returns with a warning.
VER-77175 Execution Engine Some sessions that used User Defined Load code (or external table queries backed by User Defined Loads) accumulated memory usage through the life of the session. The memory was only used on non-initiator nodes, and was released after the session ended. This issue has been resolved.
VER-77583 Installation: Server RPM/Deb Several Python scripts in the /vertica/scripts directory used the old Python 2 print command, which prevented them from working with Python 3. They have been updated to the new syntax.
VER-77688 Documentation The script to backup and restore grants on UDx libraries shown in the documentation topic "Backing Up and Restoring Grants" contained several bugs. It has been corrected.
VER-77771 Documentation Documentation now informs users not to embed spaces before or after comma delimiters of the ‑‑restore-objects list; otherwise, vbr interprets the space as part of the object name.
VER-77818 Client Drivers - ADO If you canceled a query and immediately called DataReader.close() without reading all rows that the server sent before the cancel took effect, the necessary clean-up work was not completed, and an exception was incorrectly propagated to the application. This issue has been resolved.
VER-77999 Kafka Integration When loading a certain amount of small messages, filters such as KafkaInsertDelimiters can return a VIAssert error. This issue has been resolved.
VER-78001 Backup/DR When performing a full restore, AWS configuration parameters such as AWSEndpoint,AWSAuth and AWSEnableHttps were overwritten on the restored database with their backup settings. This issue has been resolved: the restore leaves the parameter values unchanged.
VER-78313 Optimizer The configuration parameter MaxParsedQuerySizeMB knob is set in MB (as documented). The optimizer can require very large amounts of memory for a given query, much of it consumed by internal objects that the parser creates when it converts the query into a query tree for optimization.

Other issues were identified as contributing to excessive memory consumption, and these have been addressed, including freeing memory allocated to query tree objects when they are no longer in use.
VER-78391 Admin Tools When installing a package, AdminTools compares the md5sum of the package file and the md5sum indicated in the isinstalled.sql script. If the two md5sum values do not match, AdminTools shows an error message and the installation fails. The error message has been improved to show the mismatch as the reason for failure.
VER-78470 DDL - Table When a table was dropped, the drop operation was not always logged. This issue has been resolved.
VER-78555 Database Designer Core Database Designer generated files with wrong permissions. This issue has been resolved.
VER-78576 ComplexTypes An error in constant folding would sometimes incorrectly fold IN expressions with string value lists. This issue has been fixed.
VER-78577 UI - Management Console Management Console returned errors when configuring email gateway aliases that included hyphen (-) characters. This issue was resolved.
VER-78578 DDL If a column was set to a DEFAULT or SET USING expression and the column name embedded a period, attempts to change the column's data type with ALTER TABLE threw a syntax error. This issue has been resolved.
VER-78612 Catalog Engine If COPY specified to write rejected data to a table, subsequent removal of a node from the cluster rendered that table unreadable. This issue has been resolved: the rejected data can now be read from any node that is up.
VER-78619 Execution Engine Queries on system table EXTERNAL_TABLE_DETAILS with complex predicates on the table_schema, table_name, or source_format columns either returned wrong results or caused the cluster to crash. This issue has been resolved.
VER-78632 Optimizer Queries with multiple distinct aggregates sometimes produced wrong results when inputs appeared to be segmented on the same columns as distinct aggregate arguments. The issue has been resolved.
VER-78682 Data Removal - Delete, Purge, Partitioning, DDL The type metadata for epoch columns in version 9.3.1 and earlier was slightly different than in later versions. After upgrading from 9.3.1, SWAP_PARTITIONS_BETWEN_TABLES treated those columns as not equivalent and threw an error. This issue has been resolved. Now, when SWAP_PARTITIONS_BETWEN_TABLES compares column types, it ignores metadata differences in epoch columns.
VER-78726 Optimizer Partition statistics now support partition expressions that include the date/time function date_trunc().
VER-78730 DDL If you profiled a query that included the ENABLE_WITH_CLAUSE_MATERIALIZATION hint, Vertica did not enable materialization for that query. This issue has been resolved.
VER-78750 Catalog Engine In earlier releases, if you set CatalogSyncInterval to a new value, Vertica did not use the new sync interval until after the next scheduled sync as set by the previous CatalogSyncInterval setting was complete. This issue has been resolved: now Vertica immediately implements the new sync interval.
VER-78767 Optimizer Attempts to add a column with a default value that included a TIMESERIES clause returned with a ROLLBACK message. This issue has been resolved.
VER-78856 DDL - Projection, Optimizer Eligible predicates were not pushed down into subqueries with a LIMIT OVER clause. The issue has been resolved.
VER-78969 Hadoop Exporting Parquet files with over 2^31 rows caused node failures. The limit has now been raised to 2^64 rows.
VER-79260 UI - Management Console Previously, the feedback feature had an issue uploading feedback information. The default behavior was changed, and now the feature sends information by email.

Known issues in 11.0.1

Updated 10/25/21

Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.

Issue Key ComponentDescription
VER-78310 Client Drivers - JDBC JDBC complex types return NULL arrays as empty arrays. For example, when executing this SQL statement:
SELECT ARRAY[null,ARRAY[1,2,3],null,ARRAY[4,5],null] as array;

The array column the server returns will be:
[[],[1,2,3],[],[4,5],[]]

Because of the null values in the string literal, it should return this value:
[null,[1,2,3],null,[4,5],null]

This is a work around to a limitation in Simba.
VER-78074 Procedural Languages, UDX Stored procedures that contain DML queries on tables with key constraints do not return a value.
VER-69803 Hadoop The infer_external_table_ddl meta-function will hang if you do not set authentication information in the current session when analyzing a file on GCP.
VER-64916 Kafka Integration When Vertica exports data collector information to Kafka via notifier, the serialization logic does not properly handle Vertica internal data type translation. When importing or copying that data back into the Management Console, Vertica uses the raw data in Kafka and inserts garbled data into the extended monitoring database.
VER-64352 SDK Under special circumstances, a sequence of statements to create and drop Python libraries fails. For example, the following session statements attempt to create Python libraries lib1 and lib2:

CREATE LIBRARY lib1 AS :lib1_path DEPENDS :numpy_path LANGUAGE 'Python';
DROP LIBRARY lib1;
CREATE LIBRARY lib2 AS :lib2_path DEPENDS :sklearn_path LANGUAGE 'Python';

Here, lib1 is a Python library that imports module numpy, while lib2 imports module sklearn, which relies on numpy. After lib1 is dropped, the CREATE LIBRARY statement for lib2 fails.
VER-62983 Hadoop When hcatalogconnector schemas are created with custom_partitions enabled, poor performance has been observed when there are many (500+) partitions. By default, custom_partitions are disabled on hcatalogconnector schemas.
VER-61420 Data Removal - Delete, Purge, Partitioning Partition operations, such as move_partitions_to_table, must split storage containers that have both partitions that match the operations and partitions that do not. Version 9.1 introduced an inefficiency whereby such a split may separate a storage container into one more storage container than necessary. Note that any extraneous containers created this way will eventually be merged by the TM.
VER-61069 Execution Engine In very rare circumstances, if a Vertica process crashes during shutdown, the remaining processes might hang indefinitely.
VER-48020 Hadoop Canceling a query that involves loading data from ORC or Parquet files can be slow if there are too many files involved in the load.

New in Vertica 11.0.0

See the Vertica 11.0.0 New Features Guide for a complete list of additions and changes introduced in this release.

admintools

set_ssl_params Updated for TLS CONFIGURATION

The tool set_ssl_params has been updated to set the certificate parameters for the server TLS CONFIGURATION, which act as direct equivalents to the deprecated parameters SSLPrivateKey, SSLCertificate, and SSLCA.

Because TLS CONFIGURATIONs are catalog objects, the database must now be up to configure client-server TLS.

For more information on client-server TLS configuration, see Configuring Client-server TLS.

Configuration

Viewing User-Level Configuration Settings

Two new features facilitate access to user-level configuration settings:

Database Designer

Database Designer Supports ZStandard Compression Encodings

The design and deployment scripts generated by Database Designer can now recommend one of the following Zstandard compression encodings for projection columns:

Similarly, the Database Designer function DESIGNER_DESIGN_PROJECTION_ENCODINGS also now recommends Zstandard compression encodings as appropriate.

Data Export

Export to ORC and Delimited Formats

You can export data from Vertica as ORC or delimited data. You can export a table or other query results, and you can partition the output. See EXPORT TO ORC, which is very similar to EXPORT TO PARQUET, and EXPORT TO DELIMITED.

You can export data to HDFS, S3, Google Cloud Storage (GCS), Azure Blob Storage, or an NFS mount on the local file system.

EXPORT TO DELIMITED deprecates S3EXPORT, which will be removed in a future release.

Export to Linux File System

When exporting data you can now write to the node-local Linux file system. Previously, exports to Linux were only supported for an NFS mount. For details, see Exporting to the Linux File System.

Data Types

ORC Complex Types

All complex types supported in the Parquet format are also now supported in the ORC format: arrays, structs, combinations of arrays and structs, and maps. Support for reading ORC structs as expanded columns has been removed. The documentation for Complex Types for Parquet and ORC Formats in Working with External Data has been refactored.

Maps in ORC and Parquet Data

You can use an array of structs in the definition of an external table to read and query ORC or Parquet data that uses the Map type. See Combinations of Structs and Arrays.

Null Structs

A null struct in ORC or Parquet data is now treated as a null ROW. Previously, a null struct was read as a ROW with null fields.

Comparison Operators for Complex Types

Collections of any element type and dimensionality support equality, inequality, and null-safe equality operators. One-dimensional collections of any element type also support comparsions (greater than, less than). See ARRAY and SET.

ROWs of any field type support equality, inequality, and null-safe equality operators. ROWs that contain only primitive fiels types or ROWs of primitive field types also support comparisons (greater than, less than).

Documentation Updates

Developing UDxs

The sections under Developing User-Defined Extensions (UDxs) for specific UDx types (Aggregate Functions (UDAFs), Analytic Functions (UDAnFs), Scalar Functions (UDSFs), Transform Functions (UDTFs), and User-Defined Load (UDL)) have been reorganized. Each key class is described along with its APIs in supported languages, instead of separating the general and language-specific information. APIs are presented as in-page tabs, one per language, with links to those classes in the API reference documentation.

Eon Mode

Node-Level Support for CLEAR_DATA_DEPOT

The meta-function CLEAR_DATA_DEPOT can now clear depot data from a single node. Previously, you could remove depot data from a subcluster or from the entire database cluster.

Elastic K-safety

When your database has a K-safety level of 1 or greater, each shard in your database has two or more primary nodes that subscribe to it: a primary subscriber and one or more secondary subscribers. These redundant subscriptions help maintain shard coverage in your database. If the primary subscriber goes down, a secondary subscriber takes over processing the data in the shard.

In previous versions, Vertica did not alter shard subscriptions after the loss of a primary node. When one or more primary nodes went down, leaving a shard with a single subscriber, the remaining secondary subscriber became critical. If any critical nodes also went down, Vertica would shut down due to a loss of shard coverage.

In Vertica Version 11.0.0, a new feature called elastic K-safety helps prevent shutdowns due to the loss of shard coverage. If a primary node is lost, Vertica subscribes new primary nodes to cover the lost node's shard subscriptions. Once these subscriptions take effect, the database once again has two (or more, if K > 1) primary node subscribers for each shard. Adding these subscriptions helps reduce the chances of database shutdown due to loss of shard coverage. See Data Integrity and High Availability in an Eon Mode Database.

Subcluster Interior Connection Load Balancing

Eon Mode databases have a new default connection load balancing rule that automatically distributes client connections among the nodes in a subcluster. When a client opens a connection and opts into having its connection load balanced, the node it initially connects to checks for applicable connection load balancing rules. If no other load balancing rules apply and classic load balancing is not enabled, the node tries to apply the new rule to redirect the client's connection to a node within its subcluster.

This load balancing rule has a low priority, so any other load balancing rule overrides it. As with other connection load balancing rules, this rule has no effect if the client does not opt into load balancing.

For more information, see Connection Load Balancing Policies.

Enterprise to Eon Mode Migration Support for Azure

You can now migrate an Enterprise Mode database to an Eon Mode database running on Azure. This conversion stores the database's data communally in Azure Blob storage. See Migrating an Enterprise Database to Eon Mode.

Rescaling Depot Capacity

When a database is revived on an instance with greater or lesser disk space than it had previously, Vertica evaluates the depot size settings that were previously in effect. If depot size was specified as a percentage of available disk space, Vertica proportionately rescales depot capacity. To support this functionality, a new DISK_PERCENT column has been added to system table STORAGE_LOCATIONS.

For details, see Resizing Depot Caching Capacity.

Revive Primary Subcluster Only

Previously, you had to revive all subclusters that were in an Eon Mode database on shutdown. Now, you can choose to revive just the primary subclusters in the database. After the primary subclusters are revived, you can choose to revive some or all secondary subclusters individually. This feature gives you the flexibility to revive a minimal number of nodes in cases where you do not need the entire original database cluster.

Apache Kafka Integration

Encryption-only TLS Configuration

If your Kafka deployment only uses TLS for encryption without authentication (that is, when ssl.client.auth is none or requested), the following Kafka session parameters are now optional when integrating with Vertica:

For details on Kafka session parameters, see Kafka User-Defined Session Parameters.

This configuration also makes the scheduler keystore optional.

For details on configuring TLS for the scheduler, see Configuring Your Scheduler for TLS Connections.

Voltage SecureData Integration

Tweak Parameter

Analogous to a salt, you can specify an additional "tweak" value to further modify the ciphertext returned by VoltageSecureProtect. This allows you to create unique, more secure ciphertexts from the same plaintext.

"Tweaked" ciphertexts can only be decrypted by passing the same format and "tweak" value to VoltageSecureAccess. Support for this feature is set in the SDA.

Masking Parameter

You can now mask the plaintext returned by VoltageSecureAccess if the encryption format has masking enabled in the SDA. This is useful for certain kinds of data like phone numbers or SSNs, where trailing digits are often used for encryption and can be safely exposed to certain parties.

Format-Preserving Hash

VoltageSecureProtect can now return format-preserving hashes (FPH) when passed a FPH format defined in the SecureData Appliance (SDA).

Machine Learning

Models for Time Series Analytics

Autoregressive Models

You can now create and make predictions with autoregressive models for time series datasets.

Autoregressive models predict future values of a time series based on the preceding values. More specifically, the user-specified "lag" determines how many previous timesteps it takes into account during computation, and predicted values are linear combinations of the values at each lag.

Moving-average Models

You can now create and make predictions with moving-average models for time series datasets.

Moving average models use the errors of previous predictions to make future predictions. More specifically, the user-specified "lag" determines how many previous predictions and errors it takes into account during computation.

TensorFlow 2.x Support

You can now import and make predictions with TensorFlow 2.x models in your Vertica database.

XGBoost: Column Subsampling

You can perform more precise column subsampling with XGB_CLASSIFIER and XGB_REGRESSOR by specifying a column ratio with the following parameters:

Analogous to subsampling in Random Forest, these parameters can offer significant benefits in performance, control, and reducing overfitting, especially for datasets with many columns/features.

PMML Updates

Extended Model Support

You can now import PMML models of type:

New PMML Subtags

Vertica now supports the following tags and subtags:

For a full list of supported subtags, see PMML Features and Attributes.

Management Console

New Setup Path Options for Eon Mode on Amazon Web Services

MC provides a Quick Setup wizard for creating an Eon Mode database on AWS. The wizard's intuitive design provides a guided setup with recommendations for your cluster configuration.

The setup wizard available in previous versions is now the Advanced Setup. For details, see Creating an Eon Mode Database in AWS with MC.

Support for Microsoft Azure

Management Console (MC) supports Eon Mode databases on Microsoft Azure.

Launch an Azure image with a pre-configured MC environment. After you launch the MC instance, you can provision and create a database from the MC using the new wizard. You can start, stop, and terminate a database from MC.

MC on Azure has the following limitations:

Create Custom Alerts to Monitor Database

Create custom alerts to monitor fluctuations in database performance that are not monitored by the currently available pre-configured alerts. Custom alerts use a user-defined SQL query to trigger message notifications when the query result exceeds a defined threshold. You can add dynamic variables to your query to fine-tune the SQL query, even after you save the alert.

Configure and create alerts on the Alerts tab, which was previously the Thresholds tab. The Alerts tab offers an updated user interface with a modern, intuitive design. To access the Alerts tab, from the MC Overview page, go to Settings, then Alerts.

For details about custom alerts, see Creating a Custom Alert.

Projections

Partition Range Projections

Vertica now supports projections that specify a range of partition keys. Previously, projections were required to store all rows of partitioned table data. Over time, this requirement was liable to incur increasing overhead:

As data accumulated, increasing amounts of storage were required for large amounts of data that were typically queried infrequently, if at all.

Large projections deterred optimizations such as better encodings, or changes to the projection sort order or segmentation. Changes like these to the projection's DDL required you to refresh the entire projection. Depending on the projection size, this refresh operation could span hours or even days.

Now, CREATE PROJECTION can create projections that specify a range of partition keys from their anchor table.

Query Optimization

REFRESH_COLUMNS Optimization

When you call REFRESH_COLUMNS on a flattened table's SET USING (or DEFAULT USING) column, it executes the SET USING query by joining the target and source tables. By default, the source table is always the inner table of the join. In most cases, cardinality of the source table is less than the target table, so REFRESH_COLUMNS executes the join efficiently.

Occasionally—notably, when you call REFRESH_COLUMNS on a partitioned table—the source table can be larger than the target table. In this case, performance of the join operation can be suboptimal.

You can now address this issue by enabling the new configuration parameter RewriteQueryForLargeDim. When enabled, Vertica rewrites the query, by reversing the inner and outer join between the target and source tables.

SDK Updates

Scalar Functions Support Range Optimization in Fenced Mode

A user-defined scalar function (UDSF) written in C++ can optionally implement the getOutputRange function to return the range of possible values for a block of data. Implementing this function allows the execution engine to skip blocks that cannot satisfy a value-based predicate.

Previously, this function could only be used when running a UDSF in unfenced mode. It can now be used for both fenced and unfenced modes.

For details, see Improving Query Performance (C++ Only).

Security and Authentication

TLS CONFIGURATION

TLS management has been simplified and centralized in the new TLS CONFIGURATION object and can be managed with ALTER TLS CONFIGURATION. Vertica includes the following TLS CONFIGURATION objects by default, each of which manages the certificates, cipher suites, and TLSMODE for a particular TLS context:

  1. server: client-server TLS
  2. LDAPLink: using the LDAPLink service or its dry run functions to synchronize users and groups between Vertica and the LDAP server
  3. LDAPAuth: when a user with an ldap authentication method attempts to log into Vertica, Vertica attempts to bind the user to a matching user in the LDAP server. If the bind succeeds, Vertica allows the user to log in.

These TLS CONFIGURATIONs cannot be dropped.

Existing configurations that use the following parameters will be automatically ported to their equivalents in the TLS CONFIGURATION scheme on upgrade.

Mutual TLS for LDAPLink and LDAPAuth

You can now use mutual TLS for connections between Vertica and your LDAP server by providing a client certificate for the LDAPLink and LDAPAuth TLS CONFIGURATIONS.

New LDAP Link Function: LDAP_LINK_SYNC_CANCEL

You can now cancel in-progress synchronizations between Vertica and your LDAP server with LDAP_LINK_SYNC_CANCEL.

New Security Parameter: SystemCABundlePath

SystemCABundlePath lets you specify a CA bundle for Vertica to use for establishing TLS connections with external services.

New Security Parameter: DHParams

DHParams lets you specify an alternate Diffie-Hellman MODP group of at least 2048 bits to use during key exchange.

Unified Access Policies for External File Systems

By default, Vertica uses user-supplied credentials to access HDFS and cloud file systems and does not require USER storage locations. A new configuration parameter, UseServerIdentityOverUserIdentity, allows you to override this behavior and require USER storage locations. For more information about file-system credentials, see File Systems and Object Stores.

SQL Functions and Statements

Exporting Table Access Policies

The SQL scripts generated by Vertica meta-functions EXPORT_TABLES, EXPORT_OBJECTS, and EXPORT_CATALOG now include CREATE ACCESS POLICY statements, as applicable.

Node-Level Support for CLEAR_DATA_DEPOT

The CLEAR_DATA_DEPOT meta-function can now clear depot data from a single node. Previously, you could remove depot data from a subcluster or from the entire database cluster.

ARRAY_CAT Support for Complex Types

The ARRAY_CAT function now supports elements of complex types. Both input arrays must have the same element type; for example, ROW elements must have the same fields.

INFER_EXTERNAL_TABLE_DDL

The INFER_EXTERNAL_TABLE_DDL meta-function now supports both Parquet and ORC formats. The new syntax is updated to include a format parameter. The earlier syntax has been deprecated.

TO_JSON Supports Sets

The TO_JSON function now supports SET arguments. You no longer need to cast them to arrays.

EXPLODE Supports Complex Arrays and Sets

The EXPLODE function now supports arrays of complex types, multi-dimensional arrays, and sets.
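
A sketch, assuming a table orders with an array or set column items (names are illustrative):

SELECT EXPLODE(items) OVER (PARTITION BEST) FROM orders;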

RELOAD_ADMINTOOLS_CONF

The RELOAD_ADMINTOOLS_CONF meta-function updates admintools.conf on all UP nodes using the current catalog information. Use this function to confirm that the server updated the admintools.conf file for a node that recently came UP.
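
For example:

SELECT RELOAD_ADMINTOOLS_CONF();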

Supported Platforms

Operator and Automation Tools for Kubernetes

Vertica provides a Helm chart that packages an operator, an admission controller, and a custom resource definition (CRD) to automate lifecycle tasks for Vertica on Kubernetes. Use the CRD to create a custom resource (CR) instance; then install the operator, which monitors the CR to maintain its desired state. The operator uses the admission controller to verify changes to mutable states in the custom resource.

The operator automates a range of administrative tasks.

Note: Autoscaling is not supported in Vertica on Kubernetes.

For additional information, see Operator for Lifecycle Automation and Containerized Vertica.

System Tables

TLS_CONFIGURATIONS

The new TLS_CONFIGURATIONS system table holds information on existing TLS CONFIGURATION objects and their certificates, cipher suites, and TLSMODE.
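
For example, to inspect the default client-server configuration:

SELECT * FROM tls_configurations WHERE name = 'server';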

USER_CONFIGURATION_PARAMETERS

A new system table, USER_CONFIGURATION_PARAMETERS, can be queried for all user-level configuration settings.
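
For example:

SELECT * FROM user_configuration_parameters;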

New Counters and Event Type

The EXECUTION_ENGINE_PROFILES system table has three new counters to support range-based optimizations in scalar functions.

The QUERY_EVENTS system table has one new related event, PREDICTS_DISCARDED_FROM_SCAN.

The UDSF optimization is described in Improving Query Performance (C++ Only).
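
For example, a sketch of checking whether a query discarded predicates during a scan, using the event name above:

SELECT * FROM query_events WHERE event_type = 'PREDICTS_DISCARDED_FROM_SCAN';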

New Column in PROJECTION_REFRESHES

PROJECTION_REFRESHES has a new column, PERCENT_COMPLETE.
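
For example, a sketch of monitoring refresh progress, assuming the pre-existing IS_EXECUTING column to filter to in-flight refreshes:

SELECT node_name, projection_name, percent_complete FROM projection_refreshes WHERE is_executing;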

Tuple Mover

Disabling Mergeout

You can disable mergeout on individual tables with ALTER TABLE.

This is generally useful for avoiding mergeout-related overhead otherwise incurred by tables that serve a temporary purpose—for example, staging tables that are used to swap partitions between tables.
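
A sketch, assuming a staging table named staging1 and that the ALTER TABLE clause is SET MERGEOUT with 0 to disable:

ALTER TABLE staging1 SET MERGEOUT 0;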

For details, see Disabling Mergeout.

Deprecated and Removed in 11.0.0

Vertica retires product functionality in two phases:

Deprecated

The following Vertica functionality was deprecated and will be retired in future versions:

Functionality | Notes
AWS library functions (AWS_GET_CONFIG, AWS_SET_CONFIG, S3EXPORT, S3EXPORT_PARTITION) | Instead, use EXPORT TO DELIMITED.
INFER_EXTERNAL_TABLE_DDL (path, table) syntax | Instead, use the new USING PARAMETERS syntax to specify format and other parameters.
HDFSUseWebHDFS configuration parameter and LibHDFS++ | Currently, a URL in the hdfs scheme uses LibHDFS++ to access data, and this configuration parameter lets you force it to use WebHDFS instead. LibHDFS++ is deprecated; in the future, hdfs URLs will automatically use WebHDFS. For details, see HDFS File System.
DESIGN_ALL option for EXPORT_CATALOG() | The DESIGN option generates equivalent output.
Vertica Spark connector V1 | The old version of the Vertica Spark connector (now referred to as the connector V1) has been deprecated in favor of the new, open-source Spark connector. The old connector remains available for now because the new connector is not compatible with older versions of Spark and Scala. See Integrating with Apache Spark.

Removed

The following functionality was removed:

Functionality | Notes
Reading structs from ORC files as expanded columns | Use the ROW type to declare structs. See Complex Types for Parquet and ORC Formats.
DISABLE_ELASTIC_CLUSTER() |
flatten_complex_type_nulls parameter to the ORC and Parquet parsers |

For more information see Deprecated and Retired Functionality in the Vertica documentation.

Resolved in 11.0.0-3

Updated 10/06/2021

This release addresses the issues below.

Issue Key | Component | Description
VER-78945 | UI - Management Console | Management Console returned errors when configuring email gateway aliases that included hyphen (-) characters. This issue has been resolved.
VER-78986 | Client Drivers - ADO | If you canceled a query and immediately called DataReader.close() without reading all rows that the server sent before the cancel took effect, the necessary clean-up work was not completed, and an exception was incorrectly propagated to the application. This issue has been resolved.
VER-78988 | Optimizer | Attempts to add a column with a default value that included a TIMESERIES clause returned with a ROLLBACK message. This issue has been resolved.
VER-79042 | Optimizer | Partition statistics now support partition expressions that include the date/time function date_trunc().
VER-79045 | Optimizer | Queries with multiple distinct aggregates sometimes produced wrong results when inputs appeared to be segmented on the same columns as the distinct aggregate arguments. This issue has been resolved.
VER-79065 | Kafka Integration | When loading a sufficient volume of small messages, filters such as KafkaInsertDelimiters could return a VIAssert error. This issue has been resolved.

Resolved in 11.0.0-2

Updated 9/23/2021

This release addresses the issues below.

Issue Key | Component | Description
VER-78927 | Security | In Vertica 11.0, TLS configuration was greatly simplified for both LDAP Link and LDAP Authentication. As part of that simplification, the LDAP StartTLS parameter is now set automatically based on the TLSMODE and no longer needs to be set separately via a configuration parameter. Previously, StartTLS was incorrectly enabled when using the ldaps:// protocol regardless of the TLSMODE. This issue has been resolved.

Resolved in 11.0.0-1

Updated 9/14/2021

This release addresses the issues below.

Issue Key | Component | Description
VER-78586 | Database Designer Core | In recent releases, Database Designer generated SQL files with permissions of 600 instead of 666. This issue has been resolved.
VER-78663 | Execution Engine | Queries on system table EXTERNAL_TABLE_DETAILS with complex predicates on the table_schema, table_name, or source_format columns either returned wrong results or caused the cluster to crash. This issue has been resolved.
VER-78668 | Backup/DR | When performing a full restore, AWS configuration parameters such as AWSEndpoint, AWSAuth, and AWSEnableHttps were overwritten on the restored database with their backup settings. This issue has been resolved: the restore leaves the parameter values unchanged.
VER-78712 | Optimizer | The configuration parameter MaxParsedQuerySizeMB is set in MB (as documented). The optimizer can require very large amounts of memory for a given query, much of it consumed by internal objects that the parser creates when it converts the query into a query tree for optimization. Other issues were identified as contributing to excessive memory consumption, and these have been addressed, including freeing memory allocated to query tree objects when they are no longer in use.

Resolved in 11.0.0

Updated 8/11/2021

This release addresses the issues below.

Issue Key | Component | Description
VER-68406 | Tuple Mover | When the Mergeout Cache is enabled, the dc_mergeout_requests system table now contains valid transaction IDs instead of zero.
VER-71064 | Catalog Sync and Revive, Depot, EON | Previously, when a node belonging to a secondary subcluster restarted, it lost files in its depot. This issue has been resolved.
VER-72596 | Data load / COPY, Security | The COPY option REJECTED DATA to TABLE now properly distributes data between tables with identical names belonging to different schemas.
VER-73751 | Tuple Mover | The Tuple Mover logged a large number of PURGE requests on a projection while another MERGEOUT job was running on the same projection. This issue has been resolved.
VER-73773 | Tuple Mover | Previously, the Tuple Mover attempted to merge all eligible ROS containers without considering resource pool capacity. As a result, mergeout failed if the resource pool could not handle the mergeout plan size. This issue has been resolved: the Tuple Mover now takes resource pool capacity into account when creating a mergeout plan, and adjusts the number of ROS containers accordingly.
VER-74554 | Tuple Mover | Occasionally, the Tuple Mover dequeued DVMERGEOUT and MERGEOUT requests simultaneously and executed only the DVMERGEOUT requests, leaving the MERGEOUT requests pending indefinitely. This issue has been resolved: after completing any DVMERGEOUT job, the Tuple Mover now checks for outstanding MERGEOUT requests and queues them for execution.
VER-74615 | Hadoop | Fixed a bug in predicate pushdown for Parquet files stored on HDFS. For a Parquet file spanning multiple HDFS blocks, row groups located on blocks other than the starting HDFS block were not pruned. In some corner cases, the bug caused the wrong row group to be pruned, leading to incorrect results.
VER-74619 | Hadoop | Due to compatibility issues between different open-source libraries, Vertica failed to read ZSTD-compressed Parquet files generated by some external tools (such as Impala) when a column contained all NULLs. This issue has been resolved: Vertica now reads such files without error.
VER-74814 | Hadoop | The open-source library used by Vertica to generate Parquet files buffered null values inefficiently in memory. This caused high memory usage, especially when the exported data contained many nulls. The library has been patched to buffer null values in encoded format, resulting in optimized memory usage.
VER-74974 | Database Designer Core | Under certain circumstances, Database Designer designed projections that could not be refreshed by refresh_columns(). This issue has been resolved.
VER-75139 | DDL | Adding columns to large tables with many columns on an Eon-mode database was slow and incurred considerable resource overhead, which adversely affected other workloads. This issue has been resolved.
VER-75496 | Depot | System tables continued to report that a file existed in the depot after it was evicted, which caused queries on that file to return "File not found" errors. This issue has been resolved.
VER-75715 | Backup/DR | When restoring objects in coexist mode, STDOUT now contains the correct schema name prefix.
VER-75778 | Execution Engine | With Vertica running on machines with very high core counts, complex memory-intensive queries featuring an analytical function that fed into a merge operation sometimes caused a crash if the query ran in a resource pool where EXECUTIONPARALLELISM was set to a high value. This issue has been resolved.
VER-75783 | Optimizer | The NO HISTOGRAM event was set incorrectly on the hidden epoch column of the dc_optimizer_events table. As a result, the suggested_action column was also incorrectly set to run analyze_statistics. This issue has been resolved: the NO HISTOGRAM event is no longer set on the epoch column.
VER-75806 | UI - Management Console | COPY-type queries have been added to the list of queries displayed for Completed Queries on the Query Monitoring Activity page.
VER-75864 | Data Export | Previously, during export to Parquet, Vertica wrote the time portion of each timestamp value before the Postgres epoch date (2000-01-01) as a negative number. As a result, some tools (for example, Impala) could not load such timestamps from Parquet files exported by Vertica. This issue has been resolved.
VER-75881 | Security | Vertica no longer takes a catalog lock during authentication after a user's password security algorithm has been changed from MD5 to SHA512. The fix removes the updating of the user's salt, which is not used for MD5 hash authentication.
VER-75898 | Execution Engine | Calls to export_objects sometimes allocated considerable memory while user access privileges to the object were repeatedly checked. The accumulated memory was not freed until export_objects returned, which sometimes caused the database to go down with an out-of-memory error. This issue has been resolved: memory is now freed more promptly so it does not accumulate excessively.
VER-75933 | Catalog Engine | The export_objects meta-function could not be canceled. This issue has been resolved.
VER-76094 | Data Removal - Delete, Purge, Partitioning | If you created a local storage location for USER data on a cluster that included standby nodes, attempts to drop the storage location returned an error that Vertica was unable to drop the storage location from standby nodes. This issue has been resolved.
VER-76125 | Backup/DR | Removed the access permission check for the S3 bucket root during backup/restore. Users with access to specific directories in the bucket, but not the root, can now perform backup/restore in their respective directories without getting an access denied error.
VER-76131 | Kafka Integration | Updated documentation to mention support for SCRAM-SHA-256/512.
VER-76200 | Admin Tools | When adding a node to an Eon-mode database with Administration Tools, users were prompted to rebalance the cluster, even though this action is not supported for Eon. This issue has been resolved: Administration Tools now skips this step for an Eon database.
VER-76244 | Depot | Files that might be candidates for pruning (for example, due to expression analysis, or not read at all as with Top-K queries) were unnecessarily read into the depot, which adversely affected depot efficiency and performance. This problem has been resolved: the depot now fetches from shared storage only those files that a statement reads.
VER-76349 | Optimizer | Where subqueries are involved, the optimizer combines multiple predicates into a single-column Boolean predicate to achieve predicate pushdown. The optimizer failed to properly handle cases where two NotNull predicates were combined into a single Boolean predicate, and returned an error. This issue has been resolved.
VER-76384 | Execution Engine | In queries that used variable-length-optimized joins, certain types of joins incurred a small risk of crashing the database due to a problem when checking for NULL join keys. This issue has been resolved.
VER-76424 | Execution Engine | If a query includes 'count(s.*)' where s is a subquery, Vertica expects multiple outputs for s.*. Because Vertica does not support multi-valued expressions in this context, the expression tree represents s.* as a single record-type variable. The mismatch in the number of outputs could cause database failure. In cases like this, Vertica now returns an error message that multi-valued expressions are not supported.
VER-76449 | Sessions | Vertica now better detects situations where multiple Vertica processes are started at the same time on the same node.
VER-76511 | Sessions, Transactions | Previously, a single-node transaction sent a commit message to all nodes even if it had no content to commit. This issue has been resolved: a single-node transaction with no content to commit now commits locally.
VER-76543 | Optimizer, Security | For a view A.v1, its base table B.t1, and an access policy on B.t1: users no longer require a USAGE privilege on schema B to SELECT view A.v1.
VER-76584 | Security | Vertica now automatically creates needed default key projections for a user with DML access when that user performs an INSERT into a table with a primary key and no projections.
VER-76815 | Optimizer | Using unary operators as GROUP BY or ORDER BY elements in WITH clause statements caused Vertica to crash. This issue has been resolved.
VER-76824 | Optimizer | If you called a view whose underlying query invoked a UDx function on a table with an argument of '*' (all columns), Vertica crashed if the queried table later changed (for example, because columns were added to it). This issue has been resolved: the view now returns the same results.
VER-76851 | Data Export | Added support for exporting UUID types via s3export. Previously, exporting data with UUID types using s3export sometimes crashed the initiator node.
VER-76874 | Optimizer | Updating the result set of a query that called the volatile function LISTAGG resulted in unequal row counts among projections of the updated table. This issue has been resolved.
VER-76952 | DDL - Projection | In previous releases, users were unable to alter the metadata of any column in tables that had a live aggregate or Top-K projection, regardless of whether the column participated in the projection itself. This issue has been resolved: users can now change the metadata of columns that do not participate in the table's live aggregate or Top-K projections.
VER-76961 | Spread | Spread now correctly detects old tokens as duplicates.
VER-77006 | Machine Learning | The PREDICT_SVM_CLASSIFIER function could cause the database to go down when provided an invalid value for its optional "type" parameter. The function now returns an error message indicating that the entered value was invalid and noting that valid values are "response" and "probability".
VER-77007 | Catalog Engine | Standby nodes did not get changes to the GENERAL resource pool when they replaced a down node. This problem has been resolved.
VER-77026 | Execution Engine | Vertica was unable to optimize queries on v_internal tables, where equality predicates (with operator =) filtered on the relname or nspname columns, in the following cases:
  • The predicate specified expressions with embedded characters such as underscore (_) or percent (%). For example:
    SELECT * FROM v_internal.vs_columns WHERE nspname = 'x_yz';
  • The query contained multiple predicates separated by AND operators, and more than one predicate queried the same column, either nspname or relname. For example:
    SELECT * FROM v_internal.vs_columns WHERE nspname = 'xyz' AND nspname <> 'vs_internal';
    In this case, Vertica was unable to optimize the equality predicate nspname = 'xyz'.
In all these cases, the queries are now optimized as expected.

VER-77173 | Monitoring | Startup.log now contains a stage identifying when the node has received the initial catalog.
VER-77190 | Optimizer | SELECT clause CASE expressions with constant conditions and string results that evaluated to shorter strings sometimes produced an internal error when participating in joins with aggregation. This issue has been resolved.
VER-77199 | Kafka Integration | The Kafka Scheduler now allows an initial offset of -3, which indicates to begin reading from the consumer group offset.
VER-77227 | Admin Tools | Previously, admintools reported that it could not start the database because it was unable to read database catalogs, but did not provide further details. This issue has been resolved: the message now provides details on the failure's cause.
VER-77265 | Catalog Sync and Revive | More detailed messages are now provided when permission is denied.
VER-77278 | Catalog Engine | If you called close_session() while running analyze_statistics() on a local temporary table, Vertica sometimes crashed. This issue has been resolved.
VER-77387 | Directed Query, Optimizer | If the CTE of a materialized WITH clause was unused and referenced an unknown column, Vertica threw an error. This behavior was inconsistent with the behavior of an unmaterialized WITH clause, where Vertica ignored unused CTEs and did not check them for errors. This problem has been resolved: in both cases, Vertica now ignores all unused CTEs, so they are never checked for errors such as unknown columns.
VER-77394 | Execution Engine | It was unsafe to reorder query predicates when the following conditions were true:
  • The query contained a predicate on a projection's leading sort order columns that restricted those columns to constant values, where the leading columns also were not run-length encoded.
  • A SIPS predicate from a merge join was applied to non-leading sort order columns of that projection.
This issue has been resolved: query predicates are no longer reordered when a query contains a predicate that restricts a projection's leading sort order columns to constant values and those columns are not run-length encoded.

VER-77584 | Execution Engine | Before evaluating a query predicate on rows, Vertica gets the min/max of the expression to determine which rows it can first prune from the queried dataset. An incorrect check on a block's null count caused Vertica to use the maximum value of an all-null block, and mistakenly prune rows that otherwise would have passed the predicate. This issue has been resolved.
VER-77695 | Admin Tools | In earlier releases, starting a database with the start_db --force option could delete the data directory if the user lacked read/execute permissions on it. Now, if the user lacks permissions to access the data directory, admintools cancels the start operation. If the user has correct permissions, admintools gives users 10 seconds to abort the start operation.
VER-77814 | Optimizer | Queries that included the TABLESAMPLE option were not supported for views. This issue has been resolved: you can now query views with the TABLESAMPLE option.
VER-77904 | Admin Tools | If admintools called create_db and the database creation process was lengthy, admintools sometimes prompted users to confirm whether to continue waiting. If the user did not answer the prompt (for example, when create_db was called by a script), create_db completed execution without creating all database nodes and properly updating the configuration file admintools.conf. In this case, the database was incomplete and unusable. Now, the prompt times out after 120 seconds; if the user does not respond within that period, create_db exits.
VER-77905 | Execution Engine | A change in Vertica 10.1 prevented volatile functions from being called multiple times in an SQL macro. This change affected the throw_error function. The throw_error function is now marked immutable, so SQL macros can call it multiple times.
VER-77962 | Catalog Engine | Vertica now restarts properly on nodes that have very large checkpoint files.
VER-78251 | Data Networking | In rare circumstances, the socket on which Vertica accepts internal connections could erroneously close and send a large number of socket-related error messages to vertica.log. This issue has been resolved.

Known issues in 11.0.0

Updated 8/11/2021

Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.

Issue Key | Component | Description
VER-48020 | Hadoop | Canceling a query that involves loading data from ORC or Parquet files can be slow if the load involves a large number of files.
VER-61069 | Execution Engine | In very rare circumstances, if a Vertica process crashes during shutdown, the remaining processes might hang indefinitely.
VER-61420 | Data Removal - Delete, Purge, Partitioning | Partition operations, such as move_partitions_to_table, must split storage containers that have both partitions that match the operation and partitions that do not. Version 9.1 introduced an inefficiency whereby such a split may separate a storage container into one more storage container than necessary. Any extraneous containers created this way are eventually merged by the Tuple Mover.
VER-62983 | Hadoop | When HCatalog Connector schemas are created with custom_partitions enabled, poor performance has been observed when there are many (500+) partitions. By default, custom_partitions are disabled on HCatalog Connector schemas.
VER-64352 | SDK | Under special circumstances, a sequence of statements to create and drop Python libraries fails. For example, the following session statements attempt to create Python libraries lib1 and lib2:

CREATE LIBRARY lib1 AS :lib1_path DEPENDS :numpy_path LANGUAGE 'Python';
DROP LIBRARY lib1;
CREATE LIBRARY lib2 AS :lib2_path DEPENDS :sklearn_path LANGUAGE 'Python';

Here, lib1 is a Python library that imports module numpy, while lib2 imports module sklearn, which relies on numpy. After lib1 is dropped, the CREATE LIBRARY statement for lib2 fails.

VER-69803 | Hadoop | The infer_external_table_ddl meta-function will hang if you do not set authentication information in the current session when analyzing a file on GCP.
VER-78074 | Procedural Languages, UDX | Stored procedures that contain DML queries on tables with key constraints do not return a value.
VER-78310 | Client Drivers - JDBC | JDBC complex types return NULL arrays as empty arrays. For example, when executing this SQL statement:

SELECT ARRAY[null,ARRAY[1,2,3],null,ARRAY[4,5],null] as array;

the server returns the array column as:

[[],[1,2,3],[],[4,5],[]]

Because of the null values in the array literal, it should instead return:

[null,[1,2,3],null,[4,5],null]

This behavior is a workaround for a limitation in Simba.

Legal Notices

Warranty

The only warranties for products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Micro Focus shall not be liable for technical or editorial errors or omissions contained herein.

The information contained herein is subject to change without notice.

Restricted Rights Legend

Confidential computer software. Valid license from Micro Focus required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

Copyright Notice

© Copyright 2006 - 2021 Micro Focus, Inc.

Trademark Notices

Adobe® is a trademark of Adobe Systems Incorporated.

Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.

UNIX® is a registered trademark of The Open Group.