IMPORTANT: Vertica for SQL on Hadoop Storage Limit

Vertica for SQL on Hadoop is licensed per node, with an unlimited number of CPUs and an unlimited number of users, and is intended for deployment on Hadoop nodes. The license includes 1 TB of Vertica ROS-formatted data on HDFS.

This 1 TB ROS limit is currently enforced only contractually, but it will be technically enforced starting in Vertica 10.1. Beginning with Vertica 10.1, if you are using Vertica for SQL on Hadoop, you will not be able to load more than 1 TB of ROS data into your Vertica database. If you were unaware of this limitation and already have more than 1 TB of ROS data in your database, make any necessary adjustments to stay below the limit, or contact our sales team to explore other licensing options.

IMPORTANT: Before Upgrading: Identify and Remove Unsupported Projections

With version 9.2, Vertica has removed support for pre-join and range segmentation projections. If a table's only superprojection is one of these projection types, the projection is regarded as unsafe.

Before upgrading to 9.2 or higher, you are strongly urged to check your database for unsupported projections. If the upgrade encounters these projections, it is liable to fail. You must then revert to the previous installation.

Solution: Run the pre-upgrade script

Vertica provides a pre-upgrade script that examines your current database and writes its analysis and recommendations to standard output. The script identifies and lists any unsupported projections. If the script finds projection buddies with different SELECT and ORDER BY clauses, it generates a deploy script. Run the deploy script to bring projections into compliance with system K-safety, as sketched below.

https://www.vertica.com/pre-upgrade-script/
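A minimal sketch of the workflow, assuming the script has been downloaded from the URL above; the file names are placeholders and the exact invocation may differ:

\i pre_upgrade_script.sql   -- run from a vsql session; review the analysis written to standard output
\i deploy_script.sql        -- run the generated deploy script, if any, to remedy projections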

For more information, see Fixing Unsafe Buddy Projections in the Vertica documentation.

Updated: 9/23/2021

About Vertica Release Notes

What's New in Vertica 11.0.0

What's Deprecated in Vertica 11.0.0

Vertica 11.0.0-2: Resolved Issues

Vertica 11.0.0-1: Resolved Issues

Vertica 11.0.0: Resolved Issues

Vertica 11.0.0: Known Issues

About Vertica Release Notes

The Release Notes contain the latest information on new features, changes, fixes, and known issues in Vertica 11.0.x.

They also contain information about issues resolved in Vertica 11.0.0-1 and 11.0.0-2.

Downloading Major and Minor Releases, and Service Packs

The Premium Edition of Vertica is available for download at https://support.microfocus.com/downloads/swgrp.html.

The Community Edition of Vertica is available for download at https://www.vertica.com/download/vertica/community-edition.

The documentation is available at https://www.vertica.com/docs/11.0.x/HTML/index.htm.

Downloading Hotfixes

Hotfixes are available to Premium Edition customers only. Contact Vertica support for information on downloading hotfixes.


What's New in Vertica 11.0.0

Take a look at the Vertica 11.0.0 New Features Guide for a complete list of additions and changes introduced in this release.

admintools

set_ssl_params Updated for TLS CONFIGURATION

The set_ssl_params tool has been updated to set the certificate parameters for the server TLS CONFIGURATION; these parameters act as direct equivalents of the deprecated parameters SSLPrivateKey, SSLCertificate, and SSLCA.

Because TLS CONFIGURATIONs are catalog objects, the database must now be up to configure client-server TLS.

For more information on client-server TLS configuration, see Configuring Client-server TLS.
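For example, a minimal sketch of configuring client-server TLS through the TLS CONFIGURATION object, assuming a certificate object named server_cert already exists in the catalog:

ALTER TLS CONFIGURATION server CERTIFICATE server_cert;  -- attach the server certificate
ALTER TLS CONFIGURATION server TLSMODE 'ENABLE';         -- turn on client-server TLS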

Configuration

Viewing User-Level Configuration Settings

Two new features facilitate access to user-level configuration settings:

Database Designer

Database Designer Supports ZStandard Compression Encodings

The design and deployment scripts generated by Database Designer can now recommend one of the following Zstandard compression encodings for projection columns:

Similarly, the Database Designer function DESIGNER_DESIGN_PROJECTION_ENCODINGS also now recommends Zstandard compression encodings as appropriate.
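A hedged sketch of requesting new encoding recommendations for an existing projection; the projection name and output path are placeholders, and the final two arguments (deploy, reanalyze statistics) follow the documented pattern, so verify them against the function reference:

SELECT DESIGNER_DESIGN_PROJECTION_ENCODINGS('public.sales_super', '/tmp/encodings.sql', FALSE, TRUE);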

Data Export

Export to ORC and Delimited Formats

You can export data from Vertica as ORC or delimited data. You can export a table or other query results, and you can partition the output. See EXPORT TO ORC, which is very similar to EXPORT TO PARQUET, and EXPORT TO DELIMITED.

You can export data to HDFS, S3, Google Cloud Storage (GCS), Azure Blob Storage, or an NFS mount on the local file system.

EXPORT TO DELIMITED deprecates S3EXPORT, which will be removed in a future release.
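For example, minimal sketches of both statements; the paths, bucket name, and table names are placeholders:

EXPORT TO ORC(directory='/mnt/nfs/sales_orc') AS SELECT * FROM public.sales;
EXPORT TO DELIMITED(directory='s3://examplebucket/sales_csv') AS SELECT order_id, total FROM public.sales;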

Export to Linux File System

When exporting data you can now write to the node-local Linux file system. Previously, exports to Linux were only supported for an NFS mount. For details, see Exporting to the Linux File System.

Data Types

ORC Complex Types

All complex types supported in the Parquet format are also now supported in the ORC format: arrays, structs, combinations of arrays and structs, and maps. Support for reading ORC structs as expanded columns has been removed. The documentation for Complex Types for Parquet and ORC Formats in Working with External Data has been refactored.

Maps in ORC and Parquet Data

You can use an array of structs in the definition of an external table to read and query ORC or Parquet data that uses the Map type. See Combinations of Structs and Arrays.
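For example, a minimal sketch of declaring a Map column as an array of key/value structs in an external table definition; the table, column, and path names are placeholders:

CREATE EXTERNAL TABLE events (
    id INT,
    attrs ARRAY[ROW(key VARCHAR, value VARCHAR)]  -- reads the Parquet/ORC Map column
) AS COPY FROM '/data/events/*.parquet' PARQUET;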

Null Structs

A null struct in ORC or Parquet data is now treated as a null ROW. Previously, a null struct was read as a ROW with null fields.

Comparison Operators for Complex Types

Collections of any element type and dimensionality support equality, inequality, and null-safe equality operators. One-dimensional collections of any element type also support comparisons (greater than, less than). See ARRAY and SET.

ROWs of any field type support equality, inequality, and null-safe equality operators. ROWs that contain only primitive field types, or ROWs of such ROWs, also support comparisons (greater than, less than).
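For example:

SELECT ARRAY[1,2,3] < ARRAY[1,2,4];           -- one-dimensional collections support ordering
SELECT ARRAY[ARRAY[1,2]] = ARRAY[ARRAY[1,2]]; -- equality applies to any dimensionality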

Documentation Updates

Developing UDxs

The sections under Developing User-Defined Extensions (UDxs) for specific UDx types (Aggregate Functions (UDAFs), Analytic Functions (UDAnFs), Scalar Functions (UDSFs), Transform Functions (UDTFs), and User-Defined Load (UDL)) have been reorganized. Each key class is described along with its APIs in supported languages, instead of separating the general and language-specific information. APIs are presented as in-page tabs, one per language, with links to those classes in the API reference documentation.

Eon Mode

Node-Level Support for CLEAR_DATA_DEPOT

The meta-function CLEAR_DATA_DEPOT can now clear depot data from a single node. Previously, you could remove depot data from a subcluster or from the entire database cluster.

Elastic K-safety

When your database has a K-safety level of 1 or greater, each shard in your database has two or more primary nodes that subscribe to it: a primary subscriber and one or more secondary subscribers. These redundant subscriptions help maintain shard coverage in your database. If the primary subscriber goes down, a secondary subscriber takes over processing the data in the shard.

In previous versions, Vertica did not alter shard subscriptions after the loss of a primary node. When one or more primary nodes went down, leaving a shard with a single subscriber, the remaining secondary subscriber became critical. If any critical nodes also went down, Vertica would shut down due to a loss of shard coverage.

In Vertica Version 11.0.0, a new feature called elastic K-safety helps prevent shutdowns due to the loss of shard coverage. If a primary node is lost, Vertica subscribes new primary nodes to cover the lost node's shard subscriptions. Once these subscriptions take effect, the database once again has two (or more, if K > 1) primary node subscribers for each shard. Adding these subscriptions helps reduce the chances of database shutdown due to loss of shard coverage. See Data Integrity and High Availability in an Eon Mode Database.

Subcluster Interior Connection Load Balancing

Eon Mode databases have a new default connection load balancing rule that automatically distributes client connections among the nodes in a subcluster. When a client opens a connection and opts into having its connection load balanced, the node it initially connects to checks for applicable connection load balancing rules. If no other load balancing rules apply and classic load balancing is not enabled, the node tries to apply the new rule to redirect the client's connection to a node within its subcluster.

This load balancing rule has a low priority, so any other load balancing rule overrides it. As with other connection load balancing rules, this rule has no effect if the client does not opt into load balancing.

For more information, see Connection Load Balancing Policies.

Enterprise to Eon Mode Migration Support for Azure

You can now migrate an Enterprise Mode database to an Eon Mode database running on Azure. This conversion stores the database's data communally in Azure Blob storage. See Migrating an Enterprise Database to Eon Mode.

Rescaling Depot Capacity

When a database is revived on an instance with greater or lesser disk space than it had previously, Vertica evaluates the depot size settings that were previously in effect. If depot size was specified as a percentage of available disk space, Vertica proportionately rescales depot capacity. To support this functionality, a new DISK_PERCENT column has been added to system table STORAGE_LOCATIONS.

For details, see Resizing Depot Caching Capacity.
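For example, a minimal sketch of inspecting the new column for depot storage locations:

SELECT node_name, location_path, disk_percent
FROM storage_locations
WHERE location_usage = 'DEPOT';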

Revive Primary Subcluster Only

Previously, you had to revive all subclusters that were in an Eon Mode database on shutdown. Now, you can choose to revive just the primary subclusters in the database. After the primary subclusters are revived, you can choose to revive some or all secondary subclusters individually. This feature gives you the flexibility to revive a minimal number of nodes in cases where you do not need the entire original database cluster.

Apache Kafka Integration

Encryption-only TLS Configuration

If your Kafka deployment only uses TLS for encryption without authentication (that is, when ssl.client.auth is none or requested), the following Kafka session parameters are now optional when integrating with Vertica:

For details on Kafka session parameters, see Kafka User-Defined Session Parameters.

This configuration also makes the scheduler keystore optional.

For details on configuring TLS for the scheduler, see Configuring Your Scheduler for TLS Connections.

Voltage SecureData Integration

Tweak Parameter

You can specify an additional "tweak" value, analogous to a salt, to further modify the ciphertext returned by VoltageSecureProtect. This lets you create unique, more secure ciphertexts from the same plaintext.

"Tweaked" ciphertexts can only be decrypted by passing the same format and "tweak" value to VoltageSecureAccess. Support for this feature is set in the SDA.

Masking Parameter

You can now mask the plaintext returned by VoltageSecureAccess if the encryption format has masking enabled in the SDA. This is useful for certain kinds of data like phone numbers or SSNs, where trailing digits are often used for encryption and can be safely exposed to certain parties.

Format-Preserving Hash

VoltageSecureProtect can now return format-preserving hashes (FPH) when passed an FPH format defined in the SecureData Appliance (SDA).

Machine Learning

Models for Time Series Analytics

Autoregressive Models

You can now create and make predictions with autoregressive models for time series datasets.

Autoregressive models predict future values of a time series based on the preceding values. More specifically, the user-specified "lag" determines how many previous timesteps it takes into account during computation, and predicted values are linear combinations of the values at each lag.
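For example, a minimal sketch of training and predicting with an autoregressive model; the table and column names are hypothetical, and the parameter names follow the documented pattern for time series functions, so verify them in the reference:

SELECT AUTOREGRESSOR('ar_temp', 'daily_temps', 'temperature', 'ts' USING PARAMETERS p=3);

SELECT PREDICT_AUTOREGRESSOR(temperature USING PARAMETERS model_name='ar_temp', npredictions=7)
       OVER (ORDER BY ts)
FROM daily_temps;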

Moving-average Models

You can now create and make predictions with moving-average models for time series datasets.

Moving average models use the errors of previous predictions to make future predictions. More specifically, the user-specified "lag" determines how many previous predictions and errors it takes into account during computation.

TensorFlow 2.x Support

You can now import and make predictions with TensorFlow 2.x models in your Vertica database.

XGBoost: Column Subsampling

You can perform more precise column subsampling with XGB_CLASSIFIER and XGB_REGRESSOR by specifying a column ratio with the following parameters:

Analogous to subsampling in Random Forest, these parameters can offer significant benefits in performance and control, and can reduce overfitting, especially for datasets with many columns (features).
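A hedged sketch; the parameter names col_sample_by_tree and col_sample_by_node are assumptions, since the parameter list above was not preserved, so check the XGB_CLASSIFIER reference:

SELECT XGB_CLASSIFIER('xgb_model', 'train_data', 'label', '*'
    USING PARAMETERS exclude_columns='label', col_sample_by_tree=0.5, col_sample_by_node=0.8);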

PMML Updates

Extended Model Support

You can now import PMML models of type:

New PMML Subtags

Vertica now supports the following tags and subtags:

For a full list of supported subtags, see PMML Features and Attributes.

Management Console

New Setup Path Options for Eon Mode on Amazon Web Services

MC provides a Quick Setup wizard for creating an Eon Mode database on AWS. The wizard's intuitive design provides a guided setup with recommendations for your cluster configuration.

The setup wizard available in previous versions is now the Advanced Setup. For details, see Creating an Eon Mode Database in AWS with MC.

Support for Microsoft Azure

Management Console (MC) supports Eon Mode databases on Microsoft Azure.

Launch an Azure image with a pre-configured MC environment. After you launch the MC instance, you can provision and create a database from the MC using the new wizard. You can start, stop, and terminate a database from MC.

MC on Azure has the following limitations:

Create Custom Alerts to Monitor Database

Create custom alerts to monitor fluctuations in database performance that are not monitored by the currently available pre-configured alerts. Custom alerts use a user-defined SQL query to trigger message notifications when the query result exceeds a defined threshold. You can add dynamic variables to your query to fine-tune the SQL query, even after you save the alert.

Configure and create alerts on the Alerts tab, which was previously the Thresholds tab. The Alerts tab offers an updated user interface with a modern, intuitive design. To access the Alerts tab, from the MC Overview page, go to Settings, then Alerts.

For details about custom alerts, see Creating a Custom Alert.

Projections

Partition Range Projections

Vertica now supports projections that specify a range of partition keys. Previously, projections were required to store all rows of partitioned table data. Over time, this requirement was liable to incur increasing overhead:

As data accumulated, increasing amounts of storage were required for large amounts of data that were typically queried infrequently, if at all.

Large projections deterred optimizations such as better encodings, or changes to the projection sort order or segmentation. Changes like these to the projection's DDL required you to refresh the entire projection. Depending on the projection size, this refresh operation could span hours or even days.

Now, CREATE PROJECTION can create projections that specify a range of partition keys from their anchor table.
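For example, a minimal sketch assuming a table partitioned by a month expression; per the description above, a NULL upper bound leaves the range open-ended:

CREATE PROJECTION public.sales_recent
AS SELECT * FROM public.sales
ON PARTITION RANGE BETWEEN '2021-01' AND NULL;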

Query Optimization

REFRESH_COLUMNS Optimization

When you call REFRESH_COLUMNS on a flattened table's SET USING (or DEFAULT USING) column, it executes the SET USING query by joining the target and source tables. By default, the source table is always the inner table of the join. In most cases, the cardinality of the source table is less than that of the target table, so REFRESH_COLUMNS executes the join efficiently.

Occasionally—notably, when you call REFRESH_COLUMNS on a partitioned table—the source table can be larger than the target table. In this case, performance of the join operation can be suboptimal.

You can now address this issue by enabling the new configuration parameter RewriteQueryForLargeDim. When this parameter is enabled, Vertica rewrites the query by reversing the inner and outer join between the target and source tables.
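For example, a minimal sketch; the table and column names are placeholders:

ALTER DATABASE DEFAULT SET PARAMETER RewriteQueryForLargeDim = 1;
SELECT REFRESH_COLUMNS('public.orders_flat', 'customer_name', 'REBUILD');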

SDK Updates

Scalar Functions Support Range Optimization in Fenced Mode

A user-defined scalar function (UDSF) written in C++ can optionally implement the getOutputRange function to return the range of possible values for a block of data. Implementing this function allows the execution engine to skip blocks that cannot satisfy a value-based predicate.

Previously, this function could only be used when running a UDSF in unfenced mode. It can now be used for both fenced and unfenced modes.

For details, see Improving Query Performance (C++ Only).

Security and Authentication

TLS CONFIGURATION

TLS management has been simplified and centralized in the new TLS CONFIGURATION object and can be managed with ALTER TLS CONFIGURATION. Vertica includes the following TLS CONFIGURATION objects by default, each of which manages the certificates, cipher suites, and TLSMODE for a particular TLS context:

  1. server: client-server TLS
  2. LDAPLink: using the LDAPLink service or its dry run functions to synchronize users and groups between Vertica and the LDAP server
  3. LDAPAuth: when a user with an ldap authentication method attempts to log into Vertica, Vertica attempts to bind the user to a matching user in the LDAP server. If the bind succeeds, Vertica allows the user to log in.

These TLS CONFIGURATIONs cannot be dropped.

Existing configurations that use the following parameters will be automatically ported to their equivalents in the TLS CONFIGURATION scheme on upgrade.

Mutual TLS for LDAPLink and LDAPAuth

You can now use mutual TLS for connections between Vertica and your LDAP server by providing a client certificate for the LDAPLink and LDAPAuth TLS CONFIGURATIONS.

New LDAP Link Function: LDAP_LINK_SYNC_CANCEL

You can now cancel in-progress synchronizations between Vertica and your LDAP server with LDAP_LINK_SYNC_CANCEL.

New Security Parameter: SystemCABundlePath

SystemCABundlePath lets you specify a CA bundle for Vertica to use for establishing TLS connections with external services.

New Security Parameter: DHParams

DHParams lets you specify an alternate Diffie-Hellman MODP group of at least 2048 bits to use during key exchange.
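For example, a minimal sketch of setting the new CA bundle parameter at the database level; the bundle path is a placeholder:

ALTER DATABASE DEFAULT SET PARAMETER SystemCABundlePath = '/etc/ssl/certs/ca-bundle.crt';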

Unified Access Policies for External File Systems

By default, Vertica uses user-supplied credentials to access HDFS and cloud file systems and does not require USER storage locations. A new configuration parameter, UseServerIdentityOverUserIdentity, allows you to override this behavior and require USER storage locations. For more information about file-system credentials, see File Systems and Object Stores.

SQL Functions and Statements

Exporting Table Access Policies

The SQL scripts generated by Vertica meta-functions EXPORT_TABLES, EXPORT_OBJECTS, and EXPORT_CATALOG now include CREATE ACCESS POLICY statements, as applicable.

Node-Level Support for CLEAR_DATA_DEPOT

The CLEAR_DATA_DEPOT meta-function can now clear depot data from a single node. Previously, you could remove depot data from a subcluster or from the entire database cluster.
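A hedged sketch of clearing the depot on one node; the argument order (table filter, then target) is an assumption, with the empty string used here to mean all tables, so see the meta-function reference:

SELECT CLEAR_DATA_DEPOT('', 'v_vmart_node0001');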

ARRAY_CAT Support for Complex Types

The ARRAY_CAT function now supports elements of complex types. Both input arrays must have the same element type; for example, ROW elements must have the same fields.
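For example:

SELECT ARRAY_CAT(ARRAY[ARRAY[1,2]], ARRAY[ARRAY[3,4]]);
-- returns [[1,2],[3,4]]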

INFER_EXTERNAL_TABLE_DDL

The INFER_EXTERNAL_TABLE_DDL meta-function now supports both Parquet and ORC formats. The syntax has been updated to include a format parameter; the earlier syntax is deprecated.
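For example, a minimal sketch of the new syntax; the path and table name are placeholders:

SELECT INFER_EXTERNAL_TABLE_DDL('/data/sales/*.orc'
    USING PARAMETERS format='orc', table_name='sales');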

TO_JSON Supports Sets

The TO_JSON function now supports SET arguments. You no longer need to cast them to arrays.
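For example:

SELECT TO_JSON(SET[1,2,3]);
-- returns [1,2,3]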

EXPLODE Supports Complex Arrays and Sets

The EXPLODE function now supports arrays of complex types, multi-dimensional arrays, and sets.
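A minimal sketch; the table and column names are hypothetical, and PARTITION BEST is the partitioning typically used with 1:N transform functions such as EXPLODE:

SELECT EXPLODE(tags) OVER (PARTITION BEST) FROM articles;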

RELOAD_ADMINTOOLS_CONF

The RELOAD_ADMINTOOLS_CONF meta-function updates admintools.conf on all UP nodes using the current catalog information. Use this function to confirm that the server updated the admintools.conf file for a node that recently came UP.

Supported Platforms

Operator and Automation Tools for Kubernetes

Vertica provides a Helm chart that packages an operator, admission controller, and custom resource definition (CRD) file to automate lifecycle tasks for Vertica on Kubernetes. Use the CRD to create a custom resource (CR) instance, then install the operator to monitor the CR to maintain its desired state. The operator uses the admission controller to verify changes to mutable states in the custom resource.

The operator automates the following administrative tasks:

Note: Autoscaling is not supported in Vertica on Kubernetes.

For additional information, see Operator for Lifecycle Automation and Containerized Vertica.

System Tables

TLS_CONFIGURATIONS

The new TLS_CONFIGURATIONS system table holds information on existing TLS CONFIGURATION objects and their certificates, cipher suites, and TLSMODE.

USER_CONFIGURATION_PARAMETERS

A new system table, USER_CONFIGURATION_PARAMETERS, can be queried for all user-level configuration settings.
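For example:

SELECT * FROM user_configuration_parameters;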

New Counters and Event Type

The EXECUTION_ENGINE_PROFILES system table has three new counters to support range-based optimizations in scalar functions:

The QUERY_EVENTS system table has one new related event, PREDICTS_DISCARDED_FROM_SCAN.

The UDSF optimization is described in Improving Query Performance (C++ Only).

New Column in PROJECTION_REFRESHES

PROJECTION_REFRESHES has a new column, PERCENT_COMPLETE.

Tuple Mover

Disabling Mergeout

You can disable mergeout on individual tables with ALTER TABLE.

This is generally useful for avoiding mergeout-related overhead otherwise incurred by tables that serve a temporary purpose—for example, staging tables that are used to swap partitions between tables.
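For example, a minimal sketch on a hypothetical staging table:

ALTER TABLE public.staging_sales SET MERGEOUT 0;
-- re-enable mergeout when the table resumes normal use:
ALTER TABLE public.staging_sales SET MERGEOUT 1;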

For details, see Disabling Mergeout.

What's Deprecated in Vertica 11.0.0

This section describes the two phases Vertica follows to retire Vertica functionality:

Deprecated

The following Vertica functionality was deprecated and will be retired in future versions:

Release Functionality Notes
11.0

AWS library functions:

  • AWS_GET_CONFIG
  • AWS_SET_CONFIG
  • S3EXPORT
  • S3EXPORT_PARTITION
Instead, use EXPORT TO DELIMITED.
INFER_EXTERNAL_TABLE_DDL (path, table) syntax
Instead, use the new USING PARAMETERS syntax to specify format and other parameters.

HDFSUseWebHDFS configuration parameter and LibHDFS++
Currently, a URL in the hdfs scheme uses LibHDFS++ to access data, and this configuration parameter allows you to force it to use WebHDFS instead. LibHDFS++ is deprecated; in the future, hdfs URLs will automatically use WebHDFS. For details, see HDFS File System.

DESIGN_ALL option for EXPORT_CATALOG()
The DESIGN option generates equivalent output.

Vertica Spark connector V1
The old version of the Vertica Spark connector (now referred to as connector V1) has been deprecated in favor of the new, open-source Spark connector. The old connector remains available for now because the new connector is not compatible with older versions of Spark and Scala. See Integrating with Apache Spark.

Removed

The following functionality was removed:

Functionality Notes
Reading structs from ORC files as expanded columns
Use the ROW type to declare structs. See Complex Types for Parquet and ORC Formats.

DISABLE_ELASTIC_CLUSTER()

flatten_complex_type_nulls parameter to the ORC and Parquet parsers

For more information see Deprecated and Retired Functionality in the Vertica documentation.

Vertica 11.0.0-2: Resolved Issues

Release Date: 9/23/2021

This release addresses the issues below.

Issue Component Description
VER‑78927 Security

In Vertica 11.0, TLS configurations were greatly simplified for both LDAP Link and LDAP Authentication. As part of that simplification, the LDAP StartTLS parameter is now set automatically based on the TLSMODE and no longer needs to be set separately via a configuration parameter.

Previously, StartTLS was incorrectly enabled when using the ldaps:// protocol regardless of the TLSMODE. This issue has been resolved.

Vertica 11.0.0-1: Resolved Issues

Release Date: 9/14/2021

This release addresses the issues below.

Issue Component Description
VER‑78586 Database Designer Core In recent releases, Database Designer generated SQL files with permissions of 600 instead of 666. This issue has been resolved.
VER‑78663 Execution Engine Queries on system table EXTERNAL_TABLE_DETAILS with complex predicates on the table_schema, table_name, or source_format columns either returned wrong results or caused the cluster to crash. This issue has been resolved.
VER‑78668 Backup/DR When performing a full restore, AWS configuration parameters such as AWSEndpoint, AWSAuth, and AWSEnableHttps were overwritten on the restored database with their backup settings. This issue has been resolved: the restore leaves the parameter values unchanged.
VER‑78712 Optimizer

The configuration parameter MaxParsedQuerySizeMB is set in MB (as documented). The optimizer can require very large amounts of memory for a given query, much of it consumed by internal objects that the parser creates when it converts the query into a query tree for optimization.

Other issues were identified as contributing to excessive memory consumption, and these have been addressed, including freeing memory allocated to query tree objects when they are no longer in use.

Vertica 11.0.0: Resolved Issues

Release Date: 8/11/2021

This release addresses the issues below.

Issue Component Description
VER‑68406 Tuple Mover When Mergeout Cache is enabled, the dc_mergeout_requests system table now contains valid transaction IDs instead of zero.
VER‑71064 Catalog Sync and Revive, Depot, EON Previously, when a node belonging to a secondary subcluster restarted, it lost files in its depot. This issue has been fixed.
VER‑72596 Data load / COPY, Security The COPY option REJECTED DATA to TABLE now properly distributes data between tables with identical names belonging to different schemas.
VER‑73751 Tuple Mover The Tuple Mover logged a large number of PURGE requests on a projection while another MERGEOUT job was running on the same projection. This issue has been resolved.
VER‑73773 Tuple Mover Previously, the Tuple Mover attempted to merge all eligible ROS containers without considering resource pool capacity. As a result, mergeout failed if the resource pool could not handle the mergeout plan size. This issue has been resolved: the Tuple Mover now takes into account resource pool capacity when creating a mergeout plan, and adjusts the number of ROS containers accordingly.
VER‑74554 Tuple Mover Occasionally, the Tuple Mover dequeued DVMERGEOUT and MERGEOUT requests simultaneously and executed only the DVMERGEOUT requests, leaving the MERGEOUT requests pending indefinitely. This issue has been resolved: now, after completing execution of any DVMERGEOUT job, the Tuple Mover always looks for outstanding MERGEOUT requests and queues them for execution.
VER‑74615 Hadoop Fixed a bug in predicate pushdown for Parquet files stored on HDFS. For a Parquet file spanning multiple HDFS blocks, the bug prevented pruning of row groups located on blocks other than the starting HDFS block. In some corner cases, it caused the wrong row group to be pruned, leading to incorrect results.
VER‑74619 Hadoop Due to compatibility issues between different open-source libraries, Vertica failed to read ZSTD-compressed Parquet files generated by some external tools (such as Impala) when a column contained all NULLs. This issue has been resolved: Vertica now reads such files without error.
VER‑74814 Hadoop The open-source library that Vertica uses to generate Parquet files buffered null values inefficiently in memory. This caused high memory usage, especially when the exported data contained many nulls. The library has been patched to buffer null values in encoded format, resulting in optimized memory usage.
VER‑74974 Database Designer Core Under certain circumstances, Database Designer designed projections that could not be refreshed by refresh_columns(). This issue has been resolved.
VER‑75139 DDL Adding columns to large tables with many columns on an Eon-mode database was slow and incurred considerable resource overhead, which adversely affected other workloads. This issue has been resolved.
VER‑75496 Depot System tables continued to report that a file existed in the depot after it was evicted, which caused queries on that file to return "File not found" errors. This issue has been resolved.
VER‑75715 Backup/DR When restoring objects in coexist mode, the STDOUT now contains the correct schema name prefix.
VER‑75778 Execution Engine With Vertica running on machines with very high core counts, complex memory-intensive queries featuring an analytical function that fed into a merge operation sometimes caused a crash if the query ran in a resource pool where EXECUTIONPARALLELISM was set to a high value. This issue has been resolved.
VER‑75783 Optimizer The NO HISTOGRAM event was set incorrectly on the dc_optimizer_events table's hidden epoch column. As a result, the suggested_action column was also set incorrectly to run analyze_statistics. This issue is resolved: the NO HISTOGRAM event is no longer set on the epoch column.
VER‑75806 UI - Management Console COPY queries have been added to the list of queries displayed under Completed Queries on the Query Monitoring Activity page.
VER‑75864 Data Export Previously, when exporting to Parquet, Vertica wrote the time portion of each timestamp value as a negative number for all timestamps before the Postgres epoch date (2000-01-01). As a result, some tools (such as Impala) could not load these timestamps from Parquet files exported by Vertica. This issue has been resolved.
VER‑75881 Security Vertica no longer takes a catalog lock during authentication after a user's password security algorithm has been changed from MD5 to SHA512. The lock was previously taken to update the user's salt, which is not used for MD5 hash authentication; that update has been removed.
VER‑75898 Execution Engine Calls to export_objects sometimes allocated considerable memory while user access privileges to the object were repeatedly checked. The accumulated memory was not freed until export_objects returned, which sometimes caused the database to go down with an out-of-memory error. This issue has been resolved: memory is now freed promptly so it does not excessively accumulate.
VER‑75933 Catalog Engine The export_objects meta-function could not be canceled. This issue has been resolved.
VER‑76094 Data Removal - Delete, Purge, Partitioning If you created a local storage location for USER data on a cluster that included standby nodes, attempts to drop the storage location returned with an error that Vertica was unable to drop the storage location from standby nodes. This issue has been resolved.
VER‑76125 Backup/DR Removed the access permission check for the S3 bucket root during backup/restore. Users with access to specific directories in the bucket, but not the root, can now perform backup/restore in their respective directories without getting an access denied error.
VER‑76131 Kafka Integration Updated documentation to mention support for SCRAM-SHA-256/512.
VER‑76200 Admin Tools When adding a node to an Eon-mode database with Administration Tools, users were prompted to rebalance the cluster, even though this action is not supported for Eon. This issue was resolved: now Administration Tools skips this step for an Eon database.
VER‑76349 Optimizer The optimizer combines multiple predicates into a single-column Boolean predicate where subqueries are involved, to achieve predicate pushdown. The optimizer failed to properly handle cases where two NotNull predicates were combined into a single Boolean predicate, and returned an error. This issue has been resolved.
VER‑76384 Execution Engine In queries that used variable-length-optimized joins, certain types of joins incurred a small risk of crashing the database due to a problem when checking for NULL join keys. This issue has been resolved.
VER‑76424 Execution Engine If a query includes 'count(s.*)' where s is a subquery, Vertica expects multiple outputs for s.*. Because Vertica does not support multi-valued expressions in this context, the expression tree represents s.* as a single record-type variable. The mismatch in the number of outputs can result in database failure.

In cases like this, Vertica now returns an error message that multi-valued expressions are not supported.
VER‑76449 Sessions Vertica now better detects situations where multiple Vertica processes are started at the same time on the same node.
VER‑76511 Sessions, Transactions Previously, a single-node transaction sent a commit message to all nodes even if it had no content to commit. This issue has been resolved: a single-node transaction with no content to commit now commits locally.
VER‑76543 Optimizer, Security For a view A.v1, its base table B.t1, and an access policy on B.t1: users no longer require a USAGE privilege on schema B to SELECT view A.v1.
VER‑76584 Security Vertica now automatically creates needed default key projections for a user with DML access when that user performs an INSERT into a table with a primary key and no projections.
VER‑76815 Optimizer Using unary operators as GROUP BY or ORDER BY elements in WITH clause statements caused Vertica to crash. The issue is now resolved.
VER‑76824 Optimizer If you called a view and the view's underlying query invoked a UDx function on a table with an argument of '*' (all columns), Vertica crashed if the queried table later changed (for example, if columns were added to it). The issue has been resolved: the view now returns the same results.
VER‑76851 Data Export Added support for exporting UUID types via s3export. Before, exporting data with UUID types using s3export would sometimes crash the initiator node.
VER‑76874 Optimizer Updating the result set of a query that called the volatile function LISTAGG resulted in unequal row counts among projections of the updated table. This issue has been resolved.
VER‑76952 DDL - Projection In previous releases, users were unable to alter the metadata of any column in tables that had a live aggregate or Top-K projection, regardless of whether they participated in the projection itself. This issue has been resolved: now users can change the metadata of columns that do not participate in the table's live aggregate or Top-K projections.
VER‑76961 Spread Spread now correctly detects old tokens as duplicates.
VER‑77006 Machine Learning The PREDICT_SVM_CLASSIFIER function could cause the database to go down when provided an invalid value for its optional "type" parameter. The function now returns an error message indicating that the entered value was invalid and notes that valid values are "response" and "probability."
VER‑77007 Catalog Engine Standby nodes did not get changes to the GENERAL resource pool when they replaced a down node. This problem has been resolved.
VER‑77026 Execution Engine Vertica was unable to optimize queries on v_internal tables, where equality predicates (with operator =) filtered on columns relname or nspname, in the following cases:

* The predicate specified expressions with embedded characters such as underscore (_) or percent sign (%). For example:
SELECT * FROM v_internal.vs_columns WHERE nspname = 'x_yz';
* The query contained multiple predicates separated by AND operators, and more than one predicate queried the same column, either nspname or relname. For example:
SELECT * FROM v_internal.vs_columns WHERE nspname = 'xyz' AND nspname <> 'vs_internal'
In this case, Vertica was unable to optimize equality predicate nspname = 'xyz'.

In all these cases, the queries are now optimized as expected.
VER‑77173 Monitoring Startup.log now contains a stage identifying when the node has received the initial catalog.
VER‑77190 Optimizer SELECT clause CASE expressions with constant conditions and string results that were evaluated to shorter strings sometimes produced an internal error when participating in joins with aggregation. This issue has been resolved.
VER‑77199 Kafka Integration The Kafka Scheduler now allows an initial offset of -3, which indicates to begin reading from the consumer group offset.
VER‑77227 Admin Tools Previously, admintools reported it could not start the database because it was unable to read database catalogs, but did not provide further details. This issue has been resolved: the message now provides details on the failure's cause.
VER‑77265 Catalog Sync and Revive More detailed messages are now provided when permission is denied.
VER‑77278 Catalog Engine If you called close_session() while running analyze_statistics() on a local temporary table, Vertica sometimes crashed. This issue has been resolved.
VER‑77387 Directed Query, Optimizer If the CTE of a materialized WITH clause was unused and referenced an unknown column, Vertica threw an error. This behavior was inconsistent with the behavior of an unmaterialized WITH clause, where Vertica ignored unused CTEs and did not check them for errors. This problem has been resolved: in both cases, Vertica now ignores all unused CTEs, so they are never checked for errors such as unknown columns.
VER‑77394 Execution Engine It was unsafe to reorder query predicates when the following conditions were true:
* The query contained a predicate on a projection's leading sort order columns that restricted the leading columns to constant values, where the leading columns also were not run-length encoded.
* A SIPS predicate from a merge join was applied to non-leading sort order columns of that projection.

This issue has been resolved: query predicates are no longer reordered when a query contains a predicate on a projection's leading sort order columns that restricts those columns to constant values and the columns are not run-length encoded.
VER‑77584 Execution Engine Before evaluating a query predicate on rows, Vertica gets the min/max of the expression to determine what rows it can first prune from the queried dataset. An incorrect check on a block's null count caused Vertica to use the maximum value of an all-null block, and mistakenly prune rows that otherwise would have passed the predicate. This issue has been resolved.
VER‑77695 Admin Tools In earlier releases, starting a database with the start_db --force option could delete the data directory if the user lacked read/execute permissions on the data directory. Now, if the user lacks permissions to access the data directory, admintools cancels the start operation. If the user has correct permissions, admintools gives users 10 seconds to abort the start operation.
VER‑77814 Optimizer Queries that included the TABLESAMPLE option were not supported for views. This issue has been resolved: you can now query views with the TABLESAMPLE option.
VER‑77904 Admin Tools If admintools called create_db and the database creation process was lengthy, admintools sometimes prompted users to confirm whether to continue waiting. If the user did not answer the prompt (for example, when create_db was called by a script), create_db completed execution without creating all database nodes and properly updating the configuration file admintools.conf. In this case, the database was incomplete and unusable.

Now, the prompt times out after 120 seconds. If the user doesn't respond within that time period, create_db exits.
VER‑77905 Execution Engine A change in Vertica 10.1 prevented volatile functions from being called multiple times in an SQL macro. This change affected the throw_error function. The throw_error function is now marked immutable, so SQL macros can call it multiple times.
VER‑77962 Catalog Engine Vertica now restarts properly for nodes that have very large checkpoint files.
VER‑78251 Data Networking In rare circumstances, the socket on which Vertica accepts internal connections could erroneously close and send a large number of socket-related error messages to vertica.log. This issue has been fixed.

Vertica 11.0.0: Known Issues

Updated: 8/11/21

Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.

Issue Component Description
VER‑48020 Hadoop Canceling a query that involves loading data from ORC or Parquet files can be slow if the load involves a large number of files.
VER‑61069 Execution Engine In very rare circumstances, if a Vertica process crashes during shutdown, the remaining processes might hang indefinitely.
VER‑61420 Data Removal - Delete, Purge, Partitioning Partition operations, such as move_partitions_to_table, must split storage containers that have both partitions that match the operations and partitions that do not. Version 9.1 introduced an inefficiency whereby such a split may separate a storage container into one more storage container than necessary. Note that any extraneous containers created this way will eventually be merged by the TM.
VER‑62983 Hadoop When hcatalogconnector schemas are created with custom_partitions enabled, poor performance has been observed when there are many (500+) partitions. By default, custom_partitions are disabled on hcatalogconnector schemas.
VER‑64352 SDK

Under special circumstances, a sequence of statements to create and drop Python libraries fails. For example, the following session statements attempt to create Python libraries lib1 and lib2:

CREATE LIBRARY lib1 AS :lib1_path DEPENDS :numpy_path LANGUAGE 'Python';
DROP LIBRARY lib1;

CREATE LIBRARY lib2 AS :lib2_path DEPENDS :sklearn_path LANGUAGE 'Python';

Here, lib1 is a Python library that imports module numpy, while lib2 imports module sklearn, which relies on numpy. After lib1 is dropped, the CREATE LIBRARY statement for lib2 fails.

VER‑69803 Hadoop The infer_external_table_ddl meta-function will hang if you do not set authentication information in the current session when analyzing a file on GCP.
VER‑78074 Procedural Languages, UDX Stored procedures that contain DML queries on tables with key constraints do not return a value.
VER‑78310 Client Drivers - JDBC

JDBC complex types return NULL arrays as empty arrays. For example, when executing this SQL statement:

SELECT ARRAY[null,ARRAY[1,2,3],null,ARRAY[4,5],null] as array;

The array column the server returns will be:

[[],[1,2,3],[],[4,5],[]]

Because of the null values in the array literal, it should instead return:

[null,[1,2,3],null,[4,5],null]

This behavior is a workaround for a limitation in Simba.

Legal Notices

Warranty

The only warranties for products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Micro Focus shall not be liable for technical or editorial errors or omissions contained herein.

The information contained herein is subject to change without notice.

Restricted Rights Legend

Confidential computer software. Valid license from Micro Focus required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

Copyright Notice

© Copyright 2006 - 2021 Micro Focus, Inc.

Trademark Notices

Adobe® is a trademark of Adobe Systems Incorporated.

Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.

UNIX® is a registered trademark of The Open Group.