Updated 6/30/2022
Important: Before Upgrading

Vertica for SQL on Hadoop Storage Limit
Vertica for SQL on Hadoop is licensed per node, with an unlimited number of central processing units (CPUs) and an unlimited number of users. Vertica for SQL on Hadoop is for deployment on Hadoop nodes and includes 1 TB of Vertica ROS-formatted data on HDFS. Starting with Vertica 10.1, if you are using Vertica for SQL on Hadoop, you cannot load more than 1 TB of ROS data into your Vertica database. If you were unaware of this limitation and currently have more than 1 TB of ROS data in your database, make the necessary adjustments to stay below the 1 TB limit before upgrading to Vertica 10.1 or higher, or contact our sales team to explore other licensing options.

Identify and Remove Unsupported Projections
With version 9.2, Vertica removed support for pre-join and range segmentation projections. If a table's only superprojection is one of these projection types, the projection is regarded as unsafe. Before upgrading to 9.2 or higher, you are strongly urged to check your database for unsupported projections. If the upgrade encounters these projections, it is liable to fail; you must then revert to the previous installation.

Solution: Run the pre-upgrade script
Vertica provides a pre-upgrade script that examines your current database and writes its analysis and recommendations to standard output. The script identifies and lists any unsupported projections. If the script finds projection buddies with different SELECT and ORDER BY clauses, it generates a deploy script. Run this script to remedy projections so that they comply with system K-safety. The pre-upgrade script is available at https://www.vertica.com/pre-upgrade-script/. For more information, see Fixing Unsafe Buddy Projections in the Vertica documentation.
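If you are not sure how much ROS data your database currently holds, the following is a minimal sketch for checking usage against the license before you upgrade. It assumes the standard Vertica license-auditing functions and the v_catalog.license_audits system table; confirm that the audited size is under the 1 TB limit.

```sql
-- Re-audit the database size against the license (can take time on large databases).
SELECT AUDIT_LICENSE_SIZE();

-- Summarize current usage relative to the license limit.
SELECT GET_COMPLIANCE_STATUS();

-- Review recent audit results directly.
SELECT audit_start_timestamp, database_size_bytes, license_size_bytes
FROM v_catalog.license_audits
ORDER BY audit_start_timestamp DESC
LIMIT 5;
```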
Contents
Vertica 12.0.0
About Vertica Release Notes
The Release Notes contain the latest information on new features, changes, fixes, and known issues in Vertica 12.0.x.
They also contain information about issues resolved in:
- Hotfixes
- Service Packs
- Major Releases
- Minor Releases
Downloading Major and Minor Releases, and Service Packs
The Premium Edition of Vertica is available for download at https://support.microfocus.com/downloads/swgrp.html.
The Community Edition of Vertica is available for download at https://www.vertica.com/download/vertica/community-edition.
The documentation is available at https://www.vertica.com/docs/12.0.x/HTML/index.htm.
Downloading Hotfixes
Hotfixes are available to Premium Edition customers only. Contact Vertica support for information on downloading hotfixes.
Deprecated and Removed in Vertica 12.0.0
Vertica retires product functionality in two phases:
- Deprecated: Vertica announces deprecated features and functionality in a major or minor release. Deprecated features remain in the product and are functional. Published release documentation announces deprecation on this page. When users access this functionality, it may return informational messages about its pending removal.
- Removed: Vertica removes a feature in a major or minor release that follows the deprecation announcement. Users can no longer access the functionality, and this page is updated to verify removal. Documentation that describes this functionality is removed, but remains in previous documentation versions.
Deprecated
No deprecated functionality in this release.
Removed
No removed functionality in this release.
Resolved in 12.0.0
Updated 06/17/2022
This release addresses the issues below.
Issue Key | Component | Description |
---|---|---|
VER-61257 | Data load / COPY | Added the load option "ON EACH NODE". If the data to be loaded resides under the same file path on the filesystem of multiple nodes, a COPY statement can specify "ON EACH NODE" instead of "path ON node1, path ON node2, ...". This loads data from the same file path on every node where the data is available (see the sketch after this table). |
VER-67109 | Catalog Engine | In a database with a large number of projections, querying the system table vs_segment or views of vs_segment sometimes returned an error that the MaxParsedQuerySizeMB limit had been exceeded. This issue has been resolved. |
VER-73674 | Admin Tools | Adding two nodes to a one-node Enterprise Mode database required the database to be rebalanced and made K-safe. Attempts to rebalance and achieve K-safety on the admintools command line ended in an error, and attempts to update K-safety in the admintools GUI also failed. These issues have been resolved. |
VER-79756 | Client Drivers - VSQL, Security | When TLSMode was set to require, vsql did not allow connections if the SSL certificates were expired. This issue has been resolved. |
VER-80448 | UI - Management Console | Some cookie features were not set properly in HTTP headers. This issue has been resolved. |
VER-80619 | Backup/DR | vbr calls to the AWS function DeleteObjects() did not gracefully handle SlowDown errors. This issue has been resolved by changing the boto3 retry logic, so SlowDown errors are less likely to occur. |
VER-80890 | Catalog Engine | Vertica 10.1 added support for bounds on array types. The addition of these bounds changed "typemod," which represents the details of a column (for example, VARCHAR length or timestamp precision). The changes to "typemod" were not correctly accounted for in the "odbc_columns" and "jdbc_columns" metadata tables, which resulted in inconsistent TIMESTAMP column definitions. This issue has been fixed. |
VER-81033 | Security | OpenSSL has been upgraded to 1.1.1n. |
VER-81074 | Execution Engine | '= ANY' expressions on arrays expect an array literal expression argument. If the argument was a string_to_array function call, Vertica tried to interpret it as an array literal and produced a wrong result. This issue has been resolved: before passing a string_to_array function call as an argument to an '= ANY' expression, Vertica now constant-folds the function into an array literal, so the '= ANY' expression can read the argument as expected (see the sketch after this table). |
VER-81121 | Admin Tools | Attempts to replace a down node in the admintools GUI, or on the command line by calling db_replace_node or restart_node with the --force option, failed. This issue has been resolved. |
VER-81214 | UI - Management Console | Previously, when the MC was configured to use LDAP authentication and the MC restarted or the machine rebooted, MC users were unable to log in to the MC with LDAP authentication. This issue has been resolved. |
VER-81248 | Admin Tools | In previous releases, admintools.log was rotated monthly by default, which was too infrequent for many clients given how quickly the log grows. This issue has been resolved: the default rotation frequency for admintools.log is now daily. A minimum size limit is also set, which prevents rotation of logs smaller than 512 KB. |
VER-81318 | UI - Management Console | In some circumstances, the MC browser session would time out after a period of inactivity. This issue has been resolved. |
VER-81358 | Execution Engine | The addition of the configuration parameter EnablePredicateRemoval introduced performance regressions in certain queries. This regression has been corrected. |
VER-81418 | EON, Subscriptions | When an Eon Mode database was in read-only mode, queries on unsegmented projections failed. Now, such queries succeed if there are active replica shard subscriptions on at least one node. |
VER-81470 | Tuple Mover | Mergeout plans converted StorageMerge to StorageUnion+Sort, which slowed down mergeout and adversely affected performance. This issue has been resolved: StorageMerge is no longer converted to StorageUnion+Sort. |
VER-81474 | Execution Engine | The misleading counter name "number of bytes read from communal storage" has been changed to "number of bytes read from persistent storage" (see the sketch after this table). |
VER-81526 | Backup/DR | When vbr failed to launch vertica-download-file, or vertica-download-file ended with an error, vbr raised an error because it tried to load the error message as JSON. This issue has been resolved with improved error handling. |
VER-81572 | Kubernetes | In Kubernetes environments, the memory_usage view could report memory usage as a negative value. This occurred because the overall memory size in the system was reduced, but the other memory usage components, such as bytes free, were not reduced. Because memory usage was reported at the host level, Vertica reported the usage as a negative percentage when free memory was greater than total memory. This issue has been resolved. Kubernetes environments now gather all memory information from cgroup information (/sys/fs/cgroup/memory) rather than /proc/memory. |
VER-81639 | Execution Engine | Running analyze_statistics sometimes caused significant, unexpected slowdowns on concurrent queries. This slowdown was most noticeable on subsecond queries with StorageMerge operators in the plan. This issue has been resolved. |
VER-81684 | Client Drivers - ODBC | TIMESTAMPTZ and TIMETZ values map to the SQL_TYPE_TIMESTAMP and SQL_TYPE_TIME types in the ODBC specification, which do not include the time zone. The ODBC driver handles them identically to the TIMESTAMP and TIME types. |
VER-81692 | Client Drivers - ODBC | Previously, the ODBC driver could crash when used with OCaml due to a bug in the Simba SDK. The Vertica 12.0 ODBC driver uses an updated version of the Simba SDK that fixes this issue. |
VER-81694 | Catalog Engine | Queries on the system view V_MONITOR.PARTITIONS occasionally returned an error that the limit set by MaxParsedQuerySizeMB had been exceeded. This issue has been resolved. |
VER-81729 | UI - Management Console | Management Console was unable to start a database if the URL for communal storage ended with a slash ('/'). This issue has been resolved. |
VER-81751 | Execution Engine | Queries on the VARBINARY(16) data type performed poorly. This issue has been resolved. |
VER-81830 | Security | Users synced from LDAP with an obsolete DN become orphaned. Previously, if you dropped these orphaned users and recreated them with LDAP Link, their roles were not preserved. This issue has been fixed; such users now retain their roles when recreated with LDAP Link. |
VER-81920 | Tuple Mover | For projections with partition grouping, Reshard Mergeout can now efficiently merge both resharded and regular containers placed in different layers of strata. |
VER-81986 | Depot | In Eon Mode, if the node servicing a shard was a non-participating primary subscriber, it could not fetch data from communal storage and cache it in the depot. This issue has been resolved. Now, if a node is down and another node temporarily serves the shard, that node fetches the file from communal storage and caches it in the depot without evicting any data. |
VER-82102 | Security | Previously, running LDAP_LINK_DRYRUN_SEARCH and LDAP_LINK_DRYRUN_SYNC without the optional parameter 'LDAPLinkJoinAttr' would cause the database to go down. This issue has been fixed. |
VER-82114 | Catalog Engine, DDL, UDX | Previously, CREATE PROCEDURE did not properly check for duplicate formal parameters in cases where they differed only by letter case (that is, upper or lower). This issue has been fixed (see the sketch after this table). |
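The following sketch illustrates the ON EACH NODE load option from VER-61257. The table, file path, and node names are hypothetical; the option assumes the same path exists on the local filesystem of each node.

```sql
-- Before: each node and path had to be listed explicitly.
COPY sales FROM '/data/sales.csv' ON v_vmart_node0001,
                '/data/sales.csv' ON v_vmart_node0002,
                '/data/sales.csv' ON v_vmart_node0003
DELIMITER ',';

-- After: one clause loads the same path from every node where the file is available.
COPY sales FROM '/data/sales.csv' ON EACH NODE DELIMITER ',';
```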
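For VER-81074, a minimal sketch of the kind of expression that was affected. The table and values are hypothetical, and the exact STRING_TO_ARRAY input format shown here is an assumption.

```sql
-- The STRING_TO_ARRAY call is now constant-folded into an array literal before the
-- '= ANY' comparison, so the predicate evaluates correctly.
SELECT order_id
FROM orders
WHERE status = ANY (STRING_TO_ARRAY('[shipped,delivered]'));
```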
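For VER-81474, the renamed counter can be checked in the EXECUTION_ENGINE_PROFILES system table. This is a sketch; rows appear only for profiled or currently executing statements.

```sql
SELECT node_name, operator_name, counter_value
FROM v_monitor.execution_engine_profiles
WHERE counter_name = 'number of bytes read from persistent storage'
ORDER BY counter_value DESC;
```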
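For VER-82114, a sketch of the duplicate-parameter case that is now rejected. The procedure is hypothetical and assumes Vertica's PLvSQL stored procedure syntax; because unquoted identifiers are case-insensitive, x and X name the same formal parameter.

```sql
CREATE PROCEDURE add_pair(x INT, X INT)
LANGUAGE PLvSQL AS $$
DECLARE
    total int;
BEGIN
    -- The duplicate formal parameters are now detected and reported at creation time.
    total := x + X;
END;
$$;
```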
Known Issues in 12.0.0
Updated 06/17/2022
Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.
Issue Key | Component | Description |
---|---|---|
VER-78310 | Client Drivers - JDBC | JDBC complex types return NULL arrays as empty arrays. For example, when executing the SQL statement SELECT ARRAY[null,ARRAY[1,2,3],null,ARRAY[4,5],null] as array; the server returns the array column as [[],[1,2,3],[],[4,5],[]]. Because of the null values in the array literal, it should instead return [null,[1,2,3],null,[4,5],null]. This behavior is a workaround for a limitation in Simba. |
VER-78074 | Procedural Languages, UDX | Stored procedures that contain DML queries on tables with key constraints do not return a value. |
VER-69803 | Hadoop | The infer_external_table_ddl meta-function will hang if you do not set authentication information in the current session when analyzing a file on GCP (see the sketch after this table). |
VER-64916 | Kafka Integration | When Vertica exports data collector information to Kafka via a notifier, the serialization logic does not properly handle Vertica internal data type translation. When importing or copying that data back into the Management Console, Vertica uses the raw data in Kafka and inserts garbled data into the extended monitoring database. |
VER-64352 | SDK | Under special circumstances, a sequence of statements to create and drop Python libraries fails. For example, the following session statements attempt to create Python libraries lib1 and lib2: CREATE LIBRARY lib1 AS :lib1_path DEPENDS :numpy_path LANGUAGE 'Python'; DROP LIBRARY lib1; CREATE LIBRARY lib2 AS :lib2_path DEPENDS :sklearn_path LANGUAGE 'Python'; Here, lib1 is a Python library that imports module numpy, while lib2 imports module sklearn, which relies on numpy. After lib1 is dropped, the CREATE LIBRARY statement for lib2 fails. |
VER-62983 | Hadoop | When hcatalogconnector schemas are created with custom_partitions enabled, poor performance has been observed when there are many (500+) partitions. By default, custom_partitions are disabled on hcatalogconnector schemas. |
VER-61420 | Data Removal - Delete, Purge, Partitioning | Partition operations such as move_partitions_to_table must split storage containers that hold both partitions that match and partitions that do not match the operation. Version 9.1 introduced an inefficiency where such a split can produce an extra storage container. In this case, the tuple mover eventually merges the extra container (see the sketch after this table). |
VER-61069 | Execution Engine | In very rare circumstances, if a Vertica process crashes during shutdown, the remaining processes might hang indefinitely. |
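For VER-69803, a hedged sketch of setting GCS authentication in the session before calling the meta-function. The HMAC key, secret, and bucket path are placeholders, and the GCSAuth session parameter and the USING PARAMETERS form of the call are assumptions based on standard Vertica GCS configuration.

```sql
-- Provide GCS HMAC credentials for the current session (placeholder values).
ALTER SESSION SET GCSAuth = 'hmac_key_id:hmac_secret';

-- With authentication set, the meta-function can analyze the file instead of hanging.
SELECT INFER_EXTERNAL_TABLE_DDL('gs://example-bucket/data/file.parquet'
                                USING PARAMETERS format='parquet', table_name='my_table');
```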
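For VER-61420, a sketch of the kind of partition operation involved; the table names and partition key range are hypothetical.

```sql
-- Move partition keys 2021-01-01 through 2021-06-30 from sales to sales_archive.
-- Containers that hold both matching and non-matching partitions must be split,
-- and the extra container produced by the split is eventually merged by the Tuple Mover.
SELECT MOVE_PARTITIONS_TO_TABLE('sales', '2021-01-01', '2021-06-30', 'sales_archive');
```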
Legal Notices
Warranty
The only warranties for products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Micro Focus shall not be liable for technical or editorial errors or omissions contained herein.
The information contained herein is subject to change without notice.
Restricted Rights Legend
Confidential computer software. Valid license from Micro Focus required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
Copyright Notice
© Copyright 2006 - 2022 Micro Focus, Inc.
Trademark Notices
Adobe® is a trademark of Adobe Systems Incorporated.
Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.
UNIX® is a registered trademark of The Open Group.