IMPORTANT: Before Upgrading: Identify and Remove Unsupported Projections
With version 9.2, Vertica has removed support for pre-join and range segmentation projections. If a table's only superprojection is one of these projection types, the projection is regarded as unsafe.
Before upgrading to 9.2 or higher, you are strongly urged to check your database for unsupported projections. If the upgrade encounters these projections, it is liable to fail, and you must then revert to the previous installation.
Solution: Run the pre-upgrade script
Vertica provides a pre-upgrade script that examines your current database and writes its analysis and recommendations to standard output. The script identifies and lists any unsupported projections. If the script finds projection buddies with different SELECT and ORDER BY clauses, it generates a deploy script. Run that script to remedy the projections so that they comply with system K-safety.
https://www.vertica.com/pre-upgrade-script/
For more information, see Fixing Unsafe Buddy Projections in the Vertica documentation.
Updated: 1/11/2022
About Vertica Release Notes
The Release Notes contain the latest information on new features, changes, fixes, and known issues in Vertica 9.3.x.
They also contain information about issues resolved in:
- Hotfixes
- Service Packs
- Major Releases
- Minor Releases
Downloading Major and Minor Releases, and Service Packs
The Premium Edition of Vertica is available for download at https://support.microfocus.com/downloads/swgrp.html.
The Community Edition of Vertica is available for download at https://www.vertica.com/download/vertica/community-edition.
The documentation is available at https://www.vertica.com/docs/9.3.x/HTML/index.htm.
Downloading Hotfixes
Hotfixes are available to Premium Edition customers only. Contact Vertica support for information on downloading hotfixes.
What's New in Vertica 9.3.1
Take a look at the Vertica 9.3.1 New Features Guide for a complete list of additions and changes introduced in this release.
Backup, Restore, Recovery, and Replication
Support for Eon Mode On-Premise
An Eon Mode database can run entirely in the cloud (on AWS) or on-premise. You can now back up an on-premise database to AWS or to a Pure Storage FlashBlade appliance. You can perform full backups, full restores, and object restores from full backups.
To perform a cross-endpoint backup (or restore), you must set some additional environment variables. The vbr configuration file does not change. For more information, see Cross-Endpoint Backups in Eon Mode.
Configuration
Enforcement of Valid Parameter Input
Beginning with this release, Vertica validates user input for configuration parameters more rigorously.
For example, in past releases you could set a configuration parameter to a value with invalid trailing data. If you set parameter MaxClientSessions to 50.83 or 42plus, in both cases Vertica stripped off the invalid data.
Now, Vertica validates input for MaxClientSessions by checking whether the value is a valid unsigned 32-bit integer; if not, it returns an error.
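As a sketch of the new behavior (the exact way you set the parameter may vary by version; SET_CONFIG_PARAMETER is one common form):

```sql
-- Valid: a clean unsigned 32-bit integer
SELECT SET_CONFIG_PARAMETER('MaxClientSessions', 50);

-- Previously the trailing '.83' was silently stripped;
-- now a value like this is rejected with an error
SELECT SET_CONFIG_PARAMETER('MaxClientSessions', '50.83');
```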
Flex Tables
Avro Parser Support for Deflate Compression
The favroparser now supports deflate compression. This new feature allows you to load Avro data files compressed with the deflate codec into a flex table. See Loading Avro Data for more information.
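A minimal sketch of loading a deflate-compressed Avro file into a flex table (the table name and file path are illustrative):

```sql
-- Create a flex table and load an Avro file whose blocks
-- were written with the deflate codec
CREATE FLEX TABLE avro_events();
COPY avro_events FROM '/data/events.avro' PARSER favroparser();
```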
Kafka
The Kafka integration in Vertica 9.3.1 has been tested with Kafka versions 2.2.1, 2.1, and 2.0. See Integrating with Apache Kafka for more information.
Machine Learning
Addition of Bisecting K-means
Vertica now provides the ability to cluster data using the bisecting k-means algorithm. See Clustering Data Hierarchically Using bisecting k-means for an introduction with an extended example. The following functions have been added:
- BISECTING_KMEANS
- APPLY_BISECTING_KMEANS
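A hedged sketch of training and applying a bisecting k-means model; the model, table, and column names are illustrative, and the full parameter list is in the documentation:

```sql
-- Train a model that splits the data into up to 5 clusters
SELECT BISECTING_KMEANS('bkm_model', 'sales_points', 'x, y', 5);

-- Assign each row to one of the clusters using the trained model
SELECT x, y,
       APPLY_BISECTING_KMEANS(x, y USING PARAMETERS
                              model_name='bkm_model',
                              number_clusters=5) AS cluster_id
FROM sales_points;
```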
Parquet Export
EXPORT TO PARQUET Logs Output Events in UDX_EVENTS
EXPORT TO PARQUET previously logged information about the files it exports in the Vertica log. Now it additionally logs information in the UDX_EVENTS system table, allowing you to see information from all participating nodes in one place. See Monitoring Exports for more information about how to use this table.
Export to Google Cloud Storage Supported
You can use EXPORT TO PARQUET to export data to GCS. The requirements are similar to those for AWS S3; you must specify an authentication token in a configuration parameter, and data is exported directly to the target directory instead of being written to a temporary location first. For more information, see Exporting to S3 and GCS.
Partitioning
Null Handling Support
Partition clauses now support the function ZEROIFNULL. This function can check a PARTITION BY expression for null values and evaluate them to 0.
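For example, wrapping the partition expression in ZEROIFNULL sends rows with a NULL key to partition 0 (table and column names are illustrative):

```sql
-- Rows with a NULL region_id evaluate to partition key 0
CREATE TABLE orders (
    order_id  INT,
    region_id INT
)
PARTITION BY ZEROIFNULL(region_id);
```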
Projections
Operations on Top-K Projections
You can now perform the following operations on anchor tables with Top-K projections:
- DELETE
- UPDATE
- MERGE
Refreshing Top-K Projections
You can now use the following metafunctions on anchor tables with Top-K projections:
- REFRESH
- REFRESH_COLUMNS with the refresh-mode REBUILD
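A sketch of both metafunctions on an anchor table with a Top-K projection (object and column names are illustrative):

```sql
-- Refresh all projections of the anchor table, including Top-K projections
SELECT REFRESH('store.sales');

-- Refresh a SET USING column in REBUILD mode
SELECT REFRESH_COLUMNS('store.sales', 'discounted_price', 'REBUILD');
```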
SQL Functions and Statements
S3EXPORT Null Handling
S3EXPORT now supports a new null_as parameter, which specifies how to export null values from the source data. If this parameter is included, S3EXPORT replaces all null values with the specified string.
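A hedged example of the new parameter; the bucket URL, table, and columns are illustrative:

```sql
-- Export, writing every NULL in the source data as the literal string 'NULL'
SELECT S3EXPORT(customer_id, region USING PARAMETERS
                url='s3://example-bucket/export/customers.csv',
                null_as='NULL') OVER() FROM customers;
```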
New Metafunction: GET_PRIVILEGES_DESCRIPTION
Because privileges on database objects can come from several different sources such as explicit grants, roles, and inheritance, they can be difficult to monitor. To address this, the new GET_PRIVILEGES_DESCRIPTION metafunction has been introduced. It provides a unified view of their effective privileges across all sources on a specified database object.
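A sketch of querying effective privileges on a single object (the object name is illustrative):

```sql
-- Unified view of the current user's effective privileges on a table,
-- combining explicit grants, roles, and inherited privileges
SELECT GET_PRIVILEGES_DESCRIPTION('table', 'online_sales.orders');
```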
Helper Functions for Parquet Data
The GET_METADATA function inspects a Parquet file and returns its metadata, including columns, row groups, and sizes. You can use this information to help you define external tables and to confirm the results of exports from Vertica.
The INFER_EXTERNAL_TABLE_DDL function inspects a Parquet file and returns a starting point for the definition of an external table. This function can be especially helpful for tables with many columns of mostly primitive types. For some types, the function cannot infer the data type and you must edit the output.
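A sketch of both helpers on a hypothetical file; the path and table name are illustrative, and the exact argument list is in the documentation:

```sql
-- Inspect columns, row groups, and sizes of a Parquet file
SELECT GET_METADATA('/data/sales.parquet');

-- Generate a starting CREATE EXTERNAL TABLE definition; review the
-- output, since some column types cannot be inferred
SELECT INFER_EXTERNAL_TABLE_DDL('/data/sales.parquet', 'sales');
```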
COMMENT ON COLUMN Table Compatibility
You can now comment on table columns with COMMENT ON COLUMN. For more information, see COMMENT ON TABLE COLUMN.
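For example (schema, table, and column names are illustrative):

```sql
COMMENT ON COLUMN online_sales.orders.ship_date IS
    'Date the order left the warehouse';
```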
Security and Authentication
SSLCA Now Accepts Multiple Certificate Authorities
You can now trust more than one CA with the SSLCA parameter. For more information, see Security Parameters.
LDAP Dry Run Metafunctions
The LDAP_LINK_DRYRUN family of metafunctions allows you to tweak your LDAP Link settings before syncing with Vertica. Each dry run metafunction takes LDAP Link parameters as arguments and tests a separate part of LDAP Link:
- LDAP_LINK_DRYRUN_CONNECT - Connecting to the LDAP server
- LDAP_LINK_DRYRUN_SEARCH - Searching for LDAP users and groups
- LDAP_LINK_DRYRUN_SYNC - Mapping and synchronizing LDAP users and groups to their equivalents in Vertica, creating and orphaning them accordingly
These metafunctions are meant to be used and tested in succession, and their arguments are cumulative. That is, the parameters you use for configuring LDAP_LINK_DRYRUN_CONNECT are used for LDAP_LINK_DRYRUN_SEARCH, and the arguments for those functions are used for LDAP_LINK_DRYRUN_SYNC.
For detailed instructions on using these metafunctions, see Configuring LDAP Link with Dry Runs.
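A sketch of the first dry run step; the server URL, bind DN, and password are illustrative, and later dry runs add the search and sync arguments cumulatively:

```sql
-- Test only the connection to the LDAP server
SELECT LDAP_LINK_DRYRUN_CONNECT('ldap://ldap.example.com',
                                'CN=admin,DC=example,DC=com',
                                'ldap_password');
```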
New LDAP Link Parameters
New parameters have been added for configuring LDAP Link.
- LDAPLinkConfigFile - If set to the path of a .LDIF file, LDAP Link will use the file as the source tree
- LDAPLinkTLSCADir - If set to the path of a directory containing CA certificates, LDAP Link will use the CA files to connect to the LDAP server.
See LDAP Link Parameters for more information.
Statistics
ANALYZE_STATISTICS Support for Global Temporary Tables
You can now call ANALYZE_STATISTICS on global temporary tables. As with local temporary tables, you can only obtain statistics on a global temporary table that is created with the option ON COMMIT PRESERVE ROWS. Otherwise, Vertica deletes table content on committing the current transaction, so no table data is available for analysis.
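A minimal sketch (table and column names are illustrative):

```sql
-- ON COMMIT PRESERVE ROWS keeps the data past commit,
-- so there are rows for ANALYZE_STATISTICS to sample
CREATE GLOBAL TEMPORARY TABLE session_metrics (
    user_id INT,
    clicks  INT
) ON COMMIT PRESERVE ROWS;

SELECT ANALYZE_STATISTICS('session_metrics');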
Supported Data Types
Arrays and Maps for External Tables Using Parquet Data
When working with Parquet files containing complex types, you can now define tables using arrays and maps of primitive types, allowing you to read a larger range of existing data.
You can query arrays, use them in joins and other operations, and use Array Functions on both array columns and literals. See Reading Arrays and the ARRAY data type for more information.
You can define maps, which allows you to read files containing them, but you cannot query them. See Reading Maps and the MAP data type for more information.
Arrays and maps may contain only primitive types. They cannot contain rows, arrays, or maps.
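A sketch of an external table with an array of a primitive type; the path and names are illustrative:

```sql
-- Array column over Parquet data; arrays of primitives can be
-- queried, joined, and passed to array functions
CREATE EXTERNAL TABLE customers (
    id   INT,
    tags ARRAY[VARCHAR(32)]
) AS COPY FROM '/data/customers/*.parquet' PARQUET;

SELECT id, tags FROM customers;
```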
System Tables
MERGEOUT_PROFILES
The new MERGEOUT_PROFILES system table returns information about automatic mergeout operations, making them easier to monitor and troubleshoot.
New LOAD_SOURCES Columns
System table LOAD_SOURCES now includes four new columns that return how much time (in microseconds) is consumed by various user-defined load functions:
- CLOCK_TIME_SOURCE
- CLOCK_TIME_FILTERS
- CLOCK_TIME_CHUNKER
- CLOCK_TIME_PARSER
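A sketch of comparing where load time is spent; the four timing columns come from this note, while any other columns you select alongside them depend on your version:

```sql
-- Time (in microseconds) spent in each user-defined load phase
SELECT clock_time_source, clock_time_filters,
       clock_time_chunker, clock_time_parser
FROM v_monitor.load_sources;
```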
LDAP Dry Run Table
The LDAP_LINK_DRYRUN_EVENTS system table contains the results from the new LDAP dry run metafunctions.
UDX_EVENTS
The new UDX_EVENTS system table records events logged from user-defined functions, if they logged any. See Logging in Extending Vertica for information about how to log events in UDxs that you write.
Text Search
StringTokenizerDelim Optimizations and Changes
The preconfigured tokenizer StringTokenizerDelim has been optimized. It has also seen a slight change in behavior: an empty OVER() clause returns a single column for the tokenized values. Previously, the function would return an additional column for the original input. To use the old behavior, include a PARTITION BY clause in OVER().
For more information on this change and other preconfigured tokenizers, see Preconfigured Tokenizers.
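A sketch of the two behaviors (the input table and column are illustrative):

```sql
-- Empty OVER(): returns a single column of tokens
SELECT v_txtindex.StringTokenizerDelim('a|b|c', '|') OVER();

-- PARTITION BY restores the pre-9.3.1 behavior of also
-- returning the original input column
SELECT v_txtindex.StringTokenizerDelim(phrase, ',')
       OVER(PARTITION BY phrase)
FROM phrases;
```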
admintools
create_db: New Behavior and Options
Previously, create_db rolled back the entire operation on failure, deleting log files and making it difficult to diagnose problems. The following changes modify this behavior:
- By default, create_db now preserves log directories on failure.
- --force-cleanup-on-failure removes existing directories when create_db fails.
- --force-removal-at-creation removes existing directories before attempting to create the database.
What's Deprecated in Vertica 9.3.1
This section describes the two phases Vertica follows to retire Vertica functionality:
- Deprecated: Vertica announces deprecated features and functionality in a major or minor release. Deprecated features remain in the product and are functional. Published release documentation announces deprecation on this page. When users access this functionality, it may return informational messages about its pending removal.
- Removed: Vertica removes a feature in a major or minor release that follows the deprecation announcement. Users can no longer access the functionality, and this page is updated to verify removal. Documentation that describes this functionality is removed, but remains in previous documentation versions.
Deprecated
The following Vertica functionality was deprecated and will be retired in future versions:
No deprecated functionality
Removed
The following functionality is no longer accessible as of the releases noted:
- Starting with release 9.3.0, support for various WOS-related functionality will be phased out from new databases. See Phase-out of WOS Support for details.
For more information see Deprecated and Retired Functionality in the Vertica documentation.
Vertica 9.3.1-31: Resolved Issues
Release Date: 01/11/2022
This hotfix addresses the issues below.
Issue | Component | Description
VER-80161 | UI - Management Console | The CVE-2021-45105 security vulnerability was found in earlier versions of the Apache log4j library used by the MC. The library has been updated to resolve this issue. |
VER-80172 | Kafka Integration | Security vulnerabilities CVE-2021-45105 and CVE-2021-44832 were found in earlier versions of the log4j library used by the Vertica/Apache Kafka integration. The library has been updated to resolve this issue. |
VER-80245 | UI - Management Console | The CVE-2021-44832 security vulnerability was found in earlier versions of the Apache log4j library used by the MC. The library has been updated to resolve this issue. |
Vertica 9.3.1-30: Resolved Issues
Release Date: 12/23/2021
This hotfix addresses the issues below.
Issue | Component | Description
VER-80114 | Kafka Integration | This release updates the Kafka integration’s Log4j library. The updated library addresses the CVE-2021-44228 and CVE-2021-45046 security vulnerabilities found in earlier versions. |
VER-80113 | UI - Management Console | This release updates the Management Console’s Log4j library. The updated library addresses the CVE-2021-44228 and CVE-2021-45046 security vulnerabilities found in earlier versions. |
Vertica 9.3.1-28: Resolved Issues
Release Date: 10/20/2021
This hotfix addresses the issues below.
Issue | Component | Description
VER-78126 | Catalog Engine | Vertica now restarts properly for nodes that have very large checkpoint files. |
VER-79048 | Optimizer | Queries with multiple distinct aggregates sometimes produced wrong results when inputs appeared to be segmented on the same columns as distinct aggregate arguments. This issue has been resolved. |
Vertica 9.3.1-27: Resolved Issues
Release Date: 9/27/2021
This hotfix addresses the issues below.
Issue | Component | Description
VER-78948 | UI - Management Console | Management Console returned errors when configuring email gateway aliases that included hyphen (-) characters. This issue has been resolved. |
Vertica 9.3.1-26: Resolved Issues
Release Date: 8/11/2021
This hotfix addresses the issues below.
Issue | Component | Description
VER-77636 | Data Export | Added support for exporting UUID types via s3export. Previously, exporting data with UUID types using s3export could crash the initiator node. |
VER-77846 | Kafka Integration | The Kafka Scheduler now allows an initial offset of -3, which indicates to begin reading from the consumer group offset. |
VER-78134 | Kafka Integration | The Kafka scheduler no longer hangs in rare cases. |
VER-78349 | Data Networking | In rare circumstances, the socket on which Vertica accepts internal connections could erroneously close and send a large number of socket-related error messages to vertica.log. This issue has been resolved. |
VER-78368 | ILM, Tuple Mover | Several Vertica partition management operations support a force-split option: copy_partitions_to_table, drop_partitions, move_partitions_to_table, and swap_partitions_between_tables. In all cases, if force-split is true and the operation specifies a range of partition keys that spans multiple containers or part of a single container, Vertica executes the operation in two steps: (1) breaks open the ROS containers, then (2) reorganizes and merges the ROS containers as the operation requires. Occasionally, the Tuple Mover attempted to merge the split ROS containers before step 2 was complete, and the operation failed. This issue has been resolved: the two steps are now part of a single transaction. Until the transaction completes, the Tuple Mover does not queue merge requests on the split containers. |
Vertica 9.3.1-25: Resolved Issues
Release Date: 6/10/2021
This hotfix addresses the issues below.
Issue | Component | Description
VER-76628 | Recovery | If a user restarts a node with --force on a node with a missing or corrupted DFS file, Vertica now recovers the DFS file from an available UP node. If a valid DFS file cannot be recovered from an UP node, the file is dropped with a warning message. |
VER-77284 | Security | VoltageSecureProtect() now guarantees a properly sized buffer for containing ciphertexts created with Unicode FPE formats. |
VER-77297 | Backup/DR | Vertica vbr can now delete restore points that are already partially deleted. |
VER-77411 | Hadoop | Vertica now properly parses uncompressed parquet files. |
VER-77525 | Security | Vertica now automatically creates needed default key projections for a user with DML access when that user performs an INSERT into a table with a primary key and no projections. |
Vertica 9.3.1-24: Resolved Issues
Release Date: 5/21/2021
This hotfix addresses the issues below.
Issue | Component | Description
VER-77193 | Execution Engine | Vertica was unable to optimize certain queries on v_internal tables where equality predicates (with operator =) filtered on columns relname or nspname. These queries are now optimized as expected. |
VER-77183 | Spread | Vertica now properly detects duplicate tokens with the same arq ID. |
VER-77156 | Tuple Mover | Previously, the Tuple Mover attempted to merge all eligible ROS containers without considering resource pool capacity. As a result, mergeout failed if the resource pool could not handle the mergeout plan size. This issue has been resolved: the Tuple Mover now takes into account resource pool capacity when creating a mergeout plan, and adjusts the number of ROS containers accordingly. |
VER-76974 | Backup/DR | Object restore now functions properly for objects >1 TB on Eon Mode clusters running on S3. |
VER-76657 | UI - Management Console | The MC now properly allows users to complete creating a database design for a non-K-Safe database. |
VER-76623 | Installation Program, UI - Management Console | The Vertica Management Console installer has been optimized to reduce its storage footprint. |
Vertica 9.3.1-23: Resolved Issues
Release Date: 4/20/2021
This hotfix addresses the issues below.
Issue | Component | Description
VER-76860 | Optimizer | Incorrect usage of meta-function EXPORT_TABLES in the SELECT statement of EXPORT TO VERTICA caused the database to crash. This issue has been resolved: incorrect usage of meta-functions now returns with an error. |
Vertica 9.3.1-22: Resolved Issues
Release Date: 4/5/2021
This hotfix addresses the issues below.
Issue | Component | Description
VER-75789 | Kafka Integration | Data ingested through Kafka during a timezone change could result in duplicate records. This issue has been resolved. |
VER-75903 | Tuple Mover | When Mergeout Cache is enabled, the dc_mergeout_requests system table now contains valid transaction ids instead of zero. |
VER-75941 | Tuple Mover | The Tuple Mover logged a large number of PURGE requests on a projection while another MERGEOUT job was running on the same projection. This issue has been resolved. |
VER-75975 | Hadoop | Fixed a bug in Parquet Predicate Pushdown where sometimes the correct rowgroups were not pruned from the Parquet file based on predicates if the file was on HDFS. |
VER-76102 | Tuple Mover | Occasionally, the Tuple Mover dequeued DVMERGEOUT and MERGEOUT requests simultaneously and executed only the DVMERGEOUT requests, leaving the MERGEOUT requests pending indefinitely. This issue has been resolved: now, after completing execution of any DVMERGEOUT job, the Tuple Mover always looks for outstanding MERGEOUT requests and queues them for execution. |
VER-76126 | Monitoring | The output of get_expected_recovery_epoch is no longer truncated in clusters with more than 20 nodes. |
VER-76254 | Data Export | Parquet files exported by Vertica now properly display date values when imported to Impala. |
VER-76258 | Hadoop | Vertica now properly handles queries of external tables of Parquet files with ZSTD compression. |
VER-76402 | Kafka Integration | Kafka UDXs now properly support sasl.mechanism types of SCRAM-SHA-512 and SCRAM-SHA-256. |
VER-76478 | Execution Engine | With Vertica running on machines with very high core counts, complex memory-intensive queries featuring an analytical function that fed into a merge operation sometimes caused a crash if the query ran in a resource pool where EXECUTIONPARALLELISM was set to a high value. This issue has been resolved. |
Vertica 9.3.1-21: Resolved Issues
Release Date: 2/9/2021
This hotfix addresses the issues below.
Issue | Component | Description
VER-75640 | Optimizer | Queries with subqueries on tables with Top-K projections sometimes returned with an error. This issue has been resolved. |
VER-75785 | Optimizer | Setting a projection column comment to an empty string with COMMENT ON COLUMN caused the database to crash. This issue has been resolved. |
VER-75787 | Machine Learning | The RF_CLASSIFIER machine learning function no longer triggers an out of memory error in response to some parameter combinations. |
Vertica 9.3.1-20: Resolved Issues
Release Date: 1/22/2021
This hotfix addresses the issues below.
Issue | Component | Description
VER-75394 | Execution Engine | When a subquery was used in an expression and implied a correlated join with relations in the parent query, and also aggregated output, it sometimes returned multiple rows for each matched join key and returned unexpected results. This issue has been resolved. |
VER-75505 | Admin Tools, Spread | When one database was quickly stopped and a different database started, the new database sometimes failed to start because it attempted to connect to Spread processes associated with the stopped database. This issue has been resolved. |
VER-75512 | Security | Vertica LDAP Link now supports ranges of LDAP attribute values. |
VER-75533 | Security, UI - Agent | Client initiated SSL/TLS renegotiation is now disabled on port 5444. |
VER-75534 | Optimizer | Analytic functions triggered an Optimizer error when used with a LIMIT OVER clause. This issue has been resolved. |
Vertica 9.3.1-19: Resolved Issues
Release Date: 12/14/2020
This hotfix addresses the issues below.
Issue | Component | Description
VER-74582 | Execution Engine | Under some circumstances, the size of the METADATA resource pool was calculated incorrectly. This issue has been resolved. |
VER-74963 | Storage and Access Layer | Under certain conditions, Vertica crashed while cancelling an insert or copy transaction in Eon Mode. One instance of this issue has been corrected. |
VER-75012 | S3 | Vertica no longer displays a VIAssert error when clearing the session level parameter AWSSessionToken. |
VER-75013 | Tuple Mover | Enabling both mergeout caching (deprecated) and reflexive mergeout adversely affected performance on a new or restarted database. This issue has been resolved: when reflexive mergeout is enabled, mergeout caching is automatically disabled. |
VER-75016 | Hadoop | A Vertica node no longer fails when querying an external table that copies data from partitioned parquet file with an incorrect filter data type. |
VER-75094 | Execution Engine | Enabling inter-node encryption adversely affected query performance across a variety of workloads. In many cases, this issue has been addressed with improved performance. |
VER-75248 | Security | Previously, CREATE OR REPLACE VIEW could only replace an existing view if the current user was the existing view's owner. Now, DROP privilege is sufficient to replace the view (in addition to CREATE on the SCHEMA, as before). |
Vertica 9.3.1-18: Resolved Issues
Release Date: 11/19/2020
This hotfix addresses the issues below.
Issue | Component | Description
VER-74649 | UI - Management Console | If there are multiple IP addresses available on a Vertica cluster, Vertica now prioritizes reachable IP host addresses when trying JDBC connections to the database. |
VER-74714 | UI - Management Console | If multiple IP addresses are available on a Vertica cluster host, Vertica now confirms that the Agent IP address is reachable and is on a reachable subnet from the MC host. |
Vertica 9.3.1-17: Resolved Issues
Release Date: 10/5/2020
This hotfix addresses the issues below.
Issue | Component | Description
VER-73872 | Execution Engine | Querying UUID data types with an IN operator ran significantly slower than an equivalent query using OR. This problem has been resolved. |
VER-74017 | Backup/DR | Users can now restore full database backups into an Eon mode cluster where one of the nodes has a changed IP address. |
VER-74169 | Data Load / COPY | Added the configuration parameter 'UDLMaxDataBufferSize', with a default value of 256 x 1024 x 1024 bytes (256 MB). Increase this value to avoid 'insufficient memory' errors during user-defined load. |
Vertica 9.3.1-16: Resolved Issues
Release Date: 9/15/2020
This hotfix addresses the issues below.
Issue | Component | Description
VER-74003 | Execution Engine | The query_profiles table now reports the correct number of rows processed when querying an external table stored in parquet files on HDFS. |
VER-74134 | Execution Engine | Queries sometimes threw a Sort Order Violation error when join output that was sorted like the target projection was consumed by a single Analytic function that only partially sorted the data as required by the target projection. This issue has been resolved. |
Vertica 9.3.1-15: Resolved Issues
Release Date: 8/27/2020
This hotfix addresses the issues below.
Issue | Component | Description
VER-73871 | Tuple Mover | If mergeout on a purge request failed because the target table was locked, the Tuple Mover was unable to execute mergeout on later purge requests for the same table. This issue has been resolved. |
VER-73875 | Backup/DR | Added a meaningful error message for debugging when a user terminated a database snapshot session and objects were deleted. |
VER-73870 | Storage and Access Layer | After reading data from Google Cloud Storage via external table or a COPY statement, Vertica now closes TCP connections when users close their sessions. |
Vertica 9.3.1-14: Resolved Issues
Release Date: 8/12/2020
This hotfix addresses the issues below.
Issue | Component | Description
VER-73366 | Optimizer | In some cases, common sub-expressions in the SELECT clause of an INSERT...SELECT statement were not reused, which caused performance degradation. Also, the EXPLAIN-generated query plan occasionally rendered common sub-expressions incorrectly. These issues have been resolved. |
VER-73465 | Optimizer | In some cases, querying certain projections failed to find appropriate partition-level statistics. This issue has been resolved; queries on projections now use partition-level statistics as needed. |
VER-73469 | DDL | If you renamed a schema, the change was not propagated to table DEFAULT expressions that specified sequences of that schema. This issue has been resolved: all sequences of a renamed schema are now updated with the new schema name. |
VER-73471 | Data Load / COPY | Under certain circumstances, the FJsonParser rejecting rows could cause the database to panic. This issue has been resolved. |
VER-73481 | Complex Types, Optimizer | The presence of directed queries sometimes caused "Unknown Expr" warnings with queries involving arrays. This issue has been resolved. |
VER-73484 | Tuple Mover | A ROS of inactive partitions was excluded from mergeout if it was very large relative to the other ROS containers to be merged. Now, a large ROS container of inactive partitions is merged with smaller ROS containers under certain conditions. |
VER-73723 | Data Load / COPY | Copying Parquet files was very slow when the files were poorly written (for example, generated by Go) and had wide VARCHAR columns. This issue has been resolved. |
Vertica 9.3.1-13: Resolved Issues
Release Date: 7/17/2020
This hotfix addresses the issues below.
Issue | Component | Description
VER-73137 | Hadoop | Previously, when accessing data on HDFS, Vertica sometimes connected to the wrong HDFS cluster when namenodes from different clusters had the same hostname. This issue has been resolved. |
VER-73260 | DDL - Projection | Users without the required table and schema privileges were able to create projections if the CREATE PROJECTION statement used the createType hint with an argument of L, D, or P. This problem has been resolved. |
VER-73263 | Execution Engine | In some cases, cancelling a query caused subsequent queries in the same transaction to return with an error that they also had been cancelled. This issue has been resolved. |
VER-73266 | Control Networking | When a wide table had many storage containers, adding a column could sometimes cause the server to fail. The issue is fixed. |
Vertica 9.3.1-12: Resolved Issues
Release Date: 7/1/2020
This hotfix addresses the issues below.
Issue | Component | Description
VER-72905 | Data Load / COPY, SDK | The protocol used to run a user-defined parser in fenced mode had a bug which would occasionally cause the parser to process the same data multiple times. This issue has been resolved. |
VER-72909 | Security | The LDAPLinkSearchTimeout configuration parameter has been restored. |
VER-72948 | Backup/DR | Vertica backups were failing to delete old restore points when the number of backup objects exceeded 10,000 objects. This issue has been resolved. |
VER-72968 | Backup/DR | Vertica no longer fails if an object is being restored at the same time that an object is being created and the objects share the same name in different character cases. |
VER-72974 | Kafka Integration, Security | When a CA alias is omitted while setting up the scheduler's TLS configuration (parameter --ssl-ca-alias), the scheduler now loads all certificates into the trust store. When given an alias, the scheduler still loads only that alias. |
VER-72976 | Kafka Integration | The scheduler now supports CA bundles at the UDX and vkconfig level. |
VER-72997 | Cloud - Amazon, Security | Vertica instances can now only access the S3 bucket that the user specified for communal storage. |
VER-73011 | Backup/DR | The error message "Trying to delete untracked object" has been replaced with the more useful "Trying to delete untracked object: This is likely caused by inconsistent backup metadata. Hint: Running the quick-repair task can resolve the backup metadata inconsistency. Running the full-check task can provide more thorough information and guidance on how to fix this issue." |
Vertica 9.3.1-11: Resolved Issues
Release Date: 6/9/2020
This hotfix addresses the issues below.
Issue | Component | Description
VER-72839 | Catalog Engine, Execution Engine | Dropping a column no longer corrupts partition-level statistics, which previously caused subsequent runs of the ANALYZE_STATISTICS_PARTITION metafunction on the same partition range to fail. If you have existing corrupted partition-level statistics, drop the statistics and run ANALYZE_STATISTICS_PARTITION to recreate them. |
Vertica 9.3.1-10: Resolved Issues
Release Date: 5/21/2020
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-72707 | DDL | Setting NULL on a table column's DEFAULT expression is equivalent to setting no default expression on that column. So, if a column's DEFAULT/SET USING expression was already NULL, then changing the column's data type with ALTER TABLE...ALTER COLUMN...SET DATA TYPE removes its DEFAULT/SET USING expression. |
Vertica 9.3.1-9: Resolved Issues
Release Date: 5/18/2020
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-72569 | DDL - Table | ALTER TABLE...RENAME was unable to rename multiple tables if projection names of the target tables were in conflict. This issue has been resolved by maintaining local snapshots of the renamed objects. |
VER-72607 | Data Load / Copy | With EnableFastConstLoad mode ON, inserting via Insert Select while not specifying columns with default values which precede specified columns in the table, and while expecting to sort on Select output column, would sometimes result in Sort Order Violation. This issue has been resolved. |
VER-72630 | DDL - Table | Concurrent execution of CREATE TABLE...AS SELECT statements sometimes loaded duplicate data into the same table. This issue has been resolved. |
VER-72637 | Tuple Mover | In previous releases, the TM resource pool always allocated two mergeout threads to inactive partitions, no matter how many threads were specified by its MAXCONCURRENCY parameter. The inability to increase the number of threads available to inactive partitions sometimes caused ROS pushback. Now, the Tuple Mover can allocate up to half of the MAXCONCURRENCY-specified mergeout threads to inactive partitions. |
VER-72710 | DDL | Vertica no longer displays an error when a user creates or renames a table in a user-defined schema with the same name as existing non super unsegmented projection. |
Vertica 9.3.1-8: Resolved Issues
Release Date: 5/7/2020
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-72106 | Optimizer | In some cases, the optimizer required an unusual amount of time to generate plans for queries on tables with complex live aggregate projections. This issue has been resolved. |
VER-72255 | Execution Engine | In some cases, malformed queries on specific columns, such as constraint definitions with unrecognized values or tranformations with improper formats, caused the database to fail. This problem has been resolved. |
VER-72346 | Optimizer | Queries with a mix of single-table predicates and expressions over several EXISTS queries in their WHERE clause sometimes returned incorrect results. This issue has been resolved. |
VER-72352 | UI - Management Console | Previously, an exception occurred if a database that had no password was used as the production database for Extended Monitoring, This issue has been resolved. |
VER-72417 | Hadoop | On some HDFS distributions, if a datanode is killed during a Vertica query to HDFS, the namenode fails to respond for a long time causing Vertica to time out and roll back the transaction. In such cases Vertica used to log a confusing error message, and if the timeout happened during the SASL handshaking process, Vertica would hang without the possibility of being canceled. Now we log a better error message (saying that dfs.client.socket-timeout value should be increased on the HDFS cluster), and the hang during SASL handshaking is now cancelable. |
Vertica 9.3.1-7: Resolved Issues
Release Date: 4/16/2020
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-72066 | Execution Engine | Loading Kafka messages larger than 5MB no longer causes an insufficient memory error. |
VER-72107 | Hadoop | The Vertica function INFER_EXTERNAL_TABLE_DDL is now compatible with Parquet files that use 2-level encoding for lists. |
VER-72109 | DDL - Table | If a column had a constraint and its name contained ASCII and non-ASCII characters, attempts to insert values into this column sometimes caused the database to fail. This issue has been resolved. |
VER-72112 | Kafka Integration, Supported Platforms | Vertica now supports rdkafka in RHEL 8.0,Ubuntu19.10, and Debian 10. |
VER-72113 | Data Removal - Delete, Purge, Partitioning | Vertica now provides clearer messages when drop_partitions fails due to an insufficient resource pool. |
VER-72115 | Hadoop | Previously, Vertica queries accessing HDFS would fail immediately if an HDFS operation returned a 403 error code (such as Server too busy). Vertica now retries the operation. |
VER-72117 | Optimizer, Recovery | Projection data now remains consistent following a MERGE during node recovery in Enterprise mode. |
Vertica 9.3.1-6: Resolved Issues
Release Date: 4/10/2020
This hotfix was internal to Vertica.
Vertica 9.3.1-5: Resolved Issues
Release Date: 4/2/2020
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-71833 | EON | The clean_communal_storage meta-function is now up to 200X faster. |
VER-71866 | Subscriptions | In Eon Mode, when a subscription or node state changes, primary and ETL session nodes now realign to match session participation. |
Vertica 9.3.1-4: Resolved Issues
Release Date: 3/13/2020
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-71622 | Backup/DR | Vbr no longer includes the dbadmin password in logging files when debug level is set to 3 and the dbadmin password is stored in a password file. |
VER-71688 | License | In rare cases, Vertica no longer fails if you audit a flextable created using SET USING. |
Vertica 9.3.1-3: Resolved Issues
Release Date: 3/5/2020
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-71128 | Depot | The tables DEPOT_FILES, DEPOT_EVICTIONS, and DEPOT_FETCHES now contain the column STORAGE_OID. This column contains the storage container oid associated with every file. |
VER-71139 | Optimizer | Moving CONSTRAINTS as a part of CREATE TABLE statement from an ALTER TABLE statement now moves only the constraints of TEMP tables to the CREATE TABLE statement. |
VER-71362 | Admin Tools | Log rotate now functions properly in databases upgraded from 9.2.x to 9.3.x. |
VER-71363 | Kafka Integration | In rare cases, rdkafka that could create an infinite loop. This issue has been resolved. For more information, refer to https://github.com/edenhill/librdkafka/issues/2108 |
VER-71488 | Supported Platforms | The vertica_agent.service can now stop gracefully on Red Hat Enterprise Linux version 7.7. |
Vertica 9.3.1-2: Resolved Issues
Release Date: 2/20/2020
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-71253 | Admin Tools | The Database Designer no longer produces an error when it targets a schema other than the "public" schema. |
VER-71067 | Third Party Tools Integration | NULL input to VoltageSecureProtect and VoltageSecureAccess now returns NULL value. |
Vertica 9.3.1-1: Resolved Issues
Release Date: 2/6/2020
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-70802 | Build and Release | The TZ environment variable now uses the latest time zone data from the Internet Assigned Numbers Authority (IANA). |
VER-70962 | Third Party Tools Integration | The Vertica VoltageSecureProtect function now correctly sizes output when specifying custom Voltage number formats. |
VER-71003 | Catalog Engine | In Eon Mode, projection checkpoint epochs on down nodes now become consistent with the current checkpoint epoch when the nodes resume activity. |
Vertica 9.3.1: Resolved Issues
Release Date: 1/14/2020
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-42720 | Execution Engine | Vertica occasionally generated misleading error messages in certain distributed query executions. This issue has been resolved. |
VER-63714 | Cloud - Google | Previously, exporting parquet files to Google Cloud Storage caused an error. This is now fixed. |
VER-66216 | Admin Tools | Passwords are now validated when creating databases. |
VER-66903 | Kafka Integration | Users can now use kafka_conf to override the kafka ssl setting to allow SASL+SSL authentication to work. |
VER-67029 | Client Drivers - JDBC | The maxconnection limitation check no longer fails after running a distribute query. |
VER-67201 | Client Drivers - ODBC | Previously, the names of files used in copy local operations were not correctly represented within the Windows version of the ODBC/OLEDB drivers; as a result, non-ASCII characters could not be used in these filenames. The same problem existed for the names of rejection and exception filenames specified in a copy local operation. This problem has now been corrected, and non-ASCII characters can be used in all three types of files. |
VER-67497 | Client Drivers - VSQL | Previously, the VSQL options -B (connection backup server/port) and -k/-K (Kerberos service and host name) were incompatible, and the latter would be ignored. Now, these options can be set together. |
VER-67511 | Tuple Mover | Configuration parameter MergeOutInterval specifies in seconds (by default 600) how often the Tuple Mover checks the mergeout request queue for pending requests. If the queue is empty, the Tuple Mover processes storage location move requests. In previous releases, Tuple Mover threads that were triggered by DML operations ignored the MergeOutInterval parameter and processed all pending requests, including storage location move requests. This sometimes resulted in frequent checks for storage location move requests, which could adversely affect performance. To resolve this issue, Tuple Mover threads that are spawned by DML operations are now subject to MergeOutInterval. |
VER-67701 | LocalPlanner | Vertica occasionally failed when running COUNT DISTINCT queries with predicates on lead columns in the projection sort order. This issue has been resolved. |
VER-67964 | Execution Engine | In rare circumstances, if a cluster's partitioning was short-lived due to network problems, then the whole cluster may have gone down. This issue has been resolved. |
VER-67966 | Backup/DR | If a user performed a copycluster task and a swap partition task concurrently, the data that participated in the swap partition could end up missing on the target cluster. This issue has been resolved. |
VER-67980 | Client Drivers - ODBC | Error messages containing Japanese column names now display correctly. |
VER-68414 | SAL | Queries could fail when Vertica 9.2 tried to read a ROS file format from Vertica 6.0 or earlier. Vertica now properly handles files created in this format. |
VER-68549 | UI - Management Console | A problem occurred when an MC Admin user chose to bind to LDAP anonymously, where the setting did not take effect despite clicking 'Apply' and 'Done'. This problem has now been fixed. |
VER-68633 | Security | To ensure that username case is consistent between SESSIONS and other system tables, the SESSIONS system tables are now populated with the username value from the catalog. |
VER-68644 | Tuple Mover | In previous releases, Vertica considered unsafe projections when it tried to advance AHM. Now, Vertica ignores unsafe projections when it determines the AHM.
|
VER-68720 | UDX | GRANT and REVOKE statements now support ALTER and DROP privileges for user-defined functions. |
VER-68726 | Kafka Integration | Vertica no longer fails when it encounters improper Kafka SSL key/certificate setup error information on RedHat Linux. |
VER-68790 | Backup/DR | If a user deleted an object during a backup on EON mode, the backup could potentially fail. This issue has been resolved. |
VER-68853 | Subscriptions | A failure in one EON Mode subcluster no longer has the ability to affect another subcluster. |
VER-68898 | Data Removal - Delete, Purge, Partitioning | DROP_PARTITIONS requires that the range of partition keys to drop exclusively occupy the same ROS container. If the target ROS container contains partition keys that are outside the specified range of partition keys, the force-split argument must be set to true, otherwise the function returns with an error.
In rare cases, DROP_PARTITIONS executes at the same time as a mergeout operation on the same ROS container. When this happens, the function cannot execute the force-split operation and returns with an error. This error message has been modified to better identify the source of the problem, and also provides a hint to try calling DROP_PARTITIONS again. |
VER-69184 | Execution Engine | In cases where COUNT (DISTINCT) and aggregates such as AVG were involved in a query of a numeric datatype input, the GroupGeneratorHashing Step was causing a memory conflict when the size of the input datatype (numeric) was greater than the size of output datatype (float for average), producing incorrect results. This issue has been resolved. |
VER-69217 | Data load / COPY | Sending excessively long inputs to a NUMERIC column in a COPY statement could cause Vertica to crash. This issue has been resolved. |
VER-69229 | Execution Engine | In some cases, queries failed to sort UUIDs correctly if the ORDER BY clause did not specify the UUID column first. This problem has been resolved. |
VER-69292 | Execution Engine, UDX, Vertica Log Text Search | Previously, the v_txtindex.StringTokenizerDelim UDx made copies of the input for each token, which could lead to memory issues. This function no longer makes copies of the original input and now returns one column containing the tokens.
To produce the old behavior that maps the tokens to the original input, include a "PARTITION BY <input_col>" clause inside the OVER() clause of the UDx call. |
VER-69383 | Optimizer - Statistics and Histogram | ANALYZE_ROW_COUNT can no longer change STATISTICS_TYPE from FULL to ROWCOUNT. |
VER-69431 | Data Export | S3EXPORT now supports a null_as parameter that specifies how to export null values from the source data. If this parameter is included, S3EXPORT replaces all null values with the specified string. |
VER-69486 | Security | Previously, LDAP Link required a global catalog lock during the entire operation. This would sometimes cause the database to fail if the LDAP server took too long to respond. Now, LDAP Link only requires a global catalog lock when it maps LDAP entries to Vertica users and roles. |
VER-69489 | Data load / COPY | COPY statements could automatically generate rejected data file names that were too long for Vertica's file system, causing the COPY to error out. Vertica will now truncate these file names to prevent this from happening. |
VER-69535 | Data load / COPY | A bug in tracking disk usage of rejected data tables would rarely cause Vertica to crash. This issue has been resolved. |
VER-69606 | Catalog Engine | ETL queries no longer fail with the error "Active subscriptions changed during query planning" if you run the query while running REBALANCE_SHARDS on another subcluster. |
VER-69702 | Communications/messaging | Queries performed against a virtual table no longer display duplicate error messages if performed while one or more nodes is down. |
VER-70487 | Admin Tools | Starting a subcluster in an EON Mode database no longer writes duplicate entries to the admintools.log file. |
Known issues Vertica 9.3.1
Updated: 1/5/20
Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.
Known Issues
Issue |
Component |
Description |
VER-41895 | Admin Tools |
On some systems, admintools fails to parse output while running SSH commands on hosts in the cluster. Workaround: If the admintools operation needs to run on just one node, users can avoid this issue by using SSH to connect to the target node and running the admintools command on that node directly. |
VER-48020 | Hadoop | Canceling a query that involves loading data from ORC or Parquet files can be slow if there are too many files involved in the load. |
VER-48041 | Admin Tools | On some systems, occasionally admintools cannot parse the output it sees while running SSH commands on other hosts in the cluster. The issue is typically transient. |
VER-60409 | AP-Advanced, Optimizer |
APPLY_SVD and APPLY_PCA functions are implemented as User-Defined Transform Functions (UDTFs). A query containing a UDTF that takes many input columns (e.g. more than 1000) and also outputs many columns may fail with error "Request size too big" due to additional memory requirement in parsing. Workaround: Increase configuration parameter MaxParsedQuerySizeMB or reduce the number of output columns. For the cases of APPLY_SVD and APPLY_PCA, you can limit the number of output columns either by setting the parameter num_components or by setting the parameter cutoff. In practice, cutoff=0.9 is usually enough. Please note that If you increase MaxParsedQuerySizeMB to a larger value, for example 4096, each query you run may use 4 GB of memory during parsing. This means running multiple queries at the same time could cause out-of-memory (OOM) if your total memory is limited. Please refer to Vertica documentation for more information about MaxParsedQuerySizeMB. |
VER-61069 | Execution Engine |
In very rare circumstances, if a Vertica process crashes during shutdown, the remaining processes might hang indefinitely. Workaround: Halt the remaining processes using admin tools. |
VER-61420 | Data Removal - Delete, Purge, Partitioning | Partition operations, such as move_partitions_to_table, must split storage containers that have both partitions that match the operations and partitions that do not. Version 9.1 introduced an inefficiency whereby such a split may separate a storage container into one more storage container than necessary. Note that any extraneous containers created this way will eventually be merged by the TM. |
VER-62061 | Catalog Sync and Revive | If you revive a database on a one-node cluster, Vertica does not warn users against reviving the same database elsewhere. This can result in two instances of the same database running concurrently on different hosts, which can corrupt the database catalog. |
VER-62983 | Hadoop | When hcatalogconnector schemas are created with custom_partitions enabled, poor performance has been observed when there are many (500+) partitions. By default, custom_partitions are disabled on hcatalogconnector schemas. |
VER-64352 | SDK |
Under special circumstances, a sequence of statements to create and drop Python libraries fails. For example, the following session statements attempt to create Python libraries lib1 and lib2: CREATE LIBRARY lib1 AS :lib1_path DEPENDS :numpy_path LANGUAGE 'Python'; DROP LIBRARY lib1; CREATE LIBRARY lib2 AS :lib2_path DEPENDS :sklearn_path LANGUAGE 'Python'; Here, lib1 is a Python library that imports module numpy, while lib2 imports module sklearn, which relies on numpy. After lib1 is dropped, the CREATE LIBRARY statement for lib2 fails. Workaround: One of the following:
|
VER-67228 | AMI, License | An Eon Mode database created in one hourly listing can only be revived into another hourly listing with the same operating system. For example, an Eon Mode database created in the Red Hat hour listing cannot be shutdown then revived into a cluster created using the Amazon Linux listing. |
VER-70139 | Metadata Tables Security |
The LDAP Link dry run metafunctions do not yet have full support for the LDAPLinkStartTLS and LDAPLinkTLSReqCert parameters. Workaround: When using the LDAP Link dry run metafunctions, pass "1" and "allow" for LDAPLinkStartTLS and LDAPLinkTLSReqCert arguments, respectively |
VER-70238 | Backup/DR, Security |
A vbr restore operation can fail if your database is configured for SSL and has a server.key and a server.crt file present in the catalog directory. Workaround: Move your SSL authentication related files (server.crt, server.csr and server.key) from the catalog directory before performing the restore. |
VER-70468 | Nimbus |
Creating a load balance group for a subcluster with a ROUNDROBIN load balance policy only has an effect if you set the global load balancing policy to ROUNDROBIN as well. This is the case, even though the LOAD_BALANCE_GROUPS system table shows the group's policy as ROUNDROBIN. Workaround: Set the global load balancing policy using the SET_LOAD_BALANCE_POLICY function: SELECT set_load_balance_policy('roundrobin'); |
VER-70549 | Spread |
When quickly stopping and then starting a different database, the new database may fail to start after attempting to connect to Spread processes associated with the stopped database. Workaround: After stopping a database, ensure that all old Spread processes have stopped on the affected nodes before starting a different database. This typically takes no more than one minute, but may vary under certain circumstances. |
VER-76267 | Optimizer | Executing EXPLAIN COPY on a new table fails. |
What's New in Vertica 9.3
Take a look at the Vertica 9.3 New Features Guide for a complete list of additions and changes introduced in this release.
Vertica on the Cloud
New Support for AWS Instance Types
Vertica has added two Amazon Web Service (AWS) instance types to the list of supported types available in MC:
Optimization | Type | Supports EBS Storage | Supports Ephemeral Storage |
---|---|---|---|
Computing |
c5.xlarge c5.large |
Yes Yes |
No No |
Constraints
Exported DDL of Table Constraints
Previously, Vertica meta-functions that exported DDL, such as EXPORT_TABLES
, exported all table constraints as ALTER
statements, whether they were part of the original CREATE
statement, or added later with ALTER TABLE
. This was problematic for global temporary tables, which cannot be modified with new constraints other than foreign keys. Thus, the DDL that was exported for a temporary table with constraints could not be used to reproduce that table. This issue has been resolved: Vertica now exports table constraints (except foreign keys) as part of the table's CREATE
statement.
Supported Data Types
External Tables and Structs
Columns in Parquet and ORC files can contain complex data types. One complex data type is the struct, which stores (typed) property-value pairs. Vertica previously supported reading structs only as expanded columns. In addition, you can now preserve the original structure by reading a struct as a single column. For more information, see Reading Structs as Inline Types.
Eon Mode
Improved Subcluster Feature
Subclusters help you isolate workloads to a smaller group of nodes in your database cluster. Queries only run on nodes within the subcluster that contains the initiator node. In previous versions of Vertica, you defined subclusters using the fault groups feature. Starting in 9.3.0, subclusters are their own unique feature and have become more useful.
All nodes in the database must belong to a subcluster. When you upgrade an Eon Mode database from a previous version to 9.3.0, Vertica converts any existing fault groups to subclusters. If there are nodes in the database that are not part of a fault group, Vertica creates a default subcluster and adds these nodes to it. When you create a new Vertica database, Vertica creates a default subcluster and adds the initial group of nodes to it. When adding a node to the database, Vertica adds the node to a default subcluster unless you specify a subcluster.
See Subclusters for more information.
Subcluster Conversion During Eon Mode Database Upgrades
When you upgrade an Eon Mode database to version 9.3.0, Vertica converts all of the fault groups in the database to subclusters. Any nodes in the default groups are automatically assigned to the converted subclusters. Any nodes that are not part of a fauilt group are assigned to a default subcluster that Vertica creates.
Primary and Secondary Subclusters
Subclusters come in two types:
- Primary subclusters are intended to be permanently-running groups of nodes that form the core of your database. Vertica only considers the nodes in primary subclusters when determining whether the database has full shard coverage, and whether a majority of the nodes are up so the database can continue to run safely.
- Secondary subclusters are designed to be ephemeral. You can start and stop these subclusters or individual nodes within them without impacting the stability of your database or the stabilty of your primary subclusters. Vertica does not consider them when determining whether the database has the full shard coverage and the majority of up nodes it needs in order to continue running.
Nodes in Eon Mode databases are also either primary or secondary, based on the type of subcluster that contains them. See Subcluster Types and Elastic Scaling for more information on primary and secondary nodes and subclusters.
Connection Load Balancing Policy Changes
Connection load balancing is now aware of subclusters. You can define connection load balancing groups based on subclusters. See About Connection Load Balancing Policies for more information.
When Vertica upgrades an Eon Mode database to version 9.3.0 or beyond, it does not convert load balancing groups based on fault groups into groups based on the converted subclusters. You must redefine these load balance groups to be based on the newly-created subclusters yourself.
Changes to ADMIN_TOOLS_EXE for Subcluster
The ADMIN_TOOLS_EXE command line interface has new tools to manipulate subclusters:
- The
db_add_subcluster
tool adds new subclusters. - The
db_remove_subcluster
tool removes subclusters - The
restart_subcluster
tool starts subclusters. - The
stop_subcluster
tool stops subclusters.
In addition, the add_node
tool has a new --subcluster
argument that lets you select the subcluster that Vertica adds the new node or nodes to.
For more information on the option, see Writing Administration Tools Scripts.
Changes to System Tables for Subclusters
There are several new system tables, as well as changes to existing tables for subclusters:
- The new table V_CATALOG.SUBCLUSTERS lists the subclusters defined in the database, sa well as the nodes that each contains.
- The new table V_MONITOR.CRITICAL_SUBCLUSTERS lists any subclusters whose loss or shutdown would cause the database to perform a safety shutdown.
- The NODES table has two added columns : IS_PRIMARY indicates whether the node is a primary node, and SUBCLUSTER_NAME indicates the subcluster that contains the node.
- The CLUSTER_LAYOUT table has two new columns: SUBCLUSTER_NAME and SUBLUSTER_OID to identify the subcluster that contains a node.
Depot Warming Can be Canceled or Performed in the Background
Before a newly-added node begins processing queries, it warms its depot by fetching data into it based on what other nodes in the subcluster have in their depots. You can now choose to cancel depot warming entirely, or have the node process queries while it continues to warm its depot. See Canceling or Backgrounding Depot Warming for more information.
Changes to Depot Monitoring Tables
The PLAN_ID columns in the DEPOT_FETCH_QUEUE and DEPOT_FETCHES system tables have been replaced with the TRANSACTION_ID column. This change makes it easier to determine the transaction that caused a node to fetch a file.
Query-Level Depot Fetching
Queries in Vertica mode now support the /*+DEPOT_FETCH*/
hint, which specifies whether to fetch data from communal storage when the depot does not have the queried data.
Depot Maximum Size Limit
Vertica lets you set a size for the depot. This size is set to 60% by default. Vertica also uses the filesystem containing the depot for other purposes such as temporary storage while loading data. To make sure there is enough space on this filesystem for these other needs, Vertica now limits you to setting aside 80% of the filesystem space for the depot. If you attempt to allocate more than 80% of the filesystem to the depot, Vertica returns an error.
New S3EXPORT Parameters
Vertica function S3EXPORT
now supports the following parameters:
compression
can specify thebzip2
filter to compress exported data.enclosed_by
specifies the character used to enclose string and date/time data.escaped_by
specifies the character used to escape values in exported data.
Clearing the Fetch Queue for a Specific Transaction
The CLEAR_FETCH_QUEUE function now accepts an optional transaction ID parameter. Supplying this parameter limits the clearing of the fetch queue to just those entries for the transaction.
Kafka
Vertica version 9.3.0 has been tested with the following versions of Apache Kafka:
- 2.0
- 2.1
Vertica may work with other versions of Kafka. See Vertica Integration for Apache Kafka for more information.
Vertica now uses version 0.11.6 of the rdkafka library to communicate with Kafka. This change could affect you if you directly set Kafka library options. See Directly Setting Kafka Library Options for more information.
Parquet Export
Improved Stability
Memory allocation for EXPORT TO PARQUET is now part of the Vertica resource pool instead of being separately managed, reducing memory-allocation errors from large exports.
Projections
Better Correlation Between Table and Projection Names
When you rename a table with ALTER TABLE or copy an existing one with CREATE TABLE LIKE…INCLUDING PROJECTIONS, Vertica propagates the new table name to its projections. For details, see Projection Naming.
Support for UPDATE and DELETE on Live Aggregate Projections
You can now run DML operations on tables with live aggregate projections. For more details, see Live Aggregate Projections.
Spread
Vertica now uses Spread 5.
Supported Platforms
Pure Storage FlashBlade
Vertica now supports Pure Storage Flashblade storage for Eon Mode on premise.
What's Deprecated in Vertica 9.3
This section describes the two phases Vertica follows to retire Vertica functionality:
- Deprecated: Vertica announces deprecated features and functionality in a major or minor release. Deprecated features remain in the product and are functional. Published release documentation announces deprecation on this page. When users access this functionality, it may return informational messages about its pending removal.
- Removed: Vertica removes a feature in a major or minor release that follows the deprecation announcement. Users can no longer access the functionality, and this page is updated to verify removal. Documentation that describes this functionality is removed, but remains in previous documentation versions.
Deprecated
The following Vertica functionality was deprecated and will be retired in future versions:
Release | Functionality | Notes |
---|---|---|
9.3 |
7.2_upgrade vbr task |
This task remains available in earlier versions. |
9.3 | FIPS | FIPS is temporarily unsupported due to incompatibility with OpenSSL OpenSSL 1.1x. FIPS is still available in Vertica 9.2.x, and FIPS will be reinstated in a future release. |
Removed
The following functionality is no longer accessible as of the releases noted:
Release | Functionality | Notes |
---|---|---|
9.3 |
|
|
9.3 | Configuration parameter ReuseDataConnections |
For more information see Deprecated and Retired Functionality in the Vertica documentation.
Vertica 9.3.0-2: Resolved Issues
Release Date: 12/18/2019
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-69250 | UDX | GRANT and REVOKE statements now support ALTER and DROP privileges for user-defined functions. |
VER-69666 | Backup/DR | If a user performed a copycluster task and a swap partition task concurrently, the data that participated in the swap partition could end up missing on the target cluster. This issue has been resolved. |
VER-69783 | Execution Engine | In queries involving COUNT(DISTINCT) and aggregates such as AVG on numeric input, the GroupGeneratorHashing step caused a memory conflict when the size of the input data type (NUMERIC) was greater than the size of the output data type (FLOAT for average), producing incorrect results. This issue has been resolved. |
VER-69787 | Cloud - Google | Previously, exporting Parquet files to Google Cloud Storage caused an error. This is now fixed. |
VER-70085 | Backup/DR | If a user deleted an object during a backup in Eon Mode, the backup could potentially fail. This issue has been resolved. |
VER-70144 | Communications/messaging | Queries performed against a virtual table no longer display duplicate error messages if performed while one or more nodes are down. |
VER-70148 | Client Drivers - VSQL | Previously, the VSQL options -B (connection backup server/port) and -k/-K (Kerberos service and host name) were incompatible, and the latter would be ignored. Now, these options can be set together. |
VER-70150 | Catalog Engine | ETL queries no longer fail with the error "Active subscriptions changed during query planning" if you run the query while running REBALANCE_SHARDS on another subcluster. |
VER-70151 | Subscriptions | A failure in one Eon Mode subcluster no longer has the ability to affect another subcluster. |
VER-70235 | Data Removal - Delete, Purge, Partitioning | DROP_PARTITIONS requires that the range of partition keys to drop exclusively occupy the same ROS container. If the target ROS container contains partition keys outside the specified range, the force-split argument must be set to true; otherwise the function returns an error. In rare cases, DROP_PARTITIONS executes at the same time as a mergeout operation on the same ROS container. When this happens, the function cannot execute the force-split operation and returns an error. This error message has been modified to better identify the source of the problem, and it now provides a hint to try calling DROP_PARTITIONS again. |
VER-70460 | AP-Advanced | In rare situations, certain User-Defined Aggregate functions would cause a query to return an error "Data Area overflow". This issue has been resolved. |
VER-70466 | Admin Tools | Starting a subcluster in an Eon Mode database no longer writes duplicate entries to the admintools.log file. |
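The new UDx privilege support noted for VER-69250 above can be exercised with statements like the following sketch. The function add2ints and the user alice are hypothetical; substitute the signature of your own user-defined function.

```sql
-- Hypothetical scalar UDx and user, shown only to illustrate the new
-- ALTER and DROP privilege support for user-defined functions.
GRANT ALTER ON FUNCTION public.add2ints(int, int) TO alice;
GRANT DROP ON FUNCTION public.add2ints(int, int) TO alice;

-- Privileges can be revoked the same way.
REVOKE DROP ON FUNCTION public.add2ints(int, int) FROM alice;
```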
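The force-split behavior described for VER-70235 can be sketched as follows; the table name and partition keys are hypothetical.

```sql
-- Drop partition keys 2019-01 through 2019-03. Setting the final
-- force-split argument to true lets Vertica split any ROS container
-- that also holds partition keys outside this range; without it, the
-- function returns an error in that situation.
SELECT DROP_PARTITIONS('public.sales', '2019-01', '2019-03', 'true');
```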
Vertica 9.3.0-1: Resolved Issues
Release Date: 10/31/2019
This hotfix addresses the issues below.
Issue | Component | Description |
---|---|---|
VER-69388 | SAL | Queries could fail when Vertica 9.2 tried to read a ROS file format from Vertica 6.0 or earlier. Vertica now properly handles files created in this format. |
Vertica 9.3: Resolved Issues
Release Date: 10/14/2019
This release addresses the issues below.
Issue | Component | Description |
---|---|---|
VER-67087 | Admin Tools | Sometimes during database revive, admintools treated S3 and HDFS user storage locations as local filesystem paths, which led to errors during revive. This issue has been resolved. |
VER-68531 | Admin Tools | Previously, environment variables used by admintools during SSH operations were set incorrectly on remote hosts. This issue has been resolved. |
VER-64171 | Backup/DR | If a hardlink failed during a hardlink backup, vbr switched to copying the data instead of failing the backup, and threw an error. This issue has been resolved. |
VER-66956 | Backup/DR | Dropping the user that owns an object involved in a replicate or restore operation, while that operation was in progress, could cause the nodes involved in the operation to fail. This issue has been resolved. |
VER-62334 | Catalog Engine | Previously, if a DROP statement on an object failed and was rolled back, Vertica generated a NOTICE for each dependent object. This was problematic when the DROP operation had a large number of dependencies. This issue has been resolved: Vertica now generates up to 10 messages for dependent objects, and then displays the total number of dependencies. |
VER-67734 | Catalog Engine | In Eon Mode, queries sometimes failed if they were submitted at the same time as a new node was added to the cluster. This issue has been resolved. |
VER-68188 | Catalog Engine | Exporting a catalog on a view that references itself could cause Vertica to fail. This issue has been resolved. |
VER-68603 | Catalog Sync and Revive | Database startup occasionally failed when catalog truncation failed because the disk was full. This issue has been resolved. |
VER-68494 | Client Drivers - Misc | Some kinds of node failures did not reset the list of available nodes for connection load balancing. These failures now update the available node list. |
VER-67342 | Data Removal - Delete, Purge, Partitioning; Client Drivers - ODBC | Queries use the query epoch (current epoch - 1) to determine which storage containers to use. A query epoch is set when the query is launched and is compared with storage container epochs during the local planning stage. Occasionally, a swap partition operation was committed in the interval between query launch and local planning. In this case, the epoch of the post-commit storage containers was higher than the query epoch, so the query could not use them and returned empty results. This problem was resolved by making SWAP_PARTITIONS_BETWEEN_TABLES atomic. |
VER-66546 | Cloud - Amazon | The verticad script restarts the Vertica process on a node when the operating system starts. On Amazon Linux, the script sometimes was unable to detect the operating system and returned an error. This issue has been resolved. |
VER-53370 | DDL - Projection | Previously, projections of a renamed table retained their original names, which were often derived from the table's previous name. This issue has been resolved: now, when you rename a table, all projections whose names are prefixed by the anchor table name are renamed with the new table name. |
VER-66882 | DDL - Table | The database statistics function ANALYZE_STATISTICS no longer acquires a GCL-X lock when running against local temporary tables. |
VER-67105 | S3 Data Export | In previous releases, s3export could not export files in CSV format. The function now supports two new parameters, enclosed_by and escape_as, that enable exporting files in CSV format. |
VER-68619 | Hadoop Data Export | Date columns now include the Parquet logical type, enabling other tools to recognize these columns as a Date type. |
VER-66272 | Data Removal - Delete, Purge, Partitioning | Queries use the query epoch (current epoch - 1) to determine which storage containers to use. A query epoch is set when the query is launched and is compared with storage container epochs during the local planning stage. Occasionally, a swap partition operation was committed in the interval between query launch and local planning. In this case, the epoch of the post-commit storage containers was higher than the query epoch, so the query could not use them and returned empty results. This problem was resolved by making SWAP_PARTITIONS_BETWEEN_TABLES atomic. |
VER-65427 | Data load / COPY | Vertica could crash if a client disconnected during a COPY LOCAL with REJECTED DATA. This issue has been resolved. |
VER-65659 | Data load / COPY | Occasionally, a COPY or external table query could crash a node. This issue has been resolved. |
VER-68033 | Data load / COPY | A bug in the Apache Parquet C++ library sometimes caused Vertica to fail when reading Parquet files with large VARCHAR statistics. This issue has been resolved. |
VER-53981 | Database Designer Core | In some cases, DESIGNER_DESIGN_PROJECTION_ENCODINGS mistakenly removed comments from the target projections. This issue has been resolved. |
VER-62628 | Execution Engine | If a subquery used as an expression returns more than one row, Vertica returns an error. In past releases, the error message read: "ERROR 4840: Subquery used as an expression returned more than one row". In queries that contained multiple subqueries, this message did not help users identify the source of the problem. With this release, the error message also provides the join label and localplan_id. For example: "ERROR 4840: Subquery used as an expression returned more than one row DETAIL: Error occurred in Join [(public.t1 x public.t2) using t1_super and subquery (PATH ID: 1)] with localplan_id=[4]". |
VER-66224 | Execution Engine | On rare occasions, a query that exceeded its runtime cap automatically restarted instead of reporting the timeout error. This issue has been resolved. |
VER-67069 | Execution Engine | Some queries with complex predicates ignored cancel attempts, whether manual or triggered by the runtime cap, and continued to run for a long time. Cancel attempts themselves also caused the query to run longer than it would have otherwise. This issue has been resolved. |
VER-67102 | Execution Engine | Users were unable to cancel the meta-function RUN_INDEX_TOOL. This problem has been resolved. |
VER-69066 | Execution Engine | In some cases, queries failed to sort UUIDs correctly if the ORDER BY clause did not specify the UUID column first. This problem has been resolved. |
VER-64680 | Kafka Integration | Previously, the version of the kafkacat utility distributed with Vertica had an issue that prevented it from working when TLS/SSL encryption was enabled. This issue has been corrected, and the version of kafkacat bundled with Vertica can now make TLS/SSL encrypted connections. |
VER-68192 | License | Upgrading an 8.1.1-x database with an AutoPass license installed to 9.0 or later could lead to license-tampering startup issues. This problem has been resolved. |
VER-67573 | Nimbus | In the past, DDL transactions remained open until all pending file deletions were complete. In Eon Mode, this dependency could cause significant delays. This issue has been resolved: DDL transactions can now complete while file deletions continue to execute in the background. |
VER-26260 | Optimizer | Vertica can now optimize queries on system tables where the queried columns are guaranteed to contain unique values. In this case, the optimizer prunes away unnecessary joins on columns that are not queried. |
VER-66423 | Optimizer | The optimizer is now better able to derive transitive selection predicates from subqueries. |
VER-66902 | Optimizer | EXPORT TO VERTICA returned an error if the table to export was a flattened table that already existed in the source and target databases. This issue has been resolved. |
VER-66933 | Optimizer | Previously, export operations such as export_tables() exported all table constraints as ALTER statements, whether they were part of the original CREATE statement or added later with ALTER TABLE. This was problematic for global temporary tables, which cannot be modified with any constraints other than foreign keys; attempts to add constraints on a temporary table with ALTER TABLE return an error. Thus, the DDL exported for a temporary table with constraints could not be used to reproduce that table. This problem has been resolved: export operations now always export DDL for all constraints (except foreign keys) as part of the CREATE statement. This change applies to all tables, temporary and otherwise. |
VER-66968 | Optimizer | Queries that perform an inner join and group the results now return consistent results. |
VER-67138 | Optimizer | When flattening subqueries, Vertica could sometimes move subqueries to the ON clause, which is not supported. This issue has been resolved. |
VER-67443 | Optimizer | ALTER TABLE...ALTER CONSTRAINT returned an error when a node was down. This issue has been resolved: you can now enable or disable table constraints when nodes are down. |
VER-67740 | Optimizer | Vertica sometimes crashed when certain combinations of analytic functions were applied to the output of a merge join. This issue has been resolved. |
VER-67908 | Optimizer | Prepared statements do not support WITH clause materialization. Previously, Vertica threw an error when it tried to materialize a WITH clause for prepared statement queries. Now, Vertica throws a warning and processes the WITH clause without materializing it. |
VER-68306 | Optimizer | Queries with multiple analytic functions over complex expression arguments and different PARTITION BY/ORDER BY column sets sometimes produced incorrect and inconsistent results between Enterprise and Eon Mode. This issue has been resolved. |
VER-68379 | Optimizer | Calls to REFRESH_COLUMNS can now be monitored in the new dc_refresh_columns table, which logs, for every call to REFRESH_COLUMNS, the time of the call, the name of the table, the refreshed columns, the mode used, the minimum and maximum key values, and the epoch. |
VER-68594 | Optimizer | Queries sorting on subquery outputs sometimes produced inconsistent results. This issue has been resolved. |
VER-68828 | Optimizer | When a session's transaction isolation level was set to SERIALIZABLE, MERGE statements sometimes returned the error "Can't run historical queries at epochs prior to the Ancient History Mark". This issue has been resolved. |
VER-49136 | Optimizer; Optimizer - Projection Chooser | TRUNCATE TABLE caused all existing table- and partition-level statistics to be dropped. This issue has been resolved. |
VER-68826 | Optimizer - Statistics and Histogram | If you partitioned a table by date and used EXPORT_STATISTICS_PARTITION to export results to a file, the function wrote empty content to the target file. This issue has been resolved. |
VER-67647 | Build and Release; Performance tests | The ST_IsValid() geospatial function, when used with the GEOGRAPHY data type, has a slight performance degradation of around 10%. |
VER-66648 | QA | When a query returned many rows with wide columns, Vertica threw the following error message: "ERROR 8617: Request size too big." This message incorrectly suggested that query output consumed excessive memory. This issue has been resolved. |
VER-64783 | Recovery | When applying partition events during recovery, Vertica determined that recovery was complete only after all table projections were recovered. This rule did not differentiate between regular and live aggregate projections of the same table, which typically recover in different stages. If recovery was interrupted after all regular projections of a table were recovered but before recovery of its live aggregate projections was complete, Vertica returned an error. This problem has been resolved: Vertica now determines that recovery is complete when all regular projections are recovered, and disregards the recovery status of live aggregate projections. |
VER-68504 | Execution Engine; S3 | If you refreshed a large projection in Eon Mode and the refresh operation used temp space on S3, the refresh operation occasionally caused the node to crash. This issue has been resolved. |
VER-60036 | SDK | All C++ UDxs now require the C++11 standard. Compile them with the -std=c++11 flag. |
VER-61780 | Scrutinize | Scrutinize previously generated a UnicodeEncodeError if the system locale was set to a language with non-ASCII characters. This issue has been resolved. |
VER-66988 | Scrutinize | When running scrutinize, the database password was written to scrutinize_collection.log. This has been resolved: the "__db_password__" entry has been removed from the log file. |
VER-62276 | Security | Changes to the SSLCertificate, SSLPrivateKey, and SSLCA parameters take effect for all new connections and no longer require a restart. |
VER-65168 | Security | Before release 9.1, the default Linux file system permissions on scripts generated by the Database Designer were 666 (rw-rw-rw-). Beginning with release 9.1, default permissions changed to 600 (rw-------). With this release, default permissions have reverted to 666 (rw-rw-rw-). |
VER-65890 | Security | Two system tables have been added to monitor inherited privileges on tables and views: inheriting_objects shows which catalog objects have inheritance enabled, and inherited_privileges shows what privileges users and roles inherit on those objects. |
VER-68616 | Security | When the only privileges on a view came via ownership and schema-inherited privileges rather than GRANT statements, queries on the view by non-owners bypassed the privilege check for the view owner on the anchor relation(s). Queries by non-owners with inherited privileges on a view now correctly verify that the view owner has SELECT WITH GRANT OPTION on the anchor relation(s). |
VER-67658 | Spread | In unstable networks, some UDP-based Vertica control messages could be lost. This could result in hanging sessions that could not be cancelled. This issue has been resolved. |
VER-47639 | Supported Platforms | Vertica now supports the XFS file system. |
VER-67234 | Tuple Mover | The algorithm for prioritizing mergeout requests sometimes overlooked slow-loading jobs, especially when these competed with faster jobs that loaded directly into ROS. This could cause mergeout requests to queue indefinitely, leading to ROS pushback. This problem was resolved by changing the algorithm for prioritizing mergeout requests. |
VER-67275 | Tuple Mover | Previously, the mergeout thread dedicated to processing active-partition jobs ignored eligible jobs in the mergeout queue if a non-eligible job was at the top of the queue. This issue has been resolved: the thread now scans the entire queue for eligible mergeout jobs. |
VER-68383 | Tuple Mover | Mergeout did not execute purge requests on storage containers for partitioned tables if the requests had invalid partition keys. At the same time, the Tuple Mover generated and queued purge requests without validating their partition keys. As a result, mergeout was liable to generate repeated purge requests that it could not execute, which led to rapid growth of the Vertica log. This issue has been resolved. |
VER-65744 | UDX | Flex table views now properly show UTF-8 encoded multi-byte characters. |
VER-62048 | UI - Management Console | Certain design steps in the Events history tab of the MC Design page appeared in the wrong order and with incorrect timestamps. This issue has been resolved. |
VER-65561 | UI - Management Console | Under some conditions in MC, a password could appear in the scrutinize collection log. This issue has been resolved. |
VER-67903 | UI - Management Console | MC did not correctly handle design tables that contained Japanese characters. This issue has been resolved. |
VER-68357 | UI - Management Console | When clock skew was detected, MC incorrectly updated the alert on subsequent checks with the timestamp of the last check, instead of the timestamp when the clock skew was first detected. This issue has been fixed, and the status of the alert (resolved/unresolved) has been added to the display. |
VER-68448 | UI - Management Console | MC catches Vertica's SNMP traps and generates an alert for each one. MC was not generating an alert for the Vertica SNMP trap CRC Mismatch. This issue has been resolved. |
Known issues Vertica 9.3
Updated: 12/5/19
Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.
Known Issues
Issue | Component | Description |
---|---|---|
VER-45474 | Optimizer | When a node is down, DELETE and UPDATE query performance can degrade due to non-optimized query plans. |
VER-68463 | Cloud - Amazon, Data Export, Hadoop | Export to Parquet with partitions fails if any exported column in the outer-most SELECT statement is specified using "schema.table.column" notation. The workaround is to specify the column using just "column" notation, without the "schema.table" part. |
VER-67228 | AMI, License | An Eon Mode database created in one hourly listing can only be revived into another hourly listing with the same operating system. For example, an Eon Mode database created in the Red Hat hourly listing cannot be shut down and then revived into a cluster created using the Amazon Linux listing. |
VER-64997 | Backup/DR, Security | A vbr restore operation can fail if your database is configured for SSL and has a server.key and a server.crt file present in the catalog directory. |
VER-64916 | Kafka Integration | When Vertica exports data collector information to Kafka via a notifier, the serialization logic does not properly handle Vertica internal data type translation. When importing or copying that data back into the Management Console, Vertica uses the raw data in Kafka and inserts garbled data into the extended monitoring database. |
VER-64352 | SDK | Under special circumstances, a sequence of statements to create and drop Python libraries fails. For example, the following session statements attempt to create Python libraries lib1 and lib2: CREATE LIBRARY lib1 AS :lib1_path DEPENDS :numpy_path LANGUAGE 'Python'; DROP LIBRARY lib1; CREATE LIBRARY lib2 AS :lib2_path DEPENDS :sklearn_path LANGUAGE 'Python'; Here, lib1 is a Python library that imports module numpy, while lib2 imports module sklearn, which relies on numpy. After lib1 is dropped, the CREATE LIBRARY statement for lib2 fails. |
VER-63720 | Recovery | If a vbr configuration file specifies fewer nodes than the cluster contains, the catalog of a restored library is not installed on the nodes omitted from the configuration file. |
VER-62983 | Hadoop | When hcatalogconnector schemas are created with custom_partitions enabled, poor performance has been observed when there are many (500+) partitions. By default, custom_partitions are disabled on hcatalogconnector schemas. |
VER-62061 | Catalog Sync and Revive | If you revive a database on a one-node cluster, Vertica does not warn users against reviving the same database elsewhere. This can result in two instances of the same database running concurrently on different hosts, which can corrupt the database catalog. |
VER-61584 | Nimbus, Subscriptions | This issue occurs only while one or more nodes are shutting down or in an unsafe state. There is no workaround. |
VER-61420 | Data Removal - Delete, Purge, Partitioning | Partition operations, such as move_partitions_to_table, must split storage containers that have both partitions that match the operation and partitions that do not. Version 9.1 introduced an inefficiency whereby such a split could separate a storage container into one more storage container than necessary. Any extraneous containers created this way are eventually merged by the Tuple Mover. |
VER-61069 | Execution Engine | In very rare circumstances, if a Vertica process crashes during shutdown, the remaining processes might hang indefinitely. |
VER-60409 | AP-Advanced, Optimizer | APPLY_SVD and APPLY_PCA functions are implemented as User-Defined Transform Functions (UDTFs). A query containing a UDTF that takes many input columns (e.g. more than 1000) and also outputs many columns may fail with error "Request size too big" due to additional memory requirement in parsing. |
VER-58168 | Recovery | A transaction that started before a node began recovery is referred to as a dirty transaction in the context of recovery. A recovering node must wait for or cancel such a transaction in order to recover tables modified by the transaction. In some rare instances, such a transaction may hang and cannot be canceled. Tables locked by such a transaction cannot be recovered, and therefore the recovering node cannot transition to UP. Usually the hung transaction can be stopped by restarting the node on which it was initiated, assuming this is not a critical node. In extreme cases, or when the initiator node is a critical node, restart the cluster. |
VER-57126 | Data Removal - Delete, Purge, Partitioning | Partition operations that use a range, for example, copy_partitions_to_table, must split storage containers that span the range in order to complete. For tables partitioned using a GROUP BY expression such split plans can require a relatively large amount of memory, especially when the table has a large number of columns. In some cases the partition operation may error with "ERROR 3587: Insufficient resources to execute plan on pool <poolname>". |
VER-48020 | Hadoop | Canceling a query that loads data from ORC or Parquet files can be slow if the load involves a large number of files. |
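The workaround for VER-68463 above can be sketched as follows; the table, columns, and output directory are hypothetical:

```sql
-- Fails: the outer-most SELECT qualifies columns as schema.table.column.
-- EXPORT TO PARQUET (directory = '/data/out') OVER (PARTITION BY region)
--   AS SELECT public.sales.region, public.sales.amount FROM public.sales;

-- Works: bare column names in the outer-most SELECT.
EXPORT TO PARQUET (directory = '/data/out') OVER (PARTITION BY region)
  AS SELECT region, amount FROM public.sales;
```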
Legal Notices
Warranty
The only warranties for products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Micro Focus shall not be liable for technical or editorial errors or omissions contained herein.
The information contained herein is subject to change without notice.
Restricted Rights Legend
Confidential computer software. Valid license from Micro Focus required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
Copyright Notice
© Copyright 2006 - 2019 Micro Focus, Inc.
Trademark Notices
Adobe® is a trademark of Adobe Systems Incorporated.
Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.
UNIX® is a registered trademark of The Open Group.