With version 9.1, Vertica has removed support for projection buddies with different SELECT and ORDER BY clauses. All projection buddies must specify columns in the same order. The Vertica database regards projections with non-compliant buddies as unsafe.
Before upgrading to 9.1 or higher, you are strongly urged to check your database for unsupported projections. If the upgrade encounters these projections, it is liable to fail. You must then revert to the previous installation.
Solution: Run the pre-upgrade script
Vertica provides a pre-upgrade script that examines your current database and writes its analysis and recommendations to standard output. The script identifies and lists any unsupported projections. If the script finds projection buddies with different SELECT and ORDER BY clauses, it generates a deploy script. Run the deploy script to remedy these projections so that they comply with system K-safety.
https://www.vertica.com/pre-upgrade-script/
For more information, see Fixing Unsafe Buddy Projections in the Vertica documentation.
The Release Notes contain the latest information on new features, changes, fixes, and known issues in Vertica 9.1.x.
They also contain information about issues resolved in:
The Premium Edition of Vertica is available for download at www.vertica.com.
The Community Edition of Vertica is available for download at the following sites:
The documentation is available at https://www.vertica.com/docs/9.1.x/HTML/Content/Home.htm.
Hotfixes are available to Premium Edition customers only. Each software package on the http://www.vertica.com/downloads site is labeled with its latest hotfix version.
Take a look at the Vertica 9.1.x New Features Guide for a complete list of additions and changes introduced in this release.
Take a look at the Vertica 9.1.1 New Features Guide for a complete list of additions and changes introduced in this release.
When nodes are down, the Optimizer can now generate plans that support late materialization. Plan creation is much simpler and incurs less overhead. Execution time of nodes-down query plans is significantly better, equivalent now to running query plans when all nodes are up.
You can now transfer schema ownership with ALTER SCHEMA:
=> ALTER SCHEMA [database.]schema OWNER TO user-name [CASCADE]
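For example, assuming a schema named store and a user named alice (both names are hypothetical), the following statement transfers the schema and, with CASCADE, the objects it contains:
=> ALTER SCHEMA store OWNER TO alice CASCADE;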
Vertica now supports explicit coercion (casting) from CHAR and VARCHAR data types to either BINARY or VARBINARY data types. For all supported coercion options, see Data Type Coercion Chart in the SQL Reference Manual.
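As a minimal sketch of the new casts (the literal value is arbitrary):
=> SELECT 'mydata'::BINARY(6);
=> SELECT CAST('mydata' AS VARBINARY);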
If Management Console is running in an AWS environment, you can now upgrade Management Console to the newest Vertica version through the MC interface.
You can upgrade using the new step-by-step upgrade wizard, available through Management Console Settings. Simply enter your AWS credentials, and then choose a new version and settings for the instance on which MC will run. A new version of MC will be provisioned onto a new AWS instance.
After you upgrade MC, you can automatically upgrade any database running in Eon Mode to the same version simply by reviving the database from the newly upgraded MC.
For more information see Using Management Console.
The new system table QUERY_CONSUMPTION provides a summary view of query resource usage, including data on queries that fail. For details, see Profiling Query Resource Consumption in the Administrator's Guide.
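For example, a simple starting point is to select everything and narrow down from there (the LIMIT is purely illustrative; see the documentation for the table's full column list):
=> SELECT * FROM query_consumption LIMIT 10;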
New features include:
For more information, see Machine Learning for Predictive Analytics.
The new support for internode encryption allows you to use SSL to secure communication between nodes in a cluster. For more information, see Internode Communication and Encryption.
You can now set active partition counts for individual tables, using CREATE TABLE and ALTER TABLE. The table-specific settings of ActivePartitionCount supersede the global setting.
For more information, see Active and Inactive Partitions in the Administrator's Guide.
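As a sketch of the table-level setting (the table name is hypothetical, and the exact clause placement is documented under CREATE TABLE and ALTER TABLE):
=> ALTER TABLE store.orders SET ACTIVEPARTITIONCOUNT 2;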
CREATE RESOURCE POOL and ALTER RESOURCE POOL now support MAXQUERYMEMORYSIZE, which caps the amount of memory that a resource pool can allocate to process any query.
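For example, the following sketch caps any single query in a hypothetical pool named etl_pool at 8 GB:
=> ALTER RESOURCE POOL etl_pool MAXQUERYMEMORYSIZE '8G';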
You can now use vbr to perform object replication concurrently with many other tasks. The replicate task can run concurrently with backup and with object replicate tasks in either direction. Replication cannot run concurrently with tasks that require that the database be down (full restore and copy cluster). Each concurrent task must have a unique snapshot name.
For more information about object replication, see Replicating Tables and Schemas to an Alternate Database.
Parquet Export Supports S3
You can now export data to S3, in addition to HDFS and the local disk, with EXPORT TO PARQUET.
For more information, see Exporting to S3.
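For example, a sketch that exports a hypothetical sales table to an illustrative bucket path (AWS credentials and region must already be configured):
=> EXPORT TO PARQUET (directory = 's3://mybucket/sales_export') AS SELECT * FROM sales;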
HCatalog Connector Supports Custom Hive Partitions
By default, Hive stores partition information under the path for the table definition, which might be local to Hive. However, a Hive user can choose to store partition information elsewhere, such as in a shared location like S3. Now, during query execution Vertica can query Hive for this information. Specify whether to look for custom partitions when creating the HCatalog schema.
For more information, see Defining a Schema Using the HCatalog Connector.
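As a hedged sketch, you enable this behavior when creating the HCatalog schema; the host, port, and schema values below are placeholders, and CUSTOM_PARTITIONS is the parameter also referenced in the known issues later in these notes:
=> CREATE HCATALOG SCHEMA hcat WITH HOSTNAME='hcathost' PORT=9083 HCATALOG_SCHEMA='default' CUSTOM_PARTITIONS='true';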
Supported Versions
For Vertica 9.1.1, the supported versions of Kafka are:
Vertica can work with older versions of Kafka. However, version 0.9 and earlier use an older revision of the Kafka protocol. To connect Vertica 9.1.1 or later to a Kafka cluster running 0.9 or earlier, you must change a setting in the rdkafka library that Vertica uses to communicate with Kafka.
For more information, see Configuring Vertica for Apache Kafka Version 0.9 and Earlier.
message.max.bytes
The meaning of Kafka's message.max.bytes setting has been changed for Kafka version 0.11. This change could cause performance issues when loading data using a streaming job scheduler that was created using Vertica version 9.1.0 or earlier.
For more information, see Changes to the message.max.bytes Setting in Kafka Version 0.11 and Later.
Consumer Groups
Vertica supports the Kafka consumer group feature. With Kafka, use this feature to balance the load of reading messages across consumers and to ensure that consumers read messages only once. The Vertica streaming job scheduler prevents re-reading messages by managing message offsets on its own, and spreads the load across the entire Vertica cluster. The main use case for consumer groups with Vertica is to allow third-party applications to monitor Vertica's progress as it consumes messages.
For more information, see Monitoring Vertica Message Consumption with Consumer Groups.
Library Options
Version 9.1.1 adds the ability to pass options directly to the rdkafka library that Vertica uses to communicate with Kafka. This feature lets you change settings you cannot directly set in Vertica.
For more information, see Directly Setting Kafka Library Options.
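As a hedged sketch of a manual load, assuming the options are passed through the kafka_conf parameter of KafkaSource (the table, topic, and broker names here are illustrative):
=> COPY web_events SOURCE KafkaSource(stream='web_hits|0|-2', brokers='kafka01.example.com:9092', stop_on_eof=true, kafka_conf='api.version.request=false') PARSER KafkaJSONParser();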
Log files
In Vertica 9.1.1, the Apache Kafka integration logs messages to the standard vertica.log file.
For more information about viewing log messages, see Monitoring Log Files.
KafkaAvroParser and KafkaJSONParser
KafkaAvroParser and KafkaJSONParser now natively support the UUID data type and can parse UUID values into the Vertica UUID data type.
The following Vertica functionality was deprecated in this release. This functionality will be retired in a future Vertica version:
For more information see Deprecated and Retired Functionality in the Vertica documentation.
This hotfix addresses the issues below.
Issue | Component | Description |
VER-70644 | Optimizer - Statistics and Histogram | ANALYZE_ROW_COUNT can no longer change STATISTICS_TYPE from FULL to ROWCOUNT. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-69643 | Kafka Integration | Vertica no longer fails when it encounters improper Kafka SSL key/certificate setup error information on RedHat Linux. |
VER-69668 | Backup/DR | If a copycluster task and a swap partition task ran concurrently, data that participated in the swap partition could end up missing on the target cluster. This issue has been resolved. |
VER-69675 | Monitoring | In earlier releases, users could set configuration parameter SaveDCEEProfileThresholdUS to a value greater than its maximum range (2^31-1), which led to integer overflow. Vertica now returns an error if the input value is greater than 2^31-1. |
VER-69785 | Execution Engine | In cases where COUNT (DISTINCT) and aggregates such as AVG were involved in a query of a numeric datatype input, the GroupGeneratorHashing Step was causing a memory conflict when the size of the input datatype (numeric) was greater than the size of output datatype (float for average), producing incorrect results. This issue has been fixed. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-69192 | Security | Before release 9.1, the default Linux file system permissions on scripts generated by the Database Designer were 666 (rw-rw-rw-). Beginning with release 9.1, default permissions changed to 600 (rw-------). This hotfix release reverts default permissions to 666 (rw-rw-rw). |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-68709 | Data Export, Hadoop | Date columns now include the Parquet logical type, enabling other tools to recognize these columns as a Date type. |
VER-68704 | UI - Management Console | Certain design steps in the Events history tab of the MC Design page were appearing in the wrong order and with incorrect time stamps. This issue has been fixed. |
VER-68576 | UI - Management Console | MC was failing to add tables to a design when table contained Japanese characters. This issue has been fixed. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-68430 | License | Upgrading an 8.1.1-x database with an AutoPass license installed to 9.0 or later could lead to license-tampering startup issues. This problem has been fixed. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-68137 | Tuple Mover | TM_MOVESTORAGE ran in an infinite loop if the default storage location was labeled and a non-data usage storage location was unlabeled. This issue has been resolved. Now Vertica never moves ROS containers to a non-data usage storage location. |
VER-67939 | Data load / COPY | Vertica could fail if its client disconnected during a COPY LOCAL with REJECTED DATA. This issue has been fixed. |
VER-67874 | Catalog Engine | Previously, if a DROP statement on an object failed and was rolled back, Vertica would generate a NOTICE for each dependent object. This was problematic in cases where the DROP operation had a large number of dependencies. This issue has been resolved: Vertica now generates up to 10 messages for dependent objects, and then displays the total number of dependencies. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-67634 | Execution Engine | Some queries with complex predicates ignored cancel attempts, manual or via the runtime cap, and continued to run for a long time. Cancel attempts themselves also caused the query to run longer than it would have otherwise. This issue has been fixed. |
VER-67594 | Client Drivers - ODBC, Data Removal - Delete, Purge, Partitioning | Queries use the query epoch (current epoch -1) to determine which storage containers to use. A query epoch is set when the query is launched, and is compared with storage container epochs during the local planning stage. Occasionally, a swap partition operation was committed in the interval between query launch and local planning. In this case, the epoch of the post-commit storage containers was higher than the query epoch, so the query could not use them, and it returned empty results. This problem was resolved by making SWAP_PARTITIONS_BETWEEN_TABLES atomic. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-67340 | Backup/DR | If you drop the user who owns objects in an ongoing replicate or restore operation, Vertica now cancels the operation with an error message. |
VER-67413 | Installation Program | If a standby node had previously replaced a failed node that had deprecated projections, the identify_unsupported_projections.sh script did not detect them on the standby node. When upgrading to 9.1 or later, the standby node might not start due to failure of the 'U_DeprecateNonIdenticallySortedBuddies' and/or 'U_DeprecatePrejoinRangeSegProjs' task. This issue has been resolved. |
VER-67415 | Optimizer | Queries that grouped join results and had predicates on group keys would on some occasions produce inconsistent results. This issue has been fixed. |
VER-67417 | FlexTable | Flex table views now properly show UTF-8 encoded multi-byte characters. |
VER-67419 | Execution Engine | On rare occasions, a query that exceeded its runtime cap automatically restarted instead of reporting the timeout error. This issue has been fixed. |
VER-67420 | Optimizer | When flattening subqueries, Vertica could sometimes move subqueries to the ON clause which is not supported. This issue has been resolved. |
VER-67441 | DDL - Table | The database statistics tool SELECT ANALYZE_STATISTICS no longer acquires a GCL-X lock when running against local temp tables. |
VER-67510 | Backup/DR | Vertica no longer displays redundant "Missing info files: skipping..." warnings in the vbr log when performing backup tasks. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-67061 | Execution Engine | Partition pruning did not work when querying a table's TIMESTAMP column, where the query predicate specified a TIMESTAMPTZ constant. This issue has been resolved. |
VER-67040 | Security | Passwords with a semicolon are no longer printed in the log. |
VER-66603 | Scrutinize | The scrutinize command previously used the /tmp directory to store files during collection. It now uses the specified temp directory. |
VER-66501 | Optimizer | Partition pruning now works properly for queries that use the AT TIME ZONE parameter. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-66809 | Backup/DR | Altering the owner of a replicated schema with the CASCADE clause caused a node crash on the target database. This issue has been fixed. |
VER-66766 | Hadoop | Export to Parquet previously crashed when the export included a combination of Select statements. This issue has been fixed. |
VER-67004 | UI - Management Console | While logged into the MC as a non-admin user, the table on the Load page's Continuous tab would periodically not show all of the currently running Vertica schedulers in use. This issue has been resolved. |
VER-66959 | Client Drivers - ADO | Previously, if an integer column value was split across multiple reads by the ADO.NET driver, it was not correctly captured. This has been corrected. |
VER-66857 | Optimizer | During an optimized merge, Vertica would attempt to perform planning of the DELETE portion of the MERGE query twice, sometimes triggering an error. This issue has been resolved. |
VER-66811 | Optimizer - Statistics and Histogram | The database statistics tool SELECT ANALYZE_STATISTICS no longer fails if you call it when the cluster is in a critical state. |
VER-66717 | Data Removal - Delete, Purge, Partitioning | The SWAP_PARTITIONS_BETWEEN_TABLES meta-function is now atomic. |
VER-66528 | Execution Engine | Some long 'like' patterns with many non-ASCII characters (e.g., Cyrillic) would crash the server. This has been corrected. |
VER-66499 | Data load / COPY | Rejected data and exceptions files during a COPY statement are created with file permissions 666. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-66461 | Depot | FetchFileTask now reserves memory from the resource manager before fetching files from the depot. As a result, it no longer triggers out of disk errors due to incorrect disk accounting. |
VER-66706 | Data load / COPY | The FJsonParser no longer fails when loading JSON records with more than 4KB of keys and the option reject_on_duplicate_key=true. |
VER-66731 | UI - Management Console | Running the 'Explain' query option for an unsegmented projection on the MC Query Plan page could trigger the error: "There is no metadata available for this projection". This issue has been resolved. |
VER-66743 | UI - Management Console | The MC would not properly display the license chart on the license tab when license usage exceeded 10TB. This issue has been resolved. |
VER-66751 | Kafka Integration | Vertica now properly supports TLS certificate chains for use with Kafka Scheduler and UDx. |
VER-66758 | Client Drivers - ADO | In some cases the ADO.NET driver created a memory leak when it tracked statements associated with a connection to ensure that the statements were closed when the connection closed. This problem has been corrected. |
VER-66772 | Client Drivers - JDBC Security | JDBC could fail with a stack overflow error when receiving more than 512 MB of data from Vertica. This issue was due to an incompatibility between Java's and OpenSSL's implementations of TLS renegotiation. Vertica now contains the tls_renegotiation_limit parameter. You can set this parameter to 0 to disable SSL/TLS renegotiation and avoid the issue. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-65852 | Security | Setting a row-based access policy on an underlying table of a view denied the user access to the view. This issue has been fixed. |
VER-65823 | Data load / COPY | Occasionally, an empty response from S3 (response code 0) could cause some database operations to fail. This issue has been fixed. |
VER-65773 | License | Due to a miscalculation, the license auditor running in sampling mode reported a much larger standard error than the actual value (this is the number after the +/- sign on the data size line in license compliance messages). This issue has been fixed. |
VER-65735 | Backup/DR | Running the remove task using a vbr config file with an incorrect "snapshotName" removed entries from the backup manifest file. This problem has been resolved. |
VER-65558 | UI - Management Console | License usage was showing 0% when an AutoPass license was used. This issue has been fixed. A Vertica server hotfix upgrade (to the same release number) is also required for this fix to take effect. |
VER-65504 | UI - Management Console | When the Management Console on premises attempted to download the Vertica AMI repository file, an error occurred. The error did not cause any functionality failures because the repository file is not used by MC on premises. This issue has been fixed. |
VER-65304 | UI - Management Console | Management Console on AWS did not display the AMI version, which could cause confusion if the MC was not hotfixed on a Vertica hotfix AMI. This issue has been fixed. |
VER-65300 | Hadoop | Partition pruning for Hive file formats requires all partition directory names to be lowercase. Previously, Export to Parquet sometimes generated partition directory names in mixed case, which caused the partition pruning optimization to be skipped during a load. This issue has been fixed. |
VER-65161 | Catalog Engine | Previously, if vertica.conf had a parameter set to a value starting with a hash (#), Vertica ignored that parameter and logged a warning. This issue has been fixed. |
VER-65028 | Admin Tools | Adding large numbers of nodes in a single operation could lead to an admintools error parsing output. This issue has been fixed. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-64810 | Cloud - Amazon, UI - Management Console | At times, Management Console failed to add a new host to the database after a long wait. This issue has been fixed. |
VER-64814 | UI - Management Console | The Database Designer wizard in Management Console did not correctly set the default value for design K-safety. This issue has been fixed. |
VER-64785 | Execution Engine | In some regex scalar functions, Vertica would crash for certain long input strings. This issue has been fixed. |
VER-64846 | UI - Management Console | Download Event history was not working in Internet Explorer. This issue has been fixed. |
VER-64932 | Catalog Engine | Frequent create/insert/drop table operations caused a memory leak in the catalog. This issue has been fixed. |
VER-65144 | UI - Management Console | In the Management Console, an Eon Mode database could not be provisioned due to inconsistent versions between the Management Console AMI and the Vertica database. This issue has been fixed. |
VER-64826 | UI - Management Console | Management Console could not import multiple clusters when a host had the same private IP address as the private IP address of a previously imported cluster. This issue has been fixed. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-64676 | Hadoop | When HCatalogConnectorUseHiveServer2 was set to 0, the database failed to start. This issue has been fixed. |
VER-64636 | Optimizer | Very large expressions could run out of stack and crash the node. This issue has been fixed. |
VER-64496 | Catalog Engine, Spread | If a control node and one of its child nodes went down, attempts to restart the second (child) node sometimes failed. This issue has been fixed. |
VER-64766 | Data load / COPY, FlexTable | At times, FJsonParser array parsing behavior returned inconsistent results. This issue has been fixed. |
VER-64493 | Data load / COPY | The SKIP keyword of a COPY statement was not properly supported with the FIXEDWIDTH data format. This issue has been fixed. |
VER-64570 | Tuple Mover | When executing heavy workloads over an extended period of time, the Tuple Mover was liable to accumulate significant memory until its session ended and the memory was released. This issue has been fixed. |
VER-64067 | AP-Advanced | If you ran APPROXIMATE_COUNT_DISTINCT_SYNOPSIS on a database table that contained NULL values, the synopsis object that it returned was sometimes larger than the one returned after the NULL values were removed. This issue has been resolved. |
VER-64596 | SDK | A new aggregate function, LISTAGG, denormalizes rows into a string of comma-separated values or other human-readable formats. |
VER-64579 | Backup/DR | Vertica now supports full backup and restore for Eon Mode databases. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-63830 | Data Export, Execution Engine, Hadoop | Occasionally, the EXPORT TO PARQUET command wrote data to incorrect partition directories for the timestamp, date, and time data types. This issue has been fixed. |
VER-64254 | Optimizer | Query performance was sometimes sub-optimal if the query included a subquery that joined multiple tables in a WHERE clause, and the parent query included this subquery in an outer join that spanned multiple tables. This issue has been fixed. |
VER-64305 | Optimizer | When a projection was created with only the "PINNED" keyword, Vertica incorrectly considered it a segmented projection. This caused optimizer internal errors and incorrect results when loading data into tables with these projections. This issue has been fixed. IMPORTANT: The fix applies only to newly created pinned projections. Existing pinned projections in the catalog are still incorrect and must be dropped and recreated manually. |
VER-64060 | UDX | Previously, export to Parquet generated boolean partition values that could not be loaded correctly. This issue has been fixed. |
VER-64288 | UI - Management Console | Long queries were failing in the Management Console Query Execution page. This issue has been fixed. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-63992 | DDL | ALTER TABLE...ADD COLUMN...NOT NULL was transformed into two separate statements: ALTER TABLE ADD COLUMN and ALTER TABLE ADD CONSTRAINT. The second statement could fail, leaving a new column in the table without the constraint, and produced a confusing rollback message that did not indicate that the column was added anyway. The fix handles ADD COLUMN NOT NULL in a single statement. |
VER-63897 | Data load / COPY, Hadoop | Queries involving a join of two external tables loading Parquet files sometimes caused Vertica to crash. The crash happened in a very rare situation due to memory misalignment. This issue has been fixed. |
VER-63895 | Execution Engine, Hadoop | If a Parquet file's metadata was very large, Vertica could consume more memory than reserved and crash when the system ran out of memory. This issue has been fixed. |
VER-63863 | Hadoop | Certain large Parquet files generated by ParquetExport could contain a lot of metadata, leading to out-of-memory issues. A new parameter, 'fileSizeMB', has been added to ParquetExport to limit the size of exported files, thereby limiting the metadata size. |
VER-63604 | Data load / COPY | In a COPY statement, excessively long invalid inputs to any date or time columns could cause stack overflows, resulting in a crash. This issue has been fixed. |
VER-64356 | DDL - Table | Previously, the query label was applied only to CTAS statements and was missing from the implicit INSERT...SELECT statement. With this fix, the query label is applied to both statements. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-63524 | Front end - Parse & Analyze | Vertica crashed when a function expression was assigned an alias with the same name as one of the ILM operations: copy_table, copy_partitions_to_table, move_partitions_to_table, and swap_partitions_between_tables. This issue has been fixed. |
VER-63693 | S3 | Before this fix, S3Export was not thread safe when the data contained time/date values, so it should not have been used with PARTITION BEST when exporting such values. |
VER-63546 | Data load / COPY | Occasionally, a copy or external table query could crash a node. This has been fixed. |
VER-63799 | AMI, UI - Management Console | If the Vertica Management Console was deployed in AWS using the Basic or Advanced CFT and a non-zero CIDR (e.g., 2.22.222.22/32), using the MC to revive a database failed with an error. This issue has been resolved. |
To see a complete list of additions and changes introduced in this release, refer to the Vertica 9.1.1 New Features Guide.
Issue | Component | Description |
VER-53889 | Error Handling, Execution Engine | If you tried to insert data that was too wide for a VARCHAR or VARBINARY column, the error message did not specify the column name. This error message now includes the column name. |
VER-62550 | Data load / COPY | COPY statements using the default parser could erroneously reject rows if the input data contained overflowing floating point values in one column and the maximum 64-bit integer value in another column. This issue has been fixed. |
VER-61361 | DDL - Projection | If you create projections in a K-safe database using the OFFSET syntax, Vertica does not automatically create buddies for these projections. If primary key constraints are enforced, Vertica automatically creates buddy projections that satisfy system K-safety and enforce primary key constraints. If a table has some projections that are K-safe and others that are not, the Optimizer might be unable to find a K-safe projection to process queries. In this case the Optimizer assumes some nodes are down and generates an error. This issue has been fixed: Vertica now delays creation of projections for a table that enforces key constraints until the table has at least one K-safe superprojection. Until then, queries on that table return an error that projections for processing queries are unavailable. |
VER-59297 | ResourceManager | In the system table RESOURCE_ACQUISITIONS, the difference between QUEUE_ENTRY_TIMESTAMP and ACQUISITION_TIMESTAMP now correctly shows how long a request was queued in a resource pool before acquiring the resources it needed to execute. |
VER-59147 | AP-Advanced, Sessions | Using a Machine Learning function that accepts a model as a parameter in the definition of a view, and then feeding that view into another Machine Learning function as its input relation, could cause failures in some special cases. This issue has been fixed. |
VER-60828 | Optimizer | Some instances of type casting on a datetime column that contained a value of 'infinity' triggered segmentation faults that caused the database to fail. This issue has been resolved. |
VER-59645 | Backup/DR | Object replication to a target database running a newer version of Vertica sometimes incorrectly attempted to connect to the target database using its internal addresses. This issue has been resolved. |
VER-59235 | UI - Management Console | With LDAP authentication turned on, users added before LDAP "default search path" was modified were not able to logon. This issue has been fixed. |
VER-26440 | DDL - Projection | In previous Vertica versions, the configuration parameter SegmentAutoProjection could only be set at the database level. You can now set this parameter for individual sessions. |
VER-58665 | Optimizer | Vertica stored incorrectly encoded min/max values in statistics for binary data types. This problem is now fixed. |
VER-62710 | Execution Engine | Loads and copies from HDFS occasionally failed if Vertica was reading data through libhdfs++ when the NameNode abruptly disconnected from the network. libhdfs++ was being compiled with a debug flag on, which caused network errors to be caught in debug assertions rather than normal error handling and retry logic. |
VER-62740 | DDL - Projection | Some versions of Vertica allowed projections to be defined with a subquery that referenced a view or another projection. These malformed projections occasionally caused the database to shut down. Vertica now detects these errors when it parses the DDL and returns with a message that projection FROM clauses must only reference tables. |
VER-62255 | Data load / COPY | INSERT DIRECT failed when the configuration parameter ReuseDataConnections was set to 0 (disabled). When this parameter was disabled, no buffer was allocated to accept inserted data. Now, a buffer is always allocated to accept data regardless of how ReuseDataConnections is set. |
VER-62358 | Execution Engine, Optimizer | A bug in the way null values were detected for float data type columns led to inconsistent results for some queries with median and percentile functions. This issue has been fixed. |
VER-61903 | Execution Engine | In some workloads, the memory consumed by a long-running session could grow over time. This has been resolved. |
VER-60943 | Vertica Log Text Search | Dependent index tables did not have their reference indices updated when the base table's columns were altered but not removed. This caused an internal error on reference. This issue has been fixed so that the indices are updated when you alter the base table's columns. |
VER-60715 | Optimizer - GrpBy & Pre Pushdown | Including certain functions like nvl in a grouping function argument resulted in a "Grouping function arguments need to be group by expressions" error. This issue has been fixed. |
VER-62902 | Optimizer - Query Rewrite | Applying an access policy on some columns in a table significantly impacted execution of updates on that table, even where the query excluded the restricted columns. Further investigation determined that the query planner materialized unused columns, which adversely affected overall performance. This problem has been resolved. |
VER-62680 | Optimizer - Join & Data Distribution | Optimized MERGE was sometimes not used with NULL constants in varchar or numeric columns. This issue has been fixed. |
VER-62222 | Optimizer | Attempts to swap partitions between flattened tables that were created in different versions of Vertica failed due to minor differences in how source and target SET USING columns were internally defined. Recent patches for these versions have resolved this issue. |
VER-61899 | Diagnostics | Scrutinize was collecting the logs only on the node where it was executed, not on any other nodes. This issue has been fixed. |
VER-63608 | UI - Management Console | The jackson-databind library used by MC has been upgraded to 2.9.5 to fix security vulnerabilities. |
VER-53329 | Data load / COPY | Data loading into a table with enforced key constraints used to be much slower when some nodes in the cluster were down. This has been improved in Vertica 9.1.1, and the difference between the nodes-down and nodes-up cases is now minimal. |
VER-62271 | Client Drivers - JDBC | When dropping a user that owns roughly 10,000 or more tables using DROP USER, the Vertica JDBC driver would receive many notices and throw a StackOverflow exception. The driver now provides all notices and throws the correct exception (SQLDataException: [Vertica][VJDBC](3128) ROLLBACK: DROP failed due to dependencies). |
VER-63158 | Optimizer | An internal error sometimes occurred in queries with subqueries that contained a mix of outer and inner joins. This issue has been fixed. |
VER-61969 | UDX, Vertica Log Text Search | The AdvancedTextSearch package leaked memory. This issue has been fixed. |
Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.
Issue | Component | Description |
VER-45474 | Optimizer | When a node is down, DELETE and UPDATE query performance can degrade due to non-optimized query plans. |
VER-61299 | Depot | Sometimes CLEAR_DATA_DEPOT() does not clear all the data on one or more nodes. |
VER-60797 | License | AutoPass format licenses do not work properly when installed in Vertica 8.1 or older. To replace these licenses with a legacy Vertica license, users need to set the following parameter: |
VER-60409 | AP-Advanced, Optimizer | APPLY_SVD and APPLY_PCA functions are implemented as User-Defined Transform Functions (UDTFs). A query containing a UDTF that takes many input columns (e.g., more than 1000) and also returns many columns may fail with the error "Request size too big" due to additional memory requirements during parsing. Workaround: Increase the value of the configuration parameter MaxParsedQuerySizeMB or reduce the number of output columns. For APPLY_SVD and APPLY_PCA, you can limit the number of output columns either by setting the parameter num_components or by setting the parameter cutoff. In practice, cutoff=0.9 is usually enough. Note that if you increase MaxParsedQuerySizeMB to a larger value, for example 4096, each query you run may use 4 GB of memory during parsing. This means that running multiple queries at the same time could cause out-of-memory (OOM) issues if your total memory is limited. Please refer to the Vertica documentation for more information about MaxParsedQuerySizeMB. |
VER-57126 | Data Removal - Delete, Purge, Partitioning | Partition operations that use a range, for example copy_partitions_to_table, must split storage containers that span the range in order to complete. For tables partitioned using a GROUP BY expression, such split plans can require a relatively large amount of memory, especially when the table has a large number of columns. In some cases, the partition operation may fail with "ERROR 3587: Insufficient resources to execute plan on pool <poolname>". Workaround: Increase the memorysize or decrease the plannedconcurrency of <poolname>. Hint: A best practice is to group partitions so that it is never necessary to split storage containers. Following this guidance greatly improves the performance of most partition operations. |
VER-60468 | Recovery | A transaction that started before a node began recovery is referred to as a dirty transaction in the context of recovery. A recovering node must wait for, or cancel, such a transaction to recover tables modified by the transaction. In some rare instances, such transactions may hang and cannot be canceled. Tables locked by such a transaction cannot be recovered, and therefore a recovering node cannot transition to 'UP'. Usually, the hung transaction can be stopped by restarting the node on which the transaction was initiated, assuming this is not a critical node. In extreme cases, or when the initiator node is a critical node, the cluster must be restarted. |
VER-61362 | Subscriptions | During cluster formation, when one of the up-to-date nodes is missing libraries and attempts to recover them, the recovery fails with a cluster shutdown. |
VER-48041 | Admin Tools | On some systems, occasionally admintools will not be able to parse the output it sees while running SSH commands on other hosts in the cluster. The issue is typically transient. There is no known work-around. |
VER-41895 | Admin Tools | On some systems admintools cannot parse output while running SSH commands on hosts in the cluster. In some situations, if the admintools operation needs to run on just one node, there is a workaround. Users can avoid this issue by using SSH to connect to the target node and running the admintools command on that node directly. |
VER-61584 | Subscriptions | This issue occurs only while one or more nodes are shutting down or in an unsafe state. There is no workaround. |
VER-54924 | Admin Tools | On databases with hundreds of storage locations, admintools SSH communication buffers can overflow. The overflow can interfere with database operations like startup and adding nodes. There is no known workaround. |
VER-56679 | SAL | When generating an annotated query, the Optimizer does not recognize that the ETS flag is ON and produces the annotated query as if ETS is OFF. If the resulting annotated query is then run when ETS is ON, some hints might not be feasible. |
VER-63077 | Catalog Sync and Revive | When trying to revive the cluster, sometimes Vertica returns a "file not found" error. In most cases this is an intermittent error, so the workaround is to try the revive again. |
VER-62897 | Backup/DR | A PANIC occurs when an object restore transaction has flattened tables to be restored and is canceled in the middle of the transaction. |
VER-62983 | Hadoop | When HCatalog Connector schemas are created with custom_partitions enabled, poor performance has been observed when there are many (500+) partitions. By default, custom_partitions is disabled on HCatalog Connector schemas. |
VER-62414 | Hadoop | Loading ORC and Parquet files with a very small stripe or rowgroup size can lead to a performance degradation or cause an out of memory condition. |
VER-63215 | Catalog Engine, Elastic Cluster | If a cluster is running with K-safety as 0, it is possible that some oids are regenerated which leads to potential conflict and eventual crash. |
VER-62855 | Tuple Mover | TM operations, such as moveout or re-partitioning, of an unsegmented projection may error out when a concurrent rebalance operation removes the projection from the node. |
VER-62884 | Storage and Access Layer | A node crash after a DML operation may leave index files (.pidx files) under storage locations if the background reaper service did not get an opportunity to clean them up. |
VER-63551 | Security, Spread | If the EncryptSpreadComm parameter is set to 'vertica' on a SUSE Linux Enterprise Server (SLES) 11 cluster, Vertica fails to start. Vertica has deprecated support for SLES 11 and will remove support in a later release. Workaround: Delete the setting for EncryptSpreadComm. |
VER-61351 | Admin Tools | Adding large numbers of nodes in a single operation could lead to an admintools error parsing output. |
VER-61289 | Execution Engine, Hadoop | If a Parquet file metadata is very large, Vertica can consume more memory than reserved and crash when the system runs out of memory. |
VER-55257 | Client Drivers - ODBC | Issuing a query that returns a large result set and closing the statement before retrieving all of its rows can result in the following error when attempting subsequent operations with the statement: "An error occurred during query preparation: Multiple commands cannot be active on the same connection. Consider increasing ResultBufferSize or fetching all results before initiating another command." Workaround: Set the ResultBufferSize property to 0 or retrieve all rows associated with a result set before closing the statement. |
VER-61834 | Data Export, Execution Engine, Hadoop | Occasionally the EXPORT TO PARQUET command writes data to incorrect partition directories of types timestamp, date, and time. |
Take a look at the Vertica 9.1 New Features Guide for a complete list of additions and changes introduced in this release.
As of Vertica 9.1, you can use a Vertica by-the-hour license, a pay-as-you-go model in which you pay only for the number of nodes and the number of hours you use. These Paid Listings are available in the AWS Marketplace:
An advantage of using the Paid Listing is that all charges appear on your Amazon AWS bill rather than requiring a separate Vertica license purchase. This eliminates the need to compute potential storage needs in advance.
See more: Vertica with CloudFormation Templates
Vertica 9.1.0 now automatically audits ORC and Parquet data stored in external tables.
Vertica licenses can include a raw data allowance. Since 2016, Vertica licenses have allowed you to use ORC and Parquet data in external tables. This data has always counted against any raw data allowance in your license. Previously, the audit of data in ORC and Parquet format was handled manually. Because this audit was not automated, the total amount of data in your native tables and in external tables could exceed your licensed allowance for some time before being spotted.
Starting in version 9.1.0, Vertica automatically audits ORC and Parquet data in external tables. This auditing begins soon after you install or upgrade to version 9.1.0. If your Vertica license includes a raw data allowance and you have data in external tables based on Parquet or ORC files, review your license compliance before upgrading to Vertica 9.1.x. Verifying that your database is compliant with your license terms avoids having your database become non-compliant soon after you upgrade.
See more: Verify License Compliance for ORC and Parquet Data
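One way to check from vsql is to run a full audit and then review compliance status, using functions that already exist in earlier releases:
=> SELECT AUDIT('');
=> SELECT GET_COMPLIANCE_STATUS();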
As of release 9.1 you can install Vertica on Amazon Web Services (AWS) using the following products in the AWS Marketplace:
Each of these products has three CloudFormation Templates (CFTs) and one AMI, as follows:
CFTs
AMI
See more: Installing Vertica with CloudFormation Templates
Eon Mode, a database mode that was previously in beta, is now live.
You can now choose to operate your database in Enterprise Mode (the traditional Vertica architecture where data is distributed across your local nodes) or in Eon Mode, an architecture in which the storage layer of the database is in a single, separate location from compute nodes. You can rapidly scale an Eon Mode database to meet variable workload needs, especially for increasing the throughput of many concurrent queries.
After you create a database, the functionality of the actual database is largely the same regardless of the mode. The differences between these two modes lie in their architecture, deployment, and scalability.
See more: Using Eon Mode
When you read from S3 using COPY FROM with S3 URLs, Vertica uses the configuration parameters described in AWS Parameters. Previously, these parameters could be set only globally, which made it harder to read from different regions or with different credentials in parallel. You can now set these parameters at the session level using ALTER SESSION.
In addition, if you use ALTER SESSION to set an AWS parameter, Vertica automatically sets the corresponding UDParameter used by the UDSource described in Bulk Loading and Exporting Data From Amazon S3.
See more: Specifying COPY FROM Options
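For example, a sketch that scopes the region and credentials to the current session only (the values are placeholders):
=> ALTER SESSION SET AWSRegion = 'us-west-1';
=> ALTER SESSION SET AWSAuth = 'accesskeyid:secretaccesskey';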
Management Console now provides the ability to revive an Eon Mode database. Eon Mode databases keep an up-to-date version of their data and metadata in their communal storage locations. After the database is shut down, you can restore it later in the same state in a newly provisioned cluster.
The Provision and Revive wizard is provided through a deployment of Vertica and Management Console available on the AWS Marketplace.
See more: Reviving an Eon Mode Database in MC
Previously, Management Console only provided monitoring information for internal Vertica tables. In Vertica 9.1.0, MC detects and monitors any external tables and HCatalog data included in your database.
To see this external data visualized, take a look at the Table Utilization charts on the MC Activity page. The table utilization charts on this page now reflect external tables and HCatalog data. The table information displayed now includes table types (external, internal, and HCatalog) and table definitions (applicable only to external tables).
You can also see changes in the Storage View page. When MC detects that your database contains external tables or references HCatalog data, it displays an option to view more details about those tables.
See more: Monitoring Table Utilization and Projections and Monitoring Database Storage
This feature creates a set of audit categories that make it easy to search for queries, parameters, and tables with a similar purpose. There are three types of SQL objects you can audit in Vertica: queries, tables, and parameters. With this feature, you can see system tables that bring together changes to these SQL objects and track them more easily. Use the security and authentication audit category to better understand changes to your database.
This feature introduces four new system tables to better audit changes to your database:
AUDIT_MANAGING_USERS_PRIVILEGES
See more: Database Auditing
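For example, assuming the new tables are reachable through the default system-table search path, a quick look at recent user-management activity might be (the wildcard and LIMIT are purely illustrative):
=> SELECT * FROM audit_managing_users_privileges LIMIT 10;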
New features include:
See more: Machine Learning for Predictive Analytics
All UDx types now support cancellation callbacks. You can implement the CANCEL() function to perform any cleanup specific to your UDx. Previously, only some UDx types supported cancellation.
See more: Handling Cancel Requests
The SDK now supports writing user-defined transform functions (UDTFs) in Python, in addition to C++, Java, and R.
See more: Python API and Python SDK Documentation
An alternative to granting HDFS access to individual Vertica users is to use delegation tokens, either directly or with a proxy user. In this configuration, Vertica accesses HDFS on behalf of some other (Hadoop) user. The Hadoop users need not be Vertica users at all, and Vertica need not be Kerberized.
See more: Proxy Users and Delegation Tokens
Vertica 9.1.0 now automatically audits data stored in external tables in ORC and Parquet format.
Vertica licenses can include a raw data allowance. Since 2016, Vertica licenses have allowed you to use ORC and Parquet data in external tables. This data has always counted against any raw data allowance in your license. Previously, the audit of ORC and Parquet data was handled manually. Because this audit was not automated, the total amount of data in your native tables and in external tables could exceed your licensed allowance for some time before being spotted.
Starting in 9.1.0, Vertica automatically audits ORC and Parquet data in external tables. This auditing begins soon after you install or upgrade to version 9.1.0. If your Vertica license includes a raw data allowance and you have data in external tables based on Parquet or ORC files, review your license compliance before upgrading to Vertica 9.1.x. Verifying that your database is compliant with your license terms avoids having your database become non-compliant soon after you upgrade.
See more: Verify License Compliance for ORC and Parquet Data
The Spark Connector is now distributed as part of the Vertica server installation. Instead of downloading the connector from the myVertica portal, you can now get the Spark Connector file from a directory on a Vertica node.
The Spark Connector JAR file is now compatible with multiple versions of Spark. For example, the Connector for Spark 2.1 is also compatible with Spark 2.2.
See more: Getting the Spark Connector and Vertica Integration for Apache Spark in Support Platforms
Vertica 9.1.0 now integrates with Voltage SecureData encryption. This feature lets you:
See more: Voltage SecureData
Vertica 9.1 introduces the following new features to KafkaAvroParser and KafkaJSONParser:
The Vertica integration with Apache Kafka now works in Eon Mode. There are several details to consider when streaming data from Kafka into an Eon Mode Vertica cluster. See Vertica Eon Mode and Kafka for details.
See more: Integrating with Apache Kafka
All UDx types now support cancellation callbacks. You can implement the CANCEL() function to perform any cleanup specific to your UDx. Previously, only some UDx types supported cancellation.
The SDK now supports writing user-defined transform functions (UDTFs) in Python, in addition to C++, Java, and R.
See more: Handling Cancel Requests and Python API
The following Vertica functionality was deprecated:
IdolLib library and all its functions:
DELETE_COMMUNITY_KEY
DESCRIBE_COMMUNITY_KEY
IDOL_CHECK_ACL
INSTALL_COMMUNITY_KEY
SSL certificates containing weak (MD5) CA signatures
For more information see Deprecated and Retired Functionality in the Vertica documentation.
This hotfix addresses the issues below.
Issue | Component | Description |
VER-63128 | DDL - Table | In past releases, Vertica placed an exclusive lock on the global catalog while it created a query plan for CREATE TABLE AS <query>. Very large queries could prolong this lock until it eventually timed out. On rare occasions, the prolonged lock caused an out-of-memory exception that shut down the cluster. This problem was resolved by moving the query planning process outside the scope of catalog locking. |
VER-63295 | Execution Engine | In some workloads, the memory consumed by a long-running session could grow over time. This has been resolved. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-63072 | Optimizer - Query Rewrite | Applying an access policy on some columns in a table significantly impacted execution of updates on that table, even where the query excluded the restricted columns. Further investigation determined that the query planner materialized unused columns, which adversely affected overall performance. This problem has been resolved. |
VER-63026 | DDL - Projection | Some versions of Vertica allowed projections to be defined with a subquery that referenced a view or another projection. These malformed projections occasionally caused the database to shut down. Vertica now detects these errors when it parses the DDL and returns with a message that projection FROM clauses must only reference tables. |
VER-62887 | Diagnostics | Scrutinize was collecting the logs only on the node where it was executed, not on any other nodes. This issue has been fixed. |
VER-63188 | Optimizer - Join & Data Distribution | Optimized MERGE was, at times, not used with NULL constants in varchar or numeric columns. This issue has been fixed. |
VER-63040 | UI - Management Console | With LDAP authentication turned on, users added before the LDAP "default search path" was modified were not able to log on. This issue has been fixed. |
VER-63029 | Backup/DR | Object replication and restore left behind temporary files in the database Snapshot folder. This fix ensures such files are properly cleaned up when the operation completes. |
VER-62971 | Data load / COPY | COPY statements using the default parser could erroneously reject rows if the input data contained overflowing floating point values in one column and the maximum 64-bit integer value in another column. This issue has been fixed. |
VER-62969 | Execution Engine | Loads and copies from HDFS occasionally failed if Vertica was reading data through libhdfs++ when the NameNode abruptly disconnected from the network. libhdfs++ was being compiled with a debug flag on, which caused network errors to be caught in debug assertions rather than normal error handling and retry logic. This issue has been fixed. |
VER-62953 | Hadoop | The ORC Parser skipped rows when the predicate "IS NULL" was applied on a column containing all NULLs. This issue has been fixed. |
VER-63075 | Data load / COPY | INSERT DIRECT failed when the configuration parameter ReuseDataConnections was set to 0 (disabled). When this parameter was disabled, no buffer was allocated to accept inserted data. Now, a buffer is always allocated to accept data regardless of how ReuseDataConnections is set. |
VER-63260 | Backup/DR | Object replication to a target database with a newer version failed during the catalog preprocessing phase when the source and destination clusters had different node names. This issue has been fixed. |
VER-63263 | Optimizer | An internal error sometimes occurred in queries with subqueries containing a mix of outer and inner joins. This issue has been fixed. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-62510 | AP-Advanced, Sessions | Using an ML function that accepts a model as a parameter in the definition of a view, and then feeding that view into another ML function as its input relation, caused failures in some special cases. This issue has been fixed. |
VER-62836 | S3 | There was an issue loading from the default region (us-east-1) in some cases where the communal location was in a different region. This issue has been fixed. |
VER-62852 | Optimizer - GrpBy & Pre Pushdown | Including certain functions like nvl in a grouping function argument resulted in a "Grouping function arguments need to be group by expressions" error. This issue has been fixed. |
VER-62419 | Optimizer | Attempts to swap partitions between flattened tables that were created in different versions of Vertica failed due to minor differences in how source and target SET USING columns were internally defined. This issue has been fixed. |
VER-62076 | Hadoop | Hadoop impersonation status messages were not being properly logged. This issue has been fixed to allow informative messages at the default logging level. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-62096 | Data Export | Export to Parquet using local disk non-deterministically failed in some situations with a 'No such file or directory' error message. This issue has been fixed. |
VER-62465 | Execution Engine, Optimizer | A bug in the way null values were detected for float data type columns returned inconsistent results for some queries with median and percentile functions. This issue has been fixed. |
VER-62463 | Vertica Log Text Search | Dependent index tables did not have their reference indices updated when the base table's columns were altered but not removed. This caused an internal error on reference. This issue has been fixed so the indices are updated when you alter the base table's columns. |
VER-62144 | Error Handling, Execution Engine | If you tried to insert data that was too wide for a VARCHAR or VARBINARY column, the error message did not specify the column name. This error message now includes the column name. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-62043 | UI - Management Console | When creating an Eon Mode database through Management Console on an AWS cluster, entering a valid communal storage location and selecting IAM Role authentication did not enable the Next button. This issue occurred only during database creation, not when creating an Eon Mode database cluster. This issue has been fixed. |
VER-62063 | UI - Management Console | In the Database and Clusters > VerticaDB activity > Detail screen of an Eon database, some column text was not properly formatted. This issue has been fixed. |
VER-62045 | Optimizer, Sessions | An internal EE error occurred when running several query retries while, at the same time, sequence objects referenced in the query were dropped and re-created. This issue has been fixed so the retried query now picks up the re-created sequence. |
VER-62148 | Cloud - Amazon, UI - Management Console | When entering the Communal Storage URL for an Eon Mode database in the Management Console, some invalid forms of the URL were allowed. This issue has been fixed. |
VER-62118 | UI - Management Console | In the Vertica Management Console (MC) in AWS, with some Vertica BYOL licenses, when using Cluster Management to add an instance to a Vertica database cluster, you were prompted to upload a license file even if your Premium Edition license was already installed. In the Add Instance wizard, you saw the message 'Your database exceeds the free Community Edition limits, please upload a valid Premium Edition license'. This issue has been fixed. |
To see a complete list of additions and changes introduced in this release, refer to the Vertica 9.1 New Features Guide.
Issue |
Component |
Description |
VER-59892 | Hadoop, SAL, Sessions | Previously, Vertica accumulated sockets stuck in a CLOSE_WAIT state on non-Kerberized WEBHDFS connections; this state indicates that the socket was closed on the HDFS side but Vertica had not called close(). This issue has been fixed. |
VER-15347 | Data load / COPY | Previously, COPY created files for saving rejected data as soon as the COPY statement started. Now, COPY creates rejected-data files only after a row has actually been rejected. |
VER-60277 | Optimizer | Grouping on analytic function arguments that are complex expressions sometimes resulted in an internal Optimizer error that crashed the database. This issue has been fixed. |
VER-58604 | Execution Engine, FlexTable | An INSERT query caused multiple nodes to crash with a segmentation fault. This issue has been fixed. |
VER-53704 | Cloud - Amazon, UDX | AWS UDX cancellation of long-running operations was not working properly. This has been improved in this release, and S3 export transactions are now eventually canceled. The S3 source still has some cancellation issues; however, the S3 source is being deprecated in favor of S3FS, which has none of those issues and is much faster. |
VER-60368 | AP-Advanced | When upgrading a cluster from 8.1.1-4 to 9.0.1-2, no errors were reported, but the database would not start after the upgrade: the CLUSTER upgrade changes task kept looping and rolling back with an 'Invalid model name' error. This issue has been fixed. |
VER-60454 | S3 | Before this release, some AWS UDX functions, including S3EXPORT, did not have explicit execution permission and in some cases could not be called due to permission errors. This issue has been fixed. |
VER-59055 | Execution Engine | If a query contained multiple PERCENTILE_CONT(), PERCENTILE_DISC(), or MEDIAN() functions with similar PARTITION BY clauses, and the query also had a LIMIT clause, execution occasionally failed due to a bug during cleanup. This problem has been resolved. |
VER-56645 | Basics | The INSTR() function sometimes missed valid matches when the position parameter was set to a negative value. This problem was resolved. |
VER-55542 | Execution Engine | Queries that specified both LIMIT and UNION ALL clauses failed to complete execution. This issue has been fixed. |
VER-44795 | Hadoop | Sometimes, when a DataNode was overloaded or running out of memory, it sent incomplete HTTP messages over WEBHDFS, where the 'Content-Length' field did not correspond to the actual length of the payload. This caused a CURL error. Vertica now tries to recover by requesting the data again with a longer timeout. If it does not succeed after about 3 minutes, Vertica terminates with a message like: "Error Details: transfer closed with 109314115 bytes remaining to read". |
VER-59857 | Optimizer | In some cases, upgrading Vertica introduced inconsistencies in the catalog that caused fatal errors when it tried to update non-existent objects. Vertica now verifies that statistics objects exist before invalidating statistics for a given column. |
VER-59567 | Catalog Engine | Previously, the TABLE_CONSTRAINTS system table incorrectly reflected a cached value for the constraint table name. There was no internal corruption. The code has been updated so that the table name reflects the correct value rather than the cached one. |
VER-58529 | Optimizer | In certain queries with outer joins over simple subqueries, the Optimizer chose a sub-optimal outer table projection. This led to inefficient resegmentation or broadcast of the join data. This problem has been resolved. |
VER-59123 | Execution Engine | Queries with window functions intermittently produced incorrect results. This issue has been fixed. |
VER-57129 | Hadoop | After a user connected to HDFS using the Vertica realm, users from other realms could not connect to HDFS. This behavior has been corrected. |
VER-53488 | Catalog Engine | Previously, Vertica did not release catalog memory for objects that were no longer visible to any current or subsequent transaction. The Vertica garbage collector algorithm has been improved. |
VER-61021 | DDL | When ALTER NODE was used to change the MaxClientSessions parameter, the node's state changed from Standby or Ephemeral to Permanent. This issue has been fixed. |
VER-59791 | Client Drivers - JDBC | For lengthy string values, the hash value computed by the JDBC implementation differed from the HASH() function on the Vertica server. This issue has been fixed. |
VER-57757 | Kafka Integration | When using the start_point parameter, the KafkaJSONParser sometimes failed to parse nested JSON data, which caused all rows after the first to be rejected. This issue has been fixed. |
VER-60123 | Client Drivers - ODBC | The Vertica ODBC driver supports up to 133-digit precision for Numeric types bound to a decimal type. Previously, the Vertica ODBC driver threw a data conversion exception when the precision was over the 133-digit limit. Now, the ODBC driver truncates Numeric values with precision over 133 digits. |
VER-53943 | Client Drivers - ADO | An error handling issue sometimes caused the ADO.NET driver to hang when a connection to the server was lost. This problem has been corrected. |
VER-36453 | Client Drivers - ODBC | The COPY LOCAL statement can appear only once in a compound query. It must be the first statement in a compound query. |
VER-60535 | Optimizer | Running a MERGE statement resulted in the error "DDL statement interfered with this statement". This issue has been fixed. |
VER-60887 | Backup/DR |
Backups to S3 failed when both of the following occurred:
This issue has been fixed. |
VER-59314 | Backup/DR | Previously, Python script failures triggered on the remote host during a vbr task could result in the error message "No JSON object could be decoded." Vertica now displays a more meaningful error message. |
VER-58149 | Backup/DR | During a restore, Vertica did not verify that snapshot metadata had been successfully copied to the initiator node before using those files, which resulted in an unhelpful error message. Vertica now reports a more meaningful error when this situation occurs. |
VER-58068 | Scrutinize | Sometimes scrutinize timed out during diagnostic collection, resulting in diagnostic output from a single host instead of from the entire cluster. The scrutinize timeouts have been increased. |
VER-60510 | Kafka Integration | Previously, only pseudosuperuser/dbadmin users could create Kafka schedulers; non-privileged users needed operation privileges granted to them in order to use a scheduler, and the scheduler tables always belonged to pseudosuperuser/dbadmin users. Now, non-privileged Vertica users can run the vkconfig utilities to create and operate a scheduler directly, and the Kafka scheduler's tables automatically belong to the user who created the scheduler. |
VER-59994 | Optimizer |
The default value of MaxParsedQuerySizeMB has changed from 512MB to 1024MB. Previously, the parameter bounded only a certain portion of parse memory; it now bounds all parse memory. As a result, some queries that previously ran successfully may now fail with the error "Request size too big. Please try to simplify the query." This is not a regression. To run such a query successfully, increase the value of MaxParsedQuerySizeMB and reset the session (see the example following this table). |
VER-60042 | Optimizer | When running a query with one or more nodes down, in some cases an inconsistency in plan pruning with buddy plan resulted in an Internal Optimizer error. This issue has been fixed. |
VER-51143 | Backup/DR | Previously, vbr failed on full restore tasks and copy cluster tasks when there was only one database on the cluster and no dbName parameter was specified in the vbr configuration file. This issue has been resolved. |
VER-60665 | Backup/DR | Object restore/replication used to crash a node when restoring/replicating from a backup/snapshot that contained sequences with the default minimum value. This issue is resolved, and such sequences are now restored gracefully with the correct minimum value. |
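The following is a minimal sketch of the remedy noted for VER-59994 above. The SET_CONFIG_PARAMETER call is the generic interface for setting configuration parameters; the value 4096 is illustrative only. Choose a limit that fits your available memory, then reconnect so the new setting takes effect in your session.
=> SELECT SET_CONFIG_PARAMETER('MaxParsedQuerySizeMB', 4096);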
Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.
Issue |
Component |
Description |
VER-61069 | Execution Engine |
In very rare circumstances, if a Vertica process crashes during shutdown, the remaining processes might hang indefinitely. Workaround: The remaining processes can be killed using admin tools. |
VER-59235 | UI - Management Console | MC LDAP user authentication does not support changing the default search path. By design, the default search path is expected to remain unchanged and serves as the base LDAP search path. Change only the user search attribute to retrieve user information from the LDAP server. |
VER-60797 | License |
AutoPass format licenses do not work properly when installed in Vertica 8.1 or older. To replace them with a legacy Vertica license, users need to set AllowVerticaLicenseOverWriteHP=1 (see the example after this table). |
VER-60642 | Data Export |
Export to Parquet using local disk can non-deterministically fail in some situations, with "No such file or directory". Re-running the same export will likely succeed. |
VER-60468 | Recovery |
A transaction that started before a node began recovery is referred to as a dirty transaction in the context of recovery. A recovering node must wait for or cancel such a transaction in order to recover tables modified by the transaction. In rare instances, such a transaction may hang and cannot be canceled. Tables locked by the transaction cannot be recovered, so the recovering node cannot transition to UP. You can usually stop the hung transaction by restarting the node on which it was initiated, assuming that node is not a critical node. In extreme cases, or when the initiator node is a critical node, the cluster must be restarted. |
VER-56679 | Nimbus, SAL | When generating an annotated query, the optimizer does not recognize that the ETS flag is ON and produces the annotated query as if ETS is OFF. If the resulting annotated query is then run when ETS is ON, some hints might not be feasible. |
VER-48041 | Admin Tools |
On some systems, occasionally admintools cannot parse the output it sees while running SSH commands on other hosts in the cluster. The issue is typically transient. There is no known work-around. |
VER-54924 | Admin Tools |
On databases with hundreds of storage locations, admintools SSH communication buffers can overflow. The overflow can interfere with database operations like startup and adding nodes. There is no known work-around. |
VER-48020 | Hadoop | Canceling a query that involves loading data from ORC or Parquet files can be slow if there are too many files involved in the load. |
VER-41895 | Admin Tools |
On some systems admintools cannot parse output while running SSH commands on hosts in the cluster. We do not know the root cause of this issue. In some situations, if the admintools operation needs to run on just one node, there is a workaround. Users can avoid this issue by using SSH to connect to the target node and running the admintools command on that node directly. |
VER-61433 | Hadoop | Under heavy concurrency, when querying ORC files in a Kerberized High Availability HDFS environment, it is possible for the Vertica process on a single node to crash. |
VER-57126 | Data Removal - Delete, Purge, Partitioning |
Partition operations that use a range, for example, COPY_PARTITIONS_TO_TABLE, must split storage containers that span the range in order to complete. For tables partitioned using a GROUP BY expression, such split plans can require a relatively large amount of memory, especially when the table has a large number of columns. In some cases, the partition operation may fail with "ERROR 3587: Insufficient resources to execute plan on pool <poolname>". Workaround: Increase the memorysize or decrease the plannedconcurrency of <poolname> (see the example after this table). Hint: A best practice is to group partitions such that it is never necessary to split storage containers. Following this guidance greatly improves the performance of most partition operations. |
VER-60409 | AP-Advanced, Optimizer |
The APPLY_SVD and APPLY_PCA functions are implemented as User-Defined Transform Functions (UDTFs). A query containing a UDTF that takes many input columns (for example, more than 1000) and also returns many columns may fail with the error "Request size too big" due to the additional memory required during parsing. Workaround: Increase the configuration parameter MaxParsedQuerySizeMB or reduce the number of output columns. For APPLY_SVD and APPLY_PCA, you can limit the number of output columns by setting either the num_components parameter or the cutoff parameter; in practice, cutoff=0.9 is usually enough (see the example after this table). Note that if you increase MaxParsedQuerySizeMB to a larger value, for example 4096, each query you run may use 4 GB of memory during parsing, so running multiple queries at the same time could cause out-of-memory (OOM) errors if your total memory is limited. Refer to the Vertica documentation for more information about MaxParsedQuerySizeMB. |
VER-59147 | AP Advanced |
Using a machine learning (ML) function that accepts a model as a parameter in the definition of a view, and then feeding that view into another ML function as its input relation, may cause failures in some special cases. Workaround: Always prefix the model_name with its schema_name when you use it in the definition of a view (see the example after this table). |
VER-61420 | Data Removal - Delete, Purge, Partitioning | Partition operations, such as move_partitions_to_table, must split storage containers that contain both partitions that match the operation and partitions that do not. Vertica 9.1.0 introduced an inefficiency whereby such a split may separate a storage container into one more storage container than necessary. Any extraneous containers created this way are eventually merged by the Tuple Mover. |
VER-60158 | Client Drivers - JDBC | Using a single connection in multiple threads could result in a hang if one of the threads does a COMMIT or ROLLBACK without joining the other thread first. |
VER-61205 | Basics, Catalog Engine |
If a configuration parameter value in vertica.conf begins with the # character, Vertica crashes with an unhandled error. Workaround: Avoid parameter values that begin with "#" in the vertica.conf file. |
VER-61584 | Nimbus, Subscriptions | An internal assertion, VAssert(madeNewPrimary), can fail. This occurs only while nodes are shutting down or in an unsafe state. |
VER-62000 | UI - Management Console |
When creating an Eon Mode database through Management Console on an AWS cluster, entering a valid communal storage location and selecting IAM Role authentication does not enable the Next button. This issue occurs during database creation only, not when creating an Eon Mode database cluster. Workaround: On the same wizard page, select “Use AWS Key Credentials”, enter enough text to enable the Next button, then select IAM Role authentication. Then click the Next button. |
VER-61362 | Nimbus, Subscriptions |
During cluster formation, if one of the up-to-date nodes is missing libraries and attempts to recover them, the recovery fails with a cluster shutdown. Workaround: Copy the libraries into the node's Libraries/ directory from a peer node. |
VER-61876 | UI - Management Console |
In the Vertica Management Console (MC) in AWS, with some Vertica BYOL licenses, when using Cluster Management to add an instance to a Vertica database cluster, you are prompted to upload a license file even if your Premium Edition license is already installed. In the Add Instance wizard, the message 'Your database exceeds the free Community Edition limits, please upload a valid Premium Edition license.' appears. Workaround: Upload your license file again; the instance and database node are then added. |
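The examples below are minimal sketches of the workarounds referenced in the known issues above. Object names and values are illustrative only and are not taken from the issues themselves.
For VER-60797, setting the configuration parameter through the generic configuration-parameter function:
=> SELECT SET_CONFIG_PARAMETER('AllowVerticaLicenseOverWriteHP', 1);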
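For VER-57126, adjusting the resource pool named in the error message (the pool name and sizes are placeholders; tune them to your workload):
=> ALTER RESOURCE POOL my_pool MEMORYSIZE '8G' PLANNEDCONCURRENCY 4;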
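For VER-60409, limiting the number of APPLY_PCA output columns with the cutoff parameter (the table, columns, and model name are hypothetical):
=> SELECT APPLY_PCA(c1, c2, c3 USING PARAMETERS model_name='my_pca_model', cutoff=0.9) OVER () FROM my_table;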
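For VER-59147, schema-qualifying the model name inside a view definition (all names are hypothetical, and PREDICT_LINEAR_REG stands in for any ML function that accepts a model parameter):
=> CREATE VIEW my_view AS SELECT PREDICT_LINEAR_REG(x1, x2 USING PARAMETERS model_name='my_schema.my_model') AS prediction FROM my_schema.my_table;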
The only warranties for Micro Focus products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Micro Focus shall not be liable for technical or editorial errors or omissions contained herein.
The information contained herein is subject to change without notice.
Confidential computer software. Valid license from Micro Focus required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
© Copyright 2006 - 2019 Micro Focus, Inc.
Adobe® is a trademark of Adobe Systems Incorporated.
Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.
UNIX® is a registered trademark of The Open Group.
Send documentation feedback to HPE