The Release Notes contain the latest information on new features, changes, fixes, and known issues in Vertica 8.1.x.
They also contain information about issues resolved in:
Vertica 8.1.1-12: Resolved Issues
Vertica 8.1.1-8: Resolved Issues
Vertica 8.1.1-7: Resolved Issues
Vertica 8.1.1-6: Resolved Issues
Vertica 8.1.1-5: Resolved Issues
Vertica 8.1.1-4: Resolved Issues
Vertica 8.1.1-3: Resolved Issues
Vertica 8.1.1: Resolved Issues
Vertica 8.1.0-5: Resolved Issues
Vertica 8.1.0-4: Resolved Issues
Vertica 8.1.0-3: Resolved Issues
Vertica 8.1.0-2: Resolved Issues
The Premium Edition of Vertica is available for download at www.vertica.com.
The Community Edition of Vertica is available for download at the following sites:
The documentation is available at http://www.vertica.com/docs/8.1.x/HTML/index.htm.
Hotfixes are available to Premium Edition customers only. Each software package on the www.vertica.com/downloads site is labeled with its latest hotfix version.
Take a look at the Vertica 8.1.1 New Features Guide for a complete list of additions and changes introduced in this release.
As of Vertica 8.1.1-7, 8.1.1 hotfixes support Red Hat Enterprise Linux 7.4 and CentOS 7.4.
Vertica now handles throughput of concurrent queries much more efficiently, yielding significant improvements in execution time.
See more: Performance Improvements
Queries with complex expressions are routinely generated by business intelligence tools, and can occasionally be slow to execute. Query execution can be optimized by identifying and reusing common subexpressions (common subexpression elimination, or CSE). With this release, Vertica provides its own CSE implementation, so evaluation results between expressions can be shared within a single query.
See more: Performance Improvements
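For illustration, a query of the following shape benefits from CSE (the table and column names are hypothetical): the same REGEXP_SUBSTR expression appears in both the SELECT list and the GROUP BY clause, and Vertica can now evaluate it once per row and share the result between the two.

    -- The repeated expression below is a common subexpression.
    SELECT REGEXP_SUBSTR(url, '^https?://([^/]+)', 1, 1, '', 1) AS host,
           COUNT(*) AS hits
    FROM web_clicks
    GROUP BY REGEXP_SUBSTR(url, '^https?://([^/]+)', 1, 1, '', 1);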
Vertica can now back up to and restore from Amazon S3 Standard, for Vertica instances hosted both locally and on Amazon Web Services.
See more: Backup, Restore, Recovery, and Replication
You can now use Management Console to run SQL queries via your browser. Enter ad hoc queries, upload a SQL script, or select previously run queries to get query results in a spreadsheet-like format. And with an in-browser query editor, you can get started running Vertica in the cloud without configuring client drivers and SSH connections.
See more: Management Console
Vertica provides an easy way to apply new IP addresses to nodes in a cluster when the old IP addresses are no longer valid. For example, you can re-map IP addresses if you are running Vertica in a cloud environment and the cloud vendor reassigns them.
See more: Database Management
You can export data from Vertica, either to share it with other Hadoop-based applications or to move lower-priority data from ROS to less-expensive storage. The EXPORT TO PARQUET statement exports a result set as Parquet data.
See more: Apache Hadoop Integration
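A minimal sketch of the statement; the directory path, table, and columns are hypothetical:

    -- Export a result set as Parquet files to the named directory.
    EXPORT TO PARQUET (directory = 'hdfs:///data/sales_2017')
       AS SELECT order_id, region, amount
          FROM sales
          WHERE order_date >= '2017-01-01';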
You can load or reload more precise sets of data from a topic with the Vertica integration with Apache Kafka. Previously, a microbatch always started loading data from the start of a stream, and KafkaSource could end streaming only after a timeout or upon reaching the end of a stream. You can now set an offset in the stream where the microbatch starts loading data, and set KafkaSource to end streaming when it reaches a specific offset.
See more: Apache Kafka Integration
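A hedged sketch of a COPY that starts and stops at explicit offsets. The four-part stream value ('topic|partition|start-offset|end-offset') and the other parameter values are assumptions for illustration:

    -- Load partition 0 of web_topic from offset 5000 through offset 6000.
    COPY public.web_events
       SOURCE KafkaSource(stream = 'web_topic|0|5000|6000',
                          brokers = 'kafka01.example.com:9092',
                          stop_on_eof = TRUE)
       PARSER KafkaJSONParser();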
The Vertica Connector for Apache Spark now supports Spark 2.1. The connector's JAR file is compatible with a specific combination of Spark and Scala versions. Vertica supplies multiple JAR files, one for each supported version combination.
See more: Apache Spark Integration
Vertica machine learning for predictive analytics now includes in-database predictive modeling for classification problems using random forest. You can classify large data sets residing in Vertica and perform multiclass classification on numerical and categorical data to obtain categorical classes.
See more: Data Analysis
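A sketch of training and scoring, assuming the RF_CLASSIFIER and PREDICT_RF_CLASSIFIER functions that accompany this feature; the table, response, and predictor names are hypothetical:

    -- Train a random forest classification model on data stored in Vertica.
    SELECT RF_CLASSIFIER('churn_model', 'customers_train', 'churned',
                         'age, plan_type, monthly_usage'
                         USING PARAMETERS ntree=100);

    -- Score new rows to obtain a categorical class for each.
    SELECT cust_id,
           PREDICT_RF_CLASSIFIER(age, plan_type, monthly_usage
                                 USING PARAMETERS model_name='churn_model')
    FROM customers_new;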
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-65639 | Catalog Engine |
Frequent create/insert/drop table cycles caused a memory leak in the catalog. This issue has been fixed. |
VER-65536 | Data Export |
Export to Parquet using a local disk non-deterministically failed in some situations with a 'No such file or directory' error message. This issue has been fixed. |
VER-65302 | Hadoop |
Vertica prunes Hive-style partitioning directories only if all the directory names are lowercase. Previously, Parquet Export sometimes created Hive-style partitioning directories with names that were not lowercase. This issue has been fixed. |
VER-65030 | Admin Tools |
Adding a large number of nodes in a single operation could lead to an admintools error parsing output. This issue has been fixed. |
VER-65775 | Hadoop |
When Hive partition keys are used, the case of the key is now ignored in the external table definition, COPY statement, and HDFS directory. This behavior is more consistent with Hive. This issue has been fixed. |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-65163 | Catalog Engine |
If vertica.conf set a parameter to a value that started with a hash character (#), Vertica ignored that parameter and logged a warning. This issue has been fixed. |
VER-64828 | UI - Management Console |
Management Console could not import multiple clusters when the host had the same private IP address as the private IP address of a previously imported cluster. This issue has been fixed. |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-64498 | Catalog Engine, Spread |
If a control node and one of its child nodes went down, attempts to restart the second (child) node sometimes failed. This issue has been fixed. |
VER-64638 | Optimizer |
Very large expressions could run out of stack and crash the node. This has been fixed. |
VER-64573 | Tuple Mover |
When executing heavy workloads over an extended period of time, the Tuple Mover was liable to accumulate significant memory until its session ended and it released the memory. This issue has been fixed. |
VER-64816 | UI - Management Console |
The Database Designer wizard in Management Console did not correctly set the default value for design k-safety. This issue has been fixed. |
VER-64495 | Data load / COPY |
The SKIP keyword of a COPY statement was not properly supported with the FIXEDWIDTH data format. This issue has been fixed. |
VER-64290 | UI - Management Console |
Long queries were failing in the Management Console Query Execution page. This issue has been fixed. |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-64307 | Optimizer |
When a projection was created with only the "PINNED" keyword, Vertica incorrectly considered it a segmented projection. This caused optimizer internal errors and incorrect results when loading data into tables with these projections. This issue has been fixed. IMPORTANT: The fix only applies to newly created pinned projections. Existing pinned projections in the catalog are still incorrect and must be dropped and recreated manually. |
VER-63832 | Data Export, Execution Engine, Hadoop |
Occasionally, the EXPORT TO PARQUET command wrote data to incorrect partition directories of data types timestamp, date, and time. This issue has been fixed. |
VER-63606 | Data load / COPY |
In a COPY statement, excessively long invalid inputs to any date or time columns could cause stack overflows, resulting in a crash. This issue has been fixed. |
VER-63905 | UI - Management Console |
A bad SQL grammar exception appeared in the log when importing a Vertica database, version 7.1 or earlier, using Management Console. The issue has been fixed. |
This hotfix addresses the issue below.
Issue |
Component |
Description |
VER-63899 | Data load / COPY, Hadoop |
Queries involving a join of two external tables loading parquet files sometimes caused Vertica to crash. The crash happened in a very rare situation due to memory misalignment. This issue has been fixed. |
VER-63597 | Data load / COPY | Occasionally, a COPY or external table query could crash a node. This issue has been fixed. |
VER-63679 | UDX, Vertica Log Text Search |
The AdvancedTextSearch package leaked memory. This issue has been fixed. |
VER-63717 | UI - Management Console | The jackson-databind library used by MC has been upgraded to 2.9.5 to fix security vulnerabilities. |
VER-63994 | DDL |
ALTER TABLE ... ADD COLUMN ... NOT NULL was transformed into two statements: ALTER TABLE ADD COLUMN and ALTER TABLE ADD CONSTRAINT. The second statement could fail, leaving the new column in your table without the constraint, and produced a confusing rollback message that did not indicate that the column had been added. The fix handles ADD COLUMN NOT NULL in a single statement. |
VER-63267 | UI - Management Console |
The Export Data URL on the Query Monitoring page did not work in Internet Explorer. This issue has been fixed. |
This hotfix addresses the issue below.
Issue |
Component |
Description |
VER-63484 | DDL - Table |
In past releases, Vertica placed an exclusive lock on the global catalog while it created a query plan for CREATE TABLE AS <query>. Very large queries could prolong this lock until it eventually timed out. On rare occasions, the prolonged lock caused an out-of-memory exception that shut down the cluster. This problem was resolved by moving the query planning process outside the scope of catalog locking. |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-63253 | Optimizer |
An Internal Error sometimes occurred in queries with subqueries that contained a mix of outer and inner joins. This issue has been fixed. |
VER-62832 | Admin Tools |
Sometimes, adding many nodes in a single operation would fail with a message about invalid control characters. This issue has been fixed. |
VER-63028 | DDL - Projection |
Some versions of Vertica allowed projections to be defined with a subquery that referenced a view or another projection. These malformed projections occasionally caused the database to shut down. Vertica now detects these errors when it parses the DDL and returns an error message stating that projection FROM clauses can reference only tables. |
VER-63074 | Optimizer - Query Rewrite |
Applying an access policy on some columns in a table significantly impacted execution of updates on that table, even where the query excluded the restricted columns. Further investigation determined that the query planner materialized unused columns, which adversely affected overall performance. This issue has been fixed. |
VER-63288 | Execution Engine |
In some workloads, the memory consumed by a long-running session could grow over time. This issue has been fixed. |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-57030 | Catalog Engine |
Vertica did not previously release the catalog memory for objects that were no longer visible to any current or subsequent transactions. The Vertica garbage collector algorithm has been improved. |
VER-62512 | AP-Advanced, Sessions |
Using an ML function that accepts a model as a parameter in the definition of a view, and then feeding that view into another ML function as its input relation, caused failures in some special cases. This issue has been fixed. |
VER-62973 | Data Load/COPY |
COPY statements using the default parser could erroneously reject rows if the input data contained overflowing floating point values in one column and the maximum 64-bit integer value in another column. This issue has been fixed. |
VER-62854 | Optimizer - GrpBy & Pre Pushdown | Including certain functions, such as NVL, in a grouping function argument resulted in a "Grouping function arguments need to be group by expressions" error. This issue has been fixed. |
VER-60985 | Hadoop |
The ORC Parser skipped rows when the predicate "IS NULL" was applied on a column containing all NULLs. This issue has been fixed. |
VER-62077 | Hadoop |
Hadoop impersonation status messages were not being properly logged. This issue has been fixed to allow informative messages at the default logging level. |
VER-61107 | UI - Management Console |
The MC version was not displayed properly on the home page when Management Console was installed on a Debian or Ubuntu system. This issue has been fixed. |
VER-62883 | Execution Engine |
Loads and copies from HDFS occasionally failed if Vertica was reading data through libhdfs++ when the NameNode abruptly disconnected from the network. In this case, libhdfs++ was being compiled with a debug flag on, causing network errors to be caught in debug assertions rather than normal error handling and retry logic. This issue has been fixed. |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-62509 | Execution Engine, Optimizer |
The way null values were detected for float data type columns led to inconsistent results for some queries with median and percentile functions. This issue has been fixed. |
VER-62474 | Vertica Log Text Search |
Dependent index tables did not have their reference indices updated when the base table's columns were altered but not removed. This caused an internal error on reference. This issue has been fixed so the indices are updated when you alter the base table's columns. |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-61489 | Client Drivers - JDBC |
For lengthy string values, the hash value computed by the JDBC implementation differed from the hash() function on the Vertica server. This issue has been fixed. |
VER-61603 | Data load / COPY |
A file system error when opening a 'rejected data' or 'exceptions' file during a COPY statement would occasionally cause Vertica to crash due to a segmentation fault. This issue has been fixed. |
VER-61995 | Optimizer |
Some instances of type casting on a datetime column that contained a value of 'infinity' triggered segmentation faults that caused the database to fail. This issue has been fixed. |
VER-61211 | Scrutinize |
Scrutinize was collecting the logs only on the node where it was executed, not on any other nodes. This issue has been fixed. |
VER-62074 | Data load / COPY |
When using the REJECTED DATA AS <filename> clause with COPY statements, filename> could be treated as either a directory or a file, and the choice was made independent of user input depending on the number of sources. This could lead to confusion as in the case of GLOBS. With GLOBS, if a glob expands to just one source, <filename> is treated as a file. If a glob expands to multiple sources, it is treated as a directory. This could result in unpredictable behavior depending on the actual files a user is trying to load. This has now been fixed. 1. In REJECTED DATA AS <filename>, if the filename is appended with '/', it is treated as a directory regardless of the number of sources. 2. If the '/' is omitted but <filename> already exists as a directory, it will still be treated as a directory regardless of the number of sources. 3. If the '/' is omitted and <filename> does not already exist, and there are multiple sources, it will be treated as a directory. (This is unchanged behavior, but listed here for completeness) |
VER-62147 | Error Handling, Execution Engine | If you tried to insert data that was too wide for a VARCHAR or VARBINARY column, the error message did not specify the column name. This error message now includes the column name. |
VER-62143 | FlexTable | MapAggregate sometimes used too much memory which could lead to a node down. This issue has been fixed. |
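For VER-62074 above, a hedged sketch of how the trailing slash now pins down the rejected-data target; the paths are hypothetical:

    -- A trailing '/' forces directory treatment, regardless of how many
    -- sources the glob expands to:
    COPY sales FROM '/data/loads/*.csv'
       REJECTED DATA AS '/data/rejects/';

    -- Without a trailing '/', a target that does not already exist is
    -- treated as a file when there is a single source:
    COPY sales FROM '/data/loads/jan.csv'
       REJECTED DATA AS '/data/rejects.txt';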
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-61191 | Backup/DR |
Backing up to S3 failed when both of the following occurred: (1) the backup targeted the root of the S3 bucket, and (2) the backup location reached the restorePointLimit. This issue has been fixed. |
VER-61410 | DDL |
When using ALTER NODE to change the MaxClientSessions parameter, the node's state changed from Standby or Ephemeral to Permanent. This issue has been fixed. |
VER-61347 | Execution Engine |
Running a large query with several analytic functions returned the wrong results. This issue has been fixed. |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-60928 | Optimizer | Grouping on analytic function arguments that were complex expressions sometimes resulted in an internal Optimizer error, causing the database to crash. This issue has been fixed. |
VER-61080 | Client Drivers - ADO |
An issue with VStream in the Vertica client produced a socket connection error-handling message and caused the client to crash. This issue has been fixed. |
VER-60997 | Backup/DR |
Running replication on a schema that contains sequences resulted in an error causing the replication to fail. In addition, it caused a node crash on the secondary cluster. This issue has been fixed. |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-60795 | Optimizer | When running a query with one or more nodes down, an inconsistency in plan pruning with the buddy plan sometimes resulted in an internal Optimizer error. This issue has been fixed. |
VER-60732 | Kafka Integration |
One or more nodes crashed when using COPY SOURCE KafkaSource() with executionparallelism set to greater than 1. This issue has been fixed. |
VER-60858 | Kafka Integration | Performing a COPY with the KafkaAVROParser using a schema registry with multiple partitions no longer causes Vertica to fail. |
VER-60831 | Execution Engine, Hadoop |
Attempting to load data from Parquet files sometimes resulted in an internal error. This issue has been fixed. |
This hotfix addresses the issue below.
Issue |
Component |
Description |
VER-60139 | DDL | Processing failed when running ALTER TABLE partition or ANALYZE STATS on an external table. This issue has been fixed. |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-60151 | Optimizer | When running five or more queries that contained a large number of CASE statements, Vertica ran out of memory and displayed an out-of-memory error message. This issue has been fixed. |
VER-60223 | Catalog Engine | After renaming a table and changing the schema to which the table belongs, the TABLE_NAME field in the TABLE_CONSTRAINTS system table was not updated. This issue has been fixed. |
VER-60177 | Execution Engine, FlexTable |
An insert query caused multiple nodes to crash with a segmentation fault. This issue has been fixed. |
VER-60140 | Backup/DR |
Previously, copycluster would overwrite the target database's spread control mode configuration setting to be the same as the source database's. When running copycluster from an on-premises database configured to use the default "broadcast" setting to a database running on AWS EC2, the database on EC2 could not start, because EC2 requires a "point-to-point" setting. This issue has been resolved by preserving the spread control mode setting of the destination database during copycluster. Workaround: Working around the issue requires using the catalog editor on the destination database nodes to alter the spread control mode to "point-to-point." This requires support engagement. |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-58488 | Hadoop | When NameNode URLs in a high-availability cluster configuration used hostnames that could not be resolved to an IP address, the Vertica node attempting to access HDFS crashed with a segmentation fault. This issue has been fixed. |
VER-52143 | Kafka Integration | Adding an operator to a Kafka scheduler produced a null pointer exception. This issue has been fixed. |
VER-60035 | Security |
Previously, the network port connection used by the Management Console to communicate with nodes accepted the older DES and 3DES ciphers, which have a known vulnerability. This connection no longer accepts connections that use these ciphers. |
VER-60121 | Hadoop, SAL, Sessions |
Previously, Vertica accumulated sockets stuck in a CLOSE_WAIT state on non-Kerberized WebHDFS connections, indicating that the socket had been closed on the HDFS side but Vertica had not called close(). This issue has been fixed. |
VER-60012 | Optimizer |
There was an internal issue with AnalyzeRowCount that caused a node to crash. To work around this issue, disable the AnalyzeRowCount configuration parameter (a likely form of the command is sketched after this table). |
VER-59992 | DDL - Table |
Dropping an Identity column resulted in loss of all the data in the table. This issue has been fixed. |
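The workaround command for VER-60012 was elided above; a likely form, assuming the standard SET_CONFIG_PARAMETER mechanism used elsewhere in these notes:

    => SELECT SET_CONFIG_PARAMETER('AnalyzeRowCount', 0);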
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-59594 | DDL | The database crashed when attempting to access the catalog when a catalog snapshot didn't exist. This issue has been fixed. |
VER-59755 | Recovery | If RecoverByContainer was turned off, live aggregate projection (LAP) data could be skipped during recovery of a node. This issue has been fixed: the node is now prevented from recovering if it detects that a LAP cannot perform a recover-by-container, until you turn RecoverByContainer back on. |
VER-59825 | Execution Engine |
At times, Vertica crashed when running queries that used a specific combination of features. This issue has been fixed. |
VER-59229 | Admin Tools, UI - Agent, UI - Management Console |
After a Vertica cluster host is restarted, a new database could not be created from the Vertica Management Console. This issue has been fixed. |
VER-59737 | Data load / COPY, Security | Use of the COPY option REJECTED DATA AS TABLE incorrectly logged an error message. Use of this option no longer generates an error. |
VER-59096 | Execution Engine | Queries with a combination of LIMIT and UNION ALL clauses were hanging during execution. This issue has been fixed. |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-58924 | Optimizer | When inserting data to a flattened table from a SELECT query (i.e., INSERT INTO ... SELECT), the insert query incorrectly populated NULL into the column with a DEFAULT query, if implicit type casting was needed. This issue has been fixed. |
VER-58978 | License | Running License Audit tasks took several hours and negatively affected performance. This issue has been resolved with several performance and optimization improvements. |
VER-59112 | Hadoop | Queries to external tables that use the hdfs scheme intermittently failed with an empty error message. This issue has been fixed. |
VER-56685 | AP-Advanced, Database Designer Core | Analyze_correlations() sometimes produced an error if run on columns with extremely high distinct value counts. This issue has been fixed. |
VER-59282 | Hadoop | Vertica implemented a caching optimization that reduces the number of calls made to the HDFS Namenode. |
VER-59287 | Optimizer | Previously, querying the projections_column table sometimes incorrectly displayed the most recent analyze event as FULL if the rowcount of the table in question changed. Querying the projections_column table will now correctly display the most recent analyze event as ROWCOUNT after a change in table rowcount. |
VER-59121 | Resource Manager, Security | After being granted usage and select privileges on a schema and table in a specific resource pool, the user was denied permission when trying to access the schema and table. This was because Vertica was checking permissions on secondary resource pools. This issue has been fixed. |
VER-59369 | AP-Advanced | The function APPROXIMATE_COUNT_DISTINCT_OF_SYNOPSIS is not backward compatible with 7.x. A workaround is provided that can optionally make APPROXIMATE_COUNT_DISTINCT_OF_SYNOPSIS behave as it did in 7.x. |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-57755 | Security | Access to a view was denied to its owner unless the owner was explicitly granted SELECT privileges to the view's base table. With this fix, access to a view can be enabled through privileges that its base table inherits from the container schema. |
VER-57707 | Hadoop | Queries of ORC or Parquet data with many partitions (tens of thousands) were slow. More intelligent partition pruning has improved the performance. |
VER-57557 | DDL | If you moved a table with foreign key constraints to another schema, the constraints remained associated with the previous schema. If you dropped the previous schema, the foreign key constraints were also dropped from the table. With this release, a table's foreign key constraints move with the table to its new schema. |
VER-57777 | Optimizer - LAP, Recovery | Previously, all live aggregate projections were marked out-of-date when the cluster underwent an administrator-specified recovery. This issue has been resolved so that only live aggregate projections whose anchor tables' data was modified since the recovery epoch would be marked out-of-date during recovery. |
VER-57566 | Optimizer | The query planner now follows join inner/outer inputs specified by the syntactic optimizer during outer join to inner join conversion. |
VER-58590 | Execution Engine, Eon | Recovering a node from scratch was restricted to a single-threaded storage transfer. This has been fixed to use the recovery resource pool's MAXCONCURRENCY setting. |
VER-58177 | DDL - Table | When creating external tables, you could not use default expressions. This issue has been resolved. |
VER-57327 | AP - Advanced |
Upgrading from Vertica 7.2 to 8.1.1-2 caused a loss of historical data due to changes in the data format of synopsis objects. Use the aggregate function "public.ApproxCountDistinctOfSynopsisDeprecated" to process existing synopsis objects. |
VER-56780 | AP - Advanced | K-means clustering limited the number of clusters to 1,000. This issue has been resolved; the limit is now 10,000. |
VER-58838 | Catalog Engine | The Vertica garbage collector algorithm has been modified to prevent query slowdown. |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-58082 | Client Drivers - ADO | Transferring decimal data from Vertica using an ADO.NET client sometimes produced incorrect results if the BinaryTransfer connection parameter was set to true. |
VER-58098 | Data load / COPY | The storage formats of some columns in Vertica primary key and foreign key tables negatively impacted the performance of COPY. These storage formats have been changed to improve performance. |
VER-57862 | DDL | CREATE VIEW statements sometimes hung and caused the server to fail if the view selected from HDFS sources. This issue has been resolved. |
VER-58281 | Hadoop, SAL | Previously, querying external tables sometimes failed with an error if HDFS was redistributing data blocks at the same time the query tried to access a block. With this fix, the query will first attempt to read a replica of the block from a different datanode before failing. |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-57596 | AP-Advanced, Catalog Engine | As of this release, you can create models only in public and user-created schemas (excluding HCatalog). Attempts to create models elsewhere return an error. If the upgrade finds a model that was created in a schema that does not conform with this restriction, it moves the model to the public schema. |
VER-57878 | Hadoop, Security | This fix adds SSL support for HCatalogConnector with HiveServer2. |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-57154 | Backup/DR | After replicating statistics, Vertica failed to send them to the target cluster. |
VER-57030 | Catalog Engine | Vertica did not previously release the catalog memory for objects that were no longer visible to any current or subsequent transactions. The Vertica garbage collector algorithm has been improved. |
VER-55904 | Client Drivers - Misc | This update fixes an issue in which SSRS in SQL Server 2016 stopped working or failed to launch if you installed the Microsoft Connectivity Pack. This fix reintroduces Vertica client driver support for SSRS in SQL Server 2016. |
VER-57428 | DDL - Projection, DDL - Table | Copying a table more than 99 times caused projection naming conflict errors. |
VER-57207 | Execution Engine | Case expressions of type "case <Const> when <condition> then <result>..." repeated in the same query sometimes caused server failure. |
VER-57383 | Security | Vertica incorrectly allowed users without the correct permissions to read and delete from the RejectionTableData directory. |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-57082 | Client Drivers - Misc | The JDBC driver did not correctly handle timestamps with infinity values. |
VER-56719 | Data load / COPY | In rare cases, during parallel loading the COPY command did not correctly load all data, due to unnecessary SKIP keywords. |
VER-56915 | Data Removal - Delete, Purge, Partitioning | In Vertica 8.1.1-2, loads and queries on certain tables failed with "DDL statement (ExampleCatalogEvent) interfered with this statement" even though no such DDL statement had executed recently. This was caused by replication copying some catalog events that should not have been copied. Replication has been fixed and upgrade removes such extraneous events. |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-56298 | Admin Tools | This fix resolves an issue in which admintools incorrectly interpreted login messages using the word "password" as password prompts. |
VER-56646 | License |
This update adds the configuration parameter EnableUnlimitedLicenseAudit, which enables or disables internal system audits for unlimited licenses. This parameter is enabled (set to 1) by default. To disable internal system audits for an unlimited license, set EnableUnlimitedLicenseAudit to 0. Example to disable: => SELECT SET_CONFIG_PARAMETER('EnableUnlimitedLicenseAudit',0); |
VER-55903 | Security | Performing CREATE OR REPLACE VIEW previously did not preserve grants on the view. |
VER-56548 | UI - Management Console | When the username of the Management Console user was different from that of the database user, MC did not properly close connections to Vertica. |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-56173, VER-56287 | AP-Geospatial | In some circumstances, the database failed during expression analysis for the following user-defined scalar function: STV_Intersect(x, y USING PARAMETERS INDEX = 'index_name') |
VER-56107 | Kafka Integration | This fix resolves an issue in which favroparser and avroparser did not populate values for union types due to an avro-cpp library upgrade in Vertica 8.1. In addition, with this fix, if the avro source schema has fewer columns than the Vertica destination schema, Vertica now generates an error instead of placing a NULL in that field. (Note that if the source has more columns than the destination, Vertica currently drops the additional columns.) |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-55863 | Client Drivers – Python | Previously, the Vertica Python library only displayed errors from the first query in a compound statement when execution completed. With this fix, you can call the nextset() method on the cursor object until None is returned in order to view all errors for subsequent queries in the statement. |
VER-55590 | Database Designer Core | ILM statements (such as SWAP_PARTITIONS_BETWEEN_TABLES, MOVE_PARTITIONS, COPY_PARTITIONS, and COPY_TABLE) caused the Vertica server to go down if the target table had enabled primary key constraints. |
VER-55788 | Kafka Integration | The KafkaAVROParser incorrectly wrote byte data types as NULL. |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-55502 | Cloud - Amazon, Data load / COPY | When loading files from S3, reusing the authentication manager object from a previous request sometimes caused an invalid request signature, and the load failed with an error. |
VER-55787 | Execution Engine | A Vertica 8.1.0 feature improved performance by sharing work for multiple partitions of data when evaluating multiple percentile functions. In some cases, Vertica did not correctly reset a flag when moving from one partition to the next. This issue has been fixed. |
VER-55649 |
Hadoop |
In rare circumstances, if multiple users attempted to authenticate with HDFS concurrently, an unnecessary assert could cause a core dump. |
VER-55648 | Hadoop | The locking mechanism shared between Vertica and libhdfs++ did not protect against certain race conditions if there was an error while accessing HDFS. |
VER-55583 |
Hadoop |
When using the HCatalog Connector, non-superusers can now read from an HCatalog schema when USAGE is granted. |
VER-55214 | Kafka Integration | NOT NULL violations rolled back COPY statements and prevented the Kafka scheduler from updating its progress in the stream, causing loads to hang. |
VER-55640 | Security | Granting a user access to a view caused permission errors if the view had a subquery or UNION clause that queried tables that user did not have access to. |
VER-55808 | Tuple Mover | The Tuple Mover incorrectly held transaction locks until the transaction completed, instead of releasing when the mergeout completed. |
To see a complete list of additions and changes introduced in this release, refer to the Vertica 8.1.1 New Features Guide.
Issue |
Component |
Description |
VER-51180 |
AP-Advanced |
Improved the performance of APPROXIMATE_COUNT_DISTINCT and APPROXIMATE_COUNT_DISTINCT_SYNOPSIS when the query is used with the GROUP BY clause. |
VER-42417 |
Admin Tools |
Vertica has improved admintools error message handling. Specifically, errors that caused admintools to issue the generic error message "'NoneType' object has no attribute '__getitem__'" now include messages specific to the actual error. |
VER-53756 |
Admin Tools |
This fix resolves an issue that occurred when setting parameter MaxParsedQuerySizeMB. |
VER-47034 |
Backup/DR |
Vertica now prevents backup and restore operations from being terminated by firewall timeout intervals. |
VER-48074, VER-53456 |
Backup/DR |
The vbr task listbackup now properly displays backups taken when a node is down or when the cluster contains a standby node. |
VER-52947 |
Backup/DR |
When backing up to shared storage, it was possible for vbr to treat the same actual location on a shared volume as different backup locations, which could result in backup failure when trying to delete the same file multiple times. |
VER-53007 |
Backup/DR |
Restoring objects containing grouped ROSes previously failed. |
VER-53137 |
Backup/DR |
This fix removes an incorrect warning that appeared during vbr replication. |
VER-52834 |
Catalog Engine |
In rare cases, updating a table that had failed to commit caused an assertion due to incomplete catalog cleanup. |
VER-53843 |
Catalog Engine |
Vertica previously stored full width padded strings as max values, resulting in unnecessarily large catalogs that could cause node failure. |
VER-44716 |
Client Drivers - JDBC |
Previously, the JDBC driver threw an exception when reading a BC-era date. It now correctly returns a date object. Vertica recommends that for dates outside the positive yyyy-mm-dd range, applications should rely on a Java DateFormat to render the value. |
VER-48554 |
Client Drivers - JDBC |
Previously, a JDBC routable query would incorrectly return metadata information of identically named tables from multiple schemas irrespective of whether you had specified a schema. |
VER-54149 |
Client Drivers - JDBC |
If JDBC received multiple warnings while running routable queries, the JDBC client sometimes hung. |
VER-54315 |
Client Drivers - Misc |
Due to a non-deterministic error, connecting to Vertica using the 8.1 ADO connector previously failed with heap corruption. |
VER-45615 |
Client Drivers - ODBC |
With Perl ODBC clients, Vertica allowed a forked process (a child process) to drop the parent connection to the server when the child process completed execution and exited. Vertica would allow this behavior regardless of the setting of the Perl DBI AutoInactiveDestroy attribute. For those scenarios where you do not want the parent connection to close, Vertica has added a new vertica.ini parameter, CleanupInForkChild. The parameter, when set to 1, tells Vertica to honor the setting of the Perl AutoInactiveDestroy attribute. If the Perl attribute is set to 1, and the Vertica parameter CleanupInForkChild is set to 1, Vertica does not drop the parent connection upon child process completion. |
VER-53309 |
Client Drivers - VSQL |
If you asked the vsql client to run a COPY statement that was over 30KB long, it returned the following error and closed the connection to the database: lost synchronization with server: got message type "F", length XXXXX. The vsql client now allows COPY statements longer than 30KB. |
VER-52757 |
Cloud - Amazon |
COPY loads from S3 failed with an error if authentication occurred more than 15 minutes before the HTTP GET request was made. |
VER-52731 |
DDL |
Previously, an error occurred when recreating a table using export_objects if a column’s default expression referenced another column that appeared logically prior to it in the table’s definition. |
VER-54144 |
DDL |
An identity column with no associated sequence sometimes caused the database to fail when exporting objects. |
VER-49677 |
DDL - Projection |
With this fix, dropping a primary key constraint also drops the supporting key constraint projections, if they are no longer necessary. |
VER-52760 |
DDL - Projection |
Vertica prefixed the table name in projections created as part of a CREATE TABLE AS statement. Vertica no longer does so if the target schema of the statement is different from the schema of the source table. |
VER-50850 |
Data Removal - Delete, Purge, Partitioning |
Messages in the error_messages system table about slow deletes now include a projection name. |
VER-45466 |
Database Designer Core |
The Database Designer function DESIGNER_DESIGN_PROJECTION_ENCODINGS returned an error if you ran it on a projection where all the fields were already encoded. This prevented users from reevaluating projection encodings. DESIGNER_DESIGN_PROJECTION_ENCODINGS now has a fourth Boolean parameter. If this parameter is set to true, DESIGNER_DESIGN_PROJECTION_ENCODINGS ignores existing encodings in a projection where all columns are already encoded and returns recommendations (see the sketch after this table). |
VER-53170 |
Database Designer Core |
Running Database Designer on tables that had multiple enabled constraints on the same sets of columns sometimes produced inappropriate projection layouts. |
VER-52412 |
EE |
On SUSE 11 SP3, the Vertica service did not automatically restart after a host restart. This issue has been resolved. |
VER-51154 |
Error Handling |
This fix resolves an issue in which the server sometimes silently exited after a log rotate. |
VER-43818 |
Execution Engine |
Session sockets were occasionally blocked indefinitely while awaiting client input or output for a given query. You can now set a grace period to handle session socket blocking. You set the grace period for the database or a node through the configuration parameter BlockedSocketGracePeriod, and at the session level through SET SESSION GRACEPERIOD (see the sketch after this table). |
VER-53387 |
Execution Engine |
Previously, if ephemeral or execute nodes were present in your cluster, Vertica incorrectly created many unnecessary key constraint projections. |
VER-53997 |
Execution Engine |
The datatype of the column VIEW_DEFINITION in system table v_catalog.views has been changed from varchar(65000) to varchar(32000). |
VER-54628 |
Execution Engine |
In one rare case, Vertica incorrectly handled a failed memory allocation system call. In this release, this has been fixed. |
VER-54949 |
Execution Engine |
The issue where a fatal error occurred when a rare system memory call failed has been resolved. Vertica now reports the error rather than crashing the system. |
VER-52707 |
FlexTable |
Moving flex tables did not also move their corresponding key projections and views. |
VER-45722 |
Hadoop |
Partition pruning did not work if queries included the predicates IN or NOT IN. |
VER-53164 |
Hadoop |
Queries of ORC or Parquet data with many partitions (tens of thousands) were slow. More intelligent partition pruning has improved the performance. |
VER-53244 |
Data load / COPY Hadoop |
Previously, predicate pushdown in the Parquet and ORC parser did not support predicates that used IN / NOT IN clauses. This fix adds support. |
VER-53454, VER-52934 |
Hadoop |
The locking mechanism shared between Vertica and libhdfs++ did not protect against certain race conditions if there was an error while accessing HDFS. |
VER-53633 |
Hadoop |
Including a predicate when querying ORC files in an evolved schema caused the database to fail. |
VER-53680 |
Hadoop |
In 8.0.1, querying ORC and PARQUET external tables with integer partition columns produced an error if the partition value equaled 2. Integer partition values are now correctly interpreted. |
VER-54613 |
Hadoop |
Loading from a highly-available Hadoop cluster with Kerberos sometimes caused COPY to fail to authenticate with HDFS and appear hung while exhausting its retry attempts. |
VER-53205 |
Installation Program |
After Vertica upgrades, pre-existing storage IDs are updated the first time a backup occurs. After an upgrade to Vertica version 7.2.x or above, database failure or data corruption sometimes occurred if a storage ID upgrade happened at the same time as a data management operation, such as repartition or mergeout. |
VER-51104 |
Kafka Integration |
Due to a race condition in the librdkafka library, a partially destroyed topic object was used when loading topic metadata and caused Vertica to fail. This fix moves a read-write lock to prevent this condition from occurring. |
VER-51754 |
Kafka Integration |
The Kafka scheduler logger incorrectly attempted to insert log messages into the kafka_events system table under the default config schema, even if a user specified a different schema. |
VER-53302 |
Kafka Integration |
You can start a backup Kafka scheduler to provide high availability for your Kafka data load. In the past, when the system's default lock timeout was set to a low value (i.e., several seconds), this scheduler could exit with an error similar to this: [Vertica][VJDBC](5156) ERROR: Unavailable: initiator locks for query - Locking failure: Timed out X locking Table:config_schema.stream_lock. S held by [user username (SELECT scheduler_id FROM "config_schema".stream_lock)]. Your current transaction isolation level is SERIALIZABLE. This error occurred as the backup scheduler waited in the background for the primary scheduler to fail. Frequent lock timeouts are expected when the default lock timeout is low and the primary scheduler is still active. The background Kafka scheduler now discards these frequent timeout errors and continues to wait in the background until it sees that the primary Kafka scheduler has failed. |
VER-18513 |
Optimizer |
A query plan's GROUPBY HASH clause specified unnecessary resegmentation in cases where the grouping key matched the segmentation expression. This issue has been resolved. |
VER-23867, VER-48649 |
Optimizer |
Set operations such as UNION returned an error that NULL was incompatible with other data types. These operations now handle NULL as compatible with all data types, without requiring explicit casting. |
VER-49113 |
Optimizer |
The COPY_ExplainPlans_TABLE.sql file provided for users to load Data Collector explain plan logs failed to properly escape the contents of the path_line column. This could result in data being rejected during the load. This script now properly escapes the column's contents. |
VER-53602 |
Optimizer |
This fix improves complex query performance during the query planning phase. |
VER-53693 |
Optimizer |
Previously, querying a table while some partitions of it are being dropped or swapped could result in the query failing with a "DDL interfered" error. Now, Vertica retries these queries automatically (up to three times by default) before the query fails. |
VER-53742 |
Execution Engine Optimizer |
Executing COUNT() on all rows in a large table with RLE columns performed more slowly than executing COUNT() on an RLE column in the same table. With this release, COUNT() performance is equivalent for both queries: SELECT COUNT(*) FROM t1; SELECT COUNT(rle-column) FROM t1; |
VER-53923 |
Optimizer |
Queries with multiple distinct aggregates grouped on a subquery containing another subquery sometimes caused the server to fail. |
VER-53962 |
Optimizer |
If a table has a large number of projections, queries on that table may execute slower than expected. Performance of these queries has been improved. |
VER-54837 |
Optimizer |
Under certain circumstances, running UDAFs resulted in an Internal Optimizer Error. This problem has been resolved. |
VER-49775 |
Optimizer - Plan Stability |
Queries now support EARLY_MATERIALIZATION hints, which can qualify any number of tables in the same query. Typically, the query optimizer delays materialization until late in the query execution process. This hint overrides any choices that the optimizer might otherwise make (see the sketch after this table). |
VER-52672 |
Optimizer - Plan Stability |
If used with multiple commands, a client connection using the vsql -c option sometimes generated an error when retrying queries. |
VER-54801 |
Optimizer - Query Rewrite |
This fix changes the data type for vs_resource_pool_defaults.runtimecap from timestamptz to interval. |
VER-50287 |
Optimizer - Statistics and Histogram |
In earlier versions, ANALYZE_ROW_COUNT was reported to incur excessive memory usage, which adversely affected database performance. ANALYZE_ROW_COUNT was refactored so execution is divided into multiple chunks, where each chunk analyzes a different subset of tables. A custom allocator was also implemented, which allocates and deallocates memory more efficiently. Collectively, these improvements have improved this function's overall performance. |
VER-52207 |
Optimizer - Statistics and Histogram |
Individual nodes occasionally failed due to excessive memory usage by ANALYZE_ROW_COUNT, which was refactored to address memory issues related to issue VER-50287. This work was extended towards resolution of VER-52207. |
VER-51674 |
Recovery |
The initial phase of a recovery by table is no longer slow in instances with thousands of tables. This issue had also briefly affected other concurrent cluster operations. |
VER-53425 |
Recovery |
If Vertica attempted to recover a table with out-of-date projections after a partition swapping operation, Vertica incorrectly dropped ROSes in the swapped partition, causing projection row count mismatch. |
VER-43211 |
Security |
The issue where you could not access existing views after performing a Vertica upgrade has been resolved. Users can access existing views if respective permissions exist. |
VER-53040 |
Security |
The issue where executing REVOKE (Role) ran for several minutes (as opposed to subseconds previously) has been resolved. |
VER-51280 |
Third Party Tools Integration |
The Kafka data source did not correctly handle an error that occurred during loading or parsing by the KafkaAVROParser. This sometimes caused the Vertica process to fail. With this fix, the load fails appropriately if the error occurs. |
VER-53685 |
AP-SparkIntegration Third Party Tools Integration |
In some cases, Vertica can store values in NUMERIC columns that have greater precision than the column's definition allows. In the past, when the Spark Connector sent these higher-precision values, they could result in Spark throwing an exception similar to the following: org.apache.spark.SparkException: Task failed while writing rows. at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:272) . . . The Spark Connector now ignores the precision Vertica defines for NUMERIC columns. Instead, it defines all NUMERIC columns in the data frames it sends to Spark as having the maximum precision that Spark allows. |
VER-51036 |
Transactions |
When using the READ COMMITTED isolation level with savepoints, the transaction's isolation level changed to SERIALIZABLE, blocking other queries. This issue has been resolved. |
VER-54691 |
UI - Agent |
Management Console unnecessarily performed a daily license audit. |
VER-40185 |
UI - Management Console |
In rare circumstances, Management Console did not load the Database Designer page and instead displayed a "String index out of range" error. |
VER-44817 |
Execution Engine UI - Management Console |
The transaction_id and statement_id columns were added to the output of the dc_requests_completed catalog monitoring table. |
VER-46923 |
UI - Management Console |
In Management Console, non-superusers were unable to upload queries to the database designer. In addition to this fix, administrators can now adjust the number of queries that a non-superuser can include in the file to be uploaded to the design. The default value is 50. To change the limit, open the file /opt/vconsole/config/console.properties, and edit the following parameter: dbd.nonadminuser.fileupload.queries.limit |
VER-52981 |
UI - Management Console |
If a Management Console user with insufficient credentials navigated to the Query Monitoring page, Management Console returned an incorrect error in Chrome. |
VER-53261 |
UI - Management Console |
Management Console encountered JavaScript errors on the Overview page if the browser language was set to Japanese. |
VER-54238 |
UI - Management Console |
In the Queries detail chart in Management Console, the elapsed time column sorted rows incorrectly. |
VER-45839 |
Undecided |
Due to non-time-based DC table retention rates or server failures, missing records could cause inconsistencies between DC_REQUESTS_ISSUED and DC_REQUESTS_COMPLETED. |
VER-39548 |
Optimizer Usability |
CREATE TABLE AS statements that included non-stable functions occasionally failed with the following error: "Cannot use meta function or non-deterministic function in SEGMENTED BY expression" |
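Several of the fixes above introduce user-visible syntax (VER-43818, VER-45466, and VER-49775). The sketch below shows likely invocations; the object names and parameter values are hypothetical, so check the SQL Reference for the exact forms:

    -- VER-43818: set a session-level grace period for blocked client sockets.
    SET SESSION GRACEPERIOD '5 minutes';

    -- VER-45466: the fourth Boolean argument asks the function to reevaluate
    -- projections whose columns are already encoded (deploy=false here).
    SELECT DESIGNER_DESIGN_PROJECTION_ENCODINGS('store.orders_proj',
                                                '/tmp/encodings.sql',
                                                FALSE, TRUE);

    -- VER-49775: table-level hint that forces early materialization.
    SELECT s.store_key, SUM(f.amount)
    FROM store_fact f /*+EARLY_MATERIALIZATION*/
    JOIN store_dim s ON s.store_key = f.store_key
    GROUP BY s.store_key;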
Take a look at the Vertica 8.1.x New Features Guide for a complete list of additions and changes introduced in this release.
Highly normalized database designs can incur significant query overhead; however, maintaining an additional redundant denormalized schema for query performance has its own administrative costs. Vertica now includes flattened tables to address this issue. Flattened tables include columns that get their values by querying other tables. Operations on the source tables and flattened table are decoupled; changes in one are not automatically propagated to the other. This minimizes the overhead that is otherwise typical of denormalized tables.
See more: Query Optimization
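A minimal sketch of a flattened table, assuming the SET USING column syntax and the REFRESH_COLUMNS function described in the documentation; all names are hypothetical:

    CREATE TABLE orders_flat (
        order_id  INT,
        cust_id   INT,
        -- Denormalized column whose values come from another table:
        cust_name VARCHAR(60) SET USING (SELECT name FROM customers c
                                         WHERE c.id = cust_id)
    );

    -- Changes in customers do not propagate automatically; refresh on demand:
    SELECT REFRESH_COLUMNS('orders_flat', 'cust_name');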
Vertica now supports the use of a Confluent schema registry with the KafkaAVROParser. By using a schema registry, you enable the Avro parser to parse and decode messages written by the registry and to retrieve schemas stored in the registry. In addition, a schema registry enables Vertica to process streamed data without sending a copy of the schema with each record.
See more: Apache Kafka Integration
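A hedged sketch of a load that decodes Avro messages against a registry; the schema_registry_url parameter name and all values are assumptions for illustration:

    COPY public.clicks
       SOURCE KafkaSource(stream = 'clicks|0|0',
                          brokers = 'kafka01.example.com:9092',
                          stop_on_eof = TRUE)
       PARSER KafkaAVROParser(schema_registry_url = 'http://registry.example.com:8081');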
The Apache Kafka integration uses a user-defined source and COPY commands to load data directly from Kafka topics into Vertica tables. You can now specify the executionparallelism parameter in the KafkaSource UDL. Use this parameter to throttle the number of threads used to process any COPY statement. It also increases the throughput of short queries issued in the pool, especially when the queries are executed concurrently.
See more: Apache Kafka Integration
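A sketch showing the parameter; the thread count and surrounding clauses are illustrative:

    COPY public.web_events
       SOURCE KafkaSource(stream = 'web_topic|0|0',
                          brokers = 'kafka01.example.com:9092',
                          stop_on_eof = TRUE,
                          -- cap this COPY at four processing threads:
                          executionparallelism = 4)
       PARSER KafkaJSONParser();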
The CREATE FLEX TABLE AS statement now allows you to create a flex table from the results of a query.
See more: Creating a Flex Table from Query Results
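A minimal sketch (names hypothetical):

    -- Create a flex table from a query's result set.
    CREATE FLEX TABLE recent_events AS
       SELECT * FROM events WHERE event_time > NOW() - INTERVAL '7 days';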
The Vertica Connector for Apache Spark now supports Spark 2.0. Spark 2.0 provides API stability, SQL 2003 support, subquery support, improved ORC and Parquet performance, and unification of DataFrames with Datasets.
See more: Apache Spark Integration
Exact median and exact percentile functions can be slow on extremely large data sets, because the function must perform a full table scan to calculate exact values. You can now use the aggregate function APPROXIMATE_PERCENTILE to compute approximate percentiles on large data sets. Based on the t-digest algorithm, this function is well suited to finding patterns in data, while running faster and using less memory than the exact percentile functions.
See more: APPROXIMATE_PERCENTILE
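A sketch of the aggregate; the table and column names are hypothetical, and the percentile parameter takes a value between 0 and 1:

    -- Approximate 95th-percentile latency per region.
    SELECT region,
           APPROXIMATE_PERCENTILE(latency_ms USING PARAMETERS percentile = 0.95)
    FROM requests
    GROUP BY region;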
Vertica on AWS now offers sample Tableau and Logi workbooks. These workbooks come pre-built with example clickstream data to illustrate how Vertica analysis improves digital marketing.
See more: AWS Marketplace
A critical security vulnerability has been found. An unauthenticated entity with the ability to establish a connection to the client socket can reset the password of a known database user. This security vulnerability has been resolved in 8.1.0-1. Vertica strongly recommends you upgrade to this version as soon as possible.
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-54872 |
Admin Tools |
Vertica has improved admintools error message handling. Specifically, errors that caused admintools to issue the generic error message "'NoneType' object has no attribute '__getitem__'" now include messages specific to the actual error. |
VER-54667 |
Catalog Engine |
Vertica previously stored full width padded strings as max values, resulting in unnecessarily large catalogs that could cause node failure. |
VER-55068 |
Client Drivers - JDBC |
Previously, the JDBC driver threw an exception when reading a BC-era date. It now correctly returns a date object. Vertica recommends that for dates outside the positive yyyy-mm-dd range, applications should rely on a Java DateFormat to render the value. |
VER-54851 |
Client Drivers - ADO Client Drivers - Misc |
Due to a non-deterministic error, connecting to Vertica using the 8.1 ADO connector previously failed with heap corruption. |
VER-55020 |
DDL |
An identity column with no associated sequence sometimes caused the database to fail when exporting objects. |
VER-55023 |
Execution Engine |
In extremely rare circumstances, row count mismatch occurred after a delete operation committed. |
VER-53738 |
Hadoop |
In 8.0.1, querying ORC and PARQUET external tables with integer partition columns produced an error if the partition value equaled 2. Integer partition values are now correctly interpreted. |
VER-55069 |
Hadoop |
Loading from a highly-available Hadoop cluster with Kerberos sometimes caused COPY to fail to authenticate with HDFS and appear hung while exhausting its retry attempts. |
VER-55252 |
Hadoop |
Queries of ORC or Parquet data with many partitions (tens of thousands) were slow. More intelligent partition pruning has improved the performance. |
VER-55291, VER-55295 |
Hadoop |
The locking mechanism shared between Vertica and libhdfs++ did not protect against certain race conditions if there was an error while accessing HDFS. |
VER-55135 |
Kafka Integration |
You can start a backup Kafka scheduler to provide high availability for your Kafka data load. In the past, when the system's default lock timeout was set to a low value (i.e., several seconds), this scheduler could exit with an error similar to this: [Vertica][VJDBC](5156) ERROR: Unavailable: initiator locks for query - Locking failure: Timed out X locking Table:config_schema.stream_lock. S held by [user username (SELECT scheduler_id FROM "config_schema".stream_lock)]. Your current transaction isolation level is SERIALIZABLE. This error occurred as the backup scheduler waited in the background for the primary scheduler to fail. Frequent lock timeouts are expected when the default lock timeout is low and the primary scheduler is still active. The background Kafka scheduler now discards these frequent timeout errors and continues to wait in the background until it sees that the primary Kafka scheduler has failed. |
VER-54508 |
Optimizer |
If a table has a large number of projections, queries on that table may execute slower than expected. Performance of these queries has been improved. |
VER-54664 |
Optimizer |
Previously, querying a table while some partitions of it are being dropped or swapped could result in the query failing with a "DDL interfered" error. Now, Vertica retries these queries automatically (up to three times by default) before the query fails. |
VER-55251 |
Optimizer |
Set operations such as UNION returned an error that NULL was incompatible with other data types. These operations now handle NULL as compatible with all data types, without requiring explicit casting. |
VER-54670 |
Optimizer - Statistics and Histogram |
Individual nodes occasionally failed due to excessive memory usage by ANALYZE_ROW_COUNT, which was refactored to address memory issues related to issue VER-50287. This work was extended towards resolution of VER-54670. |
VER-55051 |
Optimizer - Statistics and Histogram |
In earlier versions, ANALYZE_ROW_COUNT was reported to incur excessive memory usage, which adversely affected database performance. ANALYZE_ROW_COUNT was refactored so that execution is divided into multiple chunks, each analyzing a different subset of tables. A custom allocator was also implemented to allocate and deallocate memory more efficiently. Collectively, these changes have improved the function's overall performance. |
VER-54960 |
Recovery |
The initial phase of recovery by table is no longer slow in databases with thousands of tables. This issue also briefly affected other concurrent cluster operations. |
VER-55293 |
Hadoop, SAL |
The locking mechanism shared between Vertica and libhdfs++ did not protect against certain race conditions if there was an error while accessing HDFS. |
VER-54499 |
Security |
The issue where executing REVOKE on a role ran for several minutes, rather than completing in well under a second as before, has been resolved. |
VER-54506 |
UI - Management Console |
In the Queries detail chart in Management Console, the elapsed time column sorted rows incorrectly. |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-54401 | Admin Tools | This fix resolves an issue that occurred when setting parameter MaxParsedQuerySizeMB. |
VER-53749 |
AP-Advanced |
This fix improves the performance of APPROXIMATE_MEDIAN and APPROXIMATE_PERCENTILE when using GROUP BY. |
VER-54282 |
Backup/DR |
Restoring objects containing grouped ROSes previously failed. |
VER-54398 |
DDL - Projection |
With this fix, dropping a primary key constraint also drops the supporting key constraint projections, if they are no longer necessary. |
VER-54279 |
Execution Engine |
Previously, if ephemeral or execute nodes were present in your cluster, Vertica incorrectly created many unnecessary key constraint projections. |
VER-54661 |
Execution Engine |
The datatype of the column VIEW_DEFINITION in system table v_catalog.views has been changed from varchar(65000) to varchar(32000). |
VER-54291 |
FlexTable |
Moving flex tables did not also move their corresponding key projections and views. |
VER-54619 |
Kafka Integration |
The Kafka data source did not correctly handle an error that occurred during loading or parsing by the KafkaAVROParser. This sometimes caused the Vertica process to fail. With this fix, the load fails appropriately if the error occurs. |
VER-54673 |
Kafka Integration |
The Kafka scheduler did not work if the database had a non-default locale. The scheduler now explicitly sets the locale to the Vertica default locale. |
VER-54704 |
Kafka Integration |
The Kafka scheduler logger incorrectly attempted to insert log messages into the kafka_events system table under the default config schema, even if a user specified a different schema. |
VER-54277 |
Optimizer |
Queries with multiple distinct aggregates grouped on a subquery containing another subquery sometimes caused the server to fail. |
VER-54522 |
Catalog Engine, ResourceManager |
During recovery, if acquiring a global catalog lock failed on some nodes, the nodes sometimes panicked and aborted recovery. |
VER-54591 | Security | This fix resolves an issue that sometimes caused the database to fail during LDAP authentication. |
VER-54677 |
SDK |
A performance issue with variable-length types used as intermediate values in user-defined aggregates has been fixed. |
VER-54176 |
UI - Management Console |
In Management Console, non-superusers were unable to upload queries to Database Designer. In addition to this fix, administrators can now adjust the number of queries that a non-superuser can include in a file uploaded to a design. The default limit is 50. To change the limit, open the file /opt/vconsole/config/console.properties and edit the following parameter: dbd.nonadminuser.fileupload.queries.limit |
VER-54286 |
Optimizer, Usability |
CREATE TABLE AS statements that included non-stable functions occasionally failed with the following error: "Cannot use meta function or non-deterministic function in SEGMENTED BY expression" |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-53725 |
Backup/DR |
This fix removes an incorrect warning message during vbr replication. |
VER-53798 |
Backup/DR |
When backing up to shared storage, it was possible for vbr to treat the same actual location on a shared volume as different backup locations, which could result in backup failure when trying to delete the same file multiple times. |
VER-54169 |
Client Drivers - JDBC |
If JDBC received multiple warnings while running routable queries, the JDBC client sometimes hung. |
VER-54289 |
DDL - Projection |
Vertica prefixed the table name in projections created as part of a CREATE TABLE AS statement. Vertica no longer does so if the target schema of the statement is different from the schema of the source table. |
VER-53744 |
Hadoop |
Including a predicate when querying ORC files in an evolved schema caused the database to fail. |
VER-53745 |
Installation Program |
In Vertica 7.2.3-19 and earlier, performing an object-level backup as the first backup following an upgrade could cause database failure. After an upgrade to version 7.2.3-20 or later, performing an object-level backup as the first backup could instead produce the error "ERROR 2082: A Mergeout operation is already in progress on projection" if a concurrent mergeout job was running on storage belonging to a projection being backed up; database operation was unaffected. Performing a full backup as the first backup following an upgrade to 7.2.3-20 or later now updates pre-existing storage IDs and avoids these errors entirely. |
VER-54186 |
UI - Management Console |
In rare circumstances, Management Console did not load the Database Designer page and instead displayed a "String index out of range" error. |
This hotfix addresses the issues below.
Issue |
Component |
Description |
VER-53212 | AP-Advanced | The function KMEANS failed to generate an output view in custom schemas. |
VER-53380 | AP-Geospatial | This hotfix resolves an issue in the function ST_Distance that caused a segmentation fault. |
VER-53899 |
AP-Geospatial |
ST_Distance and STV_DWithin sometimes encountered errors on large tables that caused the server to fail. This hotfix runs ST_Distance and STV_DWithin in fenced mode to prevent server failure. |
VER-53781 |
Data Removal - Delete, Purge, Partitioning |
If Vertica dropped a table after taking an O lock during a partition swapping operation, a segmentation fault occurred when Vertica re-checked the table. |
VER-53791 |
Database Designer Core |
Running Database Designer on tables that had multiple enabled constraints on the same sets of columns sometimes produced inappropriate projection layouts. |
VER-53837 |
Error Handling |
This fix resolves an issue in which the server sometimes silently exited after a log rotate. |
VER-53714 |
Data load / COPY, Hadoop |
Previously, predicate pushdown in the Parquet and ORC parsers did not support predicates that used IN / NOT IN clauses. This fix adds that support. |
VER-53563 |
Kafka Integration |
Due to a race condition in the librdkafka library, a partially destroyed topic object was used when loading topic metadata and caused Vertica to fail. This fix moves a read-write lock to prevent this condition from occurring. |
VER-53793 |
Recovery |
If Vertica attempted to recover a table with out-of-date projections after a partition swapping operation, Vertica incorrectly dropped ROSes in the swapped partition, causing projection row count mismatch. |
Vertica has introduced hotfix 8.1.0-1 to address a security vulnerability. Vertica strongly recommends you upgrade to this version as soon as possible.
This hotfix addresses the issue below.
Issue |
Component |
Description |
VER-53608 |
Security, Virtual Appliance |
A critical security vulnerability has been found in Vertica. An unauthenticated entity with the ability to establish a connection to the client socket can reset the password of a known database user. This security vulnerability has been resolved. |
To see a complete list of additions and changes introduced in this release, refer to the Vertica 8.1.x New Features Guide.
Issue |
Component |
Description |
VER-45519 | Data Removal - Delete, Purge, Partitioning |
The issue where running some partition management functions failed with the error "A Moveout operation is already in progress on projection" has been resolved. Instead of returning an error, these functions now terminate the conflicting Tuple Mover session and retry the operation. This issue was limited to the following partition management functions: DROP_PARTITIONS, SWAP_PARTITIONS_BETWEEN_TABLES, MOVE_PARTITIONS_TO_TABLE, and COPY_PARTITIONS_TO_TABLE. Note: This issue was also found and resolved in COPY_TABLE. |
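For illustration, a typical call to one of these functions; the table names are hypothetical and the table is assumed to be partitioned by year:

    SELECT MOVE_PARTITIONS_TO_TABLE('store.sales', '2016', '2016', 'store.sales_history');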
VER-48836 |
Admin Tools |
In rare cases, admintools timed out when adding new nodes to a cluster. |
VER-44405 |
Backup/DR |
The table dc_backups now includes the column snapshot_type. This column indicates whether reported backup activity was a backup or a copycluster. |
VER-46187 |
Backup/DR |
Vertica sometimes reassigned ownership of objects incorrectly when restoring or replicating tables in CreateOrReplace mode. |
VER-50458 |
Backup/DR |
When backing up to shared storage, it was possible for vbr to treat the same actual location on a shared volume as different backup locations, which could cause corrupt backup files. |
VER-50734 |
Backup/DR |
Object replication and restore sometimes failed if the target database had storage locations, UDTs or storage policies with the same object IDs as the source database objects. |
VER-50799 |
Backup/DR |
You can query tables without their fully qualified names (that is, schema.table) by setting the search_path. Such queries sometimes failed if a referenced schema was replicated concurrently. |
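For illustration, with a hypothetical schema store containing a table sales (names are examples only):

    SET SEARCH_PATH TO store, public;
    SELECT COUNT(*) FROM sales;  -- resolves to store.sales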
VER-51411 |
Backup/DR |
Vertica 7.2.3-15 and later failed to clean up a temporary directory created during backup. |
VER-34605 |
Catalog Engine |
Previously, the view_definition column of the VIEWS system table would truncate definitions that exceeded the 8192 character limit. The limit for the view_definition column has been increased to 65,000 characters. |
VER-51615 |
Client Drivers - ADO |
When opening a connection on Windows Server 2012 R2, an error caused a failed connection if permissions were set using domain management. |
VER-32766 |
Client Drivers - JDBC, Client Drivers - ODBC |
Queries made through JDBC and ODBC incorrectly reported EOL errors for queries containing nested comments. The error did not occur with queries made from vsql. |
VER-55336 | Client Drivers - Misc |
The Vertica client driver does not support SSRS in SQL Server 2016. SSRS stops working or fails to launch if you install the Microsoft Connectivity Pack. For example, on a system with SQL Server 2016: 1. You download the Vertica Client Drivers and Tools for Windows. 2. During installation, you check (or fail to uncheck) the box for the Microsoft Connectivity Pack. Result: If SSRS was active, it stops working; otherwise, SSRS fails when you launch the component. Workaround: Do not check the box for the Microsoft Connectivity Pack when you install or reinstall the Vertica Client Drivers and Tools for Windows; uncheck the box if it is already checked. Note that when the box is unchecked, the drivers and tools for SSAS and SSIS are also not installed. A second workaround applies if SSRS is running on a system with both SQL Server 2016 and another supported SQL Server version. In this case, you can ensure that the previous version of SQL Server works with the Microsoft Connectivity Pack while ensuring that SSRS in SQL Server 2016 is not negatively affected by the driver installation. The following example assumes SQL Server 2016 and SQL Server 2014 are present on the same system, and that you want to install or reinstall the Vertica Client Drivers and Tools for Windows: 1. Before installing or reinstalling, locate the files rsreportserver.config and rssrvpolicy.config in the ReportServer folder of the running instance of SSRS 2016. 2. Back up the configuration files. 3. Install or reinstall the Vertica Client Drivers and Tools for Windows. In this scenario, check the box for the Microsoft Connectivity Pack. 4. For SQL Server 2016 only, replace the configuration files with your backup copies. Result: This workaround allows SSRS to work with SQL Server 2014 but excludes SSRS from use with SQL Server 2016. It also preserves Vertica support for SSAS and SSIS in SQL Server 2016. |
VER-49213 |
Communications/messaging |
In the past, when adding multiple nodes to a cluster, Vertica invited them to the cluster one at a time. Now Vertica invites all available nodes at once. |
VER-50832 |
DBD |
Running Database Designer to optimize a table with a large number of projections while simultaneously dropping the table could cause a node to fail with a fatal SIGSEGV signal. |
VER-50072 |
DDL |
Occasionally, Vertica would issue an error if you tried to drop a partition on a table that included a text index. |
VER-51049 |
DDL |
A CREATE TABLE statement with a LIKE clause INCLUDING PROJECTIONS would roll back if the source table included table-qualified field names (table.field) within a check constraint predicate. |
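For illustration, the statement shape involved, with hypothetical tables; the table-qualified column reference t1.x in the check constraint is the trigger described above:

    CREATE TABLE t1 (x INT, CONSTRAINT x_positive CHECK (t1.x > 0));
    CREATE TABLE t2 LIKE t1 INCLUDING PROJECTIONS;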
VER-50889 |
DDL, Data Removal - Delete, Purge, Partitioning |
Querying a table while performing data changing operations on it could produce incorrect results, due to inconsistent snapshots across nodes. |
VER-51233 |
Data Removal - Delete, Purge, Partitioning |
Vertica incorrectly summarized the data from the v_monitor.PARTITIONS view and the v_monitor.PARTITION_COLUMNS view due to an incorrect join occurring with delete vectors. |
VER-42125 |
Data load / COPY |
When using apportioned load, Vertica could read one row incorrectly if a portion ended with an escape character and the next began with a record terminator. Built-in parsers now account for this case. |
VER-51285 |
Data load / COPY |
Non-dbadmin users could not create a rejected data table using COPY if the path to the rejected records file contained a symbolic link. |
VER-52215 |
Data load / COPY |
When loading data with multi-character record terminators while using apportioned load, Vertica could drop up to one record per portion. A row was lost from a portion if the end of the portion landed on any character of the record terminator other than the last one. |
VER-52241 |
Database Designer Core |
Running DESIGNER_DESIGN_PROJECTION_ENCODINGS on projections sometimes caused Vertica to incorrectly remove comments on those projections. |
VER-44659 |
Diagnostics |
Running scrutinize left behind zombie SSH processes. |
VER-49335 |
Diagnostics |
Running scrutinize left behind zombie SSH processes. |
VER-50333 |
Execution Engine |
During long-running sessions, Vertica sometimes held onto query level resources longer than necessary. |
VER-50658 |
Data load / COPY, Execution Engine |
INSERT queries with joins running as part of ETL took longer to complete in Vertica 7.2.3. |
VER-49641 | Execution Engine |
Exporting data from one Vertica database to another using the EXPORT TO VERTICA statement sometimes failed if the destination table included an enabled primary key or unique constraint. Vertica produced an error message similar to the following: “Client Server protocol error. Message type 'ServerInfo' is invalid in state 'CommandDone'.” |
VER-50756 |
Execution Engine |
DataReader's getVal method occasionally caused SIGSEGV failures due to integer overflows. |
VER-51153 |
Execution Engine |
If a commit failed while AUTOCOMMIT was enabled, in some cases Vertica did not clean up constraint-enforcement objects. This could cause node failure the next time a query ran successfully. |
VER-52519 |
Execution Engine |
Inserting into a Top-K projection sometimes failed if the projection contained long varchar columns. |
VER-50847 |
FlexTable |
In some cases, flex map functions returned values larger than allowed by their return types. This caused the server to fail. |
VER-45503 |
Hadoop |
Previously, Vertica did not check the size of a file immediately after writing it to HDFS, which sometimes caused data consistency issues. With this fix, Vertica produces an error and rolls back the transaction if a file size mismatch occurs. |
VER-49168 |
Hadoop |
Parquet files can contain decimal values in INT32 and INT64 columns. Previously, Vertica was not able to read these values and returned an error. Now, Vertica reads them as decimals if the columns in the external tables are defined as decimals. |
VER-52570 |
DDL, ILM |
The move_partitions_to_table function could incorrectly issue the error “A DDL statement interfered with this statement” if the source table included an enabled primary key constraint and schema-qualified table names were passed to the function. |
VER-51695 |
Kafka Integration |
An incorrect assertion within the UDFilter occasionally generated a false error, and in rare cases could cause the database to fail. |
VER-48055 |
Optimizer |
Previously, running the command "CONNECT TO VERTICA ... FROM VERTICA" would sometimes return an error: Internal Optimizer Error (11). This issue has been resolved in Release 8.1. |
VER-48672 |
Execution Engine, Optimizer |
Certain NOT IN anti-join queries failed to spill and threw a run-time error. |
VER-48823 |
Front end - Parse & Analyze, Optimizer |
Vertica meta-function export_objects now supports table names with embedded commas. |
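For illustration, exporting the DDL of a hypothetical table whose name contains an embedded comma (schema and table names are examples only; an empty destination sends output to standard output):

    SELECT EXPORT_OBJECTS('', 'store."odd,name"');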
VER-49768 |
Optimizer |
If any nodes were down, queries using predicates on live aggregate projections sometimes failed with an error. |
VER-50398 |
Optimizer |
Previously, explain plans could return very inaccurate row estimates for queries that returned unique or nearly unique results, due to how selectivity was estimated. This issue is resolved in Release 8.1. |
VER-50547 |
Optimizer |
Previously, Vertica always pruned aggregate columns in a subquery if the subquery had no GROUP BY expression, which could produce incorrect results. This issue is fixed in Release 8.1. |
VER-42209 | Execution Engine, Optimizer | Each WHEN MATCHED and WHEN NOT MATCHED clause in a MERGE statement can now optionally specify an update filter and insert filter, respectively. MERGE supports Oracle-style syntax for specifying these filters. |
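A sketch of the filter syntax, using hypothetical tables target and src (column names are examples only):

    MERGE INTO target t USING src s ON t.id = s.id
    WHEN MATCHED AND s.qty > 0 THEN UPDATE SET qty = s.qty
    WHEN NOT MATCHED AND s.qty IS NOT NULL THEN INSERT (id, qty) VALUES (s.id, s.qty);

Here "AND s.qty > 0" is the update filter and "AND s.qty IS NOT NULL" is the insert filter.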
VER-50681 |
Execution Engine, Optimizer |
The database sometimes failed when Vertica set the type modifier incorrectly in certain CASE statements with SQL macros. |
VER-50808 |
Execution Engine, Optimizer |
Some queries that contained CASE expressions and ran in a resource pool with a defined runtime cap and cascading enabled caused node failure. |
VER-50993 |
Optimizer |
Under certain circumstances involving complex cyclic joins, Vertica encountered an internal optimizer error. |
VER-51232 |
Basics, Optimizer |
On upgrading to Vertica 8.0.1-0, Vertica merged an anchor table's unsegmented projections under a single identifier, using a projection's base name to identify duplicates. In some cases, this approach caused changes to the database catalog that prevented the database from restarting. Vertica 8.0.1-1 resolves this issue. Now, on upgrade, Vertica automatically merges unsegmented projections of the same anchor table that have identical properties. These properties include, but are not limited to: * Sort order * Number of columns and their order * Encodings * Creation epochs Vertica retains all out-of-date projections, with one exception: if an out-of-date projection is duplicated by another projection that is up to date, Vertica drops the out-of-date projection. |
VER-51184 |
Optimizer - Query Rewrite |
Aggregating CASE expressions of numeric type sometimes caused an internal server error. |
VER-50315 |
Recovery |
When recovering to a specific epoch, Vertica no longer sets the current checkpoint epoch to zero for projections created after the specified epoch. |
VER-51859 |
Refresh |
Previously, rebalance hung if a buddy projection group contained both balanced projections and projections that needed to be rebalanced. Additionally, Vertica generated an internal error if it attempted to perform multiple rebalance tasks on the same projection simultaneously. |
VER-48469 |
ResourceManager |
In some circumstances, Vertica continually allocated unused virtual memory, which slowed performance. This issue has been resolved. |
VER-39685 |
Data load / COPY, SDK |
The API did not expose a column's NOT NULL requirement to user-defined parsers, so parsers could write null values into such columns. The API now makes this column information available to parsers, and parsers are responsible for complying. |
VER-48254 |
Security |
The issue where a user assigned the SYSMONITOR role could not view other users' sessions in the SESSIONS table has been resolved. |
VER-49688 |
Security |
The issue where users' passwords appeared in plain text in dc_requests_table has been resolved. Passwords are now masked. |
VER-50900 |
Security |
The issue where a second user created with the DBADMIN role did not have the same privileges as the original DBADMIN user has been resolved. A new user assigned the DBADMIN role now has the same privileges as the default DBADMIN user. |
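For illustration only, granting the role to a hypothetical user jane (the user name is an example, not taken from the release notes):

    CREATE USER jane;
    GRANT dbadmin TO jane;
    ALTER USER jane DEFAULT ROLE dbadmin;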
VER-51644 | Sessions | During long-running sessions, Vertica sometimes held onto query level resources longer than necessary. |
VER-8818 |
Tuple Mover |
Querying a table while performing data changing operations on it could produce incorrect results, due to inconsistent snapshots across nodes. |
VER-35943 |
UI - Management Console |
Management Console encountered errors if the operating system language was set to Japanese. |
VER-49562 |
Admin Tools, UI - Management Console |
With this fix, Management Console accepts 0-100 characters for database passwords. |
VER-49813 |
UI - Management Console |
Management Console encountered JavaScript errors on the Overview page if the browser language was set to Japanese. |
VER-50761 |
UI - Management Console |
Disabling or enabling a network interface controller sometimes caused errors in Management Console. Such errors included displaying duplicate cluster information, and an incorrect error message stating that the database was running an older version of Vertica. |
VER-50768 | UI - Management Console | Permission checks that took more than 1 minute during Management Console login caused the login page to hang. |
VER-51092 |
UI - Management Console |
During email configuration, Management Console previously could not accept empty SMTP credential fields, even if no credentials were necessary. |
VER-51388 | UI - Management Console | The Vertica Agent Rest API returned an incorrect Vertica version number. |
VER-51910 |
UI - Management Console |
When a Management Console user who was mapped to a non-dbadmin Vertica user viewed MC charts, MC did not properly close its connections to Vertica. |
VER-51949 |
Execution Engine, Optimizer |
A high-concurrency throughput regression occurred between versions 8.0 and 8.0SP1. |
Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.
Backup operations are not currently supported on Vertica implementations using HDFS storage locations.
The HCatalog Connector does not currently support Kerberos authentication; currently, only the HDFS Connector supports Kerberos authentication.
Issue |
Component |
Description |
VER-47840, VER-47885, VER-47886 | AP-Advanced |
Upgrading from 7.2.3 to Vertica 8.0 or above can cause naming conflicts due to new UDx names introduced for machine learning in Vertica 8.0. Two types of conflicts can occur: a. If a 7.2.3 UDx name conflicts with a UDx introduced in the Vertica 8.0 Machine Learning Package, the Machine Learning Package installation fails when you start the database for the first time after upgrading Vertica. In this case, you can find the reason for the failure and the name-conflict details in the AdminTools log file. b. If a 7.2.3 UDx name conflicts with a Vertica machine learning function introduced in Vertica 8.0, no failure occurs. However, after upgrading, the Vertica 7.2.3 UDxs are not available for execution due to the naming conflict. Vertica 8.0 introduced the following functions: • KMEANS • LINEAR_REG • LOGISTIC_REG • BALANCE • DELETE_MODEL • SUMMARIZE_MODEL • RENAME_MODEL • NORMALIZE Workaround: In both cases, rename the UDx causing the conflict and retry the installation. |
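For illustration only, a rename of this kind might look like the following; the function name balance and its FLOAT signature are hypothetical, not taken from the release notes:

    ALTER FUNCTION public.balance(FLOAT) RENAME TO balance_legacy;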
VER-43471 | Backup/DR |
When restoring a subset of objects from a backup, the vbr utility identifies objects on a case-sensitive basis. Workaround: Be sure to use the correct case when referring to the objects that you want to restore. |
VER-43040 | Client Drivers - ODBC | ENABLE_WITH_CLAUSE_MATERIALIZATION is not supported for prepared statements that use a WITH clause. |
VER-55384 | Client Drivers - ODBC |
After installing the Vertica ODBC version 8.1.1 client on a Windows 7 pre-SP1 operating system, an attempt to connect using the ODBC DSN fails with the message, “The specified module could not be found. (Vertica, C:\Program Files\Vertica Systems\ODBC64\lib\vertica_8.1_odbc_3.5.dll)”. Workaround: To resolve this issue, Vertica recommends you upgrade to Windows 7 Service Pack 1. |
VER-42714 | Execution Engine |
Be aware that if Vertica cancels a query that generated an error, Vertica sometimes additionally generates the error "Send: Connection not open" during the cancellation, even though that is not the cause of the original error. |
VER-55380 | Hadoop |
When using the HCatalog Connector, non-superusers cannot read from an HCatalog schema even when they have been granted USAGE. Workaround: Set the configuration parameter HCatalogConnectorUseHiveServer2 to 0 to use WebHCat instead of HiveServer2. |
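For illustration, assuming a database named mydb, the parameter can be set with the ALTER DATABASE form used elsewhere in these notes:

    ALTER DATABASE mydb SET HCatalogConnectorUseHiveServer2 = 0;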
VER-55409 | Hadoop |
In rare circumstances, if multiple users attempt to authenticate concurrently with HDFS, an unnecessary assert could cause a core dump. Workaround: Wait until an authentication process completes before beginning another. |
VER-42282 | Kafka Integration |
In some cases, Vertica fails while loading data from Kafka. The failure is often preceded by an error indicating a discrepancy between the bytes read by the DataBuffer and LengthBuffer. Workaround: Disable cooperative parsing with the following command: ALTER DATABASE <dbname> SET EnableCooperativeParse = 0. Disabling cooperative parsing can slow database performance but decreases the likelihood of a failure. |
VER-48062 | Security | When determining valid ciphers to set on a FIPS-enabled system, be aware that SELECT SET_CONFIG_PARAMETER('EnabledCipherSuites','....'); can accept invalid values. For example, it could accept a cipher suite not allowed by FIPS. However, invalid cipher suites are not present in the FIPS-enabled system, so their algorithms are not used. |
The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
The information contained herein is subject to change without notice.
Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
© Copyright 2006 - 2017 Hewlett-Packard Development Company, L.P.
Adobe® is a trademark of Adobe Systems Incorporated.
Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.
UNIX® is a registered trademark of The Open Group.
Send documentation feedback to HPE