10.0.x Release Notes

Vertica
Software Version: 10.0.x

 

IMPORTANT: Vertica for SQL on Hadoop Storage Limit

Vertica for SQL on Hadoop is licensed per node, for an unlimited number of central processing units (CPUs) and an unlimited number of users, and is intended for deployment on Hadoop nodes. The license includes 1 TB of Vertica ROS-formatted data on HDFS.

Enforcement of this 1 TB ROS limit is currently only contractual, but technical enforcement begins in Vertica 10.1. Starting with that release, Vertica for SQL on Hadoop will not let you load more than 1 TB of ROS data into your Vertica database. If you were unaware of this limitation and already have more than 1 TB of ROS data in your database, make any adjustments necessary to stay below the limit, or contact our sales team to explore other licensing options.

IMPORTANT: Before Upgrading: Identify and Remove Unsupported Projections

With version 9.2, Vertica has removed support for pre-join and range segmentation projections. If a table's only superprojection is one of these projection types, the projection is regarded as unsafe.

Before upgrading to 9.2 or higher, you are strongly urged to check your database for unsupported projections. If the upgrade encounters these projections, it is liable to fail, and you must then revert to the previous installation.

Solution: Run the pre-upgrade script

Vertica provides a pre-upgrade script that examines your current database and writes its analysis and recommendations to standard output. The script identifies and lists any unsupported projections. If the script finds projection buddies with different SELECT and ORDER BY clauses, it generates a deploy script. Run the deploy script to remedy those projections so they comply with system K-safety.

https://www.vertica.com/pre-upgrade-script/

For more information, see Fixing Unsafe Buddy Projections in the Vertica documentation.

Updated: 10/7/2020

About Vertica Release Notes

What's New in Vertica 10.0.1

What's Deprecated in Vertica 10.0.1

Vertica 10.0.1-2: Resolved Issues

Vertica 10.0.1-1: Resolved Issues

Vertica 10.0.1: Resolved Issues

Vertica 10.0.1: Known Issues

What's New in Vertica 10.0

What's Deprecated in Vertica 10.0.0

Vertica 10.0.0-3: Resolved Issues

Vertica 10.0.0-2: Resolved Issues

Vertica 10.0.0-1: Resolved Issues

Vertica 10.0.0: Resolved Issues

Vertica 10.0: Known Issues

About Vertica Release Notes

The Release Notes contain the latest information on new features, changes, fixes, and known issues in Vertica 10.0.x.

They also contain information about issues resolved in each hotfix.

Downloading Major and Minor Releases, and Service Packs

The Premium Edition of Vertica is available for download at https://support.microfocus.com/downloads/swgrp.html.

The Community Edition of Vertica is available for download at https://www.vertica.com/download/vertica/community-edition.

The documentation is available at https://www.vertica.com/docs/10.0.x/HTML/index.htm.

Downloading Hotfixes

Hotfixes are available to Premium Edition customers only. Contact Vertica support for information on downloading hotfixes.

What's New in Vertica 10.0.1

Take a look at the Vertica 10.0.1 New Features Guide for a complete list of additions and changes introduced in this release.

admintools

--force Option for command_host

The new --force option tells the database to delete any unrecognized files in the data folders on the local node.

Backup, Restore, Recovery, and Replication

Support for On-Premises S3 Backup in Enterprise Mode

You can now back up an Enterprise Mode database to an on-premises destination that supports the S3 protocol, such as Pure Storage FlashBlades. All vbr tasks for Enterprise Mode databases on AWS S3 are supported.

To back up to or restore from an on-premises S3 destination, you must set some additional environment variables. The vbr configuration file does not change. For more information, see Configuring Backups to and from S3.

Configuration

S3EnableVirtualAddressing

This parameter configures whether to rewrite S3 URLs to use virtual-hosted paths. For example, if you use AWS, the S3 URLs change to bucketname.s3.amazonaws.com instead of s3.amazonaws.com/bucketname. This configuration setting takes effect only when you have specified a value for AWSEndpoint.
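As an illustrative sketch (the endpoint value is a placeholder), you might enable virtual-hosted addressing at the database level:

```sql
-- Set the S3-compatible endpoint (placeholder value);
-- S3EnableVirtualAddressing takes effect only when AWSEndpoint is set.
ALTER DATABASE DEFAULT SET AWSEndpoint = 's3.example.com';

-- Rewrite S3 URLs as bucketname.s3.example.com
-- instead of s3.example.com/bucketname.
ALTER DATABASE DEFAULT SET S3EnableVirtualAddressing = 1;
```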

Data Types

Casting Arrays and Sets

You can now cast collections (ARRAY and SET). Casting a collection casts each element of the collection, following the same rules as for casts of scalar values.

You can perform explicit casts, but not implicit casts, between arrays and sets. When casting an array to a set, Vertica first casts each element and then sorts the set and removes duplicates.

Casting from a smaller data type to a larger one can cause a collection value to exceed the column limit, making the operation fail. For example, if you cast an array of INT to an array of VARCHAR(50), each element takes more space and thus the whole array takes more space.
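A short sketch of these rules (literal values chosen for illustration):

```sql
-- Explicit cast from ARRAY to SET: each element is cast, then the
-- result is sorted and duplicates are removed.
SELECT ARRAY[3, 1, 2, 2]::SET[INT];

-- Widening each element (INT to VARCHAR(50)) enlarges the whole
-- collection, which can exceed the target column's size limit.
SELECT ARRAY[1, 2, 3]::ARRAY[VARCHAR(50)];
```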

Comparing and Ordering Collections

You can now compare collections (arrays and sets) using the comparison operators =, <>, <, >, <=, and >=, and queries can compare collection values.

See the "Functions and Operators" section on the ARRAY reference page for information on how Vertica orders collections. (The same information is also on the SET reference page.)

Using Structs in Subqueries and Views

You can now use structs and fields from structs, represented by the ROW data type, in views and subqueries.

Eon Mode

Subcluster Support

The meta-function CLEAR_DATA_DEPOT can now clear data from the depot of a single subcluster.
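For example, assuming a subcluster named analytics (the name and exact argument form are illustrative; see the meta-function's reference page for the full signature):

```sql
-- Clear depot data for a single subcluster.
SELECT CLEAR_DATA_DEPOT('analytics');
```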

Subcluster Support for Large Cluster

The large cluster feature addresses the limitations in the Spread service that Vertica uses for cluster-wide broadcast messages. When large cluster is enabled, a subset of nodes, called control nodes, connect to the Spread service. Other nodes depend on control nodes to receive and send these broadcast messages.

In previous versions of Vertica, a control node and the nodes that depend on it were not guaranteed to be within the same subcluster. This cross-subcluster dependency could result in nodes in other subclusters failing when you shut down the subcluster containing their control node.

Now, Vertica always assigns nodes to a control node within their subcluster. When large cluster is enabled, every subcluster must have at least one control node. In Eon Mode, you now set the number of control nodes on a per-subcluster basis, rather than setting a single value for the entire cluster. The control-node functions SET_CONTROL_SET_SIZE and REALIGN_CONTROL_NODES now take a mandatory subcluster argument in Eon Mode.

Note: These changes do not alter how large cluster works when running in Enterprise Mode. You still set a single cluster-wide value for the number of control nodes in your database.

See Large Cluster for more information.
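A sketch of the per-subcluster call forms in Eon Mode (the subcluster name and control-node count are placeholders):

```sql
-- Request two control nodes in this subcluster.
SELECT SET_CONTROL_SET_SIZE('analytics', 2);

-- Reassign the subcluster's nodes to its control nodes.
SELECT REALIGN_CONTROL_NODES('analytics');
```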

Important: Upgrading from a prior version of Vertica to version 10.0.1 or beyond does not change the existing layout of control nodes and their dependents in your database. If you have an Eon Mode database with multiple subclusters and large cluster enabled, your existing control node assignments may cross subcluster boundaries. Vertica highly recommends removing these cross-subcluster dependencies: realign the control nodes and then restart the subclusters after you upgrade. See Realigning Control Nodes and Reloading Spread for instructions.

Node Subscriptions to Shards Are More Deterministic

In previous versions of Vertica, the primary subscriber for a shard was chosen by several different mechanisms, which eventually could result in a random set of shard assignments in a subcluster. Starting in 10.0.1, Vertica assigns shard subscriptions to nodes using a single, simplified process.

As part of this process, one subscriber to each shard is designated as the participating primary node. This node is the only one that reads from and writes to communal storage for the shard. Other nodes in the subcluster that subscribe to the same shard get their data from the participating primary node via peer-to-peer transfer. A new column named IS_PARTICIPATING_PRIMARY in the V_CATALOG.NODE_SUBSCRIPTIONS system table indicates whether a node is the participating primary for a shard.
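For example, to see which node handles communal storage for each shard (column names other than IS_PARTICIPATING_PRIMARY are assumed from the table's existing schema):

```sql
-- One row per node/shard subscription; the participating primary
-- is the node that reads from and writes to communal storage.
SELECT node_name, shard_name, is_participating_primary
FROM v_catalog.node_subscriptions
ORDER BY shard_name, node_name;
```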

Subcluster-level Resource Pools

You can now create and manage resource pools at the subcluster level. In the subcluster-specific resource pools, you can override MEMORYSIZE, MAXMEMORYSIZE, and MAXQUERYMEMORYSIZE settings for built-in global resource pools for that subcluster. A new system table named SUBCLUSTER_RESOURCE_POOL_OVERRIDES lets you examine the overrides applied to global resource pools for a subcluster. Additionally, the RESOURCE_POOLS system table has two new columns named SUBCLUSTER_OID and SUBCLUSTER_NAME. These columns are populated when a resource pool belongs to a specific subcluster.

See Managing Workloads for more information.
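A sketch of a subcluster-level override (pool, subcluster, and size values are placeholders):

```sql
-- Override the GENERAL pool's memory cap for one subcluster only.
ALTER RESOURCE POOL general FOR SUBCLUSTER analytics MAXMEMORYSIZE '8G';

-- Review the overrides currently in effect.
SELECT * FROM subcluster_resource_pool_overrides;
```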

Loading Data

Delimited Parser Supports Collections

The default (DELIMITED) parser now supports one-dimensional collections (arrays or sets) of scalar types. Several new COPY options allow customization of delimiters and other special characters. See Loading Delimited Data and the DELIMITED (Parser) reference page.
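A sketch of loading an array column with the DELIMITED parser (table definition and option values are placeholders; see the reference page for the full option list):

```sql
CREATE TABLE readings (id INT, temps ARRAY[FLOAT]);

-- Accept input such as: 1|[20.1,20.4,19.9]
COPY readings FROM STDIN
    COLLECTIONOPEN '['
    COLLECTIONCLOSE ']'
    COLLECTIONDELIMITER ',';
```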

Security and Authentication

CONNECT TO Vertica: SHA-512 Support

The CONNECT TO VERTICA function now supports SHA-512 as an authentication method.

In-Database Cryptographic Key and Certificate Management

You can now generate or import cryptographic keys and certificates with the CREATE KEY and CREATE CERTIFICATE statements.

For examples, see Generating TLS Certificates and Keys.
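As a hedged sketch (names, subject fields, and validity period are placeholders; consult the statement reference for the full syntax):

```sql
-- Generate a 2048-bit RSA private key inside the database.
CREATE KEY my_ca_key TYPE 'RSA' LENGTH 2048;

-- Create a self-signed CA certificate that uses that key.
CREATE CA CERTIFICATE my_ca_cert
    SUBJECT '/C=US/ST=MA/L=Cambridge/O=Example/CN=Example Root CA'
    VALID FOR 3650
    KEY my_ca_key;
```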

MIT Kerberos 1.18.2

MIT Kerberos has been upgraded to version 1.18.2.

SQL Functions and Statements

Exporting Arrays to Parquet

EXPORT TO PARQUET can now export one-dimensional arrays. Nested (multi-dimensional) arrays cannot be exported, though you can extract the nested arrays as one-dimensional arrays and then export them.
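For example, assuming a hypothetical table sensor_data with a one-dimensional ARRAY column (the directory path is a placeholder):

```sql
-- Export rows, including the one-dimensional array column, to Parquet files.
EXPORT TO PARQUET (directory = '/scratch/export/sensor_data')
    AS SELECT id, temps FROM sensor_data;
```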

Privilege Management for Subcluster Resource Pools

You can now GRANT and REVOKE USAGE privileges on subcluster resource pools.
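A sketch (pool, subcluster, and user names are placeholders):

```sql
-- Allow a user to run queries under a pool as overridden for one subcluster.
GRANT USAGE ON RESOURCE POOL reports_pool FOR SUBCLUSTER analytics TO report_user;
```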

System Tables

CRYPTOGRAPHIC_KEYS and CERTIFICATES

The new CRYPTOGRAPHIC_KEYS and CERTIFICATES tables contain information about keys and certificates generated or imported through the CREATE KEY and CREATE CERTIFICATE statements.

User-Defined Extensions

C++ UDL Support for Fenced Mode

Vertica user-defined load functions now support fenced mode for source, filter, and parser functions created in C++. Fenced mode allows you to run your UDx code outside of the main Vertica process. This protects the main Vertica process from any issues or crashes in the UDx code that might result in system-wide problems, including database failure.
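A sketch of creating a fenced C++ parser (library path and factory name are placeholders):

```sql
-- Load the compiled UDx library.
CREATE LIBRARY csv_lib AS '/home/dbadmin/CsvParser.so';

-- FENCED runs the parser outside the main Vertica process.
CREATE PARSER fenced_csv AS LANGUAGE 'C++' NAME 'CsvParserFactory' LIBRARY csv_lib FENCED;
```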

What's Deprecated in Vertica 10.0.1

This section describes the two phases Vertica follows to retire Vertica functionality:

Deprecated

The following Vertica functionality was deprecated and will be retired in future versions:

* Specifying segmentation on specific nodes
* DBD meta-function DESIGNER_SET_ANALYZE_CORRELATIONS_MODE (calls to this meta-function return a warning message)
* skip_strong_schema_match parameter to the Parquet parser

Removed

The following functionality is no longer accessible as of the releases noted:

* partition_key column in system tables STRATA and STRATA_STRUCTURES (this column has been removed from both tables)

Vertica 10.0.1-2: Resolved Issues

Release Date: 10/7/2020

This hotfix addresses the issues below.

Issue

Component

Description

VER-74176 Backup/DR Previously, when migrate_enterprise_to_eon found projections that were inconsistent with cluster segmentation, it wrote an incorrect anchor table name to the migration log file, and it listed only one projection, <projection-name>_b0. This issue has been resolved: the error log now lists the names of all tables with problematic projections. You can use these names as arguments to the meta-function REBALANCE_TABLE.
VER-74367 Execution Engine We made a change that adversely affected performance of the date() function. This issue has now been fixed.
VER-74407 Backup/DR The migrate_enterprise_to_eon function migrated grants on external procedures even though the procedures themselves were excluded from migration. This inconsistency caused the migrated database to crash and further attempts at startup to fail. This issue has been resolved: the migration now excludes privileges that are granted on external procedures, and also checks that all migrated objects have consistent dependencies.
VER-74444 Execution Engine Under some circumstances, the size of the METADATA resource pool was calculated incorrectly. This issue has been resolved.

Vertica 10.0.1-1: Resolved Issues

Release Date: 9/23/2020

This hotfix addresses the issues below.

Issue

Component

Description

VER-73869 Storage and Access Layer After reading data from Google Cloud Storage via an external table or a COPY statement, Vertica now closes TCP connections when users close their sessions.
VER-74002 Execution Engine The query_profiles table now reports the correct number of rows processed when querying an external table stored in parquet files on HDFS.
VER-74133 Execution Engine Queries sometimes threw a Sort Order Violation error when join output that was sorted like the target projection was consumed by a single Analytic function that only partially sorted the data as required by the target projection. This issue has been resolved.
VER-74173 Backup/DR The vbr script no longer requires communal storage paths to end with a / character.
VER-74174 Execution Engine Expressions of the form <expression> IN (<list of constants>) were not handled correctly in cases where <expression> was of type LONG VARCHAR or LONG VARBINARY. If the IN list was long, query execution could be slow and use excessive memory. This problem has been resolved.
VER-74184 Admin Tools The MC is now able to properly revive Eon Mode hosts running on AWS.

Vertica 10.0.1: Resolved Issues

Release Date: 8/19/2020

This release addresses the issues below.

Issue

Component

Description

VER-40651 UI - Management Console A security bug in MC has been fixed.
VER-40662 UI - Management Console Starting with 10.0SP1, MC disables the HTTP TRACE method on the embedded Jetty server.
VER-48151 Client Drivers - Misc Before Vertica version 10.0SP1, result set counts were limited to values that would fit in a 32-bit integer. This problem has been corrected.
VER-56614 Documentation Updated DROP_STATISTICS documentation by removing redundant BASE option, which has the same effect as the ALL option.
VER-60017 FlexTable Parser FCSVPARSER used to ignore the NULL parameter passed to a COPY command. The parser now applies the NULL parameter to loaded data, replacing matching values with database NULL.
VER-60547 Admin Tools When upgrading the database, if automatic package updates fail, Vertica will provide a HINT with instructions for updating these packages manually.
VER-61344 Installation Program You can optionally use the OS-provided package manager to handle dependencies when you install the Vertica software package.
VER-65314 Security Kerberos updated to 1.18.2: fixes an issue where Vertica could exceed the limit for opened files when using Kerberos.
VER-65476 Cloud - Amazon Vertica.local_verify scripts were raising errors due to low values of nofile, nproc, pid_max, max_map_count, and PAM module requirement in different Vertica VMs. These values have been modified to eliminate errors for all instances below 128GB memory.
VER-69442 Client Drivers - VSQL, Supported Platforms On RHEL 8.0, VSQL has an additional dependency on the libnsl library. Attempting to use VSQL in Vertica 9.3SP1 or 10.0 without first installing libnsl will fail and produce the following error: Could not connect to database (EOF received)/opt/vertica/bin/vsql: error while loading shared libraries: libnsl.so.1: cannot open shared object file: No such file or directory
VER-69860 Optimizer In some cases, querying certain projections failed to find appropriate partition-level statistics. This issue has been resolved: queries on projections now use partition-level statistics as needed.
VER-70427 Client Drivers - JDBC The JDBC client driver could hang on a batch update when a table lock timeout occurred. This update fixes the issue and standardizes JDBC client driver update behavior in batched and non-batched, prepared and non-prepared situations to follow the JDBC SQL standard.
VER-70591 AP-Geospatial In rare cases, describing indexes with the list_polygons parameter set caused the initiator node to fail. This was caused by incorrectly sizing the geometry or geography output column. This issue has been resolved: users can now resize the shape column through the new shape_column_length parameter, which is specified in bytes.
VER-70804 Execution Engine In some cases, cancelling a query caused subsequent queries in the same transaction to return with an error that they also had been cancelled. This issue has been resolved.
VER-71728 Data load / COPY, SDK The protocol used to run a user-defined parser in fenced mode had a bug which would occasionally cause the parser to process the same data multiple times. This issue has been fixed.
VER-71983 ComplexTypes, Optimizer The presence of directed queries sometimes caused "Unknown Expr" warnings with queries involving arrays and sets. This issue has been resolved.
VER-71997 Backup/DR Vertica backups were failing to delete old restore points when the number of backup objects exceeded 10,000 objects. This issue has been fixed.
VER-72078 Optimizer Meta-function copy_table now copies projection statistics from the source table to the target new table.
VER-72227 Cloud - Amazon, Security Vertica instances can now only access the S3 bucket that the user specified for communal storage.
VER-72242 ComplexTypes The ALTER TABLE ADD COLUMN command previously did not work correctly when adding a column of array type. This issue has been fixed.
VER-72342 Backup/DR The error message "Trying to delete untracked object" has been replaced with the more useful "Trying to delete untracked object: This is likely caused by inconsistent backup metadata. Hint: Running the quick-repair task can resolve the backup metadata inconsistency. Running the full-check task can provide more thorough information and guidance on how to fix this issue."
VER-72443 Tuple Mover A ROS of inactive partitions was excluded from mergeout if it was very large in relation to the size of other ROS's to be merged. Now, a large ROS container of inactive partitions is merged with smaller ROS containers when the following conditions are true:
* Total number of ROS containers for the projection is close to the threshold for ROS pushback.
* Total mergeout size does not exceed the limit for ROS container size.
VER-72553 DDL Vertica returned an error if you renamed an unsegmented non-superprojection and the new name conflicted with an existing projection name in the same schema. This issue has been resolved: now Vertica resolves the conflict by modifying the new projection name.
VER-72577 Data load / COPY Copying Parquet files was very slow when the files were poorly written (for example, generated by Go) and had wide varchar columns. This is now fixed.
VER-72589 Optimizer

If you created projections in earlier (pre-10.0.x) releases with pre-aggregated data (for example, LAPs and TopK projections) and the anchor tables were partitioned with a GROUP BY clause, their ROS containers are liable to be corrupted from various DML and ILM operations. In this case, you must rebuild the projections:

  1. Identify problematic projections by running the meta-function REFRESH on the database.
  2. Export the DDL of these projections with EXPORT_OBJECTS or EXPORT_TABLES.
  3. Drop the projections, then recreate them as originally defined.
  4. Run REFRESH. Vertica rebuilds the projections with new storage containers.
VER-72603 DDL Setting a table column's DEFAULT expression to NULL is equivalent to setting no default expression on that column.
Therefore, if a column's DEFAULT/SET USING expression was already NULL, changing the column's data type with ALTER TABLE...ALTER COLUMN...SET DATA TYPE removes its DEFAULT/SET USING expression.
VER-72613 Security The LDAPLinkSearchTimeout configuration parameter has been restored.
VER-72625 Optimizer The "COMMENT ON" statement no longer causes certain views (all_tables, user_functions, etc.) to show duplicate entries.
VER-72683 Optimizer In some cases, common sub-expressions in the SELECT clause of an INSERT...SELECT statement were not reused, which caused performance degradation. Also, the EXPLAIN-generated query plan occasionally rendered common sub-expressions incorrectly. These issues have been resolved.
VER-72721 Catalog Engine, Execution Engine Dropping a column no longer corrupts partition-level statistics, causing subsequent runs of the ANALYZE_STATISTICS_PARTITION meta-function on the same partition range to fail. If you have existing corrupted partition-level statistics, drop the statistics and run ANALYZE_STATISTICS_PARTITION to recreate them.
VER-72797 Data load / COPY Added support guidance parameter 'UDLMaxDataBufferSize', with a default value of 256 x 1024 x 1024 (256 MB). Its value can be increased to avoid 'insufficient memory' errors during user-defined load.
VER-72832 Execution Engine Querying UUID data types with an IN operator ran significantly slower than an equivalent query using OR. This problem has been resolved.
VER-72852 Kafka Integration, Security The scheduler now supports CA bundles at the UDX and vkconfig level.
VER-72864 Optimizer An INSERT statement with a SELECT clause that computed many complex expressions and returned a small result set sometimes performed slower than running the same SELECT statement independently. This issue has been resolved.
VER-72886 Execution Engine Setting configuration parameter CascadeResourcePoolAlwaysReplan = 1 occasionally caused problems when a timed-out query had already started producing results on the original resource pool. Now, if configuration parameter CascadeResourcePoolAlwaysReplan = 1, and a query times out on the original resource pool and must cascade to a secondary pool, the following behavior applies:

  • The query has not started to produce results: the query cascades to the secondary resource pool and creates another query plan.
  • The query already started to produce results: The query cascades to the secondary resource pool where CascadeResourcePoolAlwaysReplan is ignored, and query output continues.
VER-72930 Monitoring Two new columns have been added to Data Collector system table dc_resource_acquisitions: session_id and request_id.
VER-72943 Optimizer The hint message associated with the DEPOT_FETCH hint referenced a deprecated alias of that hint. The message now references the supported hint name.
VER-72952 DDL - Projection Users without the required table and schema privileges were able to create projections if the CREATE PROJECTION statement used the createType hint with an argument of L, D, or P. This problem has been resolved.
VER-72991 Hadoop Previously, when accessing data on HDFS, Vertica would sometimes connect to the wrong HDFS cluster in cases when several namenodes from different clusters have the same hostname. This is now fixed.
VER-73064 Control Networking When a wide table had many storage containers, adding a column sometimes caused the server to fail. This issue has been resolved.
VER-73080 DDL If you renamed a schema, the change was not propagated to table DEFAULT expressions that specified sequences of that schema. This issue has been resolved: all sequences of a renamed schema are now updated with the new schema name.
VER-73100 DDL - Table When you add a column to a table with ALTER TABLE...ADD COLUMN, Vertica registers an AddDerivedColumnEvent, which it uses during node recovery to avoid rebuilding projections of that table. When you dropped a column from the same table with ALTER TABLE...DROP COLUMN, Vertica updated this event to indicate that the number of table columns changed. At the same time, it also checked whether the dropped column was referenced in the default expression of another column in the table. If so, Vertica returned an error with a hint to advance the AHM before executing the drop. Users who lacked privileges to advance the AHM were blocked from performing the drop operation.

This issue has been resolved: given the same conditions, dropping a column no longer depends on advancing the AHM. Instead, Vertica now sets the AddDerivedColumnEvent attribute number to InvalidAttrNumber. When the next recovery operation detects this setting, it rebuilds the affected projections.
VER-73102 DDL - Table When you renamed a table column, in some instances the DDL of the table projections retained the previous column name as an alias. This problem has been resolved.
VER-73111 Backup/DR Error reporting by MIGRATE_ENTERPRISE_TO_EON on unbundled storage containers has been updated: MIGRATE_ENTERPRISE_TO_EON now returns the names of all tables with projections that store data in unbundled storage containers, and recommends running the meta-function COMPACT_STORAGE() on those tables.
VER-73131 Admin Tools The admintools locking mechanism has been changed to prevent instances of admintools from leaving behind admintools.conf.lock, which could stop secondary nodes from starting.
VER-73209 Data load / COPY Under certain circumstances, rows rejected by the FJsonParser could cause the database to panic. This issue has been fixed.
VER-73224 Optimizer Previously, you could only set configuration parameter PushDownJoinFilterNull at the database level. If this parameter is set to 0, the NOT NULL predicate is not pushed down to the SCAN operator during JOIN operations. Now, this parameter can also be set for the current session.
VER-73304 Tuple Mover If mergeout on a purge request failed because the target table was locked, the Tuple Mover was unable to execute mergeout on later purge requests for the same table. This issue has been resolved.
VER-73314 UI - Management Console An issue with selecting various Vertica database versions for Data Source on MC has been fixed.
VER-73380 Optimizer Users now receive a hint to increase the value of MaxParsedQuerySizeMB if their queries surpass the database's query memory limit.
VER-73516 Catalog Engine Under very rare circumstances, when a node PANIC'ed during commit, subsequent recovery resulted in discrepancies among several buddy projections. This issue has been resolved.

Vertica 10.0.1: Known Issues

Updated: 8/19/20

Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.

Issue

Component

Description

VER-41895 Admin Tools On some systems, admintools fails to parse output while running SSH commands on hosts in the cluster.
VER-48020 Hadoop Canceling a query that involves loading data from ORC or Parquet files can be slow if there are too many files involved in the load.
VER-48041 Admin Tools On some systems, occasionally admintools cannot parse the output it sees while running SSH commands on other hosts in the cluster. The issue is typically transient.
VER-61069 Execution Engine In very rare circumstances, if a vertica process crashes during shutdown, the remaining processes might hang indefinitely.
VER-61420 Data Removal - Delete, Purge, Partitioning Partition operations, such as move_partitions_to_table, must split storage containers that have both partitions that match the operation and partitions that do not. Version 9.1 introduced an inefficiency whereby such a split may produce one more storage container than necessary. Any extraneous containers created this way are eventually merged by the Tuple Mover.
VER-62983 Hadoop When hcatalogconnector schemas are created with custom_partitions enabled, poor performance has been observed when there are many (500+) partitions. By default, custom_partitions are disabled on hcatalogconnector schemas.
VER-64352 SDK Under special circumstances, a sequence of statements to create and drop Python libraries fails. For example, the following session statements attempt to create Python libraries lib1 and lib2:

CREATE LIBRARY lib1 AS :lib1_path DEPENDS :numpy_path LANGUAGE 'Python';
DROP LIBRARY lib1;
CREATE LIBRARY lib2 AS :lib2_path DEPENDS :sklearn_path LANGUAGE 'Python';

Here, lib1 is a Python library that imports module numpy, while lib2 imports module sklearn, which relies on numpy. After lib1 is dropped, the CREATE LIBRARY statement for lib2 fails.
VER-64916 Kafka Integration When Vertica exports data collector information to Kafka via notifier, the serialization logic does not properly handle Vertica internal data type translation. When importing or copying that data back into the Management Console, Vertica uses the raw data in Kafka and inserts garbled data into the extended monitoring database.
VER-69803 Hadoop The infer_external_table_ddl meta-function will hang if you do not set authentication information in the current session when analyzing a file on GCP.
VER-70468 Documentation Creating a load balance group for a subcluster with a ROUNDROBIN load balance policy only has an effect if you set the global load balancing policy to ROUNDROBIN as well. This is the case, even though the LOAD_BALANCE_GROUPS system table shows the group's policy as ROUNDROBIN.
VER-71761 ComplexTypes The result of an indexed multi-dimension array cannot be compared to an un-typed string literal without a cast operation.
VER-72380 ComplexTypes Insert-selecting an array[varchar] type can result in an error when the source column varchar element length is smaller than the target column.
VER-73715 AMI, License An Eon Mode database created in one hourly listing can only be revived into another hourly listing with the same operating system. For example, an Eon Mode database created in the Red Hat hourly listing cannot be shut down and then revived into a cluster created using the Amazon Linux listing.

What's New in Vertica 10.0

Take a look at the Vertica 10.0 New Features Guide for a complete list of additions and changes introduced in this release.

admintools

Eon Mode: list_db

The admintools list_db tool now shows the communal storage location for Eon Mode databases.

Apache Kafka Integration

Vertica 10.0.0 changes some of the default settings in the Apache Kafka integration to support better performance overall and to account for the removal of the WOS.

Note: The changes to the Apache Kafka integration in Vertica version 10.0 do not require an update to your scheduler's schema. However, you may want to change some of your scheduler's settings based on the new default values. These new defaults only affect newly-created schedulers. Even if your existing scheduler is set to use default values, the new defaults do not affect it.

Longer Default Frame Length

The default frame length is now 5 minutes (increased from the previous default of 10 seconds). This increase helps prevent the creation of many small ROS containers now that Vertica no longer uses the WOS. It is also a better choice for most use cases. The old default frame duration is too short for most non-trivial workloads.

The vkconfig tool now displays a warning if you set the frame duration so low that the scheduler will have less than two seconds to run each microbatch on average. Usually, you should set the frame duration to allow more than two seconds per microbatch.

Caution: This change only affects new schedulers. Your existing schedulers are not updated with the new default frame length. This is the case, even if you created them to use the default frame length.

Default Kafka Resource Pool Change

Prior to 10.0, if you did not assign your scheduler a resource pool, it would use a resource pool named kafka_default_pool. This behavior could cause resource issues if you created multiple schedulers that used the default pool. You could also see problems if your scheduler needed more resources than those provided by the default pool.

In Vertica version 10.0, if you do not specify a resource pool for your scheduler, it uses one quarter of the resources of the GENERAL resource pool. This behavior avoids having multiple schedulers share a single resource pool that has not been configured with that workload in mind. This change only affects newly-created schedulers; it does not affect existing schedulers that use the kafka_default_pool.

Caution: Even with this change, you should create a dedicated resource pool for your scheduler to ensure it has the resources it needs. The dedicated resource pool lets you tailor and control the resources your scheduler uses.

The scheduler's fallback behavior of using the GENERAL pool is intended to allow for quick testing and validation of a scheduler before allocating a resource pool for it. Do not rely on having the scheduler use the GENERAL pool for production use. The vkconfig utility warns you every time you start a scheduler that uses the GENERAL pool.

Configuration

User-Level Configuration Parameters

Vertica now supports setting some configuration parameters for individual users. This support includes expanded syntax for ALTER USER, and new statement SHOW USER.
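For example, assuming a user named analyst and a user-level parameter such as DepotOperationsForQuery (both names here are illustrative), the expanded syntax might be used as follows:

```sql
-- Set a configuration parameter for an individual user
-- (user and parameter names are examples):
ALTER USER analyst SET PARAMETER DepotOperationsForQuery = 'FETCHES';

-- Show all configuration parameters set at the user level for that user:
SHOW USER analyst PARAMETER ALL;
```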

Database Designer

Database Designer (DBD) has been extensively overhauled. Significant improvements include:

Supported Data Types

Native Support for Arrays and Sets

Vertica-managed tables now support one-dimensional arrays of primitive types. External tables continue to support multi-dimensional arrays. The two behave identically in queries and array functions, but they are distinct types with different OIDs. For more information, see the ARRAY type.

Vertica-managed tables now support sets, which are collections of unique values. For more information, see the SET type.

Several functions that previously operated only on arrays now also operate on sets, and some new functions have been added. For more information, see Collection Functions.
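As a sketch, a Vertica-managed table might combine both collection types like this (table and column names are hypothetical):

```sql
CREATE TABLE orders (
    id    INT,
    tags  ARRAY[VARCHAR(20)],  -- one-dimensional array: ordered, may repeat
    codes SET[INT]             -- set: unordered, unique values only
);
```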

Flexible Complex Types in External Tables

In some cases, the complex types in a Parquet file are structured such that you cannot fully describe their structure in a table definition. For example, a row (struct) can contain other rows but not arrays or maps, and an array can contain other arrays but not other types. A deeply-nested set of structs could exceed the nesting limit for an external table.

In other cases, you could fully describe the structure in a table definition but might prefer not to. For example, if the data contains a struct with a very large number of fields and your queries read only a few of them, you might prefer not to enumerate them all individually. And if the data file's schema is still evolving and the type definitions might change, you might prefer not to fully define, and thus repeatedly update, the complete structure.

Flexible complex types allow you to treat a complex type in a Parquet file as unstructured data in an external table. The data is treated like the data in a flex table, and you can use the same mapping functions to extract values from it that are available for flex tables.
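As an illustrative sketch (the file path, table, and column names are hypothetical), a complex Parquet column can be declared as LONG VARBINARY and read with the flex mapping functions; this relies on the PARQUET parser option allow_long_varbinary_match_complex_type:

```sql
-- Treat the Parquet struct column "properties" as flexible, unstructured data:
CREATE EXTERNAL TABLE events (id INT, properties LONG VARBINARY)
AS COPY FROM '/data/events/*.parquet'
PARQUET(allow_long_varbinary_match_complex_type='True');

-- Extract individual fields with a flex mapping function:
SELECT id, MAPLOOKUP(properties, 'city') FROM events;
```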

Documentation Updates

Reorganized Documentation on Data Load and Export

Documentation on bulk-loading data (including using external tables), importing or exporting between Vertica databases, and exporting data to Parquet format has been reorganized and improved. See the following new top-level topic hierarchies:

Eon Mode

Eon Mode Support for Google Cloud Platform (GCP)

You can now deploy an Eon Mode database on Google Cloud Platform.

Currently, there are a few limitations when using an Eon Mode database on GCP:

Note: You must supply a valid Vertica license when creating a database with more than three nodes in it.

Future releases will address these limitations.

For more information, see Eon Mode Databases on GCP.

Eon Mode Support for HDFS

Vertica now supports communal storage on HDFS when accessed through WebHDFS. See Installing Eon Mode on Premises with Communal Storage on HDFS for more information.

There are some restrictions:

Pinning on Subclusters

Vertica now supports pinning on subcluster depots. This enhancement is implemented with two new meta-functions, which supersede the now-deprecated SET_DEPOT_PIN_POLICY meta-function:

New DelayForDeletes Configuration Parameter Setting and Default Value

The new default value for DelayForDeletes is 0, which deletes a file from communal storage as soon as it is not in use by shard subscribers. In earlier releases, the default was 2 hours.

After you upgrade, DelayForDeletes retains any value that you explicitly configured in a previous version, although Vertica recommends setting this configuration parameter to 0 for version 10.0.0. If you relied on the previous default of 2 hours, DelayForDeletes is set to 0 automatically.
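To follow this recommendation, you can set the parameter explicitly, for example:

```sql
-- Delete files from communal storage as soon as no shard subscriber uses them:
ALTER DATABASE DEFAULT SET PARAMETER DelayForDeletes = 0;
```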

Nodes Now Always Warm Their Depots in the Background

When depot warming is enabled, newly added, restarted, or recovered nodes now warm their depots in the background. They start processing queries immediately, and copy data from communal storage to populate their depots with relevant data based on the contents of the other nodes' depots in their subcluster.

Previously, nodes defaulted to foreground depot warming: when starting, they would copy relevant data from communal storage into their depots before taking part in queries. This behavior could lead to a delay between the time you added nodes to a subcluster and when they assisted in resolving queries.

This new behavior is the default. Depot warming in the foreground is no longer supported.

The BACKGROUND_DEPOT_WARMING function, which had nodes switch from foreground to background depot warming, is now deprecated and will be removed in a future release.

Voltage SecureData Integration

Type Casting and Identity Management with SQL Macros

You can integrate the VoltageSecureProtect and VoltageSecureAccess functions with SQL macros to manage identities and perform automatic type casting.

Voltage SecureData SimpleAPI 6.0

Version 6.0 of the SimpleAPI library includes several new features.

NULL Value Handling

When given a NULL value, VoltageSecureProtect now returns a NULL value. Previously, NULL inputs would return an error.

Configurable Network Timeout

You can now configure the network timeout for when Vertica interacts with your Voltage SecureData server.

The default and maximum value for this parameter is 300 seconds.

Manually Refresh Client Policy

You can now manually refresh the client policy across all nodes with the VoltageSecureRefreshPolicy function.

Safe Unicode FPE Formats

VoltageSecureProtect and VoltageSecureAccess now offer predefined formats to encrypt and decrypt all Unicode code point values. Previous versions of the Voltage library offered incomplete Unicode support with predefined formats using FPE extensions (FPE2 and JapaneseFPE).

For more information, see Best Practices for Safe Unicode FPE.

Machine Learning

Support for PMML Models

Vertica now supports the import and export of K-means, linear regression, and logistic regression machine learning models in Predictive Model Markup Language (PMML) format. Support for this platform-independent model format allows you to use models trained on other platforms to predict on data stored in your Vertica database. You can also use Vertica as your model repository.

The PREDICT_PMML function is new in Vertica 10.0. In addition, these existing functions now support PMML models:
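For example, importing a PMML model and predicting with it might look like this (the model path, model name, and table and column names are hypothetical):

```sql
-- Import a PMML model into the database:
SELECT IMPORT_MODELS('/home/dbadmin/my_kmeans_pmml' USING PARAMETERS category='PMML');

-- Use the imported model to predict on data stored in Vertica:
SELECT PREDICT_PMML(x1, x2 USING PARAMETERS model_name='my_kmeans_pmml') FROM my_data;
```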

Support for TensorFlow Models

Vertica now supports importing trained TensorFlow models and using those models for in-database prediction on data stored in the Vertica database. Vertica supports TensorFlow models trained in TensorFlow version 1.15.0.

The PREDICT_TENSORFLOW function is new in Vertica 10.0. In addition, these existing functions now support TensorFlow models:

Management Console

Provision and Revive Eon Clusters on GCP

Management Console (MC) now supports provisioning and reviving Eon database clusters on Google Cloud Platform (GCP).

Create and Manage Eon Mode Subclusters

In addition to monitoring Eon Mode subclusters, MC now allows you to create and manage subclusters for Eon Mode on AWS and Eon Mode for Pure Storage.

Depot Management Tools: Depot Efficiency and Depot Pinning

Depot Efficiency

The new Depot Efficiency tab on the MC Database > Depot Activity Monitoring page provides charts with metrics that allow you to determine whether your Eon depot is properly tuned and whether there are issues you need to adjust for better query performance.

Depot Pinning

The new Depot Pinning tab on the MC Database > Depot Activity Monitoring page allows you to view the tables that are pinned in the depot. It also allows you to create, modify, and remove the pinning policies for each table. You may want to change pinning policies based on factors such as a table's frequency of re-fetches, the size of a table's data in depot, and the number of requests for a table's data.

Vertica in Eon Mode for Pure Storage

MC now supports Vertica in Eon Mode for Pure Storage, including creating and reviving Eon Mode databases on-premises, using Pure Storage FlashBlade as the communal storage.

Select and Execute Workload Analyzer Recommendations

In MC, in addition to viewing Workload Analyzer (WLA) tuning recommendations, you can also select and execute certain individual WLA tuning commands, to improve how queries execute on your database.

Set the MAXQUERYMEMORYSIZE Parameter from MC

The MAXQUERYMEMORYSIZE parameter was added in version 9.1.1, with the ability to modify it using VSQL.

In 10.0, you can also modify the MAXQUERYMEMORYSIZE parameter directly from MC.

Projections

Changing Projection Column Encodings

You can now call ALTER TABLE…ALTER COLUMN to add an encoding type to a projection column, or to change its current encoding type. Encoding projection columns can reduce their storage footprint and enhance performance. Previously, you could not add encoding types to an existing projection; instead, you had to recreate the projection and refresh its data. For very large projections, doing so was liable to incur significant overhead, which could be prohibitively expensive on a running production server.

Adding or changing a column's encoding type has no immediate effect on existing projection data. Vertica applies the new encoding only to newly loaded data, and to existing data on mergeout.
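For example, adding RLE encoding to a projection column might look like this (table, column, and projection names are illustrative):

```sql
-- Add RLE encoding to an existing projection column. The new encoding applies
-- to newly loaded data, and to existing data on mergeout:
ALTER TABLE public.trades
    ALTER COLUMN side ENCODING RLE PROJECTIONS (trades_b0);
```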

Security and Authentication

New Parameter: LDAPLinkJoinAttr

The attribute used to associate users and groups can vary between standards. For example, POSIX groups use the memberUid attribute. To provide support for these standards, the LDAPLinkJoinAttr parameter allows you to specify the attribute with which to associate users to their roles in the LDAP Link.

For more information on this and other parameters, see LDAP Link Parameters.
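For example, to associate users to groups through the POSIX memberUid attribute, a sketch might look like this (set the parameter as you would other LDAP Link parameters):

```sql
-- Match users to their groups via memberUid during LDAP Link synchronization:
ALTER DATABASE DEFAULT SET PARAMETER LDAPLinkJoinAttr = 'memberUid';
```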

KERBEROS_CONFIG_CHECK: Credential Cache Files

To help with troubleshooting, KERBEROS_CONFIG_CHECK now prints the path of KerberosCredentialCache files.

SQL Functions and Statements

Some Array Functions Renamed

The array_min, array_max, array_sum, array_avg, array_count, and array_length functions perform aggregate operations on the elements of an array. They have been expanded to operate on other collection types and renamed to apply_min, apply_max, apply_sum, apply_avg, apply_count, and apply_count_elements, respectively. The array_* functions are deprecated and automatically call the corresponding apply_* functions. For more information, see Collection Functions.
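For example, the apply_* functions can be called directly on an array literal:

```sql
SELECT apply_max(ARRAY[1,2,3]);  -- returns 3
SELECT apply_sum(ARRAY[1,2,3]);  -- returns 6
-- The deprecated array_* names still work and call the apply_* equivalents:
SELECT array_max(ARRAY[1,2,3]);  -- returns 3
```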

SQL Support for User-Level Configuration

Two SQL statements support setting configuration parameters for individual users:

ALTER USER now supports setting user-level configuration parameters for the specified user.

New statement SHOW USER returns all configuration parameter values that are set for the specified user.

New EXPLODE Array Function

The new EXPLODE array function expands a 1D array column and returns query results where each row is an array element. The query results include a position column for the array element index, and a value column for the array element.
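As a sketch (table and column names are hypothetical):

```sql
-- Expand a 1D array column so each element becomes its own row;
-- the result includes "position" (element index) and "value" columns:
SELECT EXPLODE(items) OVER (PARTITION BEST) FROM orders;
```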

Statistics

Statistics Collection Improvements

Statistics that are collected for a given range of partition keys now supersede statistics that were previously collected for a subset of that range. For details, see Collecting Partition Statistics.
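For example (table name and partition-key range are hypothetical):

```sql
-- Collect statistics for a range of partition keys; these supersede statistics
-- previously collected for any subset of this range:
SELECT ANALYZE_STATISTICS_PARTITION('public.sales', '2020-01-01', '2020-06-30');
```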

System Tables

New system table V_MONITOR.TABLE_STATISTICS

TABLE_STATISTICS displays statistics that have been collected for tables and their respective partitions.

New column in RESOURCE_POOL_STATUS

RESOURCE_POOL_STATUS now includes a RUNTIMECAP_IN_SECONDS column, which specifies in seconds the maximum time a query in the pool can execute. If a query exceeds this setting, it tries to cascade to a secondary pool.
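For example, to check the effective runtime cap for each pool:

```sql
SELECT pool_name, runtimecap_in_seconds
FROM v_monitor.resource_pool_status;
```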

Users and Privileges

Increased Security for Privileges through Non-Default Roles

The Vertica database is now stricter about privileges held through non-default roles. Some actions have privilege requirements that depend on the effective privileges of other users. If those users hold the prerequisite privileges exclusively through a role, that role must be a default role for the action to succeed.

For example, for Naomi to change Zinn's default resource pool to RP, Zinn must already have USAGE privileges on RP. If Zinn has USAGE privileges on RP only through the Historian role, Historian must be one of Zinn's default roles; otherwise, the action fails.

For more information on changing a user's default roles, see Enabling Roles Automatically.
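Continuing the example above, making Historian one of Zinn's default roles might look like this:

```sql
-- Grant the role, then make it a default role so its privileges count toward
-- actions that depend on Zinn's effective privileges:
GRANT Historian TO Zinn;
ALTER USER Zinn DEFAULT ROLE Historian;
```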

Removal of Support for Write Optimized Store (WOS)

Over the past several releases, Vertica has significantly improved small batch loading into ROS, which now provides WOS-equivalent performance. To simplify product usage, Vertica began phasing out support for WOS-related functionality in release 9.3.0. With Vertica 10.0, this process is complete: support for all WOS-related functionality has been removed.

What's Deprecated in Vertica 10.0

This section describes the two phases Vertica follows to retire functionality:

Deprecated

The following Vertica functionality was deprecated and will be retired in future versions:

Functionality Notes

Array-specific functions:

  • array_min
  • array_max
  • array_sum
  • array_avg
  • array_count
  • array_length

Superseded by new functions that operate on collections, including arrays:

  • apply_min
  • apply_max
  • apply_sum
  • apply_avg
  • apply_count
  • apply_count_elements

Configuration parameters:

  • HiveMetadataCacheSizeMB
  • DMLTargetDirect
  • MoveOutInterval
  • MoveOutMaxAgeTime
  • MoveOutSizePct
Setting these parameters has no effect.
vbr configuration parameter SnapshotEpochLagFailureThreshold: With WOS removal, full and object backups no longer use SnapshotEpochLagFailureThreshold. If a vbr configuration file contains this parameter, vbr returns a warning that it was ignored.
DBD meta-function DESIGNER_SET_ANALYZE_CORRELATIONS_MODE: Calls to this meta-function return with a warning message.
Eon Mode meta-function BACKGROUND_DEPOT_WARMING: Foreground depot warming is no longer supported. Nodes only warm depots in the background. Calling this function has no effect.

Removed

The following functionality is no longer accessible as of the releases noted:

Functionality Notes
Write Optimized Store (WOS): As of 10.0, WOS and related functionality have been removed from Vertica.
Vertica Python client: The Vertica Python client is now open source.


For more information see Deprecated and Retired Functionality in the Vertica documentation.

Vertica 10.0.0-3: Resolved Issues

Release Date: 7/29/2020

This hotfix addresses the issues below.

Issue

Component

Description

VER-73015 ComplexTypes The ALTER TABLE ADD COLUMN command previously did not work correctly when adding a column of array type. This issue has been fixed.
VER-73139 Optimizer In some cases, common sub-expressions in the SELECT clause of an INSERT...SELECT statement were not reused, which caused performance degradation. Also, the EXPLAIN-generated query plan occasionally rendered common sub-expressions incorrectly. These issues have been resolved.
VER-73172 Hadoop Previously, when accessing data on HDFS, Vertica would sometimes connect to the wrong HDFS cluster when several NameNodes from different clusters had the same hostname. This is now fixed.
VER-73261 DDL - Projection Users without the required table and schema privileges were able to create projections if the CREATE PROJECTION statement used the createType hint with an argument of L, D, or P. This problem has been resolved.
VER-73262 Execution Engine In some cases, cancelling a query caused subsequent queries in the same transaction to return with an error that they also had been cancelled. This issue has been resolved.
VER-73265 Control Networking When a wide table had many storage containers, adding a column could sometimes cause the server to fail. The issue is fixed.
VER-73442 Admin Tools Admintools did not start because the file admintools.conf.lock was left behind by another instance of admintools. The locking mechanism has been changed to address this issue.
VER-73464 Optimizer In some cases, querying certain projections failed to find appropriate partition-level statistics. This issue has been resolved: queries on projections now use partition-level statistics as needed.
VER-73467 Tuple Mover

A ROS container of inactive partitions was excluded from mergeout if it was very large relative to the other ROS containers to be merged. Now, a large ROS container of inactive partitions is merged with smaller ROS containers when the following conditions are true:

  • Total number of ROS containers for the projection is close to the threshold for ROS pushback.
  • Total mergeout size does not exceed the limit for ROS container size.
VER-73468 DDL If you renamed a schema, the change was not propagated to table DEFAULT expressions that specified sequences of that schema. This issue has been resolved: all sequences of a renamed schema are now updated with the new schema name.
VER-73470 Data load / COPY Under certain circumstances, rows rejected by the FJsonParser could cause the database to panic. This issue has been resolved.
VER-73480 ComplexTypes, Optimizer The presence of directed queries sometimes caused "Unknown Expr" warnings with queries involving arrays and sets. This issue has been resolved.
VER-73478 Optimizer Previously, you could only set configuration parameter PushDownJoinFilterNull at the database level. If this parameter is set to 0, the NOT NULL predicate is not pushed down to the SCAN operator during JOIN operations. Now, this parameter can also be set for the current session.

Vertica 10.0.0-2: Resolved Issues

Release Date: 6/24/2020

This hotfix addresses the issues below.

Issue

Component

Description

VER-72840 Catalog Engine, Execution Engine Dropping a column no longer corrupts partition-level statistics, which previously caused subsequent runs of the ANALYZE_STATISTICS_PARTITION meta-function on the same partition range to fail. If you have existing corrupted partition-level statistics, drop the statistics and run ANALYZE_STATISTICS_PARTITION to recreate them.
VER-72904 Data Load, COPY / SDK The protocol used to run a user-defined parser in fenced mode had a bug which would occasionally cause the parser to process the same data multiple times. This issue has been fixed.
VER-72908 Security The LDAPLinkSearchTimeout configuration parameter has been restored.
VER-72973 Kafka Integration, Security If you omit a CA alias when setting up the scheduler's TLS configuration (parameter --ssl-ca-alias), the scheduler now loads all certificates into the trust store. When given an alias, the scheduler still loads only that alias.
VER-72975 Kafka Integration The scheduler now supports CA bundles at the UDX and vkconfig level.
VER-72996 Cloud - Amazon, Security Vertica instances can now only access the S3 bucket that the user specified for communal storage.
VER-73010 Backup/DR The error message "Trying to delete untracked object" has been replaced with a more useful message: "Trying to delete untracked object: This is likely caused by inconsistent backup metadata. Hint: Running a quick-repair task can resolve the backup metadata inconsistency. Running a full-check task can provide more thorough information and guidance on how to fix this issue."

Vertica 10.0.0-1: Resolved Issues

Release Date: 6/1/2020

This hotfix addresses the issues below.

Issue

Component

Description

VER-72709 DDL ALTER TABLE...RENAME was unable to rename multiple tables if projection names of the target tables were in conflict. This issue was resolved by maintaining local snapshots of the renamed objects.
VER-72711 DDL Setting NULL on a table column's DEFAULT expression is equivalent to setting no default expression on that column. So, if a column's DEFAULT/SET USING expression was already NULL, then changing the column's data type with ALTER TABLE...ALTER COLUMN...SET DATA TYPE removes its DEFAULT/SET USING expression.

Vertica 10.0.0: Resolved Issues

Release Date: 5/7/2020

This release addresses the issues below.

Issue

Component

Description

VER-68548 UI - Management Console A problem occurred in MC where JavaScript cached the TLS checkbox state while importing a database. This problem has been fixed.
VER-71398 UI - Management Console Previously, an exception occurred if a database that had no password was used as the production database for Extended Monitoring. This issue has been fixed.
VER-69103 AP-Advanced Queries using user-defined aggregates or the ACD library would occasionally return the error "DataArea overflow". This issue has been fixed.
VER-52301 Kafka Integration When an error occurs while parsing Avro messages, Vertica now provides a more helpful error message in the rejection table.
VER-68043 Kafka Integration Previously, the KafkaAvroParser would report a misleading error message when the schema_registry_url pointed to a page that was not an Avro schema. KafkaAvroParser now reports a more accurate error message.
VER-69208 Kafka Integration The example vkconfig launch script now uses nohup to prevent the scheduler from exiting prematurely.
VER-69988 Kafka Integration, Supported Platforms In newer Linux distributions (RHEL 8 or Debian 10, for example) the rdkafka library had an issue with the new glibc thread support. This issue could cause the database to go down when executing a COPY statement via the KafkaSource function. This issue has been resolved.
VER-70919 Kafka Integration Applied a patch for librdkafka issue #2108 (https://github.com/edenhill/librdkafka/issues/2108) to fix an infinite loop that caused COPY statements using KafkaSource() and their respective schedulers to hang until the Vertica server processes were restarted.
VER-71114 Execution Engine Fixed an issue where loading large Kafka messages would cause an error.
VER-69437 Supported Platforms The vertica_agent.service can now stop gracefully on Red Hat Enterprise Linux version 7.7.
VER-70932 License Fixed an issue where the license auditor could core dump in some cases on tables containing columns with SET USING expressions.
VER-67024 Client Drivers - ODBC Previously, most batch insert operations (performed with the ODBC driver) which resulted in the server rejecting some rows presented the user with an inaccurate error message:

Row rejected by server; see server log for details

No such details were actually available in the server log. This error message has been changed to:

Row rejected by server; check row data for truncation or null constraint violations
VER-69654 Third Party Tools Integration Previously, the length of the returned object was based on the input's length. However, numeric formats do not generally preserve size, since 1 may encrypt to 1000000, and so on. This led to problems such as copying a 20-byte object into a 4-byte VString object. This fix ensures that the length of the output buffer is at least the input length + 100 bytes.
VER-54779 Spread When Vertica is installed with a separate control network (by using the "--control-network" option during installation), replacing an existing node or adding a new one to the cluster might require restarting the whole database. This issue has been fixed.
VER-69112 Security, Third Party Tools Integration NULL input to VoltageSecureProtect and VoltageSecureAccess now returns NULL value.
VER-70371 Admin Tools Log rotation now functions properly in databases upgraded from 9.2.x to 9.3.x.
VER-70488 Admin Tools Starting a subcluster in an EON Mode database no longer writes duplicate entries to the admintools.log file.
VER-70973 Admin Tools The Database Designer no longer produces an error when it targets a schema other than the "public" schema.
VER-68453 Tuple Mover In previous releases, the TM resource pool always allocated two mergeout threads to inactive partitions, no matter how many threads were specified by its MAXCONCURRENCY parameter. The inability to increase the number of threads available to inactive partitions sometimes caused ROS pushback. Now, the Tuple Mover can allocate up to half of the MAXCONCURRENCY-specified mergeout threads to inactive partitions.
VER-70836 Optimizer In some cases, the optimizer required an unusual amount of time to generate plans for queries on tables with complex live aggregate projections. This issue has been resolved.
VER-71748 Optimizer Queries with a mix of single-table predicates and expressions over several EXISTS queries in their WHERE clause sometimes returned incorrect results. The issue has been fixed.
VER-71953 Optimizer, Recovery Projection data now remains consistent following a MERGE during node recovery in Enterprise mode.
VER-71397 Execution Engine In some cases, malformed queries on specific columns, such as constraint definitions with unrecognized values or transformations with improper formats, caused the database to fail. This problem has been resolved.
VER-71457 Execution Engine Certain MERGE operations were unable to match unique hash values between the inner and outer sides of an optimized merge join. This resulted in setting the hash table key and value to null. Attempts to decrement the outer join hash count failed to take these null values into account, which caused node failure. This issue has been resolved.
VER-71148 DDL - Table If a column had a constraint and its name contained ASCII and non-ASCII characters, attempts to insert values into this column sometimes caused the database to fail. This issue has been resolved.
VER-71151 DDL - Table ALTER TABLE...RENAME was unable to rename multiple tables if projection names of the target tables were in conflict. This issue was resolved by maintaining local snapshots of the renamed objects.
VER-62046 DDL - Projection When partitioning a table, Vertica first calculated the number of partitions, and then verified that columns in the partition expression were also in all projections of that table. The algorithm has been reversed: now Vertica checks that all projections contain the required columns before calculating the number of partitions.
VER-71145 Data Removal - Delete, Purge, Partitioning Vertica now provides clearer messages when meta-function drop_partitions fails due to an insufficient resource pool.
VER-70607 Catalog Engine In Eon Mode, projection checkpoint epochs on down nodes now become consistent with the current checkpoint epoch when the nodes resume activity.
VER-61279 Hadoop Previously, loading data into an HDFS storage location would occasionally fail with a "Error finalizing ROS DataTarget" message. This is now fixed.
VER-63413 Hadoop Previously, exporting huge tables to Parquet format would sometimes fail with a "file not found" exception. This is now fixed.
VER-68830 Hadoop On some HDFS distributions, if a DataNode is killed during a Vertica query to HDFS, the NameNode can fail to respond for a long time, causing Vertica to time out and roll back the transaction. In such cases, Vertica used to log a confusing error message; and if the timeout happened during the SASL handshake, Vertica would hang and could not be canceled. Vertica now logs a better error message (stating that the dfs.client.socket-timeout value should be increased on the HDFS cluster), and a hang during the SASL handshake is now cancelable.
VER-71088 Hadoop Previously, Vertica queries accessing HDFS would fail immediately if an HDFS operation returned 403 error code (such as Server too busy). Vertica now retries the operation.
VER-71542 Hadoop The Vertica function INFER_EXTERNAL_TABLE_DDL is now compatible with Parquet files that use 2-level encoding for LIST types.
VER-71047 Backup/DR When logging at level 3, vbr no longer prints the contents of dbPassword, dest_dbPassword, and serviceAccessPass.
VER-71451 EON The clean_communal_storage meta-function is now up to 200X faster.

Known issues Vertica 10.0.0

Updated: 5/7/20

Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.

Issue

Component

Description

VER-72422 Nimbus In Eon Mode, library files that are queued for deletion are not removed from S3 or GCS communal storage. Running FLUSH_REAPER_QUEUE and CLEAN_COMMUNAL_STORAGE does not remove the files.
VER-72380 ComplexTypes Insert-selecting an array[varchar] type can result in an error when the source column varchar element length is smaller than the target column.
VER-71761 ComplexTypes The result of an indexed multi-dimension array cannot be compared to an un-typed string literal without a cast operation.
VER-70468 Documentation Creating a load balance group for a subcluster with a ROUNDROBIN load balance policy only has an effect if you set the global load balancing policy to ROUNDROBIN as well. This is the case, even though the LOAD_BALANCE_GROUPS system table shows the group's policy as ROUNDROBIN.
VER-69803 Hadoop The infer_external_table_ddl meta-function will hang if you do not set authentication information in the current session when analyzing a file on GCP.
VER-69797 ComplexTypes When referencing elements from an array, Vertica cannot cast to other data types without an explicit reference to the original data type.
VER-69442 Client Drivers - VSQL, Supported Platforms On RHEL 8.0, VSQL has an additional dependency on the libnsl library. Attempting to use VSQL in Vertica 9.3SP1 or 10.0 without first installing libnsl fails with the following errors: "Could not connect to database (EOF received)" and "/opt/vertica/bin/vsql: error while loading shared libraries: libnsl.so.1: cannot open shared object file: No such file or directory".
VER-67228 AMI, License An Eon Mode database created in one hourly listing can only be revived into another hourly listing with the same operating system. For example, an Eon Mode database created in the Red Hat hour listing cannot be shutdown then revived into a cluster created using the Amazon Linux listing.
VER-64352 SDK Under special circumstances, a sequence of statements to create and drop Python libraries fails. For example, the following session statements attempt to create Python libraries lib1 and lib2:

CREATE LIBRARY lib1 AS :lib1_path DEPENDS :numpy_path LANGUAGE 'Python';
DROP LIBRARY lib1;
CREATE LIBRARY lib2 AS :lib2_path DEPENDS :sklearn_path LANGUAGE 'Python';

Here, lib1 is a Python library that imports module numpy, while lib2 imports module sklearn, which relies on numpy. After lib1 is dropped, the CREATE LIBRARY statement for lib2 fails.
VER-63720 Backup/DR A full restore is refused if the number of nodes participating in the restore does not match the number of mapped nodes in the configuration file.
VER-62983 Hadoop When HCatalog Connector schemas are created with custom_partitions enabled, poor performance has been observed when there are many (500+) partitions. By default, custom_partitions is disabled on HCatalog Connector schemas.
VER-61420 Data Removal - Delete, Purge, Partitioning Partition operations, such as move_partitions_to_table, must split storage containers that have both partitions that match the operation and partitions that do not. Version 9.1 introduced an inefficiency whereby such a split may separate a storage container into one more storage container than necessary. Note that any extraneous containers created this way are eventually merged by the Tuple Mover.
VER-61069 Execution Engine In very rare circumstances, if a Vertica process crashes during shutdown, the remaining processes might hang indefinitely.
VER-60409 AP-Advanced, Optimizer The APPLY_SVD and APPLY_PCA functions are implemented as User-Defined Transform Functions (UDTFs). A query containing a UDTF that takes many input columns (for example, more than 1000) and also outputs many columns may fail with the error "Request size too big" due to the additional memory required during parsing.
VER-48041 Admin Tools On some systems, occasionally admintools cannot parse the output it sees while running SSH commands on other hosts in the cluster. The issue is typically transient.
VER-48020 Hadoop Canceling a query that involves loading data from ORC or Parquet files can be slow if the load involves a large number of files.
VER-41895 Admin Tools On some systems, admintools fails to parse output while running SSH commands on hosts in the cluster.

Legal Notices

Warranty

The only warranties for products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Micro Focus shall not be liable for technical or editorial errors or omissions contained herein.

The information contained herein is subject to change without notice.

Restricted Rights Legend

Confidential computer software. Valid license from Micro Focus required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

Copyright Notice

© Copyright 2006 - 2020 Micro Focus, Inc.

Trademark Notices

Adobe® is a trademark of Adobe Systems Incorporated.

Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.

UNIX® is a registered trademark of The Open Group.