10.0.x Release Notes

Vertica
Software Version: 10.0.x

 

IMPORTANT: Before Upgrading: Identify and Remove Unsupported Projections

With version 9.2, Vertica has removed support for pre-join and range segmentation projections. If a table's only superprojection is one of these projection types, the projection is regarded as unsafe.

Before upgrading to 9.2 or higher, you are strongly urged to check your database for unsupported projections. If the upgrade encounters these projections, it is liable to fail. You must then revert to the previous installation.

Solution: Run the pre-upgrade script

Vertica provides a pre-upgrade script that examines your current database and writes its analysis and recommendations to standard output. The script identifies and lists any unsupported projections. If the script finds projection buddies with different SELECT and ORDER BY clauses, it generates a deploy script. Run that deploy script to remedy the projections so they comply with system K-safety.

https://www.vertica.com/pre-upgrade-script/

For more information, see Fixing Unsafe Buddy Projections in the Vertica documentation.

Updated: 5/7/2020

About Vertica Release Notes

What's New in Vertica 10.0

What's Deprecated in Vertica 10.0

Vertica 10.0: Resolved Issues

Vertica 10.0: Known Issues

About Vertica Release Notes

The Release Notes contain the latest information on new features, changes, fixes, and known issues in Vertica 10.0.x.

They also contain information about issues resolved in:

Downloading Major and Minor Releases, and Service Packs

The Premium Edition of Vertica is available for download at https://support.microfocus.com/downloads/swgrp.html.

The Community Edition of Vertica is available for download at https://www.vertica.com/download/vertica/community-edition.

The documentation is available at https://www.vertica.com/docs/10.0.x/HTML/index.htm.

Downloading Hotfixes

Hotfixes are available to Premium Edition customers only. Each software package on the https://support.microfocus.com/downloads/swgrp.html site is labeled with its latest hotfix version.

What's New in Vertica 10.0

Take a look at the Vertica 10.0 New Features Guide for a complete list of additions and changes introduced in this release.

admintools

Eon Mode: list_db

The admintools list_db tool now shows the communal storage location for Eon Mode databases.

Apache Kafka Integration

Vertica 10.0.0 changes some of the default settings in the Apache Kafka integration to support better performance overall and to account for the removal of the WOS.

Note: The changes to the Apache Kafka integration in Vertica version 10.0 do not require an update to your scheduler's schema. However, you may want to change some of your scheduler's settings based on the new default values. These new defaults only affect newly-created schedulers. Even if your existing scheduler is set to use default values, the new defaults do not affect it.

Longer Default Frame Length

The default frame duration is now 5 minutes (increased from the previous default of 10 seconds). This longer default helps prevent the creation of many small ROS containers now that Vertica no longer uses the WOS, and it is a better fit for most non-trivial workloads, for which the old default was too short.

The vkconfig tool now displays a warning if you set the frame duration so low that the scheduler will have less than two seconds to run each microbatch on average. Usually, you should set the frame duration to allow more than two seconds per microbatch.

Caution: This change only affects new schedulers. Your existing schedulers are not updated with the new default frame duration, even if you created them to use the default value.

Default Kafka Resource Pool Change

Prior to 10.0, if you did not assign your scheduler a resource pool, it would use a resource pool named kafka_default_pool. This behavior could cause resource issues if you created multiple schedulers that used the default pool. You could also see problems if your scheduler needed more resources than those provided by the default pool.

In Vertica version 10.0, if you do not specify a resource pool for your scheduler, it uses one quarter of the resources of the GENERAL resource pool. This behavior avoids having multiple schedulers share a single resource pool that has not been configured with that workload in mind. This change only affects newly-created schedulers. It does not affect existing schedulers that use the kafka_default_pool.

Caution: Even with this change, you should create a dedicated resource pool for your scheduler to ensure it has the resources it needs. The dedicated resource pool lets you tailor and control the resources your scheduler uses.

The scheduler's fallback behavior of using the GENERAL pool is intended to allow for quick testing and validation of a scheduler before allocating a resource pool for it. Do not rely on having the scheduler use the GENERAL pool for production use. The vkconfig utility warns you every time you start a scheduler that uses the GENERAL pool.

Configuration

User-Level Configuration Parameters

Vertica now supports setting some configuration parameters for individual users. This support includes expanded syntax for ALTER USER and a new statement, SHOW USER.

Database Designer

Database Designer (DBD) has been extensively overhauled. Significant improvements include:

Supported Data Types

Native Support for Arrays and Sets

Vertica-managed tables now support one-dimensional arrays of primitive types. External tables continue to support multi-dimensional arrays. The two types are the same with respect to queries and array functions, but are different types with different OIDs. For more information, see the ARRAY type.

Vertica-managed tables now support sets, which are collections of unique values. For more information, see the SET type.

Several functions that previously operated only on arrays now also operate on sets, and some new functions have been added. For more information, see Collection Functions.
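The new collection types can be sketched as follows (table and column names are invented for illustration):

```sql
-- One-dimensional array of a primitive type in a Vertica-managed table
CREATE TABLE orders (
    order_id  INT,
    shipments ARRAY[INT]      -- element type must be primitive
);
INSERT INTO orders VALUES (1, ARRAY[101, 102, 103]);

-- A SET stores unique values; duplicate inputs are collapsed on load
CREATE TABLE products (
    product_id INT,
    tags       SET[VARCHAR]
);
INSERT INTO products VALUES (1, SET['new', 'sale', 'sale']);

-- Collection functions operate on both types
SELECT apply_count_elements(shipments) FROM orders;
```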

Flexible Complex Types in External Tables

In some cases, the complex types in a Parquet file are structured such that you cannot fully describe their structure in a table definition. For example, a row (struct) can contain other rows but not arrays or maps, and an array can contain other arrays but not other types. A deeply-nested set of structs could exceed the nesting limit for an external table.

In other cases, you could fully describe the structure in a table definition but might prefer not to. For example, if the data contains a struct with a very large number of fields, and in your queries you will only read a few of them, you might prefer to not have to enumerate them all individually. And if the data file's schema is still evolving and the type definitions might change, you might prefer not to fully define, and thus have to update, the complete structure.

Flexible complex types allow you to treat a complex type in a Parquet file as unstructured data in an external table. The data is treated like the data in a flex table, and you can use the same mapping functions to extract values from it that are available for flex tables.
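As a hedged sketch (file path and names invented), a complex Parquet column can be declared as LONG VARBINARY in the external table definition and then queried with the flex mapping functions:

```sql
-- Treat the Parquet struct column 'properties' as unstructured data
CREATE EXTERNAL TABLE readings (
    sensor_id  INT,
    properties LONG VARBINARY   -- flexible complex type
) AS COPY FROM '/data/readings/*.parquet' PARQUET;

-- Inspect the stored keys and values, as with a flex table
SELECT sensor_id, MAPTOSTRING(properties) FROM readings;
```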

Documentation Updates

Reorganized Documentation on Data Load and Export

Documentation on bulk-loading data (including using external tables), importing or exporting between Vertica databases, and exporting data to Parquet format has been reorganized and improved. See the following new top-level topic hierarchies:

Eon Mode

Eon Mode Support for Google Cloud Platform (GCP)

You can now deploy an Eon Mode database on Google Cloud Platform.

Currently, there are a few limitations when using an Eon Mode database on GCP:

Note: You must supply a valid Vertica license when creating a database with more than three nodes in it.

Future releases will address these limitations.

For more information, see Eon Mode Databases on GCP.

Eon Mode Support for HDFS

Vertica now supports communal storage on HDFS when accessed through WebHDFS. See Installing Eon Mode on Premises with Communal Storage on HDFS for more information.

There are some restrictions:

Pinning on Subclusters

Vertica now supports pinning on subcluster depots. This enhancement is implemented with two new meta-functions, which supersede the now-deprecated SET_DEPOT_PIN_POLICY meta-function:
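The two meta-functions are not listed above; based on the Vertica 10.0 documentation, they are SET_DEPOT_PIN_POLICY_TABLE and SET_DEPOT_PIN_POLICY_PARTITION. A hedged sketch (table and subcluster names invented):

```sql
-- Pin an entire table in the depots of one subcluster
SELECT SET_DEPOT_PIN_POLICY_TABLE('public.sales', 'analytics_subcluster');

-- Pin only a range of partition keys
SELECT SET_DEPOT_PIN_POLICY_PARTITION('public.sales', '2019-01-01', '2019-12-31',
                                      'analytics_subcluster');
```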

New DelayForDeletes Configuration Parameter Setting and Default Value

The new default value for DelayForDeletes is 0, which deletes a file from communal storage as soon as it is not in use by shard subscribers. In earlier releases, the default was 2 hours.

After you upgrade, DelayForDeletes retains any value that you configured in a previous version, although Vertica recommends setting this configuration parameter to 0 for version 10.0.0. If you used the previous default of 2 hours, DelayForDeletes is set to 0 automatically.
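If your upgraded database kept an older value, the parameter can be set to the recommended default at the database level; a sketch (database name invented):

```sql
-- Recommended 10.0.0 setting: reap unused files immediately
ALTER DATABASE mydb SET PARAMETER DelayForDeletes = 0;

-- Verify the current value
SELECT current_value
FROM configuration_parameters
WHERE parameter_name = 'DelayForDeletes';
```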

Nodes Now Always Warm Their Depots in the Background

When depot warming is enabled, newly added, restarted, or recovered nodes now warm their depots in the background. They start processing queries immediately, and they copy data from communal storage to populate their depots with relevant data based on the contents of the other nodes' depots in their subcluster.

Previously, nodes defaulted to foreground depot warming: when starting, they would copy relevant data from communal storage into their depots before taking part in queries. This behavior could lead to a delay between the time you added nodes to a subcluster and when they assisted in resolving queries.

This new behavior is the default. Depot warming in the foreground is no longer supported.

The BACKGROUND_DEPOT_WARMING function, which had nodes switch from foreground to background depot warming, is now deprecated and will be removed in a future release.

Voltage SecureData Integration

Type Casting and Identity Management with SQL Macros

You can integrate the VoltageSecureProtect and VoltageSecureAccess functions with SQL macros to manage identities and perform automatic type casting.
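For example, a SQL macro (created with CREATE FUNCTION) might wrap VoltageSecureProtect to cast an integer column before encryption. This is a sketch only; the macro name and the format value are hypothetical:

```sql
-- Hypothetical macro: cast an INT to VARCHAR before protecting it
CREATE OR REPLACE FUNCTION protect_int(val INT) RETURN VARCHAR
AS BEGIN
    RETURN VoltageSecureProtect(val::VARCHAR USING PARAMETERS format='ssn');
END;

SELECT protect_int(123456789);
```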

Voltage SecureData SimpleAPI 6.0

Version 6.0 of the SimpleAPI library includes several new features.

NULL Value Handling

When given a NULL value, VoltageSecureProtect now returns a NULL value. Previously, NULL inputs would return an error.

Configurable Network Timeout

You can now configure the network timeout for when Vertica interacts with your Voltage SecureData server.

The default and maximum value for this parameter is 300 seconds.

Manually Refresh Client Policy

You can now manually refresh the client policy across all nodes with the VoltageSecureRefreshPolicy function.

Safe Unicode FPE Formats

VoltageSecureProtect and VoltageSecureAccess now offer predefined formats to encrypt and decrypt all Unicode code point values. Previous versions of the Voltage library offered incomplete Unicode support with predefined formats using FPE extensions (FPE2 and JapaneseFPE).

For more information, see Best Practices for Safe Unicode FPE.

Machine Learning

Support for PMML Models

Vertica now supports the import and export of K-means, linear regression, and logistic regression machine learning models in Predictive Model Markup Language (PMML) format. Support for this platform-independent model format allows you to use models trained on other platforms to predict on data stored in your Vertica database. You can also use Vertica as your model repository.

The PREDICT_PMML function is new in Vertica 10.0. In addition, these existing functions now support PMML models:
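A hedged sketch of importing a PMML model and scoring with it (paths, model, column, and table names invented):

```sql
-- Import a PMML model exported from another platform
SELECT IMPORT_MODELS('/models/my_kmeans.pmml' USING PARAMETERS category='PMML');

-- Score rows with the imported model
SELECT PREDICT_PMML(col1, col2 USING PARAMETERS model_name='my_kmeans')
FROM input_table;
```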

Support for TensorFlow Models

Vertica now supports importing trained TensorFlow models, and using those models to do prediction in Vertica on data stored in the Vertica database. Vertica supports TensorFlow models trained in TensorFlow version 1.15.0.

The PREDICT_TENSORFLOW function is new in Vertica 10.0. In addition, these existing functions now support TensorFlow models:
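A sketch of scoring with an imported TensorFlow model (paths and names invented; PREDICT_TENSORFLOW is a transform function, so it is called with an OVER clause):

```sql
-- Import a trained TensorFlow 1.15.0 model directory
SELECT IMPORT_MODELS('/models/tf_mnist' USING PARAMETERS category='TENSORFLOW');

-- Run prediction over the input rows
SELECT PREDICT_TENSORFLOW(pixel_1, pixel_2
                          USING PARAMETERS model_name='tf_mnist')
       OVER(PARTITION BEST)
FROM test_images;
```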

Management Console

Provision and Revive Eon Clusters on GCP

Management Console (MC) now supports provisioning and reviving Eon database clusters on Google Cloud Platform (GCP).

Create and Manage Eon Mode Subclusters

In addition to monitoring Eon Mode subclusters, MC now allows you to create and manage subclusters for Eon Mode on AWS and Eon Mode for Pure Storage.

Depot Management Tools: Depot Efficiency and Depot Pinning

Depot Efficiency

The new Depot Efficiency tab on the MC Database > Depot Activity Monitoring page provides charts with metrics that allow you to determine whether your Eon depot is properly tuned and whether there are issues you need to adjust for better query performance.

Depot Pinning

The new Depot Pinning tab on the MC Database > Depot Activity Monitoring page allows you to view the tables that are pinned in the depot. It also allows you to create, modify, and remove the pinning policies for each table. You may want to change pinning policies based on factors such as a table's frequency of re-fetches, the size of a table's data in depot, and the number of requests for a table's data.

Vertica in Eon Mode for Pure Storage

MC now supports Vertica in Eon Mode for Pure Storage, including creating and reviving Eon Mode databases on-premises, using Pure Storage FlashBlade as the communal storage.

Select and Execute Workload Analyzer Recommendations

In MC, in addition to viewing Workload Analyzer (WLA) tuning recommendations, you can now select and execute certain individual WLA tuning commands to improve how queries execute on your database.

Set the MAXQUERYMEMORYSIZE Parameter from MC

The MAXQUERYMEMORYSIZE parameter was added in version 9.1.1, with the ability to modify it using vsql.

In 10.0, you can now modify the MAXQUERYMEMORYSIZE parameter directly from MC.

Projections

Changing Projection Column Encodings

You can now call ALTER TABLE…ALTER COLUMN to add an encoding type to a projection column, or change its current encoding type. Encoding projection columns can help reduce their storage footprint, and enhance performance. Until now, you could not add encoding types to an existing projection; instead, you had to recreate the projection and refresh its data. Doing so for very large projections was liable to incur significant overhead, which could be prohibitively expensive for a running production server.

When you add or change a column's encoding type, it has no immediate effect on existing projection data. Vertica applies the encoding only to newly loaded data, and to existing data on mergeout.
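For example (table, column, and projection names invented):

```sql
-- Add RLE encoding to a column of an existing projection.
-- New loads use the encoding immediately; existing data is
-- re-encoded at mergeout.
ALTER TABLE public.sales
    ALTER COLUMN order_state ENCODING RLE PROJECTIONS (sales_super);
```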

Security and Authentication

New Parameter: LDAPLinkJoinAttr

The attribute used to associate users and groups can vary between standards. For example, POSIX groups use the memberUid attribute. To provide support for these standards, the LDAPLinkJoinAttr parameter allows you to specify the attribute with which to associate users to their roles in the LDAP Link.
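A sketch of setting the parameter for POSIX groups (database name invented):

```sql
-- Associate users with groups via the POSIX memberUid attribute
ALTER DATABASE mydb SET PARAMETER LDAPLinkJoinAttr = 'memberUid';
```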

For more information on this and other parameters, see LDAP Link Parameters.

KERBEROS_CONFIG_CHECK: Credential Cache Files

To help with troubleshooting, KERBEROS_CONFIG_CHECK now prints the path of KerberosCredentialCache files.

SQL Functions and Statements

Some Array Functions Renamed

The array_min, array_max, array_sum, array_avg, array_count, and array_length functions perform aggregate operations on the elements of an array. They have been expanded to operate on other collection types and have been renamed to apply_min, apply_max, apply_sum, apply_avg, apply_count, and apply_count_elements. The array_* functions are deprecated and automatically call the corresponding apply_* functions. For more information, see Collection Functions.
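A minimal sketch of the renamed functions on both collection types:

```sql
-- apply_* functions accept arrays as well as sets
SELECT apply_sum(ARRAY[1, 2, 3]);
SELECT apply_max(SET[10, 20, 30]);

-- Deprecated array_* names still work and call the apply_* equivalents
SELECT array_sum(ARRAY[1, 2, 3]);
```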

SQL Support for User-Level Configuration

Two SQL statements support setting configuration parameters for individual users:

ALTER USER now supports setting user-level configuration parameters for the specified user.

New statement SHOW USER returns all configuration parameter values that are set for the specified user.
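A sketch of the new syntax (user name and parameter are illustrative; the parameter must be one that supports user-level settings):

```sql
-- Set a configuration parameter for one user only
ALTER USER analyst1 SET PARAMETER LoadSourceStatisticsLimit = 512;

-- Show all parameters set at the user level for that user
SHOW USER analyst1 PARAMETER ALL;
```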

New EXPLODE Array Function

The new EXPLODE array function expands a one-dimensional array column and returns query results where each row holds one array element. The query results include a position column for the array element's index and a value column for the element itself.
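For example (table and column names invented; EXPLODE is a transform function, so it is called with an OVER clause):

```sql
CREATE TABLE orders (order_id INT, items ARRAY[VARCHAR]);

-- Each output row holds one array element, with its index
SELECT EXPLODE(items) OVER (PARTITION BEST) FROM orders;
```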

Statistics

Statistics Collection Improvements

Statistics that are collected for a given range of partition keys now supersede statistics that were previously collected for a subset of that range. For details, see Collecting Partition Statistics.
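For example (table name and key range invented):

```sql
-- Collect statistics for one range of partition keys;
-- these supersede earlier statistics on any sub-range
SELECT ANALYZE_STATISTICS_PARTITION('public.sales', '2019-01-01', '2019-12-31');
```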

System Tables

New system table V_MONITOR.TABLE_STATISTICS

TABLE_STATISTICS displays statistics that have been collected for tables and their respective partitions.

New column in RESOURCE_POOL_STATUS

RESOURCE_POOL_STATUS now includes a RUNTIMECAP_IN_SECONDS column, which specifies in seconds the maximum time a query in the pool can execute. If a query exceeds this setting, it tries to cascade to a secondary pool.
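A quick way to inspect the new column:

```sql
-- Check each pool's runtime cap in seconds
SELECT pool_name, runtimecap_in_seconds
FROM v_monitor.resource_pool_status;
```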

Users and Privileges

Increased Security for Privileges through Non-Default Roles

The Vertica database is now more strict with privileges through non-default roles. Some actions have privilege requirements that depend on the effective privileges of other users. If these other users have the prerequisite privileges exclusively through a role, the role must be a default role for the action to succeed.

For example, for Naomi to change Zinn's default resource pool to RP, Zinn must already have USAGE privileges on RP. If Zinn only has USAGE privileges on RP through the Historian role, it must be set as a default role, otherwise the action will fail.
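Continuing the example, the remedy is to make the role a default role for the user; a sketch:

```sql
-- Zinn holds USAGE on RP only through the Historian role;
-- make Historian a default role so dependent actions succeed
ALTER USER Zinn DEFAULT ROLE Historian;
```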

For more information on changing a user's default roles, see Enabling Roles Automatically.

Removal of Support for Write Optimized Store (WOS)

Over the past several releases, Vertica has significantly improved small batch loading into ROS, which now provides WOS-equivalent performance. In order to simplify product usage, Vertica began in release 9.3.0 to phase out support for WOS-related functionality. With Vertica 10.0, this process is complete; support for all WOS-related functionality has been removed.

What's Deprecated in Vertica 10.0

This section describes the two phases Vertica follows to retire Vertica functionality:

Deprecated

The following Vertica functionality was deprecated and will be retired in future versions:


Array-specific functions:

  • array_min
  • array_max
  • array_sum
  • array_avg
  • array_count
  • array_length

Superseded by new functions that operate on collections, including arrays:

  • apply_min
  • apply_max
  • apply_sum
  • apply_avg
  • apply_count
  • apply_count_elements

Configuration parameters:

  • HiveMetadataCacheSizeMB
  • DMLTargetDirect
  • MoveOutInterval
  • MoveOutMaxAgeTime
  • MoveOutSizePct

Setting these parameters has no effect.

  • vbr configuration parameter SnapshotEpochLagFailureThreshold: With the removal of the WOS, full and object backups no longer use SnapshotEpochLagFailureThreshold. If a vbr configuration file contains this parameter, vbr returns a warning that it was ignored.
  • DBD meta-function DESIGNER_SET_ANALYZE_CORRELATIONS_MODE: Calls to this meta-function return with a warning message.
  • Eon Mode meta-function BACKGROUND_DEPOT_WARMING: Foreground depot warming is no longer supported; nodes warm their depots only in the background. Calling this function has no effect.

Removed

The following functionality is no longer accessible as of the releases noted:

  • Write Optimized Store (WOS): As of 10.0, WOS and related functionality have been removed from Vertica.
  • Vertica Python client: The Vertica Python client is now open source.


For more information see Deprecated and Retired Functionality in the Vertica documentation.

Vertica 10.0: Resolved Issues

Release Date: 5/7/2020

This release addresses the issues below.

Issue | Component | Description

VER-68548 UI - Management Console A problem occurred in MC where JavaScript cached the TLS checkbox state while importing a database. This problem has been fixed.
VER-71398 UI - Management Console Previously, an exception occurred if a database that had no password was used as the production database for Extended Monitoring. This issue has been fixed.
VER-69103 AP-Advanced Queries using user-defined aggregates or the ACD library would occasionally return the error "DataArea overflow". This issue has been fixed.
VER-52301 Kafka Integration When an error occurs while parsing Avro messages, Vertica now provides a more helpful error message in the rejection table.
VER-68043 Kafka Integration Previously, the KafkaAvroParser would report a misleading error message when the schema_registry_url pointed to a page that was not an Avro schema. KafkaAvroParser now reports a more accurate error message.
VER-69208 Kafka Integration The example vkconfig launch script now uses nohup to prevent the scheduler from exiting prematurely.
VER-69988 Kafka Integration, Supported Platforms In newer Linux distributions (RHEL 8 or Debian 10, for example) the rdkafka library had an issue with the new glibc thread support. This issue could cause the database to go down when executing a COPY statement via the KafkaSource function. This issue has been resolved.
VER-70919 Kafka Integration Applied a patch for librdkafka issue #2108 (https://github.com/edenhill/librdkafka/issues/2108) to fix an infinite loop that caused COPY statements using KafkaSource() and their respective schedulers to hang until the Vertica server processes were restarted.
VER-71114 Execution Engine Fixed an issue where loading large Kafka messages would cause an error.
VER-69437 Supported Platforms The vertica_agent.service can now stop gracefully on Red Hat Enterprise Linux version 7.7.
VER-70932 License Fixed an issue where the license auditor could core dump in some cases on tables containing columns with SET USING expressions.
VER-67024 Client Drivers - ODBC Previously, most batch insert operations (performed with the ODBC driver) in which the server rejected some rows presented the user with an inaccurate error message:

Row rejected by server; see server log for details

No such details were actually available in the server log. This error message has been changed to:

Row rejected by server; check row data for truncation or null constraint violations
VER-69654 Third Party Tools Integration Previously, the length of the returned object was based on the input's length. However, numeric formats do not generally preserve size (1 may encrypt to 1000000, for example), which led to problems such as copying a 20-byte object into a 4-byte VString object. This fix ensures that the length of the output buffer is at least the input length plus 100 bytes.
VER-54779 Spread When Vertica is installed with a separate control network (by using the "--control-network" option during installation), replacing an existing node or adding a new one to the cluster might require restarting the whole database. This issue has been fixed.
VER-69112 Security, Third Party Tools Integration NULL input to VoltageSecureProtect and VoltageSecureAccess now returns NULL value.
VER-70371 Admin Tools Log rotation now functions properly in databases upgraded from 9.2.x to 9.3.x.
VER-70488 Admin Tools Starting a subcluster in an EON Mode database no longer writes duplicate entries to the admintools.log file.
VER-70973 Admin Tools The Database Designer no longer produces an error when it targets a schema other than the "public" schema.
VER-68453 Tuple Mover In previous releases, the TM resource pool always allocated two mergeout threads to inactive partitions, no matter how many threads were specified by its MAXCONCURRENCY parameter. The inability to increase the number of threads available to inactive partitions sometimes caused ROS pushback. Now, the Tuple Mover can allocate up to half of the MAXCONCURRENCY-specified mergeout threads to inactive partitions.
VER-70836 Optimizer In some cases, the optimizer required an unusual amount of time to generate plans for queries on tables with complex live aggregate projections. This issue has been resolved.
VER-71748 Optimizer Queries with a mix of single-table predicates and expressions over several EXISTS queries in their WHERE clause sometimes returned incorrect results. The issue has been fixed.
VER-71953 Optimizer, Recovery Projection data now remains consistent following a MERGE during node recovery in Enterprise mode.
VER-71397 Execution Engine In some cases, malformed queries on specific columns, such as constraint definitions with unrecognized values or transformations with improper formats, caused the database to fail. This problem has been resolved.
VER-71457 Execution Engine Certain MERGE operations were unable to match unique hash values between the inner and outer sides of an optimized merge join. This resulted in setting the hash table key and value to null. Attempts to decrement the outer join hash count failed to take these null values into account, which caused node failure. This issue has been resolved.
VER-71148 DDL - Table If a column had a constraint and its name contained ASCII and non-ASCII characters, attempts to insert values into this column sometimes caused the database to fail. This issue has been resolved.
VER-71151 DDL - Table ALTER TABLE...RENAME was unable to rename multiple tables if projection names of the target tables were in conflict. This issue was resolved by maintaining local snapshots of the renamed objects.
VER-62046 DDL - Projection When partitioning a table, Vertica first calculated the number of partitions, and then verified that columns in the partition expression were also in all projections of that table. The algorithm has been reversed: now Vertica checks that all projections contain the required columns before calculating the number of partitions.
VER-71145 Data Removal - Delete, Purge, Partitioning Vertica now provides clearer messages when meta-function drop_partitions fails due to an insufficient resource pool.
VER-70607 Catalog Engine In Eon Mode, projection checkpoint epochs on down nodes now become consistent with the current checkpoint epoch when the nodes resume activity.
VER-61279 Hadoop Previously, loading data into an HDFS storage location would occasionally fail with an "Error finalizing ROS DataTarget" message. This is now fixed.
VER-63413 Hadoop Previously, exporting very large tables to Parquet format would sometimes fail with a "file not found" exception. This is now fixed.
VER-68830 Hadoop On some HDFS distributions, if a data node is killed during a Vertica query to HDFS, the name node can fail to respond for a long time, causing Vertica to time out and roll back the transaction. In such cases, Vertica used to log a confusing error message, and if the timeout occurred during the SASL handshake, Vertica would hang without the possibility of being canceled. Vertica now logs a clearer error message (advising that the dfs.client.socket-timeout value be increased on the HDFS cluster), and the hang during the SASL handshake is now cancelable.
VER-71088 Hadoop Previously, Vertica queries accessing HDFS would fail immediately if an HDFS operation returned 403 error code (such as Server too busy). Vertica now retries the operation.
VER-71542 Hadoop The Vertica function INFER_EXTERNAL_TABLE_DDL is now compatible with Parquet files that use 2-level encoding for LIST types.
VER-71047 Backup/DR When logging at level 3, vbr no longer prints the values of dbPassword, dest_dbPassword, and serviceAccessPass.

Vertica 10.0.0: Known Issues

Updated: 5/7/20

Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.

Issue | Component | Description

VER-72422 Nimbus In Eon Mode, library files that are queued for deletion are not removed from S3 or GCS communal storage. Running FLUSH_REAPER_QUEUE and CLEAN_COMMUNAL_STORAGE does not remove the files.
VER-72380 ComplexTypes Insert-selecting an array[varchar] type can result in an error when the source column varchar element length is smaller than the target column.
VER-71761 ComplexTypes The result of an indexed multi-dimension array cannot be compared to an un-typed string literal without a cast operation.
VER-70468 Documentation Creating a load balance group for a subcluster with a ROUNDROBIN load balance policy only has an effect if you also set the global load balancing policy to ROUNDROBIN. This is the case even though the LOAD_BALANCE_GROUPS system table shows the group's policy as ROUNDROBIN.
VER-69803 Hadoop The infer_external_table_ddl meta-function will hang if you do not set authentication information in the current session when analyzing a file on GCP.
VER-69797 ComplexTypes When referencing elements from an array, Vertica cannot cast to other data types without an explicit reference to the original data type.
VER-69442 Client Drivers - VSQL, Supported Platforms On RHEL 8.0, VSQL has an additional dependency on the libnsl library. Attempting to use VSQL in Vertica 9.3SP1 or 10.0 without first installing libnsl fails with the following errors:

Could not connect to database (EOF received)
/opt/vertica/bin/vsql: error while loading shared libraries: libnsl.so.1: cannot open shared object file: No such file or directory
VER-67228 AMI, License An Eon Mode database created in one hourly listing can only be revived into another hourly listing with the same operating system. For example, an Eon Mode database created in the Red Hat hourly listing cannot be shut down and then revived into a cluster created using the Amazon Linux listing.
VER-64352 SDK Under special circumstances, a sequence of statements to create and drop Python libraries fails. For example, the following session statements attempt to create Python libraries lib1 and lib2:

CREATE LIBRARY lib1 AS :lib1_path DEPENDS :numpy_path LANGUAGE 'Python';
DROP LIBRARY lib1;
CREATE LIBRARY lib2 AS :lib2_path DEPENDS :sklearn_path LANGUAGE 'Python';

Here, lib1 is a Python library that imports module numpy, while lib2 imports module sklearn, which relies on numpy. After lib1 is dropped, the CREATE LIBRARY statement for lib2 fails.
VER-63720 Backup/DR vbr refuses to perform a full restore if the number of nodes participating in the restore doesn't match the number of mapped nodes in the configuration file.
VER-62983 Hadoop When HCatalog connector schemas are created with custom_partitions enabled, poor performance has been observed when there are many (500+) partitions. By default, custom_partitions are disabled on HCatalog connector schemas.
VER-61420 Data Removal - Delete, Purge, Partitioning Partition operations, such as move_partitions_to_table, must split storage containers that have both partitions that match the operation and partitions that do not. Version 9.1 introduced an inefficiency whereby such a split may produce one more storage container than necessary. Any extraneous containers created this way are eventually merged by the Tuple Mover.
VER-61069 Execution Engine In very rare circumstances, if a Vertica process crashes during shutdown, the remaining processes might hang indefinitely.
VER-60409 AP-Advanced, Optimizer APPLY_SVD and APPLY_PCA functions are implemented as User-Defined Transform Functions (UDTFs). A query containing a UDTF that takes many input columns (e.g. more than 1000) and also outputs many columns may fail with error "Request size too big" due to additional memory requirement in parsing.
VER-48041 Admin Tools On some systems, occasionally admintools cannot parse the output it sees while running SSH commands on other hosts in the cluster. The issue is typically transient.
VER-48020 Hadoop Canceling a query that involves loading data from ORC or Parquet files can be slow if there are too many files involved in the load.
VER-41895 Admin Tools On some systems, admintools fails to parse output while running SSH commands on hosts in the cluster.

Legal Notices

Warranty

The only warranties for products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Micro Focus shall not be liable for technical or editorial errors or omissions contained herein.

The information contained herein is subject to change without notice.

Restricted Rights Legend

Confidential computer software. Valid license from Micro Focus required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

Copyright Notice

© Copyright 2006 - 2020 Micro Focus, Inc.

Trademark Notices

Adobe® is a trademark of Adobe Systems Incorporated.

Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.

UNIX® is a registered trademark of The Open Group.