Release Notes

Vertica
Software Version: 9.3.x

 

IMPORTANT: Before Upgrading, Identify and Remove Unsupported Projections

With version 9.2, Vertica has removed support for pre-join and range segmentation projections. If a table's only superprojection is one of these projection types, the projection is regarded as unsafe.

Before upgrading to 9.2 or higher, you are strongly urged to check your database for unsupported projections. If the upgrade encounters these projections, it is liable to fail. You must then revert to the previous installation.

Solution: Run the pre-upgrade script

Vertica provides a pre-upgrade script that examines your current database and writes its analysis and recommendations to standard output. The script identifies and lists any unsupported projections. If the script finds projection buddies with different SELECT and ORDER BY clauses, it generates a deploy script. Run the deploy script to remedy the projections so they comply with system K-safety.

https://www.vertica.com/pre-upgrade-script/

For more information, see Fixing Unsafe Buddy Projections in the Vertica documentation.

Updated: 10/31/2019

About Vertica Release Notes

What's New in Vertica 9.3

What's Deprecated in Vertica 9.3

Vertica 9.3.0-1: Resolved Issues

Vertica 9.3: Resolved Issues

Vertica 9.3: Known Issues

About Vertica Release Notes

The Release Notes contain the latest information on new features, changes, fixes, and known issues in Vertica 9.3.x.

They also contain information about issues resolved in earlier hotfix releases.

Downloading Major and Minor Releases, and Service Packs

The Premium Edition of Vertica is available for download at https://support.microfocus.com/downloads/swgrp.html.

The Community Edition of Vertica is available for download at https://www.vertica.com/download/vertica/community-edition.

The documentation is available at https://www.vertica.com/docs/9.3.x/HTML/index.htm.

Downloading Hotfixes

Hotfixes are available to Premium Edition customers only. Each software package on the https://support.microfocus.com/downloads/swgrp.html site is labeled with its latest hotfix version.

What's New in Vertica 9.3

Take a look at the Vertica 9.3 New Features Guide for a complete list of additions and changes introduced in this release.

Vertica on the Cloud

New Support for AWS Instance Types

Vertica has added two Amazon Web Services (AWS) instance types to the list of supported types available in MC:

Optimization Type   Instance Type   Supports EBS Storage   Supports Ephemeral Storage
Computing           c5.xlarge       Yes                    No
Computing           c5.large        Yes                    No

For a list of all supported instance types, see Supported Instance Types.

Constraints

Exported DDL of Table Constraints

Previously, Vertica meta-functions that exported DDL, such as EXPORT_TABLES, exported all table constraints as ALTER statements, whether they were part of the original CREATE statement, or added later with ALTER TABLE. This was problematic for global temporary tables, which cannot be modified with new constraints other than foreign keys. Thus, the DDL that was exported for a temporary table with constraints could not be used to reproduce that table. This issue has been resolved: Vertica now exports table constraints (except foreign keys) as part of the table's CREATE statement.
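As an illustrative sketch of the new behavior (the table and constraint names below are hypothetical), exporting a temporary table's DDL now yields non-foreign-key constraints inline:

```sql
-- Hypothetical example: export DDL for one table to standard output
SELECT EXPORT_TABLES('', 'public.session_scratch');

-- The exported DDL now includes constraints (except foreign keys) inline:
-- CREATE GLOBAL TEMPORARY TABLE public.session_scratch (
--     id INT NOT NULL,
--     CONSTRAINT session_scratch_pk PRIMARY KEY (id)
-- );
```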

Supported Data Types

External Tables and Structs

Columns in Parquet and ORC files can contain complex data types. One complex type is the struct, which stores typed property-value pairs. Vertica previously supported reading structs only as expanded columns. Now you can also preserve the original structure by reading a struct as a single column. For more information, see Reading Structs as Inline Types.

Eon Mode

Improved Subcluster Feature

Subclusters help you isolate workloads to a smaller group of nodes in your database cluster. Queries run only on nodes within the subcluster that contains the initiator node. In previous versions of Vertica, you defined subclusters using the fault groups feature. Starting in 9.3.0, subclusters are a standalone feature, independent of fault groups.

All nodes in the database must belong to a subcluster. When you upgrade an Eon Mode database from a previous version to 9.3.0, Vertica converts any existing fault groups to subclusters. If there are nodes in the database that are not part of a fault group, Vertica creates a default subcluster and adds these nodes to it. When you create a new Vertica database, Vertica creates a default subcluster and adds the initial group of nodes to it. When adding a node to the database, Vertica adds the node to a default subcluster unless you specify a subcluster.

See Subclusters for more information.

Subcluster Conversion During Eon Mode Database Upgrades

When you upgrade an Eon Mode database to version 9.3.0, Vertica converts all of the fault groups in the database to subclusters. Nodes in those fault groups are automatically assigned to the converted subclusters. Any nodes that are not part of a fault group are assigned to a default subcluster that Vertica creates.

Primary and Secondary Subclusters

Subclusters come in two types: primary and secondary.

Nodes in Eon Mode databases are also either primary or secondary, based on the type of subcluster that contains them. See Subcluster Types and Elastic Scaling for more information on primary and secondary nodes and subclusters.

Connection Load Balancing Policy Changes

Connection load balancing is now aware of subclusters. You can define connection load balancing groups based on subclusters. See About Connection Load Balancing Policies for more information.
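For example, a load balancing group can be built from a subcluster roughly as follows (the group, rule, and subcluster names and the address filter are hypothetical; see About Connection Load Balancing Policies for the exact syntax):

```sql
-- Hypothetical sketch: route client connections to one subcluster
CREATE LOAD BALANCE GROUP analytics_group WITH SUBCLUSTER analytics_subcluster
    FILTER '0.0.0.0/0';
CREATE ROUTING RULE analytics_rule ROUTE '0.0.0.0/0' TO analytics_group;
```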

When Vertica upgrades an Eon Mode database to version 9.3.0 or later, it does not convert load balancing groups that were based on fault groups into groups based on the converted subclusters. You must redefine these load balancing groups yourself, based on the newly created subclusters.

Changes to ADMIN_TOOLS_EXE for Subcluster

The ADMIN_TOOLS_EXE command line interface has new tools to manipulate subclusters.

In addition, the add_node tool has a new --subcluster argument that lets you select the subcluster that Vertica adds the new node or nodes to.

For more information on the option, see Writing Administration Tools Scripts.
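A sketch of adding a node to a named subcluster from the command line (the host list, database, and subcluster names are hypothetical, and the exact flag spellings are documented in Writing Administration Tools Scripts):

```shell
# Hypothetical invocation of the add_node tool with the new --subcluster argument
admintools -t add_node -s 10.0.0.5 -d VMart --subcluster analytics_subcluster
```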

Changes to System Tables for Subclusters

There are several new system tables for subclusters, as well as changes to existing tables.

Depot Warming Can be Canceled or Performed in the Background

Before a newly-added node begins processing queries, it warms its depot by fetching data into it based on what other nodes in the subcluster have in their depots. You can now choose to cancel depot warming entirely, or have the node process queries while it continues to warm its depot. See Canceling or Backgrounding Depot Warming for more information.

Changes to Depot Monitoring Tables

The PLAN_ID columns in the DEPOT_FETCH_QUEUE and DEPOT_FETCHES system tables have been replaced with the TRANSACTION_ID column. This change makes it easier to determine the transaction that caused a node to fetch a file.

Query-Level Depot Fetching

Queries in Eon Mode now support the /*+DEPOT_FETCH*/ hint, which specifies whether to fetch data from communal storage when the depot does not have the queried data.
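For example (the hint argument and table name below are hypothetical; see the hint documentation for the supported values):

```sql
-- Hypothetical sketch: run a query without fetching missing data into the depot
SELECT /*+DEPOT_FETCH(NONE)*/ COUNT(*)
FROM sales
WHERE sale_date > '2019-01-01';
```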

Depot Maximum Size Limit

Vertica lets you set a maximum size for the depot. By default, this size is 60% of the filesystem that contains the depot. Vertica also uses that filesystem for other purposes, such as temporary storage while loading data. To make sure there is enough space for these other needs, Vertica now limits the depot to 80% of the filesystem. If you attempt to allocate more than 80% of the filesystem to the depot, Vertica returns an error.
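A sketch of the new limit in practice (the node name is hypothetical, and this assumes the ALTER_LOCATION_SIZE meta-function is used to resize the depot):

```sql
-- Resize the depot as a percentage of its filesystem (hypothetical node name)
SELECT ALTER_LOCATION_SIZE('depot', 'v_vmart_node0001', '70%');  -- accepted: under the 80% cap
SELECT ALTER_LOCATION_SIZE('depot', 'v_vmart_node0001', '90%');  -- rejected: exceeds the 80% cap
```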

New S3EXPORT Parameters

Vertica function S3EXPORT now supports two new parameters, enclosed_by and escape_as, which enable exporting files in CSV format.

Clearing the Fetch Queue for a Specific Transaction

The CLEAR_FETCH_QUEUE function now accepts an optional transaction ID parameter. Supplying this parameter limits the clearing of the fetch queue to just those entries for the transaction.
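For example (the transaction ID below is hypothetical):

```sql
-- Clear queued fetches for a single transaction
SELECT CLEAR_FETCH_QUEUE(45035996273705983);

-- With no argument, the function clears the entire fetch queue
SELECT CLEAR_FETCH_QUEUE();
```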

Kafka

Vertica version 9.3.0 has been tested with multiple versions of Apache Kafka.

Vertica may work with other versions of Kafka. See Vertica Integration for Apache Kafka for more information.

Vertica now uses version 0.11.6 of the rdkafka library to communicate with Kafka. This change could affect you if you directly set Kafka library options. See Directly Setting Kafka Library Options for more information.

Parquet Export

Improved Stability

Memory allocation for EXPORT TO PARQUET is now part of the Vertica resource pool instead of being separately managed, reducing memory-allocation errors from large exports.

Projections

Better Correlation Between Table and Projection Names

When you rename a table with ALTER TABLE or copy an existing one with CREATE TABLE LIKE…INCLUDING PROJECTIONS, Vertica propagates the new table name to its projections. For details, see Projection Naming.
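A minimal sketch of the new behavior (the table and projection names are hypothetical; auto-projection naming is described in Projection Naming):

```sql
CREATE TABLE orders (id INT, amount FLOAT);
-- Auto-projections are named after the table, e.g. orders_b0 and orders_b1.

ALTER TABLE orders RENAME TO orders_2019;
-- Projections prefixed with the old table name are renamed accordingly,
-- e.g. orders_b0 becomes orders_2019_b0.
```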

Support for UPDATE and DELETE on Live Aggregate Projections

You can now run DML operations on tables with live aggregate projections. For more details, see Live Aggregate Projections.

Spread

Vertica now uses Spread 5.

Supported Platforms

Pure Storage FlashBlade

Vertica now supports Pure Storage FlashBlade storage for Eon Mode on-premises.

 

What's Deprecated in Vertica 9.3

This section describes the two phases Vertica follows to retire functionality: deprecation and removal.

Deprecated

The following Vertica functionality was deprecated and will be retired in future versions:

Release   Functionality            Notes
9.3       7.2_upgrade vbr task     This task remains available in earlier versions.
9.3       FIPS                     FIPS is temporarily unsupported due to an incompatibility with OpenSSL 1.1.x. FIPS is still available in Vertica 9.2.x and will be reinstated in a future release.

Removed

The following functionality is no longer accessible as of the releases noted:

Release   Functionality
9.3       EXECUTION_ENGINE_PROFILES counters: file handles and memory allocated
9.3       Configuration parameter ReuseDataConnections

For more information, see Deprecated and Retired Functionality in the Vertica documentation.

Vertica 9.3.0-1: Resolved Issues

Release Date: 10/31/2019

This hotfix addresses the issues below.

Issue

Component

Description

VER-69388 SAL Queries could fail when Vertica 9.2 tried to read a ROS file format from Vertica 6.0 or earlier. Vertica now properly handles files created in this format.

Vertica 9.3: Resolved Issues

Release Date: 10/14/2019

This release addresses the issues below.

Issue

Component

Description

VER-67087

Admin Tools

Sometimes during database revive, admintools treated S3 and HDFS user storage locations as local filesystem paths, which led to errors during revive. This issue has been resolved.

VER-68531

Admin Tools

Previously, environment variables used by admintools during SSH operations were set incorrectly on remote hosts. This issue has been fixed.

VER-64171

Backup/DR

If a hardlink failed during a hardlink backup, vbr switched to copying data instead of failing the backup and reporting an error. This issue has been resolved.

VER-66956

Backup/DR

Previously, dropping the user that owned an object involved in an ongoing replicate or restore operation could cause the nodes involved in the operation to fail. This issue has been resolved.

VER-62334

Catalog Engine

Previously, if a DROP statement on an object failed and was rolled back, Vertica would generate a NOTICE for each dependent object. This was problematic in cases where the DROP operation had a large number of dependencies. This issue has been resolved: Vertica now generates up to 10 messages for dependent objects, and then displays the total number of dependencies.

VER-67734

Catalog Engine

In Eon mode, queries sometimes failed if they were submitted at the same time as a new node was added to the cluster. This issue has been resolved.

VER-68188

Catalog Engine

Exporting a catalog on a view that references itself could cause Vertica to fail. This issue has been resolved.

VER-68603

Catalog Sync and Revive

Database startup occasionally failed when catalog truncation failed because the disk was full. This issue has been resolved.

VER-68494

Client Drivers - Misc

Some kinds of node failures did not reset the list of available nodes for connection load balancing. These failures now update the available node list.

VER-67342

Data Removal - Delete, Purge, Partitioning Client Drivers - ODBC

Queries use the query epoch (current epoch -1) to determine which storage containers to use. A query epoch is set when the query is launched, and is compared with storage container epochs during the local planning stage. Occasionally, a swap partition operation was committed in the interval between query launch and local planning. In this case, the epoch of the post-commit storage containers was higher than the query epoch, so the query could not use them, and it returned empty results. This problem was resolved by making SWAP_PARTITIONS_BETWEEN_TABLES atomic.
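SWAP_PARTITIONS_BETWEEN_TABLES exchanges a partition range between two tables in a single atomic operation, for example (the table names and partition keys below are hypothetical):

```sql
-- Atomically swap January partitions between a staging table and the target table
SELECT SWAP_PARTITIONS_BETWEEN_TABLES('staging_sales', '2019-01-01', '2019-01-31', 'sales');
```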

VER-66546

Cloud - Amazon

The verticad script restarts the vertica process on a node when the operating system starts. On Amazon Linux, the script sometimes was unable to detect the operating system and returned an error. This issue has been resolved.

VER-53370

DDL - Projection

Previously, projections of a renamed table retained their original names, which were often derived from the table's previous name. This issue has been resolved: now, when you rename a table, all projections whose names are prefixed by the anchor table name are renamed with the new table name.

VER-66793

DDL - Projection

In earlier releases, you could not modify a projection's sort order. ALTER PROJECTION now supports a SORT ORDER BY option that specifies a comma‑delimited list of projection columns. This replaces the projection's previous ORDER BY list. The column list must begin with the first column of the previous ORDER BY list, and all successive columns must be in the same order as in the previous list.
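A sketch of the new option (the projection and column names are hypothetical; see ALTER PROJECTION for the exact syntax):

```sql
-- Original projection sorted by (customer_id); extend the sort order in place.
-- The new list must begin with the first column of the previous ORDER BY list.
ALTER PROJECTION sales_p1 SORT ORDER BY (customer_id, sale_date, amount);
```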

VER-66882

DDL - Table

The database statistics tool SELECT ANALYZE_STATISTICS no longer acquires a GCL-X lock when running against local temp tables.

VER-67105

S3 Data Export

In previous releases, s3export could not export files in CSV format. The function now supports two new parameters, enclosed_by and escape_as, that enable exporting files in CSV format.
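A sketch of a CSV export using the new parameters (the bucket, table, and column names are hypothetical):

```sql
SELECT s3export(id, name USING PARAMETERS
                url='s3://examplebucket/exports/customers.csv',
                delimiter=',',
                enclosed_by='"',
                escape_as='\\')
       OVER(PARTITION BEST)
FROM customers;
```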

VER-68619

Hadoop Data Export

Exported date columns now include the Parquet logical DATE type, enabling other tools to recognize these columns as dates.

VER-66272

Data Removal - Delete, Purge, Partitioning

Queries use the query epoch (current epoch -1) to determine which storage containers to use. A query epoch is set when the query is launched, and is compared with storage container epochs during the local planning stage. Occasionally, a swap partition operation was committed in the interval between query launch and local planning. In this case, the epoch of the post-commit storage containers was higher than the query epoch, so the query could not use them, and it returned empty results. This problem was resolved by making SWAP_PARTITIONS_BETWEEN_TABLES atomic.

VER-65427

Data load / COPY

Vertica could crash if its client disconnected during a COPY LOCAL with REJECTED DATA. This issue has been fixed.

VER-65659

Data load / COPY

Occasionally, a copy or external table query could crash a node. This has been fixed.

VER-68033

Data load / COPY

A bug in the Apache Parquet C++ library sometimes caused Vertica to fail when reading Parquet files with large Varchar statistics. This issue has been fixed.

VER-53981

Database Designer Core

In some cases, DESIGNER_DESIGN_PROJECTION_ENCODINGS mistakenly removed comments from the target projections. This issue has been resolved.

VER-62628

Execution Engine

If a subquery that is used as an expression returns more than one row, Vertica returns an error. In past releases, Vertica used this error message:

ERROR 4840: Subquery used as an expression returned more than one row

In cases where the query contained multiple subqueries, this message did not help users identify the source of the problem. With this release, the error message also provides the Join label and localplan_id. For example:

ERROR 4840: Subquery used as an expression returned more than one row
DETAIL: Error occurred in Join [(public.t1 x public.t2) using t1_super and subquery (PATH ID: 1)] with localplan_id=[4]

VER-66224

Execution Engine

On rare occasions, a query that exceeded its runtime cap automatically restarted instead of reporting the timeout error. This issue has been fixed.

VER-67069

Execution Engine

Some queries with complex predicates ignored cancel attempts, manual or via the runtime cap, and continued to run for a long time. Cancel attempts themselves also caused the query to run longer than it would have otherwise. This issue has been fixed.

VER-67102

Execution Engine

Users were unable to cancel meta-function RUN_INDEX_TOOL. This problem has been resolved.

VER-69066

Execution Engine

In some cases, queries failed to sort UUIDs correctly if the ORDER BY clause did not specify the UUID column first. This problem has been resolved.

VER-64680

Kafka Integration

Previously, the version of the kafkacat utility distributed with Vertica had an issue that prevented it from working when TLS/SSL encryption was enabled. This issue has been corrected, and the version of kafkacat bundled with Vertica can now make TLS/SSL encrypted connections.

VER-68192

License

Upgrading an 8.1.1-x database with an AutoPass license installed to 9.0 or later could cause license-tampering errors at startup. This problem has been fixed.

VER-67573

Nimbus

In the past, DDL transactions remained open until all pending file deletions were complete. In Eon mode, this dependency could cause significant delays. This issue has been resolved: now, DDL transactions can complete while file deletions continue to execute in the background.

VER-26260

Optimizer

Vertica can now optimize queries on system tables where the queried columns are guaranteed to contain unique values. In this case, the optimizer prunes away unnecessary joins on columns that are not queried.

VER-66423

Optimizer

The optimizer is now better able to derive transitive selection predicates from subqueries.

VER-66902

Optimizer

EXPORT TO VERTICA returned an error if the table to export was a flattened table that already existed in the source and target databases. This issue has been resolved.

VER-66933

Optimizer

Previously, export operations exported all table constraints as ALTER statements, whether they were part of the original CREATE statement, or added later with ALTER TABLE. This was problematic for global temporary tables, which cannot be modified with any constraints other than foreign keys; attempts to add constraints on a temporary table with ALTER TABLE return with an error. Thus, the DDL that was exported for a temporary table with constraints could not be used to reproduce that table. This problem has been resolved: the DDL for constraints (except foreign keys) is now exported as part of the table's CREATE statement.

VER-66968

Optimizer

Queries that perform an inner join and group the results now return consistent results.

VER-67138

Optimizer

When flattening subqueries, Vertica could sometimes move subqueries to the ON clause which is not supported. This issue has been resolved.

VER-67443

Optimizer

ALTER TABLE...ALTER CONSTRAINT returned an error when a node was down. This issue has been resolved: you can now enable or disable table constraints while nodes are down.

VER-67740

Optimizer

Vertica sometimes crashed when certain combinations of analytic functions were applied to the output of a merge join. This issue has been fixed.

VER-67908

Optimizer

Prepared statements do not support WITH clause materialization. Previously, Vertica threw an error when it tried to materialize a WITH clause for prepared statement queries. Now, Vertica throws a warning and processes the WITH clause without materializing it.

VER-68306

Optimizer

Queries with multiple analytic functions over complex expression arguments and different PARTITION BY/ORDER BY column sets sometimes produced incorrect and inconsistent results between Enterprise Mode and Eon Mode. This issue has been fixed.

VER-68379

Optimizer

Calls to REFRESH_COLUMNS can now be monitored in the new "dc_refresh_columns" table, which logs, for every call to REFRESH_COLUMNS, the time of the call, the name of the table, the refreshed columns, the mode used, the minimum and maximum key values, and the epoch.

VER-68594

Optimizer

Queries sorting on subquery outputs sometimes produced inconsistent results. This issue has been fixed.

VER-68828

Optimizer

When a session's transaction isolation mode was set to serializable, MERGE statements sometimes returned with the error message 'Can't run historical queries at epochs prior to the Ancient History Mark'. This issue has been resolved.

VER-49136

Optimizer Optimizer - Projection Chooser

TRUNCATE TABLE caused all existing table- and partition-level statistics to be dropped. This issue has been resolved.

VER-68826

Optimizer - Statistics and Histogram

If you partitioned a table by date and used EXPORT_STATISTICS_PARTITION to export results to a file, the function wrote empty content to the target file. This issue has been resolved.

VER-67647

Build and Release Performance tests

The ST_IsValid() geospatial function, when used with the geography data type, has seen a slight performance degradation of around 10%.

VER-66648

QA

When a query returned many rows with wide columns, Vertica threw the following error message: "ERROR 8617: Request size too big." This message incorrectly suggested that query output consumed excessive memory. This issue has been fixed.

VER-64783

Recovery

When applying partition events during recovery, Vertica determined that recovery was complete only after all table projections were recovered. This rule did not differentiate between regular and live aggregate projections of the same table, which typically recovered in different stages. If recovery was interrupted after all regular projections of a table were recovered but before recovery of its live aggregate projections was complete, Vertica returned an error. This problem has been resolved: Vertica now determines that recovery is complete when all regular projections are recovered, and disregards the recovery status of live aggregate projections.

VER-68504

Execution Engine S3

If you refreshed a large projection in Eon mode and the refresh operation used temp space on S3, the refresh operation occasionally caused the node to crash. This issue has been resolved.

VER-60036

SDK

All C++ UDxs now require the C++11 standard. Compile them by setting the -std=c++11 flag.
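A typical compile line for a C++ UDx might look like the following (the source and output file names are hypothetical, and the SDK path assumes a default Vertica installation):

```shell
g++ -std=c++11 -fPIC -shared \
    -I /opt/vertica/sdk/include \
    -o MyUdx.so MyUdx.cpp /opt/vertica/sdk/include/Vertica.cpp
```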

VER-61780

Scrutinize

Scrutinize previously generated a UnicodeEncodeError if the system locale was set to a language with non-ASCII characters. This issue has been fixed.

VER-66988

Scrutinize

When running scrutinize, the database password would be written to scrutinize_collection.log. This has been fixed: the "__db_password__" entry has been removed from the log file.

VER-62276

Security

Changes to the SSLCertificate, SSLPrivateKey, and SSLCA parameters take effect for all new connections and no longer require a restart.

VER-65168

Security

Before release 9.1, the default Linux file system permissions on scripts generated by the Database Designer were 666 (rw-rw-rw-). Beginning with release 9.1, default permissions changed to 600 (rw-------). With this release, default permissions have reverted to 666 (rw-rw-rw-).

VER-65890

Security

Two system tables have been added to monitor inherited privileges on tables and views: inheriting_objects shows which catalog objects have inheritance enabled, and inherited_privileges shows what privileges users and roles inherit on those objects.

VER-68616

Security

When the only privileges on a view came via ownership and schema-inherited privileges instead of GRANT statements, queries on the view by non-owners bypassed the privilege check for the view owner on the anchor relation(s). Queries by non-owners with inherited privileges on a view now correctly ensure that the view owner has SELECT WITH GRANT OPTION on the anchor relation(s).

VER-67658

Spread

In unstable networks, some UDP-based Vertica control messages could be lost. This could result in hanging sessions that could not be cancelled. This issue has been fixed.

VER-47639

Supported Platforms

Vertica now supports the XFS file system.

VER-67234

Tuple Mover

The algorithm for prioritizing mergeout requests sometimes overlooked slow-loading jobs, especially when these competed with faster jobs that loaded directly into ROS. This could cause mergeout requests to queue indefinitely, leading to ROS pushback. This problem was resolved by changing the algorithm for prioritizing mergeout requests.

VER-67275

Tuple Mover

Previously, the mergeout thread dedicated to processing active partition jobs ignored eligible jobs in the mergeout queue, if a non-eligible job was at the top of the queue. This issue has been resolved: now the thread scans the entire queue for eligible mergeout jobs.

VER-68383

Tuple Mover

Mergeout did not execute purge requests on storage containers for partitioned tables if the requests had invalid partition keys. At the same time, the Tuple Mover generated and queued purge requests without validating their partition keys. As a result, mergeout was liable to generate repeated purge requests that it could not execute, which led to rapid growth of the Vertica log. This issue has been resolved.

VER-65744

UDX

Flex table views now properly show UTF-8 encoded multi-byte characters.

VER-62048

UI - Management Console

Certain design steps in the Events history tab of the MC Design page were appearing in the wrong order and with incorrect time stamps. This issue has been fixed.

VER-65561

UI - Management Console

Under some conditions in MC, a password could appear in the scrutinize collection log. This issue has been fixed.

VER-67903

UI - Management Console

MC was not correctly handling design tables when they contained Japanese characters. This issue has been resolved.

VER-68357

UI - Management Console

When clock skew was detected, MC incorrectly updated the alert during subsequent checks with the timestamp of the last check, instead of the timestamp when the clock skew was first detected. This issue has been fixed, and the status of the alert (resolved/unresolved) has been added to the display.

VER-68448

UI - Management Console

MC catches Vertica's SNMP traps and generates an alert for each one. However, MC was not generating an alert for the Vertica SNMP trap CRC Mismatch. This issue has been fixed.

Vertica 9.3: Known Issues

Updated: 10/14/2019

Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.

Known Issues

Issue

Component

Description

VER-67228 AMI, License An Eon Mode database created in one hourly listing can only be revived into another hourly listing with the same operating system. For example, an Eon Mode database created in the Red Hat hourly listing cannot be shut down and then revived into a cluster created using the Amazon Linux listing.
VER-64997 Backup/DR, Security A vbr restore operation can fail if your database is configured for SSL and has a server.key and a server.crt file present in the catalog directory.
VER-64916 Kafka Integration When Vertica exports data collector information to Kafka via notifier, the serialization logic does not properly handle Vertica internal data type translation. When importing or copying that data back into the Management Console, Vertica uses the raw data in Kafka and inserts garbled data into the extended monitoring database.
VER-64352 SDK Under special circumstances, a sequence of statements to create and drop Python libraries fails. For example, the following session statements attempt to create Python libraries lib1 and lib2:

CREATE LIBRARY lib1 AS :lib1_path DEPENDS :numpy_path LANGUAGE 'Python';
DROP LIBRARY lib1;
CREATE LIBRARY lib2 AS :lib2_path DEPENDS :sklearn_path LANGUAGE 'Python';

Here, lib1 is a Python library that imports module numpy, while lib2 imports module sklearn, which relies on numpy. After lib1 is dropped, the CREATE LIBRARY statement for lib2 fails.
VER-63720 Recovery When a vbr configuration file specifies fewer nodes than the cluster contains, the catalog of a restored library is not installed on the nodes omitted from the configuration file.
VER-62983 Hadoop When hcatalogconnector schemas are created with custom_partitions enabled, poor performance has been observed when there are many (500+) partitions. By default, custom_partitions are disabled on hcatalogconnector schemas.
VER-62061 Catalog Sync and Revive If you revive a database on a one-node cluster, Vertica does not warn users against reviving the same database elsewhere. This can result in two instances of the same database running concurrently on different hosts, which can corrupt the database catalog.
VER-61584 Nimbus, Subscriptions This issue occurs only while a node is shutting down or in an unsafe state. No workaround is available.
VER-61420 Data Removal - Delete, Purge, Partitioning Partition operations, such as move_partitions_to_table, must split storage containers that have both partitions that match the operations and partitions that do not. Version 9.1 introduced an inefficiency whereby such a split may separate a storage container into one more storage container than necessary. Note that any extraneous containers created this way will eventually be merged by the TM.
VER-61069 Execution Engine In very rare circumstances, if a vertica process crashes during shutdown, the remaining processes might hang indefinitely.
VER-60409 AP-Advanced, Optimizer APPLY_SVD and APPLY_PCA functions are implemented as User-Defined Transform Functions (UDTFs). A query containing a UDTF that takes many input columns (e.g. more than 1000) and also outputs many columns may fail with error "Request size too big" due to additional memory requirement in parsing.
VER-58168 Recovery A transaction that started before a node began recovery is referred to as a dirty transaction in the context of recovery. A recovering node must wait for or cancel such a transaction in order to recover tables modified by the transaction. In some rare instances such transactions may hang and cannot be canceled. Tables locked by such a transaction cannot be recovered, and therefore a recovering node cannot transition to 'UP'. Usually the hung transaction can be stopped by restarting the node on which the transaction was initiated, assuming this is not a critical node. In extreme cases, or when the initiator node is a critical node, restart the cluster.
VER-57126 Data Removal - Delete, Purge, Partitioning Partition operations that use a range, for example, copy_partitions_to_table, must split storage containers that span the range in order to complete. For tables partitioned using a GROUP BY expression such split plans can require a relatively large amount of memory, especially when the table has a large number of columns. In some cases the partition operation may error with "ERROR 3587: Insufficient resources to execute plan on pool <poolname>".
VER-48020 Hadoop Canceling a query that involves loading data from ORC or Parquet files can be slow if there are too many files involved in the load.

Legal Notices

Warranty

The only warranties for products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Micro Focus shall not be liable for technical or editorial errors or omissions contained herein.

The information contained herein is subject to change without notice.

Restricted Rights Legend

Confidential computer software. Valid license from Micro Focus required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

Copyright Notice

© Copyright 2006 - 2019 Micro Focus, Inc.

Trademark Notices

Adobe® is a trademark of Adobe Systems Incorporated.

Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.

UNIX® is a registered trademark of The Open Group.