9.3.x Release Notes

Vertica
Software Version: 9.3.x

 

IMPORTANT: Before Upgrading: Identify and Remove Unsupported Projections

With version 9.2, Vertica has removed support for pre-join and range segmentation projections. If a table's only superprojection is one of these projection types, the projection is regarded as unsafe.

Before upgrading to 9.2 or higher, you are strongly urged to check your database for unsupported projections. If the upgrade encounters these projections, it is liable to fail. You must then revert to the previous installation.

Solution: Run the pre-upgrade script

Vertica has provided a pre-upgrade script that examines your current database and sends its analysis and recommendations to standard output. The script identifies and lists any unsupported projections. If the script finds projection buddies with different SELECT and ORDER BY clauses, it generates a deploy script. Run the deploy script to remedy those projections so they comply with system K-safety.

https://www.vertica.com/pre-upgrade-script/

For more information, see Fixing Unsafe Buddy Projections in the Vertica documentation.

Updated: 4/2/2020

About Vertica Release Notes

What's New in Vertica 9.3.1

What's Deprecated in Vertica 9.3.1

Vertica 9.3.1-5: Resolved Issues

Vertica 9.3.1-4: Resolved Issues

Vertica 9.3.1-3: Resolved Issues

Vertica 9.3.1-2: Resolved Issues

Vertica 9.3.1-1: Resolved Issues

Vertica 9.3.1: Resolved Issues

Vertica 9.3.1: Known Issues

What's New in Vertica 9.3

What's Deprecated in Vertica 9.3

Vertica 9.3.0-2: Resolved Issues

Vertica 9.3.0-1: Resolved Issues

Vertica 9.3: Resolved Issues

Vertica 9.3: Known Issues

About Vertica Release Notes

The Release Notes contain the latest information on new features, changes, fixes, and known issues in Vertica 9.3.x.

They also contain information about issues resolved in each 9.3.x hotfix.

Downloading Major and Minor Releases, and Service Packs

The Premium Edition of Vertica is available for download at https://support.microfocus.com/downloads/swgrp.html.

The Community Edition of Vertica is available for download at https://www.vertica.com/download/vertica/community-edition.

The documentation is available at https://www.vertica.com/docs/9.3.x/HTML/index.htm.

Downloading Hotfixes

Hotfixes are available to Premium Edition customers only. Each software package on the https://support.microfocus.com/downloads/swgrp.html site is labeled with its latest hotfix version.

What's New in Vertica 9.3.1

Take a look at the Vertica 9.3.1 New Features Guide for a complete list of additions and changes introduced in this release.

Backup, Restore, Recovery, and Replication

Support for Eon Mode On-Premise

An Eon Mode database can run entirely in the cloud (on AWS) or on-premise. You can now back up an on-premise database to AWS or to a Pure Storage FlashBlade appliance. You can perform full backups, full restores, and object restores from full backups.

To perform a cross-endpoint backup (or restore), you must set some additional environment variables. The vbr configuration file does not change. For more information, see Cross-Endpoint Backups in Eon Mode.

Configuration

Enforcement of Valid Parameter Input

Beginning with this release, Vertica will be more rigorous in how it validates user input for configuration parameters.

For example, in past releases you could set a configuration parameter to a value with invalid trailing data. If you set parameter MaxClientSessions to 50.83 or 42plus, in both cases Vertica stripped off the invalid data.

Now, Vertica validates user input for MaxClientSessions, checking whether the value is a valid unsigned 32-bit integer. If it is not, Vertica returns an error.
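For example, using the SET_CONFIG_PARAMETER metafunction (a sketch; the invalid value shown is illustrative):

SELECT SET_CONFIG_PARAMETER('MaxClientSessions', 50);      -- valid unsigned 32-bit integer, succeeds
SELECT SET_CONFIG_PARAMETER('MaxClientSessions', '50.83'); -- invalid trailing data, now returns an error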

Flex Tables

Avro Parser Support for Deflate Compression

The favroparser now supports deflate compression. This new feature allows you to load Avro data files compressed with the deflate codec into a flex table. See Loading Avro Data for more information.
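For example, a minimal sketch (the file path is hypothetical):

CREATE FLEX TABLE avro_data();
COPY avro_data FROM '/home/dbadmin/data/weather.avro' PARSER favroparser();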

Kafka

The Kafka integration in Vertica 9.3.1 has been tested with Kafka versions 2.2.1, 2.1, and 2.0. See Integrating with Apache Kafka for more information.

Machine Learning

Addition of Bisecting K-means

Vertica now provides the ability to cluster data using the bisecting k-means algorithm. See Clustering Data Hierarchically Using bisecting k-means for an introduction with an extended example. The following functions have been added: BISECTING_KMEANS, which trains a bisecting k-means model, and APPLY_BISECTING_KMEANS, which assigns input rows to clusters using a trained model.
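The following sketch shows a typical train-then-apply sequence; the table, columns, and exact signatures are illustrative, so check the function reference before use:

-- Train a bisecting k-means model with 5 clusters
SELECT BISECTING_KMEANS('bkm_model', 'my_table', 'x1, x2, x3', 5);

-- Assign each row to a cluster using the trained model
SELECT id, APPLY_BISECTING_KMEANS(x1, x2, x3 USING PARAMETERS model_name='bkm_model') AS cluster_id
FROM my_table;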

Parquet Export

EXPORT TO PARQUET Logs Output Events in UDX_EVENTS

EXPORT TO PARQUET previously logged information about the files it exports in the Vertica log. Now it additionally logs information in the UDX_EVENTS system table, allowing you to see information from all participating nodes in one place. See Monitoring Exports for more information about how to use this table.

Export to Google Cloud Storage Supported

You can use EXPORT TO PARQUET to export data to GCS. The requirements are similar to those for AWS S3; you must specify an authentication token in a configuration parameter, and data is exported directly to the target directory instead of being written to a temporary location first. For more information, see Exporting to S3 and GCS.
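A minimal sketch, assuming GCSAuth is the authentication configuration parameter and using a hypothetical bucket:

ALTER SESSION SET GCSAuth = '<access-key>:<secret-key>';
EXPORT TO PARQUET (directory = 'gs://mybucket/sales_export') AS SELECT * FROM sales;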

Partitioning

Null Handling Support

Partition clauses now support the function ZEROIFNULL. This function can check a PARTITION BY expression for null values and evaluate them to 0.
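For example, a sketch with a hypothetical table whose partition key column may contain nulls:

CREATE TABLE sales (order_id INT, region_code INT)
  PARTITION BY ZEROIFNULL(region_code);  -- null region codes land in partition 0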

Projections

Operations on Top-K Projections

You can now perform the following operations on anchor tables with Top-K projections:

Refreshing Top-K Projections

You can now use the following metafunctions on anchor tables with Top-K projections:

 

SQL Functions and Statements

S3EXPORT Null Handling

S3EXPORT now supports a new null_as parameter, which specifies how to export null values from the source data. If this parameter is included, S3EXPORT replaces all null values with the specified string.
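A minimal sketch with a hypothetical bucket and table:

SELECT S3EXPORT(* USING PARAMETERS url='s3://mybucket/exports/data.csv', null_as='NULL')
  OVER() FROM my_table;  -- nulls are written as the string 'NULL'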

New Metafunction: GET_PRIVILEGES_DESCRIPTION

Because privileges on database objects can come from several different sources, such as explicit grants, roles, and inheritance, they can be difficult to monitor. The new GET_PRIVILEGES_DESCRIPTION metafunction addresses this by providing a unified view of a user's effective privileges, across all sources, on a specified database object.
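For example (the table name is hypothetical):

SELECT GET_PRIVILEGES_DESCRIPTION('table', 'public.orders');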

Helper Functions for Parquet Data

The GET_METADATA function inspects a Parquet file and returns its metadata, including columns, row groups, and sizes. You can use this information to help you define external tables and to confirm the results of exports from Vertica.

The INFER_EXTERNAL_TABLE_DDL function inspects a Parquet file and returns a starting point for the definition of an external table. This function can be especially helpful for tables with many columns of mostly primitive types. For some types, the function cannot infer the data type and you must edit the output.
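A sketch of both helpers, assuming the two-argument form of INFER_EXTERNAL_TABLE_DDL (path, proposed table name) and hypothetical paths:

SELECT GET_METADATA('/remote/data/sales/month1.parquet');
SELECT INFER_EXTERNAL_TABLE_DDL('/remote/data/sales/*.parquet', 'sales');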

COMMENT ON COLUMN Table Compatibility

You can now comment on table columns with COMMENT ON COLUMN. For more information, see COMMENT ON TABLE COLUMN.
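For example:

COMMENT ON COLUMN store.store_sales_fact.transaction_time IS 'Time of sale, GMT';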

Security and Authentication

SSLCA Now Accepts Multiple Certificate Authorities

You can now trust more than one CA with the SSLCA parameter. For more information, see Security Parameters.

LDAP Dry Run Metafunctions

The LDAP_LINK_DRYRUN family of metafunctions allows you to tweak your LDAP Link settings before syncing with Vertica. Each dry run metafunction takes LDAP Link parameters as arguments and tests a separate part of LDAP Link:

These metafunctions are meant to be used and tested in succession, and their arguments are cumulative. That is, the parameters you use for configuring LDAP_LINK_DRYRUN_CONNECT are used for LDAP_LINK_DRYRUN_SEARCH, and the arguments for those functions are used for LDAP_LINK_DRYRUN_SYNC.

For detailed instructions on using these metafunctions, see Configuring LDAP Link with Dry Runs.
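As a sketch of the first step, testing the connection (the URL, bind DN, and password are placeholders):

SELECT LDAP_LINK_DRYRUN_CONNECT('ldap://example.dc.com', 'CN=alice,OU=QA,DC=dc,DC=com', '<password>');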

New LDAP Link Parameters

New parameters have been added for configuring LDAP Link.

See LDAP Link Parameters for more information.

Statistics

ANALYZE_STATISTICS Support for Global Temporary Tables

You can now call ANALYZE_STATISTICS on global temporary tables. As with local temporary tables, you can only obtain statistics on a global temporary table that is created with the option ON COMMIT PRESERVE ROWS. Otherwise, Vertica deletes table content on committing the current transaction, so no table data is available for analysis.
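For example:

CREATE GLOBAL TEMPORARY TABLE gtemp (a INT, b VARCHAR(20)) ON COMMIT PRESERVE ROWS;
-- ...load data into gtemp...
SELECT ANALYZE_STATISTICS('gtemp');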

Supported Data Types

Arrays and Maps for External Tables Using Parquet Data

When working with Parquet files containing complex types, you can now define tables using arrays and maps of primitive types, allowing you to read a larger range of existing data.

You can query arrays, use them in joins and other operations, and use Array Functions on both array columns and literals. See Reading Arrays and the ARRAY data type for more information.

You can define maps, which allows you to read files containing them, but you cannot query them. See Reading Maps and the MAP data type for more information.

Arrays and maps may contain only primitive types. They cannot contain rows, arrays, or maps.
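A minimal sketch of an external table with an array column (the path and columns are hypothetical):

CREATE EXTERNAL TABLE readings (station VARCHAR(20), temps ARRAY[FLOAT])
  AS COPY FROM '/remote/data/readings/*.parquet' PARQUET;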

System Tables

MERGEOUT_PROFILES

The new MERGEOUT_PROFILES system table returns information about automatic mergeout operations, making them easier to monitor and troubleshoot.

New LOAD_SOURCES Columns

System table LOAD_SOURCES now includes four new columns that return how much time (in microseconds) is consumed by various user-defined load functions:

LDAP Dry Run Table

The LDAP_LINK_DRYRUN_EVENTS system table contains the results from the new LDAP dry run metafunctions.

UDX_EVENTS

The new UDX_EVENTS system table records events logged from user-defined functions, if they logged any. See Logging in Extending Vertica for information about how to log events in UDxs that you write.

Text Search

StringTokenizerDelim Optimizations and Changes

The preconfigured tokenizer StringTokenizerDelim has been optimized. It has also seen a slight change in behavior: an empty OVER() clause returns a single column for the tokenized values. Previously, the function would return an additional column for the original input. To use the old behavior, include a PARTITION BY clause in OVER().

For more information on this change and other preconfigured tokenizers, see Preconfigured Tokenizers.
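A sketch of both behaviors (the table and column in the second statement are hypothetical):

-- New behavior: a single column of tokens
SELECT v_txtindex.StringTokenizerDelim('one,two,three', ',') OVER();

-- Old behavior: map tokens back to the original input
SELECT msg, v_txtindex.StringTokenizerDelim(msg, ',') OVER(PARTITION BY msg) FROM comments;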

admintools

create_db: New Behavior and Options

Previously, create_db would roll back the entire operation on failure, deleting log files and making it difficult to diagnose problems. The following changes have been made to modify this behavior.

What's Deprecated in Vertica 9.3.1

This section describes the two phases Vertica follows to retire Vertica functionality:

Deprecated

The following Vertica functionality was deprecated and will be retired in future versions:

No deprecated functionality

Removed

The following functionality is no longer accessible as of the releases noted:

For more information see Deprecated and Retired Functionality in the Vertica documentation.

Vertica 9.3.1-5: Resolved Issues

Release Date: 4/2/2020

This hotfix addresses the issues below.

Issue  Component  Description

VER-71866 Subscriptions In Eon Mode, when a subscription or node state changes, primary and ETL session nodes now realign to match session participation.

Vertica 9.3.1-4: Resolved Issues

Release Date: 3/13/2020

This hotfix addresses the issues below.

Issue  Component  Description

VER-71622 Backup/DR Vbr no longer includes the dbadmin password in log files when the debug level is set to 3 and the dbadmin password is stored in a password file.
VER-71688 License Vertica no longer fails in rare cases when auditing a flex table created using SET USING.

Vertica 9.3.1-3: Resolved Issues

Release Date: 3/5/2020

This hotfix addresses the issues below.

Issue  Component  Description

VER-71128 Depot The tables DEPOT_FILES, DEPOT_EVICTIONS, and DEPOT_FETCHES now contain the column STORAGE_OID, which contains the storage container OID associated with each file.
VER-71139 Optimizer Moving constraints from an ALTER TABLE statement into a CREATE TABLE statement now moves only the constraints of TEMP tables.
VER-71362 Admin Tools Log rotate now functions properly in databases upgraded from 9.2.x to 9.3.x.
VER-71363 Kafka Integration In rare cases, rdkafka could enter an infinite loop. This issue has been resolved. For more information, refer to https://github.com/edenhill/librdkafka/issues/2108
VER-71488 Supported Platforms The vertica_agent.service can now stop gracefully on Red Hat Enterprise Linux version 7.7.

Vertica 9.3.1-2: Resolved Issues

Release Date: 2/20/2020

This hotfix addresses the issues below.

Issue  Component  Description

VER-71253 Admin Tools The Database Designer no longer produces an error when it targets a schema other than the "public" schema.
VER-71067 Third Party Tools Integration NULL input to VoltageSecureProtect and VoltageSecureAccess now returns a NULL value.

 

Vertica 9.3.1-1: Resolved Issues

Release Date: 2/6/2020

This hotfix addresses the issues below.

Issue  Component  Description

VER-70802 Build and Release The TZ environment variable now uses the latest time zone data from the Internet Assigned Numbers Authority (IANA).
VER-70962 Third Party Tools Integration The Vertica VoltageSecureProtect function now correctly sizes output when specifying custom Voltage number formats.
VER-71003 Catalog Engine In Eon Mode, projection checkpoint epochs on down nodes now become consistent with the current checkpoint epoch when the nodes resume activity.

Vertica 9.3.1: Resolved Issues

Release Date: 1/14/2020

This release addresses the issues below.

Issue  Component  Description

VER-42720 Execution Engine Vertica occasionally generated misleading error messages in certain distributed query executions. This issue has been resolved.
VER-63714 Cloud - Google Previously, exporting Parquet files to Google Cloud Storage caused an error. This issue has been fixed.
VER-66216 Admin Tools Passwords are now validated when creating databases.
VER-66903 Kafka Integration Users can now use kafka_conf to override the Kafka SSL settings, allowing SASL+SSL authentication to work.
VER-67029 Client Drivers - JDBC The maxconnection limitation check no longer fails after running a distributed query.
VER-67201 Client Drivers - ODBC Previously, the names of files used in copy local operations were not correctly represented within the Windows version of the ODBC/OLEDB drivers; as a result, non-ASCII characters could not be used in these filenames. The same problem existed for the names of rejection and exception filenames specified in a copy local operation. This problem has now been corrected, and non-ASCII characters can be used in all three types of files.
VER-67497 Client Drivers - VSQL Previously, the VSQL options -B (connection backup server/port) and -k/-K (Kerberos service and host name) were incompatible, and the latter would be ignored. Now, these options can be set together.
VER-67511 Tuple Mover Configuration parameter MergeOutInterval specifies in seconds (by default 600) how often the Tuple Mover checks the mergeout request queue for pending requests. If the queue is empty, the Tuple Mover processes storage location move requests. In previous releases, Tuple Mover threads that were triggered by DML operations ignored the MergeOutInterval parameter and processed all pending requests, including storage location move requests. This sometimes resulted in frequent checks for storage location move requests, which could adversely affect performance. To resolve this issue, Tuple Mover threads that are spawned by DML operations are now subject to MergeOutInterval.
VER-67701 LocalPlanner Vertica occasionally failed when running COUNT DISTINCT queries with predicates on lead columns in the projection sort order. This issue has been resolved.
VER-67964 Execution Engine In rare circumstances, if a cluster's partitioning was short-lived due to network problems, then the whole cluster may have gone down. This issue has been fixed.
VER-67966 Backup/DR If a user performed a copycluster task and a swap partition task concurrently, the data that participated in the swap partition could end up missing on the target cluster. This issue has been resolved.
VER-67980 Client Drivers - ODBC Error messages containing Japanese column names now display correctly.
VER-68414 SAL Queries could fail when Vertica 9.2 tried to read a ROS file format from Vertica 6.0 or earlier. Vertica now properly handles files created in this format.
VER-68549 UI - Management Console When an MC Admin user chose to bind to LDAP anonymously, the setting did not take effect despite clicking 'Apply' and 'Done'. This problem has now been fixed.
VER-68633 Security To ensure that username case is consistent between SESSIONS and other system tables, the SESSIONS system tables are now populated with the username value from the catalog.
VER-68644 Tuple Mover In previous releases, Vertica considered unsafe projections when it tried to advance the AHM (Ancient History Mark). Now, Vertica ignores unsafe projections when it determines the AHM.
VER-68720 UDX GRANT and REVOKE statements now support ALTER and DROP privileges for user-defined functions.
VER-68726 Kafka Integration Vertica no longer fails when it encounters improper Kafka SSL key/certificate setup error information on RedHat Linux.
VER-68790 Backup/DR If a user deleted an object during a backup on EON mode, the backup could potentially fail. This issue has been resolved.
VER-68853 Subscriptions A failure in one EON Mode subcluster no longer has the ability to affect another subcluster.
VER-68898 Data Removal - Delete, Purge, Partitioning DROP_PARTITIONS requires that the range of partition keys to drop exclusively occupy the same ROS container. If the target ROS container contains partition keys that are outside the specified range of partition keys, the force-split argument must be set to true, otherwise the function returns with an error.

In rare cases, DROP_PARTITIONS executes at the same time as a mergeout operation on the same ROS container. When this happens, the function cannot execute the force-split operation and returns with an error. This error message has been modified to better identify the source of the problem, and also provides a hint to try calling DROP_PARTITIONS again.
VER-69184 Execution Engine In cases where COUNT (DISTINCT) and aggregates such as AVG were involved in a query of a numeric datatype input, the GroupGeneratorHashing Step was causing a memory conflict when the size of the input datatype (numeric) was greater than the size of output datatype (float for average), producing incorrect results. This issue has been fixed.
VER-69217 Data load / COPY Sending excessively long inputs to a NUMERIC column in a COPY statement could cause Vertica to crash. This issue has been fixed.
VER-69229 Execution Engine In some cases, queries failed to sort UUIDs correctly if the ORDER BY clause did not specify the UUID column first. This problem has been resolved.
VER-69292 Execution Engine, UDX, Vertica Log Text Search Previously, the v_txtindex.StringTokenizerDelim UDx made copies of the input for each token, which could lead to memory issues. This function no longer makes copies of the original input and now returns one column containing the tokens.

To produce the old behavior that maps the tokens to the original input, include a "PARTITION BY <input_col>" clause inside the OVER() clause of the UDx call.
VER-69383 Optimizer - Statistics and Histogram ANALYZE_ROW_COUNT can no longer change STATISTICS_TYPE from FULL to ROWCOUNT.
VER-69431 Data Export S3EXPORT now supports a null_as parameter that specifies how to export null values from the source data. If this parameter is included, S3EXPORT replaces all null values with the specified string.
VER-69486 Security Previously, LDAP Link required a global catalog lock during the entire operation. This would sometimes cause the database to fail if the LDAP server took too long to respond. Now, LDAP Link only requires a global catalog lock when it maps LDAP entries to Vertica users and roles.
VER-69489 Data load / COPY COPY statements could automatically generate rejected data file names that were too long for Vertica's file system, causing the COPY to error out. Vertica will now truncate these file names to prevent this from happening.
VER-69535 Data load / COPY A bug in tracking disk usage of rejected data tables would rarely cause Vertica to crash. This issue has been fixed.
VER-69606 Catalog Engine ETL queries no longer fail with the error "Active subscriptions changed during query planning" if you run the query while running REBALANCE_SHARDS on another subcluster.
VER-69702 Communications/messaging Queries performed against a virtual table no longer display duplicate error messages if performed while one or more nodes are down.
VER-70487 Admin Tools Starting a subcluster in an EON Mode database no longer writes duplicate entries to the admintools.log file.

Vertica 9.3.1: Known Issues

Updated: 1/5/2020

Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.

Known Issues

Issue  Component  Description

VER-41895 Admin Tools

On some systems, admintools fails to parse output while running SSH commands on hosts in the cluster.

Workaround: If the admintools operation needs to run on just one node, users can avoid this issue by using SSH to connect to the target node and running the admintools command on that node directly.

VER-48020 Hadoop Canceling a query that involves loading data from ORC or Parquet files can be slow if there are too many files involved in the load.
VER-48041 Admin Tools On some systems, occasionally admintools cannot parse the output it sees while running SSH commands on other hosts in the cluster. The issue is typically transient.
VER-60409 AP-Advanced, Optimizer

APPLY_SVD and APPLY_PCA functions are implemented as User-Defined Transform Functions (UDTFs). A query containing a UDTF that takes many input columns (e.g. more than 1000) and also outputs many columns may fail with error "Request size too big" due to additional memory requirement in parsing.

Workaround: Increase configuration parameter MaxParsedQuerySizeMB or reduce the number of output columns. For the cases of APPLY_SVD and APPLY_PCA, you can limit the number of output columns either by setting the parameter num_components or by setting the parameter cutoff. In practice, cutoff=0.9 is usually enough.

Note that if you increase MaxParsedQuerySizeMB to a larger value, for example 4096, each query you run may use 4 GB of memory during parsing. This means that running multiple queries at the same time could cause out-of-memory (OOM) errors if your total memory is limited. Refer to the Vertica documentation for more information about MaxParsedQuerySizeMB.

VER-61069 Execution Engine

In very rare circumstances, if a Vertica process crashes during shutdown, the remaining processes might hang indefinitely.

Workaround: Halt the remaining processes using admin tools.

VER-61420 Data Removal - Delete, Purge, Partitioning Partition operations, such as move_partitions_to_table, must split storage containers that have both partitions that match the operation and partitions that do not. Version 9.1 introduced an inefficiency whereby such a split may separate a storage container into one more container than necessary. Any extraneous containers created this way are eventually merged by the Tuple Mover.
VER-62061 Catalog Sync and Revive If you revive a database on a one-node cluster, Vertica does not warn users against reviving the same database elsewhere. This can result in two instances of the same database running concurrently on different hosts, which can corrupt the database catalog.
VER-62983 Hadoop When hcatalogconnector schemas are created with custom_partitions enabled, poor performance has been observed when there are many (500+) partitions. By default, custom_partitions are disabled on hcatalogconnector schemas.
VER-64352 SDK

Under special circumstances, a sequence of statements to create and drop Python libraries fails. For example, the following session statements attempt to create Python libraries lib1 and lib2:

CREATE LIBRARY lib1 AS :lib1_path DEPENDS :numpy_path LANGUAGE 'Python';
DROP LIBRARY lib1;
CREATE LIBRARY lib2 AS :lib2_path DEPENDS :sklearn_path LANGUAGE 'Python';

Here, lib1 is a Python library that imports module numpy, while lib2 imports module sklearn, which relies on numpy. After lib1 is dropped, the CREATE LIBRARY statement for lib2 fails.

Workaround: One of the following:

  • In the same session, repeat the CREATE LIBRARY statement for the second library.
  • Start another session and create the second library.
  • Add the following lines of code to the second library's Python script before importing its module:

    import importlib

    importlib.reload(<library-1-module>)

    # e.g. importlib.reload(numpy)

VER-67228 AMI, License An Eon Mode database created in one hourly listing can only be revived into another hourly listing with the same operating system. For example, an Eon Mode database created in the Red Hat hourly listing cannot be shut down and then revived into a cluster created using the Amazon Linux listing.
VER-70139 Metadata Tables Security

The LDAP Link dry run metafunctions do not yet have full support for the LDAPLinkStartTLS and LDAPLinkTLSReqCert parameters.

Workaround: When using the LDAP Link dry run metafunctions, pass "1" and "allow" for the LDAPLinkStartTLS and LDAPLinkTLSReqCert arguments, respectively.

VER-70238 Backup/DR, Security

A vbr restore operation can fail if your database is configured for SSL and has a server.key and a server.crt file present in the catalog directory.

Workaround: Move your SSL authentication related files (server.crt, server.csr and server.key) from the catalog directory before performing the restore.

VER-70468 Nimbus

Creating a load balance group for a subcluster with a ROUNDROBIN load balance policy only has an effect if you set the global load balancing policy to ROUNDROBIN as well. This is the case even though the LOAD_BALANCE_GROUPS system table shows the group's policy as ROUNDROBIN.

Workaround: Set the global load balancing policy using the SET_LOAD_BALANCE_POLICY function:

SELECT set_load_balance_policy('roundrobin');

VER-70549 Spread

When quickly stopping and then starting a different database, the new database may fail to start after attempting to connect to Spread processes associated with the stopped database.

Workaround: After stopping a database, ensure that all old Spread processes have stopped on the affected nodes before starting a different database. This typically takes no more than one minute, but may vary under certain circumstances.

 

What's New in Vertica 9.3

Take a look at the Vertica 9.3 New Features Guide for a complete list of additions and changes introduced in this release.

Vertica on the Cloud

New Support for AWS Instance Types

Vertica has added two Amazon Web Services (AWS) instance types to the list of supported types available in MC:

* Instance type does not support ephemeral storage

For a list of all supported instance types, see Supported Instance Types.

Optimization  Type       Supports EBS Storage  Supports Ephemeral Storage
Computing     c5.xlarge  Yes                   No
Computing     c5.large   Yes                   No

Constraints

Exported DDL of Table Constraints

Previously, Vertica meta-functions that exported DDL, such as EXPORT_TABLES, exported all table constraints as ALTER statements, whether they were part of the original CREATE statement, or added later with ALTER TABLE. This was problematic for global temporary tables, which cannot be modified with new constraints other than foreign keys. Thus, the DDL that was exported for a temporary table with constraints could not be used to reproduce that table. This issue has been resolved: Vertica now exports table constraints (except foreign keys) as part of the table's CREATE statement.

Supported Data Types

External Tables and Structs

Columns in Parquet and ORC files can contain complex data types. One complex data type is the struct, which stores (typed) property-value pairs. Vertica previously supported reading structs only as expanded columns. Now, you can also preserve the original structure by reading a struct as a single column. For more information, see Reading Structs as Inline Types.

Eon Mode

Improved Subcluster Feature

Subclusters help you isolate workloads to a smaller group of nodes in your database cluster. Queries only run on nodes within the subcluster that contains the initiator node. In previous versions of Vertica, you defined subclusters using the fault groups feature. Starting in 9.3.0, subclusters are a standalone feature, independent of fault groups.

All nodes in the database must belong to a subcluster. When you upgrade an Eon Mode database from a previous version to 9.3.0, Vertica converts any existing fault groups to subclusters. If there are nodes in the database that are not part of a fault group, Vertica creates a default subcluster and adds these nodes to it. When you create a new Vertica database, Vertica creates a default subcluster and adds the initial group of nodes to it. When adding a node to the database, Vertica adds the node to a default subcluster unless you specify a subcluster.

See Subclusters for more information.

Subcluster Conversion During Eon Mode Database Upgrades

When you upgrade an Eon Mode database to version 9.3.0, Vertica converts all of the fault groups in the database to subclusters. Nodes that belonged to fault groups are automatically assigned to the converted subclusters. Any nodes that were not part of a fault group are assigned to a default subcluster that Vertica creates.

Primary and Secondary Subclusters

Subclusters come in two types: primary and secondary.

Nodes in Eon Mode databases are also either primary or secondary, based on the type of subcluster that contains them. See Subcluster Types and Elastic Scaling for more information on primary and secondary nodes and subclusters.

Connection Load Balancing Policy Changes

Connection load balancing is now aware of subclusters. You can define connection load balancing groups based on subclusters. See About Connection Load Balancing Policies for more information.

When Vertica upgrades an Eon Mode database to version 9.3.0 or later, it does not convert load balancing groups that were based on fault groups into groups based on the converted subclusters. You must redefine these load balancing groups yourself, based on the newly created subclusters.

Changes to ADMIN_TOOLS_EXE for Subcluster

The ADMIN_TOOLS_EXE command line interface has new tools to manipulate subclusters:

In addition, the add_node tool has a new --subcluster argument that lets you select the subcluster that Vertica adds the new node or nodes to.

For more information on the option, see Writing Administration Tools Scripts.

Changes to System Tables for Subclusters

There are several new system tables, as well as changes to existing tables for subclusters:

Depot Warming Can be Canceled or Performed in the Background

Before a newly-added node begins processing queries, it warms its depot by fetching data into it based on what other nodes in the subcluster have in their depots. You can now choose to cancel depot warming entirely, or have the node process queries while it continues to warm its depot. See Canceling or Backgrounding Depot Warming for more information.
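A sketch, assuming the metafunction names from the linked topic (BACKGROUND_DEPOT_WARMING and CANCEL_DEPOT_WARMING) and a hypothetical node name:

SELECT BACKGROUND_DEPOT_WARMING('v_mydb_node0004');  -- serve queries while warming continues
SELECT CANCEL_DEPOT_WARMING();                       -- cancel warming on the current node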

Changes to Depot Monitoring Tables

The PLAN_ID columns in the DEPOT_FETCH_QUEUE and DEPOT_FETCHES system tables have been replaced with the TRANSACTION_ID column. This change makes it easier to determine the transaction that caused a node to fetch a file.

Query-Level Depot Fetching

Queries in Eon Mode now support the /*+DEPOT_FETCH*/ hint, which specifies whether to fetch data from communal storage when the depot does not contain the queried data.
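For example, a sketch assuming NONE is a valid hint option that disables fetching for the query:

SELECT /*+DEPOT_FETCH(NONE)*/ COUNT(*) FROM sales WHERE sale_date > '2019-01-01';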

Depot Maximum Size Limit

Vertica lets you set a maximum size for the depot. By default, this size is 60% of the space on the filesystem that contains the depot. Vertica also uses that filesystem for other purposes, such as temporary storage while loading data. To make sure there is enough space for these other needs, Vertica now limits the depot to at most 80% of the filesystem space. If you attempt to allocate more than 80% of the filesystem to the depot, Vertica returns an error.

New S3EXPORT Parameters

Vertica function S3EXPORT now supports two new parameters, enclosed_by and escape_as, which enable exporting files in CSV format.

Clearing the Fetch Queue for a Specific Transaction

The CLEAR_FETCH_QUEUE function now accepts an optional transaction ID parameter. Supplying this parameter limits the clearing of the fetch queue to just those entries for the transaction.
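For example (the transaction ID is hypothetical):

SELECT CLEAR_FETCH_QUEUE(45035996273707886);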

Kafka

Vertica version 9.3.0 has been tested with the following versions of Apache Kafka: 

Vertica may work with other versions of Kafka. See Vertica Integration for Apache Kafka for more information.

Vertica now uses version 0.11.6 of the rdkafka library to communicate with Kafka. This change could affect you if you directly set Kafka library options. See Directly Setting Kafka Library Options for more information.

Parquet Export

Improved Stability

Memory allocation for EXPORT TO PARQUET is now part of the Vertica resource pool instead of being separately managed, reducing memory-allocation errors from large exports.

Projections

Better Correlation Between Table and Projection Names

When you rename a table with ALTER TABLE or copy an existing one with CREATE TABLE LIKE…INCLUDING PROJECTIONS, Vertica propagates the new table name to its projections. For details, see Projection Naming.
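For example, a sketch with hypothetical names; auto-generated buddy projections follow the anchor table's name:

ALTER TABLE sales RENAME TO sales_archive;
-- projections such as sales_b0 and sales_b1 are renamed to sales_archive_b0 and sales_archive_b1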

Support for UPDATE and DELETE on Live Aggregate Projections

You can now run DML operations on tables with live aggregate projections. For more details, see Live Aggregate Projections.

Spread

Vertica now uses Spread 5.

Supported Platforms

Pure Storage FlashBlade

Vertica now supports Pure Storage FlashBlade storage for Eon Mode on-premise.

 

What's Deprecated in Vertica 9.3

This section describes the two phases Vertica follows to retire Vertica functionality:

Deprecated

The following Vertica functionality was deprecated and will be retired in future versions:

Release  Functionality         Notes
9.3      7.2_upgrade vbr task  This task remains available in earlier versions.
9.3      FIPS                  FIPS is temporarily unsupported due to incompatibility with OpenSSL 1.1.x. FIPS is still available in Vertica 9.2.x and will be reinstated in a future release.

Removed

The following functionality is no longer accessible as of the releases noted:

Release  Functionality                                                          Notes
9.3      EXECUTION_ENGINE_PROFILES counters: file handles and memory allocated
9.3      Configuration parameter ReuseDataConnections

For more information see Deprecated and Retired Functionality in the Vertica documentation.

Vertica 9.3.0-2: Resolved Issues

Release Date: 12/18/2019

This hotfix addresses the issues below.

Issue  Component  Description

VER-69250 UDX GRANT and REVOKE statements now support ALTER and DROP privileges for user-defined functions.
VER-69666 Backup/DR If a user performed a copycluster task and a swap partition task concurrently, the data that participated in the swap partition could end up missing on the target cluster. This issue has been resolved.
VER-69783 Execution Engine In cases where COUNT (DISTINCT) and aggregates such as AVG were involved in a query of a numeric datatype input, the GroupGeneratorHashing Step was causing a memory conflict when the size of the input datatype (numeric) was greater than the size of output datatype (float for average), producing incorrect results. This issue has been fixed.
VER-69787 Cloud - Google Previously, exporting Parquet files to Google Cloud Storage caused an error. This issue has been fixed.
VER-70085 Backup/DR If a user deleted an object during a backup on Eon mode, the backup could potentially fail. This issue has been resolved.
VER-70144 Communications/messaging Queries performed against a virtual table no longer display duplicate error messages if performed while one or more nodes are down.
VER-70148 Client Drivers - VSQL Previously, the VSQL options -B (connection backup server/port) and -k/-K (Kerberos service and host name) were incompatible, and the latter would be ignored. Now, these options can be set together.
VER-70150 Catalog Engine ETL queries no longer fail with the error "Active subscriptions changed during query planning" if you run the query while running REBALANCE_SHARDS on another subcluster.
VER-70151 Subscriptions A failure in one Eon Mode subcluster no longer has the ability to affect another subcluster.
VER-70235 Data Removal - Delete, Purge, Partitioning

DROP_PARTITIONS requires that the range of partition keys to drop exclusively occupy the same ROS container. If the target ROS container contains partition keys that are outside the specified range of partition keys, the force-split argument must be set to true, otherwise the function returns with an error.

In rare cases, DROP_PARTITIONS executes at the same time as a mergeout operation on the same ROS container. When this happens, the function cannot execute the force-split operation and returns with an error. This error message has been modified to better identify the source of the problem, and also provides a hint to try calling DROP_PARTITIONS again.

VER-70460 AP-Advanced In rare situations, certain User-Defined Aggregate functions would cause a query to return an error "Data Area overflow". This issue has been fixed.
VER-70466 Admin Tools Starting a subcluster in an Eon Mode database no longer writes duplicate entries to the admintools.log file.

Vertica 9.3.0-1: Resolved Issues

Release Date: 10/31/2019

This hotfix addresses the issues below.

Issue  Component  Description

VER-69388 SAL Queries could fail when Vertica 9.2 tried to read a ROS file format from Vertica 6.0 or earlier. Vertica now properly handles files created in this format.

Vertica 9.3: Resolved Issues

Release Date: 10/14/2019

This release addresses the issues below.

Issue  Component  Description

VER-67087

Admin Tools

Sometimes during database revive, admintools treated S3 and HDFS user storage locations as local filesystem paths. This led to errors during revive. This issue has been resolved.

VER-68531

Admin Tools

Previously, environment variables used by admintools during SSH operations were set incorrectly on remote hosts. This issue has been fixed.

VER-64171

Backup/DR

If a hardlink failed during a hardlink backup, vbr switched to copying the data rather than failing the backup with an error. This issue has been resolved.

VER-66956

Backup/DR

Dropping the user that owns an object involved in a replicate or restore operation, while that operation is running, could cause the nodes involved in the operation to fail. This issue has been resolved.

VER-62334

Catalog Engine

Previously, if a DROP statement on an object failed and was rolled back, Vertica would generate a NOTICE for each dependent object. This was problematic in cases where the DROP operation had a large number of dependencies. This issue has been resolved: Vertica now generates up to 10 messages for dependent objects, and then displays the total number of dependencies.

VER-67734

Catalog Engine

In Eon mode, queries sometimes failed if they were submitted at the same time as a new node was added to the cluster. This issue has been resolved.

VER-68188

Catalog Engine

Exporting a catalog on a view that references itself could cause Vertica to fail. This issue has been resolved.

VER-68603

Catalog Sync and Revive

Database startup occasionally failed when catalog truncation failed because the disk was full. This issue has been resolved.

VER-68494

Client Drivers - Misc

Some kinds of node failures did not reset the list of available nodes for connection load balancing. These failures now update the available node list.

VER-67342

Data Removal - Delete, Purge, Partitioning; Client Drivers - ODBC

Queries use the query epoch (current epoch -1) to determine which storage containers to use. A query epoch is set when the query is launched, and is compared with storage container epochs during the local planning stage. Occasionally, a swap partition operation was committed in the interval between query launch and local planning. In this case, the epoch of the post-commit storage containers was higher than the query epoch, so the query could not use them, and it returned empty results. This problem was resolved by making SWAP_PARTITIONS_BETWEEN_TABLES atomic.

VER-66546

Cloud - Amazon

The verticad script restarts the Vertica process on a node when the operating system starts. On Amazon Linux, the script sometimes was unable to detect the operating system and returned an error. This issue has been resolved.

VER-53370

DDL - Projection

Previously, projections of a renamed table retained their original names, which were often derived from the table's previous name. This issue has been resolved: now, when you rename a table, all projections whose names are prefixed by the anchor table name are renamed with the new table name.

VER-66882

DDL - Table

The statistics function ANALYZE_STATISTICS no longer acquires a GCL-X lock when running against local temp tables.

VER-67105

S3 Data Export

In previous releases, S3EXPORT could not export files in CSV format. The function now supports two new parameters, enclosed_by and escape_as, that enable exporting files in CSV format.

VER-68619

Hadoop Data Export

Exported date columns now include the Parquet logical type, enabling other tools to recognize these columns as dates.

VER-66272

Data Removal - Delete, Purge, Partitioning

Queries use the query epoch (current epoch -1) to determine which storage containers to use. A query epoch is set when the query is launched, and is compared with storage container epochs during the local planning stage. Occasionally, a swap partition operation was committed in the interval between query launch and local planning. In this case, the epoch of the post-commit storage containers was higher than the query epoch, so the query could not use them, and it returned empty results. This problem was resolved by making SWAP_PARTITIONS_BETWEEN_TABLES atomic.

VER-65427

Data load / COPY

Vertica could crash if its client disconnected during a COPY LOCAL with REJECTED DATA. This issue has been fixed.

VER-65659

Data load / COPY

Occasionally, a copy or external table query could crash a node. This has been fixed.

VER-68033

Data load / COPY

A bug in the Apache Parquet C++ library sometimes caused Vertica to fail when reading Parquet files with large Varchar statistics. This issue has been fixed.

VER-53981

Database Designer Core

In some cases, DESIGNER_DESIGN_PROJECTION_ENCODINGS mistakenly removed comments from the target projections. This issue has been resolved.

VER-62628

Execution Engine

If a subquery that is used as an expression returns more than one row, Vertica returns an error. In past releases, Vertica used this error message:

ERROR 4840: Subquery used as an expression returned more than one row

In cases where the query contained multiple subqueries, this message did not help users identify the source of the problem. With this release, the error message also provides the join label and localplan_id. For example:

ERROR 4840: Subquery used as an expression returned more than one row
DETAIL: Error occurred in Join [(public.t1 x public.t2) using t1_super and subquery (PATH ID: 1)] with localplan_id=[4]

VER-66224

Execution Engine

On rare occasions, a query that exceeded its runtime cap automatically restarted instead of reporting the timeout error. This issue has been fixed.

VER-67069

Execution Engine

Some queries with complex predicates ignored cancel attempts, manual or via the runtime cap, and continued to run for a long time. Cancel attempts themselves also caused the query to run longer than it would have otherwise. This issue has been fixed.

VER-67102

Execution Engine

Users were unable to cancel meta-function RUN_INDEX_TOOL. This problem has been resolved.

VER-69066

Execution Engine

In some cases, queries failed to sort UUIDs correctly if the ORDER BY clause did not specify the UUID column first. This problem has been resolved.

VER-64680

Kafka Integration

Previously, the version of the kafkacat utility distributed with Vertica had an issue that prevented it from working when TLS/SSL encryption was enabled. This issue has been corrected, and the version of kafkacat bundled with Vertica can now make TLS/SSL encrypted connections.

VER-68192

License

Upgrading an 8.1.1-x database with an AutoPass license installed to version 9.0 or later could lead to license-tampering issues at startup. This problem has been fixed.

VER-67573

Nimbus

In the past, DDL transactions remained open until all pending file deletions were complete. In Eon mode, this dependency could cause significant delays. This issue has been resolved: now, DDL transactions can complete while file deletions continue to execute in the background.

VER-26260

Optimizer

Vertica can now optimize queries on system tables where the queried columns are guaranteed to contain unique values. In this case, the optimizer prunes away unnecessary joins on columns that are not queried.

VER-66423

Optimizer

The optimizer is now better able to derive transitive selection predicates from subqueries.

VER-66902

Optimizer

EXPORT TO VERTICA returned an error if the table to export was a flattened table that already existed in the source and target databases. This issue has been resolved.

VER-66933

Optimizer

Previously, export operations such as export_tables() exported all table constraints as ALTER statements, whether they were part of the original CREATE statement or added later with ALTER TABLE. This was problematic for global temporary tables, which cannot be modified with any constraints other than foreign keys; attempts to add constraints to a temporary table with ALTER TABLE return an error. Thus, the DDL that was exported for a temporary table with constraints could not be used to reproduce that table. This problem has been resolved: now, export operations always export DDL for all constraints (except foreign keys) as part of the CREATE statement. This change applies to all tables, temporary and otherwise.

VER-66968

Optimizer

Queries that perform an inner join and group the results now return consistent results.

VER-67138

Optimizer

When flattening subqueries, Vertica could sometimes move subqueries to the ON clause, which is not supported. This issue has been resolved.

VER-67443

Optimizer

ALTER TABLE...ALTER CONSTRAINT returned an error when a node was down. This issue has been resolved: you can now enable or disable table constraints when nodes are down.

VER-67740

Optimizer

Vertica sometimes crashed when certain combinations of analytic functions were applied to the output of a merge join. This issue has been fixed.

VER-67908

Optimizer

Prepared statements do not support WITH clause materialization. Previously, Vertica threw an error when it tried to materialize a WITH clause for prepared statement queries. Now, Vertica throws a warning and processes the WITH clause without materializing it.

VER-68306

Optimizer

Queries with multiple analytic functions over complex expression arguments and different PARTITION BY/ORDER BY column sets would sometimes produce incorrect and inconsistent results between Enterprise and Eon. This issue has been fixed.

VER-68379

Optimizer

Calls to REFRESH_COLUMNS can now be monitored in the new "dc_refresh_columns" table, which logs, for every call to REFRESH_COLUMNS, the time of the call, the name of the table, the refreshed columns, the mode used, the minimum and maximum key values, and the epoch.

VER-68594

Optimizer

Queries sorting on subquery outputs sometimes produced inconsistent results. This issue has been fixed.

VER-68828

Optimizer

When a session's transaction isolation mode was set to serializable, MERGE statements sometimes returned with the error message 'Can't run historical queries at epochs prior to the Ancient History Mark'. This issue has been resolved.

VER-49136

Optimizer; Optimizer - Projection Chooser

TRUNCATE TABLE caused all existing table- and partition-level statistics to be dropped. This issue has been resolved.

VER-68826

Optimizer - Statistics and Histogram

If you partitioned a table by date and used EXPORT_STATISTICS_PARTITION to export results to a file, the function wrote empty content to the target file. This issue has been resolved.

VER-67647

Build and Release; Performance tests

The ST_IsValid() geospatial function, when used with the geography data type, has seen a slight performance degradation of around 10%.

VER-66648

QA

When a query returned many rows with wide columns, Vertica threw the following error message: "ERROR 8617: Request size too big." This message incorrectly suggested that query output consumed excessive memory. This issue has been fixed.

VER-64783

Recovery

When applying partition events during recovery, Vertica determined that recovery was complete only after all table projections were recovered. This rule did not differentiate between regular and live aggregate projections of the same table, which typically recovered in different stages. If recovery was interrupted after all regular projections of a table were recovered but before recovery of its live aggregate projections was complete, Vertica returned an error. This problem has been resolved: Vertica now determines that recovery is complete when all regular projections are recovered, and disregards the recovery status of live aggregate projections.

VER-68504

Execution Engine; S3

If you refreshed a large projection in Eon mode and the refresh operation used temp space on S3, the refresh operation occasionally caused the node to crash. This issue has been resolved.

VER-60036

SDK

All C++ UDxs now require the C++11 standard. Enable it by setting the -std=c++11 flag at compilation.

VER-61780

Scrutinize

Scrutinize previously generated a UnicodeEncodeError if the system locale was set to a language with non-ASCII characters. This issue has been fixed.

VER-66988

Scrutinize

When running scrutinize, the database password would be written to scrutinize_collection.log. This has been fixed: the "__db_password__" entry has been removed from the log file.

VER-62276

Security

Changes to the SSLCertificate, SSLPrivateKey, and SSLCA parameters take effect for all new connections and no longer require a restart.

VER-65168

Security

Before release 9.1, the default Linux file system permissions on scripts generated by the Database Designer were 666 (rw-rw-rw-). Beginning with release 9.1, default permissions changed to 600 (rw-------). With this release, default permissions have reverted to 666 (rw-rw-rw-).

VER-65890

Security

Two system tables have been added to monitor inherited privileges on tables and views: inheriting_objects shows which catalog objects have inheritance enabled, and inherited_privileges shows what privileges users and roles inherit on those objects.

VER-68616

Security

When the only privileges on a view came via ownership and schema-inherited privileges instead of GRANT statements, queries on the view by non-owners bypassed the privilege check for the view owner on the anchor relation(s). Queries by non-owners with inherited privileges on a view now correctly ensure that the view owner has SELECT WITH GRANT OPTION on the anchor relation(s).

VER-67658

Spread

In unstable networks, some UDP-based Vertica control messages could be lost. This could result in hanging sessions that could not be cancelled. This issue has been fixed.

VER-47639

Supported Platforms

Vertica now supports the XFS file system.

VER-67234

Tuple Mover

The algorithm for prioritizing mergeout requests sometimes overlooked slow-loading jobs, especially when these competed with faster jobs that loaded directly into ROS. This could cause mergeout requests to queue indefinitely, leading to ROS pushback. This problem was resolved by changing the algorithm for prioritizing mergeout requests.

VER-67275

Tuple Mover

Previously, the mergeout thread dedicated to processing active partition jobs ignored eligible jobs in the mergeout queue if a non-eligible job was at the top of the queue. This issue has been resolved: the thread now scans the entire queue for eligible mergeout jobs.

VER-68383

Tuple Mover

Mergeout did not execute purge requests on storage containers for partitioned tables if the requests had invalid partition keys. At the same time, the Tuple Mover generated and queued purge requests without validating their partition keys. As a result, mergeout was liable to generate repeated purge requests that it could not execute, which led to rapid growth of the Vertica log. This issue has been resolved.

VER-65744

UDX

Flex table views now properly show UTF-8 encoded multi-byte characters.

VER-62048

UI - Management Console

Certain design steps in the Events history tab of the MC Design page were appearing in the wrong order and with incorrect time stamps. This issue has been fixed.

VER-65561

UI - Management Console

Under some conditions in MC, a password could appear in the scrutinize collection log. This issue has been fixed.

VER-67903

UI - Management Console

MC was not correctly handling design tables when they contained Japanese characters. This issue has been resolved.

VER-68357

UI - Management Console

When clock skew was detected, MC incorrectly updated the alert during subsequent checks with the timestamp of the last check instead of the timestamp when the clock skew was first detected. This issue has been fixed, and the alert's status (resolved/unresolved) has been added to the display.

VER-68448

UI - Management Console

MC catches Vertica’s SNMP traps and generates an alert for each one. MC was not generating an alert for the Vertica SNMP trap: CRC Mismatch. This issue has been fixed.

Vertica 9.3: Known Issues

Updated: 12/5/2019

Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.

Known Issues

Issue  Component  Description

VER-45474 Optimizer When a node is down, DELETE and UPDATE query performance can slow due to non-optimized query plans.
VER-68463 Cloud - Amazon, Data Export, Hadoop Export to Parquet with partitions fails if any of the exported columns in the outermost SELECT statement is specified using "schema.table.column" notation. The workaround is to specify the column using just "column" notation, without the "schema.table" part.
VER-67228 AMI, License An Eon Mode database created in one hourly listing can only be revived into another hourly listing with the same operating system. For example, an Eon Mode database created in the Red Hat hourly listing cannot be shut down and then revived into a cluster created using the Amazon Linux listing.
VER-64997 Backup/DR, Security A vbr restore operation can fail if your database is configured for SSL and has a server.key and a server.crt file present in the catalog directory.
VER-64916 Kafka Integration When Vertica exports data collector information to Kafka via notifier, the serialization logic does not properly handle Vertica internal data type translation. When importing or copying that data back into the Management Console, Vertica uses the raw data in Kafka and inserts garbled data into the extended monitoring database.
VER-64352 SDK Under special circumstances, a sequence of statements to create and drop Python libraries fails. For example, the following session statements attempt to create Python libraries lib1 and lib2:

CREATE LIBRARY lib1 AS :lib1_path DEPENDS :numpy_path LANGUAGE 'Python';
DROP LIBRARY lib1;
CREATE LIBRARY lib2 AS :lib2_path DEPENDS :sklearn_path LANGUAGE 'Python';

Here, lib1 is a Python library that imports module numpy, while lib2 imports module sklearn, which relies on numpy. After lib1 is dropped, the CREATE LIBRARY statement for lib2 fails.
VER-63720 Recovery When fewer nodes than the cluster's node count are specified in a vbr configuration file, the catalog of a restored library will not be installed on the nodes that were not specified in the vbr configuration file.
VER-62983 Hadoop When hcatalogconnector schemas are created with custom_partitions enabled, poor performance has been observed when there are many (500+) partitions. By default, custom_partitions are disabled on hcatalogconnector schemas.
VER-62061 Catalog Sync and Revive If you revive a database on a one-node cluster, Vertica does not warn users against reviving the same database elsewhere. This can result in two instances of the same database running concurrently on different hosts, which can corrupt the database catalog.
VER-61584 Nimbus, Subscriptions This issue occurs only while one or more nodes are shutting down or in an unsafe state. No workaround.
VER-61420 Data Removal - Delete, Purge, Partitioning Partition operations, such as move_partitions_to_table, must split storage containers that have both partitions that match the operations and partitions that do not. Version 9.1 introduced an inefficiency whereby such a split may separate a storage container into one more storage container than necessary. Note that any extraneous containers created this way will eventually be merged by the TM.
VER-61069 Execution Engine In very rare circumstances, if a Vertica process crashes during shutdown, the remaining processes might hang indefinitely.
VER-60409 AP-Advanced, Optimizer APPLY_SVD and APPLY_PCA functions are implemented as User-Defined Transform Functions (UDTFs). A query containing a UDTF that takes many input columns (e.g. more than 1000) and also outputs many columns may fail with error "Request size too big" due to additional memory requirement in parsing.
VER-58168 Recovery A transaction that started before a node began recovery is referred to as a dirty transaction in the context of recovery. A recovering node must wait for or cancel such a transaction in order to recover tables modified by the transaction. In some rare instances, such transactions may hang and cannot be canceled. Tables locked by such a transaction cannot be recovered, and therefore a recovering node cannot transition to 'UP'. Usually the hung transaction can be stopped by restarting the node on which the transaction was initiated, assuming this is not a critical node. In extreme cases, or when the initiator node is a critical node, restart the cluster.
VER-57126 Data Removal - Delete, Purge, Partitioning Partition operations that use a range, for example, copy_partitions_to_table, must split storage containers that span the range in order to complete. For tables partitioned using a GROUP BY expression such split plans can require a relatively large amount of memory, especially when the table has a large number of columns. In some cases the partition operation may error with "ERROR 3587: Insufficient resources to execute plan on pool <poolname>".
VER-48020 Hadoop Canceling a query that involves loading data from ORC or Parquet files can be slow if there are too many files involved in the load.

Legal Notices

Warranty

The only warranties for products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Micro Focus shall not be liable for technical or editorial errors or omissions contained herein.

The information contained herein is subject to change without notice.

Restricted Rights Legend

Confidential computer software. Valid license from Micro Focus required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

Copyright Notice

© Copyright 2006 - 2019 Micro Focus, Inc.

Trademark Notices

Adobe® is a trademark of Adobe Systems Incorporated.

Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.

UNIX® is a registered trademark of The Open Group.