Release Notes

Vertica
Software Version: 9.1.x

 

IMPORTANT: Fixing Unsafe Buddy Projections

Early versions of Vertica supported buddy projections with different sort orders. In version 7.2, we deprecated this support and advised that buddy projections use the same sort order. As of version 9.1, two projections whose sort orders differ can no longer be treated as buddies of one another. A table can still have projections with different sort orders. However, in a K-safe database, at least K+1 projections of a table must have the same sort order, along with the other characteristics that make them buddies.

Before upgrading to 9.1 or higher, current users are strongly urged to check that all projection buddies in the current database comply with these new requirements by running the pre-upgrade script found here:

https://my.vertica.com/9-1-pre-upgrade-script/

The pre-upgrade script analyzes your current database and identifies unsafe projections. It then generates a DDL script that you can run before the upgrade to remedy these projections. If unsafe projections are found, you must revert to the previous installation, run the generated DDL script to remedy the projections, and then upgrade.

For more information, see Fixing Unsafe Buddy Projections in the Vertica documentation.

Updated: October 12, 2018

About Vertica Release Notes

What's New in Vertica 9.1.1

What's New in Vertica 9.1

What's Deprecated in Vertica 9.1

Vertica 9.1.0: Resolved Issues

Vertica 9.1.0: Known Issues

About Vertica Release Notes

The Release Notes contain the latest information on new features, changes, fixes, and known issues in Vertica 9.1.x.

They also contain information about issues resolved in:

Downloading Major and Minor Releases, and Service Packs

The Premium Edition of Vertica is available for download at my.vertica.com.

The Community Edition of Vertica is available for download at the following sites:

The documentation is available at http://my.vertica.com/docs/9.1.x/HTML/index.htm.

Downloading Hotfixes

Hotfixes are available to Premium Edition customers only. Each software package on the http://my.vertica.com/downloads site is labeled with its latest hotfix version.


What's New in Vertica 9.1.1

Take a look at the Vertica 9.1.1 New Features Guide for a complete list of additions and changes introduced in this release.

Performance Improvements

When nodes are down, the Optimizer can now generate plans that support late materialization. Plan creation is much simpler and incurs less overhead. Execution time of nodes-down query plans is significantly better, equivalent now to running query plans when all nodes are up.

Database Management

You can now transfer schema ownership with ALTER SCHEMA:

=> ALTER SCHEMA [database.]schema OWNER TO user-name [CASCADE]
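For example, the following transfers ownership of a schema to another user (the schema and user names here are illustrative):

```sql
-- Transfer ownership of schema "sales" to user "jsmith".
-- CASCADE also transfers ownership of the objects in the schema.
ALTER SCHEMA sales OWNER TO jsmith CASCADE;
```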

Supported Data Types

Vertica now supports explicit coercion (casting) from CHAR and VARCHAR data types to either BINARY or VARBINARY data types. For all supported coercion options, see Data Type Coercion Chart in the SQL Reference Manual.
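For example, casts of the following form are now accepted (the column and table names are illustrative):

```sql
-- Explicit coercion from character types to binary types.
SELECT 'ab'::BINARY(2);
SELECT CAST(order_code AS VARBINARY(80)) FROM orders;
```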

Management Console

If Management Console is running in an AWS environment, you can now upgrade Management Console to the newest Vertica version through the MC interface.

You can upgrade using the new step-by-step upgrade wizard, available through Management Console Settings. Simply enter your AWS credentials, and then choose a new version and settings for the instance on which MC will run. A new version of MC will be provisioned onto a new AWS instance.

After you upgrade MC, you can automatically upgrade any database running in Eon Mode to the same version simply by reviving the database from the newly upgraded MC.

For more information see Using Management Console.

Monitoring Vertica

The new system table QUERY_CONSUMPTION provides a summary view of query resource usage, including data on queries that fail. For details, see Profiling Query Resource Consumption in the Administrator's Guide.

Data Analysis

New features include:

For more information, see Machine Learning for Predictive Analytics.

Security and Authentication

The new support for internode encryption allows you to use SSL to secure communication between nodes in a cluster. For more information, see Internode Communication and Encryption.

Table Data Management

You can now set active partition counts for individual tables, using CREATE TABLE and ALTER TABLE. The table-specific settings of ActivePartitionCount supersede the global setting.

For more information, see Active and Inactive Partitions in the Administrator's Guide.
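As a sketch, a table-level active partition count can be set as follows (the table name is illustrative; verify the exact syntax in your version's documentation):

```sql
-- Override the global ActivePartitionCount setting for one table.
ALTER TABLE store_orders SET ACTIVEPARTITIONCOUNT 2;
```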

Workload Management

CREATE RESOURCE POOL and ALTER RESOURCE POOL now support MAXQUERYMEMORYSIZE, which caps the amount of memory that a resource pool can allocate to process any query.
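For example (the pool name and memory values are illustrative):

```sql
-- Cap per-query memory in this pool at 8 GB.
CREATE RESOURCE POOL reporting_pool MAXQUERYMEMORYSIZE '8G';

-- Later, lower the cap on the same pool.
ALTER RESOURCE POOL reporting_pool MAXQUERYMEMORYSIZE '4G';
```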

Backup, Restore, Recovery, and Replication

You can now use vbr to perform object replication concurrently with many other tasks. The replicate task can run concurrently with backup and with object replicate tasks in either direction. Replication cannot run concurrently with tasks that require that the database be down (full restore and copy cluster). Each concurrent task must have a unique snapshot name.

For more information about object replication, see Replicating Tables and Schemas to an Alternate Database.

Apache Hadoop Integration

Parquet Export Supports S3

You can now export data to S3, in addition to HDFS and the local disk with EXPORT TO PARQUET.

For more information, see Exporting to S3.
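A minimal sketch of an S3 export (the bucket, path, table, and filter are placeholders):

```sql
EXPORT TO PARQUET (directory = 's3://mybucket/exports/sales')
  AS SELECT * FROM sales WHERE order_date >= '2018-01-01';
```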

HCatalog Connector Supports Custom Hive Partitions

By default, Hive stores partition information under the path for the table definition, which might be local to Hive. However, a Hive user can choose to store partition information elsewhere, such as in a shared location like S3. Now, during query execution Vertica can query Hive for this information. Specify whether to look for custom partitions when creating the HCatalog schema.

For more information, see Defining a Schema Using the HCatalog Connector.

Integration with Apache Kafka

Supported Versions

For Vertica 9.1.1, the supported versions of Kafka are:

Vertica can work with older versions of Kafka. However, version 0.9 and earlier use an older revision of the Kafka protocol. To connect Vertica 9.1.1 or later to a Kafka cluster running 0.9 or earlier, you must change a setting in the rdkafka library that Vertica uses to communicate with Kafka.

For more information, see Configuring Vertica for Apache Kafka Version 0.9 and Earlier.

message.max.bytes

The meaning of Kafka's message.max.bytes setting has been changed for Kafka version 0.11. This change could cause performance issues when loading data using a streaming job scheduler that was created using Vertica version 9.1.0 or earlier.

For more information, see Changes to the message.max.bytes Setting in Kafka Version 0.11 and Later.

Consumer Groups

Vertica supports the Kafka consumer group feature. In Kafka, this feature balances the load of reading messages across consumers and ensures that consumers read messages only once. The Vertica streaming job scheduler prevents re-reading of messages by managing message offsets on its own, and spreads the load across the entire Vertica cluster. The main use case for consumer groups with Vertica is to allow third-party applications to monitor the scheduler's progress as it consumes messages.

For more information, see Monitoring Vertica Message Consumption with Consumer Groups.

Library Options

Version 9.1.1 adds the ability to pass options directly to the rdkafka library that Vertica uses to communicate with Kafka. This feature lets you change settings you cannot directly set in Vertica.

For more information, see Directly Setting Kafka Library Options.

Log Files

In Vertica 9.1.1, the Apache Kafka integration logs messages to the standard vertica.log file.

For more information about viewing log messages, see Monitoring Log Files.

KafkaAvroParser and KafkaJSONParser

KafkaAvroParser and KafkaJSONParser now natively support the UUID data type and can parse UUID values into the Vertica UUID data type.

What's Deprecated in Vertica 9.1.1

The following Vertica functionality was deprecated in this release. This functionality will be retired in a future Vertica version:

For more information see Deprecated and Retired Functionality in the Vertica documentation.

Vertica 9.1.1-3: Resolved Issues

Release Date: 10/12/2018

This hotfix addresses the issues below.

Issue

Component

Description

VER-63830 Data Export, Execution Engine, Hadoop

Occasionally, the EXPORT TO PARQUET command wrote data of types timestamp, date, and time to incorrect partition directories.

This issue has been fixed.

VER-64254 Optimizer

Query performance was sometimes sub-optimal if the query included a subquery that joined multiple tables in a WHERE clause, and the parent query included this subquery in an outer join that spanned multiple tables.

This issue has been fixed.

VER-64305 Optimizer

When a projection was created with only the "PINNED" keyword, Vertica incorrectly considered it a segmented projection. This caused optimizer internal errors and incorrect results when loading data into tables with these projections.

This issue has been fixed.

IMPORTANT: The fix applies only to newly created pinned projections. Existing pinned projections in the catalog are still incorrect and must be dropped and recreated manually.

VER-64060 UDX

Previously, export to Parquet generated boolean partition values that could not be loaded correctly.

This issue has been fixed.

VER-64288 UI - Management Console

Long queries were failing in the Management Console Query Execution page.

This issue has been fixed.

Vertica 9.1.1-2: Resolved Issues

Release Date: 09/21/2018

This hotfix addresses the issues below.

Issue

Component

Description

VER-63992 DDL

An Add Column Not Null operation was transformed into two statements: Alter Table Add Column and Alter Table Add Constraint.

The second statement could fail, leaving a new column in your table without the constraint. It also produced a confusing rollback message that did not indicate that the column had been added anyway.

The fix handles Add Column Not Null in a single statement.

VER-63897 Data load / COPY, Hadoop

Queries involving a join of two external tables loading parquet files sometimes caused Vertica to crash. The crash happened in a very rare situation due to memory misalignment.

This issue has been fixed.

VER-63895 Execution Engine, Hadoop

If a Parquet file's metadata was very large, Vertica could consume more memory than reserved and crash when the system ran out of memory.

This issue has been fixed.

VER-63863 Hadoop

Certain large Parquet files generated by ParquetExport could contain a large amount of metadata, leading to out-of-memory issues.

A new parameter, 'fileSizeMB', has been added to ParquetExport to limit the size of exported files, thereby limiting the metadata size.

VER-63604 Data load / COPY

In a COPY statement, excessively long invalid inputs to any date or time columns could cause stack overflows, resulting in a crash.

This issue has been fixed.

VER-64356 DDL - Table

Previously, the query label applied only to CTAS statements and was missing from the implicit INSERT...SELECT statement.

With this fix, the query label is applied to both statements.

Vertica 9.1.1-1: Resolved Issues

Release Date: 08/27/2018

This hotfix addresses the issues below.

Issue

Component

Description

VER-63524 Front end - Parse & Analyze

Vertica crashed when a function expression was assigned an alias with the same name as one of the ILM operations: copy_table, copy_partitions_to_table, move_partitions_to_table and swap_partitions_between_tables.

This issue is fixed.

VER-63693 S3

S3Export was not thread safe when the data contained time or date values, so before this fix it should not have been used with PARTITION BEST when exporting such values. This issue has been fixed.

VER-63546 Data load / COPY

Occasionally, a copy or external table query could crash a node. This issue has been fixed.
VER-63799 AMI, UI - Management Console

If the Vertica Management Console was deployed in AWS using the Basic or Advanced CFT and a non-zero CIDR (e.g., 2.22.222.22/32), then using the MC to revive a database failed with the error Cannot get initial Revive DB information from the Agent.

This issue has been resolved.

Vertica 9.1.1: Resolved Issues

Release Date: 08/10/2018

To see a complete list of additions and changes introduced in this release, refer to the Vertica 9.1.1 New Features Guide.

Issue

Component

Description

VER-53889 Error Handling, Execution Engine

If you tried to insert data that was too wide for a VARCHAR or VARBINARY column, the error message did not specify the column name. This error message now includes the column name.

VER-62550 Data load / COPY

COPY statements using the default parser could erroneously reject rows if the input data contained overflowing floating point values in one column and the maximum 64-bit integer value in another column. This issue has been fixed.

VER-61361 DDL - Projection If you create projections in a K-safe database using the OFFSET syntax, Vertica does not automatically create buddies for these projections. If primary key constraints are enforced, Vertica automatically creates buddy projections that satisfy system K-safety and enforce primary key constraints. If a table has some projections that are K-safe and others that are not, the Optimizer might be unable to find a K-safe projection to process queries. In this case the Optimizer assumes some nodes are down and generates an error. This issue has been fixed: Vertica now delays creation of projections for a table that enforces key constraints until the table has at least one K-safe superprojection. Until then, queries on that table return an error that projections for processing queries are unavailable.
VER-59297 ResourceManager In the system table RESOURCE_ACQUISITIONS, the difference between QUEUE_ENTRY_TIMESTAMP and ACQUISITION_TIMESTAMP now correctly shows how long a request was queued in a resource pool before acquiring the resources it needed to execute.
VER-59147 AP-Advanced, Sessions Using a machine learning function that accepts a model as a parameter in the definition of a view, and then feeding that view into another machine learning function as its input relation, could cause failures in some special cases. This issue has been fixed.
VER-60828 Optimizer Some instances of type casting on a datetime column that contained a value of 'infinity' triggered segmentation faults that caused the database to fail. This issue has been resolved.
VER-59645 Backup/DR Object replication to a target database running a newer version of Vertica sometimes incorrectly attempted to connect to the target database using its internal addresses. This issue has been resolved.
VER-59235 UI - Management Console With LDAP authentication turned on, users added before the LDAP "default search path" was modified were unable to log on. This issue has been fixed.
VER-26440 DDL - Projection In previous Vertica versions, the configuration parameter SegmentAutoProjection could only be set at the database level. You can now set this parameter for individual sessions.
VER-58665 Optimizer Vertica stored incorrectly encoded min/max values in statistics for binary data types. This problem is now fixed.
VER-62710 Execution Engine Loads and copies from HDFS occasionally failed if Vertica was reading data through libhdfs++ when the NameNode abruptly disconnected from the network. libhdfs++ was being compiled with a debug flag on, which caused network errors to be caught in debug assertions rather than normal error handling and retry logic.
VER-62740 DDL - Projection Some versions of Vertica allowed projections to be defined with a subquery that referenced a view or another projection. These malformed projections occasionally caused the database to shut down. Vertica now detects these errors when it parses the DDL and returns with a message that projection FROM clauses must only reference tables.
VER-62255 Data load / COPY INSERT DIRECT failed when the configuration parameter ReuseDataConnections was set to 0 (disabled). When this parameter was disabled, no buffer was allocated to accept inserted data. Now, a buffer is always allocated to accept data regardless of how ReuseDataConnections is set.
VER-62358 Execution Engine, Optimizer A bug in the way null values were detected for float data type columns led to inconsistent results for some queries with median and percentile functions. This issue has been fixed.
VER-61903 Execution Engine In some workloads, the memory consumed by a long-running session could grow over time. This has been resolved.
VER-60943 Vertica Log Text Search Dependent index tables did not have their reference indices updated when the base table's columns were altered but not removed. This caused an internal error on reference. This issue has been fixed so that the indices are updated when you alter the base table's columns.
VER-60715 Optimizer - GrpBy & Pre Pushdown Including certain functions like nvl in a grouping function argument resulted in a "Grouping function arguments need to be group by expressions" error. This issue has been fixed.
VER-62902 Optimizer - Query Rewrite Applying an access policy on some columns in a table significantly impacted execution of updates on that table, even where the query excluded the restricted columns. Further investigation determined that the query planner materialized unused columns, which adversely affected overall performance. This problem has been resolved.
VER-62680 Optimizer - Join & Data Distribution Optimized MERGE was sometimes not used with NULL constants in varchar or numeric columns. This issue has been fixed.
VER-62222 Optimizer Attempts to swap partitions between flattened tables that were created in different versions of Vertica failed due to minor differences in how source and target SET USING columns were internally defined. Recent patches for these versions have resolved this issue.
VER-61899 Diagnostics Scrutinize was collecting the logs only on the node where it was executed, not on any other nodes. This issue has been fixed.
VER-63608 UI - Management Console The jackson-databind library used by MC has been upgraded to 2.9.5 to fix security vulnerabilities.
VER-53329 Data load / COPY Data loading into a table with enforced key constraints was much slower when some nodes in the cluster were down. This issue has been improved in Vertica 9.1.1; the difference between the nodes-down and nodes-up cases is now minimal.
VER-62271 Client Drivers - JDBC When dropping a user that owns roughly 10,000 or more tables using DROP USER, the Vertica JDBC driver would receive many notices and throw a StackOverflow exception. The driver now provides all notices and throws the correct exception (SQLDataException: [Vertica][VJDBC](3128) ROLLBACK: DROP failed due to dependencies).
VER-63158 Optimizer An internal error sometimes occurred in queries with subqueries that contained a mix of outer and inner joins. This issue has been fixed.
VER-61969 UDX, Vertica Log Text Search The AdvancedTextSearch package leaked memory. This issue has been fixed.

Known Issues Vertica 9.1.1

Updated: August 10, 2018

Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.

Issue

Component

Description

VER-61299 Depot

Sometimes CLEAR_DATA_DEPOT() does not clear all the data on one or more nodes.

VER-60797 License

AutoPass format licenses do not work properly when installed in Vertica 8.1 or older. To replace these licenses with a legacy Vertica license, users need to set the following parameter:


AllowVerticaLicenseOverWriteHP=1

VER-60409 AP-Advanced, Optimizer

APPLY_SVD and APPLY_PCA functions are implemented as User-Defined Transform Functions (UDTFs). A query containing a UDTF that takes many input columns (e.g. more than 1000) and also returns many columns may fail with the error "Request size too big" due to additional memory requirement in parsing.

Workaround: Increase the value of configuration parameter MaxParsedQuerySizeMB or reduce the number of output columns. For APPLY_SVD and APPLY_PCA, you can limit the number of output columns either by setting the parameter num_components or by setting the parameter cutoff. In practice, cutoff=0.9 is usually enough.

Note that if you increase MaxParsedQuerySizeMB to a larger value, for example 4096, each query you run may use 4 GB of memory during parsing. This means that running multiple queries at the same time could cause out-of-memory (OOM) issues if your total memory is limited. Refer to the Vertica documentation for more information about MaxParsedQuerySizeMB.
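One way to apply the workaround above is to raise the parameter at the database level (the value shown is illustrative; confirm the syntax for your version):

```sql
-- Raise the parsing memory limit to 4096 MB.
SELECT SET_CONFIG_PARAMETER('MaxParsedQuerySizeMB', 4096);
```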

VER-57126 Data Removal - Delete, Purge, Partitioning

Partition operations that use a range, for example, copy_partitions_to_table, must split storage containers that span the range in order to complete. For tables partitioned using a GROUP BY expression, such split plans can require a relatively large amount of memory, especially when the table has a large number of columns. In some cases, the partition operation may error with "ERROR 3587: Insufficient resources to execute plan on pool <poolname>".

Workaround: Increase the memorysize or decrease the plannedconcurrency of <poolname>.

Hint: A best practice is to group partitions so that it is never necessary to split storage containers. Following this guidance greatly improves the performance of most partition operations.

VER-58168 Recovery

A transaction that started before a node began recovery is referred to as a dirty transaction in the context of recovery. A recovering node must wait for, or cancel, such a transaction to recover tables modified by the transaction. In some rare instances such transactions may hang and cannot be canceled. Tables locked by such a transaction cannot be recovered, and therefore a recovering node cannot transition to 'UP'. Usually, the hung transaction can be stopped by restarting the node on which the transaction was initiated, assuming this is not a critical node. In extreme cases, or when the initiator node is a critical node, the cluster must be restarted.

VER-61362 Subscriptions During cluster formation, when one of the up-to-date nodes is missing libraries and attempts to recover them, the recovery fails with a cluster shutdown.
VER-48041 Admin Tools

On some systems, admintools occasionally cannot parse the output it sees while running SSH commands on other hosts in the cluster. The issue is typically transient. There is no known workaround.

VER-41895 Admin Tools

On some systems admintools cannot parse output while running SSH commands on hosts in the cluster. In some situations, if the admintools operation needs to run on just one node, there is a workaround. Users can avoid this issue by using SSH to connect to the target node and running the admintools command on that node directly.

VER-61584 Subscriptions This issue occurs only while one or more nodes are shutting down or in an unsafe state. There is no workaround.
VER-54924 Admin Tools

On databases with hundreds of storage locations, admintools SSH communication buffers can overflow. The overflow can interfere with database operations like startup and adding nodes. There is no known workaround.

VER-56679 SAL When generating an annotated query, the Optimizer does not recognize that the ETS flag is ON and produces the annotated query as if ETS is OFF. If the resulting annotated query is then run when ETS is ON, some hints might not be feasible.
VER-63077 Catalog Sync and Revive

When trying to revive the cluster, Vertica sometimes returns a "file not found" error. In most cases this error is intermittent, so the workaround is to retry the revive.

VER-62897 Backup/DR

A PANIC occurs when an object restore transaction meets both of the following conditions:

- Includes flattened table(s) to be restored.

- Is canceled in the middle of the transaction.

VER-62983 Hadoop

When hcatalogconnector schemas are created with custom_partitions enabled, poor performance has been observed when there are many (500+) partitions. By default, custom_partitions is disabled on hcatalogconnector schemas.

VER-62414 Hadoop Loading ORC and Parquet files with a very small stripe or rowgroup size can lead to a performance degradation or cause an out of memory condition.
VER-63215 Catalog Engine, Elastic Cluster If a cluster is running with K-safety of 0, some OIDs may be regenerated, which can lead to conflicts and an eventual crash.
VER-62855 Tuple Mover

TM operations, such as moveout or re-partitioning, of an unsegmented projection may error out when a concurrent re-balance operation removes the projection from the node.

VER-62884 Storage and Access Layer A node crash after a DML operation may leave index files (.pidx files) under storage locations if the background reaper service did not get an opportunity to clean them up.
VER-63551 Security, Spread

If the EncryptSpreadComm parameter is set to 'vertica' on a SUSE Linux Enterprise Server (SLES) 11 cluster, Vertica fails to start. Vertica has deprecated support for SLES 11 and will remove support in a later release.

Workaround: Delete the setting for EncryptSpreadComm.

VER-61351 Admin Tools Adding large numbers of nodes in a single operation could lead to an admintools error parsing output.
VER-61289 Execution Engine, Hadoop If a Parquet file's metadata is very large, Vertica can consume more memory than reserved and crash when the system runs out of memory.
VER-55257 Client Drivers - ODBC

Issuing a query that returns a large result set and closing the statement before retrieving all of its rows can result in the following error when attempting subsequent operations with the statement:

"An error occurred during query preparation: Multiple commands cannot be active on the same connection. Consider increasing ResultBufferSize or fetching all results before initiating another command."

Workaround: Set the ResultBufferSize property to 0 or retrieve all rows associated with a result set before closing the statement.

VER-61834 Data Export, Execution Engine, Hadoop Occasionally the EXPORT TO PARQUET command writes data to incorrect partition directories of types timestamp, date, and time.

What's New in Vertica 9.1.0

Take a look at the Vertica 9.1 New Features Guide for a complete list of additions and changes introduced in this release.

Licensing

AWS Licensing Model

As of Vertica 9.1, you can use a Vertica by-the-hour license, a pay-as-you-go model in which you pay only for the number of nodes and the number of hours you use. These Paid Listings are available in the AWS Marketplace:

An advantage of using a Paid Listing is that all charges appear on your Amazon AWS bill rather than requiring a separately purchased Vertica license. This also eliminates the need to compute potential storage needs in advance.

See more: Vertica with CloudFormation Templates

Automatic License Auditing Now Includes ORC and Parquet Data

Vertica 9.1.0 now automatically audits ORC and Parquet data stored in external tables.

Vertica licenses can include a raw data allowance. Since 2016, Vertica licenses have allowed you to use ORC and Parquet data in external tables. This data has always counted against any raw data allowance in your license. Previously, the audit of data in ORC and Parquet format was handled manually. Because this audit was not automated, the total amount of data in your native tables and in external tables could exceed your licensed allowance for some time before being spotted.

Starting in version 9.1.0, Vertica automatically audits ORC and Parquet data in external tables. This auditing begins soon after you install or upgrade to version 9.1.0. If your Vertica license includes a raw data allowance and you have data in external tables based on Parquet or ORC files, review your license compliance before upgrading to Vertica 9.1.x. Verifying that your database is compliant with your license terms avoids having your database become non-compliant soon after you upgrade.

See more: Verify License Compliance for ORC and Parquet Data

Upgrade and Installation

AWS Installation Methods

As of release 9.1 you can install Vertica on Amazon Web Services (AWS) using the following products in the AWS Marketplace:

Each of these products has three CloudFormation Templates (CFTs) and one AMI, as follows:

CFTs

AMI

See more: Installing Vertica with CloudFormation Templates

Eon Mode

Eon Mode, a database mode that was previously in beta, is now live.

You can now choose to operate your database in Enterprise Mode (the traditional Vertica architecture where data is distributed across your local nodes) or in Eon Mode, an architecture in which the storage layer of the database is in a single, separate location from compute nodes. You can rapidly scale an Eon Mode database to meet variable workload needs, especially for increasing the throughput of many concurrent queries.

After you create a database, the functionality of the database is largely the same regardless of mode. The differences between the two modes lie in their architecture, deployment, and scalability.

See more: Using Eon Mode

Loading Data

Better Support for S3 Session Parameters

When you read from S3 using COPY FROM with S3 URLs, Vertica uses the configuration parameters described in AWS Parameters. Previously, these parameters could be set only globally, which made it harder to read from different regions or with different credentials in parallel. You can now set these parameters at the session level using ALTER SESSION.

In addition, if you use ALTER SESSION to set an AWS parameter, Vertica automatically sets the corresponding UDParameter used by the UDSource described in Bulk Loading and Exporting Data From Amazon S3.

See more: Specifying COPY FROM Options
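For example, session-level AWS settings might look like the following (the region and credential values are placeholders):

```sql
ALTER SESSION SET AWSRegion = 'us-west-2';
ALTER SESSION SET AWSAuth = 'my-access-key-id:my-secret-access-key';
```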

Management Console

Provision and Revive an Eon Mode Database

Management Console now provides the ability to revive an Eon Mode database. Eon Mode databases keep an up-to-date version of their data and metadata in their communal storage locations. After the database is shut down, you can restore it later in the same state in a newly provisioned cluster.

The Provision and Revive wizard is provided through a deployment of Vertica and Management Console available on the AWS Marketplace.

See more: Reviving an Eon Mode Database in MC

Monitor External Data

Previously, Management Console only provided monitoring information for internal Vertica tables. In Vertica 9.1.0, MC detects and monitors any external tables and HCatalog data included in your database.

To see this external data visualized, take a look at the Table Utilization charts on the MC Activity page; these charts now reflect external tables and HCatalog data. The table information displayed now includes table types (external, internal, and HCatalog) and table definitions (applicable only to external tables).

You can also see changes in the Storage View page. When MC detects that your database contains external tables or references HCatalog data, it displays an option to view more details about those tables.

See more: Monitoring Table Utilization and Projections and Monitoring Database Storage

Security and Authentication

Audit Categories

This feature introduces a set of audit categories that make it easy to search for queries, parameters, and tables with a similar purpose. You can audit three types of SQL objects in Vertica: queries, tables, and parameters. New system tables bring together changes to these SQL objects so that you can track them more easily. Use the security and authentication audit category to better understand changes to your database.

This feature introduces four new system tables to better audit changes to your database:

AUDIT_MANAGING_USERS_PRIVILEGES

LOG_PARAMS

LOG_QUERIES

LOG_TABLES
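These tables can be queried like ordinary system tables. A minimal sketch (schema qualification, available columns, and row contents vary by release; the table name comes from the list above):

```sql
-- Illustrative only: review recent audit entries for table-related changes.
SELECT * FROM log_tables LIMIT 20;
```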

See more: Database Auditing

Data Analysis

Machine Learning for Predictive Analytics

New features include:

See more: Machine Learning for Predictive Analytics

SDK Updates

All C++ and Java UDxs Support Cancellation

All UDx types now support cancellation callbacks. You can implement the CANCEL() function to perform any cleanup specific to your UDx. Previously, only some UDx types supported cancellation.

See more: Handling Cancel Requests

Python UDTFs

The SDK now supports writing user-defined transform functions (UDTFs) in Python, in addition to C++, Java, and R.

See more: Python API and Python SDK Documentation

Apache Hadoop Integration

Delegation Tokens and Proxy Users

An alternative to granting HDFS access to individual Vertica users is to use delegation tokens, either directly or with a proxy user. In this configuration, Vertica accesses HDFS on behalf of some other (Hadoop) user. The Hadoop users need not be Vertica users at all, and Vertica need not be Kerberized.

See more: Proxy Users and Delegation Tokens

Important Changes to Automatic Auditing of Data Usage

Vertica 9.1.0 now automatically audits data stored in external tables in ORC and Parquet format.

Vertica licenses can include a raw data allowance. Since 2016, Vertica licenses have allowed you to use ORC and Parquet data in external tables. This data has always counted against any raw data allowance in your license. Previously, the audit of ORC and Parquet data was handled manually. Because this audit was not automated, the total amount of data in your native tables and in external tables could exceed your licensed allowance for some time before being spotted.

Starting in 9.1.0, Vertica automatically audits ORC and Parquet data in external tables. This auditing begins soon after you install or upgrade to version 9.1.0. If your Vertica license includes a raw data allowance and you have data in external tables based on Parquet or ORC files, review your license compliance before upgrading to Vertica 9.1.x. Verifying that your database is compliant with your license terms avoids having your database become non-compliant soon after you upgrade.

See more: Verify License Compliance for ORC and Parquet Data

Apache Spark Integration

The Spark Connector is now distributed as part of the Vertica server installation. Instead of downloading the connector from the myVertica portal, you can now get the Spark Connector file from a directory on a Vertica node.

The Spark Connector JAR file is now compatible with multiple versions of Spark. For example, the Connector for Spark 2.1 is also compatible with Spark 2.2.

See more: Getting the Spark Connector and Vertica Integration for Apache Spark in Support Platforms

Voltage SecureData Integration

Vertica 9.1.0 now integrates with Voltage SecureData encryption. This feature lets you:

See more: Voltage SecureData

Apache Kafka Integration

Changes to the Kafka Parser Functions

Vertica 9.1 introduces the following new features to the KafkaAvroParser and KafkaJSONParser:

Kafka Integration and Eon Mode

The Vertica integration with Apache Kafka now works in Eon Mode. There are several details to consider when streaming data from Kafka into an Eon Mode Vertica cluster. See Vertica Eon Mode and Kafka for details.

See more: Integrating with Apache Kafka


What's Deprecated in Vertica 9.1

The following Vertica functionality was deprecated:

For more information see Deprecated and Retired Functionality in the Vertica documentation.

Vertica 9.1.0-5: Resolved Issues

Release Date: 07/31/2018

This hotfix addresses the issues below.

Issue

Component

Description

VER-63128 DDL - Table

In past releases, Vertica placed an exclusive lock on the global catalog while it created a query plan for CREATE TABLE AS <query>. Very large queries could prolong this lock until it eventually timed out. On rare occasions, the prolonged lock caused an out-of-memory exception that shut down the cluster.

This problem was resolved by moving the query planning process outside the scope of catalog locking.

VER-63295 Execution Engine

In some workloads, the memory consumed by a long-running session could grow over time.

This has been resolved.

Vertica 9.1.0-4: Resolved Issues

Release Date: 07/12/2018

This hotfix addresses the issues below.

Issue

Component

Description

VER-63072 Optimizer - Query Rewrite

Applying an access policy on some columns in a table significantly impacted execution of updates on that table, even where the query excluded the restricted columns. Further investigation determined that the query planner materialized unused columns, which adversely affected overall performance.

This problem has been resolved.

VER-63026 DDL - Projection

Some versions of Vertica allowed projections to be defined with a subquery that referenced a view or another projection. These malformed projections occasionally caused the database to shut down.

Vertica now detects these errors when it parses the DDL and returns an error stating that projection FROM clauses must reference only tables.

VER-62887 Diagnostics

Scrutinize was collecting the logs only on the node where it was executed, not on any other nodes.

This issue has been fixed.

VER-63188 Optimizer - Join & Data Distribution

The optimizer sometimes failed to use optimized MERGE when NULL constants appeared in VARCHAR or NUMERIC columns.

This issue has been fixed.

VER-63040 UI - Management Console

With LDAP authentication turned on, users added before the LDAP default search path was modified were unable to log on.

This issue has been fixed.

VER-63029 Backup/DR

Object replication and restore left behind temporary files in the database Snapshot folder.

This fix ensures that such files are properly cleaned up when the operation completes.

VER-62971 Data load / COPY

COPY statements using the default parser could erroneously reject rows if the input data contained overflowing floating point values in one column and the maximum 64-bit integer value in another column.

This issue has been fixed.

VER-62969 Execution Engine

Loads and copies from HDFS occasionally failed if Vertica was reading data through libhdfs++ when the NameNode abruptly disconnected from the network. libhdfs++ was being compiled with a debug flag on, which caused network errors to be caught in debug assertions rather than normal error handling and retry logic.

This issue has been fixed.

VER-62953 Hadoop

The ORC Parser skipped rows when the predicate "IS NULL" was applied on a column containing all NULLs.

This issue has been fixed.

VER-63075 Data load / COPY

INSERT DIRECT failed when configuration parameter ReuseDataConnections was set to 0 (disabled). When this parameter was disabled, no buffer was allocated to accept inserted data.

Now, a buffer is always allocated to accept data regardless of how ReuseDataConnections is set.

VER-63260 Backup/DR

Object replication to a target database with a newer version failed during the catalog preprocessing phase when the source and destination cluster had different node names.

This issue has been fixed.

VER-63263 Optimizer

An internal error sometimes occurred in queries with subqueries containing a mix of outer and inner joins.

This issue has been fixed.

Vertica 9.1.0-3: Resolved Issues

Release Date: 06/20/2018

This hotfix addresses the issues below.

Issue

Component

Description

VER-62510 AP-Advanced, Sessions

Using an ML function that accepts a model as a parameter in the definition of a view, and then feeding that view into another ML function as its input relation, caused failures in some special cases.

This issue has been fixed.

VER-62836 S3

There was an issue loading from the default region (us-east-1) in some cases where the communal location was in a different region.

This issue has been fixed.

VER-62852 Optimizer - GrpBy & Pre Pushdown

Including certain functions like nvl in a grouping function argument resulted in a "Grouping function arguments need to be group by expressions" error.

This issue has been fixed.

VER-62419 Optimizer

Attempts to swap partitions between flattened tables that were created in different versions of Vertica failed, due to minor differences in how source and target SET USING columns were internally defined.

This issue has been fixed.

VER-62076 Hadoop

Hadoop impersonation status messages were not being properly logged.

This issue has been fixed to allow informative messages at the default logging level.

Vertica 9.1.0-2: Resolved Issues

Release Date: 05/30/2018

This hotfix addresses the issues below.

Issue

Component

Description

VER-62096 Data Export

Export to Parquet using local disk non-deterministically failed in some situations with a 'No such file or directory' error message.

This issue has been fixed.

VER-62465 Execution Engine, Optimizer

Null-value detection for FLOAT columns returned inconsistent results for some queries with median and percentile functions.

This issue has been fixed.

VER-62463 Vertica Log Text Search

Dependent index tables did not have their reference indices updated when the base table's columns were altered but not removed. This caused an internal error on reference.

This issue has been fixed so the indices are updated when you alter the base table's columns.

VER-62144 Error Handling, Execution Engine

If you tried to insert data that was too wide for a VARCHAR or VARBINARY column, the error message did not specify the column name.

This error message now includes the column name.

Vertica 9.1.0-1: Resolved Issues

Release Date: 05/10/2018

This hotfix addresses the issues below.

Issue

Component

Description

VER-62043 UI - Management Console

When creating an Eon Mode database through Management Console on an AWS cluster, entering a valid communal storage location and selecting IAM Role authentication did not enable the Next button.

This issue occurred only during database creation, not when creating an Eon Mode database cluster.

This issue has been fixed.

VER-62063 UI - Management Console

In the Database and Clusters > VerticaDB activity > Detail screen of an Eon database, some column text was not properly formatted.

This issue has been fixed.

VER-62045 Optimizer, Sessions

An internal EE error occurred when running several query retries and, at the same time, sequence objects referenced in the query were dropped and re-created.

This issue has been fixed so the retried query now picks up the re-created sequence.

VER-62148 Cloud - Amazon, UI - Management Console

When entering the Communal Storage URL for an Eon Mode database in the Management Console, some invalid forms of the URL were allowed.

This issue has been fixed.

VER-62118 UI - Management Console

In the Vertica Management Console (MC) in AWS, with some Vertica BYOL licenses, when using Cluster Management to add an instance to a Vertica database cluster, you were prompted to upload a license file even if your Premium Edition license was already installed. In the Add Instance wizard, you saw the message 'Your database exceeds the free Community Edition limits, please upload a valid Premium Edition license'.

This issue has been fixed.

Vertica 9.1.0: Resolved Issues

Release Date: 04/30/2018

To see a complete list of additions and changes introduced in this release, refer to the Vertica 9.1 New Features Guide.

Issue

Component

Description

VER-59892 Hadoop, SAL, Sessions Previously, Vertica accumulated sockets stuck in the CLOSE_WAIT state on non-Kerberized WebHDFS connections, indicating that the sockets were closed on the HDFS side but Vertica had not called close(). This issue has been fixed.
VER-15347 Data load / COPY Previously, COPY created files for saving rejected data as soon as the COPY started. Now, COPY creates rejected-data files only after a row has been rejected.
VER-60277 Optimizer Grouping on analytic function arguments that are complex expressions sometimes resulted in an internal Optimizer error that crashed the database. This issue has been fixed.
VER-58604 Execution Engine, FlexTable An INSERT query caused multiple nodes to crash with a segmentation fault. This issue has been fixed.
VER-53704 Cloud - Amazon, UDX Cancellation of long-running AWS UDx operations did not work properly. This release improves cancellation, and S3 export transactions are now eventually canceled. The S3 source still has some cancellation issues; however, it is being deprecated in favor of S3FS, which has none of those issues and is much faster.
VER-60368 AP-Advanced When upgrading a cluster from 8.1.1-4 to 9.0.1-2, no errors were reported, but the database would not start after the upgrade: the CLUSTER upgrade task kept looping and rolling back with an 'Invalid model name' error. This issue has been fixed.
VER-60454 S3 Before this release, some AWS UDx functions, including S3EXPORT, did not have explicit execution permissions and, in some cases, were not callable due to permission errors. This issue has been fixed.
VER-59055 Execution Engine If a query contained multiple PERCENTILE_CONT(), PERCENTILE_DISC(), or MEDIAN() functions with similar PARTITION BY clauses, and the query also had a LIMIT clause, execution occasionally failed due to a bug during cleanup. This problem has been resolved.
VER-56645 Basics The INSTR() function sometimes missed valid matches when the position parameter was set to a negative value. This problem was resolved.
VER-55542 Execution Engine Queries that specified both LIMIT and UNION ALL clauses failed to complete execution. This issue has been fixed.
VER-44795 Hadoop Sometimes, when a DataNode was overloaded or running out of memory, it sent incomplete HTTP messages over WebHDFS, in which the Content-Length field did not correspond to the actual length of the payload. This caused a CURL error. Vertica now tries to recover by requesting the data again with a longer timeout. If it does not succeed after about 3 minutes, Vertica terminates with a message like: "Error Details: transfer closed with 109314115 bytes remaining to read".
VER-59857 Optimizer In some cases, upgrading Vertica introduced inconsistencies in the catalog that caused fatal errors when it tried to update non-existent objects. Vertica now verifies that statistics objects exist before invalidating statistics for a given column.
VER-59567 Catalog Engine Previously, the TABLE_CONSTRAINTS system table incorrectly reflected a cached value for the constraint table name. There was no internal corruption. The code has been updated so that the table name reflects the correct value rather than the cached one.
VER-58529 Optimizer In certain queries with outer joins over simple subqueries, the Optimizer chose a sub-optimal outer table projection. This led to inefficient resegmentation or broadcast of the join data. This problem has been resolved.
VER-59123 Execution Engine Queries with window functions intermittently produced wrong results. This issue has been fixed.
VER-57129 Hadoop After a user connected to HDFS using the Vertica realm, users from other realms could not connect to HDFS. This behavior has been corrected.
VER-53488 Catalog Engine Vertica did not previously release catalog memory for objects that were no longer visible to any current or subsequent transaction. The Vertica garbage collector algorithm has been improved.
VER-61021 DDL When using ALTER NODE to change the MaxClientSessions parameter, the node's state changed from Standby or Ephemeral to Permanent. This issue has been fixed.
VER-59791 Client Drivers - JDBC For lengthy string values, the hash value computed by the JDBC implementation differed from the HASH() function on the Vertica server. This issue has been fixed.
VER-57757 Kafka Integration When using the start_point parameter, the KafkaJSONParser sometimes failed to parse nested JSON data, which led to all rows after the first being rejected. This issue has been fixed.
VER-60123 Client Drivers - ODBC The Vertica ODBC driver supports up to 133-digit precision for Numeric types bound to a decimal type. Previously, the Vertica ODBC driver threw a data conversion exception when the precision was over the 133-digit limit. Now, the ODBC driver truncates Numeric values with precision over 133 digits.
VER-53943 Client Drivers - ADO An error handling issue sometimes caused the ADO.NET driver to hang when a connection to the server was lost. This problem has been corrected.
VER-36453 Client Drivers - ODBC The COPY LOCAL statement can appear only once in a compound query. It must be the first statement in a compound query.
VER-60535 Optimizer Running a MERGE statement resulted in the error "DDL statement interfered with this statement". This issue has been fixed.
VER-60887 Backup/DR

Backups to S3 failed when both of the following occurred:

  • You back up to the root of the S3 bucket.
  • The backup location reaches the restorePointLimit.

This issue has been fixed.

VER-59314 Backup/DR Previously, Python script failures on the remote host during a vbr task could result in the error message "No JSON object could be decoded." Vertica now displays a more meaningful error message.
VER-58149 Backup/DR During a restore, Vertica did not verify that snapshot metadata had been successfully copied to the initiator node before using those files, which resulted in an unhelpful error message. Vertica now reports a more meaningful error when this case occurs.
VER-58068 Scrutinize Sometimes scrutinize timed out during diagnostic collection, producing diagnostic output from a single host instead of from the whole cluster. The scrutinize timeouts have been increased.
VER-60510 Kafka Integration Previously, only pseudosuperuser/dbadmin users could create Kafka schedulers; non-privileged users needed operation privileges granted to them in order to use a scheduler, and the scheduler tables always belonged to pseudosuperuser/dbadmin users. Now, non-privileged Vertica users can run the vkconfig utilities to create and operate a scheduler directly, and the scheduler's tables automatically belong to the user who created the scheduler.
VER-59994 Optimizer

The default value of MaxParsedQuerySizeMB has changed from 512MB to 1024MB. The original default bounded only a certain amount of "used" parse memory; the new default bounds all parse memory. As a result, some queries that previously ran successfully may now encounter the error "Request size too big. Please try to simplify the query". This is not a regression.

To successfully run the query, increase the value of MaxParsedQuerySizeMB and reset the session.
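A minimal sketch of that workaround (the database name mydb is a placeholder; the parameter name comes from this note):

```sql
-- Illustrative only: raise the parse-memory bound at the database level,
-- then open a new session so the change takes effect.
ALTER DATABASE mydb SET MaxParsedQuerySizeMB = 2048;
```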

VER-60042 Optimizer When running a query with one or more nodes down, an inconsistency in plan pruning with the buddy plan sometimes resulted in an internal Optimizer error. This issue has been fixed.
VER-51143 Backup/DR Previously, vbr failed on full restore and copy cluster tasks when there was only one database on the cluster and no dbName parameter was specified in the vbr configuration file. This issue has been resolved.
VER-60665 Backup/DR Object restore/replication used to crash a node when restoring/replicating from a backup/snapshot that contains sequences with the default minimum value. This issue is resolved and such sequences are now restored gracefully with the correct minimum value.

Vertica 9.1.0: Known Issues

Updated: August 10, 2018

Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.

Known Issues

Issue

Component

Description

VER-61069 Execution Engine

In very rare circumstances, if a Vertica process crashes during shutdown, the remaining processes might hang indefinitely.

Workaround: The remaining processes can be killed using admin tools.

VER-59235 UI - Management Console MC LDAP user authentication does not support changing the default search path. By design, the default search path is supposed to stay unchanged and serve as the base LDAP search path. Change only the user search attribute to retrieve user information from the LDAP server.
VER-60797 License

AutoPass-format licenses do not work properly when installed in Vertica 8.1 or older. To replace them with a legacy Vertica license, set the AllowVerticaLicenseOverWriteHP configuration parameter to 1.
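A minimal sketch of that workaround (assumes superuser privileges; the parameter name comes from this note):

```sql
-- Illustrative only: allow a legacy Vertica license to overwrite an
-- installed AutoPass-format license.
SELECT SET_CONFIG_PARAMETER('AllowVerticaLicenseOverWriteHP', 1);
```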

VER-60642 Data Export

Export to Parquet using local disk can non-deterministically fail in some situations, with "No such file or directory". Re-running the same export will likely succeed.

VER-58168 Recovery

A transaction that started before a node began recovery is referred to as a dirty transaction in the context of recovery. A recovering node must wait for, or cancel, such a transaction in order to recover tables modified by it. In rare instances such a transaction may hang and cannot be canceled. Tables locked by the transaction cannot be recovered, so the recovering node cannot transition to UP. Usually you can stop the hung transaction by restarting the node on which it was initiated, assuming this is not a critical node. In extreme cases, or when the initiator node is a critical node, the cluster must be restarted.

VER-56679 Nimbus, SAL When generating an annotated query, the optimizer does not recognize that the ETS flag is ON and produces the annotated query as if ETS is OFF. If the resulting annotated query is then run when ETS is ON, some hints might not be feasible.
VER-48041 Admin Tools

On some systems, occasionally admintools cannot parse the output it sees while running SSH commands on other hosts in the cluster. The issue is typically transient. There is no known work-around.

VER-54924 Admin Tools

On databases with hundreds of storage locations, admintools SSH communication buffers can overflow. The overflow can interfere with database operations like startup and adding nodes. There is no known work-around.

VER-48020 Hadoop Canceling a query that involves loading data from ORC or Parquet files can be slow if there are too many files involved in the load.
VER-41895 Admin Tools

On some systems admintools cannot parse output while running SSH commands on hosts in the cluster. We do not know the root cause of this issue. In some situations, if the admintools operation needs to run on just one node, there is a workaround. Users can avoid this issue by using SSH to connect to the target node and running the admintools command on that node directly.

VER-61433 Hadoop Under heavy concurrency, when querying ORC files in a Kerberized High Availability HDFS environment, it is possible for the Vertica process on a single node to crash.
VER-57126 Data Removal - Delete, Purge, Partitioning

Partition operations that use a range (for example, COPY_PARTITIONS_TO_TABLE) must split storage containers that span the range in order to complete. For tables partitioned using a GROUP BY expression, such split plans can require a relatively large amount of memory, especially when the table has a large number of columns. In some cases the partition operation may fail with "ERROR 3587: Insufficient resources to execute plan on pool <poolname>".

Workaround: Increase the memorysize or decrease the plannedconcurrency of <poolname>.

Hint: A best practice is to group partitions such that it is never necessary to split storage containers. Following this guidance greatly improves the performance of most partition operations.

VER-60409 AP-Advanced, Optimizer

The APPLY_SVD and APPLY_PCA functions are implemented as User-Defined Transform Functions (UDTFs). A query containing a UDTF that takes many input columns (e.g. more than 1000) and also returns many columns may fail with error "Request size too big" due to additional memory requirement in parsing.

Workaround: Increase configuration parameter MaxParsedQuerySizeMB or reduce the number of output columns. For the cases of APPLY_SVD and APPLY_PCA, you can limit the number of output columns either by setting the parameter num_components or by setting the parameter cutoff. In practice, cutoff=0.9 is usually enough.

Note that if you increase MaxParsedQuerySizeMB to a larger value, for example 4096, each query you run may use 4 GB of memory during parsing. This means that running multiple queries at the same time could cause out-of-memory (OOM) errors if your total memory is limited. Refer to the Vertica documentation for more information about MaxParsedQuerySizeMB.
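A minimal sketch of the num_components workaround (the model and table names are placeholders):

```sql
-- Illustrative only: cap the number of columns APPLY_PCA returns.
SELECT APPLY_PCA(* USING PARAMETERS model_name = 'pca_model',
                                    num_components = 10) OVER ()
FROM wide_table;
```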

VER-59147 AP Advanced

Using a machine learning (ML) function that accepts a model as a parameter in the definition of a view, and then feeding that view into another ML function as its input relation, may cause failures in some special cases.

Workaround: You should always prefix a model_name with its appropriate schema_name when you use it in the definition of a view.
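A minimal sketch of that workaround (all names are placeholders; PREDICT_LINEAR_REG stands in for any ML function that takes a model parameter):

```sql
-- Illustrative only: schema-qualify the model name inside the view.
CREATE VIEW scored AS
    SELECT PREDICT_LINEAR_REG(x1, x2
               USING PARAMETERS model_name = 'my_schema.my_model') AS y_hat
    FROM input_data;
```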

VER-61420 Data Removal - Delete, Purge, Partitioning Partition operations, such as move_partitions_to_table, must split storage containers that have both partitions that match the operations and partitions that do not. Vertica 9.1.0 introduced an inefficiency whereby such a split may separate a storage container into one more storage container than necessary. Any extraneous containers created this way will eventually be merged by the Tuple Mover.
VER-60158 Client Drivers - JDBC Using a single connection in multiple threads could result in a hang if one of the threads does a COMMIT or ROLLBACK without joining the other thread first.
VER-61205 Basics, Catalog Engine

If a configuration parameter value in vertica.conf begins with the # character, Vertica crashes with an unhandled error. For example:

# LDAPLinkBindPswd = #A^F&pGt2J9S#

LDAPLinkFilterGroup = #cn=EDW*

# LDAPLinkFilterUser = cn=*

Workaround: Avoid using parameter values beginning with "#" in the vertica.conf file.

VER-61584 Nimbus, Subscriptions The assertion VAssert(madeNewPrimary) can fail. This occurs only while nodes are shutting down or in an unsafe state.
VER-62000 UI - Management Console

When creating an Eon Mode database through Management Console on an AWS cluster, entering a valid communal storage location and selecting IAM Role authentication does not enable the Next button. This issue occurs during database creation only, not when creating an Eon Mode database cluster.

Workaround: On the same wizard page, select “Use AWS Key Credentials”, enter enough text to enable the Next button, then select IAM Role authentication. Then click the Next button.

VER-61362 Nimbus, Subscriptions

During cluster formation, when one of the up-to-date nodes is missing libraries and attempts to recover them, the recovery fails with a cluster shutdown.

Workaround: Copy libraries into the node's Libraries/ directory from a peer node.

VER-61876 UI - Management Console

In the Vertica Management Console (MC) in AWS, with some Vertica BYOL licenses, when using Cluster Management to add an instance to a Vertica database cluster, you are prompted to upload a license file even if your Premium Edition license is already installed. In the Add Instance wizard, the message 'Your database exceeds the free Community Edition limits, please upload a valid Premium Edition license.' appears.

Workaround: Upload your license file again and the instance and database node are added.

 


Legal Notices

Warranty

The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

The information contained herein is subject to change without notice.

Restricted Rights Legend

Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

Copyright Notice

© Copyright 2006 - 2018 Hewlett-Packard Development Company, L.P.

Trademark Notices

Adobe® is a trademark of Adobe Systems Incorporated.

Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.

UNIX® is a registered trademark of The Open Group.


Send documentation feedback to HPE