The Release Notes contain the latest information on new features, changes, fixes, and known issues in Vertica 9.0.x.
They also contain information about issues resolved in:
The Premium Edition of Vertica is available for download at www.vertica.com.
The Community Edition of Vertica is available for download at the following sites:
The documentation is available at http://www.vertica.com/docs/9.0.x/HTML/index.htm.
Hotfixes are available to Premium Edition customers only. Each software package on the vertica.com/downloads site is labeled with its latest hotfix version.
Take a look at the Vertica 9.0.1 New Features Guide for a complete list of additions and changes introduced in this release.
In Vertica 9.0.1, the Administration Tools, a graphical user interface that works in terminal windows, includes the option to create new Eon Mode Beta databases. Previously, this option was available only through the command line and Management Console with Provisioning.
You can now use ALTER TABLE to add and delete table columns. Previously this functionality was available only in Enterprise Mode.
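A minimal sketch of the syntax, using hypothetical table and column names:

```sql
-- Add a column with a default value, then drop another column.
-- Table and column names here are illustrative only.
ALTER TABLE public.orders ADD COLUMN ship_region VARCHAR(32) DEFAULT 'UNKNOWN';
ALTER TABLE public.orders DROP COLUMN legacy_flag;
```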
The storage administrator can now use MC to view the path for the Depot.
Reduced AWS infrastructure costs:
See more: Eon Mode Beta
Directed queries are now easier to use with the new EXPLAIN ANNOTATED option, which shows the directed query for a given input query.
See more: Using Optimizer-Generated and Custom Directed Queries Together
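As a sketch, the new option is invoked by prefixing a query; the table below is hypothetical:

```sql
-- Display the directed query generated for the input query.
EXPLAIN ANNOTATED SELECT customer_id, SUM(amount)
FROM public.sales
GROUP BY customer_id;
```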
Vertica version 9.0.0 introduced support for the UUID data type. Most 9.0.0 client libraries provided basic support for the new data type. With version 9.0.1, client libraries ADO.NET, ODBC, and OLE DB fully support the UUID data type.
See more: Client Libraries
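For reference, UUID columns are declared and loaded like any other type; whether a client binds them natively depends on the driver versions above. A minimal sketch with a hypothetical table:

```sql
-- UUID is a native column type; the literal below is cast explicitly.
CREATE TABLE public.sessions (session_id UUID, started TIMESTAMP);
INSERT INTO public.sessions
    VALUES ('12345678-1234-1234-1234-123456789012'::UUID, NOW());
```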
Machine Learning for Predictive Analytics now supports the following functionality:
See more: Data Analysis
Backup, restore, and recovery now supports the following functionality:
VBR Wildcard Support: Vertica supports the use of wildcard characters to include or exclude database objects from your backup, restore, and replication tasks. You can use wildcards in your vbr .ini file or as vbr command line parameters. For more information, refer to Using Wildcards with Backup, Restore, and Replicate.
S3 Backup Encryption: Backups made to Amazon S3 can be encrypted using native server-side S3 encryption capability.
Object Restore and Replication to a Newer Version of Vertica: Beginning with version 9.0.0-2, Vertica supports object replication and restore to a version up to one minor version later than the current database version. For example, you can replicate or restore objects from a version 9.0.0-2 database to a version 9.0.1 database.
See more: Backup, Restore and Recovery
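As a sketch of the wildcard support described above, a vbr configuration file might select objects like this (the section placement, schema, and patterns are illustrative; see Using Wildcards with Backup, Restore, and Replicate for exact syntax):

```ini
; Hypothetical vbr .ini fragment: back up every table in the store
; schema except tables whose names begin with "temp".
[Misc]
includeObjects = store.*
excludeObjects = store.temp*
```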
Apache Hadoop integration now supports the following functionality:
HCatalog Connector Supports Ranger for Hive Authorization: The HCatalog Connector now integrates with Ranger to manage authorization for Hive data. (Support for a similar service, Sentry, was added previously.) You must connect to Hive using HiveServer2 (the default), not WebHCat, to use this feature.
For more information, see the HCatalog Connector section of Configuring Kerberos in Integrating with Apache Hadoop.
HCatalog Connector Supports HA Metastore: If HiveServer2 uses High Availability Metastore, you can direct the HCatalog Connector to take advantage of it. When using CREATE HCATALOG SCHEMA, you can specify a comma-delimited list for the value of the HOSTNAME parameter. Alternatively, you can omit the parameter and Vertica reads it from hive-site.xml.
See more: Apache Hadoop Integration
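A minimal sketch of the HA metastore option, assuming hypothetical host names:

```sql
-- List multiple metastore hosts in HOSTNAME; alternatively, omit the
-- parameter and let Vertica read the value from hive-site.xml.
CREATE HCATALOG SCHEMA hcat
    WITH HOSTNAME='hms1.example.com,hms2.example.com' HCATALOG_SCHEMA='default';
```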
This hotfix addresses the issues below.
Issue | Component | Description |
VER-65892 | Kafka Integration | Vertica has upgraded librdkafka to 0.11.5 to improve performance of Kafka Copy. |
VER-66752 | Kafka Integration | Vertica now properly supports TLS certificate chains for use with Kafka Scheduler and UDx. |
VER-67418 | UDX | Flex table views now properly show UTF-8 encoded multi-byte characters. |
VER-68903 | Backup/DR | Object restore or replication to a cluster that contains a standby or execute node no longer has the potential to cause the destination node to fail. |
VER-69640 | Kafka Integration | Vertica no longer fails when it encounters improper Kafka SSL key/certificate setup error information on RedHat Linux. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-68431 | License | Upgrading an 8.1.1-x database with an Autopass license installed to 9.0 or later could lead to license-tampering issues at startup. This problem has been fixed. |
VER-66707 | Scrutinize | The scrutinize command previously used the /tmp directory to store files during collection. It now uses the specified temp directory. |
Issue | Component | Description |
VER-67940 | Data load / COPY | Vertica could fail if its client disconnected during a COPY LOCAL with REJECTED DATA. This issue has been fixed. |
VER-67875 | Catalog Engine | Previously, if a DROP statement on an object failed and was rolled back, Vertica would generate a NOTICE for each dependent object. This was problematic in cases where the DROP operation had a large number of dependencies. This issue has been resolved: Vertica now generates up to 10 messages for dependent objects, and then displays the total number of dependencies. |
VER-67509 | Execution Engine | Previously, RUN_INDEX_TOOL analyzed all projections and executed on only one thread. In addition to general performance improvements, the meta-function now supports two new parameters that can significantly improve index tool performance: |
VER-67507 | Execution Engine | Users were unable to cancel meta-function RUN_INDEX_TOOL. This problem has been resolved. |
VER-67418 | UDX | Flex table views now properly show UTF-8 encoded multi-byte characters. |
VER-67416 | Optimizer | Queries that perform an inner join and group the results now return consistent results. |
VER-66707 | Scrutinize | The scrutinize command previously used the /tmp directory to store files during collection. It now uses the specified temp directory. |
Issue | Component | Description |
VER-66744 | UI - Management Console | The MC would not properly display the license chart on the license tab when license usage exceeded 10TB. This issue has been resolved. |
VER-66767 | Hadoop | Export to Parquet previously crashed when the export included a combination of Select statements. This issue has been fixed. |
VER-67005 | UI - Management Console | While logged into the MC as a non-admin user, the table in the Load page's "Continuous tab" would periodically not show all the current running Vertica schedulers that are in use. This issue has been resolved. |
VER-66732 | UI - Management Console | Running the 'Explain' query option for an unsegmented projection on the MC Query Plan page could trigger the error: "There is no metadata available for this projection". This issue has been resolved. |
VER-66720 | Data Removal - Delete, Purge, Partitioning | The SWAP_PARTITIONS_BETWEEN_TABLES meta-function is now atomic. |
VER-66943 | UI - Management Console | When multiple schedulers are running on the Vertica database and one of them is configured or halted, the MC Load page now properly shows data about the remaining schedulers. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-65301 | Hadoop | Hive file format partition pruning requires all partition directory names to be in lower case. Previously, Export to Parquet sometimes generated partition directory names in mixed case, which caused the partition pruning optimization to be skipped on a load. This issue has been fixed. |
VER-65360 | UI - Management Console | License usage showed 0% when an Autopass license was used. This issue has been fixed. A Vertica server hotfix upgrade (same release number) is also required for this fix to take effect. |
VER-65564 | Kafka Integration | In rare circumstances, sessions using the KafkaSource COPY command could hang during connection teardown. This issue has been fixed by upgrading the underlying Kafka client library, rdkafka, from 0.9 to 0.11. The upgraded library contains protocol support for newer versions of Kafka. When communicating with a Kafka cluster of version 0.10 or newer, Vertica's Kafka interactions can take advantage of newer protocols and optimizations; to do so, set api.version.request=true in your rdkafka settings. In addition, the message.max.bytes rdkafka parameter now represents the maximum size of a batch of messages rather than the maximum size of a single message, for compatibility with newer versions of Kafka. Applications that set this parameter may need to increase its value to reflect the new protocol semantics. Both parameters can be changed via the kafka_conf argument of the Kafka UDx functions, or via the uds_kv_parameters load-spec option of vkconfig. For additional details, consult the documentation. |
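As a sketch, the rdkafka settings mentioned in VER-65564 can be passed through the kafka_conf argument; the broker, topic, and table names below are hypothetical:

```sql
-- Enable the newer Kafka protocol and raise the batch-size cap.
COPY public.web_events
    SOURCE KafkaSource(stream='clicks|0|-2',
                       brokers='kafka01.example.com:9092',
                       kafka_conf='api.version.request=true;message.max.bytes=10485760')
    PARSER KafkaJSONParser();
```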
This hotfix addresses the issues below.
Issue | Component | Description |
VER-65162 | Catalog Engine | If vertica.conf set a parameter to a value that started with a hash (#), Vertica ignored that parameter and logged a warning. This issue has been fixed. |
VER-65029 | Admin Tools | Adding a large number of nodes in a single operation could lead to an admintools error parsing output. This issue has been fixed. |
VER-65500 | Catalog Engine | Frequent create/insert/drop table operations caused a memory leak in the catalog. This issue has been fixed. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-64637 | Optimizer | Very large expressions could run out of stack and crash the node. This issue has been fixed. |
VER-64815 | UI - Management Console | The Database Designer wizard in Management Console did not correctly set the default value for design K-safety. This issue has been fixed. |
VER-64068 | AP-Advanced | If you ran APPROXIMATE_COUNT_DISTINCT_SYNOPSIS on a database table that contained NULL values, the synopsis object that it returned was sometimes larger than the one it returned after the NULL values were removed. This issue has been resolved. |
VER-64789 | Execution Engine | In some regular expression scalar functions, Vertica could crash for certain long input strings. This issue has been fixed. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-63450 | DDL - Table | Vertica placed an exclusive lock on the global catalog while it created a query plan for CREATE TABLE AS <query>. In this case, very large queries could prolong this lock until it eventually timed out. On rare occasions, the prolonged lock caused an out-of-memory exception that shut down the cluster. This issue has been fixed. |
VER-64497 | Catalog Engine, Spread | If a control node and one of its child nodes went down, attempts to restart the second (child) node sometimes failed. This issue has been fixed. |
VER-63993 | DDL | ALTER TABLE ... ADD COLUMN ... NOT NULL is transformed into two statements: ALTER TABLE ADD COLUMN and ALTER TABLE ADD CONSTRAINT. The ADD CONSTRAINT statement could fail and leave a new column in your table without the constraint. It also produced a confusing rollback message that did not indicate that the column was added. This issue has been fixed. |
VER-64572 | Tuple Mover | Heavy workloads could cause the Tuple Mover to perform operations for an extended period of time. This led to memory accumulation before the Tuple Mover exited its session and released the memory. This issue has been fixed. |
VER-64306 | Optimizer | When a projection was created with only the "PINNED" keyword, Vertica incorrectly considered it a segmented projection. This caused optimizer internal errors and incorrect results when loading data into tables with these projections. This issue has been fixed. IMPORTANT: The fix applies only to newly created pinned projections. Existing pinned projections in the catalog are still incorrect and must be dropped and recreated manually. |
VER-64494 | Data load / COPY | The SKIP keyword of a COPY statement was not properly supported with the FIXEDWIDTH data format. This issue has been fixed. |
VER-64794 | DDL - Table | Previously, the query label applied only to CTAS statements and was missing from the implicit INSERT ... SELECT statement. With this fix, the query label is applied to both statements. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-63605 | Data load / COPY | In a COPY statement, excessively long invalid inputs to any date or time columns could cause stack overflows, resulting in a crash. This issue has been fixed. |
VER-64289 | UI - Management Console | Long queries were failing in the MC 'Query Execution' page. This issue has been resolved. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-63366 | Admin Tools, UI - Agent | On some Debian 7 systems, the Vertica agent process would not start during installation. This issue has been fixed. |
VER-63678 | UDX, Vertica Log Text Search | The AdvancedTextSearch package leaked memory. This issue has been fixed. |
VER-63716 | UI - Management Console | The jackson-databind library used by MC has been upgraded to 2.9.5 to fix security vulnerabilities. |
VER-63896 | Execution Engine, Hadoop | If a Parquet file's metadata was very large, Vertica could consume more memory than reserved and crash when the system ran out of memory. This issue has been fixed. |
VER-63898 | Data load / COPY, Hadoop | Queries involving a join of two external tables loading Parquet files sometimes caused Vertica to crash. The crash happened in a very rare situation due to memory misalignment and has been fixed. |
VER-63864 | Hadoop | Certain large Parquet files generated by ParquetExport could contain a lot of metadata, leading to out-of-memory issues. A new parameter, 'fileSizeMB', has been added to ParquetExport to limit the size of exported files, thereby limiting the metadata size. |
VER-63831 | Data Export, Execution Engine, Hadoop | Occasionally, the EXPORT TO PARQUET command wrote data of the timestamp, date, and time types to incorrect partition directories. This issue has been fixed. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-63547 | Data load / COPY | Occasionally, a COPY or external table query could crash a node. This issue has been fixed. |
VER-63677 | UI - Management Console | An error dialog was displayed on the Load page's 'Instance' tab when a longer time interval was selected. This issue has been fixed. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-63192 | Optimizer - Join & Data Distribution | Optimized MERGE was sometimes not used with NULL constants in varchar or numeric columns. This issue has been fixed. |
VER-63252 | Optimizer | An internal error would sometimes occur in queries with subqueries containing a mix of outer and inner joins. This issue has been fixed. |
VER-63296 | Execution Engine | In some workloads, the memory consumed by a long-running session could grow over time. This issue has been fixed. |
VER-62853 | Optimizer - GrpBy & Pre Pushdown | Including certain functions, such as NVL, in a grouping function argument resulted in a "Grouping function arguments need to be group by expressions" error. This issue has been fixed. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-62888 | Diagnostics | Scrutinize was collecting the logs only on the node where it was executed, not on any other nodes. This issue has been fixed. |
VER-62972 | Data load / COPY | COPY statements using the default parser could erroneously reject rows if the input data contained overflowing floating point values in one column and the maximum 64-bit integer value in another column. This issue has been fixed. |
VER-63073 | Optimizer | Applying an access policy on some columns in a table significantly impacted execution of updates on that table, even where the query excluded the restricted columns. Further investigation determined that the query planner materialized unused columns, which adversely affected overall performance. This issue has been fixed. |
VER-63027 | DDL - Projection | Some versions of Vertica allowed projections to be defined with a subquery that referenced a view or another projection. These malformed projections occasionally caused the database to shut down. Vertica now detects these errors when it parses the DDL and returns a message stating that projection FROM clauses must only reference tables. |
VER-62970 | Execution Engine | Loads and copies from HDFS occasionally failed if Vertica was reading data through libhdfs++ when the NameNode abruptly disconnected from the network. libhdfs++ was being compiled with a debug flag on, which caused network errors to be caught in debug assertions rather than normal error handling and retry logic. This issue has been fixed. |
VER-60984 | Hadoop | The ORC parser skipped rows when the predicate "IS NULL" was applied on a column containing all NULLs. This issue has been fixed. |
VER-63076 | Data load / COPY | INSERT DIRECT failed when the configuration parameter ReuseDataConnections was set to 0 (disabled). When this parameter was disabled, no buffer was allocated to accept inserted data. Now, a buffer is always allocated to accept data regardless of how ReuseDataConnections is set. |
VER-63030 | Backup/DR | Object replication and restore left behind temporary files in the database Snapshot folder. This fix ensures such files are properly cleaned up when the operation completes. |
VER-62747 | Admin Tools, Nimbus | Sometimes, adding many nodes in a single operation would fail with a message about invalid control characters. This issue has been fixed. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-62508 | Execution Engine, Optimizer | The way null values were detected for float data type columns led to inconsistent results for some queries with median and percentile functions. This issue has been fixed. |
VER-62511 | AP-Advanced, Sessions | Using an ML function that accepts a model as a parameter in the definition of a view, and then feeding that view into another ML function as its input relation, caused failures in some special cases. This issue has been fixed. |
VER-62420 | Optimizer | Attempts to swap partitions between flattened tables that were created in different versions of Vertica failed, due to minor differences in how source and target SET USING columns were internally defined. This issue has been fixed. |
VER-62464 | Vertica Log Text Search | Dependent index tables did not have their reference indices updated when the base table's columns were altered but not removed. This caused an internal error on reference. This issue has been fixed so that the indices are updated when you alter the base table's columns. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-61994 | Optimizer | Some instances of type casting on a datetime column that contained a value of 'infinity' triggered segmentation faults that caused the database to fail. This issue has been fixed. |
VER-62146 | Error Handling, Execution Engine | If you tried to insert data that was too wide for a VARCHAR or VARBINARY column, the error message did not specify the column name. The error message now includes the column name. |
VER-62150 | FlexTable | Flex keys data type guessing now guesses UUID data as the native Vertica UUID type. |
VER-62095 | Data Export | Export to Parquet using the local disk non-deterministically failed in some situations with a 'No such file or directory' error message. This issue has been fixed. |
This hotfix addresses the issue below.
Issue | Component | Description |
VER-60730 | Catalog Engine | Vertica previously did not release catalog memory for objects that were no longer visible to any current or subsequent transactions. The Vertica garbage collector algorithm has been improved. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-61346 | Execution Engine | Running a large query with several analytic functions returned the wrong results. This issue has been fixed. |
VER-61488 | Client Drivers - JDBC | For lengthy string values, the hash value computed by the JDBC implementation differed from the hash() function on the Vertica server. This issue has been fixed. |
VER-61487 | Kafka Integration | A non-DBADMIN user could not launch the Kafka scheduler. This issue has been fixed. |
VER-61210 | Scrutinize | Scrutinize was collecting the logs only on the node where it was executed, not on any other nodes. This issue has been fixed. |
VER-61409 | DDL | When using ALTER NODE to change the MaxClientSessions parameter, the node's state changed from Standby or Ephemeral to Permanent. This issue has been fixed. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-61078 | Client Drivers - ADO | An issue with VStream in the Vertica client resulted in a "Socket connection error handling" error message and caused the client to crash. This issue has been fixed. |
VER-61190 | Backup/DR | Backing up to S3 failed when both of the following occurred: the backup targeted the root of the S3 bucket, and the backup location reached the restorePointLimit. This issue has been fixed. |
VER-60995 | Backup/DR | Running replication on a schema that contains sequences resulted in an error causing the replication to fail. In addition, it caused a node crash on the secondary cluster. This issue has been fixed. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-60762 | AP-Advanced | When upgrading a cluster from 8.1.1-4 to 9.0.1-2, no errors were reported, but the database would not start after the upgrade: the Compute Cluster upgrade change task kept looping and rolled back with an Invalid model name error. This issue has been fixed. |
VER-60927 | Optimizer | Running SQL queries resulted in an internal Optimizer error causing the database to crash. This issue has been fixed. |
VER-60761 | Optimizer | Running a merge statement resulted in the error "DDL statement interfered with this statement". This issue has been fixed. |
VER-60791 | Optimizer | When running a query with one or more nodes down, in some cases an inconsistency in plan pruning with buddy plan resulted in an Internal Optimizer error. This issue has been fixed. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-60150 | Optimizer | The default value of MaxParsedQuerySizeMB has changed. The original default was 512MB, which bounded only a certain amount of "used" parse memory. The default is now 1024MB, which bounds all parse memory. Some queries that previously ran successfully may now encounter this limit. |
VER-60176 | Execution Engine, FlexTable | An insert query caused multiple nodes to crash with a segmentation fault. This issue has been fixed. |
VER-60357 | Catalog Engine | After renaming a table and changing the schema to which the table belongs, the TABLE_NAME field in the TABLE_CONSTRAINTS system table was not updated. This issue has been fixed. |
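Sites whose queries hit the new MaxParsedQuerySizeMB bound can raise the parameter; the value below is illustrative, not a recommendation:

```sql
-- Raise the parse-memory cap database-wide (value in MB).
SELECT SET_CONFIG_PARAMETER('MaxParsedQuerySizeMB', 2048);
```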
This hotfix addresses the issues below.
Issue | Component | Description |
VER-60055 | Hadoop | After a user connected to HDFS using the Vertica realm, users from other realms could not connect to HDFS. This behavior has been corrected. |
VER-60120 | Hadoop, SAL, Sessions | Previously, Vertica built up sockets stuck in a CLOSE_WAIT state on non-Kerberized WebHDFS connections. This state indicates that the socket was closed on the HDFS side but Vertica had not called close(). This issue has been fixed. |
VER-60011 | Optimizer | There was an internal issue with AnalyzeRowCount that caused a node to crash. To work around this issue, disable AnalyzeRowCount using the following command: |
VER-59992 | DDL - Table | Dropping an Identity column resulted in loss of all the data in the table. This issue has been fixed. |
This hotfix addresses the issues below.
Issue | Component | Description |
VER-59826 | Execution Engine | At times, Vertica crashed when running queries with both of the following: This issue has been fixed. |
VER-59753 | DDL | The database crashed when attempting to access the catalog when a catalog snapshot did not exist. This issue has been fixed. |
VER-59754 | Recovery | If you turned off RecoverByContainer, LAP could be skipped when recovering data on a recovered node. This issue has been fixed: the node is now prevented from recovering if it detects that LAP cannot perform a recover by container, until you turn RecoverByContainer back on. |
VER-59368 | AP-Advanced | The functions APPROXIMATE_COUNT_DISTINCT_OF_SYNOPSIS and APPROXIMATE_COUNT_DISTINCT_SYNOPSIS are not backward compatible with 7.x. As a workaround, you can make them behave as they did in 7.x. To make ACD perform the same as 7.x in 9.0SP1: 1. Drop the ACD library by running /opt/vertica/packages/approximate/ddl/uninstall.sql. 2. Install the deprecated ACD library by running approximate/ddl/install_deprecated.sql. To switch back to the new ACD in 9.0SP1: 1. Drop the deprecated ACD library by running approximate/ddl/uninstall.sql. 2. Install the new ACD library by running approximate/ddl/install.sql. |
VER-59095 | Execution Engine | Queries with a combination of LIMIT and UNION ALL clauses were hanging during execution. This issue has been fixed. |
This hotfix contains a fix that allows Amazon cloud (AWS) customers to run Vertica in Eon Mode Beta in region us-west-2.
To see a complete list of additions and changes introduced in this release, refer to the Vertica 9.0.1 New Features Guide.
Issue | Component | Description |
VER-58319 | Catalog Engine | A catalog slowdown caused by memory cleanup in the catalog has been fixed. |
VER-55961 | AP-Advanced, Database Designer Core | Analyze_correlations() would sometimes produce an error if run on columns with extremely high distinct value count. This issue has been fixed. |
VER-55605 | Backup/DR | Previously, vbr required a --task <task_name> when using the --showconfig command. --showconfig can now be used without --task. |
VER-55517 | Kafka Integration | Previously, if a node was shut down while the stream scheduler was running a microbatch, the scheduler would exit with an error. This issue has been resolved so that the scheduler will continue running when a node gets shut down. |
VER-58135 | Database Designer Core | When the configuration parameter EnableNewPrimaryKeysByDefault is set to true and the DBD tries to optimize a table with a large number of columns (approximately >800), the DBD could abort with the error "DDL statement interfered with Database Designer". |
VER-56882 | Backup/DR | After taking backups to a shared filesystem such as NFS, deleting backups with the vbr "remove" task could fail with a "No such file or directory" error trying to open a local temporary file. This issue has been resolved. |
VER-57259 | DDL | CREATE VIEW statements sometimes hung and caused the server to fail if the view was selecting from hdfs sources. This issue has been resolved. |
VER-57233 | AP-Advanced | Use AGGREGATE FUNCTION "public.ApproxCountDistinctOfSynopsisDeprecated" to process existing synopsis objects. |
VER-56383 | ResourceManager, Security | After being granted usage and select privileges on a schema and table in a specific resource pool, the user was denied permission when trying to access the schema and table. This was because Vertica was checking permissions on secondary resource pools. |
VER-54866 | License | Running License Audit tasks took several hours and negatively affected performance. This issue has been resolved with several performance and optimization improvements. |
VER-55536 | UI-Management Console | Vertica MC now properly updates available memory information in response to hardware changes on your nodes. |
VER-57855 | DDL - Table | Previously, creating an external table with default expressions triggered an error and canceled table creation. With this fix, you can now create external tables with default expressions. |
VER-55890 | Data Load / COPY | In a load from multiple files, COPY LOCAL no longer sometimes attributes rejected data to the wrong file names. |
VER-36735 | Data Load / COPY | The current_load_source() function now works correctly when used with COPY LOCAL. |
VER-56172 | Hadoop / SAL | Previously, querying external tables sometimes failed with an error if HDFS was redistributing data blocks at the same time the query tried to access a block. With this fix, the query will first attempt to read a replica of the block from a different datanode before failing. |
VER-53975 | Optimizer | Previously, executing a query using the ENABLE_WITH_CLAUSE_MATERIALIZATION hint sometimes returned different results depending on whether or not the query was executed in prepared statement mode. With this fix, use of ENABLE_WITH_CLAUSE_MATERIALIZATION in prepared statement mode is now disabled. |
VER-57528 | Hadoop | Vertica fixed a bug where no results were returned when a column had all NULL values and the IS NULL predicate was used. |
VER-56510 | AP - Advanced | Previously, some machine learning functions failed on views or subqueries with computed/constant columns. This is fixed in this release. |
VER-56461 | Data load / COPY, Security | Usage of COPY option REJECTED DATA AS TABLE incorrectly logged an error message. Usage of this option no longer generates an error. |
VER-58202 | Hadoop | Vertica implemented a caching optimization that reduces the number of calls made to the HDFS Namenode. |
VER-28482 | Data Removal - Delete, Purge, Partitioning | Previously, when using SET_OBJECT_STORE_POLICY, if you set the storage key to a date, in the policy_details column, the STORAGE_POLICIES system table displayed an internal value for the date. Now the STORAGE_POLICIES system table displays the user-readable date instead of the internal value under policy_details. |
VER-57990 | Hadoop | Queries to external tables that use the hdfs scheme used to intermittently fail with an empty error message. This issue has been fixed. |
VER-56158 | SDK | The Amazon Web Services UDx library is now explicitly installed in the public schema. Doing so prevents the library from being installed in another schema when the schema in use is not the public schema. |
VER-55569 | UI - Management Console | Rejections were not reported in the MC Load screen when a COPY command was executed with rejections written to a file, as opposed to a rejections table. The code was fixed to check the dc_load_events table for the rejections count. Note that, as a Data Collector table, dc_load_events is subject to rollover depending on configuration and database activity. |
VER-28239 | Data load / COPY | When using the REJECTED DATA AS <filename> clause with COPY statements, pre-9.0SP1 <filename> could be treated as either a directory or a file, and the choice was made independent of user input depending on the number of sources. This could lead to confusion, as in the case of globs: if a glob expanded to just one source, <filename> was treated as a file; if it expanded to multiple sources, it was treated as a directory. This could result in unpredictable behavior depending on the actual files a user was trying to load. This has now been fixed. 1. In REJECTED DATA AS <filename>, if the filename is appended with '/', it is treated as a directory regardless of the number of sources. 2. If the '/' is omitted but <filename> already exists as a directory, it is still treated as a directory regardless of the number of sources. 3. If the '/' is omitted, <filename> does not already exist, and there are multiple sources, it is treated as a directory. (This is unchanged behavior, but listed here for completeness.) |
VER-58624 | Optimizer | When inserting data to a flattened table from a SELECT query (i.e., INSERT INTO ... SELECT), the insert query incorrectly populated NULL into the column with a DEFAULT query, if implicit type casting was needed. This issue has been fixed. |
VER-43810 | AP-Advanced, SDK | Previously, when a user-defined transform function (UDTF) selected a column from a subquery where that column was a computed expression or a constant with an alias (e.g. "SELECT x/2 as half_x"), the SDK function for retrieving column names in the UDTF would return an empty string for that column name instead of returning the correct alias. Now, the correct alias will be returned. |
VER-57753 | AP-Advanced | Previously, some machine learning functions failed on views or subqueries with computed or constant columns. This was fixed in this release. |
VER-56089 | AP-Advanced | K-means clustering limited the number of clusters to 1,000. This issue has been resolved; the limit is now 10,000. |
VER-55439 | Optimizer | When DDL changes a catalog object that a query is reading, the query used to fail with an error stating "DDL interfered with this query". Now, in most cases, the query is automatically retried. |
VER-57776 | Client Drivers - ADO | Transferring decimal data from Vertica using an ADO.NET client sometimes produced incorrect results if the BinaryTransfer connection parameter was set to true. This issue has been resolved. |
VER-27811 | Optimizer | Previously, querying the projections_column table sometimes incorrectly displayed the most recent analyze event as FULL if the rowcount of the table in question changed. Querying the projections_column table now correctly displays the most recent analyze event as ROWCOUNT after a change in table rowcount. |
VER-56891 | Security | Previously, the network port connection used by the Management Console to communicate with nodes accepted the older DES and 3DES ciphers, which have a known vulnerability. This connection no longer accepts connections that use these ciphers. |
VER-45304 | Backup/DR | In previous releases, copycluster would overwrite the target database's spread control mode configuration setting to match the source database's. When running copycluster from an on-premises database configured to use the default "broadcast" setting to a database running on AWS EC2, the database on EC2 could not start, because EC2 requires a "point-to-point" setting. This issue has been resolved by preserving the spread control mode setting of the destination database during copycluster. |
VER-54255 | DDL, Security | Previously, when the 'SET ROLE' statement was run, queries on local temporary tables created afterwards in the same session could hit "DDL interfere" errors. This issue has been resolved. |
VER-57578 | Hadoop, Security | This fix adds SSL support for HCatalogConnector with HiveServer2. |
VER-57379 | Data load / COPY | The storage formats of some columns in Vertica primary key and foreign key tables negatively impacted the performance of COPY. These storage formats have been changed to improve performance. |
VER-56509 | Hadoop | CREATE TABLE AS queries of ORC or Parquet data with many partitions (tens of thousands) were slow. More intelligent partition pruning has improved the performance. |
VER-56988 | Data Load/COPY | After using COPY LOCAL, the function get_num_rejected_rows would sometimes return incorrect results. This has been fixed. |
VER-58562 | Catalog Engine, Performance tests | After statistics are collected, Vertica stores them in the catalog. In earlier releases, some parts of this memory were not tracked. As of the current release, Vertica tracks statistics as part of the METADATA pool. |
Take a look at the Vertica 9.0 New Features Guide for a complete list of additions and changes introduced in this release.
To use Vertica 9.0.0-1 or later on Amazon Linux 2017.09, you must use the RHEL/CentOS RPMs.
There is no AMI based on Amazon Linux.
Running your database in Eon Mode Beta separates the computational processes from the storage layer of your database, thereby enabling rapid scaling of computational resources to accommodate variable demand workloads. Initial deployment of Eon Mode Beta is limited to Amazon Web Services, and is not supported for use in production environments.
See more: Eon Mode Beta
Previously, Vertica could interface with AWS S3, but you could not directly query big data formats such as Parquet or ORC. Now you can create Vertica external tables to query Parquet or ORC data from AWS S3.
See more: Loading Data
From the AWS Marketplace, launch Management Console with Provisioning, a user-friendly GUI wizard for provisioning new Vertica clusters on AWS instances. MC also guides you through GUI-assisted post-provisioning steps, such as loading and querying data.
See more: Installing Using Management Console with Provisioning
Vertica clusters on Google Cloud Platform (GCP) now operate on top of virtual machines (VMs) as part of the Google Compute Engine. The Vertica Cloud Launcher solution allows you to create up to a 16-node cluster. The solution includes the Vertica Management Console (MC) as the primary UI for you to get started.
See more: Overview of Vertica on Google Cloud Platform
Vertica now supports universally unique identifiers (UUIDs). This support allows UUID columns to be stored more efficiently than as text strings. Vertica also now provides the support function UUID_GENERATE, which generates UUIDs based on high-quality randomness from /dev/urandom.
See more: Supported Data Types
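A minimal sketch of the new type and function follows; the table and column names are hypothetical:

```sql
-- UUID columns are stored natively rather than as text strings.
CREATE TABLE session_log (
    session_id UUID DEFAULT UUID_GENERATE(),
    user_name  VARCHAR(64)
);

-- UUID_GENERATE draws on high-quality randomness from /dev/urandom.
INSERT INTO session_log (user_name) VALUES ('alice');
SELECT session_id, user_name FROM session_log;
```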
This hotfix addresses the issue below.
Issue |
Component |
Description |
VER-58977 | License | Running License Audit tasks took several hours and negatively affected performance. This issue has been resolved with several performance and optimization improvements. |
VER-58833 | Resource Manager, Security | After being granted USAGE and SELECT privileges on a schema and table in a specific resource pool, the user was denied permission when trying to access the schema and table. This was because Vertica was checking permissions on secondary resource pools. This issue is resolved, and Vertica no longer checks permissions on secondary resource pools. |
VER-58832 | Catalog Engine | A catalog slowdown caused by memory cleanup in the catalog has been fixed. |
VER-58757 | DDL, Security | Modifying roles in Vertica, for example with SET ROLE DEFAULT, generated an error. This issue has been resolved. |
VER-58591 | Execution Engine, Eon | Recovering a node from scratch was restricted to a single-threaded storage transfer. This has been fixed to use the recovery resource pool's MAXCONCURRENCY setting. |
VER-58354 | Hadoop, SAL | Previously, querying external tables sometimes failed with an error if HDFS was redistributing data blocks at the same time the query tried to access a block. With this fix, the query will first attempt to read a replica of the block from a different datanode before failing. |
VER-58081 | Client Drivers, ADO | Transferring decimal data from Vertica using an ADO.NET client sometimes produced incorrect results if the BinaryTransfer connection parameter was set to true. This issue has been resolved. |
VER-57675 | AP - Advanced | Upgrading from Vertica 7.2 to 8.1.1-2 caused a loss of historical data due to changes in the data format of synopsis objects. Use the AGGREGATE FUNCTION "public.ApproxCountDistinctOfSynopsisDeprecated" to process existing synopsis objects. |
Issue |
Component |
Description |
VER-58074 | Data load/COPY | The storage formats of some columns in Vertica primary key and foreign key tables negatively impacted the performance of COPY. These storage formats have been changed to improve performance. |
VER-57861 | DDL | CREATE VIEW statements sometimes hung and caused the server to fail if the view was selecting from HDFS sources. This issue has been resolved. |
VER-58071 | Hadoop | CREATE TABLE AS queries of ORC or Parquet data with many partitions (tens of thousands) were slow. More intelligent partition pruning has improved the performance. |
VER-57873 | Hadoop, Security | This fix adds SSL support for HCatalogConnector with HiveServer2. |
To see a complete list of additions and changes introduced in this release, refer to the Vertica 9.0 New Features Guide.
Issue |
Component |
Description |
VER-56058 | Admin Tools | This fix resolves an issue in which admintools incorrectly interpreted login messages using the word "password" as password prompts. |
VER-55506 | Admin Tools | Vertica has improved admintools error message handling. Specifically, errors that caused admintools to issue the generic error message "'NoneType' object has no attribute '__getitem__'" now include messages specific to the actual error. |
VER-55349 | Admin Tools | The output for "admintools --help_all" previously did not show all sub-commands. This issue is resolved. |
VER-55393 | AMI, UI - Management Console | A diagnostic restart of Management Console failed on some Linux environments. The underlying Linux command for restart has been changed to /etc/init.d/vertica-consoled to resolve this issue. |
VER-57398 | AP-Advanced | Vertica 8.1 introduced a k-means clustering limit of 1,000 clusters. With this fix, the limit is now 10,000. |
VER-57130 | AP-Advanced, Catalog Engine | As of this release, you can create models only in public and user-created schemas (excluding HCatalog). Attempts to create models elsewhere return an error. If the Vertica 9.0 upgrade finds a model that was created in a schema that does not conform with this restriction, it moves the model to the public schema. For more information, see https://vertica.com/docs/9.0.x/HTML/index.htm#Authoring/NewFeatures/9.0/UpgradeandInstall.htm |
VER-56065 | AP-Geospatial | In some circumstances, the database failed during expression analysis for the following user-defined scalar function: STV_Intersect(x, y USING PARAMETERS INDEX = 'index_name') |
VER-43471 | Backup/DR | When restoring a subset of objects from a backup, the vbr utility incorrectly identified objects on a case-sensitive basis. |
VER-56726 | Backup/DR | Object replication/restore failed to replicate/restore statistics. |
VER-54793 | Backup/DR | If vbr task listbackup fails due to the backup location running out of space, the error message now identifies the specific server. |
VER-54940 | Backup/DR | The quick-repair task slowed down or hung when performed on hundreds of snapshots. To increase performance, this fix improves the algorithm used to merge snapshot manifests into a new backup manifest, and improves parallelism in rebuilding backup manifests. |
VER-54124 | Basics, Catalog Engine, Data Collector | The issue where the segment_range column in the Projections table returned incorrect percentages has been fixed. |
VER-53488 | Catalog Engine | Vertica did not previously release catalog memory for objects that were no longer visible to any current or subsequent transaction. The Vertica garbage collector algorithm has been improved. |
VER-47816 | Client Drivers - ADO | The ADO.net client library sometimes had poor performance when parsing a query with many string literals. The performance of such queries has been improved. |
VER-54702 | Client Drivers - ADO | The RESET_SESSION session management function now functions properly when run against a cluster with a DOWN node. |
VER-54968 | Client Drivers - Misc | The JDBC driver did not correctly handle timestamps with infinity values. |
VER-55336 | Client Drivers - Misc | The HPE Vertica client driver did not support SSRS in SQL Server 2016. SSRS stopped working or failed to launch if you installed the Microsoft Connectivity Pack. |
VER-54336 | Client Drivers - ODBC | Vertica 8.0.1 driver installations failed on Windows Server 2012 R2 due to a failure to install .NET 3.5. This issue is resolved. Note that installing Vertica on Windows requires an internet connection. |
VER-54254 | Client Drivers – Python | The Vertica Python library failed to generate errors when running compound statements. With this fix, any errors from the first query in the compound statement display as soon as execution completes. To view all errors for subsequent queries in the statement, call the nextset() method on the cursor object until None is returned. |
VER-55726 | Cloud - Amazon | Vertica incorrectly reused objects without properly cleaning up their old states, which sometimes caused a client error when copying from Amazon S3. |
VER-55364 | Cloud - Amazon, Data load / COPY, S3 | When loading files from S3, reusing the authentication manager object from a previous request sometimes caused an invalid request signature, and the load failed with an error. |
VER-56534 | Data load / COPY | When using cooperative parse and apportioned load together, Vertica could drop N rows of data if the COPY statement included the SKIP N directive. |
VER-56392 | Data Removal - Delete, Purge, Partitioning | In Vertica 8.1.1-2, loads and queries on certain tables failed with "DDL statement (SomeCatalogEvent) interfered with this statement" even though no such DDL statement had executed recently. This was caused by replication copying some catalog events that should not have been copied. Replication has been fixed, and upgrade removes such extraneous events. |
VER-55205 | Database Designer Core | ILM statements (such as SWAP_PARTITIONS_BETWEEN_TABLES, MOVE_PARTITIONS, COPY_PARTITIONS, and COPY_TABLE) caused the Vertica server to go down if the target table had enabled primary key constraints. |
VER-57166 | DDL | If you moved a table with foreign key constraints to another schema, the constraints remained associated with the previous schema. If you dropped the previous schema, the foreign key constraints were also dropped from the table. With this release, a table's foreign key constraints move with the table to its new schema. |
VER-52610 | DDL | If you altered a table populated by one sequence to be populated by another sequence, then attempted to drop the sequence, Vertica incorrectly generated an error: "NOTICE 4927: The Table depends on Sequence." |
VER-56547 | DDL - Projection, DDL - Table | Copying a table more than 99 times caused projection naming conflict errors. |
VER-56921 | Execution Engine | Case expressions of type "case <Const> when <condition> then <result>..." repeated in the same query, or case expressions of type "case when <Comparison involving a function with no arguments> ... " sometimes caused server failure. |
VER-55487 | Execution Engine | A Vertica 8.1.0 feature improved performance by sharing work for multiple partitions of data when evaluating multiple percentile functions. In some cases, Vertica did not correctly reset a flag when moving from one partition to the next. |
VER-55288 | FlexTable | The MapJSONExtractor used to fail on large JSON data due to space constraints. The JSON extractor output now sizes proportionally to the input size and the error message is more precise. |
VER-56522 | FlexTable | MapAggregate sometimes used too much memory which could lead to a node down. This issue has been fixed. |
VER-54913 | Hadoop | The locking mechanism shared between Vertica and libhdfs++ did not protect against certain race conditions if there was an error while accessing HDFS. |
VER-55380 | Hadoop | When using the HCatalog Connector, non-superusers could not read from an HCatalog schema despite having been granted usage. |
VER-55409 | Hadoop | In rare circumstances, if multiple users attempted to concurrently authenticate with HDFS, an unnecessary assert caused a core dump. |
VER-42294 | Kafka Integration | NOT NULL violations rolled back COPY statements and prevented the Kafka scheduler from updating its progress in the stream, causing loads to hang. |
VER-55396 | Kafka Integration | The KafkaAVROParser incorrectly wrote byte data types as NULL. |
VER-51561 | Kafka Integration | Adding an operator to a Kafka scheduler produced a null pointer exception. |
VER-55962 | Kafka Integration | Vertica was incorrectly setting eof_timeout in microseconds instead of milliseconds. This issue is resolved. |
VER-55995 | Kafka Integration | Performing a COPY with the KafkaAVROparser using a schema registry with multiple partitions no longer causes Vertica to fail. |
VER-42282 | Kafka Integration | In some cases, Vertica failed while loading data from Kafka, usually preceded by an error indicating a discrepancy between the bytes read by the DataBuffer and LengthBuffer. |
VER-43977 | Optimizer, Refresh | Refresh performance is now satisfactory if a table has only one live aggregate projection (LAP) associated with it. Refresh performance for tables with multiple LAPs remains sub-optimal. This use case will be addressed in a future release. |
VER-55169 | Optimizer | The query planner now follows join inner/outer inputs specified by the syntactic optimizer during outer join to inner join conversion. |
VER-51593 | Optimizer | Overly complex query expressions, such as a WITH clause with an extremely large collection of unions, could cause stack overflow and result in a segmentation fault and node failure. To avoid this problem, Vertica now returns with an error message before the stack reaches its limit: ERROR 4964: The query contains an expression that is too complex to analyze |
VER-56802 | Optimizer - LAP, Recovery | After recovering a database where recovery by table was disabled, a placeholder Tuple Mover marker inside system table V_MONITOR.TUPLE_MOVER_OPERATIONS was not always cleaned up. This prevented other Tuple Mover operations from executing. This problem has been resolved. |
VER-55139 | Optimizer - LAP, Recovery | Previously, all live aggregate projections were marked out-of-date when the cluster underwent an administrator-specified recovery. This issue has been resolved so that only live aggregate projections whose anchor tables' data was modified since the recovery epoch would be marked out-of-date during recovery. |
VER-53762 | ResourceManager | In some cases, the status of the max_memory_size_kb parameter of user-defined resource pools was not updated correctly and showed values larger than the maximum memory of the general pool. This issue was resolved. |
VER-55996 | SDK | If you previously installed ApproximateLib while search_path was set to a non-default value, then it was installed in the wrong schema. install.sql now always installs the library in the public schema. |
VER-56157 | SDK | If you previously installed ParquetExportLib while search_path was set to a non-default value, then it was installed in the wrong schema. install.sql now always installs the library in the public schema. |
VER-47220 | DDL, Security | If a table with an identity column is moved to a new schema, the associated Identity sequence will now also move to the new schema. |
VER-50865 | Security | Access to a view was denied to its owner unless the owner was explicitly granted SELECT privileges to the view's base table. With this fix, access to a view can be enabled through privileges that its base table inherits from the container schema. |
VER-56452 | Security | Vertica incorrectly allowed users without the correct permissions to read and delete from the RejectionTableData directory. |
VER-55472 | Security | Performing CREATE OR REPLACE VIEW previously did not preserve grants on the view. |
VER-55268 | Security | Granting a user access to a view caused permission errors if the view had a subquery or UNION clause that queried tables that user did not have access to. |
VER-30244 | Security | Previously, users were unable to revoke usage on the general resource pool, when a non-dbadmin superuser created the target user. This issue has been resolved. |
VER-55748 | Tuple Mover | The Tuple Mover incorrectly held transaction locks until the transaction completed, instead of releasing when the mergeout completed. |
VER-55375 | UI - Management Console | Management Console displayed an incorrect time in the timestamp of queries in the Query Repository for Database Designer. This issue is resolved. |
VER-51910 | UI - Management Console | When a Management Console user that was mapped to a non-dbadmin Vertica user viewed MC charts, MC did not properly close connections to Vertica. |
VER-54822 | UI - Management Console | This fix corrects the default value of K-safety in the Database Designer wizard. |
VER-56146 | UI - Management Console | A diagnostic restart of Management Console failed on some Linux environments. The underlying Linux command for restart has been changed to /etc/init.d/vertica-consoled to resolve this issue. |
Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.
Backup operations are not currently supported on Vertica implementations using HDFS storage locations.
HCatalog connector does not currently support Kerberos authentication. Currently only HDFS Connector supports Kerberos authentication.
Issue |
Component |
Description |
VER-47840, VER-47885, VER-47886 | AP-Advanced | Upgrading from 7.2.3 to Vertica 8.0 or above can cause naming conflicts due to new UDx names introduced for Machine Learning in Vertica 8.0. Two types of conflicts can occur: a. If a 7.2.3 UDx name conflicts with UDxs introduced in the Vertica 8.0 Machine Learning Package, the Machine Learning Package installation fails when you start the database for the first time after upgrading Vertica. In this case, you can find the reason for the failure and the name conflict details in the AdminTools log file. b. If a 7.2.3 UDx name conflicts with Vertica Machine Learning functions introduced in Vertica 8.0, no failure occurs. However, after upgrading, the Vertica 7.2.3 UDxs are not available for execution due to the naming conflict. Vertica 8.0 introduced the following functions: • KMEANS • LINEAR_REG • LOGISTIC_REG • BALANCE • DELETE_MODEL • SUMMARIZE_MODEL • RENAME_MODEL • NORMALIZE Workaround: In both cases, rename the UDx causing the conflict and retry the installation. |
VER-43040 | Client Drivers - ODBC | ENABLE_WITH_CLAUSE_MATERIALIZATION is not supported for WITH CLAUSE prepared statements. |
VER-55384 | Client Drivers - ODBC | After installing the Vertica ODBC version 8.1.1 client on a Windows 7 pre-SP1 operating system, an attempt to connect using the ODBC DSN fails with the message, "The specified module could not be found. (Vertica, C:\Program Files\Vertica Systems\ODBC64\lib\vertica_8.1_odbc_3.5.dll)". Workaround: To resolve this issue, Vertica recommends you upgrade to Windows 7 Service Pack 1. |
VER-42714 | Execution Engine | Be aware that if Vertica cancels a query that generated an error, Vertica sometimes additionally generates the error "Send: Connection not open" during the cancel, even though that is not the cause of the original error. |
VER-57129 | Hadoop | After a user authenticates on a node using the Vertica realm, users from other realms cannot log in on that node. Other nodes continue to work until a user in the Vertica realm authenticates on them. Workaround: To get a clean state, restart the database. |
VER-48062 | Security | When determining valid ciphers to set on a FIPS-enabled system, be aware that SELECT SET_CONFIG_PARAMETER('EnabledCipherSuites','....'); can accept invalid values. For example, it could accept a cipher suite not allowed by FIPS. However, invalid cipher suites are not present in the FIPS-enabled system, so their algorithms are not used. |
VER-57209 | Hadoop | The HCatalog Connector produces an error if users kinit using a realm other than the default_realm as defined in the system krb5.conf. Workaround: Set the "default_realm" inside krb5.conf to be the realm of the connecting user. Take care in changing this default, as it affects all of Vertica, not just the HCatalog Connector. |
VER-58520 | Hadoop | After an HDFS failover with the active namenode stopped, copying from HDFS hangs. |
VER-57752 | Recovery | If all partitions are moved from a table while a node is down and then a new projection is created on that table, the epoch for the new projection will be more recent than the epoch for the MOVE PARTITION event. In that rare case, the down node fails to recover. This problem also occurs if, when the node is down, Vertica swaps out all partitions but no partitions have been swapped in. Workaround: Find the newly created projection in the PROJECTION_CHECKPOINT_EPOCHS system table. Drop that projection and restart the down node. |
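The workaround above might look like the following sketch. The column selection, schema, and projection name are hypothetical; check the actual columns of PROJECTION_CHECKPOINT_EPOCHS on your system:

```sql
-- Identify the newly created projection and its checkpoint epoch.
SELECT node_name, projection_name, checkpoint_epoch
FROM projection_checkpoint_epochs
ORDER BY checkpoint_epoch DESC;

-- Drop the offending projection, then restart the down node
-- (for example, with admintools -t restart_node).
DROP PROJECTION store.new_projection_b0;
```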
VER-58891 | Catalog Engine | Only superusers can rename a System table projection with ALTER PROJECTION RENAME. |
VER-58475 | Data Removal - Delete, Purge, Partitioning | You cannot split partitions using the force_split parameter if you do not have enough system resources. |
The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
The information contained herein is subject to change without notice.
Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
© Copyright 2006 - 2018 Hewlett-Packard Development Company, L.P.
Adobe® is a trademark of Adobe Systems Incorporated.
Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.
UNIX® is a registered trademark of The Open Group.
Send documentation feedback to HPE