Vertica for SQL on Hadoop is licensed per node, with an unlimited number of central processing units (CPUs) and an unlimited number of users. Vertica for SQL on Hadoop is intended for deployment on Hadoop nodes. The license includes 1 TB of Vertica ROS-formatted data on HDFS.
This 1 TB ROS limit is currently enforced only contractually, but technical enforcement begins in Vertica 10.1. Starting with Vertica 10.1, if you are using Vertica for SQL on Hadoop, you cannot load more than 1 TB of ROS data into your Vertica database. If you were unaware of this limitation and already have more than 1 TB of ROS data in your database, make any necessary adjustments to stay below the limit, or contact our sales team to explore other licensing options.
With version 9.2, Vertica has removed support for pre-join and range segmentation projections. If a table's only superprojection is one of these projection types, the projection is regarded as unsafe.
Before upgrading to 9.2 or higher, you are strongly urged to check your database for unsupported projections. If the upgrade encounters these projections, it is liable to fail. You must then revert to the previous installation.
Solution: Run the pre-upgrade script
Vertica has provided a pre-upgrade script that examines your current database and sends to standard output its analysis and recommendations. The script identifies and lists any unsupported projections. If the script finds projection buddies with different SELECT and ORDER BY clauses, it generates a deploy script. Run this script to remedy projections so they comply with system K-safety.
For more information, see Fixing Unsafe Buddy Projections in the Vertica documentation.
The Release Notes contain the latest information on new features, changes, fixes, and known issues in Vertica 10.1.x.
They also contain information about issues resolved in:
The Premium Edition of Vertica is available for download at https://support.microfocus.com/downloads/swgrp.html.
The Community Edition of Vertica is available for download at https://www.vertica.com/download/vertica/community-edition.
The documentation is available at https://www.vertica.com/docs/10.1.x/HTML/index.htm.
Hotfixes are available to Premium Edition customers only. Contact Vertica support for information on downloading hotfixes.
Take a look at the Vertica 10.1 New Features Guide for a complete list of additions and changes introduced in this release.
You can now perform object-level backup, restore, and replication in Eon Mode databases. You can back up individual schemas or tables, and restore or replicate individual tables or schemas from full or object-level backups.
See Creating Backups and Restoring Backups for information about object-level operations.
Vertica now supports backup and restore of Enterprise Mode and Eon Mode databases on Google Cloud Platform (GCP) to Google Cloud Storage (GCS). Supported backup and restore tasks include:
Support for backup and restore to GCS is limited to databases on GCP. For example, you cannot back up a database with communal storage on AWS to a GCS location.
See Configuring Backups to and from Cloud Storage for details about configuring backups from Vertica on GCP to GCS. For a complete guide on Vertica and GCP, see Vertica on Google Cloud Platform.
Configuration files have a new [CloudStorage] section that replaces the [S3] section. S3 configuration parameter names were changed from s3_parameter_name to cloud_storage_parameter_name. For databases that use the S3 protocol, the renamed parameters are functionally identical to the original S3-prefixed parameters.
Using S3-prefixed names generates a warning and a deprecation message.
For details, see [CloudStorage] and Deprecated and Removed Functionality.
vbr configuration files support new configuration parameters:
concurrency_delete lets you configure the number of delete threads created when deleting files in Enterprise Mode.
For details, see [CloudStorage] and [Transmission] Data Transmission.
A new sample vbr .ini file, backup_restore_cloud_storage.ini, provides guidance on creating a configuration file to back up or restore an Enterprise Mode or Eon Mode database to a supported cloud storage location.
The new sample configuration file supersedes backup/restore .ini files from earlier Vertica versions: backup_restore_S3.ini and eon_backup_restore.ini.
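As a minimal sketch, a [CloudStorage] section for an Eon Mode backup to GCS might look like the fragment below. The bucket path and lock directory are placeholders, and the parameter names should be verified against the backup_restore_cloud_storage.ini template shipped with this release:

```ini
[CloudStorage]
; Placeholder values for illustration only; consult
; backup_restore_cloud_storage.ini for the authoritative template.
cloud_storage_backup_path = gs://examplebucket/backups/mydb/
cloud_storage_backup_file_system_path = []:/home/dbadmin/backup_locks/
```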
Automatic node health checks now check the following local storage locations for read-write access:
Columns in the Parquet format can contain structs, represented in Vertica by the ROW type, and arrays, represented by the ARRAY type. For external tables and the Parquet format only, you can define a column for heterogeneous combinations of these two types: a struct containing array fields or an array of structs. These types can be nested up to the maximum nesting depth.
For details, see Reading Combinations of Structs and Arrays.
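As an illustrative sketch (the file path and field names are hypothetical), an external table column that combines a struct with an array field might be declared like this:

```sql
-- Hypothetical example: a Parquet column 'address' holding a struct
-- whose 'zip_codes' field is an array of integers nested in the struct.
CREATE EXTERNAL TABLE customers (
    id INT,
    address ROW(
        street VARCHAR,
        city VARCHAR,
        zip_codes ARRAY[INT]   -- array field inside the struct
    )
) AS COPY FROM 's3://examplebucket/customers/*.parquet' PARQUET;
```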
When using external tables with ORC data, you can now use the ROW data type to represent ORC structs. The syntax is the same as for Parquet files. With this change, the previous method of handling struct fields as expanded columns is deprecated.
You can now declare, but not query, columns for arrays and maps in external tables, allowing you to consume the column and use the rest of the ORC data.
For details, see Reading Complex Types from ORC Files.
New meta-functions support pinning projections to database depots:
Also, SET_DEPOT_PIN_POLICY_PARTITION now supports pinning partitions of specific projections to the depot.
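For illustration only, pinning a projection to the depot might look like the following. The projection name is made up, and the meta-function name and signature should be verified against the 10.1 documentation:

```sql
-- Hypothetical call: pin a single projection's data to the depot.
-- Verify the exact function name and arguments in the documentation.
SELECT SET_DEPOT_PIN_POLICY_PROJECTION('public.sales_proj_b0');
```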
You can now create a duplicate of an existing subcluster. When you pass the new --like option to the admintools db_add_subcluster tool, it creates a subcluster that inherits most configuration settings from the source subcluster. This feature is useful if you want to scale your database for query throughput by creating more secondary subclusters. For details, see Duplicating a Subcluster.
You can now configure encryption for new and existing notifiers with the following parameters:
For details on these and other parameters, see CREATE NOTIFIER and ALTER NOTIFIER.
A new version of the Spark Connector supports Scala 2.12 with Spark versions 2.4.1 and 3.0. See Vertica Integration for Apache Spark in the supported platforms section for details.
When reading data in the Parquet format, Vertica now caches the metadata for each Parquet file during planning for use during execution. This caching reduces the number of reads and the network latency they add. If the file storage location charges for API calls, this caching can also reduce costs.
Vertica uses TEMP storage for the cache, but only if TEMP storage is on the local file system.
For details, see Loading Parquet Data.
You can now load one-dimensional arrays of primitive types from JSON and Avro data. You can continue to use flexible complex types to load arrays of any dimensionality. For details, see Loading JSON Data and Complex Data Types for favroparser.
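A hedged sketch of loading a one-dimensional array from JSON, using made-up table and column names (fjsonparser is the standard Vertica JSON parser; verify the exact options in the documentation):

```sql
-- Define a table with a one-dimensional array of floats, then load
-- JSON records whose 'temps' field is an array of numbers.
CREATE TABLE readings (sensor VARCHAR, temps ARRAY[FLOAT]);

COPY readings FROM STDIN PARSER fjsonparser();
{"sensor": "roof", "temps": [20.5, 21.0, 19.8]}
\.
```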
By default, the Parquet parser uses strong schema matching. This means that the load must consume all columns in the Parquet data in the order they occur in the data. You can, instead, use loose schema matching, which allows you to select the columns you want and ignore the rest. Loose schema matching depends on the names of the columns in the data rather than their order, so the column names in your table must match those in the data. Types must match or be coercible. For details on how to use loose schema matching, see Loose Schema Matching on the PARQUET (Parser) reference page.
You can use loose schema matching for columns of primitive types and one-dimensional arrays of primitive types. For other complex types, use flexible complex types instead. See Using Flexible Complex Types.
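A sketch of a load that selects only two columns by name from wider Parquet files. The parameter name shown is an assumption; confirm it on the PARQUET (Parser) reference page:

```sql
-- Assumption: this parser parameter enables loose (by-name) schema
-- matching; columns in the data not named here are ignored.
COPY sales (region, total)
FROM 's3://examplebucket/sales/*.parquet'
PARQUET(do_soft_schema_match_by_name='true');
```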
In 10.1, Vertica adds several machine learning enhancements:
Vertica now provides robust support for XGBoost (Extreme Gradient Boosting), a popular supervised-learning algorithm used for regression or classification on large datasets. XGBoost sequentially builds shallow decision trees to deliver high accuracy with a low risk of overfitting, using a highly scalable training method. The method's name reflects its ability to train very quickly in a distributed computing environment.
Release 10.1 adds new and updated functions that let you use the XGBoost algorithm for regression and classification of data in Vertica.
XGBoost training functions create either an XGBoost classifier model or an XGBoost regressor model. All actions supported for Vertica models, including exporting and importing, are supported for XGBoost models, just as they are for Random Forest models. The XGBoost algorithm, functions, and models are fully implemented within Vertica and do not require any outside tools.
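An illustrative training-and-prediction sketch. Table, column, and model names are made up, and the function signatures should be checked against the machine learning reference before use:

```sql
-- Train an XGBoost classifier on a labeled training table
-- (hypothetical table and column names).
SELECT XGB_CLASSIFIER('my_xgb_model', 'training_data', 'label_col',
                      'feature1, feature2, feature3');

-- Apply the trained model to new rows.
SELECT PREDICT_XGB_CLASSIFIER(feature1, feature2, feature3
                              USING PARAMETERS model_name='my_xgb_model')
FROM new_data;
```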
Vertica now supports Soundex, a widely used phonetic approximation algorithm that turns English words into a four-character code. Ideally, phonetically similar words share the same code. Soundex was patented in the 1920s and is present in some form in DB2, Ingres, MySQL, MS SQL Server, Oracle, PostgreSQL, and SQLite.
Soundex is useful for queries along the lines of "Find me people with names similar to Smith (e.g., Smythe)".
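For example, because "Smith" and "Smythe" reduce to the same Soundex code (S530), a name search can match both spellings. The table and column names below are made up:

```sql
-- Both names yield the Soundex code S530, so they compare equal.
SELECT SOUNDEX('Smith'), SOUNDEX('Smythe');

-- Find people whose last name sounds like 'Smith'.
SELECT last_name
FROM people
WHERE SOUNDEX(last_name) = SOUNDEX('Smith');
```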
You can now set additional requirements for MC user passwords and login security. These include new options for managing:
For details, see Managing MC User Passwords.
Users must now set security questions when logging in for the first time. They can then reset their passwords from the MC login screen with the Forgot Password function.
There is a new Management Console wizard to create an Eon Mode database cluster on Amazon Web Services (AWS). The wizard includes an intuitive design and tooltips to provide guidance when making selections.
To create a database cluster using the new workflow, select Create a New Vertica Database Cluster from the Management Console home page.
When you are provisioning a subcluster on AWS or Google Cloud Platform (GCP), you can create a subcluster using any instance type, volume type, and volume size. Previously, you had to use the same instance type and data settings as the existing cluster instances. Using different instance types provides flexibility to create subclusters for specific workloads or user groups.
Because a cluster can have different instance types with different volume allocations, you can no longer drop or create a database from Database Cluster View on AWS or GCP. This prevents Vertica from using incorrect instance types when creating the database. You have the option to terminate the cluster on Database Cluster View, and you can create a database from the Management Console home page.
See Managing Workload Resources in an Eon Mode Database for details about optimizing subclusters by workload.
There is a new option to load Vertica TestDrive data when you provision a 4-node AWS cluster using a CFT. Loading TestDrive data lets you preview how Vertica operates in production. By default, this option is set to No.
See CloudFormation Template (CFT) Overview and Launch MC and AWS Resources with a CloudFormation Template for information about using CFTs with Vertica.
Directed queries now support :v as an alias of IGNORECONST.
A user-defined scalar function (UDSF) can now return more than one value. Multiple values, of scalar types, are returned in a ROW. For details on how to work with multiple values, see Scalar Functions (UDSFs). For an example, see C++ Example: Returning Multiple Values.
For details, see Hash Authentication Parameters.
For details, see Profiles.
CREATE USER and ALTER USER now accept pre-hashed passwords. For details, see Setting and Changing User Passwords.
You can now create Certificate Authority (CA) bundles. These contain root CA certificates that can be used to validate connections. For details, see Managing CA Bundles.
The IMPLODE function takes a column of values and returns an array containing them. It is the inverse of EXPLODE. The two functions can be used together to explode an array, filter values from it, and then re-assemble the array.
For details, see Arrays and Sets (Collections).
The EXPLODE function can now expand more than one array in a single call, removing the need to use subqueries to explode multiple arrays. The function can now also return columns in any order; the array column(s) need not be first. A new parameter specifies the number of arrays to explode; all non-array columns and all array columns after the specified number are passed through unmodified.
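A sketch of the explode-filter-implode pattern using made-up table and column names; the exact EXPLODE output column names and OVER clause should be verified in the collections documentation:

```sql
-- Explode the 'tags' array (passing 'id' through), drop one value,
-- then re-assemble the remaining values into an array per id.
SELECT id, IMPLODE(value) AS filtered_tags
FROM (
    SELECT EXPLODE(tags, id) OVER (PARTITION BEST)
    FROM items
) exploded
WHERE value != 'obsolete'
GROUP BY id;
```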
CREATE privileges on a view's schema and DROP privileges on the view are now sufficient for replacing an existing view with CREATE OR REPLACE VIEW. Previously, only the view owner could use CREATE OR REPLACE VIEW while specifying an existing view.
For details on these privileges, see Creating Views.
Vertica supports Avro logical types. Use Avro logical types to annotate primitive or complex types to create a derived type. Below is a list of the logical types that Vertica supports:
For details, see Logical Data Types for favroparser.
Microsoft Edge is now a supported web browser for Vertica Management Console.
Vertica tables now support up to 9800 column definitions.
DEFAULT and SET USING expressions in flattened tables now support correlated subqueries, and can combine these with expressions that reference values in the current table. For details, see Expressions with correlated subqueries.
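As a hedged example of the new capability, a flattened-table column can now combine a correlated subquery with an expression on the current table. All names here are hypothetical:

```sql
-- Hypothetical flattened table: 'cust_label' combines a correlated
-- subquery against 'customers' with a reference to the current row.
CREATE TABLE orders (
    cust_id INT,
    amount NUMERIC(10,2),
    cust_label VARCHAR(80)
        DEFAULT (SELECT name FROM customers c WHERE c.id = orders.cust_id)
                || ' #' || cust_id::VARCHAR
);
```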
User-defined functions, such as transform and scalar functions, now support up to 9800 arguments.
This section describes the two phases Vertica follows to retire Vertica functionality:
The following Vertica functionality was deprecated and will be retired in future versions:
|10.1||Reading structs from ORC files as expanded columns||Instead, define as ROW type (see Reading Complex Types from ORC Files).|
|MD5 hashing algorithm for user passwords||Instead, use SHA-512 (see Hash Authentication Parameters).|
S3 vbr configuration parameters:
Superseded by new vbr configuration parameters that accept values for cloud storage locations:
If a vbr configuration file contains an s3-prefixed parameter, vbr returns a warning.
|ARRAY_CONTAINS function||Instead, use CONTAINS.|
|flatten_complex_type_nulls parameter to the ORC and Parquet parsers|
The following functionality is no longer accessible as of the releases noted:
Eon Mode meta-functions:
|Reading structs from Parquet files as expanded columns||Superseded by Creating Tables Using Structs.|
|Parquet parser skip_strong_schema_match parameter||Superseded by Using Flexible Complex Types.|
|Superseded by functions that operate on collections, including arrays:
|HiveMetadataCacheSizeMB configuration parameter||Superseded by ParquetMetadataCacheSizeMB.|
For more information see Deprecated and Retired Functionality in the Vertica documentation.
This hotfix addresses the issues below.
|VER-75827||Hadoop||In some configurations, the Vertica server process would crash while logging HDFS events. This issue has been resolved.|
|VER-75901||Tuple Mover||When Mergeout Cache is enabled, the dc_mergeout_requests system table now contains valid transaction ids instead of zero.|
|VER-75913||Tuple Mover||The Tuple Mover logged a large number of PURGE requests on a projection while another MERGEOUT job was running on the same projection. This issue has been resolved.|
|VER-75973||Hadoop||Fixed a bug in Parquet Predicate Pushdown where sometimes the correct rowgroups were not pruned from the Parquet file based on predicates if the file was on HDFS.|
This hotfix addresses the issues below.
|VER-75904||Security||Vertica no longer takes a catalog lock during authentication after the user's password security algorithm has been changed from MD5 to SHA512. The fix removes the updating of the user's salt, which is not used for MD5 hash authentication.|
This hotfix addresses the issues below.
|VER-36888||Admin Tools||stop_db no longer returns early and now returns the proper return code.|
|VER-68356||Security||LDAPLink now properly trims whitespace from LDAPLinkSearchBase.|
|VER-70760||Optimizer||Queries with subqueries on tables with Top-K projections sometimes returned an error. This issue has been resolved.|
|VER-71074||Security||Previously, CREATE OR REPLACE VIEW could only replace an existing view if the current user was the existing view's owner. Now, DROP privilege is sufficient to replace the view (in addition to CREATE on the SCHEMA, as before).|
|VER-72232||ComplexTypes||The QUERY_EVENTS system table no longer recommends collecting statistics on ARRAY and SET columns.|
|VER-72394||Execution Engine||This issue occurred when the root operator treated non-DML queries as DML plans. Because Vertica collects processed-row-count statistics differently for these two query types, the results were incorrect. Vertica now correctly detects whether a query represents a DML plan.|
|VER-72735||Security||LDAPLink now checks for circular role/group assignment, assigns the first one returned (skipping the rest of the circular relationship), and then logs the error in DC_LDAP_LINK_EVENTS.|
|VER-72850||Optimizer||Incorrect usage of meta-function EXPORT_TABLES in the SELECT statement of EXPORT TO VERTICA caused the database to crash. This issue has been resolved: incorrect usage of meta-functions now returns an error.|
|VER-72978||Depot||Depot resize operations performed with alter_location_size were propagated sequentially to all nodes, which adversely affected performance in a database with many nodes. This issue has been resolved: depot size changes are now propagated to all nodes in parallel.|
|VER-73110||Backup/DR||Previously, when migrate_enterprise_to_eon found projections that were inconsistent with cluster segmentation, it wrote an incorrect anchor table name to the migration log file, and it listed only one projection, <projection-name>_b0. This issue has been resolved: the error log now lists the names of all tables with problematic projections. You can use these names as arguments to meta-function REBALANCE_TABLE.|
|VER-73382||Execution Engine||Queries sometimes threw a Sort Order Violation error when join output that was sorted like the target projection was consumed by a single Analytic function that only partially sorted the data as required by the target projection. This issue has been resolved.|
|VER-73811||Execution Engine||Expressions of the form <expression> IN (<list of constants>) were not handled correctly in cases where <expression> was of type LONG VARCHAR or LONG VARBINARY. If the IN list was long, query execution could be slow and use excessive memory. This problem has been resolved.|
|VER-73841||Admin Tools||Adding nodes on large, busy databases now runs fewer queries and takes less time.|
|VER-73881||Scrutinize||Scrutinize now collects a memory report log.|
|VER-73937||Execution Engine||Under some circumstances, the size of the METADATA resource pool was calculated incorrectly. This issue has been resolved.|
|VER-74043||Admin Tools||Relaxed the checks of user storage locations during revive Eon database.|
|VER-74116||Optimizer||Analytic functions triggered an Optimizer error when used with a LIMIT OVER clause. This issue has been resolved.|
|VER-74130||Depot, Tuple Mover||Calls to meta-function enforce_object_storage_policy() on an Eon-mode database wrote a large number of errors to the Vertica log file. This issue has been resolved: Vertica now returns with a single error message that the meta-function is not supported in Eon mode.|
|VER-74182||Admin Tools||The performance of db_remove_subcluster has been improved for subclusters with several nodes.|
|VER-74300||Backup/DR||migrate_enterprise_to_eon migrated grants on external procedures even though the procedures themselves were excluded from migration. This inconsistency caused the migrated database to crash and further attempts at startup to fail. This issue has been resolved: the migration now excludes privileges that are granted on external procedures, and also checks that all migrated objects have consistent dependencies.|
|VER-74350||Execution Engine||Adding support for tables with more than 1600 columns adversely affected performance of some operations. This issue has been resolved.|
|VER-74355||ComplexTypes||ALTER TABLE...ALTER COLUMN did not correctly handle attempts to create multi-dimensional arrays, reporting the dimensionality incorrectly. Multi-dimensional arrays are not supported for internal tables, and Vertica now treats this as an error.|
|VER-74512||Depot||System table DEPOT_PIN_POLICIES was erroneously populated with enterprise storage policies. This issue has been resolved: now, in Enterprise mode, DEPOT_PIN_POLICIES is always empty.|
|VER-74521||Optimizer||When a LIMIT OVER query executed on a multi-node database, the data was always sent back to the initiator which applied the LIMIT across all nodes. This interim step was unnecessary for INSERT...SELECT queries, which read data from the same node where it is stored. This issue is now resolved: Vertica no longer transfers data to the initiator for INSERT...SELECT queries, and thus avoids the extra overhead previously incurred.|
|VER-74575||Optimizer||In certain outer join operations that included a LIMIT OVER (PARTITION BY) clause, Vertica applied GUA (guaranteed uniqueness) optimization before populating the Top-K attribute, which pruned the inner view and caused an internal error. This issue has been resolved: now the optimizer defers GUA optimization until after it populates all query-related attributes.|
|VER-74576||Execution Engine||When computing COUNT window aggregates over Fixed type columns that included UUIDs, the aggregate incremented the count on reading a row from the Fixed type column. It then also hit an assertion failure intended for aggregating over invalid types. This issue has been resolved: the assertion failure no longer occurs in Fixed type window aggregates.|
|VER-74588||Hadoop||Queries to external tables with Hive partition columns could crash Vertica if a predicate on a partition column throws an error. This issue has been fixed.|
|VER-74616||Execution Engine||Enabling inter-node encryption adversely affected query performance across a variety of workloads. In many cases, this issue has been addressed with improved performance.|
|VER-74707||Tuple Mover||Enabling both mergeout caching (deprecated) and reflexive mergeout adversely affected performance on a new or restarted database. This issue has been resolved: when reflexive mergeout is enabled, mergeout caching is automatically disabled.|
|VER-74783||DDL - Table||ALTER TABLE...OWNER TO threw an error when the target table was already owned by the specified user. This issue has been resolved: it now returns with NOTICE instead of ERROR.|
|VER-74795||Optimizer||Previously, the Vertica optimizer ignored a PROJS hint that specified a live aggregate projection (LAP), and returned a warning. This issue has been resolved: the optimizer now supports PROJS hints that specify a LAP and uses the projection if it is feasible for that query.|
|VER-74918||Security||LDAPLink now properly supports instances where the number of members returned for an attribute exceeds 1500.|
|VER-75051||Execution Engine||An internal memory error sometimes caused simple CASE statements of the form CASE expression WHEN comparand THEN result [ ELSE result ] END to fail. This issue has been resolved.|
|VER-75105||Execution Engine||Passing a volatile expression such as RANDOM() as an argument to an inline function such as CORR returned incorrect output if the argument appeared multiple times in the function body. This issue has been resolved: now, when you pass a volatile expression as an argument multiple times to an inline function, Vertica returns an error message.|
|VER-75196||Execution Engine||Calling ALTER TABLE REORGANIZE immediately after setting group partitioning with ALTER TABLE sometimes caused Vertica to crash. This issue has been resolved.|
|VER-75265||Execution Engine||When a subquery was used in an expression and implied a correlated join with relations in the parent query, and also aggregated output, it sometimes returned multiple rows for each matched join key and returned unexpected results. This issue has been resolved.|
Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.
|VER-75057||ComplexTypes||In an external table, if a struct contains an array field and the struct is null, the whole struct (ROW) is treated as null in Vertica. This behavior differs from structs without arrays, where a null struct is treated as a ROW where all fields are null.|
The only warranties for products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Micro Focus shall not be liable for technical or editorial errors or omissions contained herein.
The information contained herein is subject to change without notice.
Confidential computer software. Valid license from Micro Focus required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
© Copyright 2006 - 2021 Micro Focus, Inc.
Adobe® is a trademark of Adobe Systems Incorporated.
Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.
UNIX® is a registered trademark of The Open Group.