Best practices

In this tutorial, we will explore some best practices and optimizations to help you get the most out of Vertica and VerticaPy.

1. Restrict objects and operations to essential columns

As VerticaPy is effectively an abstraction of SQL, any database-level optimizations you make in your Vertica database carry over to VerticaPy. In Vertica, optimization is centered on projections, which are collections of table columns—from one or more tables—stored on disk in a format that optimizes query execution. When you write queries in terms of the original tables, the query uses the projections to return query results. For details about creating and designing projections, see the Projections section in the Vertica documentation.

Projections are created and managed in the Vertica database, but you can leverage the power of projections in VerticaPy with features such as the vDataFrame’s usecols parameter, which specifies the columns from the input relation to include in the vDataFrame. As columnar databases perform better when there are fewer columns in the query, especially when you are working with large datasets, limiting vDataFrames and operations to essential columns can lead to a significant performance improvement. By default, most vDataFrame methods use all numerical columns in the vDataFrame, but you can restrict the operation to specific columns.

In the following examples, we’ll demonstrate how to create a vDataFrame from specific columns in the input relation, and then run methods on that vDataFrame. First, load the titanic dataset into Vertica using the load_titanic() function:

[2]:
from verticapy.datasets import load_titanic

load_titanic()
[2]:
[Output: first 100 rows of the titanic dataset]
Columns: pclass (Integer), survived (Integer), name (Varchar(164)), sex (Varchar(20)), age (Numeric(8)), sibsp (Integer), parch (Integer), ticket (Varchar(36)), fare (Numeric(12)), cabin (Varchar(30)), embarked (Varchar(20)), boat (Varchar(100)), body (Integer), home.dest (Varchar(100))
Rows: 1-100 | Columns: 14

Suppose we are only interested in the ‘survived’, ‘pclass’, ‘age’, ‘parch’, and ‘sibsp’ columns. We can create a vDataFrame with just those columns by specifying them in the usecols parameter:

[3]:
import verticapy as vp

vdf = vp.vDataFrame("public.titanic",
                    usecols = ["survived", "pclass", "age", "parch", "sibsp"])
display(vdf)
[Output: first 100 rows of the vDataFrame, restricted to the columns listed in usecols]
Columns: pclass (Integer), survived (Integer), age (Numeric(8)), sibsp (Integer), parch (Integer)
Rows: 1-100 | Columns: 5

If we run the avg() method without specifying columns, all numerical vDataFrame columns are included in the operation:

NOTE: To examine the generated SQL for each command, turn on the “sql_on” option.

[4]:
vp.set_option("sql_on", True)
vdf.avg()

Computing the different aggregations.

  SELECT
    /*+LABEL('vDataframe.aggregate')*/ AVG("pclass"),
    AVG("survived"),
    AVG("age"),
    AVG("sibsp"),
    AVG("parch")  
  FROM
(  
  SELECT
    "pclass",
    "survived",
    "age",
    "sibsp",
    "parch"  
  FROM
"public"."titanic")  
VERTICAPY_SUBTABLE  
  LIMIT 1
[4]:
             avg
"pclass"     2.28444084278768
"survived"   0.364667747163695
"age"        30.1524573721163
"sibsp"      0.504051863857374
"parch"      0.378444084278768
Rows: 1-5 | Columns: 2

To restrict the operation to specific columns in the vDataFrame, provide the column names in the columns parameter:

[5]:
vdf.avg(columns = ["age", "survived"])
[5]:
             avg
"age"        30.1524573721163
"survived"   0.364667747163695
Rows: 1-2 | Columns: 2

As we are working with a small dataset, the performance impact of excluding unnecessary columns is not very significant. However, with large datasets (for example, greater than a TB), the impact is much greater, and choosing essential columns becomes a key step in improving performance.

Instead of specifying essential columns to include, some methods allow you to list the columns to exclude with the exclude_columns parameter:

[6]:
vdf.numcol(exclude_columns = ["parch", "sibsp"])
[6]:
['"pclass"', '"survived"', '"age"']

NOTE: To list all columns in a vDataFrame, including non-numerical columns, use the get_columns() method.
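
For example, a minimal sketch of listing every column, numerical or not (output omitted; the exact quoting of the returned names may vary by VerticaPy version):

[ ]:
# All vDataColumn names, including string columns such as "name" and "sex"
vdf.get_columns()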

You can then use this truncated list of columns in another method call; for instance, to compute a correlation matrix:

[7]:
vdf.corr(columns = vdf.numcol(exclude_columns = ["parch", "sibsp"]))

Computing the pearson Corr Matrix.

  SELECT
    /*+LABEL('vDataframe._aggregate_matrix')*/ CORR_MATRIX("pclass", "survived", "age") OVER ()  
  FROM
(  
  SELECT
    "pclass",
    "survived",
    "age",
    "sibsp",
    "parch"  
  FROM
"public"."titanic")  
VERTICAPY_SUBTABLE
[Output: Pearson correlation matrix heatmap of "pclass", "survived", and "age"]

To turn off the SQL code generation option:

[8]:
vp.set_option("sql_on", False)

2. Save the current relation

The vDataFrame works like a view, a stored query that encapsulates one or more SELECT statements. If the generated relation uses many different functions, the computation time for each method call is greatly increased.

Small transformations don’t drastically slow down computation, but heavy transformations (multiple joins, frequent use of advanced analytical functions, moving windows, etc.) can result in a noticeable slowdown. When performing computationally expensive operations, you can aid performance by saving the vDataFrame structure as a table in the Vertica database. We will demonstrate this process in the following example.

First, create a vDataFrame, then perform some operations on that vDataFrame:

[9]:
vdf = vp.vDataFrame("public.titanic")
vdf["sex"].label_encode()["boat"].fillna(method = "0ifnull")["name"].str_extract(
    ' ([A-Za-z]+)\.').eval("family_size", expr = "parch + sibsp + 1").drop(
    columns = ["cabin", "body", "ticket", "home.dest"])["fare"].fill_outliers().fillna()
print(vdf.current_relation())
795 elements were filled.
(
   SELECT
     "pclass",
     "survived",
     "name",
     "sex",
     "age",
     "sibsp",
     "parch",
     COALESCE("fare", 32.9113074018842) AS "fare",
     "embarked",
     "boat",
     "family_size"
   FROM
 (
   SELECT
     "pclass",
     "survived",
     REGEXP_SUBSTR("name", ' ([A-Za-z]+)\.') AS "name",
     DECODE("sex", 'female', 0, 'male', 1, 2) AS "sex",
     COALESCE("age", 30.1524573721163) AS "age",
     "sibsp",
     "parch",
     (CASE WHEN "fare" < -176.6204982585513 THEN -176.6204982585513 WHEN "fare" > 244.5480856064831 THEN 244.5480856064831 ELSE "fare" END) AS "fare",
     COALESCE("embarked", 'S') AS "embarked",
     DECODE("boat", NULL, 0, 1) AS "boat",
     parch + sibsp + 1 AS "family_size"
   FROM
 (

   SELECT

                    "pclass",
     "survived",
     "name",
     "sex",
     "age",
     "sibsp",
     "parch",
     "fare",
     "embarked",
     "boat"

   FROM
 "public"."titanic")
VERTICAPY_SUBTABLE)
VERTICAPY_SUBTABLE)
VERTICAPY_SUBTABLE

To understand how Vertica executes the different aggregations in the above relation, let’s take a look at the query plan:

NOTE: Query plans can be hard to interpret if you don’t know how to parse them. For more information, see query plan information and structure.

[10]:
print(vdf.explain())
------------------------------
QUERY PLAN DESCRIPTION:

EXPLAIN SELECT /*+LABEL('vDataframe.explain')*/ * FROM (SELECT "pclass", "survived", "name", "sex", "age", "sibsp", "parch", COALESCE("fare", 32.9113074018842) AS "fare", "embarked", "boat", "family_size" FROM (SELECT "pclass", "survived", REGEXP_SUBSTR("name", ' ([A-Za-z]+)\.') AS "name", DECODE("sex", 'female', 0, 'male', 1, 2) AS "sex", COALESCE("age", 30.1524573721163) AS "age", "sibsp", "parch", (CASE WHEN "fare" < -176.6204982585513 THEN -176.6204982585513 WHEN "fare" > 244.5480856064831 THEN 244.5480856064831 ELSE "fare" END) AS "fare", COALESCE("embarked", 'S') AS "embarked", DECODE("boat", NULL, 0, 1) AS "boat", parch + sibsp + 1 AS "family_size" FROM ( SELECT "pclass", "survived", "name", "sex", "age", "sibsp", "parch", "fare", "embarked", "boat" FROM "public"."titanic") VERTICAPY_SUBTABLE) VERTICAPY_SUBTABLE) VERTICAPY_SUBTABLE

Access Path:
+-STORAGE ACCESS for titanic [Cost: 129, Rows: 1K (NO STATISTICS)] (PATH ID: 1)
|  Projection: public.titanic_super
|  Materialize: titanic.pclass, titanic.survived, titanic.name, titanic.sex, titanic.age, titanic.sibsp, titanic.parch, titanic.fare, titanic.embarked, titanic.boat
|  Execute on: v_vertica_eon_node0001, v_vertica_eon_node0002, v_vertica_eon_node0003


-----------------------------------------------
PLAN: BASE QUERY PLAN (GraphViz Format)
-----------------------------------------------
digraph G {
graph [rankdir=BT, label = "BASE QUERY PLAN
        Query: EXPLAIN SELECT /*+LABEL(\'vDataframe.explain\')*/ * FROM (SELECT \"pclass\", \"survived\", \"name\", \"sex\", \"age\", \"sibsp\", \"parch\", COALESCE(\"fare\", 32.9113074018842) AS \"fare\", \"embarked\", \"boat\", \"family_size\" FROM (SELECT \"pclass\", \"survived\", REGEXP_SUBSTR(\"name\", \' ([A-Za-z]+)\.\') AS \"name\", DECODE(\"sex\", \'female\', 0, \'male\', 1, 2) AS \"sex\", COALESCE(\"age\", 30.1524573721163) AS \"age\", \"sibsp\", \"parch\", (CASE WHEN \"fare\" \< -176.6204982585513 THEN -176.6204982585513 WHEN \"fare\" \> 244.5480856064831 THEN 244.5480856064831 ELSE \"fare\" END) AS \"fare\", COALESCE(\"embarked\", \'S\') AS \"embarked\", DECODE(\"boat\", NULL, 0, 1) AS \"boat\", parch + sibsp + 1 AS \"family_size\" FROM ( SELECT \"pclass\", \"survived\", \"name\", \"sex\", \"age\", \"sibsp\", \"parch\", \"fare\", \"embarked\", \"boat\" FROM \"public\".\"titanic\") VERTICAPY_SUBTABLE) VERTICAPY_SUBTABLE) VERTICAPY_SUBTABLE

        All Nodes Vector:

          node[0]=v_vertica_eon_node0001 (initiator) Up
          node[1]=v_vertica_eon_node0002 (executor) Up
          node[2]=v_vertica_eon_node0003 (executor) Up
          node[3]=v_vertica_eon_node0004 (executor) Down
          node[4]=v_vertica_eon_node0005 (executor) Down
          node[5]=v_vertica_eon_node0006 (executor) Down
          node[6]=v_vertica_eon_node0007 (executor) Down
          node[7]=v_vertica_eon_node0008 (executor) Down
          node[8]=v_vertica_eon_node0009 (executor) Down

        Participating subscriptions:

          replica: [<1, 2, 0>]
          segment0001: [<0>]
          segment0002: [<1>]
          segment0003: [<2>]
          segment0004: [<0>]
          segment0005: [<1>]
          segment0006: [<2>]
          segment0007: [<0>]
          segment0008: [<1>]
          segment0009: [<2>]
        ", labelloc=t, labeljust=l ordering=out]
0[label = "Root
        OutBlk=[UncTuple(11)]", color = "green", shape = "house"];
1[label = "NewEENode
        OutBlk=[UncTuple(11)]", color = "green", shape = "box"];
2[label = "Recv
        Recv from: v_vertica_eon_node0001, v_vertica_eon_node0002, v_vertica_eon_node0003
        Net id: 1000

        Unc: Integer(8)
        Unc: Integer(8)
        Unc: Varchar(164)
        Unc: Integer(8)
        Unc: Numeric(16, 13)
        Unc: Integer(8)
        Unc: Integer(8)
        Unc: Numeric(18, 13)
        Unc: Varchar(20)
        Unc: Integer(8)
        Unc: Integer(8)", color = "green", shape = "box"];
3[label = "Send
        Send to: v_vertica_eon_node0001
        Net id: 1000

        Unc: Integer(8)
        Unc: Integer(8)
        Unc: Varchar(164)
        Unc: Integer(8)
        Unc: Numeric(16, 13)
        Unc: Integer(8)
        Unc: Integer(8)
        Unc: Numeric(18, 13)
        Unc: Varchar(20)
        Unc: Integer(8)
        Unc: Integer(8)", color = "green", shape = "box"];
4[label = "StorageUnionStep: titanic_super
        Unc: Integer(8)
        Unc: Integer(8)
        Unc: Varchar(164)
        Unc: Integer(8)
        Unc: Numeric(16, 13)
        Unc: Integer(8)
        Unc: Integer(8)
        Unc: Numeric(18, 13)
        Unc: Varchar(20)
        Unc: Integer(8)
        Unc: Integer(8)", color = "purple", shape = "box"];
5[label = "ExprEval:
          titanic.pclass
          titanic.survived
          regexp_substr(titanic.name, E\' ([A-Za-z]+)\\.\', 1, 1, \'\', 0)
          CASE titanic.sex WHEN NULLSEQUAL \'female\' THEN 0 WHEN NULLSEQUAL \'male\' THEN 1 ELSE 2 END
          coalesce(titanic.age, 30.1524573721163)
          titanic.sibsp
          titanic.parch
          coalesce(CASE WHEN (titanic.fare \< (-176.6204982585513)) THEN (-176.6204982585513) WHEN (titanic.fare \> 244.5480856064831) THEN 244.5480856064831 ELSE titanic.fare END, 32.9113074018842)
          coalesce(titanic.embarked, \'S\')
          CASE titanic.boat WHEN NULLSEQUAL NULL THEN 0 ELSE 1 END
          ((titanic.parch + titanic.sibsp) + 1)
        Unc: Integer(8)
        Unc: Integer(8)
        Unc: Varchar(164)
        Unc: Integer(8)
        Unc: Numeric(16, 13)
        Unc: Integer(8)
        Unc: Integer(8)
        Unc: Numeric(18, 13)
        Unc: Varchar(20)
        Unc: Integer(8)
        Unc: Integer(8)", color = "brown", shape = "box"];
6[label = "ScanStep: titanic_super
        pclass
        survived
        name
        sex
        age
        sibsp
        parch
        fare
        embarked
        boat
        Unc: Integer(8)
        Unc: Integer(8)
        Unc: Varchar(164)
        Unc: Varchar(20)
        Unc: Numeric(6, 3)
        Unc: Integer(8)
        Unc: Integer(8)
        Unc: Numeric(10, 5)
        Unc: Varchar(20)
        Unc: Varchar(100)", color = "brown", shape = "box"];
1->0 [label = "V[0] C=11", color = "black", style="bold", arrowtail="inv"];
2->1 [label = "0", color = "blue"];
3->2 [label = "0", color = "blue"];
4->3 [label = "0", color = "blue"];
5->4 [label = "0", color = "blue"];
6->5 [label = "0", color = "blue"];}

Looking at the plan and its associated relation, it’s clear that the transformations we applied to the vDataFrame result in a complicated relation. Each method call to the vDataFrame must use this relation for computation.

NOTE: To better understand your queries, check out the `QueryProfiler <https://www.vertica.com/python/documentation/1.0.x/html/api/verticapy.performance.vertica.qprof.QueryProfiler.html>`__ class.
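
A hedged starting point is sketched below; it assumes QueryProfiler can be imported from verticapy.performance.vertica, as the documentation URL above suggests, and uses help() (see best practice 3) to explore its API rather than guessing constructor arguments:

[ ]:
# Assumed import path, based on the documentation link above; check your
# VerticaPy version if this import fails.
from verticapy.performance.vertica import QueryProfiler

# Inspect the constructor signature and available reporting methods.
help(QueryProfiler)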

To save the relation as a table in the Vertica database and replace the current relation in VerticaPy with the new table relation, use the `to_db() <https://www.vertica.com/python/documentation/1.0.x/html/verticapy.vDataFrame.to_db.html>`__ method with the inplace parameter set to True:

[11]:
vp.drop("public.titanic_clean", method = "table") # drops any existing table with the same schema and name
vdf.to_db("public.titanic_clean",
          relation_type = "table",
          inplace = True)
print(vdf.current_relation())
"public"."titanic_clean"

When dealing with very large datasets, exercise caution before saving relations with complicated transformations. Ideally, perform a thorough data exploration first and execute heavy transformations only when essential.
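
A lighter-weight alternative, sketched below, is to save the relation as a view rather than a table: a view stores only the query text, so no data is duplicated, but each method call still pays the cost of the underlying transformations. This sketch assumes relation_type = "view" is supported by your version of to_db(), and the name public.titanic_view is just an example:

[ ]:
vp.drop("public.titanic_view", method = "view")  # drop any existing view with this name
vdf.to_db("public.titanic_view",
          relation_type = "view",
          inplace = False)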

3. Use the help function

For a quick and convenient way to view information about an object or function, use the help() function:

[12]:
help(vp.connect)
Help on function connect in module verticapy.connection.connect:

connect(section: str, dsn: Optional[str] = None) -> None
    Connects to the database.

    Parameters
    ----------
    section: str
        Name of the section in the
        configuration file.
    dsn: str, optional
        Path to the file containing
        the credentials. If empty,
        the Connection File will be
        used.

    Examples
    --------
    Display all available connections:

    .. code-block:: python

        from verticapy.connection import available_connections

        available_connections()

    ``['VML', 'VerticaDSN', 'VerticaDSN_test']``

    Connect using the VerticaDSN connection:

    .. code-block:: python

        from verticapy.connection import connect

        connect("VerticaDSN")

    .. seealso::

        | :py:func:`~verticapy.connection.available_connections` :
            Displays all available connections.
        | :py:func:`~verticapy.connection.get_connection_file` :
            Gets the VerticaPy connection file.
        | :py:func:`~verticapy.connection.new_connection` :
            Creates a new VerticaPy connection.
        | :py:func:`~verticapy.connection.set_connection` :
            Sets the VerticaPy connection.
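
help() also works on individual vDataFrame methods; for example, to check the parameters of the corr() method used earlier:

[ ]:
help(vp.vDataFrame.corr)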

4. Close your connections

Each connection to the database increases the concurrency on the system, so close connections when you’re done with them. VerticaPy simplifies the connection process by allowing you to create an auto-connection, but connections must be closed manually with the close_connection() function.

To demonstrate, create a database connection:

[ ]:
vp.connect("VerticaDSN")

When you are done making changes, close the connection with the close_connection() function:

[ ]:
vp.close_connection()

It is especially important to close connections when you are working in an environment with multiple users.
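
A defensive pattern worth considering is to wrap your work in try/finally so the connection is released even if an operation raises an error (a sketch reusing the "VerticaDSN" connection name from above):

[ ]:
vp.connect("VerticaDSN")
try:
    vdf = vp.vDataFrame("public.titanic")
    vdf.avg(columns = ["age"])
finally:
    vp.close_connection()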

5. Consider a method’s time complexity

Some techniques are significantly more computationally expensive than others. For example, a Kendall correlation is very expensive compared to a Pearson correlation because, unlike Pearson, Kendall correlations use a cross join, resulting in a time complexity of O(n*n) (where n is the number of rows). Let’s compare the time needed to compute these two correlations on the ‘titanic’ dataset:

[13]:
import time

vdf = vp.vDataFrame("public.titanic")

start_time = time.time()
x = vdf.corr(method = "pearson", show = False)
print("Pearson, time: {0}".format(time.time() - start_time))

start_time = time.time()
x = vdf.corr(method = "kendall", show = False)
print("Kendall, time: {0}".format(time.time() - start_time))
Pearson, time: 0.17660832405090332
100%|██████████| 6/6 [00:00<00:00, 2943.03it/s]
Kendall, time: 0.857501745223999

The Kendall correlation matrix takes longer to compute than the Pearson matrix. As we are using a relatively small dataset, the difference is not very large, but it increases in magnitude with the size of the dataset. Taking the time complexity of operations into account is a key step in optimizing performance, especially with large datasets.
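
If you do need a rank-based measure such as Kendall on a large table, one way to keep the cost in check, sketched below, is to combine this best practice with the first one and restrict the computation to the columns you actually need (the column choice here is illustrative):

[ ]:
# Kendall correlation limited to a small subset of columns
x = vdf.corr(columns = ["age", "fare", "survived"], method = "kendall", show = False)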

6. Limit plot elements

Graphics are an essential tool for understanding your data, but they can become difficult to parse if they contain too many elements. VerticaPy provides options that restrict plots to specified elements. To demonstrate, let’s first draw a bar chart using a categorical column with over a thousand categories:

[3]:
vdf.bar(["name", "survived"])

VerticaPy outputs the bar chart, but the sheer number of categories makes the graph nearly impossible to read. Instead, whenever possible, limit your graphics to a small number of categories suited to your use case:

[4]:
vdf.hist(["pclass", "survived"])

To view the cardinality of your variables, use the nunique() method:

[17]:
vdf.nunique()
[17]:
             approx_unique
"pclass"     3.0
"survived"   2.0
"name"       1233.0
"sex"        2.0
"age"        96.0
"sibsp"      7.0
"parch"      8.0
"ticket"     888.0
"fare"       275.0
"cabin"      181.0
"embarked"   3.0
"boat"       26.0
"body"       118.0
"home.dest"  355.0
Rows: 1-14 | Columns: 2
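
One way to put these counts to work, sketched below, is to keep only low-cardinality columns as plotting candidates. This assumes the object returned by nunique() exposes its values through a .values dictionary keyed by "index" and "approx_unique", matching the output above; the threshold of 20 is arbitrary:

[ ]:
card = vdf.nunique().values
low_card_cols = [
    col for col, n in zip(card["index"], card["approx_unique"]) if n <= 20
]
print(low_card_cols)  # e.g. columns suitable for bar charts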

7. Filter unnecessary data

Filtering your data is a crucial step in data preparation. Proper filtering avoids unnecessary computations and greatly improves the performance of each method call. While the performance impact can be minimal for small datasets, filtering large datasets is key to improving performance.

For example, if we are only interested in analyzing Titanic passengers who had a lifeboat, we can filter on this requirement using the `filter() <https://www.vertica.com/python/documentation/1.0.x/html/verticapy.vDataFrame.filter.html>`__ method:

[18]:
vdf.filter("boat IS NOT NULL")
795 elements were filtered.
[18]:
[Output: rows of the titanic dataset where boat IS NOT NULL (passengers with a lifeboat)]
Columns: pclass (Integer), survived (Integer), name (Varchar(164)), sex (Varchar(20)), age (Numeric(8)), sibsp (Integer), parch (Integer), ticket (Varchar(36)), fare (Numeric(12)), cabin (Varchar(30)), embarked (Varchar(20)), boat (Varchar(100)), body (Integer), home.dest (Varchar(100))