
VerticaPy simplifies data exploration, data cleaning, and machine learning in Vertica.


:star: 2023-12-01: VerticaPy secures 200 stars.

:loudspeaker: 2020-06-27: Vertica-ML-Python has been renamed to VerticaPy.

:warning: The old website has been relocated and will soon be discontinued. We strongly recommend upgrading to the new major release before August 2024.

:warning: The following README applies to VerticaPy 1.0.x and later; some elements may not be present in earlier versions.

:scroll: Some basic syntax can be found in the cheat sheet.

📰 Check out the latest newsletter here.



VerticaPy is a Python library with scikit-learn-like functionality used to conduct data science projects on data stored in Vertica, taking advantage of Vertica's speed and built-in analytics and machine learning features. VerticaPy offers robust support for the entire data science life cycle, uses a 'pipeline' mechanism to sequence data transformation operations, and offers attractive graphical options.

Table of Contents


Vertica was the first real analytic columnar database and is still the fastest in the market. However, SQL alone isn't flexible enough to meet the needs of data scientists.

Python has quickly become the most popular tool in this domain, owing much of its flexibility to its high level of abstraction and an impressively large, ever-growing set of libraries. Its accessibility has led to the development of popular and performant APIs, like pandas and scikit-learn, and a dedicated community of data scientists. Unfortunately, Python only works in-memory as a single-node process. This limitation has driven the rise of distributed programming frameworks, but they, too, are limited as in-memory processes and will never be able to process all of your data, and moving data for processing is prohibitively expensive. On top of all of this, data scientists must also find convenient ways to deploy their data and models. The whole process is time-consuming.

VerticaPy aims to solve all of these problems. The idea is simple: instead of moving data around for processing, VerticaPy brings the logic to the data.
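Bringing the logic to the data can be pictured as query composition: each transformation wraps the previous relation in a subquery, so the logic accumulates in SQL and only the final, fully composed query runs in the database. A toy sketch of that composition in plain Python (the table and column names are purely illustrative, not VerticaPy internals):

```python
# Each "transformation" wraps the previous relation in a subquery;
# nothing is materialized in Python along the way.
steps = [
    "SELECT * FROM titanic",
    "SELECT *, parch + sibsp + 1 AS family_size FROM ({}) s1",
    "SELECT * FROM ({}) s2 WHERE family_size > 2",
]

query = steps[0]
for step in steps[1:]:
    query = step.format(query)

# One nested query is sent to the database at the end,
# rather than one data round-trip per transformation.
print(query)
```
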

3+ years in the making, we're proud to bring you VerticaPy.

Main Advantages:

  • Easy Data Exploration.
  • Fast Data Preparation.
  • In-Database Machine Learning.
  • Easy Model Evaluation.
  • Easy Model Deployment.
  • Flexibility of using either Python or SQL.

:arrow_up: Back to TOC


To install VerticaPy with pip:

# Latest release version
root@ubuntu:~$ pip3 install verticapy[all]

# Latest commit on master branch
root@ubuntu:~$ pip3 install git+

To install VerticaPy from source, run the following command from the root directory:

root@ubuntu:~$ python3 setup.py install

A detailed installation guide is available in the official documentation.

:arrow_up: Back to TOC

Connecting to the Database

VerticaPy is compatible with several clients. For details, see the connection page.

:arrow_up: Back to TOC


The easiest and most accurate way to find documentation for a particular function is to use the help function:

import verticapy as vp

help(vp.vDataFrame)  # for example, the vDataFrame documentation


Official documentation is available on the VerticaPy documentation site.

:arrow_up: Back to TOC


Examples and case studies are available in the documentation.

:arrow_up: Back to TOC

Highlighted Features


VerticaPy offers users the flexibility to customize their coding experience with two visually appealing themes: Dark and Light.

Dark mode, ideal for night-time coding sessions, features a sleek and stylish dark color scheme, providing a comfortable and eye-friendly environment.

On the other hand, Light mode serves as the default theme, offering a clean and bright interface for users who prefer a traditional coding ambiance.

The theme can be switched easily:

import verticapy as vp

vp.set_option("theme", "dark")  # switch back with "light"

VerticaPy's theme-switching option ensures that users can tailor their experience to their preferences, making data exploration and analysis a more personalized and enjoyable journey.

:arrow_up: Back to TOC

SQL Magic

You can use VerticaPy to execute SQL queries directly from a Jupyter notebook. For details, see SQL Magic:


Load the SQL extension.

%load_ext verticapy.sql

Execute your SQL queries.

%%sql
SELECT version();

# Output
# Vertica Analytic Database v11.0.1-0

:arrow_up: Back to TOC

SQL Plots

You can create interactive, professional plots directly from SQL.

To create plots, simply provide the type of plot along with the SQL command.


%load_ext verticapy.jupyter.extensions.chart_magic
%chart -k pie -c "SELECT pclass, AVG(age) AS av_avg FROM titanic GROUP BY 1;"

:arrow_up: Back to TOC

Multiple Database Connection using DBLINK

From a single platform, multiple databases (e.g., PostgreSQL, Vertica, MySQL, in-memory) can be accessed using SQL and Python.


/* Fetch TAIL_NUMBER and CITY after Joining the flight_vertica table with airports table in MySQL database. */
SELECT flight_vertica.TAIL_NUMBER, airports.CITY AS Departing_City
FROM flight_vertica
INNER JOIN &&& airports &&&
ON flight_vertica.ORIGIN_AIRPORT = airports.IATA_CODE;

In the example above, the 'flight_vertica' table is stored in Vertica, whereas the 'airports' table is stored in MySQL. We associate the special symbols "&&&" with the different databases to fetch their data. The best part is that all the aggregation is pushed down to the databases (i.e., it is not done in memory)!
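Conceptually, the "&&&" markers behave like a text-substitution step: before execution, each marked token is replaced by a reference to the corresponding external table. A minimal sketch of that rewriting idea in plain Python (the dblink_wrap helper and the remotes mapping are hypothetical, for illustration only):

```python
import re

def dblink_wrap(query: str, remotes: dict) -> str:
    """Replace each &&& name &&& token with the remote-table
    reference registered for that name (hypothetical helper)."""
    def substitute(match):
        name = match.group(1)
        return remotes.get(name, name)
    return re.sub(r"&&&\s*(\w+)\s*&&&", substitute, query)

query = ("SELECT t.TAIL_NUMBER FROM flight_vertica t "
         "INNER JOIN &&& airports &&& a ON t.ORIGIN_AIRPORT = a.IATA_CODE")
rewritten = dblink_wrap(query, {"airports": "mysql_airports"})
# rewritten now joins against the registered remote reference
```
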

For more details on how to set up DBLINK, please visit the GitHub repo. To learn about using DBLINK in VerticaPy, check out the documentation page.

:arrow_up: Back to TOC

Python and SQL Combo

VerticaPy has a unique place in the market because it allows users to use Python and SQL in the same environment.


import verticapy as vp

selected_titanic = vp.vDataFrame(
    "(SELECT pclass, embarked, AVG(survived) FROM public.titanic GROUP BY 1, 2) x"
)
selected_titanic.groupby(columns=["pclass"], expr=["AVG(AVG)"])

:arrow_up: Back to TOC


VerticaPy comes integrated with three popular plotting libraries: Matplotlib, Highcharts, and Plotly.

A gallery of VerticaPy-generated charts is available in the documentation.

:arrow_up: Back to TOC

Complete Machine Learning Pipeline

  • Data Ingestion

    VerticaPy allows users to ingest data from a diverse range of sources, such as AVRO, Parquet, CSV, and JSON. With the single command read_file, VerticaPy automatically infers the source type and the data types.

    import verticapy as vp

    vp.read_file("/home/laliga/2012.json", table_name="laliga")

Note: Not all columns are displayed in the screenshot above because of width restrictions.

As shown above, read_file has created a nested structure for the complex data, mirroring the structure of the source file.

We can even see the SQL underneath every VerticaPy command by turning on the genSQL option:

  import verticapy as vp

  vp.read_file("/home/laliga/2012.json", table_name="laliga", genSQL=True)

  # Output: the generated SQL (truncated)
    ("away_score" INT, 
     "away_team" ROW("away_team_gender" VARCHAR, 
                     "away_team_group"  VARCHAR, 
                     "away_team_id"     INT, ... 
                                        ROW("id"   INT, 
                                            "name" VARCHAR)), 
     "competition" ROW("competition_id"   INT, 
                       "competition_name" VARCHAR, 
                       "country_name"     VARCHAR), 
     "competition_stage" ROW("id"   INT, 
                             "name" VARCHAR), 
     "home_score" INT, 
     "home_team" ROW("country" ROW("id"   INT, 
                                   "name" VARCHAR), 
                     "home_team_gender" VARCHAR, 
                     "home_team_group"  VARCHAR, 
                     "home_team_id"     INT, ...), 
     "kick_off"     TIME, 
     "last_updated" DATE, 
     "match_DATE"   DATE, 
     "match_id"     INT, ... 
                    ROW("data_version"          DATE, 
                        "shot_fidelity_version" INT, 
                        "xy_fidelity_version"   INT), 
     "season" ROW("season_id"   INT, 
                  "season_name" VARCHAR)) 
     COPY "v_temp_schema"."laliga" 
     FROM '/home/laliga/2012.json' 
     PARSER FJsonParser()

VerticaPy provides functions for importing other specific file types, such as read_json and read_csv. Since these functions focus on a particular file type, they offer more options for handling the data. For example, read_json has a "flatten_arrays" parameter that allows you to flatten nested JSON arrays.
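The nested-to-flat mapping that these readers perform can be illustrated in plain Python: nested fields become dotted column names. This toy helper is only a sketch of the idea, not VerticaPy code:

```python
def flatten(record, prefix=""):
    """Recursively flatten nested dicts into dotted column names,
    mirroring how nested JSON fields map to flat table columns."""
    flat = {}
    for key, value in record.items():
        name = prefix + key
        if isinstance(value, dict):
            flat.update(flatten(value, prefix=name + "."))
        else:
            flat[name] = value
    return flat

match = {"home_score": 2,
         "home_team": {"home_team_id": 217, "country": {"name": "Spain"}}}
print(flatten(match))
# {'home_score': 2, 'home_team.home_team_id': 217, 'home_team.country.name': 'Spain'}
```
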

  • Data Exploration

    There are many options for descriptive and visual exploration.

from verticapy.datasets import load_iris

iris_data = load_iris()
iris_data.scatter(
    ["SepalWidthCm", "SepalLengthCm", "PetalLengthCm"],
    by="Species",  # color the points by species (reconstructed call)
)

The correlation matrix is also very fast and convenient to compute. Users can choose from a wide variety of correlations, including Cramér's V, Spearman, Pearson, and more.

from verticapy.datasets import load_titanic

titanic = load_titanic()

By turning on the SQL print option, users can see and copy SQL queries:

from verticapy import set_option

set_option("sql_on", True)
titanic.corr(method="spearman")

# Output
SELECT
    /*+LABEL('vDataframe._aggregate_matrix')*/ CORR_MATRIX("pclass", "survived", "age", "sibsp", "parch", "fare", "body") OVER ()
FROM (SELECT
    RANK() OVER (ORDER BY "pclass") AS "pclass",
    RANK() OVER (ORDER BY "survived") AS "survived",
    RANK() OVER (ORDER BY "age") AS "age",
    RANK() OVER (ORDER BY "sibsp") AS "sibsp",
    RANK() OVER (ORDER BY "parch") AS "parch",
    RANK() OVER (ORDER BY "fare") AS "fare",
    RANK() OVER (ORDER BY "body") AS "body"
FROM "public"."titanic") spearman_table
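The RANK() calls in the SQL above reveal the trick: the Spearman correlation is just the Pearson correlation computed on ranks, so the database ranks each column first and then runs an ordinary correlation matrix. The same idea in plain Python (assuming no ties, for simplicity):

```python
def rank(values):
    """1-based ranks, matching SQL RANK() when there are no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for position, index in enumerate(order, start=1):
        ranks[index] = position
    return ranks

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # Spearman = Pearson applied to the ranked values
    return pearson(rank(x), rank(y))

print(spearman([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0 for any monotonic pair
```
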

VerticaPy allows users to calculate a focused correlation using the "focus" parameter:

titanic.corr(method="spearman", focus="survived")

import random
import verticapy as vp

data = vp.vDataFrame({"Heights": [random.randint(10, 60) for _ in range(40)] + [100]})  # the appended 100 is an artificial outlier
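The appended value 100 lies far outside the 10-60 range of the other points, making it an obvious outlier. A plain-Python z-score check (a common rule of thumb used here for illustration, not VerticaPy's own outlier method) shows why it stands out:

```python
import random

random.seed(0)  # fixed seed so the sample is reproducible
heights = [random.randint(10, 60) for _ in range(40)] + [100]

mean = sum(heights) / len(heights)
std = (sum((h - mean) ** 2 for h in heights) / len(heights)) ** 0.5

# Flag values more than 2.5 standard deviations from the mean;
# only the appended 100 is far enough out to be flagged.
outliers = [h for h in heights if abs(h - mean) / std > 2.5]
print(outliers)
```
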

  • Machine Learning

    ML is VerticaPy's strongest suit, as it capitalizes on the speed of in-database training and prediction, using SQL in the background to interact with the database. VerticaPy's ML toolset covers a vast array of techniques, including time series forecasting, clustering, and classification.

# titanic_vd is already loaded
# model is an already-created Logistic Regression model
stepwise_result = stepwise(
    model,
    input_relation=titanic_vd,
    X=["age", "fare", "parch", "pclass"],  # illustrative predictor list
    y="survived",
)

:arrow_up: Back to TOC

Loading Predefined Datasets

VerticaPy provides some predefined datasets that can be easily loaded. These datasets include the iris dataset, the titanic dataset, the amazon dataset, and more.

There are two ways to access the provided datasets:

(1) Use the standard Python method:

from verticapy.datasets import load_iris

iris_data = load_iris()

(2) Use the standard name of the dataset from the public schema:

iris_data = vp.vDataFrame(input_relation = "public.iris")

:arrow_up: Back to TOC


The following example follows the VerticaPy quickstart guide.

Install the library using pip.

root@ubuntu:~$ pip3 install verticapy[all]

Create a new Vertica connection:

import verticapy as vp

vp.new_connection(
    {
        "host": "",
        "port": "5433",
        "database": "testdb",
        "password": "XxX",
        "user": "dbadmin",
    },
    name="VerticaDSN",  # illustrative connection name
)

Use the newly created connection:

vp.connect("VerticaDSN")  # "VerticaDSN" is the illustrative name used when saving the connection
Create a VerticaPy schema for native VerticaPy models (that is, models available in VerticaPy, but not in Vertica itself):

vp.create_verticapy_schema()
Create a vDataFrame of your relation:

from verticapy import vDataFrame

vdf = vDataFrame("my_relation")

Load a sample dataset:

from verticapy.datasets import load_titanic

vdf = load_titanic()

Examine your data:

vdf.describe()
Print the SQL query with set_option:

set_option("sql_on", True)
vdf.describe()

# Output
## Compute the descriptive statistics of all the numerical columns ##

SELECT
  SUMMARIZE_NUMCOL("pclass", "survived", "age", "sibsp", "parch", "fare", "body") OVER ()
FROM public.titanic

With VerticaPy, it is now possible to solve an ML problem with just a few lines of code.

from verticapy.machine_learning.model_selection.model_validation import cross_validate
from verticapy.machine_learning.vertica import RandomForestClassifier

# Data Preparation: extract the passenger's title, derive family_size,
# and drop unused columns (start of the method chain reconstructed)
vdf["name"].str_extract(
    r" ([A-Za-z]+)\."
).eval("family_size", expr="parch + sibsp + 1").drop(
    columns=["cabin", "body", "ticket", "home.dest"]
)

# Model Evaluation
cross_validate(
    RandomForestClassifier("rf_titanic", max_leaf_nodes=100, n_estimators=30),
    vdf,
    ["age", "family_size", "sex", "pclass", "fare", "boat"],
    "survived",
)

# Feature importance (reconstructed: fit a model on the same predictors, then plot)
model = RandomForestClassifier("rf_titanic")
model.fit(vdf, ["age", "family_size", "sex", "pclass", "fare", "boat"], "survived")
model.features_importance()

# ROC Curve
model = RandomForestClassifier(
    name="public.RF_titanic",
    n_estimators=20,
    max_features="auto",
    max_leaf_nodes=32,
    sample=0.7,
    max_depth=3,
    min_samples_leaf=5,
    min_info_gain=0.0,
    nbins=32,
)
model.fit(
    "public.titanic",  # input relation
    ["age", "fare", "sex"],  # predictors
    "survived",  # response
)
model.roc_curve()
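As a side note, the str_extract pattern used in the data-preparation step pulls the passenger's title out of the name column; its behavior can be checked with Python's re module (the sample name below is illustrative):

```python
import re

# Same pattern as the str_extract call above: a space, a run of
# letters, then a literal dot -- i.e., the title ("Mr.", "Miss.", ...)
pattern = re.compile(r" ([A-Za-z]+)\.")

name = "Allen, Miss. Elisabeth Walton"
title = pattern.search(name).group(1)
print(title)  # Miss
```
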

:arrow_up: Back to TOC

Help and Support


For a short guide on contribution standards, see the Contribution Guidelines.


:arrow_up: Back to TOC
