Correlation and Dependency
Finding links between variables is a central task in data science: we want to uncover relationships between variables and understand how those relationships can help us make better decisions.
Machine learning models are also sensitive to the number of variables and how they relate and affect each other, so finding correlations and dependencies can help us make better use of our machine learning algorithms.
Let's use the Telco Churn dataset to understand how we can find links between different variables in VerticaPy.
import verticapy as vp
vdf = vp.read_csv("data/churn.csv")
display(vdf)
The Pearson correlation coefficient is the most common correlation function. It measures the linear relationship between two variables: a strong Pearson coefficient means that the two variables are linearly correlated.
vdf.corr(method = "pearson")
We can see that 'tenure' is strongly correlated with 'TotalCharges', which makes sense.
vdf.scatter(["tenure", "TotalCharges"])
vdf.corr(["tenure", "TotalCharges"], method = "pearson")
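To see what the `pearson` method is computing under the hood, here is a minimal standalone numpy sketch (the toy data below is synthetic, standing in for 'tenure' and 'TotalCharges'; it is not drawn from the Telco dataset):

```python
import numpy as np

# Synthetic stand-ins for 'tenure' and 'TotalCharges': total charges
# grow roughly linearly with tenure, plus some noise.
rng = np.random.default_rng(0)
tenure = rng.uniform(1, 72, size=500)
total_charges = 70 * tenure + rng.normal(0, 50, size=500)

def pearson(x, y):
    """Pearson's r: the covariance of x and y divided by the
    product of their standard deviations."""
    xc = x - x.mean()
    yc = y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

r = pearson(tenure, total_charges)
print(r)  # close to 1: a strong linear relationship
```

Since the relationship is linear by construction, the coefficient lands very close to 1.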
Note, however, that a low Pearson coefficient doesn't imply that the variables are unrelated. For example, let's compute the Pearson correlation coefficient between 'tenure' and 'TotalCharges' raised to the power of 20.
vdf["TotalCharges^20"] = vdf["TotalCharges"] ** 20
vdf.scatter(["tenure", "TotalCharges^20"])
vdf.corr(["tenure", "TotalCharges^20"], method = "pearson")
We know that 'tenure' and 'TotalCharges' are strongly linearly correlated. However, the correlation between 'tenure' and 'TotalCharges' to the power of 20 is much lower. The Pearson correlation coefficient isn't robust to nonlinear monotonic relationships, but rank-based correlations are. Knowing this, we'll calculate Spearman's rank correlation coefficient instead.
vdf.corr(method = "spearman")
Spearman's rank correlation coefficient measures monotonic relationships between variables.
vdf.corr(["tenure", "TotalCharges^20"], method = "spearman")
We can see that Spearman's rank correlation coefficient stays the same if one variable can be expressed as a monotonic function of the other. The same applies to the Kendall rank correlation coefficient.
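This invariance is easy to verify outside the database: Spearman's rho is just Pearson's r computed on the ranks, and ranks are unchanged by any strictly increasing transform. A small numpy sketch on synthetic data (no tie handling, which is fine for continuous values):

```python
import numpy as np

def rank(a):
    """1-based ranks of a; assumes no ties (continuous data)."""
    order = np.argsort(a)
    ranks = np.empty(len(a))
    ranks[order] = np.arange(1, len(a) + 1)
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson's r computed on the ranks."""
    rx, ry = rank(x), rank(y)
    rx -= rx.mean()
    ry -= ry.mean()
    return (rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry))

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200)  # synthetic stand-in for 'tenure'
y = x ** 20                       # a strictly monotonic transform

# Raising x to the power of 20 does not change its ranks,
# so Spearman's rho is 1 (up to floating point).
print(spearman(x, y))
```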
vdf.corr(method = "kendall")
Notice that the Kendall rank correlation coefficient will also detect the monotonic relationship.
vdf.corr(["tenure", "TotalCharges^20"], method = "kendall")
However, the Kendall rank correlation coefficient is very computationally expensive, so we'll generally use Pearson and Spearman when dealing with correlations between numerical variables.
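The cost comes from Kendall's definition: it compares every pair of observations, counting concordant and discordant pairs, which is quadratic in the number of rows. A naive sketch (this is the tau-a variant, on synthetic tie-free data; it is not VerticaPy's implementation):

```python
import numpy as np

def kendall_tau(x, y):
    """Naive Kendall's tau-a: compare every pair of observations.
    The double loop is O(n^2), which is why Kendall is the most
    expensive of the three coefficients on large tables."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, size=100)
print(kendall_tau(x, x ** 20))  # → 1.0 (monotonic transform)
print(kendall_tau(x, -x))       # → -1.0 (reversed order)
```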
Binary features are often treated as numerical, but this isn't technically accurate. Since binary variables can only take two values, calculating correlations between a binary and a numerical variable can lead to misleading results. To account for this, we'll want to use the 'biserial' method to calculate the Point-Biserial correlation coefficient. This powerful method will help us understand the link between a binary variable and a numerical variable.
vdf.corr(method = "biserial")
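The point-biserial coefficient compares the numeric variable's mean in each of the two groups, scaled by its spread and the group proportions; it is mathematically identical to Pearson's r computed on the 0/1 variable directly. A sketch on synthetic data (the churn/charges names and numbers here are illustrative, not taken from the dataset):

```python
import numpy as np

# Synthetic example: a 0/1 churn flag against monthly charges.
rng = np.random.default_rng(3)
churn = rng.integers(0, 2, size=400)
charges = 60 + 25 * churn + rng.normal(0, 10, size=400)

def point_biserial(binary, numeric):
    """Point-biserial r: the difference between the two group means,
    scaled by the (population) standard deviation and the group
    proportions."""
    n = len(binary)
    g1 = numeric[binary == 1]
    g0 = numeric[binary == 0]
    p1, p0 = len(g1) / n, len(g0) / n
    return (g1.mean() - g0.mean()) / numeric.std() * np.sqrt(p1 * p0)

r = point_biserial(churn, charges)
# Identical to Pearson's r on the 0/1 column:
print(abs(r - np.corrcoef(churn, charges)[0, 1]) < 1e-9)  # → True
```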
Lastly, we'll look at the relationship between categorical columns. In this case, the 'Cramer's V' method is very efficient. Since these variables have no position in Euclidean space, Cramer's V cannot be negative (there is no notion of an inverse relationship), and its values range in the interval [0,1].
vdf.corr(method = "cramer")
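Cramer's V is derived from the chi-squared statistic of the two variables' contingency table, normalized so the result always lands in [0,1]. A minimal sketch with a toy table of counts (the counts are invented for illustration):

```python
import numpy as np

def cramers_v(table):
    """Cramer's V from a contingency table of counts:
    V = sqrt(chi2 / (n * (min(rows, cols) - 1))), always in [0, 1]."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    # Expected counts under independence of the two variables.
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    chi2 = ((table - expected) ** 2 / expected).sum()
    return np.sqrt(chi2 / (n * (min(table.shape) - 1)))

# Toy 2x2 table: rows could be two contract types,
# columns stayed/churned.
print(cramers_v([[50, 30],
                 [10, 60]]))
```

When the rows are proportional (the variables are independent), chi-squared is 0 and so is V; a strong association pushes V toward 1.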
Sometimes, we just need to look at the correlation between a response and other variables. The parameter 'focus' will isolate and show us the specified correlation vector.
vdf.corr(method = "cramer", focus = "Churn")
Sometimes a correlation coefficient can lead to incorrect assumptions, so we should always look at the coefficient p-value.
vdf.corr_pvalue("Churn", "customerID", method = "cramer")
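To build intuition for what a p-value guards against, here is a hedged sketch of a permutation test in plain numpy (this is one way to estimate a correlation's significance on synthetic data, not how VerticaPy computes its p-values): shuffle one variable many times and see how often chance alone produces a coefficient as large as the observed one.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=50)
y = 2 * x + rng.normal(scale=0.5, size=50)  # genuinely related to x

def pearson(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

observed = abs(pearson(x, y))
# Shuffling y destroys any real relationship, so the permuted
# coefficients show what pure chance can produce.
perms = [abs(pearson(x, rng.permutation(y))) for _ in range(2000)]
p_value = np.mean([p >= observed for p in perms])
print(p_value)  # near zero: the correlation is unlikely to be chance
```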
We can see that churning correlates to the type of contract (monthly, yearly, etc.) which makes sense: you would expect that different types of contracts differ in flexibility for the customer, and particularly restrictive contracts may make churning more likely.
The type of internet service also seems to correlate with churning. Let's split the different categories into binary features to understand which services influence the overall churn rate.
vdf["InternetService"].one_hot_encode()
vdf.corr(method = "spearman",
focus = "Churn",
columns = ["InternetService_DSL",
"InternetService_Fiber_optic"])
We can see that the Fiber Optic option in particular seems to be directly linked to a customer's likelihood to churn. Let's compute some aggregations to find a causal relationship.
vdf["contract"].one_hot_encode()
vdf.groupby(["InternetService_Fiber_optic"],
["AVG(tenure) AS tenure",
"AVG(totalcharges) AS totalcharges",
'AVG("contract_month-to-month") AS "contract_month-to-month"',
'AVG("monthlycharges") AS "monthlycharges"'])
It seems that users with the Fiber Optic option don't tend to churn because of the option itself, but more likely because of the types of contracts and the monthly charges they pay to get it. Be careful when identifying correlations! Remember: correlation doesn't imply causation!
Another important type of correlation is autocorrelation. Let's use the Amazon dataset to understand it.
from verticapy.datasets import load_amazon
vdf = load_amazon()
display(vdf)
Our goal is to predict the number of forest fires in Brazil. To do this, we can draw an autocorrelation plot and a partial autocorrelation plot.
vdf.acf(column = "number",
ts = "date",
by = ["state"],
p = 48,
method = "pearson")
vdf.pacf(column = "number",
ts = "date",
by = ["state"],
p = 48)
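The idea behind the ACF plot is simple: correlate the series with lagged copies of itself, and seasonality shows up as peaks at multiples of the period. A standalone numpy sketch on a synthetic monthly series (the 12-month cycle and all numbers below are invented for illustration):

```python
import numpy as np

# Synthetic monthly series with a 12-month seasonal cycle,
# standing in for the fire counts.
rng = np.random.default_rng(5)
t = np.arange(240)
series = 10 + 5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, 240)

def acf(x, lag):
    """Sample autocorrelation at a given lag (lag >= 1): Pearson's r
    between the series and a lagged copy of itself."""
    a, b = x[:-lag], x[lag:]
    a = a - a.mean()
    b = b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

print(acf(series, 12))  # strong positive peak at the seasonal period
print(acf(series, 6))   # half a period out of phase: strongly negative
```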
We can see the seasonality of the forest fires.
It's mathematically impossible to build the perfect correlation function, but we still have several powerful functions at our disposal for finding relationships in all kinds of datasets.
