Interpret correlation coefficient using correlation matrix - python

I want to apply a regression algorithm, but before that I wanted to study the relationship between my target output (a temperature value) and the remaining variables, so as a first step I plotted the correlation matrix.
As shown in the correlation matrix image (not reproduced here), Temperature is strongly correlated with the first four variables.
Based on the difference between VIN and VBATT, I decided to split the total samples into two subsets, subset 1 and subset 2, and then plotted the correlation matrix for each:
Subset 1: VIN - VBATT > X
Subset 2: VIN - VBATT <= X
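In code, the split looks roughly like this (a minimal sketch; the dataframe, the column names, and the threshold X are hypothetical stand-ins for the real data):

import numpy as np
import pandas as pd

# hypothetical stand-in for the real measurements
rng = np.random.default_rng(0)
df = pd.DataFrame({
    'Temperature': rng.normal(40, 5, 500),
    'VIN': rng.normal(12, 1, 500),
    'VBATT': rng.normal(11, 1, 500),
})
X = 1.0  # hypothetical threshold

mask = (df['VIN'] - df['VBATT']) > X
subset1, subset2 = df[mask], df[~mask]

# correlation of every variable with Temperature, per subset
print(subset1.corr()['Temperature'])
print(subset2.corr()['Temperature'])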
I cannot understand why the correlation between Temperature and VIN has changed so completely, and why all the correlation coefficients have decreased.
Thank you in advance!

Related

How to calculate weighting factors in order to perform a weighted average of a dataframe in Python?

Imagine we have a dataframe of 5 columns. These columns represent temperature values from different climate models. We also have one vector that holds the true observed values. I want to create a multi-model. There are 2 ways to go:
1st way --> just find the average of the 5 columns
2nd way --> find the weighted average of the 5 columns
My question is: how can I calculate the weights for each model? I know I can subtract each model's temperature from the observed temperature and see which model is closest to the real values, using either the mean absolute error or the mean squared error to decide which models should get a bigger weight. But is there a way for Python to calculate the most appropriate weights?
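One common heuristic (my suggestion, not something pandas provides out of the box) is to weight each model by the inverse of its mean squared error against the observations, so that closer models count for more. A minimal sketch with hypothetical data:

import numpy as np
import pandas as pd

# hypothetical stand-ins: 5 model columns plus the observed series
rng = np.random.default_rng(0)
models = pd.DataFrame(rng.normal(15, 2, size=(100, 5)),
                      columns=[f'model_{i}' for i in range(1, 6)])
obs = pd.Series(rng.normal(15, 2, size=100))

# inverse-MSE weighting: the smaller a model's error, the larger its weight
mse = ((models.sub(obs, axis=0)) ** 2).mean()
weights = (1 / mse) / (1 / mse).sum()   # normalised so the weights sum to 1

multi_model = models.mul(weights, axis=1).sum(axis=1)

A least-squares fit of the observations on the five model columns (e.g. with np.linalg.lstsq) would give another, formally optimal, set of weights.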

Regression analysis for linear regression

I have a regression model whose target variable (days) takes quantitative values ranging from 2 to 30. My RMSE is 2.5, and all the other X variables are nominal (categorical), so I have dummy-encoded them.
I want to know what would be a good value of RMSE. I would like to get it within 1-1.5 or even lower, but I don't know what I should do to achieve that.
Note: I have already tried feature selection and removing features with low importance.
Any ideas would be appreciated.
If your x values are categorical, then it does not necessarily make much sense to bind them to a uniform grid. Who's to say categories A and B should be spaced as far apart as B and C? Assuming that they are will only lead to a misleading representation of your results.
Since the choice of scale is the unknown here, you would do better, in terms of visualisation, to set your uniform x grid to the day number and then see where the categories would fall on the y scale if given a linear relationship; see the sketch below.
RMS error doesn't come into it at all if you don't have quantitative data for both x and y.
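A minimal sketch of that visualisation, with hypothetical category and day data (the column names are mine):

import matplotlib.pyplot as plt
import pandas as pd

# hypothetical data: nominal categories and the numeric target in days
df = pd.DataFrame({'category': list('AABBCC'),
                   'days': [3, 5, 12, 14, 25, 28]})

# days on the uniform x grid; the categories fall where they may on y
plt.scatter(df['days'], df['category'])
plt.xlabel('days')
plt.ylabel('category')
plt.show()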

How to calculate a covariance matrix with Pandas

I'm trying to figure out how to calculate a covariance matrix with Pandas.
I'm not a data scientist or a finance guy, I'm just a regular dev going a bit out of his league.
import pandas as pd
import numpy as np

# 252 rows of random integers in [0, 100) across four columns
df = pd.DataFrame(np.random.randint(0, 100, size=(252, 4)), columns=list('ABCD'))
print(df.cov())
So when I do this, I get that kind of output (a 4x4 matrix of large values, not reproduced here).
I find the numbers huge, and I was expecting them to be closer to zero. Do I have to calculate the returns before getting the covariance?
Could anyone familiar with this explain it a little, or point me to a good link with an explanation? I couldn't find any "Covariance Matrix For Dummies" link.
Regards,
Julien
Covariance is a measure of the degree to which returns on two assets (or any two vectors or arrays) move in tandem. A positive covariance means that asset returns move together, while a negative covariance means returns move inversely.
On the other side we have:
The correlation coefficient is a measure that determines the degree to which two variables' movements are associated. Note that the correlation coefficient measures the linear relationship between two arrays/vectors/assets.
So portfolio managers try to reduce the covariance between two assets and keep the correlation coefficient negative to have enough diversification in the portfolio, meaning that a decrease in one asset's return will not cause a decrease in the return of the second asset (that's why we want negative correlation).
Maybe you meant correlation coefficient close to zero, not covariance.
The fact that you haven't provided a seed for your randomly generated numbers makes your experiment difficult to reproduce. However, I tried the code you provided here, and the closest covariance matrix I got looks much the same (output not reproduced here).
To understand why the numbers in your covariance matrix are so huge, you should first understand what a covariance matrix is. The covariance matrix is the matrix whose element in the (i, j) position is the covariance between the i-th and j-th elements of a random vector.
A good link you might check is https://en.wikipedia.org/wiki/Covariance_matrix. Understanding the correlation matrix might also help: https://en.wikipedia.org/wiki/Correlation_and_dependence#Correlation_matrices
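To illustrate the point about returns: on price-like data, the covariance of daily percentage returns is on a much smaller scale than the covariance of the raw levels. A minimal sketch with seeded, hypothetical prices:

import numpy as np
import pandas as pd

rng = np.random.default_rng(42)          # a seed makes the run reproducible
prices = pd.DataFrame(rng.integers(1, 100, size=(252, 4)).cumsum(axis=0),
                      columns=list('ABCD'))

returns = prices.pct_change().dropna()   # daily percentage returns

print(prices.cov())    # huge numbers, as in the question
print(returns.cov())   # far closer to zero
print(returns.corr())  # bounded in [-1, 1]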

AgglomerativeClustering on a correlation matrix

I have a correlation matrix of typical structure, of size 288x288, defined by:
from sklearn.cluster import AgglomerativeClustering
df = read_returns()
correl_matrix = df.corr()
where read_returns gives me a dataframe with a date index, and columns of the returns of assets.
Now - I want to cluster these correlations to reduce the population size.
By doing some reading and experimenting I discovered AgglomerativeClustering - and it appears at first pass to be an appropriate solution to my problem.
I define a distance metric as ((.5*(1-correl_matrix))**.5) and have:
cluster = AgglomerativeClustering(n_clusters=40, linkage='average')
cluster.fit(((.5*(1-correl_matrix))**.5).values)
label_groups = cluster.labels_
To observe some of the data and cross-check my work, I pick out cluster 1, look at the pairwise correlations, and find the minimum correlation between two items within that group:
single_cluster = []
for i in range(0, correl_matrix.shape[0]):
    if label_groups[i] == 1:
        single_cluster.append(correl_matrix.index[i])

min_correl = 1.0
for x in single_cluster:
    for y in single_cluster:
        if x != y:
            if correl_matrix[x][y] < min_correl:
                min_correl = correl_matrix[x][y]
print(min_correl)
and get a minimum pairwise correlation of 0.20.
To me this seems quite low - but "low based off what?" is a fair question to which I have no answer.
I would like to anticipate/enforce every pairwise correlation within a cluster to be >= 0.7, or something like this.
Is this possible in AgglomerativeClustering?
Am I accidentally going down the wrong path?
Hierarchical clustering supports different "linkage" strategies.
single-link: connects based on the minimum distance from a point to the others in the cluster
complete-link: connects based on the maximum distance to the cluster
...
If you want a high minimum correlation = small maximum distance, this calls for complete linkage.
You may want to treat negative correlations as "good", too.
i.e. use dist = 1 - abs(corr).
Make sure to use the dendrogram. If you have outliers in your data, you want to cut into (n_clusters + n_outliers) partitions.
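A minimal sketch of the complete-linkage suggestion, reusing correl_matrix from the question (distance_threshold and the metric keyword assume scikit-learn >= 1.2; older releases call the keyword affinity):

from sklearn.cluster import AgglomerativeClustering

# distance that also treats strong negative correlation as "close"
dist = 1.0 - correl_matrix.abs()

# complete linkage bounds the *maximum* distance inside each cluster, so
# a threshold of 0.3 enforces |corr| >= 0.7 for every pair in a cluster
cluster = AgglomerativeClustering(
    n_clusters=None,
    metric='precomputed',
    linkage='complete',
    distance_threshold=0.3,
)
labels = cluster.fit_predict(dist.values)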

Calculating Pearson correlation

I'm trying to calculate the Pearson correlation coefficient of two variables. These variables are meant to determine whether there is a relationship between the number of postal codes and a range of distances. So I want to see if the number of postal codes increases or decreases as the distance range changes.
I'll have one list that counts the number of postal codes within a distance range, and another list that holds the actual ranges.
Is it OK to have a list that contains a range of distances? Or would it be better to have a list like [50, 100, 500, 1000], where each element represents distances up to that amount? So, for example, the list represents up to 50 km, then from 50 km to 100 km, and so on.
Use scipy:
scipy.stats.pearsonr(x, y)
Calculates a Pearson correlation coefficient and the p-value for testing non-correlation.
The Pearson correlation coefficient measures the linear relationship between two datasets. Strictly speaking, Pearson’s correlation requires that each dataset be normally distributed. Like other correlation coefficients, this one varies between -1 and +1 with 0 implying no correlation. Correlations of -1 or +1 imply an exact linear relationship. Positive correlations imply that as x increases, so does y. Negative correlations imply that as x increases, y decreases.
The p-value roughly indicates the probability of an uncorrelated system producing datasets that have a Pearson correlation at least as extreme as the one computed from these datasets. The p-values are not entirely reliable but are probably reasonable for datasets larger than 500 or so.
Parameters:
x : 1D array
y : 1D array, the same length as x
Returns:
(Pearson's correlation coefficient, 2-tailed p-value)
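For example, with hypothetical bin edges and counts along the lines the question describes (with only four points, the p-value won't mean much):

from scipy import stats

# hypothetical data: upper edge of each distance band (km) and the
# number of postal codes that fall inside it
distances = [50, 100, 500, 1000]
counts = [120, 85, 40, 12]

r, p = stats.pearsonr(distances, counts)
print(r, p)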
You can also use numpy:
numpy.corrcoef(x, y)
which would give you a correlation matrix that looks like:
[[1 correlation(x, y)]
[correlation(y, x) 1]]
Try this:
val = Top15[['Energy Supply per Capita', 'Citable docs per Capita']].rank().corr(method='pearson')
In Python 3.10, a correlation() function was added to the statistics module of the standard library; it can be used directly after importing the statistics module:
import statistics
statistics.correlation(words, views)
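For instance, with two small hypothetical lists:

import statistics

x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 11]

print(statistics.correlation(x, y))  # close to 1 for near-linear data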
