Creating 30 Plotly charts based on multi-index Pandas series - python

I've got a DataFrame with PLAYER_NAME, the corresponding cluster they're assigned to, their team's net rating, and their team ids. This is what it looks like:
I'd like to have a bunch of bar charts for each team that look like the following:
These would be matched with the team's net rating and id. I've tried using groupby to get a multi-index Pandas series where there's a team_id and a cluster number mapped to the number of times that cluster appears for a certain team. It looks like this:
Unfortunately this removes the net rating, so I'm really not sure how to get all that info. I'm using plotly right now but if there's a solution with matplotlib that's fine.
This is the groupby code:
temp_df = pos_clusters.groupby(['TEAM_ID'])['Cluster'].value_counts().sort_index()
Thanks so much!
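One way to keep the net rating alongside the cluster counts is to count with groupby(...).size() and then merge the per-team rating back onto the counts before plotting one figure per team. A rough sketch, assuming the columns TEAM_ID and Cluster as above and a net-rating column (named NET_RATING here as a guess):

import pandas as pd
import plotly.express as px

# Count how often each cluster appears per team, as a regular DataFrame
counts = (
    pos_clusters.groupby(['TEAM_ID', 'Cluster'])
    .size()
    .reset_index(name='count')
)

# Bring the net rating back in by merging the per-team rating onto the counts
ratings = pos_clusters[['TEAM_ID', 'NET_RATING']].drop_duplicates()
counts = counts.merge(ratings, on='TEAM_ID')

# One bar chart per team, titled with the team's id and net rating
for team_id, team_df in counts.groupby('TEAM_ID'):
    net_rating = team_df['NET_RATING'].iloc[0]
    fig = px.bar(
        team_df,
        x='Cluster',
        y='count',
        title=f'Team {team_id} (net rating: {net_rating})',
    )
    fig.show()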

Related

Trying to create an ML model

https://docs.google.com/spreadsheets/d/1_UkbdbxHKKfS7ibnR7HVS2lYE_Cr0WMk5zKRbTBddJY/edit?usp=sharing
The data table is in the spreadsheet linked above.
I am trying to find the product that a particular cust_id (customer) purchased the most; the same customer id appears in multiple rows. I expect output like this:
CustId   Product_ID
1        Food
2        Electronic
I also need to plot this, so that for any given data the model gives me output like the above.
I have tried groupby and many other pandas functions, but as I'm new to machine learning I've been unable to solve this. Can anyone help me?
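Since the expected output is really a frequency count per customer rather than a trained model, plain pandas may be enough here. A rough sketch, assuming the sheet has CustId and Product_ID columns as in the expected output (the CSV filename is a placeholder):

import pandas as pd
import matplotlib.pyplot as plt

# The shared spreadsheet exported as CSV (placeholder filename)
df = pd.read_csv('purchases.csv')

# For each customer, find the product they purchased most often
top_products = (
    df.groupby('CustId')['Product_ID']
    .agg(lambda s: s.value_counts().idxmax())
    .reset_index()
)
print(top_products)

# Plot the number of purchases per customer and product
counts = df.groupby(['CustId', 'Product_ID']).size().unstack(fill_value=0)
counts.plot(kind='bar', stacked=True)
plt.show()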

Create groupings on data

I have input data like this:
The output I require is a grouping of all the elements, giving the distinct customers and total sales for each group. I have tried cross matrix, group by, and cube functions so far, but I am not getting the desired results.
Output expected:
Any help will be highly appreciated.
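If the goal is the number of distinct customers plus total sales per group, a named-aggregation groupby is one option. A rough sketch with fully guessed column names ('group', 'customer_id', 'sales'), since the actual input isn't shown:

import pandas as pd

# Placeholder filename and column names -- adjust to the real input data
df = pd.read_csv('input.csv')

summary = (
    df.groupby('group')
    .agg(distinct_customers=('customer_id', 'nunique'),
         total_sales=('sales', 'sum'))
    .reset_index()
)
print(summary)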

two datasets, to find relation based on date range

I have two datasets: one with the times of volcano eruptions, the other with earthquakes.
Both have a "Date" column.
I would like to run some kind of loop to find out, based on the dates, whether an earthquake is linked to a volcanic eruption.
The idea is to check whether the dates of the two events are close enough, let's say within a 4-day range, and then create a new column in the earthquake dataset stating yes or no (volcano related or not)....
I have no idea how to even start, or whether this is even possible.
Here are the datasets:
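One possible starting point is pandas' merge_asof, which matches each earthquake to the nearest eruption date within a tolerance. A rough sketch, with file and column names assumed from the description above:

import pandas as pd

# Placeholder filenames; both datasets are assumed to have a 'Date' column
volcanoes = pd.read_csv('volcanoes.csv', parse_dates=['Date'])
earthquakes = pd.read_csv('earthquakes.csv', parse_dates=['Date'])

# merge_asof requires both frames to be sorted on the key
volcanoes = volcanoes.sort_values('Date')
earthquakes = earthquakes.sort_values('Date').reset_index(drop=True)

# For each earthquake, find the nearest eruption date within a 4-day window
matched = pd.merge_asof(
    earthquakes,
    volcanoes[['Date']].rename(columns={'Date': 'eruption_date'}),
    left_on='Date',
    right_on='eruption_date',
    direction='nearest',
    tolerance=pd.Timedelta(days=4),
)

# Flag earthquakes that had an eruption close enough in time
earthquakes['volcano_related'] = matched['eruption_date'].notna().map({True: 'yes', False: 'no'})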

How to visualize aggregate VADER sentiment score values over time in Python?

I have a Pandas dataframe containing tweets from the period July 24, 2019 to October 19, 2019. I have applied the VADER sentiment analysis method to each tweet and added the sentiment scores in new columns.
Now, my hope was to visualize this in some kind of line chart in order to analyse how the averaged sentiment scores per day have changed over this three-month period. I therefore need the dates on the x-axis, and the averaged negative, positive and compound scores (three different lines) on the y-axis.
I have an idea that I need to somehow group or resample the data in order to show the aggregated sentiment value per day, but since my Python skills are still limited, I have not succeeded in finding a solution that works yet.
If anyone has an idea as to how I can proceed, that would be much appreciated! I have attached a picture of my dataframe as well as an example of the type of plot I had in mind :)
Cheers,
Nicolai
You should have a look at the groupby() method:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html
Simply create a day column which contains a timestamp/datetime object/dict/tuple/str ... representing the day of the tweet rather than its exact time. Then use the groupby() method on this column.
If you don't know how to create this column, an easy way of doing it is using https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html
Keep in mind that the groupby method doesn't return a DataFrame but a DataFrameGroupBy object, so you'll have to choose a way of aggregating the data in your groups (you should probably do groupby().mean() in your case; see the groupby method documentation for more information).
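A rough sketch of the above, assuming the tweet timestamps live in a 'date' column and the VADER scores in 'neg', 'pos' and 'compound' columns (column names guessed -- adjust to the actual dataframe):

import pandas as pd
import matplotlib.pyplot as plt

# Make sure the date column is a real datetime, then average per calendar day
df['date'] = pd.to_datetime(df['date'])
daily = (
    df.set_index('date')[['neg', 'pos', 'compound']]
    .resample('D')   # one bucket per day
    .mean()          # average sentiment per day
)

# Three lines: negative, positive and compound scores over time
daily.plot()
plt.xlabel('Date')
plt.ylabel('Average VADER score')
plt.show()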

Find correlation between two columns in two different dataframes

I have two dataframes which both have an ID column, and for each ID a date columns with timestamps and a Value column. Now, I would like to find a correlation between the values from each dataset in this way: dataset 1 has all the values of people that got a specific disease, and in dataset 2 there are values for people that DIDN'T get the disease. Now, using the corr function:
corr = df1['val'].corr(df2['val'])
my result is 0.1472, which is very low (too low), meaning they have almost no correlation.
Am I doing something wrong? How should I calculate the correlation? Is there a way to find a value (maybe a threshold line) above which people will get the disease? I would like to try this with a machine learning technique (SVMs), but first it would be good to have something like what I explained above. How can I do that?
Thanks
Maybe your low correlation is due to the index or order of your observations.
Have you tried to do a left join by ID?
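A sketch of that suggestion, using the ID and val columns described in the question, so the correlated values actually belong to the same ID:

import pandas as pd

# Join the two frames on their shared ID so that the values being correlated
# refer to the same entity rather than whatever happens to share an index label
merged = df1.merge(df2, on='ID', how='left', suffixes=('_df1', '_df2'))

# .corr() skips rows where either value is missing after the join
corr = merged['val_df1'].corr(merged['val_df2'])
print(corr)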
