This is some of the data located in the Excel sheet.
I want to select the musical theater shows (identified in the code as 'ID') that had more minorities than Caucasians in the cast.
Once determined, I want to place the selected shows into a new data frame that holds only those shows, because that will be easier to manipulate. In the new data frame, each show's row should also carry the related ethnicity counts, so I can compare them to the audience ethnicity. I then tried to plot this information.
More generally, I want to add up the values in specific rows when a row fits certain summation criteria. All data used in this project lives in an Excel sheet that is converted to a CSV and uploaded as a data frame. I would then like to plot the values for the cast as a whole and compare the cast's ethnicity to the audience's ethnicity.
I am working in Python. I have tried to remove unneeded data by selecting columns with if statements, so that the data frame only includes the shows that have more minorities than Caucasians, and I then tried to use this information in the plot. I am also unsure whether I have to filter out the unneeded columns if I am not using them in the calculations.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# first import numpy and pandas so that calculations can be made
from google.colab import files
uploaded = files.upload()
# df = pd.read_csv('/content/drive/My Drive/allTheaterDataV2.csv')
import io
df = pd.read_csv(io.BytesIO(uploaded['allTheaterDataV2.csv']))
# need to download excel sheet as csv and then upload into colab so that it can
# be manipulated as a dataframe
# want to select shows(ID) that had more minorities than Caucasians in the cast
# once determined, the selected shows should be placed into a new data frame that
# will only hold the shows and the related ethnicity, and compared to audience ethnicity
# this information should then be plotted
# first we will determine the shows that have a majority ethnic cast
minorcal = list(df)
minorcal.remove('CAU')
minoritycastSUM = df[minorcal].sum(axis=1)
# print(minorcal)
# next, we determine how many people in the cast were Caucasian, so remove all others
caucasiancal = list(df)
# I first wanted to do caucasiancal.remove('AFRAM', 'ASIAM', 'LAT', 'OTH')
# but got an error that remove() takes only one argument, so I put each on its own line
caucasiancal.remove('AFRAM')
caucasiancal.remove('ASIAM')
caucasiancal.remove('LAT')
caucasiancal.remove('OTH')
idrowcaucal = df[caucasiancal].sum(axis=1)
minoritycompare = old.filter(['idrowcaucal','minoritycastSUM'])
print(minoritycompare)
# now compare the two values per line
if minoritycastSUM < caucasiancal:
minoritydf = pd.df.minorcal.append()
# plot new data frame per each show and compare to audience ethnicity
df.plot(x=['AFRAM', 'ASIAM', 'CAU', 'LAT', 'OTH', 'WHT', 'BLK', 'ASN', 'HSP', 'MRO'], y = [''])
# I am unsure how to call the specific value for each column
plt.title('ID Ethnicity Comparison')
# I am unsure how to call the specific show so that only one show is per plot, so for now I just subbed in 'ID'
plt.xlabel('Ethnicity comparison')
plt.ylabel('Number of Cast Members/Audience Members')
plt.show()
I would like to see the data frame with the specific shows that fit the criteria, and then the plot for each show, but right now I am getting errors about how to formulate the new data frame, and Python says the lengths in the if statement cannot be used.
First of all, this will not be a complete answer, as:
I don't know what you imagine your final plot to look like,
I don't know what the columns in your DataFrame are (consider using more descriptive column labels, e.g. 'caucasian actors' instead of 'CAU', …), and
it is unclear to me whether any trend can be drawn from your data, since the screenshot you've posted shows equal audience compositions for the first shows.
Nevertheless, I built upon the DataFrame in this answer, and maybe this initial plot of the "non caucasian/caucasian ratio" per show can point you in the right direction.
Perhaps you could build a similar set of sum and ratio columns for the audience columns, and then plot the actor ratio as a function of the audience ratio to see whether a more caucasian audience prefers more or fewer caucasian actors (I guess that's what you're after?); a rough sketch of that idea follows the code below.
import pandas as pd
import matplotlib.pyplot as plt
df = pd.DataFrame({'ID':['Billy Elliot','next to normal','shrek','guys and dolls',
'west side story', 'pal joey'],
'Season' : [20082009,20082009,20082009,
20082009,20082009,20082009],
'AFRAM' : [2,0,4,4,0,1],
'ASIAM' : [0,0,1,0,0,0],
'CAU' : [48,10,25,24,28,20],
'LAT' : [1,0,1,3,18,0],
'OTH' : [0,0,0,0,0,0],
'WHT' : [73.7,73.7,73.7,73.7,73.7,73.7]})
## define a sum column for non caucasian actors (I suppose?)
df['non_cau']=df[['AFRAM','ASIAM','LAT','OTH']].sum(axis=1)
## build a ratio of non caucasian to caucasian
df['cau_ratio']=df['non_cau']/df['CAU']
## make a quick plot
fig,ax=plt.subplots()
ax.scatter(df['ID'],df['cau_ratio'])
ax.set_ylabel('non cau / cau ratio')
plt.tight_layout()
plt.show()
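Following up on the audience idea: a rough, untested sketch that continues from the DataFrame above. The audience column names (BLK, ASN, HSP, MRO) are taken from your plotting call, but the values below are made up purely for illustration; it also shows the row selection for shows with more minorities than Caucasians that you asked about.
## hypothetical audience columns (BLK, ASN, HSP, MRO) with made-up values,
## purely to illustrate the idea -- substitute your real audience data here
df['BLK'] = [10.0, 9.5, 11.2, 10.8, 12.0, 9.9]
df['ASN'] = [5.0, 4.8, 5.1, 5.3, 4.9, 5.2]
df['HSP'] = [8.0, 8.2, 7.9, 8.1, 8.4, 8.0]
df['MRO'] = [3.3, 3.8, 2.1, 2.1, 1.0, 3.2]
## same pattern as for the cast: sum of the non-white audience shares,
## and their ratio to the white (WHT) share
df['aud_non_cau'] = df[['BLK', 'ASN', 'HSP', 'MRO']].sum(axis=1)
df['aud_cau_ratio'] = df['aud_non_cau'] / df['WHT']
## the selection criterion from the question: shows whose cast had
## more minorities than Caucasians (happens to be empty for this toy data)
majority_minority = df[df['non_cau'] > df['CAU']]
print(majority_minority[['ID', 'non_cau', 'CAU']])
## actor ratio as a function of audience ratio
fig, ax = plt.subplots()
ax.scatter(df['aud_cau_ratio'], df['cau_ratio'])
ax.set_xlabel('audience non cau / cau ratio')
ax.set_ylabel('cast non cau / cau ratio')
plt.tight_layout()
plt.show()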
Related
So I have this df; the columns that I'm interested in visualizing later with matplotlib are 'incident_date' and 'fatalities'. I want to create two diagrams. One will display the number of incidents with injuries (the column named 'fatalities' says whether it was a fatal accident, one with just injuries, or neither); the other will display the dates with the most deaths. So, in order to do that, I need to somehow turn the data in the 'fatalities' column into numerical values.
This is my df's head, so you get an idea:
I created dummy data based on the picture you provided:
import pandas as pd

data = {'incident_date': ['1-Mar-20','1-Mar-20','3-Mar-20','3-Mar-20','3-Mar-20','5-Mar-20','6-Mar-20','7-Mar-20','7-Mar-20'],
        'fatalities': ['Fatal','Fatal','Injuries','Injuries','Neither','Fatal','Fatal','Fatal','Fatal'],
        'conclusion_number': [1,1,3,23,23,34,23,24,123]}
df = pd.DataFrame(data)
All you need to do is group by incident_date and fatalities, and you will get the numerical counts for that particular date and that particular fatality category.
df_grp = df.groupby(['incident_date','fatalities'],as_index=False)['conclusion_number'].count()
df_grp.rename({'conclusion_number':'counts'},inplace=True, axis=1)
The output of the above looks like this:
Once you have the counts column you can build your matplotlib diagrams.
Let me know if you need help with the diagrams as well; a possible starting point for the first one is sketched below.
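For example, a minimal, untested sketch of the first diagram (incident counts per date, split by the fatalities category), built directly on df_grp from above:
import matplotlib.pyplot as plt

# Reshape so each fatalities category becomes its own column,
# indexed by incident date
pivoted = df_grp.pivot(index='incident_date', columns='fatalities',
                       values='counts').fillna(0)

# Grouped bar chart: number of incidents per date, per category
pivoted.plot(kind='bar')
plt.ylabel('number of incidents')
plt.tight_layout()
plt.show()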
import matplotlib.pyplot as plt

country = str(input())

# f: file handle for the population CSV (opened earlier, not shown)
lines = f.readlines()

x = []
y = []
results = []

for line in lines:
    words = line.split(',')
    # ... filter on `country` and append dates and populations to x and y ...

f.close()

plt.plot(x, y)
plt.show()
The first problem is in the title of the plot: it shows Population inCountryI instead of Population in Country I.
The second problem is in the graph itself.
While my answer could point out the mistakes in your code, I think it might also be enlightening to show another, perhaps more standard, way of doing this. This is particularly useful if you're going to do this more often, or with large datasets.
Handling CSV files and creating subgroups out of them by yourself is nice, but can become very tricky. Python already has a built-in csv module, but the Pandas library is nowadays basically the default (there are other options as well) for handling tabular data, which means it is widely available and/or easy to install. Plus it goes well with Matplotlib. (Read some of Pandas' user's guide for a good overview.)
With Pandas, you can use the following (I've put comments in between the actual code):
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt

mpl.rcParams['figure.figsize'] = (8, 8)
# Read the CSV file into a Pandas dataframe
# For a normal CSV, this will work fine without tweaks
df = pd.read_csv('population.csv')
# Convert the month and year columns to a datetime
# Years have to be converted to string type for that
# '%b%Y' is the format for month abbreviation (English) and 4-digit year;
# see e.g. https://strftime.org/
# Instead of creating a new column, we set the date as the index ("row-indices")
# of the dataframe
df.index = pd.to_datetime(df['Month'] + df['Year'].astype(str), format='%b%Y')
# We can remove the month and year columns now
df = df.drop(columns=['Month', 'Year'])
# For nicety, replace the dot in the country name with a space
df['Country'] = df['Country'].str.replace('.', ' ', regex=False)
# Group the dataframe by country, and loop over the groups
# The resulting grouped dataframes, `grouped`, will have just
# their index (date) values and population values
# The .plot() method will therefore automatically use
# the index/dates as x-axis, and the population as
# y-axis.
for country, grouped in df.groupby('Country'):
    # Use the convenience .plot() method
    grouped.plot()
    # Standard Matplotlib functions are still available
    plt.title(country)
The resulting plots are shown below (2, given the example data).
If you don't want a legend (since there is only one line), use grouped.plot(legend=None) instead.
If you want to pick one specific country, remove and replace the whole for-loop with the following
country = "Country II"
df[df['Country'] == country].plot()
If you want to do even more, also have a look at the Seaborn library.
Resulting plots:
My first post, so I hope I do this correctly!
This is admittedly an OOP modification of something on DataCamp. I have two objects which contain Pandas dataframes. The first (StockData) has stock data for every trading day of 2016 for both Amazon and Facebook. The second (BenchmarkData) has the S&P 500 closing values for every trading day of 2016. For both, I want to calculate the percent change (StockReturns and BenchmarkReturns, respectively) and then plot them. I want both of the StockReturns on the same plot, but the BenchmarkReturns (which is a Series and not a dataframe, for reasons irrelevant to this part of the code) on a separate plot.
For my function, I've added a flag as input to tell the program whether the object contains a stock dataframe or a benchmark dataframe, and I call the function twice during runtime: once for the stock data and once for the benchmark. However, no matter what I do, Pandas plots all three on the same plot. How do I separate the benchmark data and get it on its own plot?
def _CalculatePercentChange(self, IsStockData):
    if IsStockData:
        self.__StockReturns = self.__StockData.GetData().pct_change()
        self.__StockReturns.plot(title='Daily Percent Change')
    else:
        self.__BenchmarkReturnsDataFrame = self.__BenchmarkData.GetData().pct_change()
        self.__BenchmarkReturns = self.__BenchmarkReturnsDataFrame['S&P 500'].squeeze()
        self.__BenchmarkReturns.plot(title='Daily Percent Change')
Thanks guys.
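One way to guarantee separate plots is to create a dedicated Matplotlib figure for each call and hand its axes to pandas via the ax= keyword. A minimal, self-contained sketch (the numbers below are made-up stand-ins for the StockData and BenchmarkData objects) could look like this:
import pandas as pd
import matplotlib.pyplot as plt

# Made-up stand-ins for the two data sets described in the question
stock_returns = pd.DataFrame({'Amazon': [0.01, -0.02, 0.03],
                              'Facebook': [0.00, 0.01, -0.01]})
benchmark_returns = pd.Series([0.005, -0.004, 0.002], name='S&P 500')

# Each .plot() call gets its own explicitly created axes,
# so the stock and benchmark data end up in separate figures
fig_stock, ax_stock = plt.subplots()
stock_returns.plot(ax=ax_stock, title='Daily Percent Change (stocks)')

fig_bench, ax_bench = plt.subplots()
benchmark_returns.plot(ax=ax_bench, title='Daily Percent Change (benchmark)')

plt.show()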
Though Pandas has time series functionality, I am still struggling with dataframes that have incomplete time series data.
See the pictures below: the lower picture has complete data, the upper has gaps. Both pictures show correct values. In red are the columns that I want to calculate using the data in black. Column Cumm_Issd shows the accumulated shares issued during the year; MV is market value.
I want to calculate the issued shares per quarter (IssdQtr), the quarterly change in market value (D_MV_Q) and the MV of last year (L_MV_Y).
For the underlying CSV data, see this link for the full data and this link for the gapped data. There are two firms, 1020180 and 1020201.
However, when I try Pandas' shift method, it fails when there are gaps; try it yourself using the CSV files and the code below. The calculated columns (DiffEq, Dif1MV, Lag4MV) differ, for some quarters, from IssdQtr, D_MV_Q and L_MV_Y, respectively.
Are there ways to deal with gaps in data using Pandas?
import pandas as pd
import numpy as np
import os
dfg = pd.read_csv('example_soverflow_gaps.csv',low_memory=False)
dfg['date'] = pd.to_datetime(dfg['Period'], format='%Y%m%d')
dfg['Q'] = pd.DatetimeIndex(dfg['date']).to_period('Q')
dfg['year'] = dfg['date'].dt.year
dfg['DiffEq'] = dfg.sort_values(['Q']).groupby(['Firm','year'])['Cumm_Issd'].diff()
dfg['Dif1MV'] = dfg.groupby(['Firm'])['MV'].diff(1)
dfg['Lag4MV'] = dfg.groupby(['Firm'])['MV'].shift(4)
Gapped data:
Full data:
Solved the basic problem by using a merge. First, create a variable that shows the lagged date or quarter. Here we want last year's MV (4 quarters back):
from pandas.tseries.offsets import QuarterEnd
dfg['lagQ'] = dfg['date'] + QuarterEnd(-4)
Then create a data-frame with the keys (Firm and date) and the relevant variable (here MV).
lagset=dfg[['Firm','date', 'MV']].copy()
lagset.rename(columns={'MV':'Lag_MV', 'date':'lagQ'}, inplace=True)
Lastly, merge the new frame into the existing one:
dfg=pd.merge(dfg, lagset, on=['Firm', 'lagQ'], how='left')
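The same merge pattern should, presumably, also work for the quarterly change in market value. An untested sketch that continues from the frames above (the new column name Dif1MV_merge is mine; its values should match the D_MV_Q column in the data):
# Key that points one quarter back
dfg['lagQ1'] = dfg['date'] + QuarterEnd(-1)

# Previous quarter's MV, keyed by Firm and its own date
prevq = dfg[['Firm', 'date', 'MV']].copy()
prevq.rename(columns={'MV': 'Prev_MV', 'date': 'lagQ1'}, inplace=True)

dfg = pd.merge(dfg, prevq, on=['Firm', 'lagQ1'], how='left')

# Quarterly change in MV; quarters without a preceding quarter stay NaN
dfg['Dif1MV_merge'] = dfg['MV'] - dfg['Prev_MV']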
I have many files in a folder that all look like this one:
I'm trying to build a dictionary for the data. I want it to have two keys: the first is the http address and the second is the third field (the plugin used, e.g. adblock). The values refer to different metrics, and my intention is to compute, for each site and plugin, the mean, median and variance of each metric once the dictionary has been built. For example, for the mean, my intention is to consider all the values in the fourth field of the file, and so on. I tried to write some code (shown in the screenshot), but, first of all, I'm not sure that it is correct.
I read other posts but none solved my problem, since they treat only one key, or they don't show how to access the different values inside the dictionary to compute the mean, median and variance.
The problem is simple, assuming the dictionary implementation is okay: in which way must I access the different values for key1: www.google.it -> key2: adblock?
Any kind of help is appreciated, and I'm happy to provide any other information.
You can do what you want using a dictionary, but you should really consider using the Pandas library. This library is centered around tabular data structure called "DataFrame" that excels in column-wise and row-wise calculations such as the one that you seem to need.
To get you started, here is the Pandas code that reads one text file using the read_fwf() method. It also displays the mean and variance for the fourth column:
# import the Pandas library:
import pandas as pd
# Read the file 'table.txt' into a DataFrame object. Assume
# a header-less, fixed-width file like in your example:
df = pd.read_fwf("table.txt", header=None)
# Show the content of the DataFrame object:
print(df)
# Print the fourth column (zero-indexed):
print(df[3])
# Print the mean for the fourth column:
print(df[3].mean())
# Print the variance for the fourth column:
print(df[3].var())
There are different ways of selecting columns and rows from a DataFrame object. The square brackets [ ] in the previous examples selected a column in the data frame by column number. If you want to calculate the mean of the fourth column only from those rows that contain adblock in the third column, you can do it like so:
# Print those rows from the data frame that have the value 'adblock'
# in the third column (zero-indexed):
print(df[df[2] == "adblock"])
# Print only the fourth column (zero-indexed) from that data frame:
print(df[df[2] == "adblock"][3])
# Print the mean of the fourth column from that data frame:
print(df[df[2] == "adblock"][3].mean())
EDIT:
You can also calculate the mean or variance for more than one column at the same time:
# Use a list of column numbers to calculate the mean for all of them
# at the same time:
l = [3, 4, 5]
print(df[l].mean())
END EDIT
If you want to read the data from several files and do the calculations for the concatenated data, you can use the concat() method. This method takes a list of DataFrame objects and concatenates them (by default, row-wise). Use the following line to create a DataFrame from all *.txt files in your directory:
import glob

df = pd.concat([pd.read_fwf(file, header=None) for file in glob.glob("*.txt")],
               ignore_index=True)
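To tie this back to the per-site, per-plugin statistics from the question, here is a rough sketch (assuming, as in your example, that column 0 holds the http address, column 2 the plugin, and columns 3-5 the metrics; the www.google.it/adblock lookup is just the example pair from your question):
# Group by site (column 0) and plugin (column 2), then compute
# mean, median and variance for every metric column at once:
stats = df.groupby([0, 2])[[3, 4, 5]].agg(['mean', 'median', 'var'])
print(stats)

# Looking up a single site/plugin combination, e.g. www.google.it with adblock:
print(stats.loc[('www.google.it', 'adblock')])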