Finding average of every column from CSV file using Python?

I have a CSV file which has several columns and several rows; please see the picture linked below. The picture shows just the first two baskets, but in the original CSV file I have hundreds of them.
Image: https://i.stack.imgur.com/R2ZTo.png
I would like to calculate the average for every Fruit in every Basket using Python. Here is my code, but it doesn't work as it should. Any better ideas? I have also tried to fix this by importing and using numpy, but I didn't succeed with it.
I would appreciate any help or suggestions! I'm totally new to this.
import csv
from operator import itemgetter

fileLineList = []
averageFruitsDict = {}  # Creating an empty dictionary here.

with open('Fruits.csv', newline='') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        fileLineList.append(row)
for column in fileLineList:
    highest = 0
    lowest = 0
    total = 0
    average = 0
    for column in row:
        if column.isdigit():
            column = int(column)
            if column > highest:
                highest = column
            if column < lowest or lowest == 0:
                lowest = column
            total += column
            average = total / 3
    averageFruitsDict[row[0]] = [highest, lowest, round(average)]

averageFruitsList = []
for key, value in averageFruitsDict.items():
    averageFruitsList.append([key, value[2]])
print('\nFruits in Baskets\n')
print(averageFruitsList)
--- So I'm now trying with this code:
import pandas as pd
fruits = pd.read_csv('fruits.csv', sep=';')
print(list(fruits.columns))
fruits['Unnamed: 0'].fillna(method='ffill', inplace = True)
fruits.groupby('Unnamed: 0').mean()
fruits.groupby('Bananas').mean()
fruits.groupby('Apples').mean()
fruits.groupby('Oranges').mean()
fruits.to_csv('results.csv', index=False)
It creates a new CSV file for me and it looks correct, and I don't get any errors, but I can't make it calculate the mean of every fruit for every basket. Thankful for any help!

Using the image you posted and creating an identical test CSV called fruit, I was able to put together this quick solution using pandas.
import pandas as pd
fruit = pd.read_csv('fruit.csv')
The unnamed column contains the basket numbers, with NaNs in between, so we fill each NaN with the preceding value. By doing so we are then able to group by the basket number (using the 'Unnamed: 0' column) and apply the mean to all other columns.
fruit['Unnamed: 0'].fillna(method='ffill', inplace = True)
fruit.groupby('Unnamed: 0').mean()
This gets you your desired output of a fruit average for each basket (please note I made up the values for basket 3).
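A hedged follow-up sketch, not part of the original answer: groupby().mean() returns a new DataFrame rather than modifying fruit in place, so the grouped result must be assigned before it can be written out (which is likely why the asker's results.csv still looked ungrouped). Assuming the file and column names from the question:

import pandas as pd

# Sketch assuming the layout from the question: basket numbers in
# 'Unnamed: 0' (with NaNs in between), one numeric column per fruit.
fruit = pd.read_csv('fruit.csv')
fruit['Unnamed: 0'] = fruit['Unnamed: 0'].ffill()  # fill basket numbers down

# groupby().mean() returns a NEW DataFrame; assign it before saving,
# otherwise to_csv() writes the original, ungrouped data.
averages = fruit.groupby('Unnamed: 0').mean()
averages.to_csv('results.csv')  # keep the index: it holds the basket numbers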

Related

How to use fuzzywuzzy efficiently in python?

I want to compute the similarity of column Aname from dataframe Apple to column Bname from dataframe Banana, and create a new column in dataframe Apple that shows the similarity. My code is as follows:
import pandas as pd
from fuzzywuzzy import process

Bname = []
similarity = []
for i in Apple.Aname:
    ratio = process.extract(i, Banana.Bname, limit=1)
    Bname.append(ratio[0][0])
    similarity.append(ratio[0][1])
Apple['Bname'] = pd.Series(Bname)
Apple['similarity'] = pd.Series(similarity)
However, there are over 400,000 rows in dataframe Apple and over 700,000 rows in dataframe Banana; my code runs for hours and I still haven't gotten a result. How can I do this more efficiently? Or at least, how could I track the progress of my code? Thanks a lot for your great help in advance!
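No answer was recorded for this one, but as a hedged sketch under the question's own names: the rapidfuzz package is a much faster reimplementation of fuzzywuzzy's scorers, and tqdm can show loop progress. The frames (Apple, Banana) and columns (Aname, Bname) below are taken from the question:

from rapidfuzz import process, fuzz  # faster drop-in for fuzzywuzzy scoring
from tqdm import tqdm                # progress bar for long loops

choices = Banana.Bname.tolist()
best_names, best_scores = [], []
for name in tqdm(Apple.Aname):  # tqdm prints a live progress bar
    # extractOne returns (choice, score, index) for list choices
    match, score, _ = process.extractOne(name, choices, scorer=fuzz.WRatio)
    best_names.append(match)
    best_scores.append(score)

Apple['Bname'] = best_names
Apple['similarity'] = best_scores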

Splitting a large pandas datafile based on the data in one column

I have a large-ish csv file that I want to split in to separate data files based on the data in one of the columns so that all related data can be analyzed.
i.e.
name, color, number, state
bob, green, 21, TX
joe, red, 33, TX
sue, blue, 22, NY
...
I'd like to have it put each state's worth of data into its own data sub-file:
df[1] = [bob, green, 21, TX] [joe, red, 33, TX]
df[2] = [sue, blue, 22, NY]
Pandas seems like the best option for this, as the given CSV file is about 500 lines long.
You could try something like:
import pandas as pd

for state, df in pd.read_csv("file.csv").groupby("state"):
    df.to_csv(f"file_{state}.csv", index=False)
Here file.csv is your base file. If it looks like
name,color,number,state
bob,green,21,TX
joe,red,33,TX
sue,blue,22,NY
the output would be 2 files:
file_TX.csv:
name,color,number,state
bob,green,21,TX
joe,red,33,TX
file_NY.csv:
name,color,number,state
sue,blue,22,NY
There are different methods for reading CSV files; you may find an overview at the following link:
https://www.analyticsvidhya.com/blog/2021/08/python-tutorial-working-with-csv-file-for-data-science/
Since you want to work with a dataframe, using pandas is indeed a practical choice. To start, you may do:
import pandas as pd
df = pd.read_csv(r"file_path")
Now let's assume that after these lines, you have the following dataframe:

name   color   number   state
bob    green   21       TX
joe    red     33       TX
sue    blue    22       NY
...    ...     ...      ...
From your question, I understand that you want to dissect the information based on the different states. The state data may be mixed (e.g. TX-NY-TX-DZ-TX), so sorting alphabetically and resetting the index may be the first step:
df = df.sort_values(by=['state'])  # sort_values returns a new frame, so assign it
df.reset_index(drop=True, inplace=True)
Now, there are several methods we may use. From your question, I did not quite follow the df[1] = two lists, df[2] = one list notation; I am assuming you meant a list of lists per state. In that case, let's use the following method:
Method 1 - Making a List of Lists for Different States
First, let's get the state list without duplicates:
s_list = list(dict.fromkeys(df.loc[:, "state"].tolist()))
Now we use a list comprehension:
lol = [[df.iloc[i2, :].tolist() for i2 in range(df.shape[0])
        if state == df.loc[i2, "state"]] for state in s_list]
The lol (list of lists) variable contains x (the number of states) lists, and each inner list holds one or more rows as lists. So you may reach a state by writing lol[0], lol[1], etc.
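For instance, with the three sample rows above (sorted so NY precedes TX), the structure would look like this (illustrative values only):

print(s_list)  # ['NY', 'TX']
print(lol[0])  # [['sue', 'blue', 22, 'NY']]
print(lol[1])  # [['bob', 'green', 21, 'TX'], ['joe', 'red', 33, 'TX']]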
Method 2 - Making Different Dataframes for Different States
In this method, if there are 20 states, we get 20 dataframes, which we may collect in a list. First, we need the state names again:
s_list = list(dict.fromkeys(df.loc[:,"state"].tolist()))
We need to get the row index values (as a list of lists) for the different states. (For example, NY is in rows 3, 6, 7, ...)
r_index = [[i for i in range(df.shape[0])
            if df.loc[i, "state"] == state] for state in s_list]
Let's make a different dataframe for each state (and reset the index):
dfs = [df.loc[rows, :] for rows in r_index]
for df_state in dfs:
    df_state.reset_index(drop=True, inplace=True)
Now you have a list which contains n (the number of states) dataframes. After this point, you may sort the dataframes by name, for example.
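As a hedged aside, pandas' own groupby can build the same per-state frames in one pass, without assembling index lists by hand:

# Equivalent sketch using groupby instead of manual row-index lists:
dfs = [group.reset_index(drop=True) for _, group in df.groupby("state")]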
Method 3 - My Recommendation
Firstly, I would recommend splitting the data based on name, since it is a good identifier, but I am assuming you need to use the state information. I would move the state column into the index and build a nested dictionary:
import pandas as pd

df = pd.read_csv(r"path")
df = df.sort_values(by=['state'])
df.reset_index(drop=True, inplace=True)

# we know state is in column 3
states = list(dict.fromkeys(df.iloc[:, 3].tolist()))
# row positions for each state
rows = [[i for i in range(df.shape[0]) if df.iloc[i, 3] == s] for s in states]
# a per-state running counter (0, 1, 2, ...) used as a second index level
temp = [[i2 for i2 in range(len(rows[i]))] for i in range(len(rows))]
into = [inner for outer in temp for inner in outer]
df.insert(4, "No", into)
df.set_index(pd.MultiIndex.from_arrays([df.iloc[:, no] for no in [3, 4]]), inplace=True)
df.drop(df.columns[[3, 4]], axis=1, inplace=True)
dfs = [df.iloc[row, :] for row in rows]
for i in range(len(dfs)):
    dfs[i] = dfs[i].melt(var_name="app", ignore_index=False).set_index("app", append=True)

def call(df):
    # base case: one index level left, return {column label: value} pairs
    if df.index.nlevels == 1:
        return df.to_dict()[df.columns[0]]
    # recurse one index level at a time, nesting a dict per level
    return {key: call(df_gr.droplevel(0, axis=0)) for key, df_gr in df.groupby(level=0)}

data = {}
for i in range(len(states)):
    data.update(call(dfs[i]))
I may have made some typos, but I hope you understand the idea.
This code gives a nested dictionary where:
the first key is the state (TX, NY, ...),
the next key is the row index within that state (0, 1, 2, ...),
and the last key is name, color or number.
Now that I look back at the number column in the CSV file: if the number column has no duplicates, you may avoid making a new "No" column by using number directly.
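If I have followed the indexing correctly, lookups on the sample data would then read like this (illustrative, assuming the three sample rows above):

print(data["TX"][0]["color"])  # 'green' (first TX row is bob)
print(data["NY"][0]["name"])   # 'sue'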

Finding string contained in CSV file and computing a sum

It's my first time working with pandas, so I am trying to wrap my head around all of its functionalities.
Essentially, I want to download my bank statements in CSV, search for a keyword (e.g. steam), and compute the money I spent.
I was able to use pandas to locate the lines that contain my keyword, but I do not know how to iterate through them and add the cost of each purchase to a variable that I sum up as the iteration grows.
If you look at the image I uploaded, I am able to find the lines containing my keyword in the dataframe, but what I want to do is, for each line found, take the content of col1 and sum it all together.
Attempt At Code
# importing pandas module
import pandas as pd

keyword = input("Enter the keyword you wish to search in the statement: ")

# reading the csv file
df = pd.read_csv('accountactivity.csv', header=None)
dff = df.loc[df[1].str.contains(keyword, case=False)]
value = df.values[68][2]  # fetches the value of a specific cell in the dataframe
print(dff)
print(value)
EDIT:
I essentially was almost able to complete the code I wanted, using only the CSV reader, but I can't get that code to find substrings. It only works if I enter the exact same string, meaning if I enter netflix it doesn't work, I would need to write it exactly as it appears on the statement like NETFLIX.COM _V. Here is another screenshot of that working code. I essentially want to mimic that with the capabilities of just finding substrings.
Working Code using CSV reader
import csv

data = []
with open("accountactivity.csv") as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        data.append(row)

keyword = input("Enter the keyword you wish to search in the statement: ")
col = [x[1] for x in data]
Sum = 0
if keyword in col:
    for x in range(0, len(data)):
        if keyword == data[x][1]:
            PartialSum = float(data[x][2])
            Sum = Sum + PartialSum
            print(data[x][1])
    print("The sum for expenses at ", keyword, " is of: ", Sum, "$", sep='')
else:
    print("Keyword returned no results.")
The format of the CSV is the following:
column 0: Date of transaction
column 1: Name of transaction
column 2: Money spent from account
column 3: Money received to account
The CSV file downloaded directly from my bank has no headers, so I refer to columns by index, e.g. col[0].
Thanks for your help, I will continue meanwhile to look at how to potentially do this.
dff[dff.columns[col_index]].sum()
where col_index is the index of the column you want to sum together.
Thanks everyone for your help. I ended up understanding more about how dataframes work in pandas, and I used dff[dff.columns[2]].sum() (which was suggested to me by Jonny Kong) with the column of interest (in my case column 2, containing my expenses). It computes the sum of my expenses for the searched keyword, which is what I need!
#Importing pandas module
import pandas as pd
#Keyword searched through bank statement
keyword = input("Enter the keyword you wish to search in the statement: ")
#Reading the bank statement CSV file
df = pd.read_csv('accountactivity.csv',header=None)
#Creating dataframe from bank statement with lines that match search keyword
dff=df.loc[df[1].str.contains(keyword,case=False)]
#Sum the column which contains total money spent on the keyword searched
Sum=dff[dff.columns[2]].sum()
#Prints the created dataframe
print("\n",dff,"\n")
#Prints the sum of expenses for the keyword searched
print("The sum for expenses at ",keyword," is of: ",Sum,"$",sep = '')
Again, thanks everyone for helping and supporting me through my first post on SO!

Create a column where values are max of range of another column in python

My Problem
I am trying to create a column in Python where each value is the max of the last 64 rows of another column, i.e. the rolling 64-day high of a stock.
I am currently using the following code, but it is really slow because of the loops. I want to try and re-do it without using loops. The dataset is simply the last closing price of a stock.
Current Working Code
import numpy as np
import pandas as pd

csv1 = pd.read_csv('vod_price.csv', delimiter=',')
df = pd.DataFrame(csv1)

for x in range(1, 65):
    df["3m high"].iloc[x] = df["PX_LAST"].iloc[:(x + 1)].max()
for x in range(65, len(df.index)):
    df["3m high"].iloc[x] = df["PX_LAST"].iloc[(x - 64):(x + 1)].max()
df
Attempt at Solution
I have tried the following, but it just gives me the max of the whole column.
maxrange = df['PX_LAST'].between(df['PX_LAST'].shift(64),df['PX_LAST'])
df['3m high'] = df['PX_LAST'].loc[maxrange].max()
Does anyone know how I might be able to do it?
Cheers
Use Series.rolling:
df["3m high"] = df["PX_LAST"].rolling(64).max()

How to create separate data frames based on row numbers in a loop

I am reading in data from an Excel file, and currently I am breaking it down into several different DataFrames based on row numbers.
What I want to do is create a loop which will iterate over the input row numbers and create different DataFrames with the appropriate suffixes.
Currently I am creating the separate DataFrames by passing in row numbers on each line.
NHE_17 = NHE_data.parse('NHE17')

# Slice DataFrame for only Total National Health Expenditure data,
# from row 0 to 37 (Population): total_nhe
total_nhe = NHE_17.iloc[0:37]
print(total_nhe.iloc[0, -1])

# Slice DataFrame for only Health Consumption Expenditures, from row 38
# to 70 (Total CMS Programs (Medicaid, CHIP and Medicare)): total_hce
total_hce = NHE_17.iloc[38:70]
I want to be able to call a function with the row numbers and suffix to create the specific DataFrame. That function would look like this:
def row_slicer(slice_tuple):
    # Slices NHE_17 according to the given slice parameters.
    # Input: slice_tuple = [x1, x2]
    df_temp = NHE_17.iloc[slice_tuple[0]:slice_tuple[1]]
    return df_temp

dict_dataframes = {}
# assuming this is a dictionary; else you can zip lists with pandas columns
name_list_row = [['total_nhe', [0, 37]], ['total_hce', [38, 70]]]  # ...and so on
for name, slice_tuple in name_list_row:
    df = row_slicer(slice_tuple)
    dict_dataframes[name] = df
Hope this helps!
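A quick usage note: each slice is then retrievable from the dictionary by its name, so no separate variable per slice is needed:

total_nhe = dict_dataframes['total_nhe']  # same frame as NHE_17.iloc[0:37]
print(total_nhe.shape)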
