Convert csv column into list using pandas - python

I'm currently working on a project that takes a csv list of student names who attended a meeting, and converts it into a list (later to be compared to full student roster list, but one thing at a time). I've been looking for answers for hours but I still feel stuck. I've tried using both pandas and the csv module. I'd like to stick with pandas, but if it's easier in the csv module that works too. CSV file example and code below.
The file is autogenerated by our video call software- so the formatting is a little weird.
Attendance.csv
see sample as image, I can't insert images yet
Code:
import pandas

data = pandas.read_csv("2A Attendance Report.csv", header=3)
AttendanceList = data['A'].to_list()
print(str(AttendanceList))
However, this is raising KeyError: 'A'
Any help is really appreciated, thank you!!!

As seen in the sample image, the column headers are already in the first row of the file. Hence you need to remove header=3 from your read_csv call: either replace it with header=0 or don't pass any explicit header value at all.
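For example, something along these lines should work. Since the image isn't visible here, the student-name column header is unknown; "Name" below is only a placeholder, so check data.columns for the real one:
import pandas as pd

# Let pandas take the column names from the first row of the file
# (the default behaviour when no header argument is passed).
data = pd.read_csv("2A Attendance Report.csv")

# Check what the detected column names actually are first.
print(data.columns.tolist())

# "Name" is a placeholder -- replace it with whichever header the
# print above shows for the student-name column.
attendance_list = data["Name"].to_list()
print(attendance_list)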

Related

Python to_csv: my csv file data is separated and pushed back

I'm saving my pd.DataFrame with
"""df.to_csv('df.csv', encoding='utf-8-sig)"""
my csv file has a problem...
Please see the rows that have content2-1, content2-2, and content2-3 in this pic.
Before saving (to_csv) there was no problem: all the data was in the right columns and 'content2' was not separated. But after df -> csv...
'content2' is split apart, and the other values for 'id2' are allocated to the wrong columns.
"2018-04-21" has to be in column D, the 0s in columns E, F, G, and the url must be in column I.
Why does this happen? Is it because of the large csv file (774,740 KB), because of the language (Korean), or because csv cannot recognize the enter key? (All the data with problems, such as content2, was separated on the enter key.)
How can I resolve this? I have no idea.
Unfortunately I never figured out the reason for this. I assumed it was something to do with the size of the data I was working with and Excel not liking it.
What worked for me, though, was using .to_excel() instead of to_csv(). I know it's far from a perfect answer, but I thought I'd put it here in case it is enough for your case.
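For reference, the swap itself is just this (a minimal sketch; df is the DataFrame from the question, and writing .xlsx requires an Excel engine such as openpyxl to be installed):
# Workaround described above: write to Excel instead of CSV.
df.to_excel('df.xlsx', index=False)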

Reading and writing an Excel sheet using pandas in Python: whether to use append or concat, and what method?

I'm writing a small script that reads an episode's id from an Excel sheet and fills in its corresponding series name. Here's an example of the Excel sheet that would be used as input.
My script reads the "tconst" value, uses it to find the corresponding episode on IMDb, gets the page title, and uses that to find the name of the series:
import pandas as pd
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re

dataset_loc = 'C:\\Users\\Ghandy\\Documents\\Datasets\\Episodes with over 1k ratings 2020+Small.xlsx'
dataset = pd.read_excel(dataset_loc)

for tconst in dataset['tconst']:
    url = 'https://www.imdb.com/title/{}/'.format(tconst)
    soup = BeautifulSoup(urlopen(url), features="lxml")
    dataset = dataset.append({"Name": re.findall(r'"([^"]*)"', soup.title.get_text())[0]}, ignore_index=True)

dataset.to_excel(dataset_loc, index=False)
I have a few problems with this code. First, python keeps telling me not to use append and to use concat instead, but all the answers on Google and Stack Overflow give examples with append, and I don't know how to use concat exactly.
Second, my data is being appended into a completely new and empty row, not next to the original data where I want it, so in this example I would get "The Mandalorian" at row 4 instead of row 2.
And finally, third, I want to know whether it's better to add the data one value at a time or to put it all in a temporary list variable and then add it all at the same time, and how would I go about doing that with concat?
I can't really say what your problem with append and concat is -- everyone says use append and you use append as well; do you want to use concat instead? Here is a post on the difference between concat and append.
append adds whole rows; you might want to use .at instead?
I would say this depends on how much data you already have and how much you are going to add. To have less overhead and copying around I would prefer to add directly to the dataframe, but if there is a lot happening between the url call and the adding to the df, the collected version could be better.
Thanks to @Stimmot's suggestion to use .at, the code now looks like this:
for index, tconst in enumerate(dataset['tconst']):
    url = 'https://www.imdb.com/title/{}/'.format(tconst)
    soup = BeautifulSoup(urlopen(url), features="lxml")
    dataset.at[index, 'Name'] = re.findall(r'"([^"]*)"', soup.title.get_text())[0]

dataset.to_excel(dataset_loc)
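For the third part of the question (collecting the values first and then adding them in one go), a rough sketch built on the question's own setup (dataset and dataset_loc as defined above, same imports) could look like this; assigning the collected list as a whole column keeps each name next to its tconst, and pd.concat along axis=1 would do the same if the names were wrapped in their own DataFrame:
names = []
for tconst in dataset['tconst']:
    url = 'https://www.imdb.com/title/{}/'.format(tconst)
    soup = BeautifulSoup(urlopen(url), features="lxml")
    # Collect the scraped series names first...
    names.append(re.findall(r'"([^"]*)"', soup.title.get_text())[0])

# ...then attach them all at once as a new column.
dataset['Name'] = names
dataset.to_excel(dataset_loc, index=False)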

python/pandas : Pandas changing the value adding extra digits in values [duplicate]

I have a csv file containing numerical values such as 1524.449677. There are always exactly 6 decimal places.
When I import the csv file (and other columns) via pandas read_csv, the column automatically gets the datatype object. My issue is that the values are shown as 2470.6911370000003 which actually should be 2470.691137. Or the value 2484.30691 is shown as 2484.3069100000002.
This seems to be a datatype issue in some way. I tried to explicitly provide the data type when importing via read_csv by giving the dtype argument as {'columnname': np.float64}. Still the issue did not go away.
How can I get the values imported and shown exactly as they are in the source csv file?
Pandas uses a dedicated decimal-to-binary converter that compromises accuracy in favour of speed.
Passing float_precision='round_trip' to read_csv fixes this.
Check out this page for more detail on this.
After processing your data, if you want to save it back to a csv file, you can pass float_format="%.nf" to the corresponding method.
A full example:
import pandas as pd
df_in = pd.read_csv(source_file, float_precision='round_trip')
df_out = ... # some processing of df_in
df_out.to_csv(target_file, float_format="%.3f") # for 3 decimal places
I realise this is an old question, but maybe this will help someone else:
I had a similar problem, but couldn't quite use the same solution. Unfortunately the float_precision option only exists when using the C engine and not with the python engine. So if you have to use the python engine for some other reason (for example because the C engine can't deal with regex literals as delimiters), this little "trick" worked for me:
In the pd.read_csv arguments, define dtype='str' and then convert your dataframe to whatever dtype you want, e.g. df = df.astype('float64').
Bit of a hack, but it seems to work. If anyone has any suggestions on how to solve this in a better way, let me know.
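A small sketch of that workaround, assuming a regex delimiter that forces the python engine (the file name, separator, and column name below are placeholders, not from the question):
import pandas as pd

# Placeholder file, separator, and column name -- adjust for your data.
# The regex separator forces the python engine, where float_precision
# is not available, so everything is read as strings first...
df = pd.read_csv('data.csv', sep=r';\s*', engine='python', dtype='str')

# ...and the numeric columns are cast explicitly afterwards.
df['value_col'] = df['value_col'].astype('float64')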

How to find duplicates in a csv with python, and then alter the row

For a little background this is the csv file that I'm starting with. (the data is nonsensical and only used for proof of concept)
Jackson,Thompson,jackson.thompson#hotmail.com,test,
Luke,Wallace,luke.wallace#lycos.com,test,
David,Wright,david.wright#hotmail.com,test,
Nathaniel,Butler,nathaniel.butler#aol.com,test,
Eli,Simpson,noah.simpson#hotmail.com,test,
Eli,Mitchell,eli.mitchell#aol.com,,test2
Bob,Test,bob.test#aol.com,test,
What I am attempting to do with this csv on a larger scale is: if the first value in a row is duplicated, take the data from the second entry, append it to the row with the first instance of that value, and remove the duplicate row. For example, in the data above "Eli" appears twice; the first instance has "test" after the email value, while the second instance of "Eli" has nothing there and instead has another value in the next index over.
I would want it to go from this:
Eli,Simpson,noah.simpson#hotmail.com,test,,
Eli,Mitchell,eli.mitchell#aol.com,,test2
To this:
Eli,Simpson,noa.simpson#hotmail.com,test,test2
I have been able to successfully import this csv into my code using what is below.
import csv

f = open(r'C:\Projects\Python\Test.csv', 'r')
csv_f = csv.reader(f)
test_list = []
for row in csv_f:
    test_list.append(row[0])

print(test_list)
At this point I was able to import my csv, and put the first names into my list. I'm not sure how to compare the indexes to make the changes I'm looking for. I'm a python rookie so any help/guidance would be greatly appreciated.
If you want to use pandas you could use the pandas .drop_duplicates() method. An example would look something like this:
import pandas as pd

data = pd.read_csv(r'C:\a file with addresses')
data.drop_duplicates(subset=['thing_to_drop'], keep='first', inplace=False)
See the pandas documentation: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html
I am kind of a newbie in Python as well, but I would suggest using DictReader and looking at the csv file as a sequence of dictionaries, meaning every row is a dictionary.
This way you can iterate through the names easily.
Second, I would suggest keeping a list of names you have already seen as you iterate through the file, so you can check whether a name is already known, for example
name_list.append("eli")
then check if "eli" in name_list:
and add the key/value to the first occurrence.
I don't know if this is best practice so don't roast me guys, but this is a simple and quick solution.
This will help you practice iterating through lists and dictionaries as well.
Here is a helpful link for reading about csv handling.
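A rough sketch of that DictReader idea follows. The sample file has no header row, so the field names below are made up for the sketch, and the merge rule simply copies any non-empty value from a duplicate row into the first row seen for that name:
import csv

# Field names are assumptions; the sample csv has no header row.
fieldnames = ['first', 'last', 'email', 'flag1', 'flag2']

merged = {}  # first name -> first row seen with that name
with open(r'C:\Projects\Python\Test.csv', newline='') as f:
    for row in csv.DictReader(f, fieldnames=fieldnames):
        name = row['first']
        if name in merged:
            # Duplicate: fill in any fields that are empty in the first row.
            for key in fieldnames:
                if not merged[name][key]:
                    merged[name][key] = row[key]
        else:
            merged[name] = row

with open('merged.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writerows(merged.values())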

Python or bash: Merging two csv files based on several matching field values, formatting, then outputting CSV

My preference would be for this to be in Python since I am working on learning more. If you can provide help in bash that would still be helpful, though.
I've looked around Stack Overflow and found some helpful things but not enough for me to finish this.
I have two CSV files with some shared fields. The data is not INT. I would like to join based on matching 3 specific fields and write it out to a new output.csv when all the processing is done.
sourceA.csv looks like this:
fieldname_1,fieldname_2,fieldname_3,fieldname_4,fieldname_5,fieldname_6,fieldname_7,fieldname_8,fieldname_9,fieldname_10,fieldname_11,fieldname_12,fieldname_13,fieldname_14,fieldname_15,fieldname_16
sourceB.csv looks like this:
fieldname_4,fieldname_5,fieldname_OTHER,fieldname_8,fieldname_16
As you can see, sourceB.csv has 4 field names that are also in sourceA.csv and one field name that does not. The data in fieldname_OTHER will need to replace the data in sourceA[fieldname_6].
The whole process should go like this:
Replace data in sourceA[fieldname_6] with data from sourceB[fieldname_OTHER] if all of the following criteria are met:
data in sourceA[fieldname_4]=sourceB[fieldname_4]
data in sourceA[fieldname_8]=sourceB[fieldname_8]
data in sourceA[fieldname_16]=sourceB[fieldname_16]
(The data in sourceB[fieldname_5] does not need to be evaluated.)
If the above criteria aren't met, just replace sourceA[fieldname_6] with the text ANY.
Write each processed line out to output.csv.
A sample of what I would like the output to be based on the input CSVs and processing outlined above:
dataA,dataB,dataC,dataD,dataE,dataOTHER,dataG,dataH,dataI,dataJ,dataK,dataL,dataM,dataN,dataO,dataP
I hope the details I've provided haven't made it more confusing than it needs to be. Thank you for all your help!
I'm not sure I'd bother with SQL for a one-off merger like this. It's straightforward in python.
Read in both files with the csv module, to get two lists. Index sourceA into a dictionary whose key is the tuple of fields that need to be matched. You can then loop over sourceB, find the matching row instantly, and merge into it from sourceB.
When you're done, you can just output the list you read from sourceA: the dict and the list point to the same values, which you've now updated.
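A minimal sketch of that approach with csv.DictReader, assuming both files start with the header rows shown above (the field names come straight from the question):
import csv

# Read both files with the csv module.
with open('sourceA.csv', newline='') as fa:
    reader_a = csv.DictReader(fa)
    a_fields = reader_a.fieldnames
    a_rows = list(reader_a)

with open('sourceB.csv', newline='') as fb:
    b_rows = list(csv.DictReader(fb))

# Index sourceA by the tuple of fields that have to match.
index = {(r['fieldname_4'], r['fieldname_8'], r['fieldname_16']): r for r in a_rows}

# Default: rows that never match get the literal text ANY.
for r in a_rows:
    r['fieldname_6'] = 'ANY'

# Loop over sourceB and merge into the matching sourceA row.
for r in b_rows:
    key = (r['fieldname_4'], r['fieldname_8'], r['fieldname_16'])
    if key in index:
        index[key]['fieldname_6'] = r['fieldname_OTHER']

# The dict and the list point at the same row objects, so writing the
# list out now includes the merged values.
with open('output.csv', 'w', newline='') as fo:
    writer = csv.DictWriter(fo, fieldnames=a_fields)
    writer.writeheader()
    writer.writerows(a_rows)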
