Python output every nth row without Pandas - python

I'm really new to Python and my task is to rewrite a CSV with Python. I managed to program a working script for my task already. Now I would like to get only every 10th row of the CSV as output.
Is there an easy way to do this?
I already tried to use Jason Reek's answer.
Now it works, thank you!
import csv
import sys

userInputFileName = sys.argv[1]
outPutFileSkipped = userInputFileName.split('.')[0] + '-Skipped.csv'
cnt = 0
with open(outPutFileSkipped, 'w', newline='') as outputCSV:
    csv_reader_object_skipped = csv.reader((x.replace('\0', '') for x in open(userInputFileName)), delimiter=',')
    csv_writer_object_skipped = csv.writer(outputCSV, delimiter=',')
    for row, line in enumerate(csv_reader_object_skipped):
        if row % 10 == 0:
            print(line)
            csv_writer_object_skipped.writerow(line)
            cnt += 1
print('Successfully formatted ' + str(cnt) + ' rows!')

Here's a native way to do it without pandas:
import csv

with open('file.csv', 'r') as f:
    reader = csv.reader(f)
    for row, line in enumerate(reader):
        # Depending on your reference point you may want to + 1 to row
        # to get every 10th row.
        if row % 10 == 0:
            print(line)

There's an easy way with Pandas:
import pandas as pd
df = pd.DataFrame({"a": range(100), "b": range(100, 200)})
df.loc[::10]
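Another stdlib-only option is itertools.islice, whose step argument picks every nth item without a manual counter. This is just a sketch; the in-memory sample data stands in for a real file:

```python
import csv
import io
from itertools import islice

# Hypothetical sample data standing in for a real CSV file.
data = "\n".join("row{},{}".format(i, i) for i in range(25))

reader = csv.reader(io.StringIO(data))
# Step of 10 yields the rows at indices 0, 10, 20, ...
every_tenth = list(islice(reader, 0, None, 10))
print(every_tenth)
```

Because islice consumes the reader lazily, this never holds the whole file in memory.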


Extract two columns sorted from CSV

I have a large csv file, containing multiple values, in the form
Date,Dslam_Name,Card,Port,Ani,DownStream,UpStream,Status
2020-01-03 07:10:01,aart-m1-m1,204,57,302xxxxxxxxx,0,0,down
I want to extract the Dslam_Name and Ani values, sort them by Dslam_name and write them to a new csv in two different columns.
So far my code is as follows:
import csv
import operator

with open('bad_voice_ports.csv') as csvfile:
    readCSV = csv.reader(csvfile, delimiter=',')
    sortedlist = sorted(readCSV, key=operator.itemgetter(1))
    for row in sortedlist:
        bad_port = row[1][:4], row[4][2::]
        print(bad_port)
        f = open("bad_voice_portsnew20200103SORTED.csv", "a+")
        f.write(row[1][:4] + " " + row[4][2::] + '\n')
        f.close()
But my Dslam_Name and Ani values are kept in the same column.
As a next step I would like to count how many times the same value appears in the 1st column.
You are forcing them to be a single column. Joining the two into a single string means Python no longer regards them as separate.
But try this instead:
import csv
import operator

with open('bad_voice_ports.csv') as readfile, open('bad_voice_portsnew20200103SORTED.csv', 'w') as writefile:
    readCSV = csv.reader(readfile)
    writeCSV = csv.writer(writefile)
    for row in sorted(readCSV, key=operator.itemgetter(1)):
        bad_port = row[1][:4], row[4][2::]
        print(bad_port)
        writeCSV.writerow(bad_port)
If you want to include the number of times each key occurred, you can easily include that in the program, too. I would refactor slightly to separate the reading and the writing.
import csv
import operator
from collections import Counter

with open('bad_voice_ports.csv') as readfile:
    readCSV = csv.reader(readfile)
    rows = []
    counts = Counter()
    for row in readCSV:
        rows.append([row[1][:4], row[4][2::]])
        counts[row[1][:4]] += 1

with open('bad_voice_portsnew20200103SORTED.csv', 'w') as writefile:
    writeCSV = csv.writer(writefile)
    for row in sorted(rows):
        print(row)
        writeCSV.writerow([counts[row[0]]] + row)
I would recommend removing the header line from the CSV file entirely; throwing it away (or separating it out and prepending it back) should be an easy change if you want to keep it.
(Also, hard-coding input and output file names is problematic; maybe have the program read them from sys.argv[1:] instead.)
So my suggestion is fairly simple. As I stated in a previous comment, there is good documentation on CSV reading and writing in Python here: https://realpython.com/python-csv/
As an example, to read the columns you need from a CSV you can simply do this:
>>> file = open('some.csv', mode='r')
>>> csv_reader = csv.DictReader(file)
>>> for line in csv_reader:
...     print(line["Dslam_Name"] + " " + line["Ani"])
...
This would return:
aart-m1-m1 302xxxxxxxxx
Now you can just as easily create a variable and store the column values there and later write them to a file, or just open up a new file while reading lines and write the column values in there. I hope this helps you.
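As a sketch of that idea (column names from the question's header; io.StringIO objects stand in for the real input and output files here), the two selected columns can go straight into a csv.writer:

```python
import csv
import io

# Hypothetical in-memory stand-ins for the input and output files.
source = io.StringIO(
    "Date,Dslam_Name,Card,Port,Ani,DownStream,UpStream,Status\n"
    "2020-01-03 07:10:01,aart-m1-m1,204,57,302xxxxxxxxx,0,0,down\n"
)
dest = io.StringIO()

reader = csv.DictReader(source)
writer = csv.writer(dest)
for line in reader:
    # Keep only the two columns of interest, one per output column.
    writer.writerow([line["Dslam_Name"], line["Ani"]])

print(dest.getvalue())
```

With real files you would replace the StringIO objects with open(...) calls (remember newline='' for the output file on Python 3).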
After the help from @tripleee and @marxmacher my final code is:
import csv
import operator
from collections import Counter

with open('bad_voice_ports.csv') as csv_file:
    readCSV = csv.reader(csv_file, delimiter=',')
    sortedlist = sorted(readCSV, key=operator.itemgetter(1))
    line_count = 0
    rows = []
    counts = Counter()
    for row in sortedlist:
        Dslam = row[1][:4]
        Ani = row[4][2:]
        if line_count == 0:
            print(row[1], row[4])
            line_count += 1
        else:
            rows.append([row[1][:4], row[4][2::]])
            counts[row[1][:4]] += 1
            print(Dslam, Ani)
            line_count += 1
    for row in sorted(rows):
        f = open("bad_voice_portsnew202001061917.xls", "a+")
        f.write(row[0] + '\t' + row[1] + '\t' + str(counts[row[0]]) + '\n')
        f.close()
print('Total of Bad ports =', str(line_count-1))
This way the desired values/columns are extracted from the initial CSV file and a new XLS file is generated with the values stored in separate columns; the occurrences per key are counted, along with the total number of entries.
Thanks for all the help, please feel free for any improvement suggestions!
You can use sorted:
import csv

_h, *data = csv.reader(open('filename.csv'))
with open('new_csv.csv', 'w') as f:
    write = csv.writer(f)
    write.writerows([_h, *sorted([(i[1], i[4]) for i in data], key=lambda x: x[0])])

Python: Effective reading from a file using csv module

I have just started learning csv module recently. Suppose we have this CSV file:
John,Jeff,Judy,
21,19,32,
178,182,169,
85,74,57,
And we want to read this file and create a dictionary containing names (as keys) and totals of each column (as values). So in this case we would end up with:
d = {"John" : 284, "Jeff" : 275, "Judy" : 258}
So I wrote this code which apparently works well, but I am not satisfied with it and was wondering if anyone knows of a better or more efficient/elegant way of doing this. Because there are just too many lines in there :D (Or maybe a way we could generalize it a bit - i.e. for when we don't know how many fields there are.)
d = {}
import csv
with open("file.csv") as f:
    readObject = csv.reader(f)
    totals0 = 0
    totals1 = 0
    totals2 = 0
    totals3 = 0
    currentRowTotal = 0
    for row in readObject:
        currentRowTotal += 1
        if currentRowTotal == 1:
            continue
        totals0 += int(row[0])
        totals1 += int(row[1])
        totals2 += int(row[2])
        if row[3] == "":
            totals3 += 0
f.close()
with open(filename) as f:
    readObject = csv.reader(f)
    currentRow = 0
    for row in readObject:
        while currentRow <= 0:
            d.update({row[0] : totals0})
            d.update({row[1] : totals1})
            d.update({row[2] : totals2})
            d.update({row[3] : totals3})
            currentRow += 1
return(d)
f.close()
Thanks very much for any answer :)
Not sure if you can use pandas, but you can get your dict as follows:
import pandas as pd
df = pd.read_csv('data.csv')
print(dict(df.sum()))
Gives:
{'Jeff': 275, 'Judy': 258, 'John': 284}
Use the top row to figure out what the column headings are. Initialize a dictionary of totals based on the headings.
import csv

with open("file.csv") as f:
    reader = csv.reader(f)
    titles = next(reader)
    while titles[-1] == '':
        titles.pop()
    num_titles = len(titles)
    totals = { title: 0 for title in titles }
    for row in reader:
        for i in range(num_titles):
            totals[titles[i]] += int(row[i])
print(totals)
Let me add that you don't have to close the file after the with block. The whole point of with is that it takes care of closing the file.
Also, let me mention that the data you posted appears to have four columns:
John,Jeff,Judy,
21,19,32,
178,182,169,
85,74,57,
That's why I did this:
while titles[-1] == '':
    titles.pop()
It's a little dirty, but try this (operating without the empty last column):
#!/usr/bin/python
import csv
from functools import reduce  # needed on Python 3, where reduce is no longer a builtin
import numpy

with open("file.csv") as f:
    reader = csv.reader(f)
    headers = next(reader)
    sums = reduce(numpy.add, [list(map(int, x)) for x in reader], [0]*len(headers))
for name, total in zip(headers, sums):
    print("{}'s total is {}".format(name, total))
Based on Michael's solution, I would try it with less code, fewer variables and no dependency on NumPy:
import csv
from functools import reduce  # needed on Python 3

with open("so.csv") as f:
    reader = csv.reader(f)
    titles = next(reader)
    # Assumes there is no empty trailing column, as in the previous answer.
    sum_result = reduce(lambda x, y: [int(a)+int(b) for a, b in zip(x, y)], list(reader))
    print(dict(zip(titles, sum_result)))
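A transpose-based sketch avoids reduce entirely: zip(*reader) turns the remaining rows into columns, which can then be summed per title. This assumes the question's layout, including the empty trailing column, which is filtered out by its empty title:

```python
import csv
import io

# In-memory stand-in for the question's file (note the trailing empty column).
data = "John,Jeff,Judy,\n21,19,32,\n178,182,169,\n85,74,57,\n"

reader = csv.reader(io.StringIO(data))
titles = next(reader)
# zip(*reader) transposes the rows into columns; skip the empty trailing column.
d = {title: sum(int(v) for v in column)
     for title, column in zip(titles, zip(*reader))
     if title != ''}
print(d)
```

This also generalizes to any number of fields, since nothing is hard-coded per column.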

Error while editing many CSV files

I asked this question a few days ago because I needed help with editing some CSV files: Fix numbering on CSV files that have deleted lines.
The people of Stack Overflow helped me out a great deal, however I keep getting an error saying AttributeError: 'int' object has no attribute 'strip'. The problem is that not all the information in my CSV files is strings. Being the Python newbie that I am, a few days of trying to fix it only made things worse. Here is what I have from my previous question that gives me the error:
import csv
import glob
import os
import re

numbered = re.compile(r'N\d+').match
fns = glob.glob('*.csv')  # fns was not defined in the snippet; this is one plausible definition
for fn in fns:
    # open for counting
    reader = csv.reader(open(fn, "rb"))
    count = sum(1 for row in reader if row and not any(r.strip() == 'DIF' for r in row) and numbered(row[0]))
    # reopen for filtering
    reader = csv.reader(open(fn, "rb"))
    with open(os.path.join('out', fn), 'wb') as f:
        counter = 0
        w = csv.writer(f)
        for row in reader:
            if row and 'Count' in row[0].strip():
                row = ['Count', count]
            if row and not any(r.strip() == 'DIF' for r in row):  # remove DIF
                if numbered(row[0]):
                    counter += 1
                    row[0] = 'N%d' % counter
                w.writerow(row)
The code is essentially supposed to run through a bunch of CSV files and delete all the lines that have 'DIF' in them and fix the numbering due to deleted lines. Does anyone have any suggestions?
Easiest might be to wrap r in str(). But at the same time, why not read the file in just once? It makes things easier:
import csv
import glob
import os
import re

numbered = re.compile(r'N\d+').match
fns = glob.glob('*.csv')  # fns was not defined in the question; this is one plausible definition
for fn in fns:
    reader = csv.reader(open(fn, "rb"))
    # filter out 'DIF' rows here
    rows = [row for row in reader
            if not any(str(r).strip() == 'DIF'
                       for r in row)]
    # count numbered rows
    count = len([row for row in rows if row and numbered(row[0])])
    with open(os.path.join('out', fn), 'wb') as f:
        counter = 0
        w = csv.writer(f)
        for row in rows:
            if row and 'Count' in row[0].strip():
                row = ['Count', count]
            if row and numbered(row[0]):
                counter += 1
                row[0] = 'N%d' % counter
            w.writerow(row)

How to find specific rows in csv document in python

What I'm trying to do is read into a csv document and find all values in the SN column > 20 and make a new file with only the rows with SN > 20.
I know that I need to do:
Read the original File
Open a new file
Iterate over rows of the original file
What I've been able to do is find the rows that have a value of SN > 20
import csv
import os

os.chdir("C:\Users\Robert\Documents\qwe")
with open("gdweights_feh_robert_cmr.csv", 'rb') as f:
    reader = csv.reader(f, delimiter=',')
    zerovar = 0
    for row in reader:
        if zerovar == 0:
            zerovar = zerovar + 1
        else:
            sn = row[11]
            zerovar = zerovar + 1
            x = float(sn)
            if x > 20:
                print x
So my question is how do I take the rows with SN > 20 and turn it into a new file?
Save the data in a list, then write the list to a file.
import csv
import os

os.chdir(r"C:\Users\Robert\Documents\qwe")
output_ary = []
with open("gdweights_feh_robert_cmr.csv", 'rb') as f:
    reader = csv.reader(f, delimiter=',')
    zerovar = 0
    for row in reader:
        if zerovar == 0:
            zerovar = zerovar + 1
        else:
            sn = row[11]
            zerovar = zerovar + 1
            x = float(sn)
            if x > 20:
                print x
                output_ary.append(row)
with open("output.csv", 'w') as f2:
    for row in output_ary:
        for item in row:
            f2.write(item + ",")
        f2.write("\n")  # end each record on its own line
In the code, the reading/looping through the rows is quite complex. It could be cleaned up (and run faster in Python) with something like the following:
with open('gdweights_feh_robert_cmr.csv', 'rb') as f:
    reader = csv.reader(f, delimiter=',')
    next(reader)  # skip the header row
    output_ary = [row for row in reader if float(row[11]) > 20]
Using a list comprehension ([row for row in reader if ...]) is optimised in Python, so it will perform more efficiently. AND... you avoid having to build an intermediate list by hand, which reduces the memory required, also very handy if the CSV file is large.
You can then proceed to write out the outout_ary as suggested in the other answers.
Hope this helps!
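Putting the pieces together, here is a sketch of the whole filter-and-write step using csv.writer instead of manual comma handling (the column layout follows the question; io.StringIO objects stand in for the real files):

```python
import csv
import io

# Hypothetical stand-in for the question's file: 11 padding columns plus SN.
source = io.StringIO("a,b,c,d,e,f,g,h,i,j,k,SN\n" +
                     "\n".join("x,x,x,x,x,x,x,x,x,x,x,%d" % sn for sn in (5, 25, 40)))
dest = io.StringIO()

reader = csv.reader(source)
writer = csv.writer(dest)
writer.writerow(next(reader))   # copy the header row through unchanged
for row in reader:
    if float(row[11]) > 20:     # SN is the twelfth column (index 11)
        writer.writerow(row)

print(dest.getvalue())
```

csv.writer takes care of delimiters and line endings, so no trailing commas end up in the output.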

How can I get a specific field of a csv file?

I need a way to get a specific item (field) of a CSV. Say I have a CSV with 100 rows and 2 columns (comma separated). First column emails, second column passwords. For example I want to get the password of the email in row 38. So I need only the item from the 2nd column, row 38...
Say I have a csv file:
aaaaa#aaa.com,bbbbb
ccccc#ccc.com,ddddd
How can I get only 'ddddd' for example?
I'm new to the language and tried some stuff with the csv module, but I don't get it...
import csv

mycsv = csv.reader(open(myfilepath))
for row in mycsv:
    text = row[1]
Following the comments to the SO question here, a better, more robust version would be:
import csv

with open(myfilepath, 'rb') as f:
    mycsv = csv.reader(f)
    for row in mycsv:
        text = row[1]
        ............
Update: If what the OP actually wants is the last string in the last row of the csv file, there are several approaches that don't necessarily need csv. For example,
fulltxt = open(mifilepath, 'rb').read()
laststring = fulltxt.split(',')[-1]
This is not good for very big files because you load the complete text into memory, but it could be OK for small files. Note that laststring could include a newline character, so strip it before use.
And finally if what the OP wants is the second string in line n (for n=2):
Update 2: This is now the same code as the one in the answer from J.F. Sebastian. (The credit is his):
import csv

line_number = 2
with open(myfilepath, 'rb') as f:
    mycsv = csv.reader(f)
    mycsv = list(mycsv)
    text = mycsv[line_number][1]
    ............
#!/usr/bin/env python
"""Print a field specified by row, column numbers from given csv file.

USAGE:
    %prog csv_filename row_number column_number
"""
import csv
import sys

filename = sys.argv[1]
row_number, column_number = [int(arg, 10)-1 for arg in sys.argv[2:]]
with open(filename, 'rb') as f:
    rows = list(csv.reader(f))
    print rows[row_number][column_number]
Example
$ python print-csv-field.py input.csv 2 2
ddddd
Note: list(csv.reader(f)) loads the whole file in memory. To avoid that you could use itertools:
import itertools
# ...
with open(filename, 'rb') as f:
    row = next(itertools.islice(csv.reader(f), row_number, row_number+1))
    print row[column_number]
import csv

def read_cell(x, y):
    with open('file.csv', 'r') as f:
        reader = csv.reader(f)
        y_count = 0
        for n in reader:
            if y_count == y:
                cell = n[x]
                return cell
            y_count += 1

print(read_cell(4, 8))
This example prints cell 4, 8 in Python 3.
There is an interesting point you need to catch about the csv.reader() object: it is not of list type, and it is not subscriptable.
This works:
for r in csv.reader(file_obj):  # file not closed
    print r
This does not:
r = csv.reader(file_obj)
print r[0]
So, you first have to convert to list type in order to make the above code work.
r = list(csv.reader(file_obj))
print r[0]
Finally I got it!!!
import csv

def select_index(index):
    csv_file = open('oscar_age_female.csv', 'r')
    csv_reader = csv.DictReader(csv_file)
    for line in csv_reader:
        l = line['Index']
        if l == index:
            print(line[' "Name"'])

select_index('11')
"Bette Davis"
The following may be what you are looking for:
import pandas as pd
df = pd.read_csv("table.csv")
print(df["Password"][row_number])
#where row_number is 38 maybe
import csv

inf = csv.reader(open('yourfile.csv', 'r'))
for row in inf:
    print row[1]
