Currently my code changes the 3rd column, but it does so for every row; I want it to change only the row matching the GTIN the user entered.
Current code:
file = open("stock.csv")
stockfile = csv.reader(file)
for line in stockfile:
    if GTIN in line:
        currentstock = line[2]
        targetstock = line[3]
        newstock = (int(currentstock) - int(Quantity))
        currentstock = str(currentstock)
        targetstock = str(targetstock)
        newstock = str(newstock)
        if newstock < targetstock:
            import csv
            reader = csv.reader(open('stock.csv', "r"))
            new = csv.writer(open('out.csv', "w"))
            for line in reader:
                new.writerow([line[0], line[1], newstock, line[3]])
Output in file (it changes all numbers in 3rd column):
86947367,banana,1,40
78364721,apple,1,20
35619833,orange,1,30
84716491,sweets,1,90
46389121,chicken,1,10
How can I only change the row with the GTIN the user enters?
Use the csv module:
https://docs.python.org/3/library/csv.html
It has a csv.reader() and csv.writer(). Read the file into memory, iterate over it doing calcs/replacements, then write each row to a new list. Finally, generate a new data file to replace the old one.
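For instance, a minimal sketch applied to your stock file might look like this (assuming GTIN and Quantity come from input(), and the column layout from your sample data, where column 0 is the GTIN and column 2 is the current stock):

import csv

GTIN = input('Enter GTIN: ')
Quantity = input('Enter quantity sold: ')

# read the whole file into memory
with open('stock.csv') as f:
    rows = list(csv.reader(f))

# update only the row whose first column matches the entered GTIN
for row in rows:
    if row[0] == GTIN:
        row[2] = str(int(row[2]) - int(Quantity))

# write every row back out, changed or not
with open('out.csv', 'w', newline='') as f:
    csv.writer(f).writerows(rows)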
I answered one of your other questions before (you were using csv.reader), but it looks like that question got deleted. The principle is the same, though. As I said in one of the comments, I don't think you should keep reopening/rereading stock.txt. Just read it line by line, then write line by line to an output file:
stock_number = input('Enter the stock number: ')
new_item = input('Enter item to add to above stock listing: ')

lines = []
with open('stock.txt', 'r') as infile:
    for line in infile:
        lines.append(line)

# can call this 'output.txt' if you don't want to overwrite original
with open('stock.txt', 'w') as outfile:
    for line in lines:
        if stock_number in line:
            # strip newline, add new item, add newline
            line = '{},{}\n'.format(line.strip(), new_item)
        outfile.write(line)
Edit: here it is with the csv module instead. This makes it a little more straightforward, because the csv module gives you a list of strings for each line, which you can add to or modify as desired. Then you can just write the list back line by line, without worrying about newlines or delimiters.
import csv

stock_number = input('Enter the stock number: ')
new_item = input('Enter item to add to above stock listing: ')

lines = []
with open('stock.txt', 'r') as infile:
    for line in csv.reader(infile):
        lines.append(line)

# change this to 'stock.txt' to overwrite original file
with open('output.txt', 'w') as outfile:
    writer = csv.writer(outfile)
    for line in lines:
        if stock_number in line:
            line.append(new_item)
        writer.writerow(line)
Also you shouldn't really import anything in the middle of the code like that. Imports generally go at the top of your file.
I'm trying to write a script that will search for specific data in multiple report files and write it into columns of a single csv.
The report file lines I'm looking for aren't always at the same line number, so I'm searching for the data associated with the lines below:
Estimate file: pog_example.bef
Estimate ID: o1_p1
61078 (100.0%) estimated.
And I want to write the data from each text file into columns in a csv as below:
example.bef, o1_p1, 61078 (100.0%) estimated
So far I have this script, which will pick out the first of my criteria, but I can't figure out how to extend the loop to find the second and third lines and populate the second and third columns:
from glob import glob
import fileinput
import csv

with open('percentage_estimated.csv', 'w', newline='') as est_report:
    writer = csv.writer(est_report)
    for line in fileinput.input(glob('*.bef*')):
        if 'Estimate file' in line:
            writer.writerow([line.split('pog_')[1].strip()])
I'm pretty new to python so any help would be appreciated!
I think I see what you're trying to do, but I'm not sure.
I think your BEF file might look something like this:
a line
another line
Estimate file: pog_example.bef
Estimate ID: o1_p1
61078 (100.0%) estimated.
still more lines
If that's true, then once you find a line with 'Estimate file', you need to take control from the for-loop and start manually iterating the lines because you know which lines are coming up.
This is a very simple example script which opens my mock BEF file (above) and automatically iterates the lines till it finds 'Estimate file'. From there it processes each line specifically, using next(bef_file) to iterate to the next line, expecting them to have the correct text:
import csv

all_rows = []

bef_file = open('input.bef')
for line in bef_file:
    if 'Estimate file' in line:
        fname = line.split('pog_')[1].strip()

        line = next(bef_file)
        est_id = line.split('Estimate ID:')[1].strip()

        line = next(bef_file)
        value = line.strip()

        row = [fname, est_id, value]
        all_rows.append(row)

        break  # stop iterating lines in this file

csv_out = open('output.csv', 'w', newline='')
writer = csv.writer(csv_out)

writer.writerow(['File name', 'Est ID', 'Est Value'])
writer.writerows(all_rows)
When I run that I get this for output.csv:
File name,Est ID,Est Value
example.bef,o1_p1,61078 (100.0%) estimated.
If there are blank lines in your data between the lines you care about, manually step over them with next(bef_file) statements.
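For example (a sketch, assuming the same bef_file handle):

line = next(bef_file)     # step over one known blank line
while not line.strip():   # or keep stepping until a non-blank line appears
    line = next(bef_file)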
In case anyone wants to see what finally worked for me:
from glob import glob
import csv

all_rows = []
with open('percentage_estimated.csv', 'w', newline='') as bef_report:
    writer = csv.writer(bef_report)
    writer.writerow(['File name', 'Est ID', 'Est Value'])
    for file in glob('*.bef*'):
        with open(file, 'r') as f:
            for line in f:
                if 'Estimate file' in line:
                    fname = line.split('pog_')[1].strip()
                    line = next(f)
                    est_id = line.split('Estimate ID:')[1].strip()
                    # step over the intervening lines to reach the value
                    for _ in range(7):
                        line = next(f)
                    value = line.strip()
                    row = [fname, est_id, value]
                    all_rows.append(row)
                    break
    writer.writerows(all_rows)
I am new to data processing with the csv module. I have an input file, and I am using this code:
import csv

path1 = "C:\\Users\\apple\\Downloads\\Challenge\\raw\\charity.a.data"
csv_file_path = "C:\\Users\\apple\\Downloads\\Challenge\\raw\\output.csv.bak"

with open(path1, 'r') as in_file:
    in_file.__next__()
    stripped = (line.strip() for line in in_file)
    lines = (line.split(":$%:") for line in stripped if line)
    with open(csv_file_path, 'w') as out_file:
        writer = csv.writer(out_file)
        writer.writerow(('id', 'donor_id', 'last_name', 'first_name', 'year', 'city', 'state', 'postal_code', 'gift_amount'))
        writer.writerows(lines)
Is it possible to remove the ':' in the first and last columns of the csv file? I want the output without those colons. Please help me.
If you just want to eliminate the ':' in the first and last columns, this should work. Keep in mind that your dataset should be tab-separated (or separated by something other than a comma) before you read it because, as I commented on your question, there are commas ',' inside your data.
path1 = '/path/input.csv'
path2 = '/path/output.csv'

with open(path1, 'r') as input, open(path2, 'w') as output:
    file = iter(input.readlines())
    output.write(next(file))
    for row in file:
        output.write(row[1:][:-2] + '\n')
Update
After you shared your code, I made a small change so the whole process runs from the initial file. The idea is the same: drop the stray characters at the start and end of each line, so instead of line.strip() you should have line.strip()[1:][:-2].
import csv

path1 = "C:\\Users\\apple\\Downloads\\Challenge\\raw\\charity.a.data"
csv_file_path = "C:\\Users\\apple\\Downloads\\Challenge\\raw\\output.csv.bak"

with open(path1, 'r') as in_file:
    in_file.__next__()
    stripped = (line.strip()[1:][:-2] for line in in_file)
    lines = (line.split(":$%:") for line in stripped if line)
    with open(csv_file_path, 'w') as out_file:
        writer = csv.writer(out_file)
        writer.writerow(('id', 'donor_id', 'last_name', 'first_name', 'year', 'city', 'state', 'postal_code', 'gift_amount'))
        writer.writerows(lines)
This is my code. I am able to print each line, but when a blank line appears it prints ';' because of the CSV file format, and I want to skip blank lines:
import csv
import time

ifile = open("C:\Users\BKA4ABT\Desktop\Test_Specification\RDBI.csv", "rb")

empty_lines = 0  # initialize the counter before the loop
for line in csv.reader(ifile):
    if not line:
        empty_lines += 1
        continue
    print line
If you want to skip all whitespace lines, use the str.isspace() test.
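For example, you can filter the raw lines before they reach the csv reader (a sketch based on your loop, with the path shortened and the ';' delimiter borrowed from the other answers here):

import csv

with open('RDBI.csv') as f:
    # hand the csv reader only the lines that aren't pure whitespace
    reader = csv.reader((line for line in f if not line.isspace()), delimiter=';')
    for row in reader:
        print(row)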
Since you may want to do something more complicated than just printing the non-blank lines to the console (there is no need to use the csv module for that), here is an example that involves a DictReader:
#!/usr/bin/env python
# Tested with Python 2.7

# I prefer this style of importing - hides the csv module
# in case you do from this_file.py import * inside of __init__.py
import csv as _csv


# Real comments are more complicated ...
def is_comment(line):
    return line.startswith('#')


# Kind of silly wrapper
def is_whitespace(line):
    return line.isspace()


def iter_filtered(in_file, *filters):
    for line in in_file:
        if not any(fltr(line) for fltr in filters):
            yield line


# A disadvantage of this approach is that it requires storing rows in RAM.
# However, the largest CSV files I worked with were all under 100 Mb.
def read_and_filter_csv(csv_path, *filters):
    with open(csv_path, 'rb') as fin:
        iter_clean_lines = iter_filtered(fin, *filters)
        reader = _csv.DictReader(iter_clean_lines, delimiter=';')
        return [row for row in reader]


# Stores all processed lines in RAM
def main_v1(csv_path):
    for row in read_and_filter_csv(csv_path, is_comment, is_whitespace):
        print(row)  # Or do something else with it


# Simpler, less refactored version, does not use with
def main_v2(csv_path):
    try:
        fin = open(csv_path, 'rb')
        reader = _csv.DictReader((line for line in fin if not
                                  line.startswith('#') and not line.isspace()),
                                 delimiter=';')
        for row in reader:
            print(row)  # Or do something else with it
    finally:
        fin.close()


if __name__ == '__main__':
    csv_path = "C:\Users\BKA4ABT\Desktop\Test_Specification\RDBI.csv"
    main_v1(csv_path)
    print('\n' * 3)
    main_v2(csv_path)
Instead of
if not line:
This should work:
if not ''.join(line).strip():
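In the context of the question's loop, that would look like this (a sketch, with the path shortened):

import csv

with open('RDBI.csv') as f:
    for line in csv.reader(f, delimiter=';'):
        # skip rows whose cells are all empty or whitespace
        if not ''.join(line).strip():
            continue
        print(line)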
My suggestion would be to just use the csv reader, which can split the file into rows. That way you can check whether a row is empty and, if so, just continue:
import csv

with open('some.csv', 'r') as csvfile:
    # the delimiter depends on how your CSV separates values
    csvReader = csv.reader(csvfile, delimiter='\t')
    for row in csvReader:
        # check if row is empty
        if not row:
            continue
You can always check the number of comma-separated values; it is efficient and avoids string juggling.
When reading the lines iteratively, each line is parsed into a list of comma-separated values, so you get a list object. If it has no elements (a blank line), you can skip it:
import csv

with open(filename) as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=",")
    for row in csv_reader:
        if len(row) == 0:
            continue
You can strip leading and trailing whitespace, and if the length is zero after that the line is empty.
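Here is a minimal sketch of that check on raw text lines (the any(row) version that follows works on parsed csv rows instead):

with open('userlist.csv') as f:
    for line in f:
        if len(line.strip()) == 0:
            continue  # blank once the whitespace is stripped
        # process the non-blank line here
        print(line.strip())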
import csv

with open('userlist.csv') as f:
    reader = csv.reader(f)
    user_header = next(reader)  # add this line if there is a header row
    user_list = []  # create a new user list for input

    for row in reader:
        if any(row):  # pick up only the non-blank rows
            print(row)  # just for verification
            user_list.append(row)  # compose all the rest of the data into the list
This example just prints the data in array form while skipping the empty lines:
import csv

file = open("data.csv", "r")
data = csv.reader(file)

for line in data:
    if line: print line

file.close()
I find it much clearer than the other provided examples.
import csv

ifile = csv.reader(open('C:\Users\BKA4ABT\Desktop\Test_Specification\RDBI.csv', 'rb'), delimiter=';')
for line in ifile:
    if set(line).pop() == '':
        pass
    else:
        for cell_value in line:
            print cell_value
Rather than appending to the end of a file, I am trying to append to the end of a certain line of a .csv file.
I want to do this when the user enters an input that matches the first column of the .csv.
Here's an example:
file=open("class"+classno+".csv", "r+")
writer=csv.writer(file)
data=csv.reader(file)
for row in data:
if input == row[0]:
(APPEND variable TO ROW)
file.close()
Is there a way to do this? Would I have to redefine and then rewrite the file?
You can read the whole file, then change what you need and write it back to the file (it's not really writing back; it's a complete overwrite).
Maybe this example will help:
read_data = []
with open('test.csv', 'r') as f:
    for line in f:
        read_data.append(line)

with open('test.csv', 'w') as f:
    for line in read_data:
        key, value = line.split(',')
        new_line = line
        if key == 'b':
            value = value.strip() + 'added\n'
            new_line = ','.join([key, value])
        f.write(new_line)
My test.csv file at start:
key,value
a,1
b,2
c,3
d,4
And after I run that sample code:
key,value
a,1
b,2added
c,3
d,4
It's probably not the best solution with big files.
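As a sketch of a lower-memory variant (not from the original answer: it streams rows through a temporary file using Python's tempfile and os.replace, assuming the same test.csv):

import csv
import os
import tempfile

with open('test.csv', newline='') as src, \
        tempfile.NamedTemporaryFile('w', newline='', delete=False, dir='.') as tmp:
    writer = csv.writer(tmp)
    for row in csv.reader(src):
        if row and row[0] == 'b':
            row[1] += 'added'  # same edit as above, applied on the fly
        writer.writerow(row)

os.replace(tmp.name, 'test.csv')  # swap the new file into place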