I know there are some topics on this that say to use the .strip() or .rstrip() methods, but it isn't working for me.
I have a program that appends a new line to a CSV file, but unfortunately it generates a trailing comma...
I have tried to remove it with the .strip() function in Python, but it isn't working. Am I doing something wrong?
This is an example of what happens when I input 'T123' for Bike_No and '05/08/2017' for Purchase_Date:
from datetime import datetime

td = datetime.now()
initial_bike_detaillist = []
deBatt = 100
deKM = 0.00
deDate = str(td)[8:10] + "/" + str(td)[5:7] + "/" + str(td)[0:4]
print("Option 4: Add abicycle \n")
Bike_No = input("Bike No. :")
Purchase_Date = str(input("Purchase Date:"))
initial_bike_detaillist = [str(Bike_No), str(Purchase_Date), str(deBatt), str(deDate), str(deKM)]  # because there is no write function for int
filename = "Assignment_Data1.csv"
file = open(filepath + filename, "a")
file.write("\n")
for k in initial_bike_detaillist:
    file.write("{},".format(k))
print("Bicycle ({}) has been created".format(Bike_No))
file.close()
file = open(filepath + filename, "r")
for line in file:
    line.strip()
    print(line)
Expected output:
Bike No.,Purchase Date,Batt %,Last Maintenance,KM since Last
T101,10/04/2016,55,10/01/2017,25.08
T102,01/07/2016,10,15/05/2017,30.94
T103,15/11/2016,94,13/06/2017,83.16
T104,25/04/2017,58,10/01/2017,25.08
T105,24/05/2017,5,20/06/2017,93.80
T123,04/04/2017,100,05/08/2017,0.0
Actual output:
Bike No.,Purchase Date,Batt %,Last Maintenance,KM since Last
T101,10/04/2016,55,10/01/2017,25.08
T102,01/07/2016,10,15/05/2017,30.94
T103,15/11/2016,94,13/06/2017,83.16
T104,25/04/2017,58,10/01/2017,25.08
T105,24/05/2017,5,20/06/2017,93.80
T123,04/04/2017,100,05/08/2017,0.0,
Instead of these lines:
for k in initial_bike_detaillist:
    file.write("{},".format(k))
use the following line:
file.write(','.join(initial_bike_detaillist))
Your code:
from datetime import datetime

td = datetime.now()
initial_bike_detaillist = []
deBatt = 100
deKM = 0.00
deDate = str(td)[8:10] + "/" + str(td)[5:7] + "/" + str(td)[0:4]
print("Option 4: Add abicycle \n")
Bike_No = input("Bike No. :")
Purchase_Date = str(input("Purchase Date:"))
initial_bike_detaillist = [str(Bike_No), str(Purchase_Date), str(deBatt), str(deDate),
                           str(deKM)]  # because there is no write function for int
filename = "Assignment_Data1.csv"
file = open(filepath + filename, "a")  # filepath is assumed to be defined earlier in your program
file.write("\n")
# for k in initial_bike_detaillist:
#     file.write("{},".format(k))
file.write(','.join(initial_bike_detaillist))  # use this line instead
print("Bicycle ({}) has been created".format(Bike_No))
file.close()
file = open(filepath + filename, "r")
for line in file:
    # line.strip()  # then this line is no longer needed
    print(line)
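As a side note beyond this answer, Python's built-in csv module writes the commas and the line terminator for you, so no trailing comma can appear. A minimal sketch, assuming the same filepath, filename and initial_bike_detaillist as above (Python 3):

import csv

with open(filepath + filename, "a", newline="") as f:
    writer = csv.writer(f)
    # writerow joins the fields with commas and terminates the line itself
    writer.writerow(initial_bike_detaillist)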
I tried to extract data from a text file (Cisco switch logs) and convert it to CSV so I can create a table, sort the data, and create graphs from it. Here is my code:
import pandas as pd
import csv
import time
from datetime import datetime
import os
import glob
import sys
pathh = glob.glob("C:\\Users\\Taffy R. Mantang\\Desktop\\PR logs\\*\\")
# This part of the code opens every text file named ISW-1.txt inside the PR logs folder
for x in pathh:
    # Detect the line number in the text file to know where the row begins
    phrase = "Shelf Panel CPUID Power CPU(5s) CPU(1m) CPU(5m) Peak PhyMem FreeMem Mem"
    file = open("{0}".format(x) + "\\ISW-1.txt")
    for number, line in enumerate(file):
        if phrase in line:
            sh_pro = number
            break
    file.close()

    # Convert the text file to CSV from the row determined earlier
    with open("{0}".format(x) + '\\ISW-1.txt', 'r') as rf:
        r = csv.reader(rf, skipinitialspace=True, delimiter=' ')
        rows = list(r)
        heada = rows[sh_pro]
        heada.insert(0, " ")
        print(heada)
        # to mark the last row
        skipprocessor = sh_pro + 4
        for i in range(7):
            if i == 0:
                print(rows[skipprocessor + i])
            if i == 2:
                sub_heada = rows[skipprocessor + i]
                sub_heada.insert(0, " ")
                sub_heada.insert(1, " ")
                sub_heada.insert(2, " ")
                print(rows[skipprocessor + i])
            if i == 4:
                sub_heada = rows[skipprocessor + i]
                sub_heada.insert(0, " ")
                sub_heada.insert(1, " ")
                sub_heada.insert(2, " ")
                print(rows[skipprocessor + i])
            if i == 6:
                sub_heada = rows[skipprocessor + i]
                sub_heada.insert(0, " ")
                sub_heada.insert(1, " ")
                sub_heada.insert(2, " ")
                print(rows[skipprocessor + i])
Previously it worked and printed the output successfully. However, while I was experimenting with exporting the output to an Excel table, I suddenly got this error:
Traceback (most recent call last):
File "C:\Users\Taffy R. Mantang\PycharmProjects\pythonProject\main.py", line 26, in
heada = rows[sh_pro]
NameError: name 'sh_pro' is not defined
I traced back and undid everything, but it still gives the same error.
I tried removing an indent on line 26; then it managed to print(heada), but it messed up the if/else code below it and didn't print out the rest.
What exactly is the problem? Help :'''((
sh_pro is not defined because the condition if phrase in line: is never hit. I would suggest:
for number, line in enumerate(file):
    if phrase in line:
        sh_pro = number
        break
file.close()

# Convert the text file to CSV from the row determined earlier
with open("{0}".format(x) + '\\ISW-1.txt', 'r') as rf:
    r = csv.reader(rf, skipinitialspace=True, delimiter=' ')
    rows = list(r)
    try:
        heada = rows[sh_pro]
    except NameError:
        pass  # error handling goes here
For sh_pro to be declared, the condition if phrase in line: in your for loop has to be True at least once. If it never returns True, the interpreter never sees the name sh_pro. You can modify your code so that sh_pro is declared before you start working with it.
for number, line in enumerate(file):
    if phrase in line:
        sh_pro = number
        break
file.close()
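One way to make that concrete, as a sketch rather than part of the original answer (x, phrase and rows refer to the variables in the question's code), is to give sh_pro a sentinel value before the search and only use it when the phrase was actually found:

sh_pro = None  # sentinel: stays None if the phrase is never found
with open("{0}".format(x) + "\\ISW-1.txt") as file:
    for number, line in enumerate(file):
        if phrase in line:
            sh_pro = number
            break

if sh_pro is None:
    # the header phrase was not in this file; report it and skip the rest
    print("Header phrase not found in " + x)
else:
    heada = rows[sh_pro]  # only index rows once we know sh_pro exists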
The code below reads the data.txt file and prints the records it contains.
text_file = open("data.txt", "r")
lines = text_file.readlines()
print (lines)
print (lines)
text_file.close()
def print_all_records(records):
    print("Date" + "\t\t" + "Branch" + "\t\t" + "Daily Sale" + "\t\t" + "Transactions")
    for record in records:
        parts = record.split(",")
        print(parts[0] + "\t" + parts[1] + "\t" + "$" + parts[2] + "\t\t" + parts[3])
Example of the information in the data.txt file:
1-2-2014,Frankton,42305.67,23
12-4-2014,Glenview,21922.22,17
10-2-2015,Glenview,63277.9,32
How do I make it so that I can query the records by date? For example, if a user inputs the date 1 2 2014, it should search the data.txt file to see whether that date exists and, if so, print that record's line. If it doesn't find anything, it should ask the user to try again, until it finds a date that matches a record.
I'm assuming that you use Python 3.
def print_entries(date):
    """Prints all the entries that match with date"""
    with open('a.txt', 'r') as f:  # 'a.txt' stands for your data file, e.g. 'data.txt'
        flag = False
        content = f.readlines()
        content = [line.strip('\n').split(',') for line in content]
        for row in content:
            if row[0] == date:
                flag = True
                print(*row, sep='\t')
    if not flag:
        print('Try again')
    return flag

while not print_entries(input("Enter date :")):
    pass
If you're using Python 2, replace print(*row, sep = '\t') with print('\t'.join(row)).
Running the program -
Enter date :12-4-2014
12-4-2014 Glenview 21922.22 17
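The question mentions the user typing the date as 1 2 2014 while the file stores 1-2-2014. If that is the case, one small addition (not part of the original answer) is to normalize the input before comparing:

# Turn "1 2 2014" into "1-2-2014" so it matches the file's format.
while not print_entries('-'.join(input("Enter date :").split())):
    pass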
I have a file named config whose fields are separated by a space (" "):
cat config
/home/user1 *.log,*.txt 30
/home/user2 *.trm,*.doc,*.jpeg 10
I want to read this file, parse each line, and print each field from each line.
Ex:-
Dir = /home/user1
Fileext = *.log,*.txt
days=30
I couldn't get any further than the code below...
def dir():
    file = open('config','r+')
    cont = file.readlines()
    print "file contents are %s" % cont
    for i in range(len(cont)):
        j = cont[i].split(' ')

dir()
Any pointers on how to move further?
Your code is fine; you are just missing the last step of processing each element of the split string. Try this:
def dir():
    file = open('config','r+')
    cont = file.readlines()
    print "file contents are %s" % cont + '\n'
    elements = []
    for i in range(len(cont)):
        rowElems = cont[i].split(' ')
        elements.append({ 'dir' : rowElems[0], 'ext' : rowElems[1], 'days' : rowElems[2] })
    for e in elements:
        print "Dir = " + e['dir']
        print "Fileext = " + e['ext']
        print "days = " + e['days']

dir()
At the end of this code, you will have all the rows processed and stored in an array of dictionaries you can easily access later.
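A side note beyond the original answer: if you want to use those dictionaries outside the function, you could return the list instead of only printing it. A minimal sketch (read_config is a hypothetical name; the file format is the one from the question):

def read_config(path='config'):
    # Parse each "dir ext days" line into a list of dictionaries.
    entries = []
    with open(path) as f:
        for line in f:
            parts = line.split()  # split on any whitespace
            if len(parts) >= 3:
                entries.append({'dir': parts[0], 'ext': parts[1], 'days': parts[2]})
    return entries

config = read_config()
print(config[0]['dir'])  # e.g. /home/user1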
You can write a custom function to parse each line, and then use the map function to apply that function against each line in file.readlines():
def parseLine(line):
    # function to split and parse each line,
    # and return the formatted string
    Dir, FileExt, Days = line.split(' ')[:3]
    return 'Dir = {}\nFileext = {}\nDays = {}'.format(Dir, FileExt, Days)

def dir():
    with open('config','r+') as file:
        print 'file contents are\n' + '\n'.join(map(parseLine, file.readlines()))
Results:
>>> dir()
file contents are
Dir = /home/user1
Fileext = *.log,*.txt
Days = 30
Dir = /home/user2
Fileext = *.trm,*.doc,*.jpeg
Days = 10
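A note not in the original answers: they use Python 2 print statements. On Python 3, print is a function and map returns a lazy iterator, so the same idea could look roughly like this (parse_line is a hypothetical helper mirroring parseLine above):

def parse_line(line):
    d, ext, days = line.split()[:3]
    return 'Dir = {}\nFileext = {}\nDays = {}'.format(d, ext, days)

with open('config') as f:
    lines = [line for line in f if line.strip()]  # skip blank lines
    print('file contents are\n' + '\n'.join(map(parse_line, lines)))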
Essentially what I am attempting to do is read 'n' lines from a file and then write them to a separate file. This program should take a file that has 100 lines and split it into 50 separate files.
def main():
    from itertools import islice
    userfile = raw_input("Please enter the file you wish to open\n(must be in this directory): ")
    file1 = open(userfile, "r+")
    #print "Name: ", file1.name
    #print "Closed or not", file1.closed
    #print "Opening mode: ", file1.mode
    #print "Softspace flag: ", file1.softspace
    jcardtop = file1.read(221)
    #print jcardtop
    n = 2
    count = 0
    while True:
        next_n_lines = list(islice(file1, n))
        print next_n_lines
        count = count + 1
        fileout = open(str(count) + ".txt", "w+")
        fileout.write(str(jcardtop))
        fileout.write(str(next_n_lines))
        fileout.close()
        break
        if not next_n_lines:
            break
I do have the file printing as well to show what is in the variable next_n_lines.
['\n', "randomtext' more junk here\n"]
I would like it instead to look like
randomtext' more junk here
Is this a limitation of the islice function? Or am I missing a portion of the syntax?
Thanks for your time!
Where you call str() or print, you want to ''.join(next_n_lines) instead:
print ''.join(next_n_lines)
and
fileout.write(''.join(next_n_lines))
You can store the flattened string in a variable if you don't want to call join twice.
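For context, here is a sketch of how the join fits into the original loop; it also moves the emptiness check above the unconditional break, which would otherwise stop after a single file (file1, n, count and jcardtop are the names from the question):

while True:
    next_n_lines = list(islice(file1, n))
    if not next_n_lines:  # stop once the input file is exhausted
        break
    count = count + 1
    fileout = open(str(count) + ".txt", "w+")
    fileout.write(jcardtop)               # the header read earlier
    fileout.write(''.join(next_n_lines))  # plain text, not the list repr
    fileout.close()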
Did you mean something like this?
f = open(userfile,"r")
start = 4
n_lines = 100
for line in f.readlines()[start:(start + n_lines)]:
    print line
    #do stuff with line
or maybe this rough, yet effective code:
f = open(userfile,"r")
start = 4
end = start + 100
count = start
while count != end:
    for line in f.readlines()[count:(count + 2)]:
        fileout = open(str(count) + ".txt", "w+")
        fileout.write(str(line))
        fileout.close()
    count = count + 2
I'm in trouble here. I need to read a .txt file that contains a sequence of records, check the records, and copy the ones I want to a new file.
The file content is like this (this is just an example, the original file has more than 30 000 lines):
AAAAA|12|120 #begin file
00000|46|150 #begin register
03000|TO|460
99999|35|436 #end register
00000|46|316 #begin register
03000|SP|467
99999|33|130 #end register
00000|46|778 #begin register
03000|TO|478
99999|33|457 #end register
ZZZZZ|15|111 #end file
The records that begin with 03000 and have the characters 'TO' must be written to a new file. Based on the example, the file should look like this:
AAAAA|12|120 #begin file
00000|46|150 #begin register
03000|TO|460
99999|35|436 #end register
00000|46|778 #begin register
03000|TO|478
99999|33|457 #end register
ZZZZZ|15|111 #end file
Code:
file = open("file.txt",'r')
newFile = open("newFile.txt","w")
content = file.read()
file.close()
# here I need to check whether a 03000 record with the characters 'TO' exists; if it does, copy the whole register (from 00000 to 99999) to the new file.
I did multiple searches and found nothing to help me.
Thank you!
with open("file.txt",'r') as inFile, open("newFile.txt","w") as outFile:
outFile.writelines(line for line in inFile
if line.startswith("03000") and "TO" in line)
If you need the previous and the next line, then you have to iterate inFile in triads. First define:
def gen_triad(lines, prev=None):
    after = current = next(lines)
    for after in lines:
        yield prev, current, after
        prev, current = current, after
And then do like before:
outFile.writelines(''.join(triad) for triad in gen_triad(inFile)
                   if triad[1].startswith("03000") and "TO" in triad[1])
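To see what gen_triad yields, here is a quick illustration on a few hypothetical lines (for inspection only, not part of the original answer):

lines = iter(['a\n', 'b\n', 'c\n', 'd\n'])
for triad in gen_triad(lines):
    print(triad)
# (None, 'a\n', 'b\n')
# ('a\n', 'b\n', 'c\n')
# ('b\n', 'c\n', 'd\n')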
import re

pat = ('^00000\|\d+\|\d+.*\n'
       '^03000\|TO\|\d+.*\n'
       '^99999\|\d+\|\d+.*\n'
       '|'
       '^AAAAA\|\d+\|\d+.*\n'
       '|'
       '^ZZZZZ\|\d+\|\d+.*')

rag = re.compile(pat, re.MULTILINE)

with open('fifi.txt','r') as f,\
     open('newfifi.txt','w') as g:
    g.write(''.join(rag.findall(f.read())))
For files with additional lines between lines beginning with 00000, 03000 and 99999, I didn't find simpler code than this one:
import re

pat = ('(^00000\|\d+\|\d+.*\n'
       '(?:.*\n)+?'
       '^99999\|\d+\|\d+.*\n)'
       '|'
       '(^AAAAA\|\d+\|\d+.*\n'
       '|'
       '^ZZZZZ\|\d+\|\d+.*)')
rag = re.compile(pat, re.MULTILINE)

pit = ('^00000\|.+?^03000\|TO\|\d+.+?^99999\|')
rig = re.compile(pit, re.DOTALL | re.MULTILINE)

def yi(text):
    for g1, g2 in rag.findall(text):
        if g2:
            yield g2
        elif rig.match(g1):
            yield g1

with open('fifi.txt','r') as f,\
     open('newfifi.txt','w') as g:
    g.write(''.join(yi(f.read())))
file = open("file.txt",'r')
newFile = open("newFile.txt","w")
content = file.readlines()
file.close()
newFile.writelines(filter(lambda x:x.startswith("03000") and "TO" in x,content))
This seems to work. The other answers seem to only be writing out records that contain '03000|TO|' but you have to write out the record before and after that as well.
import sys
# ---------------------------------------------------------------
# ---------------------------------------------------------------
# import file
file_name = sys.argv[1]
file_path = 'C:\\DATA_SAVE\\pick_parts\\' + file_name
file = open(file_path,"r")
# ---------------------------------------------------------------
# create output files
output_file_path = 'C:\\DATA_SAVE\\pick_parts\\' + file_name + '.out'
output_file = open(output_file_path,"w")
# create output files
# ---------------------------------------------------------------
# process file
temp = ''
temp_out = ''
good_write = False
bad_write = False
for line in file:
    if line[:5] == 'AAAAA':
        temp_out += line
    elif line[:5] == 'ZZZZZ':
        temp_out += line
    elif good_write:
        temp += line
        temp_out += temp
        temp = ''
        good_write = False
    elif bad_write:
        bad_write = False
        temp = ''
    elif line[:5] == '03000':
        if line[6:8] != 'TO':
            temp = ''
            bad_write = True
        else:
            good_write = True
            temp += line
            temp_out += temp
            temp = ''
    else:
        temp += line
output_file.write(temp_out)
output_file.close()
file.close()
Output:
AAAAA|12|120 #begin file
00000|46|150 #begin register
03000|TO|460
99999|35|436 #end register
00000|46|778 #begin register
03000|TO|478
99999|33|457 #end register
ZZZZZ|15|111 #end file
Does it have to be Python? These shell commands would do the same thing in a pinch.
head -1 inputfile.txt > outputfile.txt
grep -C 1 "03000|TO" inputfile.txt >> outputfile.txt
tail -1 inputfile.txt >> outputfile.txt
# Whenever I have to parse text files I prefer to use regular expressions
# You can also customize the matching criteria if you want to
import re

what_is_being_searched = re.compile("^03000.*TO")

# don't use "file" as a variable name since it is (was?) a builtin
# function
with open("file.txt", "r") as source_file, open("newFile.txt", "w") as destination_file:
    for this_line in source_file:
        if what_is_being_searched.match(this_line):
            destination_file.write(this_line)
and for those who prefer a more compact representation:
import re

with open("file.txt", "r") as source_file, open("newFile.txt", "w") as destination_file:
    destination_file.writelines(this_line for this_line in source_file
                                if re.match("^03000.*TO", this_line))
Code:
fileName = '1'
fil = open(fileName,'r')
import string

## step 1: parse the file.
parsedFile = []
for i in fil:
    ## tuple1 = (1,2,3)
    firstPipe = i.find('|')
    secondPipe = i.find('|', firstPipe + 1)
    tuple1 = (i[:firstPipe],
              i[firstPipe + 1:secondPipe],
              i[secondPipe + 1:i.find('\n')])
    parsedFile.append(tuple1)
fil.close()

## search criteria:
searchFirst = '03000'
searchString = 'TO'  ## can be changed if and when required

## step 2: use the parsed contents to write the new file
filout = open('newFile','w')
stringToWrite = parsedFile[0][0] + '|' + parsedFile[0][1] + '|' + parsedFile[0][2] + '\n'
filout.write(stringToWrite)  ## to write the first entry
for i in range(1, len(parsedFile)):
    if parsedFile[i][1] == searchString and parsedFile[i][0] == searchFirst:
        for j in range(-1, 2, 1):
            stringToWrite = parsedFile[i + j][0] + '|' + parsedFile[i + j][1] + '|' + parsedFile[i + j][2] + '\n'
            filout.write(stringToWrite)
stringToWrite = parsedFile[-1][0] + '|' + parsedFile[-1][1] + '|' + parsedFile[-1][2] + '\n'
filout.write(stringToWrite)  ## to write the last entry
filout.close()
I know that this solution may be a bit long, but it is quite easy to understand and it seems an intuitive way to do it. I have already checked it with the data that you provided and it works perfectly.
Please tell me if you need some more explanation of the code; I will gladly add it.
The tips (from Beasley and Joran elyase) are very interesting, but they only capture the contents of the 03000 line. I would like to get the contents of the whole block, from the 00000 line to the 99999 line.
I even managed to do it here, but I am not satisfied; I wanted to make it cleaner.
Here is how I did it:
file = open(url,'r')
newFile = open("newFile.txt",'w')
lines = file.readlines()
file.close()

i = 0
lineTemp = []
for line in lines:
    lineTemp.append(line)
    if line[0:5] == '03000':
        state = line[21:23]
    if line[0:5] == '99999':
        if state == 'TO':
            newFile.writelines(lineTemp)
        else:
            lineTemp = []
        i = i + 1
newFile.close()
Suggestions...
Thanks to all!
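For the record, a cleaner block-buffering variant might look like the sketch below. It assumes the field layout from the sample data (register code in the first '|'-separated field, 'TO' in the second), and copy_to_registers is just an illustrative name:

def copy_to_registers(in_path, out_path):
    # Buffer each 00000..99999 register and write it only if it
    # contains a 03000|TO line; the AAAAA/ZZZZZ lines pass straight through.
    with open(in_path) as src, open(out_path, 'w') as dst:
        block = []
        keep = False
        for line in src:
            code = line.split('|')[0]
            if code in ('AAAAA', 'ZZZZZ'):
                dst.write(line)  # begin/end of file
            else:
                block.append(line)
                if code == '03000' and line.split('|')[1] == 'TO':
                    keep = True
                if code == '99999':  # end of register
                    if keep:
                        dst.writelines(block)
                    block = []
                    keep = False

copy_to_registers('file.txt', 'newFile.txt')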