I'm trying to do something simple but having issues.
I want to read in a file and export each word to a different column in an Excel spreadsheet. I have the spreadsheet portion working; I'm just having a hard time with what should be the simple part.
What's happening instead is that each character is placed on a new line.
I have a file called server_list. That file has contents as shown below.
Linux RHEL64 35
Linux RHEL78 24
Linux RHEL76 40
I want to read each line in the file and assign each word a variable so I can output it to the spreadsheet.
File = open("server_list", "r")
FileContent = File.readline()
for Ser, Ver, Up in FileContent:
    worksheet.write(row, col, Ser)
    worksheet.write(row, col + 1, Ver)
    worksheet.write(row, col + 2, Up)
    row += 1
I'm getting the following error for this example
Traceback (most recent call last):
File "excel.py", line 47, in <module>
for Files, Ver, Uptime in FileContent:
ValueError: not enough values to unpack (expected 3, got 1)
FileContent is a string object that contains a single line of your file:
Out[4]: 'Linux RHEL64 35\n'
What you want to do with this string is strip the trailing newline \n and then split it into single words. Only at this point can you do the tuple unpacking that currently leads to the ValueError in your for-statement.
In python this means:
ser, ver, up = line.strip().split() # line is what you called FileContent, I'm allergic to caps in variable names
Now note that this is just one single line we are talking about. Probably you want to do this for all lines in the file, right?
So it's best to iterate over the lines:
myfile = "server_list"
with open(myfile, 'r') as fobj:
    for row, line in enumerate(fobj):
        ser, ver, up = line.strip().split()
        # do stuff with row, ser, ver, up
Note that you do not need to track the row yourself; the enumerate iterator does that for you.
Also note, and this is crucial: the with statement used here makes sure that you do not leave the file open. Using the with-clause whenever you work with files is a good habit!
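Putting this together with your spreadsheet code, a minimal sketch, assuming you are using xlsxwriter (the worksheet.write calls in your snippet look like it), that every line really has exactly three fields, and with an output file name that is just an example:
import xlsxwriter

# hypothetical output workbook name
workbook = xlsxwriter.Workbook("server_list.xlsx")
worksheet = workbook.add_worksheet()

with open("server_list", "r") as fobj:
    for row, line in enumerate(fobj):
        ser, ver, up = line.strip().split()
        worksheet.write(row, 0, ser)  # OS
        worksheet.write(row, 1, ver)  # version
        worksheet.write(row, 2, up)   # uptime

workbook.close()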
Related
I'm trying to remove duplicate rows in a csv file based on whether a column's value is unique. My code looks like this:
import fileinput

seen = set()
for line in fileinput.FileInput('DBA.csv', inplace=1):
    if line[2] in seen:
        continue  # skip duplicated line
    seen.add(line[2])
    print(line, end='')
I'm trying to get the value of the column at index 2 in every row and check whether it's unique. But for some reason my seen set looks like this:
{'b', '"', 't', '/', 'k'}
Any advice on where my logic is flawed?
You're reading your file line by line, so when you pick line[2] you're actually picking the third character of each line you're running this on.
If you want to capture the value of the second column for each row, you need to parse your CSV first, something like:
import csv

seen = set()
with open("DBA.csv", "r", newline="") as f:
    reader = csv.reader(f)
    for line in reader:
        if line[2] in seen:
            continue
        seen.add(line[2])
        print(line)  # note: this will NOT print valid CSV, it prints a Python list
If you want to edit your CSV in place, I'm afraid it will be a bit more complicated than that. If your CSV is not huge, you can load it into memory, truncate the file and then write your lines back out:
import csv

seen = set()
with open("DBA.csv", "r+", newline="") as f:
    handler = csv.reader(f)
    data = list(handler)
    f.seek(0)
    f.truncate()
    handler = csv.writer(f)
    for line in data:
        if line[2] in seen:
            continue
        seen.add(line[2])
        handler.writerow(line)
Otherwise you'll have to read your file line by line, use a buffer that you pass to csv.reader() to parse each line, check the value of its third column and, if not seen yet, write the line to the file you are editing in place. If it has been seen, you'll have to seek back to the beginning of the previous line before writing the next one, and so on.
Of course, you don't need to use the csv module if you know your line structure well, which can simplify things (you won't need to deal with passing buffers around), but for a universal solution it's highly advisable to let the csv module do your bidding.
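For what it's worth, here is a minimal sketch of that line-by-line variant; to keep it simple it writes to a separate output file (DBA_dedup.csv is just an example name) instead of seeking around in the original, and it assumes no field contains an embedded newline:
import csv

seen = set()
with open("DBA.csv", newline="") as src, \
        open("DBA_dedup.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    for raw_line in src:
        # parse the single buffered line with the csv module
        row = next(csv.reader([raw_line]))
        if len(row) < 3 or row[2] in seen:
            continue
        seen.add(row[2])
        writer.writerow(row)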
I am very new to Python. I have a .txt file and want to convert it to a .csv file in the format I was told to use, but I could not manage to accomplish it; a hand would be useful here. I am going to explain it with screenshots.
I have a txt file with the name bip.txt, and the data inside of it looks like this (screenshot).
I want to convert it to a csv file like this (screenshot of the csv file).
So far, what I could do is only writing all the data from text file with this code:
import glob

read_files = glob.glob("C:/Users/Emrehana1/Desktop/bip.txt")
with open("C:/Users/Emrehana1/Desktop/Test_Result_Report.csv", "w") as outfile:
    for f in read_files:
        with open(f, "r") as infile:
            outfile.write(infile.read())
So is there a solution to convert it to a csv file in the format I desire? I hope I have explained it clearly.
There's no need to use the glob module if you only have one file and you already know its name; you can just open it. It would also have been helpful to quote your data as text, since someone wanting to help you can't copy and paste your input data from an image.
For each entry in the input file you will have to read multiple lines to collect together the information you need to create an entry in the output file.
One way is to loop over the lines of input until you find one that begins with "test:", then get the next line in the file using next() to create the entry:
The following code will produce the split you need. Creating the csv file can be done with the standard library csv module and is left as an exercise (though a rough sketch follows below). I used a different file name, as you can see.
with open("/tmp/blip.txt") as f:
for line in f:
if line.startswith("test:"):
test_name = line.strip().split(None, 1)[1]
result = next(f)
if not result.startswith("outcome:"):
raise ValueError("Test name not followed by outcome for test "+test_name)
outcome = result.strip().split(None, 1)[1]
print test_name, outcome
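If you do want to see one way to write those pairs out with the csv module, here is a minimal sketch; the output file name and the "test"/"outcome" header row are just assumptions based on your description:
import csv

rows = [("test", "outcome")]  # assumed header row
with open("/tmp/blip.txt") as f:
    for line in f:
        if line.startswith("test:"):
            test_name = line.strip().split(None, 1)[1]
            outcome = next(f).strip().split(None, 1)[1]
            rows.append((test_name, outcome))

with open("Test_Result_Report.csv", "w", newline="") as out:
    csv.writer(out).writerows(rows)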
You do not use the glob function to open a file; it searches for file names matching a pattern. You could open the file bip.txt, read each line and put the values into a list, and then, once all of the values have been found, join them with commas and newlines and write the result to a csv file, like this:
# set the csv column headers
values = [["test", "outcome"]]
current_row = []
with open("bip.txt", "r") as f:
    for line in f:
        # when a blank line is found, append the row
        if line == "\n" and current_row != []:
            values.append(current_row)
            current_row = []
        if ":" in line:
            # get the value after the colon
            value = line[line.index(":") + 1:].strip()
            current_row.append(value)
# append the final row to the list, if there is one
if current_row:
    values.append(current_row)
# join the columns with a comma and the rows with a new line
csv_result = ""
for row in values:
    csv_result += ",".join(row) + "\n"
# output the csv data to a file
with open("Test_Result_Report.csv", "w") as f:
    f.write(csv_result)
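As a side note, if any extracted value could ever contain a comma itself, it is safer to let the csv module handle the quoting instead of the manual join; a sketch of just the final write step under that assumption:
import csv

# write the collected `values` rows with proper CSV quoting
with open("Test_Result_Report.csv", "w", newline="") as f:
    csv.writer(f).writerows(values)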
I am working on my assignment for my intro-level computing class and I've come across an error that I am unable to understand.
My goal (at the moment) is to be able to extract information from the input file and store it in such a way that I get 3 values: animal id, time visited and station.
Here is the input file:
#Comments
a01:01-24-2011:s1
a03:01-24-2011:s2
a03:09-24-2011:s1
a03:10-23-2011:s1
a04:11-01-2011:s1
a04:11-02-2011:s2
a04:11-03-2011:s1
a04:01-01-2011:s1
a02:01-24-2011:s2
a03:02-02-2011:s2
a03:03-02-2011:s1
a02:04-19-2011:s2
a04:01-23-2011:s1
a04:02-17-2011:s1
#comments
a01:05-14-2011:s2
a02:06-11-2011:s2
a03:07-12-2011:s1
a01:08-19-2011:s1
a03:09-19-2011:s1
a03:10-19-2011:s2
a03:11-19-2011:s1
a03:12-19-2011:s2
a04:12-20-2011:s2
a04:12-21-2011:s2
a05:12-22-2011:s1
a04:12-23-2011:s2
a04:12-24-2011:s2
And here is my code so far:
import os.path

def main():
    station1 = {}
    station2 = {}
    record = ()
    items = []
    animal = []
    endofprogram = False
    try:
        filename1 = input("Enter name of input file >")
        infile = open(filename1, "r")
        filename2 = input('Enter name of output file > ')
        while (os.path.isfile(filename2)):
            filename2 = input("File Exists!Enter name again>")
        outfile = open(filename2.strip(), "w")
    except IOError:
        print("File does not exist")
        endofprogram = True
    if endofprogram == False:
        print('Continuing program')
        records = reading(infile)
        print('records are > ', records)

def reading(usrinput):
    for line in usrinput:
        if (len(line) != 0) or (line[0] != '#'):
            AnimalID, Timestamp, StationID = line.split()
            record = (AnimalID, Timestamp, StationID)
            data = data.append(record)
    return data

main()
What I am trying to do is open the file and import the 3 data items separated by ':'.
The error I keep receiving is this:
Continuing program
Traceback (most recent call last):
File "C:\Program Files (x86)\Wing IDE 101 5.0\src\debug\tserver\_sandbox.py", line 39, in <module>
File "C:\Program Files (x86)\Wing IDE 101 5.0\src\debug\tserver\_sandbox.py", line 25, in main
File "C:\Program Files (x86)\Wing IDE 101 5.0\src\debug\tserver\_sandbox.py", line 34, in reading
builtins.ValueError: need more than 1 value to unpack
I have tried to switch the line in my reading function to:
AnimalID, Timestamp, StationID = line.split(':')
But still nothing.
The issue is that len(line) != 0 is always True: lines read from a file keep their trailing newline, so even blank lines have a length of at least 1, and the or therefore lets every line through, including the comments. To select non-blank lines that do not start with #, you could:
line = line.strip()  # remove leading/trailing whitespace
if line and line[0] != '#':
    fields = line.split(':')  # NOTE: use ':' delimiter
    if len(fields) == 3:
        data.append(fields)
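As a rough sketch of how that snippet could sit inside your reading function (note that the data list has to be initialised before the loop and returned afterwards, which your current version is missing):
def reading(usrinput):
    data = []  # one [AnimalID, Timestamp, StationID] list per record
    for line in usrinput:
        line = line.strip()          # remove leading/trailing whitespace
        if line and line[0] != '#':  # skip blank and comment lines
            fields = line.split(':') # use ':' as the delimiter
            if len(fields) == 3:
                data.append(fields)
    return data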
I assume Comments is a line in your file. So the first line that your reading function will try to parse is the line that is simply Comments. This will not work because Comments will not create a sequence that is three elements long when you split the line on white space:
AnimalID, Timestamp, StationID = line.split() # won't work with headings
Thanks to the recent formatting of your input file, you could use the above approach if you filter which lines you try to split (that is, ensure that the line you are splitting always has two colons, which gives you three elements). The following approach might illustrate an alternative method to stimulate thought:
for line in lines:  # from the open file
    if ':' in line.strip():  # for example; you need to distinguish station visits from headings somehow
        print(line.split(':'))  # you don't really want to print here, but you should figure out how to store this data
As I say in the comment, you won't really want to print the last line; you want to store it in some data structure. Further, you might find some better way to distinguish between lines with station visits and lines without. I'll leave these items for you to figure out, since I don't want to ruin the rest of the assignment for you.
Hi I already have the search function sorted out:
def searchconfig():
    config1 = open("config.php", "r")
    b = '//cats'
    for num, line in enumerate(config1, 0):
        if b in line:
            connum = num + 1
            return connum
    config1.close()
This will return the line number of //cats. I then need to take the data underneath it, put it in a temporary document, append new data under //cats, and then append the data from the temporary document back to the original. How would I do this? I know that I would have to use 'a' instead of 'r' when opening the document, but I do not know how to make use of the line number.
I think, the easiest way would be to read the whole file into a list of strings, work on that list and write it back afterwards.
# Read all lines of the file into a list of strings (without trailing newlines)
with open("config.php", "r") as fobj:
    lines = fobj.read().splitlines()

# This gets the line number of the first line containing '//cats'
# Note that next() will raise a StopIteration exception if no such line exists...
linenum = next(num for (num, line) in enumerate(lines) if '//cats' in line)

# insert a line after the line containing '//cats'
lines.insert(linenum + 1, 'This is a new line...')

# You could also replace the line following '//cats' like this
lines[linenum + 1] = 'New line content...'

# Write back the file (in fact this creates a new file with new content)
# Note that you need to append the line delimiter '\n' to every line explicitly
with open("config.php", "w") as fobj:
    fobj.writelines(line + '\n' for line in lines)
Using "a" as mode for open would only let you append ath the end of the file.
You could use "r+" for a combined read/write mode, but then you could only overwrite some parts of the file, there is no simple way to insert new lines in the middle of the file using this mode.
You could do it like this. I am creating a new file in this example as it is usually safer.
with open('my_file.php') as my_php_file:
    add_new_content = ['%sNEWCONTENT' % line if '//cat' in line
                       else line.strip('\n')
                       for line in my_php_file.readlines()]

with open('my_new_file.php', 'w+') as my_new_php_file:
    for line in add_new_content:
        print(line, file=my_new_php_file)
I am trying to read a csv file in python. The csv file has 1400 rows. I opened the csv file using the following command:
import csv
import sys
f = csv.reader(open("/Users/Brian/Desktop/timesheets_9_1to10_5small.csv", "rU"),
               dialect=csv.excel_tab)
Then I tried to loop through the file to pull the first name from each row using the following commands:
for row in f:
    g = row
    s = g[0]
    end_of_first_name = s.find(",")
    first_name = s[0:end_of_first_name]
I got the following error message:
Traceback (most recent call last):
File "", line 3, in module
s=g[0]
IndexError: list index out of range
Does anyone know why I would get this error message and how I can correct it?
You should not open the file in universal newline mode (U). Open the file in binary mode instead:
f = csv.reader(open("/Users/Brian/Desktop/timesheets_9_1to10_5small.csv", "rb"),
               dialect=csv.excel_tab)
The csv module does its own newline handling, including managing newlines inside quoted values.
Next, print your rows with print repr(row) to verify that you are getting the output you are expecting. Using repr instead of the regular string representation shows you much more about the type of objects you are handling, highlighting such differences as strings versus integers ('1' vs. 1).
Thirdly, if you want to select part of a string up to a delimiter such as a comma, use .split(delimiter, 1) or .partition(delimiter)[0]:
>>> 'John,Jack,Jill'.partition(',')[0]
'John'
row and g point to an empty list. I don't know if that necessarily means there is an empty line in the file, as csv may have other issues with it.
line_counter = 0
for row in f:
    line_counter = line_counter + 1
    g = row
    if len(g) == 0:
        print "line", line_counter, "may be empty or malformed"
        continue
Or, as Martijn points out, the Pythonic way is using enumerate:
for line_counter, row in enumerate(f, start=1):
    g = row
    if len(g) == 0:
        print "line", line_counter, "may be empty or malformed"
        continue
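If you happen to be on Python 3 rather than Python 2, note that the csv module there wants newline="" instead of binary mode; a sketch pulling the suggestions above together under that assumption:
import csv

# Python 3 sketch: skip empty/malformed rows instead of crashing,
# and use partition() to pull out the first name from the first field.
path = "/Users/Brian/Desktop/timesheets_9_1to10_5small.csv"
with open(path, "r", newline="") as infile:
    reader = csv.reader(infile, dialect=csv.excel_tab)
    for line_number, row in enumerate(reader, start=1):
        if not row:
            print("line", line_number, "may be empty or malformed")
            continue
        first_name = row[0].partition(",")[0]
        print(repr(first_name))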