I'm running a program that reads a dictionary from a file called questions.txt.
{
1:{0:'question',1:{1:'answer1a',2:'answer1b'},2:'answer2'},
2:{0:'question',1:'answer1',2:'answer2'},
3:{0:'question',1:'answer1',2:'answer2'},
4:{0:'question',1:'answer1',2:'answer2'}
}
I'm using this code in my file dictict.pyw to read parts of the dictionary.
fQDict = open("questions.txt", "r")
QDict = " "
for i in fQDict.read().split():
    QDict += i
fQDict.close()
QDict = eval(QDict)
print(QDict)
print(QDict[1])
print(QDict[1][0])
print(QDict[1][1])
print(QDict[1][1][1])
When I run the program, Python throws an error at the QDict = eval(QDict) line saying "source code string cannot contain null bytes". Why?
Your file contains null byte characters. On Linux you can strip them with:
sed -i 's/\x0//g' questions.txt
This should remove those characters from your file.
for more reference see: https://stackoverflow.com/a/2399817/230468
Thank you to everyone for the advice and answers.
The solution I found was to save the file with Notepad instead of WordPad, since WordPad embeds extra formatting bytes into the file.
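For anyone hitting the same error, a sketch of a safer way to read the file (assuming it holds a Python dict literal like the one above): strip any null bytes left behind by the editor and parse with ast.literal_eval instead of eval.

```python
import ast

def load_questions(path):
    # Read the raw text and drop any null bytes that WordPad-style
    # editors may have embedded in the file.
    with open(path, "r", errors="ignore") as f:
        raw = f.read().replace("\x00", "")
    # ast.literal_eval only accepts Python literals (dicts, strings,
    # numbers, ...), so it is much safer than eval() on file contents.
    return ast.literal_eval(raw)

# QDict = load_questions("questions.txt")
# print(QDict[1][1][1])
```

literal_eval raises a ValueError on anything that is not a plain literal, which also guards against a malicious or corrupted file executing code.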
I would like to convert the PowerShell script below to Python.
Here is the objective of this code.
The code takes in a comma-delimited filename and file extension.
It exports the file as a pipe-delimited file,
then removes the commas that exist within the data,
and finally removes the double quotes used to qualify the data.
This results in the final file being pipe-delimited with no double quotes or commas in the data. I used this order because if you try to replace the double quotes and commas before establishing the pipes, the columns and data break.
Param([string]$RootString, [string]$Ext)
$OrgFile = $RootString
$NewFile = $RootString.replace($Ext,"out")
Import-Csv $OrgFile -Encoding UTF8 | Export-Csv tempfile.csv -Delimiter "|" -NoTypeInformation
(Get-Content tempfile.csv).Replace(",","").Replace('"',"") | Out-File $NewFile -Encoding UTF8
Copy-Item -Force $NewFile $OrgFile
Remove-Item -Path $NewFile -Force
I got dinged a point for this, but I did not see a point in posting bad code that does not work. Here is my version of the non-working code.
import re
from datetime import datetime

# assumes dfcsv is a pandas DataFrame with a 'csvpath' column
i = 0  # row counter (was never initialized in the original)
for index in range(len(dfcsv)):
    filename = dfcsv['csvpath'].iloc[index]
    print(filename)
    print(i)
    with open(filename, 'r+') as f:
        text = f.read()
        print(datetime.now())
        text = re.sub('","', '|', text)
        print(datetime.now())
        f.seek(0)
        f.write(text)
        f.truncate()
    i = i + 1
The issue with this code is the find-and-replace approach: it created an extra column at the beginning because of the leading double quote, and sometimes an extra column at the end when there was a trailing double quote. This caused data from different rows to merge together. I didn't post this part originally because I didn't think it was necessary for my objective; the working PowerShell seemed more relevant for giving a better idea of the goal.
It seems no one here wanted to answer the question, so I found a solution elsewhere. Here is the link for anyone needing to convert a comma-delimited file to pipe-delimited:
https://www.experts-exchange.com/questions/29188372/How-can-I-Convert-Powershell-data-clean-up-code-to-Python-Code.html
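For reference, the same objective can be sketched with the standard csv module (function and file names here are illustrative, not from the linked solution): parse with the double-quote qualifiers intact, strip commas and quotes from each field, then join with pipes.

```python
import csv

def csv_to_pipe(in_path, out_path):
    # csv.reader understands the double-quote qualifiers, so fields are
    # split correctly *before* we strip the commas and quotes inside them.
    # This avoids the broken-columns problem described above.
    with open(in_path, newline='', encoding='utf-8') as fin, \
         open(out_path, 'w', encoding='utf-8') as fout:
        for row in csv.reader(fin):
            cleaned = (cell.replace(',', '').replace('"', '') for cell in row)
            fout.write('|'.join(cleaned) + '\n')
```

Parsing first and cleaning per-field reproduces the order the PowerShell version relies on: establish the pipes, then remove commas and quotes.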
I'm trying to use python-gitlab projects.files.create to upload string content to GitLab.
The string contains '\n', which I'd like to become a real newline character in the GitLab file, but it just writes '\n' as a literal string, so after uploading, the file contains only one line.
I'm not sure how or at what point I should fix this; I'd like the file content to look as if I had printed the string with print() in Python.
Thanks for your help.
EDIT---
Sorry, I'm using Python 3.7 and the string is actually CSV content, so it's basically like:
',col1,col2\n1,data1,data2\n'
So when I upload it, I want the GitLab file to be:
,col1,col2
1,data1,data2
I figured it out by saving the string to a file and reading it back; this way the \n in the string gets translated to an actual newline character.
I'm not sure if there's another way of doing this, but I'm posting it for anyone who encounters a similar situation.
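If the symptom is a one-line file, the string likely contains a literal backslash-n (two characters) rather than a newline, and it can be decoded in memory without the temp-file round trip. A sketch (the files.create arguments are illustrative):

```python
# The string holds a literal backslash followed by 'n', not a newline.
content = ',col1,col2\\n1,data1,data2\\n'

# Turn each two-character '\' + 'n' sequence into a real newline.
decoded = content.replace('\\n', '\n')

# Hypothetical python-gitlab call (file path, branch, and message are
# assumptions, not from the question):
# project.files.create({
#     'file_path': 'data.csv',
#     'branch': 'main',
#     'content': decoded,
#     'commit_message': 'upload csv',
# })
```

The replace produces the same result as the save-and-reload workaround, since reading the file back is what performed the translation.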
I'm trying to take the text output of a query to an SSD (pulling a log page, similar to pulling SMART data) and write this text data out to a log file I update periodically.
My problem happens when the log data for some drives has empty double quotes ("") as a placeholder for a blank field. Here is a snippet of the input:
VER 0x10200
VID 0x15b7
BoardRev 0x0
BootLoadRev ""
When this gets written out (appended) to my own log file, the text gets replaced with several null characters, and when I then try to open the file, all the text editors tell me it's corrupted.
The "" characters are replaced by something like this on my Linux system:
BootLoadRev "\00\00\00\00"
Some fields are even longer with the \00 characters. If the "" is not there, things write out OK.
The code is similar to this:
f=open(fileName, 'w')
test_bench.send_command('get_log_page')
identify_data = test_bench.get_data_in()
f.write(identify_data)
f.close()
Is there a way to send this text to a file w/o these nulls causing problems?
Assuming that this is Python 2 (and that your content is thus what Python 3 would call a bytestring), and that your intended data format is raw ASCII, the trivial solution is simply to remove the NULs from your content before you write to disk:
f.write(identify_data.replace('\0', ''))
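In Python 3, where the device data may arrive as bytes rather than str, the same idea needs a bytes pattern; a small helper (the function name is mine) covers both cases:

```python
def strip_nulls(data):
    # Remove NUL characters from either bytes or str before writing,
    # so text editors no longer flag the log file as corrupted.
    if isinstance(data, bytes):
        return data.replace(b'\x00', b'')
    return data.replace('\x00', '')
```

Usage would be f.write(strip_nulls(identify_data)), regardless of which type the test bench returns.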
I am trying to implement a little script to automate a local BLAST alignment.
I ran the commands in the terminal and they work perfectly. However, when I try to automate this, I get a message like: Empty XML file.
Do I have to add a "system" waiting time to let the file be written, or did I do something wrong?
The code:
# sequence identifier as key, sequence as value.
for element in dictionnaryOfSequence:
    # I make a little temporary fasta file because the blast command needs a fasta file as input.
    out_fasta = open("tmp.fasta", 'w')
    query = ">" + element + "\n" + str(dictionnaryOfSequence[element])
    out_fasta.write(query)  # And I have this file with my sequence correctly filled
    out_fasta.close()  # EDIT: it was out of my loop....

    # Now the blast command, which works well in the terminal; it fills tmp.xml correctly.
    os.system("blastn -db reads.fasta -query tmp.fasta -out tmp.xml -outfmt 5 -max_target_seqs 5000")

    # Parsing of the xml file.
    handle = open("tmp.xml", 'r')
    blast_records = NCBIXML.read(handle)
    print blast_records
I get an error saying "Your XML file was empty", and the blast_records object doesn't exist.
Did I do something wrong with the handles?
I'll take any advice. Thank you for your ideas and help.
EDIT: Problem solved, sorry for the useless question. I handled the file wrong: I did not open it in the right location, and the same goes for the closing.
Sorry.
Try opening the file "tmp.xml" in Internet Explorer. Are all the tags closed?
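The pattern that solved it (open, write, and close the FASTA inside the loop, and open the XML only after the command returns) can be sketched like this; subprocess and the helper names are my additions, and the blastn step assumes a local BLAST install plus Biopython:

```python
import subprocess

def write_fasta(path, name, seq):
    # The with-block closes (and flushes) the temp FASTA before
    # blastn reads it; leaving close() outside the loop caused the
    # empty-XML symptom above.
    with open(path, "w") as out_fasta:
        out_fasta.write(">" + name + "\n" + str(seq) + "\n")

def blast_one(fasta_path, xml_path, db="reads.fasta"):
    # Unlike os.system, subprocess.run with check=True raises if
    # blastn fails, instead of silently leaving an empty tmp.xml.
    subprocess.run(["blastn", "-db", db, "-query", fasta_path,
                    "-out", xml_path, "-outfmt", "5",
                    "-max_target_seqs", "5000"], check=True)
    from Bio.Blast import NCBIXML  # assumes Biopython is installed
    with open(xml_path) as handle:
        return NCBIXML.read(handle)
```

No "system" waiting time is needed: subprocess.run only returns after blastn has exited, at which point tmp.xml is complete on disk.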
I have tried so many options inside csv.reader but it's not working. I am new to Python and have tried almost every parameter. A single messy message inside my CSV file looks like this:
"Hey Hi
how are you all,I stuck into this problem,i have tried with such parameter but exceeding the existing number of records,in short file is not getting read properly.
\"I have tried with
datareader=csv.reader(csvfile,quotechar='"',lineterminator='\n\n\n\r\r',quoting=csv.QUOTE_ALL)
Error: new-line character seen in unquoted field - do you need to open the file in universal-newline mode? \"......... hence the problem continue.
"
As expected, because of the \" and \n in the message, I get extra records or records that break apart. I have tried different line terminators as well, as you can see in the message, but without success. This is my code right now:
with open("D:/Python/mssg5.csv", "r") as csvfile:
    datareader = csv.reader(csvfile, quotechar='"', lineterminator='\n', quoting=csv.QUOTE_ALL)
    count = 0
    #csv_out = open('D:/Python/mycsv.csv', 'wb')
    #mywriter = csv.writer(csv_out)
    for row in datareader:
        count = count + 1
    print "COUNT is :%d" % count
Any kind of help is appreciated, thanks.
A couple of things to try in the csv file:
Put the messy string into triple quotes: """ the string """
At the end of each line within your messy field, use the continuation char \
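It may also help to know that csv.reader already keeps newlines inside quoted fields as part of one record, provided newline translation is disabled on the file object; the "universal-newline mode" error in the question is the Python 2 hint to open in 'rb' mode, and in Python 3 the equivalent is newline=''. A small demonstration (with made-up data in the shape of the message above):

```python
import csv, io

# A quoted field containing an embedded newline: csv.reader treats it
# as part of the same record when the stream does not translate newlines.
data = '"Hey Hi\nhow are you all","second field"\n"next record","x"\n'

with io.StringIO(data, newline='') as f:  # for a real file: open(path, newline='')
    rows = list(csv.reader(f))
# rows[0][0] spans two lines of the file, yet there are only two records.
```

With the file opened this way, no custom lineterminator or triple-quoting is needed; the default dialect handles the embedded newlines.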