I'm trying to convert a Python .py function (that takes two inputs) into a .exe file, which I then run from Python.
The function looks like this and works perfectly:
def ncd(File1, FileN):
    # ..do something..
    return
Now, to convert this to a .exe I'm using PyInstaller from the command line, and in this step too everything is fine.
But when I execute it from Python, it doesn't receive the two input files that the function needs.
import subprocess
File1 = "Cam0001.dat"
FileN = "Cam1000.dat"
exe_ncd = "ncd.exe"
subprocess.run("\"" + exe_ncd + "\"" + " " + "\"" + File1 + "\"" + " " + "\"" + FileN + "\"", shell=True)
How can I do it? Thanks a lot :)
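For reference, here is a minimal sketch of one way this is commonly wired up. It assumes the frozen script reads its two file names from sys.argv and that the caller passes the arguments as a list instead of building a quoted command string; the names ncd, ncd.exe, Cam0001.dat and Cam1000.dat come from the question, everything else is an assumption:
# ncd.py -- the script that gets frozen with PyInstaller (sketch)
import sys

def ncd(File1, FileN):
    # ..do something..
    return

if __name__ == "__main__":
    # sys.argv[1] and sys.argv[2] are the two file names given on the command line
    ncd(sys.argv[1], sys.argv[2])
On the calling side, the list form avoids the manual quoting entirely:
import subprocess

File1 = "Cam0001.dat"
FileN = "Cam1000.dat"
exe_ncd = "ncd.exe"
subprocess.run([exe_ncd, File1, FileN])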
I am writing a script in Python that automatically generates a file you can use in the DecentSampler plugin, and I am running into an error that I don't understand.
noteGroups = []
firstGroup = True
print(noteGroups)
for file in files:  # Writes all the files in the pack.
    rfeS = ""
    strFile = str(file)
    rfe = strFile.split(".", 1)
    rfe.pop(-1)
    for rfeX in rfe:
        rfeS += rfeX
    filesSplit = rfeS.split("_", 1)
    note = filesSplit[0]
    rr = filesSplit[1]
    noteList = note.split("delimiter")
    print(noteList)
    if note not in noteGroups:
        noteGroups = noteGroups.append(note)
        print(noteGroups)
        if firstGroup:
            dsp.write("\n </group>")
            firstGroup = False
        dsp.write("\n <group>")
        dsp.write("\n <sample path=\"" + dir + "/" + file + "\" volume=\"5dB\" rootNote=\"" + note + "\" loNote=\"" + note + "\" hiNote=\"" + note + "\" seqPosition=\"" + rr + "\" />")
    else:
        print(noteGroups)
        dsp.write("\n <sample path=\"" + dir + "/" + file + "\" volume=\"5dB\" rootNote=\"" + note + "\" loNote=\"" + note + "\" hiNote=\"" + note + "\" seqPosition=\"" + rr + "\" />")
print(noteGroups)
I am getting the error
File "D:\Python\GUI-test\dshgui.py", line 109, in dspWrite
if note not in noteGroups:
TypeError: argument of type 'NoneType' is not iterable
But if I try this:
noteGroups = ["test", "test"]
note = "A2"
noteGroups.append(note)
print(noteGroups)
It does function properly...
Does anyone know why? And how I can fix it?
The problem is this line:
noteGroups = noteGroups.append(note)
append modifies the list in place and returns None. Don't assign that None back to the name of the original list. Just do:
noteGroups.append(note)
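A tiny illustration of why the assignment breaks the loop (the names here are made up for the example):
groups = ["A1"]
result = groups.append("A2")
print(result)   # None -- append gives nothing back
print(groups)   # ['A1', 'A2'] -- the list itself was updated in place
On the next iteration noteGroups is None, which is exactly what the "NoneType is not iterable" error is complaining about.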
After packing my program I decided to test it to make sure it worked. A few things happened, but the main issue is with the Save_File.
I use a Save_File.py to hold static save data. However, the frozen program can't do anything with this file: it can't write to it or read from it. Writing reports that the save completed successfully, but on load every value resets to zero again.
Is it normal for any .py file to do this?
Is it an issue in pyinstaller?
Bad freeze process?
Or is there some other reason the frozen file can't write, read, or interact with files that aren't already inside it? (Save_File was frozen inside the exe and doesn't work, but removing it causes errors, as if it never existed.)
So the exe can't see outside of itself or change within itself...
Edit: Added the most basic version of the save file, but basically, it gets deleted and rewritten a lot.
def save():
    with open("Save_file.py", "a") as file:
        file.write("healthy = " + str(healthy) + "\n")
        file.write("infected = " + str(infected) + "\n")
        file.write("zombies = " + str(zombies) + "\n")
        file.write("dead = " + str(dead) + "\n")
        file.write("cure = " + str(cure) + "\n")
        file.write("week = " + str(week) + "\n")
        file.write("infectivity = " + str(infectivity) + "\n")
        file.write("infectivity_limit = " + str(infectivity_limit) + "\n")
        file.write("severity = " + str(severity) + "\n")
        file.write("severity_limit = " + str(severity_limit) + "\n")
        file.write("lethality = " + str(lethality) + "\n")
        file.write("lethality_limit = " + str(lethality_limit) + "\n")
        file.write("weekly_infections = " + str(weekly_infections) + "\n")
        file.write("dna_points = " + str(dna_points) + "\n")
        file.write("burst = " + str(burst) + "\n")
        file.write("burst_price = " + str(burst_price) + "\n")
        file.write("necrosis = " + str(necrosis) + "\n")
        file.write("necrosis_price = " + str(necrosis_price) + "\n")
        file.write("water = " + str(water) + "\n")
        file.write("water_price = " + str(water_price) + "\n")
        file.write("air = " + str(air) + "\n")
        file.write("blood = " + str(blood) + "\n")
        file.write("saliva = " + str(saliva) + "\n")
        file.write("zombify = " + str(zombify) + "\n")
        file.write("rise = " + str(rise) + "\n")
        file.write("limit = int(" + str(healthy) + " + " + str(infected) + " + " + str(dead) + " + " + str(zombies) + ")\n")
        file.write("old = int(1)\n")
    Clear.clear()
    WordCore.word_corex("SAVING |", "Save completed successfully")
    time.sleep(2)
    Clear.clear()
    player_menu()
It's probably because the frozen version of the file (somewhere inside a .zip archive) is the one that gets loaded, never the one you're writing (which is why it works when the files aren't frozen).
It's bad practice to:
- have a zillion global variables to hold your persistent data
- generate code in a Python file just to evaluate it back again (that's _self-modifying code_).
If you were using C or C++, would you generate some code to store your data and then compile it into your new executable? Would you declare 300 globals? I don't think so.
You'd be better off with the JSON data format and a dictionary for your variables; that works whether the program is frozen or not.
Your dictionary would look like:
variables = {"healthy" : True, "zombies" : 345} # and so on
Access your variables:
if variables["healthy"]: # do something
then save function:
import json

def save():
    with open("data.txt", "w") as file:
        json.dump(variables, file, indent=3)
creates a text file with data like this:
{
   "healthy": true,
   "zombies": 345
}
And the load function (declaring variables as global so the module-level name is rebound instead of a new local one being created):
def load():
    global variables
    with open("data.txt", "r") as file:
        variables = json.load(file)
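One extra detail worth mentioning, since the original problem involves a frozen executable: write the data file to a path you control rather than relying on the current working directory. A minimal sketch of a common pattern (PyInstaller sets sys.frozen on the frozen process; the file name data.txt comes from the answer above, the helper name is made up):
import os
import sys

def data_path(name="data.txt"):
    # When frozen by PyInstaller, sys.executable points at the .exe;
    # otherwise fall back to the directory of this script.
    if getattr(sys, "frozen", False):
        base = os.path.dirname(sys.executable)
    else:
        base = os.path.dirname(os.path.abspath(__file__))
    return os.path.join(base, name)
Then open(data_path(), "w") in save() and open(data_path(), "r") in load() always hit the same file next to the executable.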
This is my code:
outputFile = 'C:\\myfileslog\\log\\' + timestr + str(myrandam) + '_' + "out.res"
binaryExe = 'C:\\Users\\xxx\\Desktop\\test\\test2.9.exe'
input = binaryExe + ' -o ' + outputFile + [inputParameters]
subprocess.Popen(input, bufsize=65, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
While running the above code, two files get created:
1) 20181004-124704_0.45529096783117506_out.res
2) 20181004-124704_0.45529096783117506_out.ÿÿÿÿCPI
I have already tried passing stdout=outputFile, but that doesn't write anything to the file. Please help; I don't want the duplicate file to be created, as it is causing issues.
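On the stdout attempt specifically: subprocess expects an open file object (or a file descriptor) for stdout, not a path string. A rough sketch of redirecting into a file, assuming inputParameters is a list of extra arguments and stdoutFile is a hypothetical separate log path (whether the tool still writes its own extra file is a different issue):
import subprocess

with open(stdoutFile, "w") as log:
    # Pass the command as a list so each argument stays intact,
    # and hand Popen an open file object for stdout.
    proc = subprocess.Popen([binaryExe, '-o', outputFile] + inputParameters,
                            stdout=log, stderr=subprocess.PIPE,
                            universal_newlines=True)
    proc.wait()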
I am working on checking for corrupted PDFs in a file system. In the test I am running there are almost 200k PDFs. Smaller corrupted files seem to alert correctly, but I ran into a large, corrupted 15 MB file and the code just hangs indefinitely. I've tried setting strict to False with no luck. It seems like the initial opening is the problem. Rather than using threads with a timeout (which I have tried in the past with little success), I'm hoping there's an alternative.
import PyPDF2, os
from time import gmtime, strftime

path = raw_input("Enter folder path of PDF files:")
t = open(r'c:\pdf_check\log.txt', 'w')
count = 1
for dirpath, dnames, fnames in os.walk(path):
    for file in fnames:
        print count
        count = count + 1
        if file.endswith(".pdf"):
            file = os.path.join(dirpath, file)
            try:
                PyPDF2.PdfFileReader(file, 'rb', warndest="c:\test\warning.txt")
            except PyPDF2.utils.PdfReadError:
                curdate = strftime("%Y-%m-%d %H:%M:%S", gmtime())
                t.write(str(curdate) + " " + "-" + " " + file + " " + "-" + " " + "fail" + "\n")
            else:
                pass
                #curdate = strftime("%Y-%m-%d %H:%M:%S", gmtime())
                #t.write(str(curdate) + " " + "-" + " " + file + " " + "-" + " " + "pass" + "\n")
t.close()
It looks like there is an issue with PyPDF2. I wasn't able to get it to work; however, I switched to pdfrw, which did not stall at this point and ran through all couple hundred thousand docs without issue.
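For reference, a minimal sketch of what the swap could look like inside the same loop (t and strftime come from the question's script); pdfrw's reader class is PdfReader, and the exact exception it raises on a corrupted file is an assumption here, so the except is kept broad:
from pdfrw import PdfReader

try:
    PdfReader(file)  # parse the PDF; a corrupted file should raise rather than hang
except Exception:    # assumption: catch pdfrw's parse error broadly
    curdate = strftime("%Y-%m-%d %H:%M:%S", gmtime())
    t.write(curdate + " - " + file + " - fail\n")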
I am having trouble calling an EMBOSS program (which runs via the command line) called sixpack from Python.
I run Python on Windows 7, Python version 3.23, Biopython version 1.59, EMBOSS version 6.4.0.4. Sixpack is used to translate a DNA sequence in all six reading frames and creates two output files: a sequence file identifying ORFs, and a file containing the protein sequences.
There are three required arguments, which I can pass successfully on the command line: (-sequence [input file], -outseq [output sequence file], -outfile [protein sequence file]). I have been using the subprocess module in place of os.system, as I have read that it is more powerful and versatile.
The following is my Python code, which runs without error but does not produce the desired output files.
from Bio import SeqIO
import re
import os
import subprocess

infile = input('Full path to EXISTING .fasta file would you like to open: ')
outdir = input('NEW Directory to write outfiles to: ')
os.mkdir(outdir)
for record in SeqIO.parse(infile, "fasta"):
    print("Translating (6-Frame): " + record.id)
    ident = re.sub("\|", "-", record.id)
    print(infile)
    print("Old record ID: " + record.id)
    print("New record ID: " + ident)
    subprocess.call(['C:\memboss\sixpack.exe', '-sequence ' + infile, '-outseq ' + outdir + ident + '.sixpack', '-outfile ' + outdir + ident + '.format'])
    print("Translation of: " + infile + "\nWritten to: " + outdir + ident)
Found the answer: I was using the wrong syntax to call subprocess. This is the correct syntax:
subprocess.call (['C:\memboss\sixpack.exe', '-sequence', infile, '-outseq', outdir + ident + '.sixpack', '-outfile', outdir + ident + '.format'])
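The difference is that each argument has to be its own list element; subprocess then hands them to the program separately, whereas '-sequence ' + infile arrives as one single argument that sixpack does not recognize. Two optional hardening steps on top of that (assumptions, not part of the original answer): use a raw string for the executable path so backslash escapes can't bite, and build the output paths with os.path.join so a missing trailing separator on outdir doesn't glue directory and file name together:
import os

exe = r'C:\memboss\sixpack.exe'  # raw string: backslashes are taken literally
subprocess.call([exe,
                 '-sequence', infile,
                 '-outseq', os.path.join(outdir, ident + '.sixpack'),
                 '-outfile', os.path.join(outdir, ident + '.format')])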