Below is my code. This is a mini IDE for a class project and I am stuck. I am trying to build an IDE that compiles Java. I downloaded the JDK and am using subprocess to pipe cmd and communicate with javac, but I need to pass the file name with its extension so that it just shows the output. I also need help with outputting to a console, because right now output only appears in the Visual Studio terminal. Please help me, because I will be submitting on Thursday.
import tkinter as tk
from tkinter import filedialog
from tkinter import messagebox
import subprocess
import os

name_file = os.path.basename(__file__)

# run button that opens command line
def run(self, *args):
    p1 = subprocess.Popen('cmd', shell=True, stdin=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    p2 = subprocess.Popen('javac name_file; java name_file', shell=True, stdin=p1.stdout)
    p1.stdout.close()
    out, err = p2.communicate()

if __name__ == "__main__":
    master = tk.Tk()
    pt = PyText(master)
    master.mainloop()
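A minimal sketch of what that compile-and-run step could look like (assuming the editor saves the file first and keeps its full path in a variable, here hypothetically called file_path): javac needs the file name with its .java extension, java needs the bare class name, and capturing stdout/stderr lets the IDE show the output itself.

import os
import subprocess

def compile_and_run(file_path):
    # file_path is a hypothetical variable: the full path of the saved .java file
    work_dir = os.path.dirname(file_path)
    file_name = os.path.basename(file_path)        # e.g. MyProgram.java
    class_name = os.path.splitext(file_name)[0]    # e.g. MyProgram

    # Compile: javac gets the real file name, with its extension
    compile_proc = subprocess.run(['javac', file_name], cwd=work_dir,
                                  capture_output=True, text=True)
    if compile_proc.returncode != 0:
        return compile_proc.stderr                 # compiler errors

    # Run: java gets the class name, without the .java extension
    run_proc = subprocess.run(['java', class_name], cwd=work_dir,
                              capture_output=True, text=True)
    return run_proc.stdout + run_proc.stderr

The returned text can then be inserted into a Tkinter Text widget so the IDE shows its own console instead of relying on the Visual Studio terminal.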
The easiest way I can think of would be to use sys.argv to pass the file name to a Node or Java routine.
Modify your Python script to pass the info to the command line as arguments:
fname = file  # the file you want to pass
pythonCall = 'node javascript.js -- ' + fname
# the -- passes what follows into sys.argv
# this is Python notation; Java/Node may not need the '--', just a space
runCommand = 'cmd.exe /c ' + pythonCall
subprocess.Popen(runCommand)
Then, at the start of your JavaScript, this will give you the file name that was passed in via argv (it's from the link I included below):
var args = process.argv.slice(2);
If you need more help accessing process.argv in Node:
How do I pass command line arguments to a Node.js program?
Related
I want to redirect the output of shell commands to a file using the variable "path", but it is not working.
import os, socket, shutil, subprocess

host = os.popen("hostname -s").read().strip()
path = "/root/" + host
if os.path.exists(path):
    print(path, "Already exists")
else:
    os.mkdir("Directory", path, "Created")
os.system("uname -a" > path/'uname')  # I want to redirect the output of shell commands to a file using the variable "path", but it is not working
os.system("df -hP" > path/'df')
I think the problem is the bare > and / symbols in the os.system command...
Here is a Python 2.7 example with os.system that does what you want:
import os
path="./test_dir"
command_str="uname -a > {}/uname".format(path)
os.system(command_str)
Here's a very minimal example using subprocess.run. Also, search StackOverflow for "python shell redirect", and you'll get this result right away:
Calling an external command in Python
import subprocess
def run(filename, command):
    with open(filename, 'wb') as stdout_file:
        process = subprocess.run(command, stdout=subprocess.PIPE, shell=True)
        stdout_file.write(process.stdout)
        return process.returncode

run('test_out.txt', 'ls')
I have been looking for an answer for how to execute a Java jar file through Python, and after looking at:
Execute .jar from Python
How can I get my python (version 2.5) script to run a jar file inside a folder instead of from command line?
How to run Python egg files directly without installing them?
I tried to do the following (both my jar and python file are in the same directory):
import os

if __name__ == "__main__":
    os.system("java -jar Blender.jar")
and
import subprocess
subprocess.call(['(path)Blender.jar'])
Neither has worked. So I was thinking that I should use Jython instead, but I think there must be an easier way to execute jar files through Python.
Do you have any idea what I may be doing wrong? Or is there another site where I can read more about my problem?
I would use subprocess this way:
import subprocess
subprocess.call(['java', '-jar', 'Blender.jar'])
But, if you have a properly configured /proc/sys/fs/binfmt_misc/jar you should be able to run the jar directly, as you wrote.
So, what exactly is the error you are getting?
Please post all the output you are getting from the failed execution.
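As a side note on the binfmt_misc remark above, a direct invocation would look roughly like this (a sketch, assuming the jar is registered with binfmt_misc and has its executable bit set):

import subprocess

# The kernel dispatches the jar to the JVM via binfmt_misc
subprocess.call(['./Blender.jar'])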
This always works for me:
from subprocess import *
def jarWrapper(*args):
    process = Popen(['java', '-jar'] + list(args), stdout=PIPE, stderr=PIPE)
    ret = []
    while process.poll() is None:
        line = process.stdout.readline()
        if line != '' and line.endswith('\n'):
            ret.append(line[:-1])
    stdout, stderr = process.communicate()
    ret += stdout.split('\n')
    if stderr != '':
        ret += stderr.split('\n')
    ret.remove('')
    return ret
args = ['myJarFile.jar', 'arg1', 'arg2', 'argN'] # Any number of args to be passed to the jar file
result = jarWrapper(*args)
print result
I used the following approach to execute the Tika jar and extract the content of a Word document. It worked and I got the output as well. The command I'm trying to run is "java -jar tika-app-1.24.1.jar -t 42250_EN_Upload.docx":
from subprocess import PIPE, Popen
process = Popen(['java', '-jar', 'tika-app-1.24.1.jar', '-t', '42250_EN_Upload.docx'], stdout=PIPE, stderr=PIPE)
result = process.communicate()
print(result[0].decode('utf-8'))
Here I got the result as a tuple, hence "result[0]". Also, the output was in binary format (a bytes string). To convert it into a normal string we need to decode it with 'utf-8'.
With args: a concrete example using the Closure Compiler (https://developers.google.com/closure/) from Python:
import os
import re

src = 'test.js'
os.execlp("java", 'blablabla', "-jar", './closure_compiler.jar', '--js', src, '--js_output_file', '{}'.format(re.sub('.js$', '.comp.js', src)))
(also see here When using os.execlp, why `python` needs `python` as argv[0])
How about using os.system() like:
os.system('java -jar blabla...')
os.system(command)
Execute the command (a string) in a subshell. This is implemented by calling the Standard C function system(), and has the same limitations. Changes to sys.stdin, etc. are not reflected in the environment of the executed command.
In this post I have explained how to check how many QGIS projects are open in a Windows session.
Here is the shortened code:
import os
import subprocess
from os.path import basename
from PyQt4.QtGui import QMessageBox
def checkQgisProcesses(self):
    try:
        from subprocess import DEVNULL
    except ImportError:
        DEVNULL = os.open(os.devnull, os.O_RDWR)
    res = subprocess.check_output('C:\Windows\System32\cmd.exe /c tasklist /FI "IMAGENAME eq qgis-bin.exe" -v 2>NUL', stdin=DEVNULL, stderr=DEVNULL, startupinfo=info)
    ...

def someOtherFunc(self):
    self.checkQgisProcesses()
    ...
The code uses tasklist from Windows to get information about all open windows and filters the QGIS window title.
I use this code within a function in a QGIS plugin.
There is another function in the plugin where I call multiple subprocesses another time to do some calculations with another program (SAGA GIS):
curv_PATH = plugin_pth + 'dem_curvature.bat'
subprocess.call(["C:\Windows\System32\cmd.exe", "/c", script_PATH], startupinfo = info)
subprocess.call(['gdalwarp', raster_dest_data + 'dem_gaussian.sdat', '-tr', cellsize, cellsize, '-r', 'bilinear', raster_dest_data + 'dem_res.tif' ], startupinfo = info)
subprocess.call(["C:\Windows\System32\cmd.exe", "/c", curv_PATH], startupinfo = info)
The problem is that these subprocess calls don't work anymore. I don't get a specific error in the QGIS Python console.
When I comment out self.checkQgisProcesses() it works again.
I think the problem lies with the DEVNULL, stdin, and stderr parameters.
How can I set them back to their defaults?
UPDATE
I wrongly thought the problem was the DEVNULL declaration. Apparently the problem is the use of subprocess.STARTUPINFO().
Here is a reproducible example:
import os
from os.path import basename
import subprocess
SW_HIDE = 0
info = subprocess.STARTUPINFO()
info.dwFlags = subprocess.STARTF_USESHOWWINDOW
info.wShowWindow = SW_HIDE
def checkFirefoxProcesses():
    try:
        from subprocess import DEVNULL
    except ImportError:
        DEVNULL = os.open(os.devnull, os.O_RDWR)
    res = subprocess.check_output('C:\Windows\System32\cmd.exe /c tasklist /FI "IMAGENAME eq firefox.exe" -v 2>NUL', stdin=DEVNULL, stderr=DEVNULL, startupinfo=info)
    print res

def helloWorldToText():
    subprocess.call(["C:\Windows\System32\cmd.exe", "/c", r"d:\\hello_world.bat"], startupinfo=info)
checkFirefoxProcesses()
helloWorldToText()
This is the code from hello_world.bat:
REM #ECHO OFF
ECHO Hello World! > d:\\hello_world.txt
PAUSE
The function checkFirefoxProcesses() looks for open Firefox processes. To run through the example you need to have a Firefox session open. The function helloWorldToText() creates a text file from ECHO with the help of a BAT file.
The example doesn't create the text file when startupinfo = info is passed. When I run the subprocess in helloWorldToText() without startupinfo = info, it works.
Also: running checkFirefoxProcesses() without startupinfo = info and helloWorldToText() with startupinfo = info works.
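For what it's worth, one plausible explanation (my assumption, not something stated in the original post) is that on Windows subprocess mutates the STARTUPINFO object it is given whenever stdin/stderr are redirected, so reusing the same info object for the next call carries over stale handle flags. A minimal sketch of a workaround is to build a fresh STARTUPINFO for each call:

import subprocess

SW_HIDE = 0

def hidden_startupinfo():
    # Build a new STARTUPINFO each time instead of sharing one module-level object
    info = subprocess.STARTUPINFO()
    info.dwFlags = subprocess.STARTF_USESHOWWINDOW
    info.wShowWindow = SW_HIDE
    return info

# Each call gets its own, unmodified STARTUPINFO
subprocess.call(["C:\\Windows\\System32\\cmd.exe", "/c", r"d:\hello_world.bat"],
                startupinfo=hidden_startupinfo())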
I am trying to pass a variable to an Abaqus script file (.psf) through the command line. The command-line call is made every time another script is executed, and the variable has a different value in each call. Can I get help with the command syntax to be used? I tried os.system and subprocess.Popen; both are giving some sort of errors.
My main script (.py file) calls the .psf like this:
Xa=150000
abaqusCall = 'abaqus script=tt_Par.psf'
runCommand = 'cmd.exe /c ' + abaqusCall
process = subprocess.Popen(runCommand, cwd=workDir, args=Xa)
and the .psf accepts variables in this format:
import sys,os
for item in sys.argv
x1 = sys.argv[0]
x2 = sys.argv[1]
print x1,x2
Could anyone give directions in this regard?
Try this out. I am not sure what a .psf file is, but I just use .py files.
def abaqus_cmd(mycmd):
    '''
    used to execute abaqus commands in the windows OS console
    inputs : mycmd, an abaqus command
    '''
    import subprocess, sys
    try:
        retcode = subprocess.call(mycmd, shell=True)
        if retcode < 0:
            print >>sys.stderr, mycmd + "...failed during execution", -retcode
        else:
            print >>sys.stderr, mycmd + "...success"
    except OSError as e:
        print >>sys.stderr, mycmd + "...failed at execution", e
To run a simple command, do this:
abaqus_cmd('abaqus fetch job=beamExample')
To pass a variable, you can do this:
odbfile = 'test.odb'
abaqus_cmd('abaqus python odb_to_txt.py '+odbfile)
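For completeness, the receiving side could read that variable from sys.argv (a sketch; odb_to_txt.py and what it does with the file are just the example names from above):

# Inside odb_to_txt.py (hypothetical contents)
import sys

odb_file = sys.argv[-1]   # 'test.odb', appended to the command line above
print('extracting text from ' + odb_file)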
However, this is just a Python instance in Abaqus; you cannot access the Abaqus kernel here. To access the Abaqus kernel, you need to run the script like this:
abaqus_cmd('abaqus cae noGUI=beamExample.py')
I HAVE NOT figured out how to pass variables into scripts in the abaqus kernel, see my comment
Very late to the party, but I also needed to call Abaqus scripts with variables passed in and out via system calls. My main script is in Py3, but Abaqus (2021) still uses Py27. You don't need to worry: as long as your model script is in Py27, you can still issue the command from Py3.
I needed to run my model script in a directory different from my Py3 main script, so in the main script I have:
aba_dir = PATH_TO_DIRECTORY_IN_WHICH_I_WANT_ABAQUS_FILES (.odb, .cae etc.)
script_dir = DIRECTORY_FOR_MODEL_SCRIPT (build_my_model.py - this is py27 script)
job_name = call_cae(cwd, aba_dir, script_dir, var1, var2, var3)
The following functions are used to build the correct command:
Function to define whether to call CAE or ODB viewer:
def caller_type(cae):
    # DIFFERENTIATE BETWEEN CAE AND VIEWER
    if cae:
        caller = 'abaqus cae noGui='
    else:
        caller = 'abaqus viewer noGui='
    return caller
Function to build command string with variables:
def build_command_string(script, cae, *args):
    # ## GET THE CAE/VIEWER CALLER
    caller = caller_type(cae)
    # ## CREATE STRING REPRESENTING ABAQUS COMMAND
    caller = caller + script
    # ## STRING ALL ARGUMENTS AS INDIVIDUAL ITEMS
    str_args = [str(arg) for arg in args]
    # ## CREATE COMMAND INITIALISER WITH CALLER
    c = ['cmd.exe', '/C', caller, '--']
    # ## APPEND STRING ARGS TO COMMAND LIST
    for arg in str_args:
        c.append(arg)
    # ## RETURN COMMAND LIST
    return c
Function to submit the command to system and return the job name:
def call_cae(cwd, aba_dir, script, *args):
    # SET CAE TO TRUE
    cae = True
    # CHANGE TO OUTPUT DIRECTORY
    os.chdir(aba_dir)
    # ## BUILD COMMAND STRING
    command = build_command_string(script, cae, *args)
    # ## RUN SUBPROCESS COMMAND
    p1 = subprocess.run(command,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE,
                        text=True)
    # ## RETURN TO ORIGINAL WORKING DIRECTORY (MAIN FILE)
    os.chdir(cwd)
    if p1.stdout == None or p1.stdout == '':
        # ## RETURN JOB NAME
        job_name = p1.stderr[p1.stderr.rfind('\n'):].strip('\n')
    else:
        # ## RETURN JOB NAME
        job_name = p1.stderr[p1.stderr.rfind('\n'):].strip('\n')
        # ## PRINT STATEMENT TO CHECK CONVERGENCE
        print('This job exited with error')
    return job_name
Inside my code to build and execute the abaqus model I specify:
sim_num = int(sys.argv[-3]) #var1
E = float(sys.argv[-2]) #var2
nu = float(sys.argv[-1]) #var3
and at the end of my script
sys.stderr.write(cJob.name) #
This is working for me in PyCharm with Py3 in the main file and Abaqus Python 2.7 in the 'build and execute model.py' file. Hopefully this is helpful to others too. Now on to creating commands for the ODB output!
I would like to log all the output of a Python script. I tried:
import sys
log = []
class writer(object):
    def write(self, data):
        log.append(data)

sys.stdout = writer()
sys.stderr = writer()
Now, if I "print 'something'" it gets logged. But if I make, for instance, a syntax error, say "print 'something#", it won't get logged; it goes to the console instead.
How do I also capture the errors from the Python interpreter?
I saw a possible solution here:
http://www.velocityreviews.com/forums/showpost.php?p=1868822&postcount=3
but the second example logs into /dev/null, which is not what I want. I would like to log it into a list like in my example above, or a StringIO, or such...
Also, preferably I don't want to create a subprocess (and read its stdout and stderr in a separate thread).
I have a piece of software I wrote for work that captures stderr to a file like so:
import sys
sys.stderr = open('C:\\err.txt', 'w')
so it's definitely possible.
I believe your problem is that you are creating two instances of writer.
Maybe something more like:
import sys
class writer(object):
    log = []

    def write(self, data):
        self.log.append(data)

logger = writer()
sys.stdout = logger
sys.stderr = logger
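A quick usage check (my addition, just to illustrate that both streams now share one log):

print('something')                              # routed through logger.write
sys.stderr.write('an error message\n')          # same logger instance
sys.__stdout__.write(repr(logger.log) + '\n')   # inspect via the real stdout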
You can't do anything in Python code that can capture errors during the compilation of that same code. How could it? If the compiler can't finish compiling the code, it won't run the code, so your redirection hasn't even taken effect yet.
That's where your (undesired) subprocess comes in. You can write Python code that redirects the stdout, then invokes the Python interpreter to compile some other piece of code.
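A minimal sketch of that subprocess approach, assuming the code to check lives in a separate file (the name script_with_error.py is just a placeholder):

import subprocess
import sys

proc = subprocess.run(
    [sys.executable, 'script_with_error.py'],  # hypothetical file name
    capture_output=True,                       # capture both stdout and stderr
    text=True,
)
log = [proc.stdout, proc.stderr]               # syntax errors end up in stderr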
I can't think of an easy way. The Python process's standard error lives at a lower level than a Python file object (C vs. Python).
You could wrap the Python script in a second Python script and use subprocess.Popen. It's also possible you could pull off some magic like this in a single script:
import os
import subprocess
import sys
cat = subprocess.Popen("/bin/cat", stdin=subprocess.PIPE, stdout=subprocess.PIPE)
os.close(sys.stderr.fileno())
os.dup2(cat.stdin.fileno(), sys.stderr.fileno())
And then use select.poll() to check cat.stdout regularly to find output.
Yes, that seems to work.
The problem I foresee is that most of the time, something printed to stderr by Python indicates it's about to exit. The more usual way to handle this would be via exceptions.
---------Edit
Somehow I missed the os.pipe() function.
import os, sys
r, w = os.pipe()
os.close(sys.stderr.fileno())
os.dup2(w, sys.stderr.fileno())
Then read from r
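Continuing that idea, a small self-contained sketch of writing into the pipe and reading the captured bytes back (the flush call is my addition, to make sure the data is in the pipe before reading):

import os
import sys

r, w = os.pipe()
os.dup2(w, sys.stderr.fileno())          # as in the snippet above

sys.stderr.write("captured message\n")   # goes into the pipe, not the console
sys.stderr.flush()
print(os.read(r, 4096).decode())         # read the raw bytes back from r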
To route the output and errors on Windows, you can use the following command outside of your Python file:
python a.py 1> a.out 2>&1
Source: https://support.microsoft.com/en-us/help/110930/redirecting-error-messages-from-command-prompt-stderr-stdout
Since Python 3.5 you can use contextlib.redirect_stderr; its counterpart redirect_stdout (available since 3.4) works the same way:
from contextlib import redirect_stdout

with open('help.txt', 'w') as f:
    with redirect_stdout(f):
        help(pow)
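For the original goal of collecting output in memory rather than in a file, a sketch with redirect_stderr and io.StringIO could look like this:

import io
import sys
from contextlib import redirect_stderr

buf = io.StringIO()
with redirect_stderr(buf):
    print('an error message', file=sys.stderr)
log = buf.getvalue()   # 'an error message\n'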
For such a request, it is usually much easier to do it in the OS instead of in Python.
For example, if you're going to run "a.py" and record all the messages it generates into a file "a.out", it would just be
python a.py > a.out 2>&1
The first part, > a.out, redirects stdout to a file called a.out, and 2>&1 then redirects stderr to the same place as stdout (0: stdin, 1: stdout, 2: stderr). Note that the order matters: with 2>&1 placed before the file redirection, stderr would still go to the console.
And as far as I know, this command works on Windows, Linux, or macOS! For other file redirection techniques, just search for the OS plus "file redirection".
I found this approach to redirecting stderr particularly helpful. Essentially, it is necessary to understand whether your output is on stdout or stderr. What's the difference? Stdout is the normal output produced by a shell command (think of an 'ls' listing), while stderr is any error output.
It may be that you want to take a shell command's output and redirect it to a log file only if it is normal output. Using ls as an example here, with the all-files flag:
# Imports
import sys
import subprocess
# Open file
log = open("output.txt", "w+")
# Declare command
cmd = 'ls -a'
# Run shell command piping to stdout
result = subprocess.run(cmd, stdout=subprocess.PIPE, shell=True)
# Assuming utf-8 encoding
txt = result.stdout.decode('utf-8')
# Write and close file
log.write(txt)
log.close()
If you wanted to make this an error log, you could do the same with stderr; it's exactly the same code with stderr in place of stdout. This pipes any error messages that would be sent to the console into the log. Doing so actually keeps them from flooding your terminal window as well!
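A sketch of that stderr variant (same structure as the snippet above; the deliberately bad flag is just there to provoke an error message):

import subprocess

# Open the error log
err_log = open("errors.txt", "w+")

# A command that will complain on stderr (the bad option is intentional)
cmd = 'ls --no-such-flag'

# Capture only stderr this time
result = subprocess.run(cmd, stderr=subprocess.PIPE, shell=True)
err_log.write(result.stderr.decode('utf-8'))
err_log.close()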
Saw this was a post from a while ago, but figured this could save someone some time :)
import sys
import tkinter

# ********************************************
def mklistenconsswitch(*printf: callable) -> callable:
    # Builds a switch() that toggles sys.stdout/sys.stderr between the
    # original writers and wrappers that also feed the extra callbacks.
    def wrapper(*fcs: callable) -> callable:
        def newf(data):
            [prf(data) for prf in fcs]
        return newf
    stdoutw, stderrw = sys.stdout.write, sys.stderr.write
    funcs = [(wrapper(sys.stdout.write, *printf), wrapper(sys.stderr.write, *printf)), (stdoutw, stderrw)]
    def switch():
        sys.stdout.write, sys.stderr.write = dummy = funcs[0]
        funcs[0] = funcs[1]
        funcs[1] = dummy
    return switch
# ********************************************

def datasupplier():
    i = 5.5
    while i > 0:
        yield i
        i -= .5

def testloop():
    print(supplier.__next__())
    svvitch()
    root.after(500, testloop)

root = tkinter.Tk()
cons = tkinter.Text(root)
cons.pack(fill='both', expand=True)
supplier = datasupplier()
svvitch = mklistenconsswitch(lambda text: cons.insert('end', text))
testloop()
root.mainloop()
Python will not execute your code if there is an error. But you can import your script from another script and catch exceptions. Example:
Script.py
print 'something#
FinalScript.py
from importlib.machinery import SourceFileLoader

try:
    SourceFileLoader("main", "<SCRIPT PATH>").load_module()
except Exception as e:
    # Handle the exception here
    print(e)
To add to Ned's answer, it is difficult to capture the errors on the fly during compilation.
You can write several print statements in your script and redirect stdout to a file; it will stop writing to the file when the error occurs. To debug the code you could check the last logged output and then check your script after that point.
Something like this:
# Add to the beginning of the script execution (eg: if __name__ == "__main__":).
import os
import sys
from datetime import datetime

dt = datetime.now()
script_dir = os.path.dirname(os.path.abspath(__file__))  # gets the path of the script
stdout_file = script_dir + r'\logs\log' + ('').join(str(dt.date()).split("-")) + r'.log'
sys.stdout = open(stdout_file, 'w')
This will create a log file and stream the print statements to it.
Note: watch out for escape characters in your file path when concatenating with script_dir in the second-to-last line of the code. You might want something like a raw string. You can check this thread for that.
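For instance (a made-up path, just to illustrate the raw-string point):

log_dir = 'C:\\projects\\logs'   # escaped backslashes
log_dir = r'C:\projects\logs'    # raw string, same value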