I am learning to use Electron with Python via python-shell, and I have the following simple Python script:
import sys, json
# simple JSON echo script
for line in sys.stdin:
    print(json.dumps(json.loads(line)))
and in my main.js:
let {PythonShell} = require('python-shell')
let pyshell = new PythonShell('/home/bassel/electron_app/pyapp/name.py', {mode : 'json'});
pyshell.send({name:"mark"})
pyshell.on('message', function (message) {
// received a message sent from the Python script (a simple "print" statement)
console.log("hi");
});
but the "hi" is not getting printed. What is wrong?
This problem can also occur when trying to suppress the newline from the end of print output. See "Why doesn't print output show up immediately in the terminal when there is no newline at the end?".
Output is often buffered in order to preserve system resources. This means that in this case, the system holds back the Python output until there's enough to release together.
To overcome this, you can explicitly "flush" the output:
import sys, json
# simple JSON echo script
for line in sys.stdin:
    print(json.dumps(json.loads(line)))
    sys.stdout.flush() # <--- added line to flush output
If you're using Python 3.3 or higher, you may alternatively use:
import sys, json
# simple JSON echo script
for line in sys.stdin:
    print(json.dumps(json.loads(line)), flush=True) # <--- added keyword
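If you'd rather not sprinkle flushes through the script, Python 3.7+ can also reconfigure the stream once at startup; a minimal sketch (guarded, because sys.stdout may have been replaced by an object without reconfigure()):

```python
import sys

# Python 3.7+: ask the text stream, once at startup, to flush on every newline.
if hasattr(sys.stdout, "reconfigure"):
    sys.stdout.reconfigure(line_buffering=True)

print("this line is flushed as soon as it ends")
```

After this, every print() that ends in a newline reaches the pipe immediately, which is what python-shell's json mode needs.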
This is my code. How can I hide the output of the search in subprocess? I want my code to show just the information of my system.
import subprocess
Id = subprocess.check_output(['systeminfo']).decode('utf-8').split('\n')
new = []
for item in Id:
    new.append(str(item.split("\r")[:-1]))
for i in new:
    print(i[2:-2])
The progress messages aren't part of the output; they're fed to standard error, so they won't show up in your result. But to silence them and keep them from being printed when the script executes, just redirect standard error to subprocess.DEVNULL:
import subprocess
result = subprocess.check_output(['systeminfo'], stderr=subprocess.DEVNULL).decode('utf-8', errors='ignore')
As a side note, result contains Windows end-of-line characters; you can split on them simply with:
result.splitlines()
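As a sketch of where that leads (the sample string below stands in for a real `systeminfo` call, so it runs anywhere), the usual `Key: Value` layout splits cleanly into a dict:

```python
# The sample stands in for subprocess.check_output(['systeminfo'], ...).decode(...)
sample = "Host Name:        MYPC\r\nOS Name:          Microsoft Windows 10 Pro\r\n"

info = {}
for line in sample.splitlines():      # splits on \r\n and \n alike
    if ':' in line:
        key, _, value = line.partition(':')
        info[key.strip()] = value.strip()

print(info['Host Name'])   # -> MYPC
```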
I have a large Python script that prints a lot of output to my terminal console. The prints don't happen all at once: some print statements print one blob of output together, then another part of the code prints some more, and so on for as long as the main loop runs.
The output is what I want, but it all goes to the console, since that is where the main script runs.
It would be very helpful if, along with printing to the console, I could also capture all of the console output, in the same format, to a text file for retention.
Since the print statements occur in different parts of the whole script, I'm not sure how to retain the entire console output in the same format in a final text file.
If you want to do the redirection within the Python script, setting sys.stdout to a file object does the trick:
import sys
sys.stdout = open('file', 'w')
print('test')
A far more common method is to use shell redirection when executing (same on Windows and Linux):
$ python foo.py > file
Check this thread: "Redirect stdout to a file in Python?"
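Since Python 3.4 there is also contextlib.redirect_stdout, which scopes the redirection to a with-block so stdout is restored automatically afterwards; a minimal sketch (the file name is arbitrary):

```python
import contextlib

# Only prints inside the inner with-block go to the file.
with open('redirected.txt', 'w') as f:
    with contextlib.redirect_stdout(f):
        print('test')              # lands in redirected.txt
print('back on the console')       # stdout is restored here

with open('redirected.txt') as f:
    content = f.read()
```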
A custom print function that writes to both console and file; replace every print call with printing in the code.
outputFile = open('outputfile.log', 'w')
def printing(text):
    print(text)
    if outputFile:
        outputFile.write(str(text) + '\n')  # print() adds a newline; write() does not
You can also pass the file argument to the print() function. Note that it expects an open file object, not a file name:
file_obj = open('output.txt', 'w')
print('whatever', file=file_obj)
I would rather go ahead with bash and use the tee command. It redirects the output to a file too:
python -u my.py | tee my_file.txt
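If you want the output on the console and in a file at the same time from within Python itself, a small tee-style wrapper over sys.stdout works too. A sketch; the Tee class and the log file name are my own, not a library API:

```python
import sys

class Tee:
    """Duplicate every write to several underlying streams (sketch)."""
    def __init__(self, *streams):
        self.streams = streams
    def write(self, data):
        for s in self.streams:
            s.write(data)
    def flush(self):
        for s in self.streams:
            s.flush()

log = open('console.log', 'w')
sys.stdout = Tee(sys.__stdout__, log)   # every print now goes to both
print('hello')                          # shown on the console AND written to the file
sys.stdout = sys.__stdout__             # restore before closing the log file
log.close()
```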
If your Python script is file.py, then use:
python3 file.py > output.txt
Or
python file.py > output.txt
depending on your Python version. The > operator redirects everything the program writes to stdout into the file output.txt.
EDIT :
python3 file.py > output.txt;cat output.txt
The above line can be used to print the file output.txt after the program execution.
EDIT2 :
Another possible option is to use a custom print function:
f = open('output.txt', 'w')
def custom_print(*s, e='\n'):
    for i in s[:-1]:
        print(i, end=' ')
    print(s[-1], end=e)
    f.write(' '.join(str(i) for i in s) + e)
#Your code
#
f.close()
I have a python program, which is supposed to calculate changes based on a value written in a temporary file (eg. "12345\n"). It is always an integer.
I have tried different methods to read the file, but Python wasn't able to read it. So I had the idea to execute a shell command ("cat") that returns the content. When I execute it in the shell it works fine, but from Python the feedback I get is empty. Then I tried writing a bash script and then a PHP script that would read the file and return the value. I called them from Python via the shell, and the feedback was empty as well.
I wondered whether this was a general problem in Python, so I made my scripts return the content of other temporary files, which worked fine.
Inside my scripts I was able to do calculations with the value, and in the shell the output is exactly as expected, but not when called via Python. I also noticed that I don't get the value with my extra scripts when they are called by Python (I tried to write it into another file; the file was updated but empty).
The file I am trying to read is in the /tmp directory and is written to several times per second by another script.
I am looking for a solution (open for new ideas) in which I end up having the value of the file in a python variable.
Thanks for the help
Here are my programs:
python:
# python script
import subprocess
stdout = subprocess.Popen(["php /path/to/my/script.php"], shell = True, stdout = subprocess.PIPE).communicate()[0].decode("utf-8")
# other things I tried
#with open("/tmp/value.txt", "r") as file:
# stdout = file.readline() # output = "--"
#stdout = os.popen("cat /tmp/value.txt").read() # output = "--"
#stdout = subprocess.check_output(["php /path/to/my/script.php"], shell = True, stdout = subprocess.PIPE).decode("utf-8") # output = "--"
print(str("-" + stdout + "-")) # output = "--"
php:
# php script
$valueFile = fopen("/tmp/value.txt", "r");
$value = trim(fgets($valueFile), "\n");
fclose($valueFile);
echo $value; # output in the shell is the value of $value
Edit, for context: my Python script is started by another Python script, which listens for commands from an Apache server on the Pi. The value I want to read comes from a "1-wire" device that listens for S0 signals.
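One likely explanation, given that reading other temporary files works fine: the writer truncates and rewrites /tmp/value.txt several times per second, so a read can land in the instant the file is momentarily empty. A sketch that retries until it gets a parsable integer (the path, retry count, and delay are assumptions):

```python
import time

def read_value(path='/tmp/value.txt', retries=10, delay=0.05):
    # Retry: the writer may have just truncated the file, leaving it empty.
    for _ in range(retries):
        try:
            with open(path) as f:
                text = f.read().strip()
            if text:
                return int(text)
        except (OSError, ValueError):
            pass
        time.sleep(delay)
    raise RuntimeError('no integer value could be read from %s' % path)
```

If this is the cause, a more robust fix is for the writer to write to a temporary file and os.rename() it into place, which is atomic on the same filesystem.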
My question is pretty similar to this one.
My perl script is invoked by an incoming mail via a procmail recipe. The perl script is being executed, and there are no syntax errors (or at least I get no warnings in the procmail logs and perl -c sounds OK).
My python script returns a tuple of five strings based on the input string.
And I want to use them after calling this from the Perl script. I am trying (code snippet)
my ($var1, $var2, $var3, $var4, $var5) = `./parse_input.py $input`;
print "var1 is $var1";
While parse_input returns a tuple as per the below...
return (var1, var2, var3, var4, var5)
I've tested running parse_input.py on my input strings directly, and that works fine.
I am successfully redirecting print statements in the Perl script to a logging file. I.e. I see other print statements fine. But the "var1 is $var1" line just comes out as "var1 is".
Maybe I am being too ambitious, though, and should work on splitting it into five functions that each return one value?
Edit: Here's the relevant part of my python script.
#! <path>/python3
import sys
import re
import subprocess
input_to_parse = sys.argv[1]
def main():
    blah blah blah
    print(var1)
    print(var2)
    print(var3)
    print(var4)
    print(var5)
    return (var1, var2, var3, var4, var5)

if __name__ == "__main__":
    sys.exit(main())
Instead of using STDOUT for transferring variable information, you're best using a mechanism that is built for this type of purpose. Here's an example using JSON. We dump the data from the Python script as a JSON string into a file, and after the Python script has completed, we read in the JSON string from the file and proceed.
Python script:
import json
data = (1, 2, 3, 40, 50)
with open('data.json', 'w') as jsonfile:
    json.dump(data, jsonfile)
Perl script:
use warnings;
use strict;
use JSON;
my $file = 'data.json';
my $py_script = 'python.py';
# call the python script
system('python3', $py_script);
# read in json from Python
my $json;
{
    local $/;
    open my $fh, '<', $file or die $!;
    $json = <$fh>;
    close $fh;
}
# decode the Python tuple into a Perl array (reference) from the JSON string
my $array = decode_json $json;
for my $elem (@$array){
    print "$elem\n";
}
Output:
1
2
3
40
50
Note that I'm using a file on disk for the transfer in this example, but you can use any mechanism to transfer the string (shared memory, database, etc).
If you really do want to use the command line to pass the information, that's still possible, but I'd still use JSON. If the output of the Python script is something unexpected, perl will know it:
Python:
import json
print(json.dumps((1, 2, 3, 40, 50)))
Perl:
use warnings;
use strict;
use JSON;
my $json = `python3 python.py`;
my $array = decode_json $json;
for my $elem (@$array){
    print "$elem\n";
}
I have a script which executes a command using os.popen4. The problem is that sometimes the command being executed requires user input ("y" or "n"). I am reading stdout/stderr and printing it, but it seems the question from the command doesn't get printed, and it hangs. To make it work, I had to write "n" to stdin blindly. Can someone please guide me on how to handle this?
Code not working:
(f_p_stdin, f_p_stdout_stderr) = os.popen4(cmd_exec,"t")
cmd_out = f_p_stdout_stderr.readlines()
print cmd_out
f_p_stdin.write("n")
f_p_stdin.close()
f_p_stdout_stderr.close()
Working Code:
(f_p_stdin, f_p_stdout_stderr) = os.popen4(cmd_exec,"t")
cmd_out = f_p_stdout_stderr.readlines()
f_p_stdin.write("n")
f_p_stdin.close()
print cmd_out
f_p_stdout_stderr.close()
NOTE: I am aware that os.popen4 is deprecated and the subprocess module should be used, but right now I don't know how to use it. So I'll appreciate it if someone helps me handle this using os.popen4. I want to capture the question, handle the input from the user, and pass it on.
readlines() returns a list containing all the lines of data in the file. If reading from a process, as in this case, there is a good chance it does not send a newline and/or flush the output. You should read characters from the input and process them to see whether the question was posed.
It would help to know what cmd_exec looks like, so others can try and emulate what you tried.
Update:
I wrote a uncheckout command in Python:
#! /usr/bin/env python
# coding: utf-8
import sys
print 'Uncheckout of {} is irreversible'.format(sys.argv[1])
print 'Do you want to proceed? [y/N]',
sys.stdout.flush()
x = raw_input()
if x == 'y':
    print sys.argv[1], "no longer checked out"
else:
    print sys.argv[1], "still checked out"
I put the prompt string on purpose not as argument to raw_input, to be able to do the flush() explicitly.
Neither of your code snippets works with that (assuming cmd_exec to be ['./uncheckout', 'abc.txt'] or './uncheckout abc.txt'; popen4() uses the shell in the latter case to start the program).
Only when I move the readlines() until after the write() and close() will the command continue.
That makes sense to me, as the close() flushes the output. You are writing in text mode, which normally buffers until end-of-line, and there is no newline in your .write('n').
To be able to check what the prompt is and react to it, the following works with the above uncheckout:
#! /usr/bin/env python
# coding: utf-8
import os
import sys
cmd_exec = ['./uncheckout', 'abc.txt']
(f_p_stdin, f_p_stdout_stderr) = os.popen4(cmd_exec, "t")

line = ''
while True:
    x = f_p_stdout_stderr.read(1)
    if not x:
        break
    sys.stdout.write(x)
    sys.stdout.flush()
    if x == '\n':
        line = ''
    else:
        line += x
    if line.endswith('[y/N]'):
        f_p_stdin.write("n\n")
        f_p_stdin.flush()
        sys.stdout.write('\n')
Maybe you can work backwards from that to make something that works for you. Make sure to keep flushes at appropriate places.
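For reference, since os.popen4 is gone in Python 3, the same character-by-character loop can be sketched with subprocess.Popen. The inline child script below is my own stand-in for the interactive uncheckout command above, so the example is self-contained:

```python
import subprocess
import sys

# Stand-in for an interactive command: prints a prompt without a newline,
# flushes, and waits for an answer on stdin.
child_code = """\
import sys
sys.stdout.write('Do you want to proceed? [y/N]')
sys.stdout.flush()
answer = sys.stdin.readline().strip()
sys.stdout.write('\\nanswered: ' + answer + '\\n')
"""

proc = subprocess.Popen(
    [sys.executable, '-c', child_code],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT, text=True)

line = ''
transcript = []
while True:
    x = proc.stdout.read(1)              # one character at a time, as above
    if not x:                            # EOF: the child exited
        break
    transcript.append(x)
    line = '' if x == '\n' else line + x
    if line.endswith('[y/N]'):           # prompt detected: answer it
        proc.stdin.write('n\n')
        proc.stdin.flush()
        line = ''
proc.wait()
output = ''.join(transcript)
```

The same caveat applies: keep the flushes, both in the child (after the prompt) and in the parent (after writing the answer), or the two processes deadlock waiting for each other.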