I've written a Python wrapper (pyprog) to run a program (someprogram), something like this:
...do some setup stuff in Python...
print("run [y=yes]")
CHOICE = input()
...do some setup stuff in Python...
if CHOICE == "y":
    status = subprocess.call(["someprogram"])
    sys.exit(status)
A user wants to use a shell script to run the program and feed it input using a here document like this:
#!/bin/sh
pyprog > pyprog.log << EOF
y
file1
file2
EOF
Is there a way to spawn the subprocess so that the here document will work (the "y" gets consumed by the Python input(), and the "file1" and "file2" continue along as stdin to someprogram)? Right now, the Python input() takes the "y", but the rest of it disappears.
You need to connect sys.stdin to the stdin of the call:

import sys

status = subprocess.call(["someprogram"], stdin=sys.stdin)
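One caveat worth knowing: Python's buffered input() may read ahead past the first line, in which case the child can still miss the rest of the here document. A hedged sketch that sidesteps this by reading the wrapper's own line one byte at a time (read_first_line and run_prog are illustrative names, not from the original program):

```python
import os
import subprocess
import sys

def read_first_line():
    # Read stdin one byte at a time so the wrapper never buffers
    # bytes that are meant for the child process.
    chunks = []
    while True:
        b = os.read(0, 1)  # fd 0 is stdin
        if not b or b == b"\n":
            break
        chunks.append(b)
    return b"".join(chunks).decode()

def run_prog(cmd):
    choice = read_first_line()  # consumes only "y\n"
    if choice == "y":
        # The child inherits the same stdin, now positioned at "file1".
        return subprocess.call(cmd)
    return 0

if __name__ == "__main__":
    sys.exit(run_prog(["someprogram"]))
```

With this, `pyprog << EOF` behaves as the user expects: the wrapper takes "y" and the child sees "file1" and "file2".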
I've used something like this a few times before: https://gist.github.com/887225
Basically it's a Python script that accepts a number of command line parameters, performs some transformation based on the input, then uses os.system() to invoke a shell command.
In that example I'm calling Java, passing in a classpath, then running the ProgramName.jar program.
I am working on a program that needs to call another Python script and terminate the execution of the current file. I tried doing this with the os.close() function, as follows:

def call_otherfile(self):
    os.system("python file2.py")  # Execute new script
    os.close()  # Close current script

Using the above code I am able to open the second file, but I am unable to close the current one. I know I'm making a silly mistake but can't figure out what it is.
To do this you will need to spawn a subprocess directly. This can either be done with a more low-level fork and exec model, as is traditional in Unix, or with a higher-level API like subprocess.
import subprocess
import sys

def spawn_program_and_die(program, exit_code=0):
    """
    Start an external program and exit the script
    with the specified return code.

    Takes the parameter program, which is a list
    that corresponds to the argv of your command.
    """
    # Start the external program
    subprocess.Popen(program)
    # We have started the program, and can now exit this interpreter
    sys.exit(exit_code)

spawn_program_and_die(['python', 'path/to/my/script.py'])
# Or, as in OP's example
spawn_program_and_die(['python', 'file2.py'])
Also, a note on your original code: os.close corresponds to the Unix syscall close(), which tells the kernel that your program no longer needs a given file descriptor (and it takes that descriptor as an argument). It is not meant to be used to exit a program.

If you don't want to define your own function, you can always call subprocess.Popen directly, as in Popen(['python', 'file2.py']).
Use the subprocess module, which is the suggested way to do this kind of thing (executing a new script as a separate process). In particular, look at Popen for starting the new process; to terminate the current program, use sys.exit().
It's very simple: use os.startfile and after that call exit() or sys.exit(). Note that os.startfile is Windows-only.

# file1.py
os.startfile("file2.py")
exit()
I have a python script that is run by the crontab and can be executed manually.
This script takes as an input:
either the value from -i argument,
or whatever comes from stdin through a pipe.
The code should be something like this:

if ???:  # test to check whether there is data on stdin
    print("I have data from stdin!")
else:
    print("I have no data from stdin!")
The script is executed as follows:
$ ./myscript.py -i myInput
> I have no data from stdin!
$ cat myInput | ./myscript.py
> I have data from stdin!
I have tried several methods that work fine when executed from the console, but don't work as expected when executed by the crontab: the script always considers there IS data from stdin.
First test:
if not sys.stdin.isatty():
    print("I have data from stdin!")
else:
    print("I have no data from stdin!")
I think this one does not work because there is no tty in crontab mode, so the condition is always true.
Second test:
import os
import stat

mode = os.fstat(sys.stdin.fileno()).st_mode
if stat.S_ISFIFO(mode):
    print("I have data from stdin!")
else:
    print("I have no data from stdin!")
Third test:
import select

r, w, x = select.select([sys.stdin], [], [], 0)
if r:
    print("I have data from stdin!")
else:
    print("I have no data from stdin!")
Is there a correct way to make it work for both console and crontab mode?
As Nullman already wrote in a comment it is better to check your command line options to decide if you want to try stdin or not.
Short summary: You cannot safely guess if you should read data from stdin by checking stdin. You should only rely on checking the command line to find out what is expected.
For example cat will use stdin only if no input file was specified as a command line argument or if the special file name - was specified.
All the tests in your examples will work in certain conditions only and will not work in other cases.
Checking whether stdin is a TTY does not help. It only tells you whether stdin is connected to a terminal. Your script can get input from a terminal if the user types something, or from a pseudo-terminal connected to something else; it can also get input on stdin when it is not connected to a terminal at all but to something else (pipe, file, socket, ...).
Checking if stdin is a FIFO is also wrong because you can read data both from a pipe/fifo or from something else (file, socket, terminal,...).
Using select will not tell you whether there is any data, only whether a read would not block. It also reports readiness at EOF, not only when data is available; to distinguish the two cases you would have to check the result of an actual read from stdin. And with a zero timeout it may report that a read would block simply because the data has not arrived yet.
There are more ways to use the script:
Instead of cat myInput | ./myscript.py you could also use ./myscript.py < myInput. In the first case stdin will be a pipe, in the second case a file.
Or imagine ./myscript.py < /dev/null. This will return EOF condition on the first read.
Or ./myscript.py <&- which will close stdin leading to an error when you try to read from it.
If stdin is connected to a terminal a read might block if the user does not enter anything. This would happen if you call ./myscript.py. You could use select to find out if data is available now, but you cannot find out if the user will enter data later. So your script does not know the intention of the user.
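Applying that advice to the script in the question, the decision comes purely from the arguments, never from probing stdin. A minimal sketch (read_input is an illustrative helper name):

```python
import sys

def read_input(argv):
    # Decide from the command line, not by probing stdin: if -i was
    # given, use its value; otherwise the caller intends to pipe data in.
    if "-i" in argv:
        return argv[argv.index("-i") + 1]
    return sys.stdin.read()

if __name__ == "__main__":
    print(read_input(sys.argv))
```

This behaves identically under cron and in a console, because it never has to guess what stdin is connected to.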
How do you make a python program automatically restart itself? So let's say there is a really simple program like:
var = input("Hi! I like cheese! Do you like cheese?").lower()
if var == "yes":
    print("That's awesome!")
Now, in a Python shell, you would have to press the Run button and then 'Run Module (F5)', or just the F5 key on your keyboard. That is the first time you run it. When the program ends, you go back to your Cheese.py file and press F5 again to re-run the program.
Everybody with me here?
OK, so my question is, how do you make the program restart itself automatically without you having to manually do it?
It depends on what you mean by "restart itself." If you just want to continuously execute the same code, you can wrap it in a function, then call it from within a while True loop, such as:
>>> def like_cheese():
... var = input("Hi! I like cheese! Do you like cheese?").lower() # Corrected the call to `.lower`.
... if var == "yes":
... print("That's awesome!")
...
>>> while True:
... like_cheese()
...
Hi! I like cheese! Do you like cheese?yes
That's awesome!
Hi! I like cheese! Do you like cheese?yes
That's awesome!
If you want to actually restart the script you can execute the script again, replacing the current process with the new one by doing the following:
#!/usr/bin/env python3
import os
import sys

def like_cheese():
    var = input("Hi! I like cheese! Do you like cheese?").lower()
    if var == "yes":
        print("That's awesome!")

if __name__ == '__main__':
    like_cheese()
    # Run a new iteration of the current script, passing along any
    # command line arguments from the current iteration.
    os.execv(__file__, sys.argv)
This will continuously re-run the script, providing the command line arguments from the current version to the new version. A more detailed discussion of this method can be found in the post "Restarting a Python Script Within Itself" by Petr Zemek.
One item that this article notes is:
If you use the solution above, please bear in mind that the exec*()
functions cause the current process to be replaced immediately,
without flushing opened file objects. Therefore, if you have any
opened files at the time of restarting the script, you should flush
them using f.flush() or os.fsync(fd) before calling an exec*()
function.
or you can try
$ chmod a+x daemon.py
Then, you can run the script via
$ ./daemon.py
In such a situation, to restart the script, use the following code:
os.execv(__file__, sys.argv)
Otherwise, when you run the script via
$ python daemon.py
use this code:
os.execv(sys.executable, ['python'] + sys.argv)
Either way, do not forget to import the os and sys modules.
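If you don't want to special-case the two invocation styles, re-execing via sys.executable covers both, since it always re-runs the same interpreter on the same script. A sketch (restart is an illustrative name):

```python
import os
import sys

def restart():
    # Re-run the current script with the same interpreter and the same
    # arguments. Works whether the script was started as
    # "python daemon.py" or as "./daemon.py".
    os.execv(sys.executable, [sys.executable] + sys.argv)
```

Note that os.execv never returns: the current process image is replaced on the spot.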
I use terminal on my Mac to re-start some of my python scripts with the function below.
import subprocess

def run_again(cmd):
    subprocess.call(["bash", "-c", "source ~/.profile; " + cmd])
Note: Don't forget the space character after "profile;" or the function may fail silently!
Then at the bottom of my script to be re-started:
if some_condition:
    run_again("python my_script.py %s" % my_new_arguments)
For the original question about the cheese script:
if var != 'yes':
    run_again("python my_cheese_script.py")
You can just use a shell script like test.sh; make sure you run chmod +x test.sh in your Linux terminal first.
As for the code:
#!/bin/bash
while :
do
sleep 5
gnome-terminal --wait -- sh -c "python3 myscript.py 'myarg1'"
done
You can wrap something in while True: to make it execute repeatedly, as True will always evaluate to True, like this:
while True:
    var = input("Hi! I like cheese! Do you like cheese?").lower()  # <-- You had missed parentheses here
    if var == "yes":
        print("That's awesome!")
There's another issue with your code though; you haven't called lower by putting parentheses after it.
I'm trying to write a little tool that will let me pipe command output to the clipboard. I've read through multiple answers on Stack Overflow, but they didn't work for me, because they didn't include piping, or because they didn't use a function, or they just threw errors (or maybe I just messed up). I threw up my hands with PowerShell and decided to go with Python.
I created a Python script called copyToClipboard.py:
import sys
from Tkinter import Tk

if sys.stdin.isatty() and len(sys.argv) == 1:
    # We're checking for input on stdin and a first argument
    sys.exit()

tk = Tk()
tk.withdraw()
tk.clipboard_clear()

if not sys.stdin.isatty():
    # We have data on stdin
    while 1:
        try:
            line = sys.stdin.readline()
        except KeyboardInterrupt:
            break
        if not line:
            break
        tk.clipboard_append(line)
elif len(sys.argv) > 1:
    for line in sys.argv[1]:
        tk.clipboard_append(line)

tk.destroy()
(I haven't fully tested the argv[1] part, so that might be shaky. I'm mainly interested in reading from stdin, so the important part is sys.stdin.)
This works great! When I'm in the directory that contains the script, I can execute something like:
ls | python copyToClipboard.py
And the contents of ls magically appear on my clipboard. That's exactly what I want.
The challenge is wrapping this in a PowerShell function that will take a piped input and simply pass the input to the Python script. My goal is to be able to do ls | Out-Clipboard, so I created something like:
function Out-ClipBoard() {
    Param(
        [Parameter(ValueFromPipeline=$true)]
        [string] $text
    )
    pushd
    cd \My\Profile\PythonScripts
    $text | python copyToClipboard.py
    popd
}
But that doesn't work. Only one line of $text makes its way to the Python script.
How can I structure the wrapper for my PowerShell script such that whatever it receives as stdin simply gets passed to the Python script as stdin?
First, in PowerShell, a multi-line text is an array, so you need a [String[]] parameter. To solve your problem, try using the process block:
function Out-ClipBoard() {
    Param(
        [Parameter(ValueFromPipeline=$true)]
        [String[]] $Text
    )
    Begin
    {
        # Runs once to initialize the function
        pushd
        cd \My\Profile\PythonScripts
        $output = @()
    }
    Process
    {
        # Saves input from the pipeline.
        # Runs multiple times if pipelined, or once if sent via parameter
        $output += $Text
    }
    End
    {
        # Joins the array into a single string and pipes it. Runs once
        $output -join "`r`n" | python copyToClipboard.py
        popd
    }
}
I don't have Python here myself, so I can't test it. When you need to pass multiple items (an array) through the pipeline, you need the Process block for PowerShell to handle them. More about the Process block and advanced functions can be found on TechNet.
I'm using os.system to call a Python script. For example:

os.system("call jython script.py")
In the script I'm calling, the following command is present:
x = raw_input("Waiting for input")
If I run script.py from the command line I can input data no problem, if I run it via the automated approach I get an EOFError. I've read in the past that this happens because the system expects a computer to be running it and therefore could never receive input data in this way.
So the question is how can I get python to wait for user input while being run in an automated way?
The problem is the way you run your child script. Since you use os.system() the script's input channel is closed immediately and the raw_input() prompt hits an EOF (end of file). And even if that didn't happen, you wouldn't have a way to actually send some input text to the child as I assume you'd want given that you are using raw_input().
You should use the subprocess module instead.
import subprocess
from subprocess import PIPE
p = subprocess.Popen(["jython", "script.py"], stdin=PIPE, stdout=PIPE)
print p.communicate("My input")
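The snippet above is Python 2. Under Python 3 the same idea is usually spelled with subprocess.run; this sketch substitutes a tiny inline child for jython so it can run anywhere (the stand-in command is illustrative, not from the question; swap in ["jython", "script.py"] for the real case):

```python
import subprocess
import sys

# Stand-in child: a one-liner that reads a line from stdin and echoes it.
child = [sys.executable, "-c", "print('got: ' + input())"]

# input= feeds the child's stdin; capture_output collects its stdout.
result = subprocess.run(child, input="My input\n",
                        capture_output=True, text=True)
print(result.stdout)
```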
Your question is a bit unclear. What is the process calling your Python script and how is it being run? If the parent process has no standard input, the child won't have it either.