I'm debugging a service I'm developing, which will basically open my .app and pass it some data on stdin. But it doesn't seem to be possible to do something like:
open -a myapp.app < foo_in.txt
Is it possible to pass stuff to an .app's stdin at all?
Edit:
Sorry, I should have posted this on SO and been clearer. I have an app made in Python + py2app, and I want it to handle both the case where a user drops a file on it and the case where it's used as a service. The first case isn't a problem, since py2app has argv_emulation; I just check whether the first argument is a path.
But reading from stdin doesn't work at all; it doesn't read any data regardless of whether I do it as in the example above or pipe it. If I pass stdin data to the actual Python main script, it works. So I'll rephrase my question: is it possible to read from stdin with a py2app bundle?
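A rough sketch of the kind of dispatch described above (process() and the structure are placeholders, not the actual script):
import os
import sys

def main():
    # argv_emulation delivers a dropped file's path as the first argument;
    # otherwise fall back to reading the data from stdin.
    if len(sys.argv) > 1 and os.path.exists(sys.argv[1]):
        with open(sys.argv[1]) as f:
            data = f.read()
    else:
        data = sys.stdin.read()
    process(data)  # placeholder for the app's real work

if __name__ == '__main__':
    main()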
What do you mean by using it as a service?
The example you show won't work: the open command calls LaunchServices to launch the application, and there is no place in the LaunchServices API to pass stdin data or anything similar to the application.
If you mean adding an item to the OS X Services Menu, you should look at the introductory documentation for developers.
Well,
open -a /Applications/myapp.app < foo_in.txt
will open foo_in.txt in your myapp.app application. You need the full path of the application, be it Applications, bin, or wherever it is...
It depends on what your application does. This may be more appropriate:
cat foo_in.txt | your_command_goes_here
That will read the contents of foo_in.txt (with cat) and pass them to stdin (with the pipe), so then you just follow that with your command / application.
To start Finder as root, one would not use:
sudo open -a /System/Library/CoreServices/Finder.app
The above runs open as root, but open still launches Finder as the normal user. Instead, one would use:
sudo /System/Library/CoreServices/Finder.app/Contents/MacOS/Finder
So, following that, maybe (I am really just guessing) one needs:
myapp.app/Contents/MacOS/myapp < foo_in.txt
You should almost certainly be doing this through Mach ports or Distributed Objects or pretty much any other method of interapplication communication the OS makes available to you.
open creates an entirely new process, so you can't use it to redirect stuff into an application from Terminal.
You could try
./Foo.app/Contents/MacOS/Foo < Foo.txt
As already mentioned, cat Foo.txt | ./Foo.app/Contents/MacOS/Foo may also work, depending on whether Foo is set as executable and is in your path. In your case I'd check the .app package for a Resources folder, which may contain another binary.
A *.app package is a directory. It cannot handle command-line arguments.
Related
I use Python 2.7/PySerial scripts to run tests on devices with an embedded Linux. Due to a recent software change, the Linux box generates a number of log files in .csv format. I need to fetch them. I can't enable any server features in the Linux; I only have a serial connection.
I can of course read the file content out and capture it as text, but this is clumsy and unreliable - I would rather copy the files. Two days of search, and I'm still clueless (Generic problem with me!).
Any hints, please? Please be gentle - this is my first question... :)
Once you get a serial terminal you can use sz (part of lrzsz) to send the files via ZModem. Simply use a serial comm program on the other side (Hyperterminal?) that understands ZModem and the files can be transferred over.
I thank you very much for the proposed solutions. Unfortunately, neither works (I cannot enable anything extra on the Linux box), and they are both outside the desired Python environment.
I think it's a kludge, but I'll have to ask for a
cat logfile
as a text string, and attempt to catch the prompt at the end.
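Something along these lines is probably what it will look like (a rough sketch assuming pyserial, a "# " shell prompt, 115200 baud, and placeholder port/file names, all of which would need adapting):
import serial

PROMPT = "# "

def fetch_file(port, path, timeout=1):
    # Send "cat <path>" over the serial line and collect the output
    # until the shell prompt reappears or reads time out.
    ser = serial.Serial(port, 115200, timeout=timeout)
    ser.write("cat %s\r\n" % path)
    chunks = []
    while True:
        chunk = ser.read(1024)
        if not chunk:                          # read timed out: assume the device is done
            break
        chunks.append(chunk)
        if "".join(chunks).endswith(PROMPT):   # prompt seen: file fully printed
            break
    ser.close()
    text = "".join(chunks)
    body = text.rpartition(PROMPT)[0]          # drop the trailing prompt
    return body.split("\n", 1)[-1]             # drop the echoed "cat ..." line

content = fetch_file("/dev/ttyS0", "/var/log/results.csv")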
Thank you for your time and effort.
I have a script in python (I called it monitor.py), that checks if another python application (called test.py) is running; if true nothing happens; if false it starts test.py.
I am using the subprocess module in monitor.py, but if I start test.py and close monitor.py, test.py also closes; is there any way to avoid this? Is the subprocess module the correct one for this?
I have a script [...] that checks if another [...] is running
I'm not sure if it's any help in your case, but I just wanted to say that if you're working on Windows, you can program a real service in Python.
Doing that from scratch takes some effort, but some good people out there provide examples that you can easily adapt, like this one.
(In this example, look for the line f = open('test.dat', 'w+') and write your code there)
It'll behave like any other Windows service, so you can make it start when booting your PC, for example.
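For what it's worth, here is a minimal sketch of that pattern, assuming the pywin32 package; the class, service names, and event handling are placeholders, not the linked example's code:
import win32event
import win32service
import win32serviceutil

class MonitorService(win32serviceutil.ServiceFramework):
    # Names are placeholders; pick whatever fits your monitor.
    _svc_name_ = "MonitorService"
    _svc_display_name_ = "Monitor Service"

    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        self.stop_event = win32event.CreateEvent(None, 0, 0, None)

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        win32event.SetEvent(self.stop_event)

    def SvcDoRun(self):
        # Wake up every 5 seconds until the service is stopped;
        # the check-and-restart logic for test.py would go here.
        while True:
            rc = win32event.WaitForSingleObject(self.stop_event, 5000)
            if rc != win32event.WAIT_TIMEOUT:
                break

if __name__ == '__main__':
    win32serviceutil.HandleCommandLine(MonitorService)
Installing it with python monitor_service.py install and starting it from the Services panel (or python monitor_service.py start) gives you the normal service lifecycle, including start-at-boot.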
I have a Python script that is launched as root; I can't change that.
I would like to know if it's possible to execute certain lines of this script (or the whole script) as a normal user (I don't need to be root to run this).
The reason is that I use notifications, and python-notify doesn't work on all machines when run as root (looks like this bug).
So, do you know if it's possible to change it, with a subprocess or something else?
Thanks
I would like to know if it's possible to execute certain lines of this script (or the whole script) as a normal user
Yes, it's possible—and a good idea.
Python's os module has a group of functions to set the real, effective, and saved user and group id, starting with setegid. What exactly each of these does is up to your platform, as far as Python is concerned; it's just calling the C functions of the same names.
But POSIX defines what those functions do. See setuid and seteuid for details, but the short version is:
If you want to switch to a normal user and then switch back, use either seteuid or setreuid, to set just effective, or real and effective, but not saved UID. Then use the same function again to set them back to root.
If you want to run the whole script as a normal user and make sure you can't get root back, use setresuid instead, to set all three.
If you're using Python 3.1 or earlier, you don't have all of these functions. You can still use seteuid to switch the effective ID back and forth, but setuid will… well, it depends on your platform, but I think most modern platforms will change the saved UID as well as the real one, meaning you can't get root back. The linked POSIX docs cover a bunch of caveats and complexities; if you only care about one platform, you probably want to read your local manpages instead, rather than wading through all of the cases and trying to figure out which one covers your platform.
So, do you know if it's possible to change it, with a subprocess or something else?
That isn't necessary (at least on a conforming POSIX system), but it can make things easier or safer. You can use subprocess, multiprocessing, os.fork, or any other mechanism to launch a child process, which immediately uses setuid to drop privileges—or even setresuid to give up the ability to ever restore its privilege. When that child process is done with its task, it just exits.
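A minimal sketch of that child-process approach on a POSIX system (the target uid/gid and the work function are whatever your script needs; run_as_user is a name I've made up for illustration):
import os

def run_as_user(uid, gid, work, *args):
    """Fork, drop privileges in the child, run work(*args), and wait for it."""
    pid = os.fork()
    if pid == 0:                    # child
        os.setgid(gid)              # drop the group first, then the user
        os.setuid(uid)              # as root this changes real, effective and saved IDs
        try:
            work(*args)
        finally:
            os._exit(0)             # never fall back into the parent's code
    os.waitpid(pid, 0)              # parent keeps root and waits for the child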
You need getpwnam from the pwd module to look up the user ID for a given username; then, with os.setuid(), you can change the user and run the Python script as another user.
import pwd, os
uid = pwd.getpwnam('username')[2]  # instead of index 2 you can use the pw_uid attribute
os.setuid(uid)
But note that using setuid can open an enormous security hole.
If the script is running as root, you can use os.setuid to change the process's current UID to that of another user (irrevocably) or os.seteuid to change the process's current effective UID (and you can use it again afterwards to reset the EUID to root).
Note that when called as root, os.setuid changes the real, effective, and saved UID - this is the reason it is irrevocable.
os.seteuid changes the effective UID. Since the real UID will still be root, you can still switch back the EUID to root later on in the script.
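A rough sketch of that seteuid round trip, assuming you know the target user's name ('someuser' is a placeholder):
import os
import pwd

normal_uid = pwd.getpwnam('someuser').pw_uid  # placeholder user name

os.seteuid(normal_uid)   # effective UID is now the normal user
# ... run the notification code, or whatever must not run as root ...
os.seteuid(0)            # real UID is still root, so switching back is allowed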
I've written a little Python (2.7.2+) module (called TWProcessing) that can be described as an improvised batch manager. The way it works is that I pass it a long list of commands that it will then run in parallel, while limiting the total number of simultaneous processes. That way, if I have 500 commands I would like to run, it will loop through all of them, but only run X of them at a time so as not to overwhelm the machine. The value of X can easily be set when declaring an instance of this batch manager (the class is called TWBatchManager):
batch = TWProcessing.TWBatchManager(MaxJobs=X)
I then add a list of jobs to this object in a very straightforward manner:
batch.Queue.append(/CMD goes here/)
Where Queue is a list of commands that the batch manager will run. When the queue has been filled, I then call Run(), which loops through all the commands, only running X at a time:
batch.Run()
So far, everything works fine. Now what I'd like to do is be able to change the value of X (i.e. the maximum number of processes running at once) dynamically, i.e. while the processes are still running. My old way of doing this was rather straightforward: I had a file called MAXJOBS that the class would know to look at, and, if it existed, it would check it regularly to see if the desired value had changed.

Now I'd like to try something a bit more elegant. I would like to be able to write something along the lines of export MAXJOBS=newX in the bash shell that launched the script containing the batch manager, and have the batch manager realize that this is now the value of X it should be using. Obviously os.environ['MAXJOBS'] is not what I'm looking for, because this is a dictionary that is loaded on startup. os.getenv('MAXJOBS') doesn't cut it either, because the export will only affect child processes that the shell spawns from then on.

So what I need is a way to get back to the environment of the parent process that launched my Python script. I know os.getppid() will give me the parent PID, but I have no idea how to get from there to the parent environment. I've poked around the interwebz to see if there was a way in which the parent shell could modify the child process's environment, and I've found that people tend to insist I not try anything like that, lest I be prepared to do some of the ugliest things one can possibly do with a computer.
Any ideas on how to pull this off? Granted my "read from a standard text file" idea is not so ugly, but I'm new to Python and am therefore trying to challenge myself to do things in an elegant and clean manner to learn as much as I can. Thanks in advance for your help.
It looks to me like you are asking for inter-process communication between a bash script and a Python program.
I'm not completely sure about all your requirements, but it might be a candidate for a FIFO (named pipe):
1) make the fifo:
mkfifo batch_control
2) Start the Python server, which reads from the fifo. (Note: the following is only a minimalistic example; you will have to adapt it.)
while True:
    fd = open("batch_control", "r")           # blocks until a writer opens the fifo
    for cmd in fd:
        print("New command [%s]" % cmd[:-1])  # cmd[:-1] strips the trailing newline
    fd.close()
3) From the bash script you can then 'send' things to the Python server by echoing strings into the fifo:
$ echo "newsize 800" >batch_control
$ echo "newjob /bin/ps" >batch_control
The output of the python server is:
New command [newsize 800]
New command [newjob /bin/ps]
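If blocking on the fifo is a problem inside the batch loop, one option is to open it non-blocking and poll it between job checks. A rough sketch, assuming the fifo already exists and that the manager exposes the MaxJobs attribute described in the question:
import errno
import os

fifo_fd = os.open("batch_control", os.O_RDONLY | os.O_NONBLOCK)

def poll_fifo(batch):
    """Apply any pending 'newsize N' commands to the running batch manager."""
    try:
        data = os.read(fifo_fd, 4096)
    except OSError as e:
        if e.errno in (errno.EAGAIN, errno.EWOULDBLOCK):
            return              # a writer is attached but nothing to read yet
        raise
    # (A full solution would buffer partial lines between reads.)
    for line in data.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0] == "newsize":
            batch.MaxJobs = int(parts[1])   # takes effect on the next scheduling pass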
Hope this helps.
I'm using Python 2.6 on linux.
I have a run.py script which starts up multiple services in the background and generates kill.py to kill those processes.
Inside kill.py, is it safe to unlink itself when it's done its job?
import os
# kill services
os.unlink(__file__)
# is it safe to do something here?
I'm new to Python. My concern was that since Python is a scripting language, the whole script might not be in memory. After it's unlinked, there will be no further code to interpret.
I tried this small test.
import os
import time
time.sleep(10) # sleep 1
os.unlink(__file__)
time.sleep(10) # sleep 2
I ran stat kill.py when this file was being run and the number of links was always 1, so I guess the Python interpreter doesn't hold a link to the file.
As a higher level question, what's the usual way of creating a list of processes to be killed later easily?
Don't have your scripts write new scripts if you can avoid it – just write out a list of the PIDs, and then loop through them.
It's not very clear what you're trying to do, but creating and deleting scripts sounds like too much fragile magic.
To answer the question:
Python compiles all of the source and closes the file before executing it, so this is safe.
In general, unlinking an opened file is safe on Linux. (But not everywhere: on Windows you can't delete a file that is in use.)
Note that when you import a module, Python 2 compiles it into a .pyc bytecode file and interprets that. If you remove the .py file, Python will still use the .pyc, and vice versa.
Just don't call reload!
There's no need for Python to hold locks on the files since they are compiled and loaded at import time. Indeed, the ability to swap files out while a program is running is often very useful.
IIRC(!): On *nix, unlink only removes the name from the filesystem; the inode is removed once the last file handle to it is closed. Therefore this should not cause any problems, unless Python tries to reopen the file.
As a higher level question, what's the usual way of creating a list of processes to be killed later easily?
I would put the PIDs in a list and iterate over that with os.kill. I don't see why you're creating and executing a new script for this.
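A minimal sketch of that idea (the file name and the choice of SIGTERM are assumptions):
import os
import signal

PID_FILE = "pids.txt"

def save_pids(pids):
    # run.py records the PIDs of the services it started, one per line.
    with open(PID_FILE, "w") as f:
        for pid in pids:
            f.write("%d\n" % pid)

def kill_saved():
    # Later, read the PIDs back and terminate each process.
    with open(PID_FILE) as f:
        for line in f:
            try:
                os.kill(int(line), signal.SIGTERM)
            except OSError:
                pass            # process already gone
    os.unlink(PID_FILE)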
Python reads in a whole source file and compiles it before executing it, so you don't have to worry about deleting or changing your running script file.