Check how the application has been launched - Python

My question is simple - is it possible with Python to check which way an application has been launched/spawned?
More information:
I have an application something.exe and I need to check whether something.exe was launched by the user or by some third-party application running it as a child process.
Is it possible to check?

With the standard library alone this is not possible on Windows. On Unix-like systems all processes (except init) have a parent:
import os
parent = os.getppid()
You can also try checking os.environ; different launch methods can set a slightly different environment or leave some variables unset.
Also look at psutil. It has many functions for process management.
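For example, a minimal sketch with psutil that inspects the parent of the current process; the shell names listed here are only assumptions about what counts as "launched by the user":
import os
import psutil

# Rough sketch: look at who spawned *this* process.
proc = psutil.Process(os.getpid())
parent = proc.parent()

if parent is None:
    print('no parent found')
elif parent.name().lower() in ('explorer.exe', 'cmd.exe', 'powershell.exe', 'bash'):
    print('probably launched by the user (parent: %s)' % parent.name())
else:
    print('probably launched by another application (parent: %s)' % parent.name())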

On Windows I tried this approach using psutil:
import psutil

def get_process_mode(process_name):
    process_mode = None
    for process in psutil.process_iter():
        try:
            if process.name() == process_name:
                if process.parent():
                    process_mode = "third party app"
                else:
                    process_mode = "user launched"
                break
        except psutil.AccessDenied:
            print("'%s' process is not allowing us to check its parent!" % process)
    return process_mode

get_process_mode("something.exe")
But it didn't work well in some cases...

Related

Python psutil module: why does it end up with an exception when I call the .exe() method outside an if statement?

I'm working on a project that requires me to use psutil. I was trying to compare some values to those in a .txt file, but for some reason whenever I called the psutil.Process.exe() method outside an if statement it would end up with an Access Denied exception. Let me show you what I mean:
import psutil
import time

ini = 'start'

def getTaskList():
    list_of_ran_proccesses = []
    for procs in psutil.process_iter():
        list_of_ran_proccesses.append(procs)
    return list_of_ran_proccesses

def CompareRunningFiles():
    if ini == "start":
        list_of_old_procs = getTaskList()
        while list_of_old_procs == getTaskList():
            time.sleep(0.01)
        for new_procs in psutil.process_iter():
            if not new_procs in list_of_old_procs:
                print(new_procs.exe())

CompareRunningFiles()
This example works completely fine, but if I do this:
import psutil
import time

ini = 'start'

def getTaskList():
    list_of_ran_proccesses = []
    for procs in psutil.process_iter():
        list_of_ran_proccesses.append(procs)
    return list_of_ran_proccesses

def CompareRunningFiles():
    if ini == "start":
        list_of_old_procs = getTaskList()
        while list_of_old_procs == getTaskList():
            time.sleep(0.01)
        for new_procs in psutil.process_iter():
            print(new_procs.exe())

CompareRunningFiles()
This for some reason ends up with an Access Denied exception.
Thank you for all your answers :)
Edit: I'm not sure, but could this be because the module is trying to access some protected directories?
With the if statement it only tries to get the executable path of whatever process was just launched, but without the if statement it tries to access all sorts of running processes. So when it comes across a system process it tries to get its path too, and if that process runs from a protected directory an Access Denied exception is raised.
Basically, the if statement prevents the program from trying to get the path of certain system processes (they are always running). Some directories on Windows are protected and cannot be accessed directly, so without the if statement the module tries to get the paths of all running processes, and this raises an exception when it hits a process running from a protected directory (example: System Idle Process). Using (as Omer said) a "try: ... except psutil.AccessDenied: pass" will skip those processes and prevent this issue. Thank you Omer for the explanation and tripleee for the same :D
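For illustration, a minimal sketch of that pattern, wrapping the .exe() call so that processes which deny access are simply skipped:
import psutil

# Print the executable path of every process we are allowed to inspect,
# skipping those that raise AccessDenied (e.g. protected system processes).
for proc in psutil.process_iter():
    try:
        print(proc.exe())
    except psutil.AccessDenied:
        pass  # no permission to read this process's executable path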

Python - good way to determine that my PyInstaller-based Python script is the only copy running, and possibly terminate other copies?

I'm looking for a way to determine that a script I wrote, packed by PyInstaller, is the only copy of itself running - so that it can quit if it finds itself open already.
I'd also like to implement an argument to kill all currently running copies of the .exe. Killing them one by one from a simple list of PIDs associated with the .exe isn't an option, since I could accidentally kill my own process before finishing.
It would be best if I could use only win32 APIs, as this script is sometimes called by services and is thus unfriendly to many subprocess.Popen calls. I don't want to have to go through UAC spoofing. However, sometimes the .exe is invoked by the Windows Scheduler or by user-land programs.
My current version of finding processes uses win32pdh. I'm not exactly sure where to attribute this, though it's very close to the first example from here: http://www.programcreek.com/python/example/51184/win32pdh.OpenQuery
import win32pdh

def get_win_processes():
    win32pdh.EnumObjects(None, None, win32pdh.PERF_DETAIL_WIZARD)
    junk, instances = win32pdh.EnumObjectItems(None, None, 'Process', win32pdh.PERF_DETAIL_WIZARD)
    proc_dict = {}
    for instance in instances:
        if instance in proc_dict:
            proc_dict[instance] = proc_dict[instance] + 1
        else:
            proc_dict[instance] = 0
    proc_ids = []
    for instance, max_instances in proc_dict.items():
        for inum in range(max_instances + 1):
            hq = win32pdh.OpenQuery()  # initializes the query handle
            try:
                path = win32pdh.MakeCounterPath((None, 'Process', instance, None, inum, 'ID Process'))
                counter_handle = win32pdh.AddCounter(hq, path)  # convert counter path to counter handle
                try:
                    win32pdh.CollectQueryData(hq)  # collects data for the counter
                    type, val = win32pdh.GetFormattedCounterValue(counter_handle, win32pdh.PDH_FMT_LONG)
                    proc_ids.append((instance, val))
                except win32pdh.error:
                    pass
                win32pdh.RemoveCounter(counter_handle)
            except win32pdh.error:
                pass
            win32pdh.CloseQuery(hq)
    return proc_ids
However, this returns two processes for each running copy: one is the guardian process created by PyInstaller, the other is the actual instance of the program. Furthermore, it doesn't indicate which one is the currently running guardian or child.
Example output when exe is 'wcdo.exe' and there are two copies running:
(u'wcdo', 11700)
(u'wcdo', 8748)
(u'wcdo', 4152)
(u'wcdo', 9308)
Thanks!
You could query wmic and check how the processes are related to each other ...
C:\>wmic process where name="webserver2.exe" get processid,parentprocessid,commandline
CommandLine ParentProcessId ProcessId
webserver2.exe --scheduled 3136 2212
webserver2.exe --scheduled 2212 6004
Here:
3136 is cmd.exe
2212 is the PyInstaller wrapper (because it appears both as a parent and as a process)
6004 is the application itself
Using PDH seems like unnecessary overhead; it is slow and quite inflexible for identifying processes on Windows.
Calling 'wmic' through subprocess and parsing the output can be done in a few lines.
Additionally there is a format flag that controls how the wmic output is presented (csv, xml, ...).
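For example, a rough sketch (untested) of such a wmic call using the csv format flag; the executable name 'wcdo.exe' and the helper name list_instances are just taken over from the question for illustration:
import subprocess

def list_instances(exe_name):
    # Ask wmic for command line, parent PID and PID of every copy of exe_name,
    # in CSV form: Node,CommandLine,ParentProcessId,ProcessId
    cmd = ('wmic process where name="%s" '
           'get processid,parentprocessid,commandline /format:csv' % exe_name)
    output = subprocess.check_output(cmd, universal_newlines=True)
    instances = []
    for line in output.splitlines():
        line = line.strip()
        if not line or line.lower().startswith('node,'):
            continue  # skip blank lines and the header row
        rest, ppid, pid = line.rsplit(',', 2)   # the command line itself may contain commas
        commandline = rest.split(',', 1)[1]     # drop the leading Node column
        instances.append((commandline, int(ppid), int(pid)))
    return instances

for commandline, ppid, pid in list_instances('wcdo.exe'):
    print(commandline, ppid, pid)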
Btw, you could try to create your exe with py2exe, which does not use a wrapper application.
Not sure if it is relevant for identifying how the application was started, but you could add a special command-line argument to your Windows Scheduler task so that it runs wcdo.exe --scheduled.

Maya GUI freezes during subprocess call

I need to conform some Maya scenes we receive from a client to make them compatible with our pipeline. I'd like to batch that action, obviously, and I'm asked to launch the process from within Maya.
I've tried two methods already (quite similar to each other), which both work, but the problem is that the Maya GUI freezes until the process is complete. I'd like the process to be completely transparent for the user so that they can keep working and only see a message when it's done.
Here's what I tried and found until now:
This tutorial here: http://www.toadstorm.com/blog/?p=136 led me to write this and save it:
import sys
import maya.standalone as std
import maya.cmds as mc

filename = sys.argv[1]

def createSphere(filename):
    std.initialize(name='python')
    try:
        mc.file(filename, open=True, pmt=False, force=True)
        sphere = mc.polySphere()[0]
        mc.file(save=True, force=True)
        sys.stdout.write(sphere)
    except Exception as e:
        sys.stderr.write(str(e))
        sys.exit(-1)
    if float(mc.about(v=True)) >= 2016.0:
        std.uninitialize()

createSphere(filename)
Then I call it from within Maya this way:
import subprocess

mayapyPath = 'C:/Program Files/Autodesk/Maya2016/bin/mayapy.exe'
scriptPath = 'P:/WG_MAYA_Users/lbouet/scripts/createSphere.py'
filenames = ['file1', 'file2', 'file3', 'file4']

def massCreateSphere(filenames):
    for filename in filenames:
        maya = subprocess.Popen(mayapyPath+' '+scriptPath+' '+filename,
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = maya.communicate()
        exitcode = maya.returncode
        if str(exitcode) != '0':
            print(err)
            print('error opening file: %s' % filename)
        else:
            print('added sphere %s to %s' % (out, filename))

massCreateSphere(filenames)
It works fine, but like I said, it freezes the Maya GUI until the process is over. And this is just for creating a sphere, so it is nowhere near all the actions I'll actually have to perform on the scenes.
I've also tried to run the first script via a .bat file calling mayabatch and running the script, same issue.
I found this post (Running list of cmd.exe commands from maya in Python) which seems to be exactly what I'm looking for, but I can't see how to adapt it to my situation.
From what I understand the issue might come from calling Popen in a loop (i.e. multiple times), but I really can't see how to do it otherwise... I'm thinking of maybe saving the second script somewhere on disk too and calling that one from Maya?
In this case subprocess.communicate() will block until the child process is done, so it is not going to fix your problem on its own.
If you just want to kick off the processes and not wait for them to complete -- 'fire and forget' style -- you can just use threads, starting a new thread for each process. However, you'll have to be very careful about reporting back to the user -- if you try to touch the Maya scene or GUI from an outside thread you'll get mysterious, undebuggable errors. print() is usually OK, but maya.cmds calls are not. If you're only printing messages you can probably get away with maya.utils.executeDeferred(), which is discussed in this question and in the docs.
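A minimal sketch of that fire-and-forget idea, reusing the mayapyPath, scriptPath and filenames variables from the question (an illustration under those assumptions, not a drop-in solution):
import threading
import subprocess
import maya.utils

def report(msg):
    # Called on Maya's main thread via executeDeferred; plain printing is safe there.
    print(msg)

def batch_in_background(filename):
    # Runs one mayapy job on a worker thread, so it must not touch the
    # Maya scene or GUI directly.
    proc = subprocess.Popen([mayapyPath, scriptPath, filename],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    maya.utils.executeDeferred(report, 'finished %s (exit code %s)' % (filename, proc.returncode))

# Fire and forget: start one worker thread per file and return control to the user immediately.
for filename in filenames:
    threading.Thread(target=batch_in_background, args=(filename,)).start()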

Root privileges can't be dropped in Python even after seteuid. A bug?

EDIT Summary: I forgot to drop gid. The accepted answer may help you, though.
Hi. I can't drop root privileges in Python 3.2 on my Linux box. In fact, even after seteuid(1000), it can read root-owned 400-mode files. The euid is definitely set to 1000!
I found that after an empty os.fork() call, the privileged access is correctly denied. (But only in the parent; the child can still read illegitimately.) Is this a bug in Python, or is that just how Linux works?
Try the code below. Uncomment one of the three lines at the bottom, and run it as root.
Thanks beforehand.
#!/usr/bin/python3
# Python seteuid pitfall example.
# Run this __as__ the root.
# Here, access to the root-owned files /etc/sudoers and /etc/group- is tried.
# Simple access to them *succeeds* even after seteuid(1000), which should fail.
# Three functions, stillRoot(), forkCase() and workAround(), are defined.
# The first two seem wrong. In the last one, access fails, as desired.
# ***Uncomment*** one of the three lines at the bottom before execution.
# If your python is < 3.2, comment out the entire def of forkCase()

import os

def stillRoot():
    """Open succeeds, but it should fail."""
    os.seteuid(1000)
    open('/etc/sudoers').close()

def forkCase():
    """Child can still open it. Wow."""
    # setresuid needs python 3.2
    os.setresuid(1000, 1000, 0)
    pid = os.fork()
    if pid == 0:
        # They're surely 1000, not 0!
        print('uid: ', os.getuid(), 'euid: ', os.geteuid())
        open('/etc/sudoers').close()
        print('open succeeded in child.')
        exit()
    else:
        print('child pid: ', pid)
        open('/etc/group-').close()
        print('parent succeeded to open.')

def workAround():
    """So, a dummy fork after seteuid is necessary?"""
    os.seteuid(1000)
    pid = os.fork()
    if pid == 0:
        exit(0)
    else:
        os.wait()
        open('/etc/group-').close()

## Run one of them.
# stillRoot()
# forkCase()
# workAround()
Manipulating process credentials on Unix systems is tricky. I highly recommend gaining a thorough understanding of how the Real, Effective, and Saved-Set user ids are interrelated. It's very easy to screw up "dropping privileges".
As to your specific observations... I'm wondering if there's a simple cause you may have overlooked. Your code is performing inconsistent tests, and you've neglected to specify the exact file permissions on your /etc/sudoers and /etc/group- files. Your code would be expected to behave exactly as you describe if /etc/sudoers has permissions mode=440, uid=root, gid=root (which are the default permissions on my system) and if /etc/group- has mode=400.
You're not modifying the process's GID so if /etc/sudoers is group-readable, that would explain why it's always readable. fork() does not modify process credentials. However, it could appear to do so in your example code since you're checking different files in the parent and child. If /etc/group- does not have group read permissions where /etc/sudoers does, that would explain the apparent problem.
If all you're trying to do is "drop privileges", use the following code:
os.setgid(NEW_GID)  # drop the group id first, while the process still has the privilege to do so
os.setuid(NEW_UID)  # then drop the user id
Generally speaking, you'll only want to manipulate the effective user id if your process needs to toggle its root permissions on and off over the life of the process. If you just need to do some setup operations with root permissions but will no longer require them after those setup operations are complete, just use the code above to irrevocably drop them.
Oh, and a useful debugging utility for process credential manipulation on Linux is to print the contents of /proc/self/status; the Uid and Gid lines of this file display the real, effective, saved-set, and filesystem ids held by the current process (in that order). The Python APIs can be used to retrieve the same information, but you can consider the contents of this file as "truth data" and avoid any potential complications from Python's cross-platform APIs.
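A small sketch of that debugging aid, assuming a process started as root and a target uid/gid of 1000 (the value used in the question); it dumps the Uid/Gid lines before and after dropping privileges:
import os

def show_credentials(label):
    # The Uid/Gid columns are: real, effective, saved-set, filesystem.
    with open('/proc/self/status') as status:
        for line in status:
            if line.startswith(('Uid:', 'Gid:')):
                print(label, line.strip())

show_credentials('before:')
os.setgid(1000)   # drop the group id first
os.setuid(1000)   # then the user id
show_credentials('after:')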

Check if Rhythmbox is running via Python

I am trying to extract information from Rhythmbox via dbus, but I only want to do so if Rhythmbox is running. Is there a way to check via Python whether Rhythmbox is running, without starting it if it is not?
Whenever I invoke the dbus code like this:
import dbus

bus = dbus.Bus()
obj = bus.get_object("org.gnome.Rhythmbox", "/org/gnome/Rhythmbox/Shell")
iface = dbus.Interface(obj, "org.gnome.Rhythmbox.Shell")
and Rhythmbox is not running, it then starts it.
Can I check via dbus if Rhythmbox is running without actually starting it? Or is there any other way, other than parsing the list of currently running processes, to do so?
This is similar to Rosh Oxymoron's answer, but arguably neater (albeit untested):
bus = dbus.SessionBus()
if bus.name_has_owner('org.gnome.Rhythmbox'):
    # ...
If you want to be notified when Rhythmbox starts or stops, you can use:
def rhythmbox_owner_changed(new_owner):
    if new_owner == '':
        print('Rhythmbox is no longer running')
    else:
        print('Rhythmbox is now running')

bus.watch_name_owner('org.gnome.Rhythmbox', rhythmbox_owner_changed)
See the documentation for dbus.bus.BusConnection for more details.
dbus_main_object = bus.get_object("org.freedesktop.DBus", "/")
dbus_names = dbus_main_object.ListNames(dbus_interface='org.freedesktop.DBus')
if 'org.gnome.Rhythmbox' in dbus_names:
    do_whatever()
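Putting the pieces from this thread together, an untested end-to-end sketch that only connects to the Shell object when the bus name already has an owner, so the check itself never starts Rhythmbox:
import dbus

bus = dbus.SessionBus()

# name_has_owner() only asks the bus whether the name is currently owned;
# unlike get_object(), it does not activate (start) the service.
if bus.name_has_owner('org.gnome.Rhythmbox'):
    obj = bus.get_object("org.gnome.Rhythmbox", "/org/gnome/Rhythmbox/Shell")
    iface = dbus.Interface(obj, "org.gnome.Rhythmbox.Shell")
    # ... extract whatever information is needed via iface ...
else:
    print('Rhythmbox is not running')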
