Why can the daemon's output only go to the /tmp dir? - python

The superclass I use is http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/, and my code is below:
import os
import sys, time
from daemon import Daemon

class MyDaemon(Daemon):
    def run(self):
        while True:
            cmd = 'cat test.txt > output.txt'
            os.system(cmd)
            time.sleep(6000)

if __name__ == "__main__":
    daemon = MyDaemon('/tmp/DebugDaemon.pid')
    daemon.start()
If I run DebugDaemon.py, /tmp/DebugDaemon.pid is created.
However, output.txt is not created. Why?
If I call it directly (i.e. not using the daemon code), it works fine.

cmd is a local variable. Your assignment to it doesn't actually do anything, since no code uses it.
The subprocess module allows you to call other programs from within Python. I don't know how it interacts with daemons though.

Daemon appears to chdir() to /. I bet your process doesn't have write permissions for /.
Your daemon needs to chdir() to the directory where test.txt resides (and for which the process has write permissions). Alternatively, use full paths everywhere:
cmd = 'cat /tmp/test.txt > /tmp/output.txt'

The command
cat test.txt > output.txt
is executed in / because the superclass does

# decouple from parent environment
os.chdir("/")

The pid file can be written because everybody can write to /tmp; / is not writable for everybody.
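A minimal sketch of a run() that is independent of the working directory (assuming test.txt lives in /tmp and that the daemon's user can write there; adjust the paths to wherever your files actually are):

import os, time
from daemon import Daemon

WORKDIR = '/tmp'  # assumed: a directory the daemon's user can write to

class MyDaemon(Daemon):
    def run(self):
        os.chdir(WORKDIR)  # undo the superclass's os.chdir("/")
        while True:
            # absolute paths would also work without the chdir
            os.system('cat /tmp/test.txt > /tmp/output.txt')
            time.sleep(6000)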

How to run this function in a separate process on Windows?

I am trying to run this code in the background (from command line) on Windows using python 2.7:
import httpimport
mod = httpimport.load('module name','URL')
Everything works, but the process lingers when launched and only Ctrl+C will end it. I am looking to start an independent process from this in the background.
I have read that multiprocessing could be useful here, but I would need some pointers.
Any suggestions?
EDIT: I should add that this is a script which calls another Python script from a URL. From the answers below I gather that I might need to change my remote script first.
If you want to run your process in the background you can use os.spawnl:
import os, sys
# the second argument is the path of the executable; the remaining arguments become the child's argv
os.spawnl(os.P_DETACH, sys.executable, 'python', 'code.py', 'module name', 'url')
But you need to be cautious: you can't kill the process if you don't know its pid or find where it is running via Task Manager.
For more, see: https://docs.python.org/2/library/os.html#os.spawnl
For your code (for example, code.py):
import httpimport
from sys import argv

# the module name and URL are taken from the arguments passed above
name, module_name, URL = argv
mod = httpimport.load(module_name, URL)
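As an alternative sketch (not part of the original answer), subprocess.Popen can also start a detached background process on Windows via the DETACHED_PROCESS creation flag; code.py, the module name and the URL are the same assumed names as above:

import subprocess
import sys

DETACHED_PROCESS = 0x00000008  # Windows-only creation flag

# launch code.py in the background, detached from this console
proc = subprocess.Popen(
    [sys.executable, 'code.py', 'module name', 'URL'],
    creationflags=DETACHED_PROCESS,
)
print('started background process with pid %d' % proc.pid)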

Write to a file with sudo privileges in Python

The following code throws an error if it's run by a non-root user for a file owned by root, even when the non-root user has sudo privileges:
try:
    f = open(filename, "w+")
except IOError:
    sys.stderr.write('Error: Failed to open file %s' % (filename))

f.write(response + "\n" + new_line)
f.close()
Is there a way to run open(filename, "w+") with sudo privileges, or an alternative function that does this?
You have a few options:
Run your script as root or with sudo
Set the setuid bit and have root own the script (although on many systems this won't work with scripts, and then the script will be callable by anyone)
Detect that you're not running as root (os.geteuid() != 0), then call yourself with sudo in front (which will ask the user to enter their password) and exit:
import os
import sys
import subprocess

if os.geteuid() == 0:
    print("We're root!")
else:
    print("We're not root.")
    subprocess.call(['sudo', 'python3', *sys.argv])
    sys.exit()
Calling it looks like this:
$ python3 be_root.py
We're not root.
Password:
We're root!
TL;DR:
L3viathan's reply has a SyntaxError (Edit: it works fine on Python 3.5+). Here is a version for Python 3.4.3 (distributed by default on Ubuntu 16.04) and below:
if os.geteuid() == 0:
    pass  # do root things
else:
    subprocess.call(['sudo', 'python3'] + sys.argv)  # modified
However, I went with a different approach because it made the code around it simpler in my use case:
if os.geteuid() != 0:
    os.execvp('sudo', ['sudo', 'python3'] + sys.argv)

# do root things
Explanation
L3viathan's reply depends on PEP 448, which was included in Python 3.5 and introduced additional contexts where star argument expansion is permitted. For Python 3.4 and below, list concatenation can be used to do the same thing:
import os
import sys
import subprocess

if os.geteuid() == 0:
    print("We're root!")
else:
    print("We're not root.")
    subprocess.call(['sudo', 'python3'] + sys.argv)  # modified
But note: subprocess.call() launches a child process, meaning after root finishes running the script, the original user will continue running their script as well. This means you need to put the elevated logic into one side of an if/else block so when the original script finishes, it doesn't try to run any of the logic that requires elevation (L3viathan's example does this).
This isn't necessarily a bad thing - it means both normal/elevated logic can be written in the same script and separated nicely - but my task required root for all the logic. I didn't want to waste an indentation level if the other block was going to be empty, and I hadn't realized how using a child process would affect things, so I tried this:
import os
import sys
import subprocess

if os.geteuid() != 0:
    subprocess.call(['sudo', 'python3'] + sys.argv)

# do things that require root
...and it broke of course, because after root was done, the regular user resumed execution of its script and tried to run the statements that required root.
Then I found this Gist recommending os.execvp() - which replaces the running process instead of launching a child:
import os
import sys

if os.geteuid() != 0:
    os.execvp('sudo', ['sudo', 'python3'] + sys.argv)  # final version

# do things that require root
This appears to behave as expected, saves an indentation level and 3 lines of code.
Caveat: I didn't know about os.execvp() ten minutes ago, and don't know anything yet about possible pitfalls or subtleties around its usage. YMMV.
Another option is to write the file to a directory you have access to, then sudo mv the file to the directory you don't have access to.
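A minimal sketch of that approach (the target path and file contents are only illustrative, and it assumes sudo will prompt for or has cached the password):

import subprocess
import tempfile

content = "some text to write\n"
target = "/etc/myapp.conf"  # assumed: a root-owned destination

# write to a temporary file we own, then move it into place with sudo
with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
    tmp.write(content)
    tmp_path = tmp.name

subprocess.check_call(["sudo", "mv", tmp_path, target])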
Your script is limited to the permissions it is run with as you cannot change users without already having root privileges.
As Rob said, the only way to do this without changing your file permissions is to run with sudo.
sudo python ./your_file.py
Having the possibility to use sudo doesn't give you any privileges if you don't actually use it. So, as others suggested, you should probably just start your program with sudo. But if you don't like this idea (I don't see any reason for that) you can do another trick.
Your script can check whether it is running with root privileges or only with user privileges. The script can then re-run itself with higher privileges. Here is an example (please note that storing a password in source code isn't a good idea):
import os.path
import subprocess

password_for_sudo = 'pass'

def started_as_root():
    if subprocess.check_output('whoami').strip() == 'root':
        return True
    return False

def running_with_root_privileges():
    print 'I have the power!'

def main():
    if started_as_root():
        print 'OK, I have root privileges. Calling the function...'
        running_with_root_privileges()
    else:
        print "I'm just a user. Need to start new process with root privileges..."
        current_script = os.path.realpath(__file__)
        os.system('echo %s|sudo -S python %s' % (password_for_sudo, current_script))

if __name__ == '__main__':
    main()
Output:
$ python test.py
I'm just a user. Need to start new process with root privileges...
OK, I have root privileges. Calling the function...
I have the power!
$ sudo python test.py
OK, I have root privileges. Calling the function...
I have the power!
I like L3viathan's reply. I'd add a fourth option: use subprocess to run a shell command that writes the file with elevated privileges, something like subprocess.Popen("echo '{}' | sudo tee {}".format(new_line, filename), shell=True) (piping to sudo tee, because with a plain sudo echo ... > file the redirection would still happen in the unprivileged shell).

use external python script to open maya and run another script inside maya

Is it possible to call a script from the command prompt in Windows (or bash in Linux) to open Maya and then subsequently run a custom script (possibly changing each time it's run) inside Maya? I am searching for something a bit more elegant than changing the userSetup file and then running Maya.
The goal here is to be able to open a .mb file, run a script to position the scene inside, set up a generic set of lights and then render the scene to a specific place and file type. I want to be able to set this up as a scheduled task to check for any new scene files in a directory and then open Maya and go.
Thanks for the help!
For something like this you can use Maya standalone instead of the full blown UI mode. It is faster. It is ideal for batch scheduled jobs like these. Maya standalone is just Maya running without the GUI. Once you have initialized your Maya standalone, you can import and call any scripts you want, as part of the original calling script. To start you off here is an example: (Feel free to use this as a reference/modify it to meet your needs)
In your script you first initialize Maya standalone.
import maya.standalone
maya.standalone.initialize("Python")

import maya.cmds as cmds
cmds.loadPlugin("Mayatomr")  # load all plugins you might need
That will get Maya running. Now we open and/or import all the files necessary (e.g. lights, models etc.):
# full path to your Maya file to OPEN
maya_file_to_open = r"C:/Where/Ever/Your/Maya_Scene_Files/Are/your_main_maya_file.mb"

# open your file
opened_file = cmds.file(maya_file_to_open, o=True)

# full path to your Maya file to IMPORT
maya_file_to_import = r"C:/Where/Ever/Your/Maya_Scene_Files/Are/your_maya_file.mb"

# have a namespace if you want (recommended)
namespace = "SomeNamespaceThatIsNotAnnoying"

# import the file; the variable "nodes" will hold the names of all imported nodes, just in case
nodes = cmds.file(maya_file_to_import, i=True,
                  renameAll=True,
                  mergeNamespacesOnClash=False,
                  namespace=namespace,
                  returnNewNodes=True,
                  options="v=0;",
                  type="mayaBinary"  # any file type you want; this is just an example
                  )
#TODO: Do all your scene setup/ positioning etc. if needed here...
#Tip: you can use cmds.viewFit(cam_name, fitFactor=1) to fit your camera on to selected objects
Now we save this file out and call Maya Batch renderer to render it out
render_file = "C:/Where/Ever/Your/Maya_Scene_Files/Are/your_RENDER_file.mb"
cmds.file(rename=render_file)
cmds.file(force=True, save=True, options='v=1;p=17', type='mayaBinary')

import sys
from os import path
from subprocess import Popen

render_project = r"C:/Where/Ever/YourRenderProjectFolder"

renderer_folder = path.split(sys.executable)[0]
renderer_exec_name = "Render"

params = [renderer_exec_name]
params += ['-percentRes', '75']
params += ['-alpha', '0']
params += ['-proj', render_project]
params += ['-r', 'mr']
params += [render_file]

p = Popen(params, cwd=renderer_folder)
stdout, stderr = p.communicate()
That's it! Of course, your script will have to be run using Maya's Python interpreter (mayapy).
Do check out the docs for all the commands used for more options, esp.:
cmds.file()
cmds.viewFit()
cmds.loadPlugin()
Subprocess and Popen
PLUS, because of the awesomeness of Python, you can use modules like sched (docs) to schedule the running of this method in your Python code.
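For instance, a rough sched sketch (render_new_scenes is a hypothetical wrapper around the open/import/render steps above, and the one-hour interval is arbitrary):

import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)

def render_new_scenes():
    # hypothetical wrapper around the open/import/render steps above
    print("checking for new scene files...")
    # re-schedule ourselves to run again in an hour
    scheduler.enter(3600, 1, render_new_scenes, ())

scheduler.enter(3600, 1, render_new_scenes, ())
scheduler.run()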
Hope this was useful. Have fun with this. Cheers.
A lot depends on what you need to do.
If you want to run a script that has access to Maya functionality, you can run a Maya standalone instance as in Kartik's answer. The mayapy binary installed in the same folder as your Maya is the Maya Python interpreter; you can run it directly the same way you'd run python.exe. Mayapy has the same command flags as a regular Python interpreter.
Inside a mayapy session, once you call standalone.initialize() you will have a running Maya session - with a few exceptions, it is as if you were running inside a script tab in a regular Maya session.
To force Maya to run a particular script on startup, you can use the -c flag, just the way you would with Python. For example, you can start up Maya and print out the contents of an empty scene like this (note: I'm assuming mayapy.exe is on your path; you can also just cd to the Maya bin directory):
mayapy -c 'import maya.standalone; maya.standalone.initialize(); import maya.cmds as cmds; print cmds.ls()'
>>> [u'time1', u'sequenceManager1', u'renderPartition', u'renderGlobalsList1', u'defaultLightList1', u'defaultShaderList1', u'postProcessList1', u'defaultRenderUtilityList1', u'defaultRenderingList1', u'lightList1', u'defaultTextureList1', u'lambert1', u'particleCloud1', u'initialShadingGroup', u'initialParticleSE', u'initialMaterialInfo', u'shaderGlow1', u'dof1', u'defaultRenderGlobals', u'defaultRenderQuality', u'defaultResolution', u'defaultLightSet', u'defaultObjectSet', u'defaultViewColorManager', u'hardwareRenderGlobals', u'hardwareRenderingGlobals', u'characterPartition', u'defaultHardwareRenderGlobals', u'lightLinker1', u'persp', u'perspShape', u'top', u'topShape', u'front', u'frontShape', u'side', u'sideShape', u'hyperGraphInfo', u'hyperGraphLayout', u'globalCacheControl', u'brush1', u'strokeGlobals', u'ikSystem', u'layerManager', u'defaultLayer', u'renderLayerManager', u'defaultRenderLayer']
You can run mayapy interactively - effectively a command-line version of Maya - using the -i flag. This will start mayapy and give you a command prompt:
mayapy -i -c "import maya.standalone; maya.standalone.initialize()"
which again starts the standalone for you but keeps the session going instead of running a command and quitting.
To run a script file, just pass in the file as an argument. In that case you'd want to do as Kartik suggests and include the standalone.initialize() in the script. Then call it with
mayapy path/to/script.py
To suppress the userSetup, you can create an environment variable called MAYA_SKIP_USERSETUP_PY and set it to a non-zero value; that will load Maya without running usersetup. You can also change environment variables or path variables before running mayapy; for example, I can run mayapy from two different environments with these two bash aliases (on Windows you'd use SET instead of export to change the env vars):
alias mp_zip="export MAYA_DEV=;mayapy -i -c \"import maya.standalone; maya.standalone.initialize()\""
alias mp_std="export MAYA_DEV=C:/UL/tools/python/ulmaya;export ZOMBUILD='C:/ul/tools/python/dist/ulmaya.zip';mayapy -i -c \"import maya.standalone; maya.standalone.initialize()\""
This blog post includes a python module for spinning up Mayapy instances with different environments as needed.
If you want to interact with a running Maya from another environment - say, if you're trying to remote-control it from a handheld device or a C program - you can use the Maya commandPort to handle simple requests via TCP. For more complex situations you could set up a basic remoting service like this of your own, or use a pre-existing Python RPC module like RPyC or ZeroMQ.
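As a rough sketch of the commandPort route (the port number and exact flags are assumptions; check the commandPort documentation for your Maya version), inside Maya you open a port:

import maya.cmds as cmds
cmds.commandPort(name=":7002", sourceType="python")

and an external client can then send Python source over a plain TCP socket:

import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("localhost", 7002))
# send a small Python snippet for Maya to execute
client.send(b'import maya.cmds as cmds; cmds.polySphere()')
client.close()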

How to determine if Python script was run via command line?

Background
I would like my Python script to pause before exiting using something similar to:
raw_input("Press enter to close.")
but only if it is NOT run via command line. Command line programs shouldn't behave this way.
Question
Is there a way to determine if my Python script was invoked from the command line:
$ python myscript.py
versus double-clicking myscript.py to open it with the default interpreter in the OS?
If you're running it without a terminal, as when you click on "Run" in Nautilus, you can just check if it's attached to a tty:
import sys

if sys.stdin and sys.stdin.isatty():
    # running interactively
    print("running interactively")
else:
    with open('output', 'w') as f:
        f.write("running in the background!\n")
But, as ThomasK points out, you seem to be referring to running it in a terminal that closes just after the program finishes. I think there's no way to do what you want without a workaround; the program is running in a regular shell and attached to a terminal. The decision to exit immediately is made just after the program finishes, using information your program doesn't have readily available (the parameters passed to the executing shell or terminal).
You could go about examining the parent process information and detecting differences between the two kinds of invocations, but it's probably not worth it in most cases. Have you considered adding a command line parameter to your script (think --interactive)?
What I wanted was answered here: Determine if the program is called from a script in Python
You can just distinguish between "python" and "bash". This was already answered, I think, but you can keep it short as well.
#!/usr/bin/python
# -*- coding: utf-8 -*-
import psutil
import os

ppid = os.getppid()  # get parent process id
print(psutil.Process(ppid).name())
I don't think there's any reliable way to detect this (especially in a cross-platform manner). For example on OS X, when you double-click a .py file and it runs with "Python Launcher", it runs in a terminal, identically to if you executed it manually.
Although it may have other issues, you could package the script up with something like py2exe or Platypus, then you can have the double-clickable icon run a specific bit of code to differentiate (import mycode; mycode.main(gui = True) for example)
If you run Python IDLE then "pythonw.exe" is used to run your code, while from the command line "python.exe" is used. The Python folder path can vary, so you have to strip the path back down to the Python folder. m = '\\' followed by m = m[0] is just a way to get m to be '\' because of escaping.
import sys

a = sys.executable
m = '\\'
m = m[0]

# strip characters from the end of the executable path until the trailing backslash,
# leaving just the path of the Python folder in a
while True:
    b = len(a)
    c = a[(b - 1)]
    if c == m:
        break
    a = a[:(b - 1)]

if sys.executable == a + 'pythonw.exe':
    print('Running in Python IDLE')
else:
    print('Running in Command line')
Update for later versions (e.g. Python 3.6 on Ubuntu 16.04): The statement to get the name has changed to psutil.Process(os.getpid()).parent().name()
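If a script has to work across both psutil generations, a small compatibility sketch (my own addition, assuming only that psutil is installed) could look like this:

import os
import psutil

def parent_name():
    """Return the parent process name across old and new psutil APIs."""
    parent = psutil.Process(os.getpid()).parent
    if callable(parent):  # psutil >= 2.0: parent is a method
        parent = parent()
    name = parent.name
    if callable(name):    # psutil >= 2.0: name is a method
        name = name()
    return name

print(parent_name())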
I believe this CAN be done. At least, here is how I got it working in Python 2.7 under Ubuntu 14.04:
#!/usr/bin/env python
import os, psutil

# do stuff here

if psutil.Process(os.getpid()).parent.name == 'gnome-terminal':
    raw_input("Press enter to close...")
Note that -- in Ubuntu 14 with the Gnome desktop (aka Nautilus) -- you might need to do this:
from a Nautilus window (the file browser), select Edit(menu)->Preferences(item) then Behavior(tab)->Executable Text Files(section)->Ask Each Time(radio).
chmod your script to be executable, or -- from a Nautilus window (the file browser) -- right click on the file->Properties(item) then Permissions(tab)->Execute:Allow executing file as program(checkbox)
double-click your file. If you select "Run in Terminal", you should see the "Type enter to close..." prompt.
now try from a bash prompt; you should NOT see the prompt.
To see how this works, you can fiddle with this (based on the answer from @EduardoIvanec):
#!/usr/bin/env python
import os
import sys
import psutil

def parent_list(proc=None, indent=0):
    if not proc:
        proc = psutil.Process(os.getpid())
    pid = proc.pid
    name = proc.name
    pad = " " * indent
    s = "{0}{1:5d} {2:s}".format(pad, pid, name)
    parent = proc.parent
    if parent:
        s += "\n" + parent_list(parent, indent + 1)
    return s

def invoked_from_bash_cmdline():
    return psutil.Process(os.getpid()).parent.name == "bash"

def invoked_as_run_in_terminal():
    return psutil.Process(os.getpid()).parent.name == "gnome-terminal"

def invoked_as_run():
    return psutil.Process(os.getpid()).parent.name == "init"

if sys.stdin.isatty():
    print "running interactively"
    print parent_list()
    if invoked_as_run_in_terminal():
        raw_input("Type enter to close...")
else:
    with open('output', 'w') as f:
        f.write("running in the background!\n")
        f.write("parent list:\n")
        f.write(parent_list())
From the idea behind this answer, adding for Win10 compatibility (Ripped from Python 2.7 script; modify as needed):
import os, sys, psutil

status = 1

if __name__ == "__main__":
    args = sys.argv
    running_windowed = False
    running_from = psutil.Process(os.getpid()).parent().name()
    if running_from == 'explorer.exe':
        args.append([DEFAULT OR DOUBLE CLICK ARGS HERE])
        running_windowed = True

    status = MainFunc(args)  # MainFunc: the script's own entry point

    if running_windowed:
        print('Completed. Exit status of {}'.format(status))
        ready = raw_input('Press Enter To Close')

    sys.exit(status)
There are a number of switch-like statements you could add to make this more universal or to handle different defaults.
This is typically done manually; I don't think there is an automatic way to do it that works for every case.
You should add a --pause argument to your script that does the prompt for a key at the end.
When the script is invoked from a command line by hand, then the user can add --pause if desired, but by default there won't be any wait.
When the script is launched from an icon, the arguments in the icon should include the --pause, so that there is a wait. Unfortunately you will need to either document the use of this option so that the user knows that it needs to be added when creating an icon, or else, provide an icon creation function in your script that works for your target OS.
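A minimal sketch of that idea using argparse (the flag name --pause matches the suggestion above; the rest is illustrative):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--pause', action='store_true',
                    help='wait for a keypress before exiting (useful when launched from an icon)')
args = parser.parse_args()

# ... the actual work of the script goes here ...
print("doing the actual work")

if args.pause:
    raw_input("Press enter to close.")  # use input() on Python 3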
My solution was to create command line scripts using setuptools. Here are the relevant parts of myScript.py:
def main(pause_on_error=False):
    if run():
        print("we're good!")
    else:
        print("an error occurred!")
        if pause_on_error:
            raw_input("\nPress Enter to close.")
        sys.exit(1)

def run():
    pass  # run the program here
    return False  # or True if program runs successfully

if __name__ == '__main__':
    main(pause_on_error=True)
And the relevant parts of setup.py:
setup(
    entry_points={
        'console_scripts': [
            'myScript = main:main',
        ]
    },
)
Now if I open myScript.py with the Python interpreter (on Windows), the console window waits for the user to press enter if an error occurs. On the command line, if I run 'myScript', the program will never wait for user input before closing.
Although this isn't a very good solution, it does work (in windows at least).
You could create a batch file with the following contents:
@echo off
for %%x in (%cmdcmdline%) do if /i "%%~x"=="/c" set DOUBLECLICKED=1
start <location of python script>
if defined DOUBLECLICKED pause
If you want to be able to do this with a single file, you could try the following:
@echo off
setlocal EnableDelayedExpansion
set LF=^


:: The 2 empty lines above are necessary
for %%x in (%cmdcmdline%) do if /i "%%~x"=="/c" set DOUBLECLICKED=1
echo print("first line of python script") %LF% print("second and so on") > %temp%/pyscript.py
start /wait console_title pyscript.py
del %temp%/pyscript.py
if defined DOUBLECLICKED pause
Batch code from: Pausing a batch file when double-clicked but not when run from a console window?
Multi-line in batch from: DOS: Working with multi-line strings
Okay, the easiest way I found and made was to simply run the program in the command line, even if it was run in the Python IDLE.
import os, sys

exist = lambda x: os.path.exists(x)  # doesn't matter

if __name__ == '__main__':
    fname = "SomeRandomFileName"  # random default file name
    if exist(fname) == False:  # exist() is the lambda defined above
        jot(fname)  # jot() is a function that creates a blank file
        os.system('start YourProgram.py')  # << insert your program name here
        os.system('exit'); sys.exit()  # exits current shell (either IDLE or CMD)
    os.system('color a')  # makes it look cool! :p
    main()  # runs your code
    os.system("del %s" % fname)  # deletes the file for next time
Add this to the bottom of your script and, once run from either IDLE or Command Prompt, it will create a file, re-run the program in CMD, and exit the first instance.
Hope that helps! :)
I also had that question and, for me, the best solution is to set an environment variable in my IDE (PyCharm) and check if that variable exists to know if the script is being executed either via the command line or via the IDE.
To set an environment variable in PyCharm check:
How to set environment variables in PyCharm?
Example code (environment variable: RUNNING_PYCHARM = True):
import os

# the script is being executed via the command line
if "RUNNING_PYCHARM" not in os.environ:
    raw_input("Press enter to close.")
I hope it works for you.
Based on existing solutions and using sets:
import psutil

def running_interactively():
    """Return True if any of our parent processes is a known shell."""
    shells = {"cmd.exe", "bash.exe", "powershell.exe", "WindowsTerminal.exe"}
    parent_names = {parent.name() for parent in psutil.Process().parents()}
    # print(parent_names)
    # print(f"Shell in parents? {shells & parent_names}")
    return bool(shells & parent_names)

if not running_interactively():
    input("\nPress ENTER to continue.")
This answer is currently specific to Windows, but it can in theory be reconfigured to work with other operating systems. Rather than installing the psutil module like most of these answers recommend, you can make use of the subprocess module and the Windows tasklist command to explicitly get the name of the parent process of your Python program.
import os
import subprocess

shells = {"bash.exe", "cmd.exe", "powershell.exe", "WindowsTerminal.exe"}
# These are standard examples, but this can also be used to detect:
# - nested python.exe processes (IDLE, etc.)
# - IDEs used to develop your program (IPython, Eclipse, PyCharm, etc.)
# - other operating-system-dependent shells

# Execute the tasklist command to get the verbose info (/v), without the header (/nh),
# of a single process in CSV format (/fo csv), filtered so its PID equals os.getppid().
s = subprocess.check_output(["tasklist", "/v", "/fo", "csv", "/nh", "/fi", f"PID eq {os.getppid()}"])

# Decode from bytes to str, remove trailing whitespace and the outer quotations of the CSV format,
# and split along the quote-delimited commas.
# This step may differ and require adjustment when used on an OS other than Windows.
entry = s.decode("utf-8").strip().strip('"').split('","')

# Check first that entry is not an empty sequence, meaning the process has already ended.
# If it still exists, check whether the first element -- the executable -- is one of the shells we're looking for.
condition = entry and entry[0] in shells
I hope this is helpful for anyone looking for an answer to this problem while minimizing the number of dependencies you'd need.
This was tested in Python 3.8 and uses an f-string in the subprocess.check_output line of the code, so please be sure to convert the f-string to a compatible syntax if you're working with a version of Python before f-strings were introduced.

Starting jython program from python using subprocess module?

I have a Jython server script (called rajant_server.py) that interacts with a Java API file to communicate over special network radios. I have a Python program which acts as a client (and does several other things as well). Currently, I have to start the server first by opening a command/terminal window and typing:
cd [path to directory containing rajant_server.py]
jython rajant_server.py
Once the server successfully connects it waits for the client, which I start by running:
cd [path to directory containing python client program]
python main.py
When the client connects, the server prints out information (currently for debugging) in its command/terminal window, and the client program prints out debug information in its command/terminal window. What I want to do is do away with this complex process by calling jython from my 'main.py' program using the subprocess module.
The problem is twofold:
1 - I need the rajant_server.py program to open in its own terminal/command window
2 - jython needs to be run in the directory where the rajant_server.py file is stored; in other words, typing the following into the command/terminal window doesn't work (don't ask me why):
jython C:/code_dir/comm/server/rajant_server.py
but:
cd C:/code_dir/comm/server
jython rajant_server.py
does work.
Okay... I just got something to work. It seems like a bit of a hack, so I would still love ideas on a better approach. Here is what I am currently doing:
serverfile = r'rajant_server_v2.py'
serverpath = os.path.join(os.path.realpath('.'), 'Comm', serverfile)
serverpath = os.path.normpath(serverpath)
[path, file] = os.path.split(serverpath)

command = '/C jython ' + file + '\n'

savedir = os.getcwd()
os.chdir(path)
rajantserver = subprocess.Popen(["cmd", command],
                                creationflags=subprocess.CREATE_NEW_CONSOLE)

# change directory back
os.chdir(savedir)

# start client
rajant = rajant_comm.rajant_comm()
rajant.start()
If you have a solution that will work in both Linux and Windows, you would be my hero. For some reason I couldn't change the stdin or stdout specifications on the subprocess when I added creationflags = subprocess.CREATE_NEW_CONSOLE.
The Popen function in subprocess accepts an optional parameter 'cwd', to define the current working directory of the child process.
rajantserver = subprocess.Popen(["cmd", command],
                                creationflags=subprocess.CREATE_NEW_CONSOLE,
                                cwd=path)
You can get rid of the os.getcwd call and the two os.chdir calls this way. If you want to be able to use this script on Linux, you have to do without 'cmd'. So call Popen with ["jython", file] as first argument.
EDIT: I've just seen that CREATE_NEW_CONSOLE is not defined in the subprocess module when running on Linux. Use this:
creationflags = getattr(subprocess, "CREATE_NEW_CONSOLE", 0),
This will be the same as before, except it falls back to the default value 0 when the subprocess module does not define CREATE_NEW_CONSOLE.
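Putting those pieces together, a cross-platform sketch might look like this (the cmd /C wrapper is only used on Windows; the paths and file names follow the question's setup):

import os
import subprocess
import sys

serverfile = 'rajant_server_v2.py'
serverpath = os.path.normpath(os.path.join(os.path.realpath('.'), 'Comm', serverfile))
path, name = os.path.split(serverpath)

if sys.platform.startswith('win'):
    # open the server in its own console window on Windows
    args = ['cmd', '/C', 'jython', name]
else:
    # on Linux, run jython directly (no separate console window)
    args = ['jython', name]

rajantserver = subprocess.Popen(
    args,
    cwd=path,  # run jython from the server's directory
    creationflags=getattr(subprocess, "CREATE_NEW_CONSOLE", 0),
)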
