I'm writing a daemon in Python using the python-daemon package. The daemon is started at boot time (init.d) and needs to access various devices.
The daemon is to run on an embedded system (BeagleBone) running Ubuntu.
My problem is that I want to run the daemon as an unprivileged user (e.g. mydaemon) rather than root.
To allow the daemon to access the devices, I added that user to the required groups.
In the Python code I use daemon.DaemonContext(uid=uidofmydaemon).
The process started by root daemonizes nicely and is owned by the correct user, but I get permission-denied errors when trying to access the devices.
I wrote a small test application, and it seems that the process does not inherit the group memberships of the user:
#!/usr/bin/python
import logging, daemon, os

if __name__ == '__main__':
    lh = logging.StreamHandler()
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)
    logger.addHandler(lh)

    uid = 1001  ## UID of the daemon user
    with daemon.DaemonContext(uid=uid,
                              files_preserve=[lh.stream],
                              stderr=lh.stream):
        logger.warn("UID   : %s" % str(os.getuid()))
        logger.warn("groups: %s" % str(os.getgroups()))
When I run the above code as the user with uid=1001, I get something like
$ ./testdaemon.py
UID: 1001
groups: [29,107,1001]
whereas when I run the above code as root (or su), I get:
$ sudo ./testdaemon.py
UID: 1001
groups: [0]
How can I create a daemon-process started by root but with a different effective uid and intact group memberships?
My current solution involves dropping root privileges before starting the actual daemon, using the chuid argument of start-stop-daemon:
start-stop-daemon \
--start \
--chuid daemonuser \
--name testdaemon \
--pidfile /var/run/testdaemon/test.pid \
--startas /tmp/testdaemon.py \
-- \
--pidfile /var/run/testdaemon/test.pid \
--logfile=/var/log/testdaemon/testdaemon.log
The drawback of this solution is that I need to create all directories the daemon ought to write to (notably /var/run/testdaemon and /var/log/testdaemon), with the proper file permissions, before starting the actual daemon.
I would have preferred to write that logic in Python rather than Bash.
For now that works, but methinks this should be solvable in a more elegant fashion.
This can be fixed by monkey-patching the daemon module; the code is as follows:
import os, grp, pwd

class DaemonError(Exception):
    pass

class DaemonOSEnvironmentError(DaemonError, OSError):
    pass

def change_process_owner(uid, gid):
    try:
        # This line adds all the groups the user is a member of,
        # to keep the expected permissions
        os.setgroups(
            [g.gr_gid for g in grp.getgrall()
             if pwd.getpwuid(uid).pw_name in g.gr_mem
            ]
        )

        os.setgid(gid)
        os.setuid(uid)
    except Exception as exc:
        error = DaemonOSEnvironmentError(
            u"Unable to change process owner (%(exc)s)" % vars())
        raise error
And then the monkey patch:
import daemon
daemon.daemon.change_process_owner = change_process_owner
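With the patch applied, a DaemonContext started as root should keep the target user's supplementary groups. Below is a minimal usage sketch; the uid/gid values and the run_daemon() entry point are placeholders, not from the original code. Newer python-daemon releases may also accept an initgroups argument to DaemonContext that does the same thing, so check whether your installed version supports it first.

import daemon

# apply the patch before the context is opened
daemon.daemon.change_process_owner = change_process_owner

with daemon.DaemonContext(uid=1001, gid=1001):
    run_daemon()  # placeholder for the daemon's real entry point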
from os import getenv, listdir, path
from kubernetes import client, config
from kubernetes.client.rest import ApiException  # needed for the 404 check below
from kubernetes.stream import stream
import constants, logging
from pprint import pprint


def listdir_fullpath(directory):
    return [path.join(directory, file) for file in listdir(directory)]


def active_context(kubeConfig, cluster):
    config.load_kube_config(config_file=kubeConfig, context=cluster)


def kube_exec(command, apiInstance, podName, namespace, container):
    response = None
    execCommand = [
        '/bin/bash',
        '-c',
        command]

    try:
        response = apiInstance.read_namespaced_pod(name=podName,
                                                   namespace=namespace)
    except ApiException as e:
        if e.status != 404:
            print(f"Unknown error: {e}")
            exit(1)

    if not response:
        print("Pod does not exist")
        exit(1)

    try:
        response = stream(apiInstance.connect_get_namespaced_pod_exec,
                          podName,
                          namespace,
                          container=container,
                          command=execCommand,
                          stderr=True,
                          stdin=False,
                          stdout=True,
                          tty=False,
                          _preload_content=True)
    except Exception as e:
        print("error in executing cmd")
        exit(1)

    pprint(response)


if __name__ == '__main__':
    configPath = constants.CONFIGFILE
    kubeConfigList = listdir_fullpath(configPath)
    kubeConfig = ':'.join(kubeConfigList)
    active_context(kubeConfig, "ort.us-west-2.k8s.company-foo.net")
    apiInstance = client.CoreV1Api()
    kube_exec("whoami", apiInstance, "podname-foo", "namespace-foo", "container-foo")
I run this code, and the response I get from running whoami is 'java\n'.
How can I run as root? Also, I can't find a good doc for this client anywhere (the docs on the git repo are pretty horrible); if you can link me to any, it would be awesome.
EDIT: I just tried on a couple of different pods and containers; it looks like some of them default to root. I would still like to be able to choose my user when I run a command, so the question is still relevant.
You have influence over the UID (not the user directly, as far as I know) when you launch the Pod, but from that point forward there is no equivalent to docker exec -u in Kubernetes -- you can attach to the Pod, running as whatever UID it was launched as, but you cannot change the UID.
I would hypothesize that's a security concern in locked-down clusters, since one would not want someone with kubectl access to be able to elevate privileges.
If you need to run as root in your container, then you should change the value of securityContext: runAsUser: 0 and then drop privileges for running your main process. That way new commands (spawned by your exec command) will run as root, just as your initial command: does.
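For reference, here is a hedged sketch of how runAsUser could be set through the same Python client when the Pod is created; the image name and the pod/container names are placeholders, not taken from the original manifest:

from kubernetes import client

# Run the container as root so that commands exec'd into it later also run as root.
security_context = client.V1SecurityContext(run_as_user=0)

container = client.V1Container(
    name="container-foo",
    image="example/image:latest",  # placeholder image
    security_context=security_context,
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="podname-foo", namespace="namespace-foo"),
    spec=client.V1PodSpec(containers=[container]),
)

# client.CoreV1Api().create_namespaced_pod(namespace="namespace-foo", body=pod)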
I am using Open Semantic Search (OSS) and I would like to monitor its processes using the Flower tool. The Celery workers it needs should already be provided, as OSS states on its website:
The workers will do tasks like analysis and indexing of the queued files. The workers are implemented in etl/tasks.py and are started automatically on boot by the opensemanticsearch service.
This tasks.py file looks as follows:
#!/usr/bin/python3
# -*- coding: utf-8 -*-
#
# Queue tasks for batch processing and parallel processing
#

import time  # used by the wait parameters below

# Queue handler
from celery import Celery

# ETL connectors
from etl import ETL
from etl_delete import Delete
from etl_file import Connector_File
from etl_web import Connector_Web
from etl_rss import Connector_RSS

verbose = True
quiet = False

app = Celery('etl.tasks')
app.conf.CELERYD_MAX_TASKS_PER_CHILD = 1

etl_delete = Delete()
etl_web = Connector_Web()
etl_rss = Connector_RSS()


#
# Delete document with URI from index
#
@app.task(name='etl.delete')
def delete(uri):
    etl_delete.delete(uri=uri)


#
# Index a file
#
@app.task(name='etl.index_file')
def index_file(filename, wait=0, config=None):
    if wait:
        time.sleep(wait)

    etl_file = Connector_File()
    if config:
        etl_file.config = config

    etl_file.index(filename=filename)


#
# Index file directory
#
@app.task(name='etl.index_filedirectory')
def index_filedirectory(filename):
    from etl_filedirectory import Connector_Filedirectory

    connector_filedirectory = Connector_Filedirectory()
    result = connector_filedirectory.index(filename)

    return result


#
# Index a webpage
#
@app.task(name='etl.index_web')
def index_web(uri, wait=0, downloaded_file=False, downloaded_headers=[]):
    if wait:
        time.sleep(wait)

    result = etl_web.index(uri, downloaded_file=downloaded_file, downloaded_headers=downloaded_headers)

    return result


#
# Index full website
#
@app.task(name='etl.index_web_crawl')
def index_web_crawl(uri, crawler_type="PATH"):
    import etl_web_crawl

    result = etl_web_crawl.index(uri, crawler_type)

    return result


#
# Index webpages from sitemap
#
@app.task(name='etl.index_sitemap')
def index_sitemap(uri):
    from etl_sitemap import Connector_Sitemap

    connector_sitemap = Connector_Sitemap()
    result = connector_sitemap.index(uri)

    return result


#
# Index RSS Feed
#
@app.task(name='etl.index_rss')
def index_rss(uri):
    result = etl_rss.index(uri)

    return result


#
# Enrich with / run plugins
#
@app.task(name='etl.enrich')
def enrich(plugins, uri, wait=0):
    if wait:
        time.sleep(wait)

    etl = ETL()
    etl.read_configfile('/etc/opensemanticsearch/etl')
    etl.read_configfile('/etc/opensemanticsearch/enhancer-rdf')

    etl.config['plugins'] = plugins.split(',')

    filename = uri

    # if it exists, delete the protocol prefix file://
    if filename.startswith("file://"):
        filename = filename.replace("file://", '', 1)

    parameters = etl.config.copy()
    parameters['id'] = uri
    parameters['filename'] = filename

    parameters, data = etl.process(parameters=parameters, data={})

    return data


#
# Read command line arguments and start
#
# if running (not imported to use its functions), run main function
if __name__ == "__main__":
    from optparse import OptionParser

    parser = OptionParser("etl-tasks [options]")
    parser.add_option("-q", "--quiet", dest="quiet", action="store_true", default=False, help="Don't print status (filenames) while indexing")
    parser.add_option("-v", "--verbose", dest="verbose", action="store_true", default=False, help="Print debug messages")

    (options, args) = parser.parse_args()

    if options.verbose == False or options.verbose == True:
        verbose = options.verbose
        etl_delete.verbose = options.verbose
        etl_web.verbose = options.verbose
        etl_rss.verbose = options.verbose

    if options.quiet == False or options.quiet == True:
        quiet = options.quiet

    app.worker_main()
I read multiple tutorials about Celery, and from my understanding this line should do the job:
celery -A etl.tasks flower
but it doesn't. The result is the error
Error: Unable to load celery application. The module etl was not found.
Same for
celery -A etl.tasks worker --loglevel=debug
So Celery itself seems to be causing the trouble, not Flower. I also tried e.g. celery -A etl.index_filedirectory worker --loglevel=debug, but with the same result.
What am I missing? Do I have to somehow tell Celery where to find etl.tasks? Online research doesn't really show a similar case; most of the "Module not found" errors seem to occur while importing things. So it's possibly a silly question, but I couldn't find a solution anywhere. I hope you guys can help me. Unfortunately, I won't be able to respond until Monday though, sorry in advance.
I had the same issue. I installed and configured my queue as follows, and it works.
Install RabbitMQ
MacOS
brew install rabbitmq
sudo vim ~/.bash_profile
In bash_profile add the following line:
PATH=$PATH:/usr/local/sbin
Then reload bash_profile:
source ~/.bash_profile
Linux
sudo apt-get install rabbitmq-server
Configure RabbitMQ
Launch the queue:
sudo rabbitmq-server
In another Terminal, configure the queue:
sudo rabbitmqctl add_user myuser mypassword
sudo rabbitmqctl add_vhost myvhost
sudo rabbitmqctl set_user_tags myuser mytag
sudo rabbitmqctl set_permissions -p myvhost myuser ".*" ".*" ".*"
Launch Celery
I would suggest going to the folder that contains task.py and using the following command:
celery -A task worker -l info -Q celery --concurrency 5
Beware that this error can mean two things:
The module is missing.
The module exists but cannot be loaded, e.g. because it has errors in it, such as a SyntaxError.
To check that it's not the latter, run:
python -c "import <myModuleContainingTasksDotPyFile>"
In the context of this question:
python -c "import etl"
If it crashes, fix this first (unlike with Celery, you'll get a detailed error message).
Solutions above did not work for me.
I had the same issue, and my problem was that in the main celery.py (which was in the SmartCalend folder) I had:
app = Celery('proj')
but instead I must type there:
app = Celery('SmartCalend')
where SmartCalend is the actual app name where celery.py belongs (!), not any random word, but precisely the app name. That's mentioned nowhere except in the official docs here:
Try export PYTHONPATH=<parent directory>, where the parent directory is the folder where etl is. Run the Celery worker and see if it fixes your problem. This is probably one of the most common Celery "issues" (not really Celery, but Python in general). Alternatively, run the Celery worker from that folder.
Answer for MacOS Catalina:
When you install celery with pip (pip install celery), python can import celery, but you are not able to launch celery from the terminal because the terminal does not know of the celery executable.
Add celery to the path to fix:
nano ~/.bash_profile
In the file add: export PATH="/Users/gavinbelson/Library/Python/2.7/bin:$PATH"
To save the file in the nano editor: ctrl+o, then enter, then ctrl+x
To update the terminal with your change type: source ~/.bash_profile
Now you should be able to type celery in the terminal window
---- Note: this is for the default python terminal command, which runs version 2.7. If you are using python3 to run Python, you would need to alter the path variable accordingly.
I know cron jobs run in a different environment than the command line, but I'm using absolute paths everywhere and I don't understand why my script behaves differently. I believe it is somehow related to my cron_supervisor, which runs the Django manage.py within a subprocess.
Cron:
0 * * * * /home/p1/.virtualenvs/prod/bin/python /home/p1/p1/manage.py cron_supervisor --command="/home/p1/.virtualenvs/prod/bin/python /home/p1/p1/manage.py envoyer_argent"
This calls the cron_supervisor, which in turn calls the script, but the script is not executed as it would be if I ran:
/home/p1/.virtualenvs/prod/bin/python /home/p1/p1/manage.py envoyer_argent
Is there something particular that needs to be done for the script to be called properly when running it through another script?
Here is the supervisor, which basically handles errors and makes sure we get warned if something goes wrong within the cron scripts themselves.
import logging
import os
from subprocess import PIPE, Popen

from django.core.management.base import BaseCommand

from command_utils import email_admin_error, isomorphic_logging
from utils.send_slack_message import send_slack_message

CURRENT_DIR = os.path.dirname(os.path.abspath(__file__))
PROJECT_DIR = CURRENT_DIR + '/../../../'

logging.basicConfig(
    level=logging.INFO,
    filename=PROJECT_DIR + 'cron-supervisor.log',
    format='%(asctime)s %(levelname)s: %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'
)


class Command(BaseCommand):
    help = "Control a subprocess"

    def add_arguments(self, parser):
        parser.add_argument(
            '--command',
            dest='command',
            help="Command to execute",
        )
        parser.add_argument(
            '--mute_on_success',
            dest='mute_on_success',
            action='store_true',
            help="Don't post any message on success",
        )

    def handle(self, *args, **options):
        try:
            isomorphic_logging(logging, "Starting cron supervisor with command \"" + options['command'] + "\"")

            if options['command']:
                self.command = options['command']
            else:
                error_message = "Empty required parameter --command"
                # log error
                isomorphic_logging(logging, error_message, "error")
                # send slack message
                send_slack_message("Cron Supervisor Error: " + error_message)
                # send email to admin
                email_admin_error("Cron Supervisor Error", error_message)
                raise ValueError(error_message)

            if options['mute_on_success']:
                self.mute_on_success = True
            else:
                self.mute_on_success = False

            # running process
            process = Popen([self.command], stdout=PIPE, stderr=PIPE, shell=True)
            output, error = process.communicate()

            if output:
                isomorphic_logging(logging, "Output from cron:" + output)

            # check for any subprocess error
            if process.returncode != 0:
                error_message = 'Command \"{command}\" - Error \nReturn code: {code}\n```{error}```'.format(
                    code=process.returncode,
                    error=error,
                    command=self.command,
                )
                self.handle_error(error_message)
            else:
                message = "Command \"{command}\" ended without error".format(command=self.command)
                isomorphic_logging(logging, message)
                # post message on slack if process isn't muted_on_success
                if not self.mute_on_success:
                    send_slack_message(message)
        except Exception as e:
            error_message = 'Command \"{command}\" - Error \n```{error}```'.format(
                error=e,
                command=self.command,
            )
            self.handle_error(error_message)

    def handle_error(self, error_message):
        # log the error in local file
        isomorphic_logging(logging, error_message)
        # post message in slack
        send_slack_message(error_message)
        # email admin
        email_admin_error("Cron Supervisor Error", error_message)
Example of a script that is not executed properly when called by the cron through the cron_supervisor:
# -*- coding: utf-8 -*-
import json
import logging
import os

from django.conf import settings
from django.core.management.base import BaseCommand

from utils.lock import handle_lock

logging.basicConfig(
    level=logging.INFO,
    filename=os.path.join(settings.BASE_DIR, 'crons.log'),
    format='%(asctime)s %(levelname)s: %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'
)


class Command(BaseCommand):
    help = "Envoi de l'argent en attente"

    @handle_lock
    def handle(self, *args, **options):
        logging.info("some logs that won't be logged (not called)")

logging.info("Those logs will be correctly logged")
Additionally, I have another issue with the logging which I don't quite understand either: I specify that logs should be stored in cron-supervisor.log, but they don't get stored there, and I couldn't figure out why. (That's not related to my main issue; it just doesn't help with debugging.)
Your cron job can't just run the Python interpreter in the virtualenv; this is completely insufficient. You need to activate the env just like in an interactive environment.
0 * * * * . /home/p1/.virtualenvs/prod/bin/activate; python /home/p1/p1/manage.py cron_supervisor --command="python /home/p1/p1/manage.py envoyer_argent"
This is already complex enough that you might want to create a separate wrapper script containing these commands.
Without proper diagnostics of how your current script doesn't work, it's entirely possible that this fix alone is insufficient. Cron jobs do not only (or particularly) need absolute paths; the main differences compared to interactive shells are that cron jobs run with a different and more sparse environment, where e.g. the shell's PATH, various library paths, environment variables etc. can be different or missing altogether; and of course, no interactive facilities are available.
The system variables will hopefully be taken care of by your virtualenv; if it's correctly done, activating it will set up all the variables (PATH, PYTHONPATH, etc) your script needs. There could still be things like locale settings which are set up by your shell only when you log in interactively; but again, without details, let's just hope this isn't an issue for you.
The reason some people recommend absolute paths is that this will work regardless of your working directory. But a correctly written script should work fine in any directory; if it matters, the cron job will start in the owner's home directory. If you wanted to point to a relative path from there, this will work fine inside a cron job just as it does outside.
As an aside, you probably should not use subprocess.Popen() if one of the higher-level wrappers from the subprocess module does what you want. Unless compatibility with legacy Python versions is important, you should probably use subprocess.run() ... though running Python as a subprocess of Python is also often a useless complication. See also my answer to this related question.
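As a hedged sketch of that suggestion, the supervisor's process handling could look roughly like this with subprocess.run (the command string is a placeholder and the error handling is trimmed to the essentials):

import subprocess

# run() waits for the command, captures its output, and exposes the return code
# (capture_output and text require Python 3.7+)
result = subprocess.run(
    "python /home/p1/p1/manage.py envoyer_argent",  # placeholder command
    shell=True,
    capture_output=True,
    text=True,
)

if result.returncode != 0:
    print("Command failed:", result.stderr)
else:
    print("Command output:", result.stdout)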
I have a vanilla Python app that connects to an SQLite database.
Everything works fine until I try to run it as a daemon. Here is the code I'm using to do that:
def start(self):
    if self.lockfile.is_locked():
        exit_with_code(7, self.pid_file)

    # If we're running in debug, run in the foreground, else daemonise
    if self.options['debug']:
        try:
            self.main()
        except KeyboardInterrupt:
            pass
        finally:
            self.close_gracefully()
    else:
        context = daemon.DaemonContext(
            files_preserve=[self.logger.socket(), self.lockfile]
        )
        context.signal_map = {
            signal.SIGTERM: self.close_gracefully
        }
        with context:
            self.main()
I can run it in the foreground with python -m starter -debug and everything is fine; my app writes into the database. But when I leave the debug flag off, I see the following when I try to write:
no such table: Frontends
I know that the frontends table exists because I've opened the database up. I assume that python is finding the database, because there would be an entirely different error message otherwise.
All my files are owned by vagrant, and ls -l shows the following:
-rwxrwxrwx 1 vagrant vagrant 9216 Nov 9 18:09 development.sqlite
Anyone got any tips?
Update
As requested, here is the code for my db
import os
import sqlite3

class Database(object):
    def __init__(self, db_file='/vagrant/my_daemon/db/development.sqlite'):
        self.db = sqlite3.connect(db_file)

        if os.path.exists(db_file):
            print "db exists"
And when I run this it prints "db exists". I instantiate the database in starter.py with a call to Database().
python-daemon closes all open file descriptors (except stdin, stdout and stderr) when you daemonise.
I spent ages trying to figure out which files to keep open to prevent my database from being inaccessible, and in the end I found that it's easier to initialise the database inside the daemon context rather than outside. That way I don't need to worry about which files should stay open.
Now everything is working fine.
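A minimal sketch of that arrangement, reusing the Database class from the question (the main_loop() call is a placeholder for the daemon's actual work, not part of the original code):

import daemon

def run():
    with daemon.DaemonContext():
        # Connect only after daemonising, so the database's file descriptor
        # is never among those closed by the daemonisation step.
        db = Database('/vagrant/my_daemon/db/development.sqlite')
        main_loop(db)  # placeholder for the daemon's main loop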
I'm using a crontab to run a maintenance script for my Minecraft server. Most of the time it works fine, except when the crontab tries to use the restart script. If I run the restart script manually, there aren't any issues. Because I believe it has to do with path names, I'm trying to make sure any Minecraft command is always run FROM the Minecraft directory, so I'm wrapping the command in pushd/popd:
os.system("pushd /directory/path/here")
os.system("command to sent to minecraft")
os.system("popd")
Below is an interactive session taking Minecraft out of the equation: a simple "ls" test. As you can see, it does not run the os.system command from the pushd directory at all, but instead from /etc/, which is the directory in which I was running Python to illustrate my point. Clearly pushd isn't working via Python, so I'm wondering how else I can achieve this. Thanks!
>>> def test():
... import os
... os.system("pushd /home/[path_goes_here]/minecraft")
... os.system("ls")
... os.system("popd")
...
>>> test()
~/minecraft /etc
DIR_COLORS cron.weekly gcrypt inputrc localtime mime.types ntp ppp rc3.d sasldb2 smrsh vsftpd.ftpusers
DIR_COLORS.xterm crontab gpm-root.conf iproute2 login.defs mke2fs.conf ntp.conf printcap rc4.d screenrc snmp vsftpd.tpsave
X11 csh.cshrc group issue logrotate.conf modprobe.d odbc.ini profile rc5.d scsi_id.config squirrelmail vz
adjtime csh.login group- issue.net logrotate.d motd odbcinst.ini profile.d rc6.d securetty ssh warnquota.conf
aliases cyrus.conf host.conf java lvm mtab openldap protocols redhat-release security stunnel webalizer.conf
alsa dbus-1 hosts jvm lynx-site.cfg multipath.conf opt quotagrpadmins resolv.conf selinux sudoers wgetrc
alternatives default hosts.allow jvm-commmon lynx.cfg my.cnf pam.d quotatab rndc.key sensors.conf sysconfig xinetd.conf
bashrc depmod.d hosts.deny jwhois.conf mail named.caching-nameserver.conf passwd rc rpc services sysctl.conf xinetd.d
blkid dev.d httpd krb5.conf mail.rc named.conf passwd- rc.d rpm sestatus.conf termcap yum
cron.d environment imapd.conf ld.so.cache mailcap named.rfc1912.zones pear.conf rc.local rsyslog.conf setuptool.d udev yum.conf
cron.daily exports imapd.conf.tpsave ld.so.conf mailman netplug php.d rc.sysinit rwtab shadow updatedb.conf yum.repos.d
cron.deny filesystems init.d ld.so.conf.d makedev.d netplug.d php.ini rc0.d rwtab.d shadow- vimrc
cron.hourly fonts initlog.conf libaudit.conf man.config nscd.conf pki rc1.d samba shells virc
cron.monthly fstab inittab libuser.conf maven nsswitch.conf postfix rc2.d sasl2 skel vsftpd
sh: line 0: popd: directory stack empty
===
(CentOS server with python 2.4)
In Python 2.5 and later, I think a better method would be using a context manager, like so:
import contextlib
import os


@contextlib.contextmanager
def pushd(new_dir):
    previous_dir = os.getcwd()
    os.chdir(new_dir)
    try:
        yield
    finally:
        os.chdir(previous_dir)
You can then use it like the following:
with pushd('somewhere'):
    print os.getcwd()  # "somewhere"

print os.getcwd()  # "wherever you started"
By using a context manager you will be exception and return value safe: your code will always cd back to where it started from, even if you throw an exception or return from inside the context block.
You can also nest pushd calls in nested blocks, without having to rely on a global directory stack:
with pushd('somewhere'):
    # do something
    with pushd('another/place'):
        # do something else
    # do something back in "somewhere"
Each shell command runs in a separate process. It spawns a shell, executes the pushd command, and then the shell exits.
Just run the commands within a single shell invocation:
os.system("cd /directory/path/here; run the commands")
A nicer (perhaps) way is with the subprocess module:
from subprocess import Popen
Popen("run the commands", shell=True, cwd="/directory/path/here")
pushd and popd have some added functionality: they store previous working directories in a stack - in other words, you can pushd five times, do some stuff, and popd five times to end up where you started. You're not using that here, but it might be useful for others searching for questions like this. This is how you can emulate it:
import os

# initialise a directory stack
pushstack = list()

def pushdir(dirname):
    global pushstack
    pushstack.append(os.getcwd())
    os.chdir(dirname)

def popdir():
    global pushstack
    os.chdir(pushstack.pop())
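Applied to the original snippet, that would look something like this (the Minecraft command is still just a placeholder string):

pushdir("/directory/path/here")
os.system("command to send to minecraft")
popdir()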
I don't think you can call pushd from within an os.system() call:
>>> import os
>>> ret = os.system("pushd /tmp")
sh: pushd: not found
Maybe, just maybe, your system actually provides a pushd binary that triggers a shell-internal function (FreeBSD has some tricks like this, but not for pushd). But the current working directory of a process cannot be influenced by other processes -- so your first system() starts a shell and runs a hypothetical pushd, the next starts a shell and runs ls, the next starts a shell and runs a hypothetical popd... none of which influence each other.
You can use os.chdir("/home/path/") to change path: http://docs.python.org/library/os.html#os-file-dir
No need to use pushd -- just use os.chdir:
>>> import os
>>> os.getcwd()
'/Users/me'
>>> os.chdir('..')
>>> os.getcwd()
'/Users'
>>> os.chdir('me')
>>> os.getcwd()
'/Users/me'
Or make a class to use with 'with':
import os

class pushd:  # pylint: disable=invalid-name
    __slots__ = ('_pushstack',)

    def __init__(self, dirname):
        self._pushstack = list()
        self.pushd(dirname)

    def __enter__(self):
        return self

    def __exit__(self, exec_type, exec_val, exc_tb) -> bool:
        # skip all the intermediate directories, just go back to the original one.
        if self._pushstack:
            os.chdir(self._pushstack.pop(0))
        if exec_type:
            return False
        return True

    def popd(self) -> None:
        if len(self._pushstack):
            os.chdir(self._pushstack.pop())

    def pushd(self, dirname) -> None:
        self._pushstack.append(os.getcwd())
        os.chdir(dirname)
with pushd(dirname) as d:
    # ... do stuff in that dirname
    d.pushd("../..")
    d.popd()
If you really need a stack, i.e. if you want to do several pushd and popd, see naught101's answer above.
If not, simply do:
olddir = os.getcwd()
os.chdir('/directory/path/here')
os.system("command to sent to minecraft")
os.chdir(olddir)