Celery and oslo config not working together - python

What is the best way to initialize oslo.config in a Celery project?
I want to do this:
from phantom.openstack.common import cfg
import os
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(__file__),
                                                os.pardir, os.pardir))
conf_file = os.path.join(possible_topdir, 'etc/phantom', 'phantom.conf')
print 'config done 1'

config_files = None
if os.path.exists(conf_file):
    config_files = [conf_file]
print 'config done 2'

cfg.CONF(project='phantom', default_config_files=[conf_file])
I have a config file in which I store a set of URLs and user privileges that I use to access different systems during task execution.
I tried setting it up in my celery.py, which currently looks like this:
from __future__ import absolute_import
from celery import Celery
celery = Celery(include=['phantom.tasks.celery_tasks'])
celery.config_from_object('bin.celeryconfig')
If I insert the oslo.config initialization into this block, it gives me back this error:
LM-SJN-00871893:Phantom uruddarraju$ celery -A phantom.celery.celery worker -l DEBUG
usage: celery [-h] [--version] [--config-file PATH] [--config-dir DIR]
celery: error: unrecognized arguments: -A phantom.celery.celery worker -l DEBUG
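A likely cause of the "unrecognized arguments" error is that oslo.config parses sys.argv by default, so it tries to consume Celery's own flags (-A, worker, -l DEBUG). Below is a minimal sketch of the usual workaround: pass an explicit empty argument list so only the config file is read. This assumes the vendored phantom.openstack.common.cfg exposes the standard oslo.config ConfigOpts call signature.
import os

from phantom.openstack.common import cfg

possible_topdir = os.path.normpath(os.path.join(os.path.abspath(__file__),
                                                os.pardir, os.pardir))
conf_file = os.path.join(possible_topdir, 'etc/phantom', 'phantom.conf')

config_files = [conf_file] if os.path.exists(conf_file) else None

# An empty args list keeps oslo.config away from Celery's command-line arguments.
cfg.CONF([], project='phantom', default_config_files=config_files)
Running this at import time in celery.py (after creating the Celery app) should then no longer conflict with the celery worker command line.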

Related

django-supervisor connection refused

I am using supervisor to run Celery with a Django 1.8.8 setup, together with django-supervisor==0.3.4 and supervisor==3.2.0.
But when I restart all processes, I get
unix:///tmp/supervisor.sock refused connection
and I am not able to restart any processes:
python manage.py supervisor --config-file=setting/staging_supervisor.conf --settings=setting.staging_settings restart all
Supervisor config file:
[supervisord]
logfile_maxbytes=10MB ; maximum size of logfile before rotation
logfile_backups=3 ; number of backed up logfiles
loglevel=warn ; info, debug, warn, trace
nodaemon=false ; run supervisord as a daemon
minfds=1024 ; number of startup file descriptors
minprocs=200 ; number of process descriptors
childlogdir=/logs/ ; where child log files will live
[program:celeryd_staging]
environment=PATH="{{ PROJECT_DIR }}/../../bin"
command={{ PYTHON }} {{ PROJECT_DIR }}/manage.py celeryd -l info -c 1 --logfile=/logs/staging-celeryd.log --settings=setting.staging_celery_settings
redirect_stderr=false
[program:celerybeat_staging]
environment=PATH="{{ PROJECT_DIR }}/../../bin"
command=/{{ PYTHON }} {{ PROJECT_DIR }}/manage.py celerybeat --loglevel=INFO --logfile=/logs/staging-celerybeat.log --settings=setting.staging_celery_settings
redirect_stderr=false
[group:tasks]
environment=PATH="{{ PROJECT_DIR}}/../../bin"
programs=celeryd_staging,celerybeat_staging
[program:autoreload]
exclude=true
[program:runserver]
exclude=true
Got the solution. The supervisor process was not being reloaded, since supervisord was running inside my virtualenv (I am using the django-supervisor package).
Once I reloaded the supervisor process, the refused-connection error went away.
Make sure there isn't already another /tmp/supervisor.sock owned by some user other than you (like root).
If it is not a permissions problem, add this to your supervisord configuration:
[unix_http_server]
file = /tmp/supervisor.sock ;
chmod=0700 ;
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
this might be helpful to you as well: https://github.com/Supervisor/supervisor/issues/480#issuecomment-145193475
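If you want to check the socket-ownership suggestion above programmatically, here is a small illustrative sketch; the socket path is the one from the error message, and the script should be run as the same user that runs supervisorctl:
import os
import pwd

sock = '/tmp/supervisor.sock'
if os.path.exists(sock):
    st = os.stat(sock)
    owner = pwd.getpwuid(st.st_uid).pw_name
    # Print who owns the socket and its permission bits.
    print 'socket owned by %s, mode %o' % (owner, st.st_mode & 0o777)
else:
    print 'no supervisor socket at', sock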

How to execute a long running subprocess inside celery task?

I have the following code where I am running a shell script using subprocess inside a Celery task. It is not working: I get no error, no forward progress, and no output from the Celery task.
This is the helper that executes the command:
def run_shell_command(command_line):
    command_line_args = shlex.split(command_line)
    logging.info('Subprocess: "' + command_line + '"')
    try:
        command_line_process = subprocess.Popen(
            command_line_args,
            shell=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
        )
        for l in iter(command_line_process.stdout.readline, b''):
            print l.strip()
        command_line_process.communicate()
        command_line_process.wait()
    except (OSError, subprocess.CalledProcessError) as exception:
        logging.info('Exception occurred: ' + str(exception))
        logging.info('Subprocess failed')
        return False
    else:
        # no exception was raised
        logging.info('Subprocess finished')
        return True
It's called from within a task:
@app.task
def execute(jsonConfig, projectName, tagName, stage, description):
    command = 'python ' + runScript + ' -c ' + fileName
    run_shell_command(command)
Here the Python "runScript" itself spawns subprocesses and executes a long-running task. What could be the problem?
The logging level has been set to INFO:
logging.basicConfig(filename='celery-execution.log',level=logging.INFO)
The Celery worker is started as follows:
celery -A celery_worker worker --loglevel=info
I can see the subprocess being started:
[2016-05-03 01:08:55,126: INFO/Worker-2] Subprocess: "python runScript.py -c data/confs/Demo-demo14-1.conf"
I can also see the subprocess running in the background using ps -ef; however, this is a compute/memory-intensive workload and it does not seem to be using any CPU or memory, which makes me believe that nothing is really happening and it is stuck.
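One thing worth checking in the code above (an observation, not a confirmed fix): shell=True combined with an already-split argument list makes subprocess run only the first token ('python') through the shell, with the remaining items passed to the shell itself rather than to the script, which matches the symptom of a process that is visible in ps but never does any work. A minimal sketch of the same helper without shell=True:
import logging
import shlex
import subprocess

def run_shell_command(command_line):
    command_line_args = shlex.split(command_line)
    logging.info('Subprocess: "%s"', command_line)
    try:
        # Without shell=True the split argument list is executed directly.
        process = subprocess.Popen(
            command_line_args,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
        )
        # Log output as it arrives instead of printing, so it shows up
        # in the worker's log file.
        for line in iter(process.stdout.readline, b''):
            logging.info('Subprocess output: %s', line.strip())
        process.wait()
    except OSError as exception:
        logging.info('Exception occurred: %s', exception)
        return False
    return process.returncode == 0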

Fabric executing command on remote server doesn't work

I want to use Fabric in Python to execute a command on a remote server.
I wrote this:
from fabric.api import *
from fabric.tasks import execute
def do_some_thing():
    run("ls -lh")

if __name__ == '__main__':
    execute(do_some_thing, hosts=['root@10.18.103.102'])
But it doesn't work and asks me to log in. This is the output:
➜ ~ python test.py
[root@10.18.103.102] Executing task 'do_some_thing'
[root@10.18.103.102] run: ls -lh
[root@10.18.103.102] out: root@svn:~#
[root@10.18.103.102] out: root@svn:~#
Make use of the env variable:
from fabric.api import *
from fabric.contrib.files import *
def myserver():
    env.hosts = ['10.18.103.102']
    env.user = 'root'
    # if you have key based authentication, uncomment and point to private key
    # env.key_filename = '~/.ssh/id_rsa'
    # if you have password based authentication
    env.password = 'ThEpAsAwOrd'

def ls():
    run('ls -al')
Now save these in a file called fabfile.py and execute (in the same directory):
$ fab myserver ls
Fabric will execute both functions one after another, so when it executes ls() it will have the server details in env.

How to run script from django?

I would like to know how to run a script from a Django view.
It works from the command line (e.g. $ python sync.py) but not via the Django view. Thanks in advance.
script 1: /home/ubuntu/webapps/sony_mv/sync.py
#!/usr/bin/env python
from subprocess import call
call(["/bin/sh", "/home/ubuntu/webapps/sony_mv/sync.sh"])
script 2: /home/ubuntu/webapps/sony_mv/sync.sh
cd /home/ubuntu/webapps/sony_mv
heroku pgbackups:capture -a staging-db --expire
heroku pgbackups:capture -a prod-db --expire
heroku pgbackups:restore DATABASE -a prod-db `heroku pgbackups:url -a staging-d` --confirm prod-db
views.py
def sync_staging_to_production(request):
    try:
        token = request.GET['token']
    except:
        token = False
    if token == '382749813256-231952135':
        from subprocess import *
        import sys
        p = Popen([sys.executable, '/home/ubuntu/webapps/sony_mv/sync.py'], stdout=PIPE, stderr=STDOUT)
        return render_to_response('hannibal/sync_staging_to_production.html', {'feedback': 'Success. Sync in progress.'}, context_instance=RequestContext(request))
    else:
        return render_to_response('hannibal/sync_staging_to_production.html', {'feedback': 'Authorization required'}, context_instance=RequestContext(request))
ls output
$ ls -l sync.*
-rwxrwxr-x 1 root 108 2013-04-09 16:35 sync.py
-rwxrwxr-x 1 root 326 2013-04-09 16:44 sync.sh
whoami output
$ python
>>> from subprocess import call
>>> call(["/usr/bin/whoami"])
ubuntu
0
>>>
Adding a log of the output of the shell commands helped with debugging.
The issue was related to permissions and SSH keys for the corresponding user.
Adding the corresponding user's SSH keys fixed the issue.
Thanks everyone.
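For reference, a minimal sketch of the kind of output logging that makes this easier to debug; the log file path is a hypothetical example, and the Popen call mirrors the one in the view:
import logging
import sys
from subprocess import Popen, PIPE, STDOUT

logging.basicConfig(filename='/tmp/sync-debug.log', level=logging.INFO)  # hypothetical log path

def run_sync():
    # Capture both stdout and stderr so permission/SSH errors show up in the log.
    p = Popen([sys.executable, '/home/ubuntu/webapps/sony_mv/sync.py'],
              stdout=PIPE, stderr=STDOUT)
    out, _ = p.communicate()
    logging.info('sync.py exited with %s; output:\n%s', p.returncode, out)
    return p.returncode == 0
Note that communicate() blocks until the script finishes, which is fine for debugging but not for the fire-and-forget behaviour of the original view.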

Passing a Fabric env.hosts string as a variable does not work in a function

Passing a Fabric env.hosts string as a variable does not work in a function.
demo.py
#!/usr/bin/env python
from fabric.api import env, run
def deploy(hosts, command):
    print hosts
    env.hosts = hosts
    run(command)
main.py
#!/usr/bin/env python
from demo import deploy
hosts = ['localhost']
command = 'hostname'
deploy(hosts, command)
python main.py
['localhost']
No hosts found. Please specify (single) host string for connection:
But env.host_string works!
demo.py
#!/usr/bin/env python
from fabric.api import env, run
def deploy(host, command):
    print host
    env.host_string = host
    run(command)
main.py
#!/usr/bin/env python
from demo import deploy
host = 'localhost'
command = 'hostname'
deploy(host, command)
python main.py
localhost
[localhost] run: hostname
[localhost] out: heydevops-workspace
But env.host_string is not enough for us, since it is a single host.
Maybe we could use env.host_string in a loop, but that is not good, because we also want to set the number of concurrent tasks and run them in parallel.
Right now, in ddep (my deployment engine), I just use MySQLdb to get the parameters and then run the fab command like this:
os.system("fab -f service/%s.py -H %s -P -z %s %s" % (project, host, number, task))
This is a simple way but not a good one, because if I use the fab command I can't catch exceptions and failed results in Python, which I need so that ddep can retry the failed hosts.
If I use "from demo import deploy" instead, I can control and inspect the results from Python code.
So env.hosts is now the problem. Can somebody give me a solution?
Thanks a lot.
Here's my insight.
According to the docs, if you're calling Fabric tasks from Python scripts you should use fabric.tasks.execute.
It should be something like this:
demo.py
from fabric.api import run
from fabric.tasks import execute
def deploy(hosts, command):
    execute(execute_deploy, command=command, hosts=hosts)

def execute_deploy(command):
    run(command)
main.py
from demo import deploy
hosts = ['localhost']
command = 'hostname'
deploy(hosts, command)
Then, just run python main.py. Hope that helps.
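Since the question also mentions wanting to set the number of concurrent tasks and run them in parallel, here is a small sketch of one way to do that with Fabric 1.x, combining execute with the @parallel decorator (the pool size of 5 is an arbitrary example value):
from fabric.api import run, parallel
from fabric.tasks import execute

@parallel(pool_size=5)  # run on up to 5 hosts at a time
def execute_deploy(command):
    run(command)

def deploy(hosts, command):
    # execute() returns a dict keyed by host with each task's return value.
    return execute(execute_deploy, command=command, hosts=hosts)
That per-host result dict gives you a place to hook in retry logic from Python; exactly how failures surface depends on settings such as warn_only and skip_bad_hosts.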
Finally, I fixed this problem by using execute() and exec.
main.py
#!/usr/bin/env python
from demo import FabricSupport
hosts = ['localhost']
myfab = FabricSupport()
myfab.execute("df",hosts)
demo.py
#!/usr/bin/env python
from fabric.api import env, run, execute
class FabricSupport:
    def __init__(self):
        pass

    def hostname(self):
        run("hostname")

    def df(self):
        run("df -h")

    def execute(self, task, hosts):
        get_task = "task = self.%s" % task
        exec get_task
        execute(task, hosts=hosts)
python main.py
[localhost] Executing task 'hostname'
[localhost] run: hostname
[localhost] out: heydevops-workspace
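As a side note, the same name-based dispatch can be done without exec by looking the method up with getattr, which avoids executing a generated string; a minimal variant of the class above:
from fabric.api import run, execute

class FabricSupport:
    def hostname(self):
        run("hostname")

    def df(self):
        run("df -h")

    def execute(self, task_name, hosts):
        # Look up the bound method by name instead of building an exec string.
        task = getattr(self, task_name)
        execute(task, hosts=hosts)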
