class SeedWorker
  include Sidekiq::Worker

  def perform
    `python lib/assets/python_scripts/seed.py`
  end
end
I am trying to execute this from the terminal like so:
bundle exec SeedWorker.perform_async()
I have a Redis server and Sidekiq running, as well as a Rails server (Sinatra is running too). The script works fine on its own, but I am wondering if this is even possible. Any help would be greatly appreciated.
SeedWorker.perform_async() is not an executable, so that command will not work. Also, Sidekiq is already running, but it may not have loaded your worker file. Last, Sidekiq only requires Redis; Rails and Sinatra are not related to your problem.
That statement will work in an environment like irb. Alternatively, you can use the sidekiq executable:
bundle exec sidekiq -r <path to your worker file>
How about following up with the good documentation provided with Sidekiq?
I'm new to Django and Python and am trying to determine the best way to 'play around' with querying (I come from a front-end background and am used to using console.log there, but there is no equivalent on the back-end).
If I run python3 manage.py shell, I can then run useful commands in that shell, for example, I can test querying Tracking objects:
from myapp.tracking.models import Tracking
trackings = Tracking.objects.all()
print(trackings)
However, in my setup I apparently need to run a shell inside docker:
sudo docker-compose exec myapp bash
Here I can't run commands like from myapp.tracking.models import Tracking, as I get errors like bash: from: command not found. From googling around, I assume that bash is a different type of shell from whatever the python manage.py shell command uses (IPython?).
So my question is: how can I use the same shell that the python3 manage.py shell command uses? Presumably by running something like sudo docker-compose exec myapp SOME_OTHER_SHELL? And if this is not possible, am I correct in assuming that bash is just a different type of shell with a different syntax for importing and querying? If so, any useful links to docs would be much appreciated.
Thanks
I have a rather complicated setup with:
Luigi https://github.com/spotify/luigi
https://github.com/kennethreitz/requests-html
and https://github.com/miyakogi/pyppeteer
But long story short: everything works fine on my local Ubuntu (17.10) desktop, but when run in Docker (18.03.0-ce) via docker-compose (1.18.0) (config version 3.3; some related details here: https://github.com/miyakogi/pyppeteer/issues/14) it bloats up with zombie Chrome processes spawned from Python.
Any ideas why it might happen and what to do with it?
Try installing dumb-init: https://github.com/Yelp/dumb-init (available in both Alpine and Debian) inside your container and use it as the entry point (more reading here: https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/).
It seems to happen because the python process is not meant to be the root-level process, i.e. the topmost one in the process tree: it simply does not reap zombies properly. So after a few hours of struggling I ended up with the following ugly docker-compose config entry:
entrypoint: /bin/bash -c
command: "(luigid) &"
Here luigid is a python process. This makes bash the root process, and bash handles zombies properly.
It would be great to know a more straightforward way of doing this.
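For reference, the zombie problem is just that nothing calls wait() on exited children when Python sits at PID 1. Below is a minimal sketch of the reaping chore an init-like process has to perform; the handler name is illustrative, not something from the question, and this is exactly the job tools like dumb-init do for you:

```python
import os
import signal

def reap_children(signum, frame):
    """Reap any exited child processes so they don't linger as zombies.

    An init-like PID 1 process must do this continuously; a plain
    python process does not, which is why zombies accumulate.
    """
    while True:
        try:
            pid, _status = os.waitpid(-1, os.WNOHANG)
        except ChildProcessError:
            break  # no children at all
        if pid == 0:
            break  # children exist, but none have exited yet

# Run the reaper whenever a child changes state.
signal.signal(signal.SIGCHLD, reap_children)
```

This is only a sketch; in practice using dumb-init (or tini, or docker run --init) as the entry point is the more robust fix.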
I'm trying to run a python script from Bamboo. I created a script task and wrote inline "python myFile.py". Should I be listing the full path to python?
I changed the working directory to the location of myFile.py, so that is not the problem. Is there anything else I need to do within the plan configuration to properly run this script? It isn't running, but I know it should be able to, because the script works fine from the terminal on my local machine. Thanks
I run a lot of python tasks from Bamboo, so it is possible. Using the Script task is generally painless...
You should be able to use your Script task to run commands directly and have stdout written to the logs, so you can run:
which python -- outputs the path of the python executable being used.
pip list -- outputs the list of modules installed with pip.
Verify that the output of these commands matches the output when they are run on the server. I'm guessing they won't match up, and once that is addressed, everything will work fine.
If not, comment back and we can look at a few other things.
For the future, there are a handful of different ways you can package things with python which could assist with this problem (e.g. automatically installing missing modules, etc).
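The same checks can also be done from inside the interpreter Bamboo actually launches, which removes any ambiguity about which python is running. A minimal sketch (nothing here is Bamboo-specific):

```python
import subprocess
import sys

# Print the interpreter actually executing this script.
print("interpreter:", sys.executable)

# List installed packages via this interpreter's own pip, so the
# module list is tied to the same python that Bamboo invokes.
result = subprocess.run(
    [sys.executable, "-m", "pip", "list"],
    capture_output=True,
    text=True,
)
print(result.stdout)
```

Put this in the Script task once, compare its output with your local machine, and the mismatch (if any) is usually obvious.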
You can also use the Script Task directly with an inline Python script to run your myFile.py:
/usr/bin/python <<EOF
print("Hello, World!")
EOF
Check this page for a more complex example:
https://www.langhornweb.com/display/BAT/Run+Python+script+as+a+Bamboo+task?desktop=true&macroName=seo-metadata
I'm trying to daemonize my bash script, which in turn runs a python script.
Here is my program section of supervisord.conf
[program:source]
directory=/home/vagrant/
command=/usr/local/bin/python /home/vagrant/start.py
process_name=%(program_name)s
user=vagrant
autostart=true
When I start supervisord it doesn't work. In the log I see:
No module named monitor.tasks
When I run the program directly it works, so it seems to be a working-directory issue, but I don't know how to solve it. Any suggestions?
Found where my mistake was. I just had to use the -m flag after the python command (note that -m takes a module name, not a file path, so with directory=/home/vagrant the entry becomes):
command=/usr/local/bin/python -m start
I had a similar problem, but mine was related to the PYTHONPATH. All I had to do was add a single line to my program configuration:
[program:myProgram]
environment=PYTHONPATH=/home/nectu/.local/lib/python3.6/site-packages
(...)
Running on: Lubuntu 18.04 / Python 3.6
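For what it's worth, you can check what that environment= line achieves from Python itself: entries from PYTHONPATH end up on sys.path, which is the list of directories the interpreter searches on import. A small sketch using the same directory as above:

```python
import sys

# Directories listed in PYTHONPATH are prepended to sys.path at
# interpreter startup; the same effect can be had at runtime.
extra_dir = "/home/nectu/.local/lib/python3.6/site-packages"
if extra_dir not in sys.path:
    sys.path.insert(0, extra_dir)

# Printing sys.path from inside the supervisord-run process is a quick
# way to confirm the environment= setting actually took effect.
print(sys.path)
```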
I have a set of python scripts which I run as daemon services. They all work great, but when they are all running and I use top -u <USER>, every one of them shows up as python.
I would really like to know which script is running under which process ID. So is there any way to execute a python script under a different process name?
I'm stuck here, and I'm not ever sure what terms to Google. :-)
Note: I'm using Ubuntu Linux. Not sure if the OS matters or not.
Try using setproctitle. It should work fine on Linux.
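A minimal sketch of using it (setproctitle is a third-party package, so the import is guarded here in case it is not installed; the title string is just an example):

```python
try:
    from setproctitle import getproctitle, setproctitle
except ImportError:
    setproctitle = getproctitle = None

if setproctitle is not None:
    # The new title is what shows up in top/ps instead of "python".
    setproctitle("my-daemon-script")
    print(getproctitle())
else:
    print("setproctitle is not installed (pip install setproctitle)")
```

Call setproctitle() once near the top of each daemon script, giving each a distinct name, and top -u <USER> will tell them apart.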
I don't have a Linux system here to test this on, but if the above doesn't work, you should be able to use the same trick used by programs like gzip etc.
The script declares its interpreter at the top, like this:
#!/usr/local/bin/python
Then create a symlink like this:
ln -s /usr/local/bin/python ~/bin/myutil
Then just change your script's shebang to point at the link, using an absolute path (shebang lines do not expand ~):
#!/home/youruser/bin/myutil
and it should show up that way instead. You may need to use a hard link instead of a soft link.
Launching a python script using the python script itself (and file associations and/or shell magic) is not very portable, but you can use similar methods on nearly any OS.
The easiest way to get this is to use a shebang. The first line of your python script should be:
#!/usr/bin/python
or
#!/usr/bin/python3
depending on whether you use python or python3,
and then assign executable permissions to the script as follows:
chmod +x <scriptname>
and then run the script as
./scriptname
This will show up as scriptname in top.
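On Linux you can confirm what name top will display by asking the kernel directly; this relies on /proc, so it is Linux-only:

```python
# /proc/self/comm holds the process name that top shows in its COMMAND
# column by default. For a script executed via its shebang, this is
# the script's file name rather than "python".
with open("/proc/self/comm") as f:
    comm = f.read().strip()
print(comm)
```

Running this from inside one of your daemon scripts shows exactly the name the other answers are trying to change.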