In Django, running ./manage.py runserver is really nice for dev: it avoids the hassle of setting up and starting a real web server.
If you are not running Django, you can still set up a gunicorn server very easily.
Is there something similar for AMQP?
I don't need a full implementation or anything robust, just something that is easy to install and run for dev. A PyPI package would be great.
Celery is not the answer. I don't want a client, I want a server. Something like a mini Python RabbitMQ.
I'm not aware of any AMQP broker implemented in Python. And I am not aware of a 'lite' implementation in general; I think that implementing an AMQP broker is sufficiently complex that those that try it either aim to be close to one of the versions of the AMQP specification, or don't bother at all. :)
I also don't quite see how running a broker presents the same problems as running a test web server for your web application.
The web server does nothing useful without your application to run inside it, and while you're developing your application it makes sense to be able to run it without needing to do a full deployment.
But you are not implementing the internals of the broker, and you can configure it dynamically so (unlike the web server) it doesn't need to restart itself every time you change your code. Exchanges, bindings and queues can be declared by the application under test and then automatically deleted afterwards.
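For example, here's a minimal sketch of that pattern using pika (the broker address and routing key are illustrative):

import pika

conn = pika.BlockingConnection(pika.ConnectionParameters('127.0.0.1'))
channel = conn.channel()

# An exclusive, server-named queue is deleted automatically when the
# connection closes, so each test run leaves the broker clean
result = channel.queue_declare(queue='', exclusive=True)
queue_name = result.method.queue
channel.queue_bind(queue=queue_name, exchange='amq.topic', routing_key='tests.#')

# ... exercise the application under test ...

conn.close()  # the queue and its binding disappear with the connection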
Installing RabbitMQ is not difficult at all, and it should need hardly any configuration, if any, because it comes with a default vhost and guest user account which are fine for use in an isolated test environment. So I have never had a problem with having RabbitMQ simply running on my test machine.
Maybe you've had some particular issue that I've not thought of; if that's the case, please do leave a comment (or expand your question) to explain it.
Edit: Recently I've been doing quite a lot of testing of AMQP-based applications, and I've found RabbitMQ's Management Plugin very useful. It includes an HTTP API which I'm using to do things like create a new vhost for each test run, and destroy it afterwards to clean up the broker's state. This makes running tests on a shared broker much less intrusive. Using the HTTP API to manage this, rather than the AMQP client being tested, avoids the tests becoming somewhat circular.
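The API is plain HTTP, so the per-test-run vhost trick needs nothing beyond the requests library. A minimal sketch, assuming the management plugin on its default port 15672 and the default guest account:

import uuid
import requests

MGMT = 'http://127.0.0.1:15672/api'   # management plugin's default port
AUTH = ('guest', 'guest')             # default account; fine for an isolated test box

vhost = 'test-%s' % uuid.uuid4().hex  # one throwaway vhost per test run

# Create the vhost and give guest full permissions on it
requests.put('%s/vhosts/%s' % (MGMT, vhost), json={}, auth=AUTH).raise_for_status()
requests.put('%s/permissions/%s/guest' % (MGMT, vhost), auth=AUTH,
             json={'configure': '.*', 'write': '.*', 'read': '.*'}).raise_for_status()

# ... run the tests against amqp://guest:guest@127.0.0.1:5672/<vhost> ...

# Destroy the vhost afterwards, taking all its exchanges, queues and messages with it
requests.delete('%s/vhosts/%s' % (MGMT, vhost), auth=AUTH).raise_for_status()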
I had the same question and was shocked to see how dry the space is. Even after all this time, there's hardly a lightweight AMQP server out there. I couldn't even find toy implementations on GitHub. AMQP seems like a beast of a protocol. I also found that RabbitMQ is probably about as light as it gets.
I ended up going with a Docker-based solution for my integration tests that automatically starts and kills a RabbitMQ container. You need to install the Docker Python library and (of course) have a Docker daemon running on your machine. I was already using Docker for other things so it wasn't a biggie for my setup; YMMV. After that, basically I do:
import docker

client = docker.from_env()
c = client.containers.run(
    'rabbitmq:alpine',                        # use the Alpine Linux build (smaller)
    auto_remove=True,                         # remove the container automatically when stopped
    detach=True,                              # run in daemon mode
    ports={'5672/tcp': ('127.0.0.1', None)},  # bind AMQP to a random localhost port
)

container = client.containers.get(c.id)  # re-fetch the container to read the mapped port
port = container.attrs['NetworkSettings']['Ports']['5672/tcp'][0]['HostPort']

# ... do any set up of the RabbitMQ instance needed on 127.0.0.1:<port>
# ... run tests against 127.0.0.1:<port>

container.kill()  # faster than 'stop'; auto_remove deletes it, so no need to be nice
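One caveat not shown above: run() returns as soon as the container process starts, a few seconds before RabbitMQ itself accepts connections, so it's worth polling the mapped port before the set-up step. A rough sketch (the 30-second deadline is arbitrary):

import socket
import time

# Poll the mapped port until the broker accepts TCP connections
deadline = time.time() + 30
while time.time() < deadline:
    try:
        socket.create_connection(('127.0.0.1', int(port)), timeout=1).close()
        break
    except OSError:
        time.sleep(0.5)
else:
    raise RuntimeError('RabbitMQ container did not come up in time')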
It sucks to have to wait for a whole server startup in tests, but I suppose if you wanted to get really clever you could cache the instance and just purge it before each test: have it start once at the beginning of your dev session and be refreshed at the beginning of the next one. I might end up doing that.
If you want something longer-lived that isn't necessarily started and killed programmatically, then for persistent dev work the normal Docker route would probably serve you better.
Related
I want my Django app to communicate with a remote computer over a TCP/IP socket, and I would like that socket to be available at all times. I would like to use the Tornado library. Since I'm only familiar with writing views, models and such, I'm not entirely sure where to fit that into my codebase.
I was thinking about writing a management command that would run Tornado's server (see http://tornado.readthedocs.io/en/latest/tcpserver.html), but how could I call .stop() on my server once the management command quits? I wouldn't want it to spawn any threads that don't exit when my management command exits, or end up with multiple open sockets, because I just want one.
Of course I would also like the listener to reside somewhere in my Django program so I can access it, not only within the management command code. I was thinking about importing a class from Django's settings.
Am I thinking in the right direction, or is there a different, better approach?
EDIT: As to why would I want to do this:
I've got a microcontroller I want to communicate with, and I wouldn't want to go implementing/parsing HTTP on it. I would also like to periodically send some indication that the connection is alive, and HTTP doesn't seem like the way to go.
A management command is a nice approach, but I'd be reluctant to launch a server with it. A Tornado server is a complex thing with a lot of state (including state outside of your codebase, like nginx, Apache or HAProxy) and varying health. Management commands aren't designed to deal with all this.
It's probably a nice thing for development, and in this case you can easily make your management command not exit before the server by calling IOLoop.current().start() right inside the command.
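A rough sketch of such a command (the file path, echo protocol and port are illustrative, and it assumes Tornado 5+ on Python 3.5+):

# myapp/management/commands/runtcpserver.py
from django.core.management.base import BaseCommand
from tornado.ioloop import IOLoop
from tornado.iostream import StreamClosedError
from tornado.tcpserver import TCPServer

class EchoServer(TCPServer):
    async def handle_stream(self, stream, address):
        while True:
            try:
                data = await stream.read_until(b'\n')
                await stream.write(data)  # echo; replace with your real protocol
            except StreamClosedError:
                break

class Command(BaseCommand):
    help = 'Run the Tornado TCP server for development'

    def handle(self, *args, **options):
        server = EchoServer()
        server.listen(8888)
        try:
            IOLoop.current().start()  # blocks here, keeping the command alive
        except KeyboardInterrupt:
            server.stop()
            IOLoop.current().stop()

Because it runs inside a management command, Django is already configured, so the handler can use your models directly.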
For a production environment I would advise contemporary orchestration tools like Docker Compose or, if you plan to spread your system over several machines, Docker Swarm or Kubernetes. These tools let you start, shut down, scale and health-check individual components in a reliable manner, without reinventing the wheel with a set of management commands.
Either way, if your Tornado code lives in the same place as your Django code, you're able to access the database using your Django models and reuse other parts of the project. Beyond that, something launched from a management command gains no special advantage from the running Django server.
I would like to deploy several WSGI web applications with Twisted on a debian server, and need some direction for a solid production setup. These applications will be running 24/7.
I need to run several configurations, each binding to different ports/interfaces/privileges.
I want to do as much of this in python as possible.
I do not want to package my applications with a program like 'tap2deb'.
What is the best way to implement each application as a system service? Do I need some /etc/init.d shell scripts, or can I manage this with python? (I don't want anything quite as heavy as Daemontools)
If I use twistd to manage most of the configuration/process management, what kind of wrappers/supervisors do I need to put in place?
I would like centralized management, but restricting control to the parent user account is not a problem.
The main problem I want to avoid is having to SSH into my server once a day to restart a blocked or crashed application.
I have found several good references for launching daemon processes with Python; see daemoncmd on PyPI.
I'm still coming up a little short on monitoring/alerting solutions (in Python).
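For the twistd part of the question, each application can be described by its own .tac file and launched with twistd -y app.tac (add -n to stay in the foreground); twistd's --uid, --gid, --pidfile and --logfile options cover the privilege and process-management details. A rough sketch, where the import path and port are placeholders:

# myapp.tac
from twisted.application import service, strports
from twisted.internet import reactor
from twisted.web import server, wsgi

from myapp.wsgi import application as wsgi_app  # assumed import path

site = server.Site(wsgi.WSGIResource(reactor, reactor.getThreadPool(), wsgi_app))

# twistd looks for a top-level variable literally named 'application'
application = service.Application('myapp')
strports.service('tcp:8080:interface=127.0.0.1', site).setServiceParent(application)

Each of your configurations gets its own .tac with its own port/interface, which keeps the whole setup in Python without tap2deb packaging.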
Why is it not recommended to use the flask/werkzeug internal development webserver in production? What sort of issues can arise?
I'm asking because at work I'm being forced to do so, with a makeshift cron job that restarts the service every day!
If you're having to use a cron job to kill & restart it on a daily basis, you've already found a major issue with using the Flask development server. The development server is not written for stability, longevity, configurability, security, speed or much of anything other than convenience during development.
A proper WSGI setup will be faster, handle multiple connections properly and, most importantly for you, periodically restart your app process to clean out any cruft that might build up.
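For example, a minimal gunicorn invocation gets you all of that; the module and app names are illustrative:

# --max-requests recycles each worker after N requests, covering the
# "periodic restart" part without any cron hackery
gunicorn --workers 4 --bind 127.0.0.1:8000 \
         --max-requests 1000 --max-requests-jitter 50 myapp:app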
I had a network call inside the response handler that had no timeout. Something went wrong and it was waiting for a while (I was using the requests module), and then apparently never recovered.
Since the Werkzeug server had only one thread, the whole development server became completely unavailable.
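The immediate fix for that failure mode is to give every outbound call an explicit deadline (the URL and timeout here are illustrative):

import requests

# Without a timeout, requests will wait forever on a hung connection;
# with one, the handler fails fast and frees the single thread
resp = requests.get('http://example.com/api', timeout=5)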
To give a little background, I'm writing (or am going to write) a daemon in Python for scheduling tasks to run at user-specified dates. The scheduler daemon also needs to have a JSON-based HTTP web service interface (buzzword mania, I know) for adding tasks to the queue and monitoring the scheduler's status. The interface needs to receive requests while the daemon is running, so they either need to run in a separate thread or cooperatively multitask somehow. Ideally the web service interface should run in the same process as the daemon, too.
I could think of a few ways to do it, but I'm wondering if there's some obvious module out there that's specifically tailored for this kind of thing. Any suggestions about what to use, or about the project in general are quite welcome. Thanks! :)
Check out the BaseHTTPServer module -- a basic HTTP server bundled with Python.
http://docs.python.org/library/basehttpserver.html
You can spin up a second thread and have it serve your requests for you very easily (probably < 30 lines of code). And it all runs in the same process and Python interpreter space, so it can access all your objects, etc.
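A rough sketch of that approach, written against http.server (the Python 3 name for BaseHTTPServer); the port and payload are illustrative:

import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({'status': 'running'}).encode()  # could read scheduler state directly
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(body)

server = HTTPServer(('127.0.0.1', 8080), StatusHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
# ... the scheduler's main loop keeps running in the main thread ...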
I'm not sure I understand your question properly, but take a look at Twisted
I believe almost any Python web framework would work for this.
You could pick one like CherryPy, which is small enough to integrate into your system. CherryPy also includes a pure-Python WSGI server suitable for production.
Its performance may not be as good as Apache's, but it's very stable.
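A minimal sketch of what a JSON status endpoint looks like under CherryPy; the class and field names are illustrative:

import cherrypy

class Scheduler:
    @cherrypy.expose
    @cherrypy.tools.json_out()     # serialize the return value as JSON
    def status(self):
        return {'queued_tasks': 0}  # would report real scheduler state

cherrypy.quickstart(Scheduler(), '/', {'global': {'server.socket_port': 8080}})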
Don't reinvent the wheel!
Run jobs via cron script, and create a separate web interface using, for example, Django or Tornado.
Connect them via a database. Even SQLite will do the job if you don't need to scale across more machines.
I have a Django application that I would like to deploy to the desktop. I have read a little on this and see that one way is to use freeze. I have used this with varying success in the past for Python applications, but am not convinced it is the best approach for a Django application.
My questions are: what are some successful methods you have used for deploying Django applications? Is there a de facto standard method? Have you hit any dead ends? I need a cross platform solution.
I did this a couple of years ago for a Django app running as a local daemon. It was launched by Twisted and wrapped with py2app for Mac and py2exe for Windows. Both a browser and an Air front-end were hitting it. It worked pretty well for the most part, but I didn't get to deploy it out in the wild because the larger project got postponed. It's been a while and I'm a bit rusty on the details, but here are a few tips:
IIRC, the most problematic thing was Python loading C extensions. I had an Intel assembler module written with C "asm" commands that I needed to load to get low-level system data. That took a while to get working across both platforms. If you can, try to avoid C extensions.
You'll definitely need an installer. Most likely the app will end up running in the background, so you'll need to mark it as a Windows service, Unix daemon, or Mac launchd application.
In your installer you'll want to provide a way to locate a free local TCP port. You may have to write a little stub routine that the installer runs or use the installer's built-in scripting facility to find a port that hasn't been taken and save it to a config file. You then load the config file inside your settings.py and whatever front-end you're going to deploy. That's the shared port. Or you could just pick a random number and hope no other service on the desktop steps on your toes :-)
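The stub can be a few lines of Python: binding to port 0 asks the OS for any free port. A sketch:

import socket

# Bind to port 0 and let the OS pick a free port; there's a small race
# between closing this probe socket and the app binding the port, but
# it's usually good enough for an installer
s = socket.socket()
s.bind(('127.0.0.1', 0))
port = s.getsockname()[1]
s.close()
# write 'port' into the config file that settings.py and the front-end read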
If your front-end and back-end are separate apps then you'll need to design an API for them to talk to each other. Make sure you provide a flag to return the data in both raw and human-readable form. It really helps in debugging.
If you want Django to be able to send notifications to the user, you'll want to integrate with something like Growl or get Python for Windows extensions so you can bring up toaster pop-up notifications.
You'll probably want to stick with SQLite for the database, in which case you'll want to use semaphores to handle multiple requests vying for the database (or any other shared resource). If your app is accessed via a browser, users can have multiple windows open and hit the app at the same time. If you're using a custom front-end (native, Air, etc...) then you can control how many instances are running at a given time, so it won't be as much of an issue.
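A crude sketch of the locking idea; one process-wide lock is simplistic but shows the shape of it:

import sqlite3
import threading

db_lock = threading.Lock()

def run_query(sql, params=()):
    # Serialize all database access so simultaneous browser requests
    # don't collide; a fresh connection per call also sidesteps
    # sqlite3's same-thread restriction on connections
    with db_lock:
        conn = sqlite3.connect('app.db')  # path is illustrative
        try:
            cur = conn.execute(sql, params)
            conn.commit()
            return cur.fetchall()
        finally:
            conn.close()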
You'll also want some sort of access to the local system logging facilities, since the app will be running in the background; make sure you trap all your exceptions and route them into the syslog. A big hassle was debugging Windows service startup issues -- it would have been impossible without system logging.
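The standard library covers this; a sketch using logging.handlers (the '/dev/log' address is the Linux default -- use '/var/run/syslog' on a Mac, or NTEventLogHandler on Windows):

import logging
from logging.handlers import SysLogHandler

logger = logging.getLogger('myapp')  # name is illustrative
logger.addHandler(SysLogHandler(address='/dev/log'))

try:
    start_service()                  # hypothetical entry point
except Exception:
    logger.exception('service startup failed')  # full traceback into syslog
    raise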
Be careful about hardcoded paths if you want to stay cross-platform. You may have to rely on the installer to write a config file entry with the actual installation path which you'll have to load up at startup.
Test actual deployment especially across a variety of firewalls. Some of the desktop firewalls get pretty aggressive about blocking access to network services that accept incoming requests.
That's all I can think of. Hope it helps.
If you want a good solution, you should give up on making it cross platform. Your code should all be portable, but your deployment - almost by definition - needs to be platform-specific.
I would recommend using py2exe on Windows, py2app on MacOS X, and building deb packages for Ubuntu with a .desktop file in the right place in the package for an entry to show up in the user's menu. Unfortunately for the last option there's no convenient 'py2deb' or 'py2xdg', but it's pretty easy to make the relevant text file by hand.
And of course, I'd recommend bundling in Twisted as your web server for making the application easily self-contained :).