As per the Redis documentation:
A.3.1 Drawbacks of Redis on Windows
Windows doesn’t support the fork system call, which Redis uses in a
variety of situations to dump its database to disk. Without the
ability to fork, Redis is unable to perform some of its necessary
database-saving methods without blocking clients until the dump has
completed.
Questions:
1) If I'm not mistaken, this issue gets worse as the number of concurrent users increases. Is that correct?
2) Is it really an issue if we deploy Channels on a Windows machine (production server)? If yes, is there a better alternative to Redis?
3) How can the above-mentioned drawback be tested on a production server?
Note:
Can't use WSL2 (as it's not officially released) or WSL, since the current Windows Server version doesn't support them.
As far as I know, to get around this issue you may try one of these options:
Set up Redis in a Docker container using this image and use it in your project.
Install Linux in a VirtualBox VM and set up Redis there.
In both cases Redis runs in a Linux environment, so I don't think you'll hit this problem, but like I said, try them before going to production. :)
For testing purposes, a simulation should work: first write a test that does lots of reads and writes, run it against both Redis on Windows and Redis on Docker, and compare the benchmarks.
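A minimal sketch of such a benchmark harness (the function and the FakeRedis stand-in below are illustrative, not from any library; for a real run you would pass a redis.Redis client from the redis-py package instead):

```python
import time

def benchmark(client, n=10_000):
    """Time n SET/GET round-trips against a Redis-like client.

    `client` only needs set()/get() methods, so the same harness can be
    pointed at Redis on Windows, Redis in Docker, or a fake for testing.
    """
    start = time.perf_counter()
    for i in range(n):
        client.set(f"bench:{i}", i)
        client.get(f"bench:{i}")
    elapsed = time.perf_counter() - start
    return n / elapsed  # round-trips per second

class FakeRedis:
    """Dict-backed stand-in so the harness itself can be sanity-checked
    without a running server; swap in redis.Redis(host=..., port=6379)
    for a real measurement."""
    def __init__(self):
        self._data = {}
    def set(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)
```

Run it against each deployment and compare the returned ops/sec; the blocking-during-dump behaviour should show up as periodic throughput drops on the Windows build when persistence kicks in.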
Memurai is a good Redis for Windows alternative. Memurai is based on Redis source code.
Related
Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 9 years ago.
Improve this question
I'm trying to move from PHP to Python. Python seems a much more versatile language, able to work across a range of scenarios. The things I plan to use it for range from web app development (using Django) to NLP, machine learning, and automation using mechanize.
One of the things I really liked about PHP was MAMP: the way it creates an htdocs folder, a localhost:8888 URL, and a MySQL server, with pretty much zero effort.
Is there something similar for Python? I'm not necessarily looking for a GUI like MAMP (although that would be good) - what are the other options for setting up a local environment?
Python excels in this area, but as with most tools, exactly what you do depends on what you want. In particular, you certainly want virtualenv, Python's configuration- and dependency-isolation tool.
You may also want a development-configuration management tool such as buildout, but that is more controversial, as there are many other great, language-agnostic tools that overlap. (For example, you may want to set up your environment using Vagrant and leave your host OS behind.)
Neither virtualenv nor buildout will set up Apache for you out of the box, but you do have the option of installing Django, Zope, or many other Python frameworks and applications with buildout recipes. There are recipes for Apache too, but most Python web development that I know of is agnostic of the httpd, so you might end up not wanting it.
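As a sketch of the isolation idea: virtualenv itself is a third-party package, but the standard-library venv module (Python 3.3+) covers the common case and can even be driven programmatically:

```python
import venv
import tempfile
from pathlib import Path

# Create an isolated environment in a temporary directory.
# with_pip=False keeps this fast; pass with_pip=True to get pip installed.
env_dir = Path(tempfile.mkdtemp()) / "myenv"
venv.create(env_dir, with_pip=False)

# The environment carries its own interpreter configuration, so packages
# installed into it do not touch the system Python.
print((env_dir / "pyvenv.cfg").exists())  # → True
```

In day-to-day use you would just run `python -m venv myenv` from a shell and activate it; the programmatic form above is the same machinery.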
I have been trying to understand Docker (LXC containers) for a while. If we consider Fedora's DevAssistant tool together with virtualenv, then virtualenv handles the isolation and DevAssistant downloads all the needed dependencies by interpreting the setup configuration file. So with a couple of terminal commands we can set up a developer environment for OpenStack, or any large multi-repository project, within minutes, using the right tool for the right job. So how is Docker different?
virtualenv only provides isolation for Python environments; it doesn't do process isolation.
I'm not familiar with Fedora's DevAssistant, but I'm pretty sure those changes are system-wide. What if on the same server you want to run Python, Ruby, Java, and Node.js apps? There might be conflicting requirements at the system level.
With Docker this is easy, because each app has its own container, you can put whatever you want in there, and the containers don't interfere with each other. Think of Docker like this: it gives each application its own VM (container) to live in. It's similar to setting up a physical server and installing a separate VirtualBox VM on it for each application, but it is much more lightweight, and you can run it on both physical and virtual hosts.
You can also move Docker containers from one Docker-compatible server to another very easily.
I'm sorry if my question is too elementary. I have some python code, which makes the machine act as a transparent proxy server using "twisted" library. Basically I want my own transparent proxy OUTSIDE my internal network and since I want to be able to monitor traffic, I need to have my own server. So I need a machine that runs my script 24/7 and listens for http connections. What kind of server/host do I need? Any host provider suggestions?
Go for an Amazon EC2 instance running an Ubuntu server. If your process is not memory-hungry, you can go with a Micro instance (617 MB RAM, 8 GB disk), which is free for the first year. Or you could go with a Small instance (1.7 GB RAM and 8 GB disk), which will cost you a little more.
To keep the Python code running 24/7, you can create a daemon process on the instance. You can also install the twisted library or any other library you need there. It shouldn't take much time if you have worked with Amazon AWS before.
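To illustrate the "listens for HTTP connections 24/7" part: a minimal standard-library sketch of a long-running listener is below (this is not the author's twisted proxy code - twisted would replace both the handler and the serving loop - it just shows the shape of the service the daemon keeps alive):

```python
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # In the real proxy this is where twisted would forward the
        # request upstream and record it for traffic monitoring.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"proxy alive")

    def log_message(self, *args):
        pass  # silence per-request logging

# Port 0 asks the OS for a free port; a real deployment would bind 80/8080.
server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
print(f"listening on port {server.server_address[1]}")
```

On the EC2 instance you would run the equivalent twisted script under a process supervisor (or as a daemon, as suggested above) so it is restarted if it ever dies.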
There are many specialized commercial hosts for python. Python maintains a list of them on their wiki. Some even have the twisted framework available. The other alternative is to get a virtual private server and install all of the specialized libraries that you need.
In Django, running ./manage.py runserver is really nice for dev, avoiding the hassle of setting up and starting a real webserver.
If you are not running Django, you can still setup a gunicorn server very easily.
Is there something similar for AMQP?
I don't need a full implementation nor something robust, just something that is easy to install and run for dev. PyPi package would be great.
Celery is not the answer. I don't want a client, I want a server. Like a mini Python RabbitMQ.
I'm not aware of any AMQP broker implemented in Python. And I am not aware of a 'lite' implementation in general; I think that implementing an AMQP broker is sufficiently complex that those that try it either aim to be close to one of the versions of the AMQP specification, or don't bother at all. :)
I also don't quite see how running a broker presents the same problems as running a test web server for your web application.
The web server does nothing useful without your application to run inside it, and while you're developing your application it makes sense to be able to run it without needing to do a full deployment.
But you are not implementing the internals of the broker, and you can configure it dynamically so (unlike the web server) it doesn't need to restart itself every time you change your code. Exchanges, bindings and queues can be declared by the application under test and then automatically deleted afterwards.
Installing RabbitMQ is not difficult at all, and it should need hardly any configuration, if any, because it comes with a default vhost and guest user account which are fine for use in an isolated test environment. So I have never had a problem with having RabbitMQ simply running on my test machine.
Maybe you've had some particular issue that I've not thought of; if that's the case, please do leave a comment (or expand your question) to explain it.
Edit: Recently I've been doing quite a lot of testing of AMQP-based applications, and I've found RabbitMQ's Management Plugin very useful. It includes an HTTP API which I'm using to do things like create a new vhost for each test run, and destroy it afterwards to clean up the broker's state. This makes running tests on a shared broker much less intrusive. Using the HTTP API to manage this, rather than the AMQP client being tested, avoids the tests becoming somewhat circular.
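A sketch of that per-test-run vhost trick, using only the standard library (the URL layout and default credentials follow the RabbitMQ Management Plugin's HTTP API, which serves on port 15672 by default; the helper name is mine, and it builds the request without sending it so it can be inspected even without a broker running):

```python
import base64
import urllib.request

def make_vhost_request(name, host="localhost", port=15672,
                       user="guest", password="guest"):
    """Build a PUT request that creates a vhost via RabbitMQ's
    management HTTP API. The request is returned unsent, so it can be
    inspected or dispatched later with urllib.request.urlopen()."""
    url = f"http://{host}:{port}/api/vhosts/{name}"
    req = urllib.request.Request(url, method="PUT")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Content-Type", "application/json")
    return req

# To actually create the vhost (and later clean up with method="DELETE"):
#   urllib.request.urlopen(make_vhost_request("test-run-42"))
```

Each test run gets its own vhost name, and deleting the vhost afterwards drops all its exchanges, bindings, and queues in one go.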
I had your same question and was shocked to see how dry the space is. Even after all this time, there's hardly a lightweight AMQP server out there. I couldn't even find toy implementations on Github. AMQP seems like a beast of a protocol. I also found that RabbitMQ is probably about as light as it gets.
I ended up going with a Docker-based solution for my integration tests that automatically starts and kills a RabbitMQ container. You need to install the Docker Python library and (of course) have a Docker daemon running on your machine. I was already using Docker for other things, so it wasn't a biggie for my setup; YMMV. After that, basically I do:
import docker

client = docker.from_env()
c = client.containers.run(
    'rabbitmq:alpine',    # use Alpine Linux build (smaller)
    auto_remove=True,     # remove automatically when stopped
    detach=True,          # run in daemon mode
    ports={'5672/tcp': ('127.0.0.1', None)}  # bind to a random localhost port
)
container = client.containers.get(c.id)  # re-fetch container to read the assigned port
port = container.attrs['NetworkSettings']['Ports']['5672/tcp'][0]['HostPort']
# ... Do any set up of the RabbitMQ instance needed on (127.0.0.1:<port>)
# ... Run tests against (127.0.0.1:<port>)
container.kill()  # faster than 'stop'; auto_remove deletes it, so no need to be nice
It sucks to have to wait for a whole server startup in tests, but I suppose if you wanted to get really clever, you could cache the instance and just purge it before each test, or have it start once at the beginning of your dev session and be refreshed at the beginning of the next one. I might end up doing that.
If you wanted something longer lived that isn't necessarily programmatically started and killed for persistent dev then the normal Docker route would probably serve you better.
I'm starting to migrate a few applications to Amazon Web Services. My applications are Python/Django apps, running over MySQL.
I plan on using EC2 and EBS for starters.
I'm looking for a few things:
A good step-by-step tutorial explaining how to deploy Django on AWS. I've looked around, but most tutorials are from 2 years ago, so I'm worried they're outdated.
I'm trying to understand which AMI I should start with. I know there's a BitNami AMI that comes preconfigured with Django goodness, but I've seen a lot of other sources say you should start with a basic (clean) Linux box and install everything yourself. Why?
Are there any other important things I should be thinking about? I have very little sysadmining experience (the apps are currently on WebFaction), so I'm not sure what I should be thinking about.
A few extra points:
I plan on running several applications on the same EC2 instance, I assume that's possible?
I'm using virtualenv to separate between the various apps right now, I assume I can continue doing the same on the EC2 instance?
Thanks!
There is nothing "special" about EC2 here. It just provides a bare (or preconfigured, from a custom AMI) system instance - you have access to the whole virtualized system, so you can safely break things on your own. Think of it as a kind of VPS.
You have to prepare the deployment yourself, which is not so difficult - just follow the documentation. I'd advise running a basic Linux distro and adding the needed stuff rather than relying on some preconfigured image. As for your questions:
You need to do two things: set up your instance (accounts, needed software, other custom setup - some Linux administration guide should be handy) and prepare the Django app deployment (deploy the Python code, hook it up to a web server). For the latter, the general deployment instructions for Django apply here ( http://docs.djangoproject.com/en/dev/howto/deployment/ ).
Start with an AMI of your favorite Linux distro, and then add the necessary software from its repository.
Mount and use EBS as soon as possible for all your data. An instance's ephemeral storage is wiped when the instance is terminated, so be prepared for this. Take system snapshots to an AMI to allow quick recovery on failure.
Yes, you can deploy several applications on one instance, but mind that the EC2 instance is virtualized (with quite a high "virtualization tax" imo, especially for smaller instances), so you might run into general performance problems. Assume that you'll need to migrate to a bigger instance, or to multiple instances, after some time.
Virtualenv should be your default deployment tool. Yes, you can use it here too.
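On the "hook it up to a web server" step above: the Django deployment docs linked earlier boil down to exposing a WSGI callable that Apache/mod_wsgi (or gunicorn) invokes once per request. A minimal hand-written one - not Django's, which its wsgi.py generates for you - looks like this:

```python
def application(environ, start_response):
    # Django's wsgi.py exposes a callable with exactly this signature;
    # the web server calls it once per request and streams the return value.
    body = b"Hello from WSGI"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

Pointing mod_wsgi or gunicorn at your project's real `wsgi.py` module is the only web-server-specific piece; everything else in the deployment is ordinary Linux administration.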
You can follow the official documentation of setting up Amazon ec2 instance: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-instance_linux.html
You should start with an AMI that you are familiar with. For example, if you use Ubuntu, you can just use one of the Ubuntu AMIs on the recommended page. I didn't use the BitNami image, and my Django site deployed smoothly.
If you are using Apache server, just follow the instructions on the official Django doc:
https://docs.djangoproject.com/en/1.5/howto/deployment/wsgi/modwsgi/
I tried quite a few blogs but as you said, they are outdated. Just use the official docs and it will save you a lot of time.
This repo is meant to address exactly this issue - provide a reference implementation for a basic django project that can be deployed onto AWS ElasticBeanstalk.
https://github.com/pushkarparanjpe/django-awsome
It has:
Static assets
DB back-end
django contrib Admin
Just configure your Elastic Beanstalk environment, clone the repo, and deploy!