I have a simple Django server process which needs to run in a Linux environment (inside a virtualenv).
Currently one of my colleagues manually logs in over SSH and activates the virtual environment with source bin/activate. The Python server is then started using the command below:
/etc/init.d/start-python-server.sh
Note: This sh file starts the python server as a background process listening on port 8080
Can someone share some thoughts on improving this?
You may include source <your_env_path>/bin/activate at the beginning of /etc/init.d/start-python-server.sh to automate this step.
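A minimal sketch of what the modified script could look like (the virtualenv and project paths here are assumptions, not your actual layout):
#!/bin/sh
# Activate the virtualenv first (path is an assumption), then start the server as before
. /opt/myapp/venv/bin/activate
# Start the Django server in the background, listening on port 8080
cd /opt/myapp
nohup python manage.py runserver 0.0.0.0:8080 >> /var/log/myapp.log 2>&1 &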
I am running a Python script that collects data inside a virtual environment, hosted remotely on a VPS (Debian based).
My PC crashed and I am trying to get back to the live log output of the Python script.
I know that the script is still running because it saves its data into a CSV file, and that CSV is still being written.
If I source the activate script again, I can rerun the script, but it sounds to me like I would then have 2 instances of the same script running...
I am not familiar with virtual environments and I cannot find the right way to do this without deactivating and reactivating. I am running my script on the cheapest OVH VPS I could buy because my computer is clearly not reliable enough to run 24/7.
You might use screen to run your script in a separate terminal session. This avoids losing the log output if the SSH connection gets dropped.
The workflow would be something along the lines of (on your host):
# Install screen
$ sudo apt update
$ sudo apt install screen
# Start a screen session
$ screen
# Run your script
$ python myscript.py
If your SSH connection drops, it is enough to:
# ssh back into the host from your client
# reattach previous screen session
$ screen -r
For advanced use the official docs are quite comprehensive.
Note: As a more general note, what is explained above is pretty much the basic logic of a terminal multiplexer. You'll be able to achieve the same using tmux (see the sketch below).
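For reference, a minimal tmux equivalent of the screen workflow above (the session name collector is arbitrary):
# Start a named tmux session and run the script inside it
$ tmux new -s collector
$ python myscript.py
# Detach with Ctrl-b d; after reconnecting over ssh, reattach with:
$ tmux attach -t collector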
I am running PyCharm 2017.2.3. I want to run my Python script on a remote EC2 instance as a sudo user through PyCharm. How do I achieve this?
Follow the steps below:
Go to File -> Settings -> Project Interpreter and add a new interpreter
Click on + to add a new python interpreter and then click on SSH interpreter
Provide your EC2 Public DNS in the Host field and ubuntu as the username
Click Next and add the private_key.pem file.
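Before configuring this in PyCharm, it can help to confirm that the same credentials work from a plain shell:
# Sanity check: the SSH interpreter uses the same key and username
$ ssh -i private_key.pem ubuntu@<your EC2 Public DNS>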
See this article for more details:
PyCharm setup for AWS automatic deployment
It looks like you can configure your python interpreter over SSH with the professional version of PyCharm.
Configuring Remote Interpreter + PyCharm
Finally found an answer after researching on the internet. You can use a script on the remote machine as the PyCharm interpreter. Create the following script on the remote machine and make sure it is executable.
#!/bin/bash
# Pass all arguments through to the system python, running it as root
sudo /usr/bin/python "$@"
Now change the project interpreter in PyCharm to point to the above script on the remote machine. Every script you run on the local machine then gets executed on the remote machine as a sudo user.
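For example, assuming the wrapper is saved as /home/ubuntu/sudo_python.sh (a hypothetical path):
# Make the wrapper executable
$ chmod +x /home/ubuntu/sudo_python.sh
# Sanity check: this should print root
$ /home/ubuntu/sudo_python.sh -c 'import getpass; print(getpass.getuser())'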
When I start PyCharm with a remote Python interpreter, it always performs "Uploading PyCharm helpers", even when the remote machine's IP is the same and it already contains previously uploaded helpers. Is this behaviour correct?
This is a well-known problem that can be a major obstacle to productivity, especially if you use disposable instances in your workflow. It leads to a forced coffee break of 20 minutes every time you want to connect to a remote system. Unacceptable.
Seems like PyCharm creates a build.txt file in the remote helper folder that just has the current PyCharm build number as its contents, for instance:
PY-171.4694.38
So it's possible to upload the helpers manually by using rsync on /Applications/PyCharm.app/Contents/helpers/ and finally manually creating a build.txt file with your current build number. After that, PyCharm should not attempt to re-upload them.
Example:
# From the local machine, copy the helpers to the remote host:
$ rsync -avz /Applications/PyCharm.app/Contents/helpers/ cluster:/home/xapple/.pycharm_helpers/
# Then, on the remote host:
$ echo "PY-171.4694.38" > /home/xapple/.pycharm_helpers/build.txt
$ python /home/xapple/.pycharm_helpers/pydev/setup_cython.py build_ext --inplace
In my case, several projects were deployed to the remote server by PyCharm, and all of them got stuck when one of the projects went wrong on the remote server. Solution: keep only the project you need to work on and restart PyCharm with "Invalidate Caches".
Note that -- at least as late as version 2018.3.x -- PyCharm also appears to require re-uploading the helpers whenever the local network connection changes, for some reason.
What I've observed in my case is that if, while PyCharm remains running, I relocate my laptop and connect to a different LAN, the next remote debugging session I initiate will trigger the lengthy helper upload. It turns out that the contents of the helpers directory actually uploaded in this case are exactly identical to the contents already present in that directory on the remote system (I compared them), so this upload is entirely superfluous, but PyCharm isn't able to detect this.
As there's no way I know of in PyCharm to bypass or cancel the automatic helpers upload, the only recourse is to completely exit PyCharm (close all open project windows) after each change of network connection and restart the IDE. In my experience, this lets the "checking remote helpers" phase succeed without actually uploading all the helpers again. Of course, this is a major annoyance if you have multiple projects open, but it's faster than waiting the (tens of) minutes for the agonizingly slow helpers upload to complete.
All of what other responders describe for the course of action to take when changing PyCharm versions is true. It is sufficient to use rsync, ftp, scp, or whatever to transfer the contents of the new local helpers directory (on Linux, a subdirectory of where the app is installed) to the remote system (on Linux, ~/.pycharm_helpers, where ~ is the home directory of the user name used for the remote debugging session), and update the remote build.txt in the helpers directory with the new PyCharm version.
This problem came back again 6 years later with PyCharm 2022.3.2.
The directory /Applications/PyCharm.app/Contents/helpers/ doesn't exist anymore, so the previous trick doesn't work.
What solved it this time is simply to:
Quit PyCharm.
Delete the ~/.pycharm_helpers directory on the remote server.
Relaunch PyCharm and let it do its thing.
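For step 2, something along these lines works (assuming the default helpers location and an ssh alias remote for the server):
# Remove the helpers so PyCharm re-uploads them from scratch
$ ssh remote 'rm -rf ~/.pycharm_helpers'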
According to the docs,
PyCharm checks remote helpers version on every remote run, so if you update your PyCharm version, the new helpers will be uploaded automatically and you don't need to recreate remote interpreter.
Fast (less than 3 seconds between me and DigitalOcean) solution, inspired by xApple's excellent answer.
On the remote server:
export SOURCE=<your ip>
export PORT=9000
export HELPERS=$HOME/.pycharm_helpers
# PyCharm Help -> About
export BUILD=PY-172.4343.24 # 2017/10/11
cd $HELPERS
rm -fr *
# my OS is ubuntu; adjust the firewall rules if yours differs
sudo ufw allow from $SOURCE proto tcp to any port $PORT
netcat -l -v -p $PORT | tar xz # this waits for the incoming connection
# after the transfer finishes
sudo ufw delete allow from $SOURCE proto tcp to any port $PORT
echo -n $BUILD > build.txt
python $HELPERS/pydev/setup_cython.py build_ext --inplace
On your workstation:
export TARGET=<remote server ip>
export PORT=9000
export HELPERS=<path to helpers> # for me it's $HOME/opt/pycharm-2016.3/helpers
cd $HELPERS
tar cfz - . | netcat -v $TARGET $PORT
Turning off the firewall addressed the problem in my case (macOS - Mojave). Note that this is not a general solution as it was not tested in any other environments/OS.
I'm new to Docker. I'm using Docker & docker-compose, going through a Flask tutorial. The base Docker image is python 2.7 slim.
It's running on Linux, Docker 1.11.2.
The application is working fine.
I want to get pycharm pro connecting to the remote interpreter, something I have never done before.
I followed the instructions for docker-compose. Initially it was failing because it could not connect to port 2376. I added this port to docker-compose.yml and the error went away.
However, trying to save the configuration now stalls/hangs with a "Getting Remote Interpreter Version" dialog that never completes. Also, I can't quit PyCharm. This happens in PyCharm 2016.2 and 2016.3 EAP (2nd).
The help says "SFTP support is required for copying helpers to the server".
Does this mean I need to do something?
I'm not using docker-machine
The problem was that TCP access to the Docker API is not enabled by default under Ubuntu 16.04.
There are suggestions to enable TCP/IP access.
However, JetBrains gave me the simplest solution:
If you are using Linux it is most likely that Docker installed with
its default setup and Docker is expecting to be used through UNIX
domain file socket /var/run/docker.sock. And you should specify
unix:///var/run/docker.sock in the API URL field. Please comment
whether it helps!
This suggestion worked with my Ubuntu 16.04-derived distribution.
This goes into the Docker entry in PyCharm preferences under Build, Execution, Deployment.
You can also edit this while setting up a remote interpreter, but only by making a new Docker entry.
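To verify that the daemon is actually listening on that socket, you can query the API directly (requires curl 7.40 or newer for the --unix-socket flag):
# Query the Docker API over the default UNIX socket
$ curl --unix-socket /var/run/docker.sock http://localhost/version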
TCP/IP Method
This method works if you want TCP/IP access, but this is a security risk. The socket approach is better, which is probably why it is the default.
https://coreos.com/os/docs/latest/customizing-docker.html
Customizing docker
The Docker systemd unit can be customized by overriding the unit that
ships with the default CoreOS settings. Common use-cases for doing
this are covered below.
Enable the remote API on a new socket
Create a file called /etc/systemd/system/docker-tcp.socket to make
Docker available on a TCP socket on port 2375.
[Unit]
Description=Docker Socket for the API
[Socket]
ListenStream=2375
BindIPv6Only=both
Service=docker.service
[Install]
WantedBy=sockets.target
Then enable this new socket:
systemctl enable docker-tcp.socket
systemctl stop docker
systemctl start docker-tcp.socket
systemctl start docker
Test that it’s working:
docker -H tcp://127.0.0.1:2375 ps
Once I thought to search for Ubuntu 16.04, I came across simpler solutions, but I did not test them.
For instance:
https://www.ivankrizsan.se/2016/05/18/enabling-docker-remote-api-on-ubuntu-16-04/
Edit the file /lib/systemd/system/docker.service
Modify the line that starts with ExecStart to look like this:
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:2375
Where my addition is the “-H tcp://0.0.0.0:2375” part. Save the
modified file. Restart the Docker service:
sudo service docker restart
Test that the Docker API is indeed accessible:
curl http://localhost:2375/version
I - docker-compose up
I think PyCharm will run docker-compose up. Have you tried running this command first in your terminal (from the directory containing your docker-compose.yml)?
If any errors occur, you will get more information in your terminal.
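In other words, as a first diagnostic step:
# From the directory containing docker-compose.yml
$ docker-compose up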
II - pycharm docker configuration
Otherwise it could be due to your Docker machine configuration in PyCharm.
What I do to configure my machine and make sure it is set up correctly:
1 - run docker-machine ls in your shell (see the example output after this list)
2 - copy paste the url without tcp://
3 - go to pycharm preferences -> Build, Execution, Deployment -> Docker -> + to create a new server, fill the server name field
4 - paste the previously copied url, prefixed with https://
5 - fill in the path of your machine certificates folder
6 - tick Import credentials from Docker Machine
7 - click Detect -> your machine should appear in the selection list
8 - save this server
9 - select this server when configuring your remote interpreter, from PyCharm Preferences -> Project -> Project Interpreter -> wheel -> add remote -> Docker or Docker Compose
10 - you should be able to select a service name
11 - save your new interpreter
12 - try running your test twice; sometimes it takes time to initialize
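For steps 1-2, the output of docker-machine ls looks roughly like this (names and addresses will differ on your machine):
$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM
default   *        virtualbox   Running   tcp://192.168.99.100:2376
# for step 2, you would copy 192.168.99.100:2376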
Could you please let me know how I can run code from my local machine on a remote server?
I have the source code and data on my local machine, but I would like to run the code on the remote server.
One solution would be:
Install python on the remote machine
Package your code into a python package using distutils (see http://wiki.python.org/moin/Distutils/Tutorial). Basically the process ends when you run the command python setup.py sdist in the root dir of your project and get a tar.gz file in the dist/ subfolder.
Copy your package to the remote server using scp, for example, if it is an amazon machine:
scp -i myPemFile.pem local-python-package.tar.gz remote_user_name@remote_ip:remote_folder
Run sudo pip install local-python-package.tar.gz on the remote server
Now you can either SSH to the remote machine and run your code or use some remote enabler such as fabric to start commands on the remote server (works for any shell command, specifically python scripts)
Alternatively, you can just skip the package building in [2]: if you have a simple script, just scp the script itself to the remote machine and proceed with a remote python myscript.py (see the sketch below).
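A minimal sketch of that simpler variant (the key file, user, and IP are the placeholders from above):
# Copy the script to the remote machine and run it there
$ scp -i myPemFile.pem myscript.py remote_user_name@remote_ip:~/
$ ssh -i myPemFile.pem remote_user_name@remote_ip 'python myscript.py'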
Hope this helps
I would recommend setting up a git repository on the remote server and connecting your local source to it (for git, you can read about how to do that here: http://git-scm.com/book).
Then you can use e.g. Eclipse EGit, and after you change your local code you can push it to the remote location.
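A rough sketch of that setup (the paths, user, and host here are placeholders):
# On the remote server: create a bare repository to push into
$ ssh user@remote 'git init --bare ~/project.git'
# Locally: add it as a remote and push your changes
$ git remote add server user@remote:~/project.git
$ git push server master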