PyCharm Professional has a Remote Deployment feature that allows for editing, running and debugging code remotely. This is a powerful feature when writing short scripts and top-level applications that make use of standard or third-party library packages. You can even create a virtualenv on the remote, with all dependency packages installed, and use that to execute the remote program.
However, when writing applications that make use of multiple packages that are developed alongside the application, it becomes necessary to edit those packages as well. Without PyCharm, the usual way to do this is with pip install -e . or python setup.py develop, which integrates the source directory with Python's package system, making it possible to edit a number of packages alongside the application.
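For context, an editable install of each locally developed package, plus the application itself, looks something like this (the package paths are only illustrative):

pip install -e ../mycorelib
pip install -e ../myotherlib
pip install -e .  # the application itself

With that in place, edits to any of the checked-out packages are picked up immediately without reinstalling.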
With a single package, I've found that PyCharm will deploy the package code into its remote workspace, which works OK for debugging if I'm running a script or entry point from within this same package.
The problem I'm having with PyCharm is that it's not clear how to remotely edit and debug multiple packages. Let's say I have a PyCharm project open for one of these packages. When finding references or debugging into code that lives in another (yet still developed-by-me) package, PyCharm shows a cached local copy of that second package. This is fine until I edit the second package on the remote host, after which the cached version is out of sync and doesn't automatically update, leading to a mismatch between the execution result and the debugger/editor state.
There are other quirks too, such as the edited package not actually being installed into the remote's virtualenv.
I haven't been able to find a proper guide to this workflow in PyCharm's documentation, and I'm starting to wonder if I'm either going about this the entirely wrong way, or maybe PyCharm just doesn't support this kind of app+multiple-packages development?
Related
I'm planning to run a machine learning script consistently on one of my Google Cloud VMs.
When I configured the remote interpreter, unfortunately all the imported libraries were no longer recognized (as they are presumably not installed in the virtual environment in the cloud). I tried to install the missing modules (for example yfinance) through the PyCharm terminal within my remote host connection over SSH and SFTP. So I basically chose the 188.283.xxx.xxx #username session in the PyCharm terminal and used pip3 install to install the missing modules. Unfortunately my server (due to limited resources) collapses during the build process.
Is there a way to automatically install the needed libraries when connecting the script to the remote interpreter?
Shouldn't that be the standard procedure? And if not: does my approach make sense?
Thank you all in advance
Peter
You could install your modules at runtime from within the script (for example by invoking pip programmatically), but I'd just opt for creating a requirements file using pip freeze > requirements.txt, which you can then use on the server to get all your dependencies in one go (pip install -r requirements.txt) before your first run.
If it fails for any reason (or when you've updated the requirements file) you can run it again and it will only install whatever wasn't installed before.
This way it's clear which modules (and which version of each module) you've installed. In my experience with machine learning, using the right version or combination of versions can be important, so it makes sense to pin those rather than always grabbing the latest version. This especially helps when trying to run some older project.
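For example, a pinned requirements.txt might look like this (package names and versions are purely illustrative):

yfinance==0.1.63
pandas==1.1.5
numpy==1.19.5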
I am using PyCharm to develop a Python project, which uses an external library called win10toast. I have installed win10toast using PyCharm. However, when I tried to run the .py file from cmd (i.e. running the Python file outside PyCharm), an error shows up:
ModuleNotFoundError: No module named 'win10toast'.
I have python 3.6.4. I installed win10toast using PyCharm.
from win10toast import ToastNotifier
I expect the program to run without any error, but currently I am getting the ModuleNotFound error.
Python can be tricky to run properly because it is sensitive to where you installed your dependencies (such as external libraries and packages). If you installed Python to one directory but accidentally installed the external library into another, your .py program will be unable to import that library because it doesn't live anywhere the Python you're running looks for packages.
Look up where you installed Python on your computer and then find where you installed the external library. Once you find where the external library was installed, move its entire package contents to the same directory where Python is installed. Or better yet, reinstall the external library with pip into the same directory as Python.
If you're on Mac, Python and its related dependencies are usually stored somewhere in /usr/bin. If you're on Windows, it will be stored somewhere on your C:\ drive (possibly somewhere in C:\Users\username\AppData\Local). If you're on Linux, it will be stored somewhere in /usr/bin. Whatever you do, don't move Python from wherever it is, because that can mess up your system on certain operating systems like Mac, which comes with its own version of Python (Python 2.7 I believe, which is outdated anyway).
Lastly, you may have two different versions of Python on your computer, which is common; Python 2.7 and Python 3+. If you wrote your program in one version, but ran it from the other, the external library can only be called from whichever Python version you installed it to. Try running your .py program with python3 instead of python (or vice versa) and see what happens. If it works with one python version over the other, that tells you that the external library is installed in the other version's directory.
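A quick way to check which interpreter a given command actually resolves to, and to install into exactly that interpreter, is something like this (using win10toast from the question as the example package):

python -c "import sys; print(sys.executable)"  # shows which Python "python" points at
python -m pip install win10toast               # installs into that same Python

Repeat with python3 in place of python if that's the command you use to run the script.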
That should solve your issue.
I would suggest that you not use PyCharm to install packages, at least not if the result deviates in the slightest from doing a "pip install" at the command line. I see no reason to involve PyCharm in configuring Python installations. It's just asking for trouble.
I admit that I'm not familiar with the practice I'm suggesting you avoid. I've been using PyCharm since pretty much the week it came out (I was an avid user of the IntelliJ Python plugin before that), and have never once considered doing anything but installing Python modules at the command line. That way, I know exactly where those modules are going (into which base Python install or venv). Also, I know I'm doing all that I can to minimize the differences that I might see between running code in PyCharm and running it at the command line. I'm making my suggestion based solely on this practice having never gone wrong for me.
I have multiple base Python versions installed, and dozens of venvs defined on top of those. PyCharm is great at letting me indicate which of these I want to apply to any project or Run/Debug configuration, and it utilizes them seamlessly. But again, I administer these environments at the command line exclusively.
I still experience issues in switching between the command line and PyCharm in terms of one module referencing others in a single source tree. My company has come up with a simple solution to this that ensures that all of our Python scripts still run when moving away from PyCharm and its logic for maintaining the Python path within a project. I've explained the mechanism before on S.O.; I'd be happy to find it if anyone is interested.
The win10toast library got installed into your project's virtualenv, in a directory like: YOUR_PYCHARM_WORKSPACE\PycharmProjects\YOUR_PROJECT_NAME\venv\Lib\site-packages
But when you run your program from cmd, you are using the system Python interpreter, which looks in the site-packages directory of that installation instead, for example: C:\Python27\Lib\site-packages
So you can install the win10toast library into that Python installation using pip as well.
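For example, assuming the standalone installation really is at C:\Python27 as above (adjust the path to wherever your Python actually lives), running pip through that specific interpreter guarantees the package lands in its site-packages:

C:\Python27\python.exe -m pip install win10toast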
I have a python app with its setup.py that's working just fine to install it through setuptools. I am then packaging it up in DEB and PKGNG using the excellent Effing package management. I've also made some quick tests with setuptools-pkg and that seems to work too.
Now I have a need to distribute the packages including init scripts to start/stop/manage the service. I have my init scripts in the source repo and, according to what seems to be best practice, I'm not doing anything with them in setuptools and I'm handling them in the os-specific packaging: for debian-based systems I use the --deb-init, --deb-upstart and --deb-systemd FPM options as needed.
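For reference, a sketch of what such an invocation looks like on a Debian-based target (the service file path and name are just examples):

fpm -s python -t deb --deb-systemd ./init/myapp.service ./setup.py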
How can I build a FreeBSD package that includes the correct rc.d script, using FPM or through any other means?
All the examples I've seen are adding the rc.d script when building a package through the ports collection but this is an internal app and is not going to be published to the Ports or on PyPi. I want to be able to check out the repository on a FreeBSD system, launch a command that gives me a package, distribute it to other FreeBSD systems, install it using pkg and have my init script correctly deployed to /usr/local/etc/rc.d/<myappname>. There's no need to keep using FPM for that, anything works as long as it gives me a well-formed package.
I would highly suggest creating your package as if it were any other port, whether it is going to be published or not.
One of the advantages you inherit by doing this is that you can also include all your tests and automate the deployment, getting out of the box the basis for a continuous integration/delivery setup.
Check out poudriere. You could indeed maintain a set of custom ports with your very own settings and distribute them across your environments without any hassle:
pkg install -r your-poudriere yourpkg
In case this is too much or doesn't adapt well to your use case, you can always fall back to Ansible, where you could create a custom rc.d script within a template of an Ansible role.
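For reference, a minimal rc.d script of the kind such a template would render might look roughly like this; the name, paths and daemon invocation are placeholders:

#!/bin/sh
#
# PROVIDE: myapp
# REQUIRE: NETWORKING
# KEYWORD: shutdown

. /etc/rc.subr

name="myapp"
rcvar="${name}_enable"
pidfile="/var/run/${name}.pid"
# run the app under daemon(8) so rc can track and stop it
command="/usr/sbin/daemon"
command_args="-p ${pidfile} /usr/local/bin/myapp"

load_rc_config $name
: ${myapp_enable:="NO"}

run_rc_command "$1"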
If you just want to build and deploy something, let's say a microservice, then pkg is probably not the best tool; maybe you just need a supervisor that can work on all your platforms (sysutils/immortal) so that you could just distribute your code and have a single recipe for starting/stopping the service.
nbari's answer is probably the Right Way™ to do this and I'd probably create my own "port" and use that to build the package on a central host.
At the time of my original question I had taken a different approach that I'm reporting here for the sake of completeness.
I am still building the application package (i.e. myapp-1.0.0.txz) with fpm -s python -t freebsd, which basically uses Python's setuptools infrastructure to get the necessary information, and I don't include any rc.d file in it.
I also build a second package, which I will call myapp-init-1.0.0.txz, with the directory source type (i.e. fpm -s dir -t freebsd), and I only include the init script in that package.
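A sketch of the two invocations, with illustrative names and paths:

fpm -s python -t freebsd ./setup.py
fpm -s dir -t freebsd -n myapp-init -v 1.0.0 ./rc.d/myapp=/usr/local/etc/rc.d/myapp

The source=destination mapping in the second command is what places the script under /usr/local/etc/rc.d on the target system.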
Both packages get distributed to hosts and installed, thus solving my distribution issue.
I have a project with multiple dependencies installed using virtualenv and pip. I want to run my project on a server which does not have pip installed. Unfortunately, installing pip is not an option.
Is there a way to export my required packages and bundle them with my project? What is the common approach in this situation?
Twitter uses pex files to bundle Python code with its dependencies. This will produce a single file. Another relevant tool is platter which also aims to reduce the complexity of deploying Python code to a server.
Another alternative is to write a tool yourself which creates a zip file with the Python code and its dependencies and unzips it in the right location on the server.
In Python 3.5 the module zipapp was introduced to improve support for this way of deploying / using code. This allows you to manage the creation of zip files containing Python code and run them directly using the Python interpreter.
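A minimal zipapp workflow, assuming a source directory app/ whose main.py exposes a main() function, and vendoring the dependencies into it first:

pip install -r requirements.txt --target app   # copy the dependencies next to the code
python -m zipapp app -m "main:main" -o myapp.pyz
python myapp.pyz                               # runs anywhere a compatible interpreter exists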
Simeon Visser's answer is a good way to deal with that. Mine is to build my Python project with buildout.
This may be outside of scope of the question, but if your need is deploying applications to servers with their dependencies, have a look at virtualization and linux containers.
It is by far the most used solution to this problem, and will work with any type of application (python or not), and it is lightweight (the performance hit of LXC is not noticeable in most cases, and isolating apps is a GREAT feature).
Docker containers, besides being trendy right now, are a very convenient way to deploy applications without caring about dependencies, etc...
The same goes for development envs with vagrant.
First let me explain the current situation:
We have several Python applications which depend on custom (not publicly released) packages as well as generally available packages. These dependencies are all installed in the system Python installation. Distribution of the application is done via git, by source. All these computers are hidden inside a corporate network and don't have internet access.
This approach is a bit of a pain in the ass since it has the following downsides:
Libs have to be installed manually on each computer :(
How can I better deploy an application? I recently saw virtualenv, which seems to be the solution, but I don't quite see how yet.
virtualenv creates a clean Python instance for my application. How exactly should I deploy this so that users of the software can easily start it?
Should there be a startup script inside the application which creates the virtualenv during start?
The next problem is that the computers don't have internet access. I know that I can specify a custom location for packages (network share?) but is that the right approach? Or should I deploy the zipped packages too?
Would another approach be to ship the whole Python instance, so the user doesn't have to set up the virtualenv? In this Python instance all necessary packages would be pre-installed.
Since our apps are growing fast, we have a short release cycle (2 weeks). Deploying via git was very easy. Users could pull from a stable branch via an update script to get the latest release - would that still be possible, or are there better approaches?
I know that these are a lot of questions. Hopefully someone can answer them or give me some advice.
You can use pip to install directly from git:
pip install -e git+http://192.168.1.1/git/packagename#egg=packagename
This applies whether you use virtualenv (which you should) or not.
You can also create a requirements.txt file containing all the stuff you want installed:
-e git+http://192.168.1.1/git/packagename#egg=packagename
-e git+http://192.168.1.1/git/packagename2#egg=packagename2
And then you just do this:
pip install -r requirements.txt
So the deployment procedure would consist of getting the requirements.txt file and then executing the above command. Adding virtualenv would make it cleaner, not easier; without virtualenv you would pollute the system-wide Python installation. virtualenv is meant to provide a solution for running many apps, each in its own distinct virtual Python environment; it doesn't have much to do with how you actually install stuff in that environment.
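So, as a sketch, a deploy on the server boils down to something like this (the environment path is arbitrary):

virtualenv /opt/myapp-env
/opt/myapp-env/bin/pip install -r requirements.txt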