I downloaded the Snappy library sources to work with compression, and everything was fine on one machine, but it doesn't work on another. Both machines have exactly the same hardware/OS configuration plus Python 2.7.3.
All I did was run ./configure && make && make install.
None of these steps produced any errors, and the library installed successfully to the default lib directory, but Python can't see it at all. help('modules') and pip freeze don't show snappy on the second machine, and as a result I can't import it.
I even tried 'breaking' the structure and installing it into different lib directories, but even that didn't work. I don't think it's related to system environment variables, since Python should have exactly the same configuration on both machines (Amazon EC2).
Does anyone know how to fix this issue?
I just found python-snappy on GitHub and installed it via Python. Not a permanent solution, but at least it's something.
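For anyone else stuck here, a quick sanity check that the binding actually works (a minimal sketch with throwaway data, assuming the module installs under the name snappy):

import snappy

payload = b"some example data " * 100
compressed = snappy.compress(payload)             # compress a bytes payload
assert snappy.decompress(compressed) == payload   # round-trip back to the original
print(len(payload), "->", len(compressed))        # show the size reduction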
I'm planning to run a machine learning script consistently on one of my Google Cloud VMs.
When I configured the remote interpreter, unfortunately all the imported libraries were no longer recognized (as they might not be installed in the virtual environment in the cloud). I tried to install the missing modules (for example yfinance) through the PyCharm terminal within my remote host connection over SSH and SFTP. So I basically chose the 188.283.xxx.xxx #username in the PyCharm terminal and used pip3 install to install the missing modules. Unfortunately my server (due to limited resources) collapses during the build process.
Is there a way to automatically install the needed libraries when connecting the script to the remote interpreter?
Shouldn't that be the standard procedure? And if not: does my approach make sense?
Thank you all in advance
Peter
You could try installing your modules at runtime (for example by invoking pip from within the script), but I'd just opt for creating a requirements file with pip freeze > requirements.txt, which you can then use on the server to pull in all your dependencies in one go (pip install -r requirements.txt) before the first run.
If it fails for any reason (or when you've updated the requirements file) you can run it again and it will only install whatever wasn't installed before.
This way it's clear which modules (and which version of each module) you've installed. In my experience with machine learning, using the right version or combination of versions can be important, so it makes sense to pin those rather than always pulling the latest version. This especially helps when trying to run an older project.
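For illustration, a requirements.txt could look something like this (the package names and pinned versions below are only examples, not a recommendation for your project):

yfinance==0.1.63
pandas==1.1.5
scikit-learn==0.24.2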
I'm having a maddening problem installing a package into a local environment after building it and placing it in a Bitbucket repository.
I built the package.
I was successfully able to connect to it from Pycharm locally by doing:
pip install -e path/to/repository
I then pushed the built package to bitbucket.
I then switched local environments and pip installed the package from bitbucket as follows:
pip install git+https://my_name#bitbucket.org/my_company/my_repo.git
The package successfully installed locally.
I can see it in PyCharm, and PyCharm sees it in the environment's site-packages.
I can't tell if it is installed properly, but I note that there are no .py files.
The script in this environment doesn't see the package and fails with an error when I try to import it.
Any guidance on what could be wrong? Again, everything works fine when I'm local and using pip install -e; the code works. I just can't get it to work from the distribution package pushed to the remote repository.
Thanks in advance.
I don't know if this counts as a true solution, but I got it working, and after about 10 hours of work on it I'm not exactly sure how. I believe the issue boils down to a stray comma - yes, a comma - in the setup file, which was not enough to throw an error but somehow left the build missing a necessary parameter, resulting in a bad build when installing from the repository.
This also raises questions for me about how pip install -e works, because my local editable install worked fine.
It's troubling that I can't definitively identify the cause of this issue even though I seem to have resolved it.
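For what it's worth, one parameter that produces exactly this symptom when missing (a package that installs "successfully" but ships no .py files) is packages. A minimal setup.py sketch, with hypothetical names:

from setuptools import setup, find_packages

setup(
    name="my_package",         # hypothetical project name
    version="0.1.0",
    packages=find_packages(),  # without packages= (or py_modules=), the built
                               # distribution can contain no Python code at all
    install_requires=[],
)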
I am trying to compile my Python program into an Android APK file. I installed the package 'python-for-android'. When I tried to use it, I got an error saying C:\Program Files\Python39\python.exe: No module named python-for-android. Can someone please tell me what is going wrong?
https://pypi.org/project/python-for-android/
It looks like the library was pip installed into a different location than the Python you are actually running. If you can give more details about how you installed the package, it might help identify the issue.
Is it possible that you have two Python versions, and pip installed the package into the wrong one?
Using virtual environments might make it easier to understand what is going on. If you pip install into a virtual environment, you can be pretty confident that it is installing into the correct version of Python.
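If it helps, here is a small diagnostic you can run with whichever interpreter you launch the program from (nothing specific to python-for-android, just standard-library introspection):

import sys

print(sys.executable)   # the python.exe that is actually running
for entry in sys.path:  # the directories this interpreter searches for modules
    print(entry)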
First, my reasons for doing this - I know it's a bad idea, but I'm out of ideas.
I want to install a package that requires a newer ld version than the one in the repos of my CentOS 6.5. So I could either set everything up in Docker and run that in production - something I lack experience with and don't feel comfortable doing for a serious project - or upgrade ld manually by building from an external source, which I've read could wreck my CentOS. So the last option I'm left with is to install the package on another machine and manually copy it into site-packages.
I have successfully installed the package on my home laptop under Debian.
Everywhere I looked, the advice was to copy the whole site-packages directory - something I don't want to do, since I have different packages on the two machines and want to avoid messing up anything else.
I copied the package's built .so and its .egg-info. On the target machine, pip freeze does indeed show the transferred package; however, Python can't find it when I try to import and use it.
Am I missing something else?
Don't do any of that.
Don't mess with the system Python's site-packages dir; it belongs to the system Python environment only. You should only add or remove code in there using your OS package manager (that's yum for CentOS). This is especially true on Linux, where many OS services rely on the system Python.
So what should you do instead? Use a virtualenv and/or pipx to isolate the package you want to install, and its dependencies, from the system versions.
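As a minimal sketch of that idea using the standard-library venv module (this assumes a Python 3 interpreter is available on the box; the target path is just an example):

import venv

# create an isolated environment; anything installed into it
# stays out of the system site-packages
venv.create("/opt/myapp/env", with_pip=True)

After that, /opt/myapp/env/bin/pip install <your-package> installs into that environment only.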
I've been trying to get OpenCV and Python working together on both Ubuntu and Windows XP. I've failed on both.
I've read many webpages and threads about "how to install" it, but none has worked (the worst part is that they all say roughly the same thing).
Steps (Windows XP):
Installed Python 2.7 by default (works perfectly)
Installed PIL and cx_Freeze (could they create a conflict? I don't think so :s)
Installed OpenCV 2.2 by default (OpenCV-2.2.0-win32-vs2010.exe), and it isn't recognized inside a .py file, neither as import opencv.cv nor the cookbook way, import cv (I skipped the Visual Studio steps since I'll use it with Python)
Checked the path (it's OK, it includes the Opencv2.2\bin entry)
Rechecked webpages and stuff
Steps (Ubuntu):
Had Python working
sudo apt-get install, cmake, make, sudo make install, etc. (from the tutorials)
Same thing... module not recognized
Please can you help?
Update:
I managed to install it and have it recognized by the system (I used http://www.lfd.uci.edu/~gohlke/pythonlibs/#opencv and it worked perfectly after that).
The problem now is that it crashes when I try to use CaptureFromFile. Someone else reported it 3 days ago, so now I wait.
I'll check the other wrappers; maybe one of them will work.
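For reference, the call that triggers the crash is roughly the following, using the old cv module API (the file name is just a placeholder):

import cv

capture = cv.CaptureFromFile("some_video.avi")  # this is where it crashes for me
frame = cv.QueryFrame(capture)                  # would grab the first frame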
For Windows, see my web page: http://www.modernmind.org/wiki/OpenCV
For Ubuntu, you should just need to apt-get install python-dev, then generate the makefiles with CMake, build, and then make install. In order to build the Python bindings you need the Python header files on your system, and you probably don't have them. When you run the configure step in CMake, make sure you don't see any messages at the top about PYTHON_INCLUDE not being defined.
To access a library from Python, you need its Python bindings installed into the Python version you are using. From what you write above, it seems to me that you installed OpenCV in general but didn't specifically install the Python bindings. That is why it doesn't work.
I'm not sure how to install the Python wrappers, and the OpenCV documentation is a bit sparse on that point. But if you did build them (and that needs to be turned on explicitly, the docs say), they seem to end up in opencv/release/lib.
Look at "Testing Python wrappers" on http://opencv.willowgarage.com/wiki/InstallGuide
If there is still no luck, there are a bunch of alternative Python wrappers available: http://pypi.python.org/pypi?%3Aaction=search&term=opencv&submit=search
Maybe they are better documented.