I'm trying to set up a Python script as an Azure WebJob. The script uses several external dependencies, and the documentation seems to have no reference to using a virtual env for WebJobs.
How can I set up a virtual env for the WebJob? Preferably without collecting the environment locally and running the script through run.cmd.
If you are trying to activate an already existing virtualenv, you can call its activate script. For instance, if you want to activate the web app's virtualenv, you can run
/path/to/web-app/env/Scripts/activate.bat
to activate that particular virtualenv.
I had the same question, and found an answer in this post.
Short answer: put the directory of the module you want to include in the ZIP file you upload to the WebJob. You can then reference it directly in your code.
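For example, a minimal sketch (the requests folder and the print check are just illustrative assumptions about what the uploaded ZIP might contain):
# assumed layout of the uploaded ZIP:
#   run.py        <- the WebJob script
#   requests/     <- the package folder copied in next to it
# The script's own directory is on sys.path, so the bundled copy is importable.
import requests
print(requests.__file__)  # should point inside the WebJob's folder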
Hope that helps!
This is kind of a workaround, but it works. Just add these lines to the WebJob script.
import sys
# point Python at the virtualenv's site-packages deployed with the web app
site_packages = "D:\\home\\site\\wwwroot\\env\\Lib\\site-packages"
sys.path.append(site_packages)
import requests
I have an Ubuntu server with restricted access, where I will be hosting my application.
I am trying to run Python scripts that were working with the default packages provided by the server. Now I want to work with numpy and other modules.
As I cannot install or download anything on the server, I created a Python environment on my local machine (Windows) using WSL to emulate the Linux file system, copied the Python environment files to the application directory, and deployed to the cloud.
The problem is that no matter what I try, I cannot import numpy (or any module that I copied). I moved all the site-packages to the location of my Python script (since the current script's path will be on the system path) and tried to import, but no luck.
Please help me crack this in any way possible.
I have been trying to achieve this for the past 6 days and cannot do it.
Please, I have to achieve this at any cost. I have attached my latest structure.
Thank you in advance.
My Folder structure screenshot:
EDIT:
Ok. Let me get this straight. I have a Linux server (Ubuntu 18.04) where I am hosting an application. From that application, I am calling Python scripts for some machine learning purposes. It is a restricted server and I cannot access it directly. The only way I found out the Linux distro version was through Java code, by calling some terminal commands using "ProcessBuilder". As the server is highly restricted, I cannot run any Linux commands like echo, set, export, sudo, wget/curl, etc. Since python3 is already provided by Linux (by default), I am using that python3 command to call my Python scripts (from Java code using "ProcessBuilder") and execute them.
If it is a normal script (using only Python standard libraries), it works fine. In one of the scripts I am using "numpy", so I need to import that module. I am doing the development in a Windows environment, so to emulate the Linux file system for importing packages I created a virtual environment in WSL with the same Ubuntu version, installed numpy, and then replaced all the symlinks inside those packages with the actual files. Then I copied the entire environment into my resources directory (which is in the Windows environment) and deployed. No luck.
So I made a zip file of only the "site-packages" folder inside that environment, copied it into my resources folder, and deployed. No luck. The error I always see is about "numpy.core._multiarray_umath". All the articles, and the GitHub issues too, say to re-install the package. But I cannot install anything; I don't have that kind of access.
How can I import numpy without installing it? If there is any workaround to achieve this, please explain and I will do it. Even if it is harder, more complex, and time-consuming, I am okay with it. I want to achieve this.
Let me preface this with a warning: please check the AUP (acceptable use policy) of the server you are using, and/or contact the server administrator to make sure you are not violating any rules.
I can think of quite a few reasons why this might not work. If it doesn't, there may still be workarounds, but they'll be technically complex.
So if I'm understanding you correctly:
You have very limited access to the server; basically only the ability to upload (apparently) and run Java code.
You've also been able to upload Python code and run it from your Java code through ProcessBuilder.
You do not have access to log in to a shell, execute arbitrary commands other than through ProcessBuilder, etc.
Of course, you do not have the ability to install site-packages into the system Python environment.
So ultimately, what you'll probably need to do is something like:
Create a Python3 virtual environment (which doesn't seem to be what you are actually doing) on WSL. By a "Python3 virtual environment", I mean venv, which allows you to create a user-level (not system-level) directory with your packages.
So something like (from inside your project directory):
python3 -m venv venv
source ./venv/bin/activate
Your path will be adjusted so that your python3 and pip3 commands will be found in the venv path. pip3 install numpy will install it into this virtual environment (not the global/system Python).
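As a quick sanity check (just an illustrative snippet, not one of the required steps), you can confirm from inside the activated venv that numpy resolves from the venv rather than the system Python:
import sys
import numpy
# both of these should point inside the venv directory you just created
print(sys.prefix)
print(numpy.__file__)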
Upload that entire venv directory to the server. You seem to have some way of doing this already.
You're going to have to have some way of running the Bash shell through ProcessBuilder. Since you have the ability to run python3 through ProcessBuilder, I'm kind of assuming that you will be able to do this as well.
You'll need to (through ProcessBuilder) activate the virtual environment on the server, <path_to_project>/venv/bin/activate, and, in the same Bash shell, run your code.
This will look something like:
bash -c "source ./venv/bin/activate; python3 main.py"
I have a Python script. When I run the script using my account it runs without problem,
but when I use the root account I get an issue like this.
I don't know why I have a problem with importing the module.
Use an absolute Python path to point to the version of Python that has the right packages installed.
The command which python, executed as the user for whom the script works, should help you find it.
If you used pip to install any package locally, then you might also have to ensure that the environment variable $HOME points to /home/robot.
In general I'd recommend creating a virtualenv for your specific Python project and either using it explicitly in the sudo call or, if you prefer, changing the #! line in your script to point to the right virtualenv.
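For example, a minimal sketch (the venv path /home/robot/venv is only a placeholder; use whatever which python reports for the account where the script works, and "requests" stands in for whichever module fails to import):
#!/home/robot/venv/bin/python3
# with this shebang the script always runs on the venv's interpreter,
# no matter which user (your account or root) launches it
import requests  # placeholder for the module that fails under root
print(requests.__version__)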
I need to run Python scripts on Azure WebJobs but I am getting the error below. I tried all the possible ways, like scripts with a virtualenv and appending the path, but none of them is working.
[10/08/2018 11:27:27 > ca6024: ERR ] ImportError: No module named request
Can you please help me fix this?
The script used in the file is:
import urllib.request
print('success')
According to
https://docs.python.org/2/library/urllib.html
you should check your Python version; the urllib module is different between Python 2 and Python 3.
In Python 2.7, use:
urllib.urlopen()
instead of:
urllib.request.urlopen()
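If the same script has to run on either interpreter, a small version check works too (just a sketch; example.com is only a placeholder URL):
import sys
if sys.version_info[0] >= 3:
    from urllib.request import urlopen   # Python 3
else:
    from urllib import urlopen           # Python 2, as noted above
print(urlopen("https://example.com").getcode())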
Please refer to the steps below, which I used previously to upload a Python script into WebJobs.
1: Use the virtualenv tool to create an independent Python runtime environment on your system. If you don't have it, just install it first with the command pip install virtualenv
If it installed successfully, you will see it in your python/Scripts folder.
2: Run the command to create the independent Python runtime environment.
3: Then go into the created directory's Scripts folder and activate it (this step is important, don't miss it).
Please don't close this command window; use pip install <your libraryname> to download external libraries in this same window, for example pip install requests in your case.
4: Compress Sample.py together with the Lib/site-packages folder containing the libraries you rely on into a single ZIP file (see the sketch after these steps).
5: Create a WebJob in the Web App service and upload the ZIP file, then you can execute your WebJob and check the log.
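For reference, here is a minimal sketch of what Sample.py could look like when packaged this way (the relative Lib/site-packages location and the requests import are assumptions based on the layout described in step 4):
import os
import sys
# make the Lib/site-packages folder shipped inside the WebJob ZIP importable
here = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(here, "Lib", "site-packages"))
import requests  # resolved from the bundled folder, not the system Python
print('success')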
You could also refer to the SO thread: Options for running Python scripts in Azure
I would like to make changes (and possibly contribute if it's any good) to a public project on GitHub. I've forked and cloned the module, but I'm unclear how to get my program to import the local library instead of the 'official' installed module.
I tried cloning it into my project folder, but when I imported it and tried to use it things got weird, e.g. calmap\calmap.plot()
I also tried doing sys.path.append with the folder location, but it still seems to import the official one instead of the fork.
I'm assuming that I could put my program inside the module folder so that the module would be found first, but I can't imagine that's the 'correct' way to do it.
|
|-->My_Project_Folder/
|
|-->Forked_Module/
|-->docs/
|-->Forked_Module/
|-->__init__.py
If you're already using anaconda, then you can create a new environment just for the development of this feature.
First, create a new environment:
# develop_lib is the name of the environment.
# You can pick anything that is memorable instead.
# You can also use whatever python version you require ...
conda create -n develop_lib python=3.5
Once you have the environment, then you probably want to enter that environment in your current session:
source activate develop_lib
Ok, now that you have the environment set up, you'll probably need to install some requirements for whatever third party library you're developing. I don't know what those dependencies are, but you can install them in your environment using conda install (if they're available) or using pip. Now you're ready to start working with the library that you want to update. python setup.py develop should be available assuming that the package has a standard build process. After you've run that, things should be good to go. You can make changes, run tests, etc.
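Once python setup.py develop has run, a quick way to confirm that Python picks up your working copy (calmap here is just the library from the question, standing in for whatever you are developing):
import calmap
# should print a path inside your clone, not the environment's site-packages
print(calmap.__file__)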
If you use sys.path.append(), the new path will only be used if none of the previous entries contains the module you are importing. If you want the added path to take precedence over all the older ones, you have to use
sys.path.insert(0, "path")
This way, if you print sys.path you will see that the added path is at the beginning of the list, and the module you are importing will be loaded from the path you specified.
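Putting it together for this case (a sketch; the path below is only a placeholder for wherever the fork was cloned):
import sys
# put the cloned fork ahead of the installed package on the import path
sys.path.insert(0, "/path/to/My_Project_Folder/Forked_Module")
import calmap
print(calmap.__file__)  # should now point into the fork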
To import from the forked repo instead of the installed Python package, you should
make a virtual environment for the cloned project and then activate it; that way the environment is isolated from the globally installed packages.
1- you need to fork your repo;
2- create a virtual env and activate it;
3- clone your repo.
Now if you print the imported module you will see the path of the forked repo.
import any_module
print(any_module)
I have a bunch of Python scripts that I want to deploy to other machines. The thing is, I want everything to be self-contained and not depend on the other machines' libraries. For example, I don't want to require users to have virtualenv and pip installed in order for my app to work.
On my local machine I use a virtual environment with --no-site-packages and pip install -r requirements.txt to get everything in place.
The bad news is that virtualenv's activate script has my local path hardcoded into it, and using the --relocatable option does not help with this situation, so I suppose virtualenv is out of the question?
What I would like to have is something similar to this:
base_app_dir:
- main_app_dir
- my_init_script.py
- bin (includes the python binary)
- lib (includes pip-installed packages and python libraries)
so that I can instruct the end user to just cd into base_app_dir and run ./bin/python my_init_script.py, but that means I now need to instruct Python to look into my ./lib folder when importing packages.
I've tried sys.path.insert(1, 'base_app_dir/lib/site-packages') but this works on a per-module basis.
Also, what about the lookup of default Python modules? Right now, for example, when I import hashlib it gets picked up from /usr/lib/python2.7/hashlib.py. I would like to deploy these standard-library modules as well and instruct the app to import them from my custom location.
Py2exe or creating a .deb file is not an option right now, so please try to address my specific question.