virtualenv doesn't work on a network drive on other computers

I set up a virtualenv on a network drive using the virtualenv Python module. It generated a few folders: Include, Lib, Scripts, share, tcl. Next to these folders I created a new one called src and pasted a Django app called vistool into it. Then I created two batch scripts so that other users of the network drive can run this app:
run.bat (main file):
cd /d "Z:\xx\Tools\New\Widget - Graphs"
cd visual\Scripts
start server.bat
timeout 30
start http://127.0.0.1:8000/graphtool/
This one seems to work, as it does everything it should.
The other one, called from inside run.bat, is server.bat, which I placed in Scripts (the same folder where activate.bat is).
server.bat (for calling activate.bat and django built-in server):
:: activate the virtualenv, then start Django's built-in development server
call activate.bat
cd ..
cd src\vistool
call python manage.py runserver
Now, these two scripts work fine for me and do what they should: they open a tab with the Django project in the browser. But on other computers in my company this doesn't work, and I have no idea why. The activate.bat from the virtualenv package sets
set "VIRTUAL_ENV=Z:\xxx\Tools\New\Widget - Graphs\visual"
which should be fine, since everyone has this network drive mapped the same way. The error they get is: "Could not find django", even though I set up my env correctly and installed all the packages I use in it. And activate.bat seems to work for them. It looks like the virtualenv can't see the installed packages? But then why does it work for me?
Edit: When I try to install something on their computers, I get an error that runpy is not installed. Trying pip install runpy doesn't help either.
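A quick way to see what the other machines actually resolve (a diagnostic sketch, not a fix; run after calling activate.bat):
where python
python -c "import sys; print(sys.executable); print(sys.prefix)"
If sys.prefix points at a local Python installation instead of the visual folder on Z:, the virtualenv isn't really active, which would explain both the missing Django and the odd runpy error (runpy is part of the standard library and cannot be pip-installed). Keep in mind that virtualenvs embed absolute paths when created, so an environment built on one machine is generally not relocatable to another.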

Related

How to run GitHub Actions CI workflows locally from within a Python venv using act tool? (FATA[0000]: .env is a directory)

I forked a repository to my GH page, cloned it, changed directory into it, and created a venv inside with python3.9 -m venv .env.
I want to use act to run GitHub Actions' CI workflows locally every time before pushing commits to a PR. I installed this version of act using pikaur. However, trying to list the available act commands with act -l (or any other command) from the project root dir fails with the error FATA[0000] Error loading from ~/Development/cvat/.env: read ~/Development/cvat/.env: is a directory. The workflows are all there, and act is said to work out of the box if the workflow .yamls exist. Do I understand correctly that an env file is required too and should be created explicitly? Renaming the .env directory would clearly break everything inside it, so that's not an option. What can I do at this point?
Using Arch Linux, Python 3.9 and act 0.2.31-1.
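One possible workaround, assuming this act build supports the --env-file flag (it defaults to .env, which is why act trips over the venv directory): point it at a differently named, possibly empty, env file:
$ touch my.env
$ act -l --env-file my.env
Alternatively, since a venv is cheap to recreate (unlike renaming it, which breaks its internal paths), deleting .env and recreating it as e.g. .venv with python3.9 -m venv .venv, then reinstalling requirements, would avoid the name clash for good.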

Importing Python packages on an Ubuntu server

I have an Ubuntu server with restricted access, where I will be hosting my application.
I am trying to run Python scripts, which were working with the default packages provided by the server. Now I want to work with numpy and other modules.
As I cannot install or download anything on the server, I created a Python environment on my local machine (Windows) using WSL to emulate the Linux file system, copied the Python environment files into the application directory, and deployed to the cloud.
The problem is that no matter what I try, I cannot import numpy (or any other module I copied). I even moved all the site-packages to the location of my Python script (since the current script's path is on the system path) and tried to import, but no luck.
Please help me crack this in any possible way.
I have been trying to achieve this for the past six days and cannot do it. I have to achieve this at any cost.
Thank you in advance.
EDIT:
Ok, let me get this straight. I have a Linux server (Ubuntu 18.04) where I am hosting an application. From that application, I am calling Python scripts for some machine learning purposes. It is a restricted server and I cannot access it directly. The only way I could even find out the Linux distro version was through Java code, by calling some terminal commands using ProcessBuilder. As the server is highly restricted, I cannot run any of the usual Linux commands like echo, set, export, sudo, wget/curl, etc. Since python3 is provided by Linux by default, I am using that python3 command (from Java code, via ProcessBuilder) to call my Python scripts and execute them.
If it is a normal script (using only the Python standard library), it works fine. In one of the scripts I am using numpy, so I want to import that module. I am doing the development in a Windows environment, so to emulate the Linux file system for importing packages I created a virtual environment in WSL with the same Ubuntu version, installed numpy, and then replaced all the symlinks inside those packages with the actual files. Then I copied the entire environment into my resources directory (which is in the Windows environment) and deployed. No luck.
So instead I made a zip file of only the site-packages folder inside that environment, copied it into my resources folder, and deployed. No luck. The error I always see mentions numpy.core._multiarray_umath. All the articles, and GitHub too, tell me to re-install the package. But I cannot install; I don't have that kind of access.
How can I import numpy without installing it? If there is any workaround to achieve this, please explain and I will do it. Even if it is hard, complex, and time-consuming, I am okay with it. I want to achieve this.
Let me preface this with:
a warning to please check the AUP (acceptable use policy) of the server you are using, and/or contact the server administrator to make sure you are not violating any rules.
I can think of quite a few reasons why this won't work. If it doesn't, then there may still be workarounds, but they'll be technically complex.
So if I'm understanding you correctly:
You have very limited access to the server; basically only the ability to upload (apparently) and run Java code.
You've also been able to upload Python code and run it from your Java code through ProcessBuilder.
You do not have access to log in to a shell, execute arbitrary commands other than through ProcessBuilder, etc.
Of course, you do not have the ability to install site-packages into the system Python environment.
So ultimately, what you'll probably need to do is something like:
Create a Python3 virtual environment (which doesn't seem to be what you are actually doing) on WSL. By a "Python3 virtual environment", I mean venv, which allows you to create a user-level (not system-level) directory with your packages.
So something like (from inside your project directory):
python3 -m venv venv
source ./venv/bin/activate
Your path will be adjusted so that your python3 and pip3 commands will be found in the venv path. pip3 install numpy will install it into this virtual environment (not the global/system Python).
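A quick check that the install really landed inside the venv and not in the system site-packages (illustrative; assumes the venv is still active):
pip3 install numpy
python3 -c "import numpy; print(numpy.__file__)"
The printed path should sit under ./venv/lib/python3.x/site-packages/.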
Upload that entire venv directory to the server. You seem to have some way of doing this already.
You're going to have to have some way of running the Bash shell through ProcessBuilder. Since you have the ability to run python3 through ProcessBuilder, I'm kind of assuming that you will be able to do this as well.
You'll need to (through ProcessBuilder) activate the virtual environment on the server (source <path_to_project>/venv/bin/activate) and, in the same Bash shell, run your code.
This will look something like:
bash -c "source ./venv/bin/activate; python3 main.py"

Run python program from pip venv without system python

The answer to this question may well be "You're as dumb as a wooden bowl", but I have searched a lot and haven't found a solution that doesn't involve installing Python on the other computers.
I have a Python/Flask web app that I need to distribute to many users. However, I can't install Python on all those computers, there is no computer that everyone can access, and I can't serve the app internally from a server either. Yes, that's what I'm dealing with.
I have saved the git repo on a network drive that everyone can access. I hoped I could run a batch file to spin up the localhost server from a copied environment for the user and then use the web app.
I copied a conda environment over to the network drive and tried to use that, but it gave me an "Importing the numpy c-extensions failed" error.
I then tried including a pip environment (.\env) in the folder, so that any user could just activate the environment using the batch file ...
cd %cd%
.\env\Scripts\activate.bat
.\env\Scripts\python.exe run.py
but it's not working.
The .\env\Scripts\activate.bat just crashes. I amended activate.bat's set "VIRTUAL_ENV=%cd%\env" line to ensure it uses the current folder. It still crashes.
If I skip that step, .\env\Scripts\python.exe run.py still looks for a Python installation at the path from my machine rather than the path I provided above.
Is there a solution to this?
All the computers will be using Windows but may vary between Windows 7 and Windows 10. I'm doing the development from my Windows 10 computer.
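For reference, the hard-coded lookup comes from the pyvenv.cfg file at the root of a copied environment: env\Scripts\python.exe reads it to find the base interpreter. Illustrative contents (the home path and version below are assumptions, not taken from the question):
home = C:\Users\me\AppData\Local\Programs\Python\Python37
include-system-site-packages = false
version = 3.7.4
Because home names a directory that only exists on the machine the env was created on, the copied environment cannot start Python on a computer without that exact installation.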
After activating the venv, the code below worked:
(Monday) C:\Users\Resurctova\Desktop\PoraPuski\Monday>python new.py
Output:
testing
since new.py contains code that prints testing.
As Monday is my venv, I activated it and executed the script.
Do not execute it from inside the Scripts folder of your venv environment.
Have you thought about creating an executable instead? Using a tool like PyInstaller, you would only need to share the output exe file, without installing Python on the other computers.
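A minimal sketch of that approach (assuming the entry point is run.py, as in the question):
pip install pyinstaller
pyinstaller --onefile run.py
The result is dist\run.exe with the interpreter and dependencies bundled in; copy it to the network drive and the users need no Python installation at all. Note that numpy and Flask apps sometimes need extra PyInstaller hooks or hidden-import flags, so test the exe on a machine without Python.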

Python can't find my Flask script under virtualenv

I'm trying to build a simple API server using Flask that will validate and store international phone numbers using this phonenumbers.py. I previously installed Flask and several Flask extensions in a virtualenv and started building my app. So far so good. But after downloading and installing phonenumbers.py using its own installer, I found that Python running inside the virtualenv could no longer find my app script! Looking at the directory, it's sitting right there, but Python does not recognize it any more. The only other wrinkle I can think of is that after doing the install, I used an external text editor (outside the venv) to edit my app script and re-save it.
What have I done wrong?
I compared my environment variables inside and outside virtualenv. They are the same except for the following 3 additions:
VIRTUAL_ENV=/Users/tokrum/callcenter/venv
PATH=/Users/tokrum/callcenter/venv/bin # (was added to the beginning of my $PATH, but the rest of the pre-existing PATH is as before…)
PS1=(venv)\h:\W \u\$
My app is called callcenter-v0-1.py. It lives in a directory called /callcenter/, along with the phonenumbers-7.0.1 and venv folders, at the same level of my directory structure.
Thanks for any light you can shed on this.
Install Flask-Script in your virtual env using
$ pip install Flask-Script
Make sure that you activated the virtualenv before you installed Flask and your other dependencies -
$ virtualenv env
$ source env/bin/activate
$ pip install flask
Then when you're done working, make sure to deactivate the environment -
$ deactivate
Finally, when you want to start working again, navigate to your project directory and reactivate the environment -
$ source env/bin/activate
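To confirm everything resolves from the environment rather than the system (a quick sanity check):
$ which python
$ python -c "import flask; print(flask.__version__)"
which python should print a path ending in env/bin/python.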
At this point, I would just remove the virtualenv and start over.
Hope that helps!

Directory change not occurring with setvirtualenvproject

I'm embracing VirtualEnvWrapper and I like what I see a lot. However, as I try to get going, I'm not seeing the behaviour I expect when trying to set up project directory associations with virtual envs.
I've installed virtualenv and -wrapper. I can create envs, and "workon" lists them fine. I can deactivate and rm them happily, so all appears functional. I read the docs regarding project management. (There's also a good video tutorial, with the desired project association behaviour explained at 10:39.)
When I try to associate a working directory with an env, it accepts my commands fine, but when I "workon" the project, it does not put me into my designated working directory.
e.g. I have a working area ~/Ross_code (and I've set this in my .bashrc as $PROJECT_HOME). In there is an existing project folder ~/Ross_code/superproj
So now I create an env with
mkvirtualenv superp
Then I go to my existing project dir and associate it with the env:
cd ~/Ross_code/superproj
setvirtualenvproject
Setting project for superp to /Users/ross/Ross_code/superproj
Then I exited the virtual env with "deactivate" and reactivated with
workon superp
But the present working dir remains my ~/ folder.
I checked the .project file which seems to have been set properly by the call to setvirtualenvproject:
cdvirtualenv
more .project
/Users/ross/Ross_Code/superproj
but calling "workon" never sticks me into the expected spot. I thought maybe the env and the project directory needed to be of the same name, but that didn't make any difference either.
Any idea why that very attractive project association capability doesn't work for me?
-Ross.
LATER - More info:
I tried to also use the mkproject command, which should create a directory for my code in the $PROJECT_HOME area, create the virtualenv at the same time, and associate the two with each other.
Calling
mkproject junkproj
does in fact create the project directory nicely, sticks me into the virtualenv, and cd's into the junkproj directory. But when I deactivate and then "workon junkproj" again, I'm still left in my ~/ directory rather than going to the project directory in $PROJECT_HOME
:(
The problem here is that newer versions of virtualenvwrapper (this hit me when upgrading from Ubuntu 14.04 to 16.04) use a slightly different protocol for the setvirtualenvproject parameters:
setvirtualenvproject [virtualenv_path project_path]
In order to make the association you want, be in the project folder with the virtualenv active and use:
setvirtualenvproject $VIRTUAL_ENV .
The dot is for the present directory - or you can use the path of the directory you want workon to take you to. Once you do this workon will switch you to the folder you want and cdproject will work as expected.
If you used the old protocol, you'll have a .project file in your project folder - you can move this to the $VIRTUAL_ENV folder rather than invoking the command with the new protocol to make the association. The file just contains the project directory you want to associate with virtualenvwrapper shortcut commands like workon and cdproject.
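Since the .project file is plain text holding a single path, you can also (re)create it in place; a one-liner sketch using the paths from the question, with the env active:
echo "$HOME/Ross_code/superproj" > "$VIRTUAL_ENV/.project"
After that, workon superp should land you in the project folder and cdproject will work.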
workon doesn't automatically change directory to the project or environment directory.
You can do this with the postactivate script - there's a really quick how-to in the second half of the virtualenvwrapper tips and tricks section.
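For example, a per-environment hook that changes into the question's project directory on every workon would look something like this (the hook file is $VIRTUAL_ENV/bin/postactivate, which virtualenvwrapper sources after activation):
echo 'cd "$PROJECT_HOME/superproj"' >> "$VIRTUAL_ENV/bin/postactivate"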
