Currently I need to set up a production Django app on my own computer, and I would like to know the best way to do it. The production server uses virtualenv, and I ran the following commands to get some information about the environment. Thanks
$ uname -a
Linux domU-12-31-39-0C-75-E2 2.6.34.7-56.40.amzn1.x86_64 #1 SMP Fri Oct 22 18:48:49 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux
$ pwd
/home/ec2-user/virtenvs/django-1.2.5/
$ ./pip freeze
Cheetah==2.4.1
Django==1.2.5
M2Crypto==0.20.2
Markdown==2.0.1
MySQL-python==1.2.3
PIL==1.1.7
PyYAML==3.05
Pygments==1.1.1
South==0.7.3
boto==2.0b4
cloud-init==0.5.15
configobj==4.6.0
distribute==0.6.10
django-classy-tags==0.3.3
django-cms==2.1.3
django-haystack==1.1.0
django-tinymce==1.5.1a1
iniparse==0.3.1
policycoreutils-default-encoding==0.1
pycurl==7.19.0
pygeoip==0.1.5
pygpgme==0.1
pysolr==2.0.13
pysqlite==2.6.0
python-Levenshtein==0.10.2
pytz==2011c
pywurfl==7.2.1
setools==1.0
urlgrabber==3.9.1
virtualenv==1.5.1
yum-metadata-parser==1.1.2
You should be able to do this on the current server:
pip freeze -l > requirements.txt
Then this on other machines:
pip install -r requirements.txt
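Concretely, the round trip looks like this (run the first command on the server, the second inside a fresh virtualenv on the new machine):

```shell
# -l (--local) keeps globally-installed packages out of the listing,
# so only the virtualenv's own pins land in the file
pip freeze -l > requirements.txt

# on the other machine, inside a fresh virtualenv of the same Python version:
pip install -r requirements.txt
```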
The pip documentation online describes this functionality in more detail.
If you want the code to be open source and developed with others or on other computers, you could use Git or Mercurial:
http://git-scm.com/ is the homepage for Git;
https://github.com/ hosts Git repositories, allowing for "social" coding as they describe it;
http://gitready.com/ teaches you how to use Git.
Related
I've been struggling with this for a week. I'm trying to run a Python Flask app that connects to a remote Oracle database using instant client version 11.2.0.3.0.
After a lot of problems, I ended up using 3 buildpacks, two of which I had to customize; with that I could install cx_Oracle on Heroku, but when I ran the code I got the error:
import cx_Oracle
ImportError: libaio.so.1: cannot open shared object file: No such file or directory
Well, this error is very well documented, so I just needed to do:
$ apt-get install libaio1 libaio-dev
But the problem is: how do you run apt-get in a Heroku app? Using the third buildpack:
github.com/heroku/heroku-buildpack-apt
The other buildpacks:
github.com/Maethorin/oracle-heroku-buildpack
github.com/Maethorin/heroku-buildpack-python
After everything was configured, I ran a Heroku deploy and got the same error on execution. I could see in the Heroku deploy log that heroku-buildpack-apt did its job, but I still got the same error on import cx_Oracle. Btw, just to be sure, I changed the forked Python buildpack I'm using to do pip uninstall cx_Oracle at each deploy so I can have a freshly compiled version of it.
At this point, the Great Internet was not able to help me anymore. Everywhere I looked, I got the option to install libaio. I tried to search for ways of using apt-get in a Heroku app, but everything points to heroku-buildpack-apt.
I think the problem could be that cx_Oracle cannot find the installed libaio, so I set a lot of Heroku app environment variables:
$ heroku config:set ORACLE_HOME=/app/vendor/oracle_instantclient/instantclient_11_2
$ heroku config:set LD_LIBRARY_PATH=/app/.apt/usr/lib/x86_64-linux-gnu:/app/vendor/oracle_instantclient/instantclient_11_2:/app/vendor/oracle_instantclient/instantclient_11_2/sdk:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib:/lib
$ heroku config:set LIBRARY_PATH=/app/.apt/usr/lib/x86_64-linux-gnu:/app/vendor/oracle_instantclient/instantclient_11_2:/app/vendor/oracle_instantclient/instantclient_11_2/sdk:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib:/lib
$ heroku config:set INCLUDE_PATH=/app/.apt/usr/include
$ heroku config:set PATH=/bin:/sbin:/usr/bin:/app/.apt/usr/bin
$ heroku config:set PKG_CONFIG_PATH=/app/.apt/usr/lib/x86_64-linux-gnu/pkgconfig
$ heroku config:set CPPPATH=/app/.apt/usr/include
$ heroku config:set CPATH=/app/.apt/usr/include
EDIT: I forgot to mention this:
When I run heroku run ls -la /app/.apt/usr/lib/x86_64-linux-gnu, where libaio should be installed, I get this:
drwx------ 3 u32473 dyno 4096 Dec 21 2013 .
drwx------ 3 u32473 dyno 4096 Dec 21 2013 ..
-rw------- 1 u32473 dyno 16160 May 9 2013 libaio.a
lrwxrwxrwx 1 u32473 dyno 37 May 9 2013 libaio.so -> /lib/x86_64-linux-gnu/libaio.so.1.0.1
drwx------ 2 u32473 dyno 4096 May 17 16:57 pkgconfig
But when I run heroku run ls -l /lib/x86_64-linux-gnu/libaio.so.1.0.1 there is no file there. So the real question is: where is libaio installed?
Can anyone help me make this work? Or is there another good substitute for cx_Oracle?
Thanks!
I solved this... the problem really was the location of libaio.so.
I started to look in all the possible places where this lib could be installed. I found it in /app/.apt/lib/x86_64-linux-gnu, not in /app/.apt/usr/lib/x86_64-linux-gnu, where heroku-buildpack-apt thinks it was installed, nor in any of the system lib folders.
So I added this path in LD_LIBRARY_PATH and everything works fine!
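The hunt can be scripted; below is a small sketch (the candidate roots are from this app's layout and are assumptions for any other app):

```python
import os

def find_lib(name, search_roots):
    """Return every directory under search_roots that holds a file
    whose name starts with `name` (e.g. libaio.so, libaio.so.1)."""
    hits = []
    for root in search_roots:
        for dirpath, _dirs, files in os.walk(root):
            if any(f.startswith(name) for f in files):
                hits.append(dirpath)
    return hits

# candidate locations seen in this question; run e.g. via `heroku run python ...`
print(find_lib("libaio.so", ["/app/.apt", "/lib", "/usr/lib"]))
```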
Ty All!!!
I was also stuck with the same problems and fixed it after some effort. I am sharing here the steps for hosting a Python Flask app that connects to an external Oracle database:
cd {ProjectDir}
pip install cx_Oracle
pip install gunicorn
Make a file named Procfile and put the following in it: web: gunicorn yourapp:app --log-file=- (yourapp is your Flask Python file)
pip freeze > requirements.txt
git init
heroku create
heroku buildpacks:add heroku/python
heroku buildpacks:add https://github.com/featurist/oracle-client-buildpack
heroku buildpacks:add https://github.com/heroku/heroku-buildpack-apt
heroku config:set BUILD_WITH_GEO_LIBRARIES=1 (this is for the shapely Python package)
(Optional)
Create a file named Aptfile and put libaio1 in it
git push heroku master
set DYLD_LIBRARY_PATH=$ORACLE_HOME and LD_LIBRARY_PATH=$ORACLE_HOME and try again
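For reference, the Procfile and Aptfile mentioned in the steps above are each a single line (yourapp is a placeholder for your own Flask module):

```
# Procfile
web: gunicorn yourapp:app --log-file=-

# Aptfile
libaio1
```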
I am using Ansible to connect to a server, but I am getting errors for certain pip packages because of an older version of Python. How can I install a specific version of Python (2.7.10) using Ansible?
The current Python version on the server is 2.7.6.
For now I have compiled and installed the Python version manually, but I would prefer a way to do it via Ansible.
In addition to @Simon Fraser's answer, the following playbook is what I use in Ansible to prepare a server with some specific Python 3 version:
# python_version is a given variable, eg. `3.5`
- name: Check if python is already latest
  command: python3 --version
  register: python_version_result
  failed_when: "{{ python_version_result.stdout | replace('Python ', '') | version_compare(python_version, '>=') }}"

- name: Install prerequisites
  apt: name=python-software-properties state=present
  become: true

- name: Add deadsnakes repo
  apt_repository: repo="ppa:deadsnakes/ppa"
  become: true

- name: Install python
  apt: name="python{{ python_version }}-dev" state=present
  become: true
I have the above in a role too, called ansible-python-latest (github link) in case you are interested.
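For clarity, the failed_when expression in the playbook above is just a numeric version comparison; as a plain-Python sketch (not part of the playbook), it amounts to:

```python
# Compare dotted version strings numerically, the way the
# `version_compare` filter does for the check task above.
def version_at_least(installed: str, wanted: str) -> bool:
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(installed) >= parse(wanted)

print(version_at_least("3.8.10", "3.5"))   # → True: 3.8 already satisfies 3.5
print(version_at_least("2.7.6", "2.7.10")) # → False: an upgrade is needed
```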
The first thing to consider is that you probably don't want to replace or upgrade the system version of Python. This is because it's used by the system itself for things like package management, and so replacing it may cause other important things to break.
Installing an extra copy of Python that someone else made
To get an extra version of Python installed, the easiest option is to use a ppa, such as https://launchpad.net/~fkrull/+archive/ubuntu/deadsnakes-python2.7 so that someone else has done the work of turning Python into a package for you.
A PPA can be added with Ansible's apt repository module using a directive like the one below, which will then allow you to install packages from it the normal ansible way:
apt_repository: repo='ppa:fkrull/deadsnakes-python2.7'
Building a package yourself
If there is no PPA that has the version of Python you require, then you may need to build a .deb package yourself. The simplest way of doing this is with a tool like checkinstall. There's also fpm, which can take lots of different sources and make deb, rpm and so on from them. It can also take a Python module only available with pip install and turn it into a system package for you, which is very useful.
Once you have a deb package you can install it with Ansible's apt module
apt: deb=/tmp/mypackage.deb
I want to point out that there are potentially 2 or 3 different Pythons involved in an Ansible build, and that it's useful not to mix them up.
1️⃣ The system/Ansible Python. On my VirtualBox guest Ubuntu 20.04, that's 3.8.x right now.
2️⃣ Your application/production Python, and that includes what pip/venv are tasked with doing, via Ansible. In my case, my own application code uses 3.10.
3️⃣ The development, not system, Python you use on your host machine, which is again a separate concern. (macos, Python 3.10, in my case).
You want to keep your application's development 3️⃣ and production 2️⃣ Pythons at an equivalent level.
Ansible does not need to use 3.10 however, so I will leave Ansible and system Python alone.
For what it's worth, my host Macbook is running Python 3.10 too, but that does not interfere with the guest’s being on 3.8
The following details some of what I did. I don't claim it is best practices, but it does show how I chose to separate those concerns:
ansible.cfg
# don't start out with 3.10, because it may not exist yet
# ansible_python_interpreter=/usr/bin/python3.10
ansible_python_interpreter=/usr/bin/python3 #1️⃣
I did not adjust ansible_python_interpreter throughout my playbook, i.e. Ansible was left with 20.04's delivered Python.
vars.yml
Track the application Python version in different variables.
# application, not system/Ansible, python
py_appver: "3.10" #2️⃣
py_app_bin: "/usr/bin/python{{py_appver}}"
I very rarely used py_appver in the playbook.
Starting Python state on VM:
(ssha)vagrant bin$pwd
/usr/bin
(ssha)vagrant bin$ls -l python3*
lrwxrwxrwx 1 root root 9 Mar 13 2020 python3 -> python3.8
(ssha)vagrant bin$/usr/bin/python3 --version #1️⃣
Python 3.8.10
playbook.yml:
adding a repository for apt to get 3.10 from:
- name: add Python dead snakes repo for 3.10
  ansible.builtin.apt_repository:
    repo: 'ppa:deadsnakes/ppa'
Installing Python3.10 and some other packages
##############################################
# These utilities may be used by any of the tasks
# so might as well put them in early
##############################################
- name: install system-level components
  package: "name={{ item }} state=present"
  with_items:
    - monit
    - runit
    ....
    # needed by ANXS.postgresql
    - python3-psycopg2
    # not sure I needed but..
    - python3-pip
    # Application Python
    - python{{py_appver}} #2️⃣
    - python{{py_appver}}-venv #2️⃣
what I did NOT do: symlink python3.10 -> python3
# DONT DO THIS
# - name: symlink python executables
#   file:
#     src: "/usr/bin/{{item.from_}}{{pyver}}"
#     dest: "/usr/bin/{{item.to_}}"
#     state: link
#     force: true
#   with_items:
#     - {from_: "python", to_: "python3"}
#     - {from_: "pyvenv-", to_: "pyvenv"}
#   when: false
How I used 3.10 for my virtual environment:
Again, this may not necessarily be best practice, but it worked.
The result is to have /srv/venv 4️⃣ virtualenv using Python 3.10
- name: create virtualenv manually
  command: "{{py_app_bin}} -m venv ./venv" #2️⃣
  args:
    chdir: "/srv"
  become: yes
  become_user: builder
  when: not venv_exists.stat.exists
And now, I ask pip to self-update and install stuff:
- name: pip self-update to 20.x
  pip:
    name: pip
    state: latest
    virtualenv: "/srv/venv" # 4️⃣

- name: pip requirements 1st pass
  pip:
    requirements: "{{ dir_app }}/requirements.txt"
    virtualenv: "/srv/venv" # 4️⃣
    virtualenv_python: "python{{py_appver}}" #2️⃣
And that was it. My pip/venv stuff gets its 3.10 to play with and everything else, including ansible, uses 3.8.
What's in /usr/bin on the VM at the end:
(ssha)vagrant bin$pwd
/usr/bin
(ssha)vagrant bin$ls -l python3*
lrwxrwxrwx 1 root root 9 Mar 13 2020 python3 -> python3.8
-rwxr-xr-x 1 root root 5454104 Dec 21 09:46 python3.10
-rwxr-xr-x 1 root root 5490488 Nov 26 12:14 python3.8
lrwxrwxrwx 1 root root 33 Nov 26 12:14 python3.8-config -> x86_64-linux-gnu-python3.8-config
lrwxrwxrwx 1 root root 16 Mar 13 2020 python3-config -> python3.8-config
(ssha)vagrant bin$python3 --version # 1️⃣
Python 3.8.10
activating the application Python:
(ssha)vagrant bin$source /srv/venv/bin/activate # 4️⃣
(venv) (ssha)vagrant bin$python --version # 2️⃣
Python 3.10.1
Environment
host Macos BigSur, Python 3.10
vagrant 2.2.19
virtualbox 6.1.30,148432
$ansible --version
ansible [core 2.12.1]
config file = /Users/myuser/.ansible.cfg
configured module search path = ['/Users/myuser/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/myuser/kds2/venvs/bme/lib/python3.10/site-packages/ansible
ansible collection location = /Users/myuser/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/myuser/kds2/venvs/bme/bin/ansible
python version = 3.10.1 (main, Dec 10 2021, 12:10:01) [Clang 12.0.5 (clang-1205.0.22.11)]
guest Ubuntu 20.04 with Python 3.8
Not sure how relevant guest-side ansible is but I'll add it anyway:
$apt list | egrep -i ^ansible
ansible-doc/focal 2.9.6+dfsg-1 all
ansible-lint/focal 4.2.0-1 all
ansible-tower-cli-doc/focal 3.3.0-1.1 all
ansible-tower-cli/focal 3.3.0-1.1 all
ansible/focal 2.9.6+dfsg-1 all
I'm trying to deploy django with uwsgi, and I think I lack understanding of how it all works. I have uwsgi running in emperor mode, and I'm trying to get the vassals to run in their own virtualenvs, with a different python version.
The emperor configuration:
[uwsgi]
socket = /run/uwsgi/uwsgi.socket
pidfile = /run/uwsgi/uwsgi.pid
emperor = /etc/uwsgi.d
emperor-tyrant = true
master = true
autoload = true
log-date = true
logto = /var/log/uwsgi/uwsgi-emperor.log
And the vassal:
uid=django
gid=django
virtualenv=/home/django/sites/mysite/venv/bin
chdir=/home/django/sites/mysite/site
module=mysite.uwsgi:application
socket=/tmp/uwsgi_mysite.sock
master=True
I'm seeing the following error in the emperor log:
Traceback (most recent call last):
File "./mysite/uwsgi.py", line 11, in <module>
import site
ImportError: No module named site
The virtualenv for my site is created as a python 3.4 pyvenv. The uwsgi is the system uwsgi (python2.6). I was under the impression that the emperor could be any python version, as the vassal would be launched with its own python and environment, launched by the master process. I now think this is wrong.
What I'd like to be doing is running the uwsgi master process with the system python, but the various vassals (applications) with their own python and their own libraries. Is this possible? Or am I going to have to run multiple emperors if I want to run multiple pythons? Kinda defeats the purpose of having virtual environments.
The "elegant" way is building the uWSGI python support as a plugin, and having a plugin for each python version:
(from uWSGI sources)
make PROFILE=nolang
(will build a uWSGI binary without language support)
PYTHON=python2.7 ./uwsgi --build-plugin "plugins/python python27"
will build the python27_plugin.so that you can load in vassals
PYTHON=python3 ./uwsgi --build-plugin "plugins/python python3"
will build the plugin for python3 and so on.
There are various ways to build uWSGI plugins; the one I am reporting is the safest (it ensures the #ifdefs are honoured).
Having said that, having a uWSGI Emperor for each Python version is viable too. Remember, Emperors are stackable, so you can have a generic emperor spawning one emperor (as its vassal) for each Python version.
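A sketch of that stacked layout (paths and names here are illustrative, not from the answer):

```ini
; /etc/uwsgi/emperor.ini -- generic top-level Emperor
[uwsgi]
emperor = /etc/uwsgi/emperors

; /etc/uwsgi/emperors/py27.ini -- a vassal that is itself an Emperor,
; watching only the python2.7 apps
[uwsgi]
emperor = /etc/uwsgi.d/py27

; /etc/uwsgi.d/py27/mysite.ini -- an application vassal loading the
; python27 plugin built above
[uwsgi]
plugins = python27
module = mysite.wsgi:application
```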
Pip install uWSGI
One option would be to simply install uWSGI with pip in your virtualenvs and start your services separately:
pip install uwsgi
~/.virtualenvs/venv-name/lib/pythonX.X/site-packages/uwsgi --ini path/to/ini-file
Install uWSGI from source and build python plugins
If you want a system-wide uWSGI build, you can build it from source and install plugins for multiple python versions. You'll need root privileges for this.
First you may want to install multiple system-wide python versions.
Make sure you have any dependencies installed. For pcre, on a Debian-based distribution use:
apt install libpcre3 libpcre3-dev
Download and build the latest uWSGI source into /usr/local/src, replacing X.X.X.X below with the package version (e.g. 2.0.19.1):
wget http://projects.unbit.it/downloads/uwsgi-latest.tar.gz
tar vzxf uwsgi-latest.tar.gz
cd uwsgi-X.X.X.X/
make PROFILE=nolang
Symlink the versioned folder uwsgi-X.X.X.X to give it the generic name, uwsgi:
ln -s /usr/local/src/uwsgi-X.X.X.X /usr/local/src/uwsgi
Create a symlink to the build so it's on your PATH:
ln -s /usr/local/src/uwsgi/uwsgi /usr/local/bin
Build python plugins for the versions you need:
PYTHON=pythonX.X ./uwsgi --build-plugin "plugins/python pythonXX"
For example, for python3.8:
PYTHON=python3.8 ./uwsgi --build-plugin "plugins/python python38"
Create a plugin directory in an appropriate location:
mkdir -p /usr/local/lib/uwsgi/plugins/
Symlink the created plugins to this directory. For example, for python3.8:
ln -s /usr/local/src/uwsgi/python38_plugin.so /usr/local/lib/uwsgi/plugins
Then in your uWSGI configuration (project.ini) files, specify the plugin directory and the plugin:
plugin-dir = /usr/local/lib/uwsgi/plugins
plugin = python38
Make sure to create your virtualenvs with the same python version that you created the plugin with. For example if you created python38_plugin.so with python3.8 and you have plugin = python38 in your project.ini file, then an easy way to create a virtualenv with python3.8 is with:
python3.8 -m virtualenv path/to/project/virtualenv
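Putting the pieces together, a vassal's project.ini might then look like this (paths and module names are placeholders):

```ini
[uwsgi]
plugin-dir = /usr/local/lib/uwsgi/plugins
plugin = python38
; the virtualenv must have been created with the same interpreter (python3.8)
virtualenv = /path/to/project/virtualenv
module = myproject.wsgi:application
socket = /tmp/uwsgi_myproject.sock
master = true
```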
I have been searching and have tried various alternatives without success; I've spent several days on it now and it's driving me mad.
Running on Red Hat Linux with Python 2.5.2
I began using the most recent virtualenv but could not activate it. I found a suggestion somewhere that an earlier version was needed, so I have used virtualenv 1.6.4, as that should work with Python 2.6.
It seems to install the virtual environment ok
[necrailk@server6 ~]$ python virtualenv-1.6.4/virtualenv.py virtual
New python executable in virtual/bin/python
Installing setuptools............done.
Installing pip...............done.
Environment looks ok
[necrailk@server6 ~]$ cd virtual
[necrailk@server6 ~/virtual]$ dir
bin include lib
Trying to activate
[necrailk@server6 ~/virtual]$ . bin/activate
/bin/.: Permission denied.
Checked chmod
[necrailk@server6 ~/virtual]$ cd bin
[necrailk@server6 bin]$ ls -l
total 3160
-rw-r--r-- 1 necrailk biz12 2130 Jan 30 11:38 activate
-rw-r--r-- 1 necrailk biz12 1050 Jan 30 11:38 activate.csh
-rw-r--r-- 1 necrailk biz12 2869 Jan 30 11:38 activate.fish
-rw-r--r-
Problem, so I changed it
[necrailk@server6 bin]$ ls -l
total 3160
-rwxr--r-- 1 necrailk biz12 2130 Jan 30 11:38 activate
-rw-r--r-- 1 necrailk biz12 1050 Jan 30 11:38 activate.csh
-rw-r--r-- 1 necrailk biz12 2869 Jan 30 11:38 activate.fish
-rw-r--r-- 1 necrailk biz12 1005 Jan 30 11:38 activate_this.py
-rwxr-xr-x 1 necrailk biz
Try activate again
[necrailk@server6 ~/virtual]$ . bin/activate
/bin/.: Permission denied.
Still no joy...
Here is my workflow after creating a folder and cd'ing into it:
$ virtualenv venv --distribute
New python executable in venv/bin/python
Installing distribute.........done.
Installing pip................done.
$ source venv/bin/activate
(venv)$ python
You forgot to do source bin/activate, where source is a shell builtin that runs the script in the current shell.
It struck me the first few times as well; it's easy to think the manual is telling you to execute the file from the root of the environment folder.
There is no need to make activate executable via chmod.
You can do
source ./python_env/bin/activate
or just go to the directory
cd /python_env/bin/
and then
source ./activate
Good Luck.
Go to the project directory. In my case microblog is the Flask project directory, and under the microblog directory there should be app and venv folders. Then run the command below; this is the one that worked for me on Ubuntu.
source venv/bin/activate
cd to the environment path, then go to the bin folder.
At this point when you use ls command, you should see the "activate" file.
now type
source activate
$ mkdir <YOURPROJECT>
Create a new project
$ cd <YOURPROJECT>
Change directory to that project
$ virtualenv <NEWVIRTUALENV>
Creating new virtualenv
$ source <NEWVIRTUALENV>/bin/activate
Activating that new virtualenv
Run this and it will get activated if you are on a Windows machine:
source venv/Scripts/activate
Run this and it will get activated if you are on a Linux/Mac machine:
. venv/bin/activate
The problem there is the /bin/. command. That's really weird, since . should always be a link to the directory it's in. (Honestly, unless . is a strange alias or function, I don't even see how it's possible.) It's also a little unusual that your shell doesn't have a . builtin for source.
One quick fix would be to just run the virtualenv in a different shell. (An obvious second advantage being that instead of having to deactivate you can just exit.)
/bin/bash --rcfile bin/activate
If your shell supports it, you may also have the nonstandard source command, which should do the same thing as ., but may not exist. (All said, you should try to figure out why your environment is strange or it will cause you pain again in the future.)
By the way, you didn't need to chmod +x those files. Files only need to be executable if you want to execute them directly. In this case you're trying to launch them from ., so they don't need it.
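A tiny demonstration of why sourcing matters: a sourced file needs only read permission, and its effects land in the current shell (the filename here is a throwaway example, not your real activate script):

```shell
# create a non-executable file that sets a variable, like activate does
printf 'DEMO_VAR=from_sourced_file\n' > /tmp/demo_activate
chmod 644 /tmp/demo_activate

# sourcing it works despite the missing execute bit, and the
# variable is visible in the current shell afterwards
. /tmp/demo_activate
echo "$DEMO_VAR"
```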
instead of ./activate
use source activate
For Windows you can do it as follows:
To create the virtual env: virtualenv envName --python=python.exe (if no environment variable was created for Python)
To activate the virtual env: > \path\to\envName\Scripts\activate
To deactivate the virtual env: > \path\to\env\Scripts\deactivate
It works fine on newer Python versions.
Windows 10
On Windows these directories are created:
To activate the virtual environment in Windows 10:
down\scripts\activate
The \scripts directory contains the activate file.
Linux Ubuntu
On Ubuntu these directories are created:
To activate the virtual environment on Linux Ubuntu:
source ./bin/activate
The /bin directory contains the activate file.
Virtual environment copied from Windows to Linux Ubuntu or vice versa
If the virtual environment folder was copied from Windows to Linux Ubuntu, then according to the directories:
source ./down/Scripts/activate
I would recommend virtualenvwrapper as well. It works wonders for me, given how I always have problems with activating. http://virtualenvwrapper.readthedocs.org/en/latest/
Create your own Python virtual environment called <your-env-name>; I have called mine VE.
git clone https://github.com/pypa/virtualenv.git
python virtualenv.py VE
To activate your new virtual environment, run (notice it's not ./ here):
. VE/bin/activate
Sample output (note prompt changed):
(VE)c34299@a200dblr$
Once your virtual environment is set, you can remove the Virtualenv repo.
On Mac, change the shell to bash (note that this activation works only in a bash shell):
[user@host tools]$ . venv/bin/activate
.: Command not found.
[user@host tools]$ source venv/bin/activate
Badly placed ()'s.
[user@host tools]$ bash
bash-3.2$ source venv/bin/activate
(venv) bash-3.2$
Bingo, it worked. See, the prompt changed.
On Ubuntu:
user@local_host:~/tools$ source toolsenv/bin/activate
(toolsenv) user@local_host:~/tools$
Note: the prompt changed.
I had trouble getting source bin/activate to run; then I realized I was using tcsh as my terminal shell instead of bash. Once I switched, I was able to activate the venv.
Probably a little late to post my answer here, but I'll still post it; it might benefit someone.
I had faced the same problem.
The main reason was that I created the virtualenv as the root user
but later was trying to activate it as another user.
chmod won't work since you're not the owner of the file; the alternative is to use chown (to change the ownership).
For example:
If you have your virtualenv created at /home/abc/ENV,
then cd to /home/abc
and run the command: chown -Rv [user-to-whom-you-want-to-change-ownership] [folder/filename whose ownership needs to be changed]
In this example the command would be: chown -Rv abc ENV
After the ownership is successfully changed, you can simply run source ENV/bin/activate and you should be able to activate the virtualenv correctly.
1- Open PowerShell and navigate to your application folder
2- Enter your virtualenv folder, e.g.: cd .\venv\Scripts\
3- Activate the virtualenv by typing .\activate
I'm trying to create a python source package, but it fails when creating hard links for files.
$ python setup.py sdist
running sdist
running check
reading manifest template 'MANIFEST.in'
writing manifest file 'MANIFEST'
making hard links in foo-0.1...
hard linking README.txt -> foo-0.1
error: Operation not permitted
I've tried running the command with sudo, but it produces the same error.
This also produces the same error:
ln foo bar
I'm using VirtualBox to run a virtual instance of Ubuntu, which is probably where the problem comes from. Is there a way around using hard links when creating source distributions?
System information:
Ubuntu server 11.04;
VirtualBox 4.14;
osx 10.6.6;
python 2.7.1;
Same issue. I am using Vagrant; my host OS is Windows while the guest OS is Ubuntu. I am not a vim fan, so @simo's answer does not help me much, because I really rely on VirtualBox shared folders to sync changes made in the Sublime editor to the Ubuntu virtual machine.
Thanks to Fabian Kochem, who found a quick and dirty workaround: post
# if you are not using vagrant, just delete os.link directly;
# the hard link only saves a little disk space, so you should not care
if os.environ.get('USER', '') == 'vagrant':
    del os.link
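A slightly gentler variant of the same idea (my own sketch, not from the linked post): instead of deleting os.link outright, fall back to copying only when the filesystem actually refuses hard links:

```python
import errno
import os
import shutil

_real_link = os.link  # keep the original around

def _link_or_copy(src, dst):
    """Try a hard link first; copy when the filesystem refuses link(2),
    as VirtualBox/Vagrant shared folders do."""
    try:
        _real_link(src, dst)
    except OSError as exc:
        if exc.errno in (errno.EPERM, errno.EOPNOTSUPP):
            shutil.copyfile(src, dst)
        else:
            raise

os.link = _link_or_copy  # put this near the top of setup.py
```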
I ran into the same issues.
I was able to get it working by moving the Python sources from the VirtualBox shared folder to my Debian home folder. No more errors on sdist.
I hope it helps.
It is unclear from your question which step is failing; it might be the hard linking right before the error. You can try strace to see which system call is failing. That should give a better picture of the problem, at least.
This Python bug report suggests they're not going to fix this until distutils2. Someone did supply a patch that might be useful to you. You might also be able to mount a directory over NFS and build there; I believe NFS allows hard linking.
Looks like this was fixed in Python version 2.7.9 - https://hg.python.org/cpython/raw-file/v2.7.9/Misc/NEWS
Issue #8876: distutils now falls back to copying files when hard linking
doesn't work. This allows use with special filesystems such as VirtualBox
shared folders
This is the way I reached a working uwsgi (Ubuntu 14.04, default Python 2.7.6) with Python 2.7.10.
Steps
Before continuing, you must compile new Python with --enable-shared:
$ ./configure --enable-shared
$ sudo make altinstall
Context: Ubuntu 14.04 with Python 2.7.6 with uwsgi and uwsgi-python-plugin installed with apt-get
Problem: I have a virtualenv for my app with a compiled Python 2.7.10
# Previously installed Python-2.7.10 as altinstall
$ python2.7
Python 2.7.10 (default, Nov 25 2015, 11:21:38)
$ source ~/env/bin/activate
$ python
Python 2.7.10 (default, Nov 25 2015, 11:21:38)
Preparing stuff:
$ cd /tmp/
$ git clone https://github.com/unbit/uwsgi.git
$ cd uwsgi
$ make PROFILE=nolang
# On /tmp/uwsgi
$ PYTHON=python ./uwsgi --build-plugin "plugins/python python27"
On ini file:
[uwsgi]
plugins = python27
Results on:
** Starting uWSGI 1.9.17.1-debian (64bit) on [Thu Nov 26 12:56:42 2015] ***
compiled with version: 4.8.2 on 23 March 2014 17:15:32
os: Linux-3.19.0-33-generic #38~14.04.1-Ubuntu SMP Fri Nov 6 18:17:28 UTC 2015
nodename: maquinote
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 12
current working directory: /etc/uwsgi/apps-enabled
detected binary path: /usr/bin/uwsgi-core
your processes number limit is 257565
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: enabled
uwsgi socket 0 bound to UNIX address /var/run/uwsgi/app/pypi-server/socket fd 3
Python version: 2.7.10 (default, Nov 26 2015, 11:44:40) [GCC 4.8.4]
None of the above answers solved my problem. However, I was running the following command in a Vagrant shared folder on CentOS 6:
python setup.py bdist_rpm
And ended up with the error:
ln: creating hard link `xxx': Operation not permitted
error: Bad exit status from /var/tmp/rpm-tmp.S9pTDl (%install)
It turns out that a shell script eventually executes the hard links:
cat /usr/lib/rpm/redhat/brp-python-hardlink
#!/bin/sh
# If using normal root, avoid changing anything.
if [ -z "$RPM_BUILD_ROOT" -o "$RPM_BUILD_ROOT" = "/" ]; then
    exit 0
fi

# Hardlink identical *.pyc and *.pyo, originally from PLD's rpm-build-macros
# Modified to use sha1sum instead of cmp to avoid a diffutils dependency.
find "$RPM_BUILD_ROOT" -type f -name "*.pyc" | while read pyc ; do
    pyo="$(echo $pyc | sed -e 's/.pyc$/.pyo/')"
    if [ -f "$pyo" ] ; then
        csha="$(sha1sum -b $pyc | cut -d' ' -f 1)" && \
        osha="$(sha1sum -b $pyo | cut -d' ' -f 1)" && \
        if [ "$csha" = "$osha" ] ; then
            ln -f "$pyc" "$pyo"
        fi
    fi
done
Therefore you should be able to replace the hard link ln -f "$pyc" "$pyo" with a copy command cp "$pyc" "$pyo" in the above shell script.
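As a sketch, that substitution can be applied with sed. The demo below runs on a stand-in copy rather than the real /usr/lib/rpm/redhat/brp-python-hardlink; edit the real file at your own risk, since a package update may overwrite it:

```shell
# stand-in for the helper script, containing the offending line
printf 'ln -f "$pyc" "$pyo"\n' > /tmp/brp-python-hardlink-demo

# swap the hard link for a plain copy
sed -i 's|ln -f "$pyc" "$pyo"|cp "$pyc" "$pyo"|' /tmp/brp-python-hardlink-demo
cat /tmp/brp-python-hardlink-demo   # → cp "$pyc" "$pyo"
```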