Following is an excerpt from the requirements.txt file. I understand the value after == specifies the version. What does the value after = specify? Can I exclude it?
icu==67.1=he1b5a44_0
lz4-c==1.9.2=he6710b0_1
xz==5.2.5=h7b6447c_0
qt==4.8.7=2
This happens if the file is created by exporting a conda environment (and in this case it should normally be called something like environment.yml, i.e. a yml file).
If one creates this file with the basic command
conda env export > environment.yml
it exports the strictest definition of the packages, which includes the build string (that's what you see after the second =), and it is often OS-specific. That guarantees you can reproduce exactly the same environment as the original (but it will also not work on other OSs). This issue is also captured in this question. In the vast majority of cases you should be fine without the build strings, and you're free to remove them. If you have access to the environment, you can export it "properly", like
conda env export --no-build > environment.yml
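For illustration, here is roughly how the two exports differ for the xz package from the question (a sketch; the exact hash varies by platform and channel):
dependencies:
  - xz=5.2.5=h7b6447c_0   # with build string (plain conda env export)
versus
dependencies:
  - xz=5.2.5              # without build string (conda env export --no-build)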
It indicates that you don't actually have a pip requirements.txt file in the first place. This is a conda export, likely created with conda list --export, and it cannot be processed with pip.
The value after the = is just a build string; you may think of it as an identifier that allows installing this exact same build. A version number alone is not sufficient here, since there may be different builds of the same version.
Example showing the h7b6447c_0 build of the xz package that you referenced:
$ conda search xz=5.2.5 --info
Loading channels: done
xz 5.2.5 h7b6447c_0
-------------------
file name : xz-5.2.5-h7b6447c_0.tar.bz2
name : xz
version : 5.2.5
build : h7b6447c_0
build number: 0
size : 438 KB
license : LGPL-2.1 and GPL-2.0
subdir : linux-64
url : https://repo.anaconda.com/pkgs/main/linux-64/xz-5.2.5-h7b6447c_0.tar.bz2
md5 : e17620ef8fc8654e77f53b4f2995b288
timestamp : 2020-04-16 04:36:07 UTC
dependencies:
- libgcc-ng >=7.3.0
According to the requirements.txt documentation, there is nothing about them; they appear to be meaningless to pip.
https://pip.pypa.io/en/stable/reference/pip_install/#requirements-file-format
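If you want to turn such an export into something pip can actually consume, one option is to strip the trailing build string from each line. A minimal sketch handling the name==version=build lines shown in the question (file names are assumptions; note also that conda package names do not always match PyPI names):
# rewrite "icu==67.1=he1b5a44_0" as "icu==67.1" by dropping the build string
with open("requirements.txt") as src, open("requirements-pip.txt", "w") as dst:
    for line in src:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if line.count("=") > 2:  # name==version=build has three "=" characters
            line = line.rsplit("=", 1)[0]
        dst.write(line + "\n")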
I found this script, https://unidata.github.io/python-training/gallery/declarative_500_hpa/, which makes a plot using a grb2 file. I just copied and pasted the code, and it works well.
I am trying to do the same for another date, with a grb2 file downloaded from https://www.ncei.noaa.gov/products/weather-climate-models/global-forecast,
and I get this error just after replacing the file name with that of the local grb2 downloaded from NCEI:
ValueError: did not find a match in any of xarray's currently installed IO backends ['netcdf4', 'h5netcdf', 'scipy', 'pydap', 'zarr']. Consider explicitly selecting one of the installed backends via the engine parameter to xarray.open_dataset(), or installing additional IO dependencies
I also tried pip install xarray[complete] and pip install netcdf4. Nothing worked. What am I doing wrong?
Best regards,
Fede
In the original example you linked, while the source data are in GRIB2 format, the data are accessed from a THREDDS server using the OPeNDAP protocol; you can tell by looking at the URL and seeing https://www.ncei.noaa.gov/thredds/dodsC/. This protocol is readily supported by xarray. The important point is that in that case, the GRIB2 format was never being processed by xarray itself.
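For example, such an endpoint can be opened directly (via xarray's netcdf4 or pydap backend); the dataset path below is a hypothetical placeholder:
import xarray as xr
# OPeNDAP: the server does the subsetting, so no GRIB decoding happens on the client
ds = xr.open_dataset("https://www.ncei.noaa.gov/thredds/dodsC/path/to/dataset")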
To open GRIB2 data with xarray, you need to install cfgrib. You can do this with pip using:
pip install cfgrib
or from conda-forge using conda:
conda install -c conda-forge cfgrib
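Once cfgrib is installed, select the engine explicitly when opening the local file. A minimal sketch (the filename is an assumption):
import xarray as xr
ds = xr.open_dataset("gfs.grb2", engine="cfgrib")
print(ds)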
I have this repository: https://github.com/layog/Accurate-Binary-Convolution-Network. As requirements.txt says, it requires tensorflow==1.4.1. So I am using miniconda (on Ubuntu 18.04) and, for the love of God, I can't get it to run (it errors out at the line below):
from tensorflow.examples.tutorial.* import input_data
This gives me an ImportError saying it can't find tensorflow.examples. I have diagnosed the problem: a few modules are missing after I installed tensorflow (I have tried all of the ways below):
pip install tensorflow==1.4.1
conda install -c conda-forge tensorflow==1.4.1
# And various wheel packages available on the internet for 1.4.1
pip install tensorflow-1.4.0rc1-cp36-cp36m-manylinux1_x86_64.whl
The question is: if I want all the modules that are present in the git repo source in my installed copy, do I have to build tensorflow COMPLETELY from source? If yes, can you mention the flag I should use? Are there any wheel packages available that have all modules present in them?
A link would save me tonnes of effort!
NOTE: Even if I manually import the examples directory, it says tensorflow.contrib is missing, and if I locally import that too, another ImportError pops up. There has to be an easier way, I am sure of it.
Just for reference, for others stuck in the same situation:
Use the latest tensorflow build and bazel 0.27.1 for installing it. Even though the requirements state that we need an older version, use the newer one instead. It is not worth the hassle, and the newer build will get the job done.
Also, to answer the question above: building only specific directories is possible. Each module contains a BUILD file, which is fed to bazel.
See the name attributes in that file for the targets specific to that folder. For reference, the command I used to generate the wheel package for examples.tutorials.mnist:
bazel build --config=opt --config=cuda --incompatible_load_argument_is_label=false //tensorflow/examples/tutorials/mnist:all_files
Here all_files is the name found in the examples/tutorials/mnist/BUILD file.
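If you end up building the full pip package instead, the usual sequence from the TensorFlow source-build docs looks roughly like this (the /tmp output directory is an assumption):
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl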
I do not remember since when, but whenever I install any pip-based package, my system (Ubuntu 14.04, Python 2.7.6) throws a warning/error:
Url 'file:///home/username/.pip-wheelhouse' is ignored: it is neither a file nor a directory.
I don't know where this line came from; the package I'm installing installs fine, but this line is always shown. How can I remove it?
There are a few things that can cause this. First, take a look at your pip configuration file at ~/.pip/pip.conf and see if it contains a section like this:
[wheel]
wheel-dir = /home/username/.pip-wheelhouse
If so, comment that section out, use pip, and see if that gets rid of the message.
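For example, the commented-out section might look like this (pip config files use configparser syntax, where # starts a comment):
# [wheel]
# wheel-dir = /home/username/.pip-wheelhouse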
The same information the config file specifies can also come from environment variables. Try this:
env | grep -i wheel
This command will list all of your environment variables that contain the word wheel. If you see any output, look for a line that specifies the .pip-wheelhouse directory; PIP_FIND_LINKS is the top suspect. Once you've found the culprit, you just need to track down where those variables are getting set. The top candidate files for setting variables like that are ~/.bashrc, ~/.profile, and /etc/profile. Search those files for the word wheel and I suspect you'll be able to resolve this.
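A quick way to search all three at once (adjust the file list to your shell setup):
grep -n wheel ~/.bashrc ~/.profile /etc/profile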
I am using conda version 3.19.0 on Ubuntu 14.04 64-bit. When I try conda update conda I receive:
$ conda update conda
Fetching package metadata: .......Error: Could not find URL: http://repo.continuum.io/pkgs/gpl/linux-64/
The output of conda --debug update conda is not very informative in this case. But I did notice at http://repo.continuum.io/pkgs/ that the correct URL now seems to be:
http://repo.continuum.io/pkgs/free/linux-64/
Is there a way to change conda's configuration to look there instead of the gpl/linux-64 URL that appears to be deprecated?
I have never manually adjusted .condarc. Will removing the /gpl/... URL there solve it without requiring me to do additional manual URL management and without compromising any other conda defaults or settings?
As you suspect, this error is caused by an offending entry in your ~/.condarc, namely the following entry under channels:
http://repo.continuum.io/pkgs/gpl
Remove or comment out this entry so that you're left with the following:
channels:
- http://repo.continuum.io/pkgs/free
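Alternatively, if you'd rather not edit the file by hand, conda can remove the channel entry for you:
conda config --remove channels http://repo.continuum.io/pkgs/gpl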
So I managed to get aubio 0.4.0 installed so that it imports into Python without errors; however, I haven't figured out how to pass it files to be analyzed.
Here are the steps I followed to install aubio 0.4.0, taken from here:
Downloaded the most recent git source of aubio 0.4.0 - http://git.aubio.org/
Unpacked onto C:\
installed python 2.7.6
appended C:\python27 to the 'Path' environment variable
installed MinGW v-0.6.2 mingw.org/download/installer
inside MinGW Installation manager I included - [mingw32-base]
appended C:\MinGW\bin to the 'Path' environment variable
created file "C:\Python27\Lib\distutils\distutils.cfg" containing:
[build]
compiler = mingw32
--------------- INCLUDING LIBAV libraries ---------------------------
download pygtk-all-in-one-2.24.2.win32-py2.7.msi to get pkgconfig and all its dependencies: ftp.gnome.org/pub/GNOME/binaries/win32/pygtk/2.24/
download libav win32 build win32.libav.org/win32/ and unpack into C:\libav\
create a new environment variable named "PKG_CONFIG_PATH" with the value C:\libav\usr\lib\pkgconfig
append C:\libav\usr\bin\ to the 'Path' environment variable
-------------------- END LIBAV ---------------------------------------
Inside the aubio path run the command: python .\waf configure build -j 1 --check-c-compiler=gcc
I get a crash at 168/193 with test-delnull.exe, but the build keeps going and returns 'build' finished successfully
Install numpy v-1.8.0 sourceforge.net/projects/numpy/files/NumPy/
Inside the aubio\python path run the command: python setup.py build
Inside the aubio\python path run the command: python setup.py install
I had to copy the dll from aubio\build\src\libaubio-4.dll into python27\Lib\site-packages\aubio\
Then I added one of my own test.mp3 and test.wav files into aubio\python\tests\sounds\
Inside the aubio\python\tests path I ran the command: python run_all_tests -v
------------------- EDIT ---------------------------------
The above instructions should now work without the problem originally described.
------------------- END EDIT -----------------------------
The results show a lot of 'ok' for the many different tests being run; however, the first problem is with "test_many_sinks", where it tries to use the .wav file from sounds and gives:
AUBIO ERROR: failed creating aubio source with [wav file path]
It continues giving the same error for the rest of the tests until it crashes on "test_zero_hop_size" and stops.
Any further advice as to what I still need to do would be much appreciated.
Thanks!
With help from Paul Brossier, we found two issues:
Because I never included libav in my build, I can't use .mp3 files for testing.
Using a newer git repository allowed me to successfully run demo_bpm_extract.py, which was previously erroring even when I tested with a .wav file. The git source I used can be found here: http://git.aubio.org/?p=aubio.git;a=commit;h=4a1378c12ffe7fd518448f6a1ab00f99f0557286
There are still quite a few errors showing up when executing "run_all_tests", which I've passed along to Paul.
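As a sanity check that opening a file works at all, here is a minimal sketch using the aubio 0.4 Python API (the wav path is an assumption):
from aubio import source

hop_size = 256
s = source("tests/sounds/test.wav", 0, hop_size)  # samplerate 0 means: use the file's own rate
total_frames = 0
while True:
    samples, read = s()       # read one hop of samples
    total_frames += read
    if read < hop_size:       # last (partial) block reached
        break
print("read %d frames at %d Hz" % (total_frames, s.samplerate))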