MetPy and grb2 file opening issue: xarray ValueError - python

I found a plot in this script, done using a grb2 file: https://unidata.github.io/python-training/gallery/declarative_500_hpa/
I just copied and pasted the code, and it works well.
I am trying to do the same for another date, with a grb2 file downloaded from https://www.ncei.noaa.gov/products/weather-climate-models/global-forecast
and I get this error, just after replacing the name of the file with a local grb2 downloaded from NCEI:
ValueError: did not find a match in any of xarray's currently installed IO backends ['netcdf4', 'h5netcdf', 'scipy', 'pydap', 'zarr']. Consider explicitly selecting one of the installed backends via the engine parameter to xarray.open_dataset(), or installing additional IO dependencies
I also tried pip install xarray[complete] and pip install netcdf4. Nothing worked. What am I doing wrong?
Best regards,
Fede

The original example you linked, while the source data is in GRIB2 format, accesses the data from a THREDDS server using the OPeNDAP protocol. You can tell this by looking at the URL and seeing https://www.ncei.noaa.gov/thredds/dodsC/. This protocol is readily supported by xarray. The important point is that in that case, the GRIB2 format was not being processed by xarray.
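For comparison, this is roughly what the original example is doing under the hood; the dataset path below is hypothetical, and no GRIB decoding is involved because the THREDDS server serves the data over OPeNDAP:
import xarray as xr

url = "https://www.ncei.noaa.gov/thredds/dodsC/some/dataset"  # hypothetical dataset path
ds = xr.open_dataset(url)  # handled by xarray's netCDF4/pydap backends
print(ds)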
To open GRIB2 data with xarray, you need to install cfgrib. You can do this with pip using:
pip install cfgrib
or from conda-forge using conda:
conda install -c conda-forge cfgrib
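With cfgrib installed, point xarray at the local file and select the engine explicitly. A minimal sketch (the file name below is hypothetical):
import xarray as xr

ds = xr.open_dataset("gfs_20230101_00z.grb2", engine="cfgrib")  # hypothetical local GRIB2 file
print(ds)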

Related

OSError: Could not find kaggle.json. Make sure it's located in C:\Users\Lior\.kaggle. Or use the environment method

I have a similar problem to this one: Kaggle API issue "Could not find kaggle.json. Make sure it's located in......"
I get the same error when I type kaggle competitions download -c spaceship-titanic
But in my case the folder ".kaggle/" is actually empty. So I assume I downloaded the Kaggle API incorrectly; how do I download it correctly?
Things I have tried according to https://github.com/Kaggle/kaggle-api:
pip install kaggle, pip install --user kaggle, sudo pip install kaggle
The first two completed, but did not create the kaggle.json file.
The third did not complete; it said sudo: command not found.
[Sample photo for finding the account page omitted]
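For what it's worth, pip only installs the CLI; kaggle.json is an API token you generate yourself on the Kaggle website (Account page, "Create New API Token") and then place in ~/.kaggle/. A minimal sketch of moving it into place, assuming the browser saved the token to a default Downloads folder:
from pathlib import Path
import shutil

src = Path.home() / "Downloads" / "kaggle.json"  # assumed download location
dst = Path.home() / ".kaggle"
dst.mkdir(exist_ok=True)
shutil.copy(src, dst / "kaggle.json")
(dst / "kaggle.json").chmod(0o600)  # on Unix; the CLI warns if the token is world-readable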

Getting "ValueError: ZIP does not support timestamps before 1980" while installing setuptools-scm from PyPI

Describe the bug
I am working on a client machine where I don't have access to external servers, so I have to download packages from the Python website, extract the zip files, and install them on my machine by running python setup.py install at the command prompt. The first two packages (Selenium and urllib3) installed fine on my machine and are set up correctly.
I then tried to install pytest from PyPI, but it requires setuptools-scm. So I downloaded the setuptools-scm package and tried to install it, but I get the error ValueError: ZIP does not support timestamps before 1980.
Expected behavior
Setuptools-scm should be installed
To Reproduce
Download the setuptools-scm package from https://pypi.org/project/setuptools-scm/
Extract the zip files and install setuptools-scm by running python setup.py install
Observe the error ValueError: ZIP does not support timestamps before 1980.
Command Prompt response:
File "C:\Program Files\Python 3.8\lib\zipfile.py", line 360, in __init__
raise ValueError('ZIP does not support timestamps before 1980')
ValueError: ZIP does not support timestamps before 1980
In my case, it was because the files had a last-modified date of 1 January 1970. I simply touched all the files to update their last-modified time to today, and everything worked:
$ find . -type f -exec touch {} +
I got this error on Python 3.9. I could resolve it by changing all instances of strict_timestamps from True to False (i.e. strict_timestamps=False) in zipfile.py inside the Lib folder (..\Python\Python39\Lib\). Reference
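For code you control, there is no need to edit the standard library: zipfile.ZipFile accepts the flag directly on Python 3.8+. A minimal sketch:
import zipfile

# With strict_timestamps=False, pre-1980 mtimes are clamped to 1980-01-01
# instead of raising ValueError.
with zipfile.ZipFile("out.zip", "w", strict_timestamps=False) as zf:
    zf.write("some_old_file")  # hypothetical file carrying a 1970 timestamp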
I got this error too. It was because I used WinRAR to decompress the package (*.tar.gz), so many files didn't have dates on them. I then used the command-line tool tar with tar -zxvf *.tar.gz to decompress the package instead, and the problem was solved.
I noticed that the files in src/setuptools_scm/ didn't have a created or modified timestamp. I simply opened the files in a text editor and saved them without changes to establish a timestamp.
After that, python setup.py install worked as expected.

Full installation of tensorflow (all modules)?

I have this repository: https://github.com/layog/Accurate-Binary-Convolution-Network. As requirements.txt says, it requires tensorflow==1.4.1. So I am using miniconda (on Ubuntu 18.04) and, for the love of God, I can't get it to run (it errors out at the line below):
from tensorflow.examples.tutorials.* import input_data
This gives me an ImportError saying it can't find tensorflow.examples. I have diagnosed the problem as a few modules being missing after I installed tensorflow (I have tried all of the ways below):
pip install tensorflow==1.4.1
conda install -c conda-forge tensorflow==1.4.1
# And various wheel packages available on the internet for 1.4.1
pip install tensorflow-1.4.0rc1-cp36-cp36m-manylinux1_x86_64.whl
The question is: if I want all the modules present in the git repo source in my installed copy, do I have to COMPLETELY build tensorflow from source? If yes, can you mention the flags I should use? Are there any wheel packages available that have all modules present in them?
A link would save me tonnes of effort!
NOTE: Even if I manually import the examples directory, it says tensorflow.contrib is missing, and if I import that locally too, another ImportError pops up. There has to be an easier way, I am sure of it.
Just for reference for others stuck in the same situation:
Use the latest tensorflow build and Bazel 0.27.1 to install it. Even though the requirements state that we need an older version, use the newer one instead; it's not worth the hassle and it will get the job done.
Also, to answer the question above: building only specific directories is possible. Each module has a BUILD file which is fed to Bazel.
See the name attributes in that file to build targets specific to that folder. For reference, the command I used to generate the wheel package for examples.tutorials.mnist:
bazel build --config=opt --config=cuda --incompatible_load_argument_is_label=false //tensorflow/examples/tutorials/mnist:all_files
Here all_files is the name found in the examples/tutorials/mnist/BUILD file.
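For reference, on a TF 1.x install that ships the tutorials, the import the repo expects looks like the following (the exact module path is assumed from the MNIST tutorial layout; tensorflow.examples.tutorials was removed in TF 2.x):
from tensorflow.examples.tutorials.mnist import input_data  # note "tutorials", plural

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)  # downloads MNIST if absent
print(mnist.train.num_examples)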

Apache Beam Error: Unable to get file system for GCS

I'm trying to write to GCS bucket via Beam (and TF Transform). But I keep getting the following error:
ValueError: Unable to get the Filesystem for path [...]
The answer here and some other sources suggest that I need to pip install apache-beam[gcp] to get a different variant of Apache Beam that works with GCP.
So, I tried changing the setup.py of my training package to:
REQUIRED_PACKAGES = ['apache_beam[gcp]==2.14.0', 'tensorflow-ranking', 'tensorflow_transform==0.14.0']
which didn't help. I also tried adding the following to the beginning of my code:
subprocess.check_call('pip uninstall apache-beam'.split())
subprocess.check_call('pip install apache-beam[gcp]'.split())
which didn't work either.
The logs of the failed GCP job are here. The traceback and the error message appear on row 276.
I should mention that the same code runs fine using Beam's DirectRunner and writing the outputs to local disk. But I'm now trying to switch to DataflowRunner.
Thanks.
It turns out that you need to uninstall google-cloud-dataflow in addition to installing apache-beam with the gcp option. I guess this happens because google-cloud-dataflow is installed on GCP instances by default. Not sure if the same would be true on other platforms like AWS. But anyway, here are the commands I used:
pip uninstall -y google-cloud-dataflow
pip install apache-beam[gcp]
I noticed this in the very first cell of this notebook: https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive/10_recommend/wals_tft.ipynb
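For context, here is a minimal sketch of the runner switch the question describes, with hypothetical project and bucket names; writing to gs:// paths fails with the "Unable to get the Filesystem" error unless apache-beam[gcp] is installed:
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",             # was "DirectRunner" locally
    project="my-gcp-project",            # hypothetical project id
    temp_location="gs://my-bucket/tmp",  # hypothetical bucket
    region="us-central1",
)

with beam.Pipeline(options=options) as p:
    (p
     | beam.Create(["hello", "world"])
     | beam.io.WriteToText("gs://my-bucket/output"))  # needs the gcp extra to resolve gs://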

How can I make this script run

I found this script (tutorial) on GitHub (https://github.com/amyoshino/Dash_Tutorial_Series/blob/master/ex4.py) and I am trying to run it on my local machine.
Unfortunately I am getting an error.
I would really appreciate it if anyone could help me run this script.
Perhaps this is something easy, but I am new to coding.
Thank you!
You probably just need to pip install the dash-core-components library!
Take a look at the Dash Installation documentation. It currently recommends running these commands:
pip install dash==0.38.0 # The core dash backend
pip install dash-html-components==0.13.5 # HTML components
pip install dash-core-components==0.43.1 # Supercharged components
pip install dash-table==3.5.0 # Interactive DataTable component (new!)
pip install dash-daq==0.1.0 # DAQ components (newly open-sourced!)
For more info on using pip to install Python packages, see: Installing Packages.
If you have run those commands and Flask still throws that error, you may have a path/environment issue, and you should provide more info in your question about your Python setup.
Also, just to give you a sense of how to interpret this error message:
It's often easiest to start at the bottom and work your way up.
Here, the bottommost message is a FileNotFoundError.
The program is looking for the file in your Python37/lib/site-packages folder. That tells you it's looking for a Python package, since that is the directory to which Python packages get installed when you use a tool like pip.
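A quick way to check for such a path/environment mixup is to ask the interpreter you are actually running where it lives and whether it can see the package. A minimal sketch:
import sys
print(sys.executable)  # which Python binary is running
print(sys.path)        # where it searches for packages

import dash_core_components  # raises ImportError if the install went to another environment
print(dash_core_components.__version__)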
