Install Package with Pip on Google Datalab - No Space on Device - python

I'm trying to install the package sodapy using pip on a fresh instance of Google Datalab, but I'm receiving the error 'No space left on device.' I created this instance with over 100 GB of disk space, so I'm a bit confused why I would be getting this error. I've tried deleting instances and creating new ones, with no luck.
I'm using the command
!pip install sodapy
as explained in the documentation: https://cloud.google.com/datalab/docs/how-to/adding-libraries
Thanks in advance for your help!

You may have run into a bug where the Disk was not being attached: https://github.com/googledatalab/datalab/issues/1898
(If that is the case, resetting the VM once should fix it, and you should update to gcloud version 186.0.0 or later.)
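A quick way to confirm whether the disk actually got attached is to list the mounted filesystems from a notebook cell (a minimal diagnostic sketch; mount points vary by Datalab version):
!df -h
# Look for a filesystem of roughly 100 GB in the output. If only the small
# boot disk shows up, the persistent disk was never attached and the VM
# needs a reset (or a newer gcloud, per the bug above).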

Try adding the --user flag to the pip install command. This installs the package to your persistent disk (the 100 GB disk) instead of your VM's boot disk.
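For example, with the package from the question (a sketch of the suggestion above, run from a Datalab notebook cell):
!pip install --user sodapy
# Installs into the user site-packages on the persistent disk rather than
# the small boot volume, per the answer above.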

Related

Failed to install TensorFlow on an EC2 instance (Ubuntu 20.04)

I have a Flask application that I would like to run on an EC2 instance, and TensorFlow is needed because it does image classification. However, after the necessary updates and upgrades, I try to install TensorFlow, but after the progress bar completes I don't see "Successfully installed tensorflow==2.7.0". Images are attached. Is there any reason why it is not letting me install TensorFlow, or does the instance have limitations that won't let me install it? Please help, and thanks in advance.
[Image: installing TensorFlow on the EC2 instance]
Increase the storage on the hard disk (30 GB in my case), then install the CPU-only build, which is much smaller:
pip install tensorflow-cpu
I also faced the same error, and apparently storage was not the reason. I tried using a larger instance with more memory and was able to install TensorFlow in the Anaconda environment successfully.
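If you want to rule out disk and memory limits before retrying, a few quick checks on the instance help (a sketch; the flags shown are standard pip and coreutils options):
df -h /                                     # free space on the root volume
free -m                                     # available RAM; installs can die quietly if memory runs out
pip install --no-cache-dir tensorflow-cpu   # skip pip's cache to cut disk usage during install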

How to permanently install Rapids on Google Colab?

Is there a way to install Rapids permanently on Google Colab? I tried many solutions given on Stack Overflow and other websites, but nothing is working. This is a very big library, and it is very frustrating to download it every time I want to work in Colab.
I tried this code from Rapids, but it is also not working. When I close Colab and start again later, I get ModuleNotFoundError: No module named 'cudf'.
# Install RAPIDS
!git clone https://github.com/rapidsai/rapidsai-csp-utils.git
!bash rapidsai-csp-utils/colab/rapids-colab.sh stable

import sys, os, shutil

sys.path.append('/usr/local/lib/python3.7/site-packages/')
os.environ['NUMBAPRO_NVVM'] = '/usr/local/cuda/nvvm/lib64/libnvvm.so'
os.environ['NUMBAPRO_LIBDEVICE'] = '/usr/local/cuda/nvvm/libdevice/'
os.environ["CONDA_PREFIX"] = "/usr/local"

for so in ['cudf', 'rmm', 'nccl', 'cuml', 'cugraph', 'xgboost', 'cuspatial']:
    fn = 'lib' + so + '.so'
    source_fn = '/usr/local/lib/' + fn
    dest_fn = '/usr/lib/' + fn
    if os.path.exists(source_fn):
        print(f'Copying {source_fn} to {dest_fn}')
        shutil.copyfile(source_fn, dest_fn)

# fix for BlazingSQL import issue
# ImportError: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.26' not found (required by /usr/local/lib/python3.7/site-packages/../../libblazingsql-engine.so)
if not os.path.exists('/usr/lib64'):
    os.makedirs('/usr/lib64')
for so_file in os.listdir('/usr/local/lib'):
    if 'libstdc' in so_file:
        shutil.copyfile('/usr/local/lib/' + so_file, '/usr/lib64/' + so_file)
        shutil.copyfile('/usr/local/lib/' + so_file, '/usr/lib/x86_64-linux-gnu/' + so_file)
A solution has been suggested - How do I install a library permanently in Colab? - but it uses pip to install libraries, and Rapids can't be installed using pip; it can only be installed using Conda. This is the code to install it:
conda create -n rapids-0.19 -c rapidsai -c nvidia -c conda-forge \
rapids-blazing=0.19 python=3.7 cudatoolkit=11.0
I tried to include the Google Drive path (nb_path) in this command using the --prefix flag, along the lines of the !pip install --target=$nb_path jdc command suggested by the link above, but I am getting a syntax error.
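For reference, the Drive-backed pattern from that link for plain pip packages looks roughly like this (a sketch: nb_path and the jdc example come from the linked answer, the specific Drive folder name is arbitrary, and none of this works for conda-installed RAPIDS):
import os, sys
from google.colab import drive

drive.mount('/content/drive')
nb_path = '/content/drive/MyDrive/colab_pkgs'  # any folder on your Drive
os.makedirs(nb_path, exist_ok=True)
sys.path.insert(0, nb_path)                    # make it importable in later sessions too
!pip install --target=$nb_path jdc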
Can anyone tell me how to set this nb_path to the conda create code above?
For reference, the conda target path for RAPIDS install is /usr/local. We use a different location in the RAPIDS-Colab install script to get it to work.
At the moment, I'm not aware of any way for a user to permanently install RAPIDS into Google Colab. Google Colab isn't designed to persist libraries - or any data, for that matter - that aren't preinstalled in the environment. While you have a decent-looking workaround there for pip libraries and datasets with Google Drive mounting, RAPIDS is a little more tricky, as we update quite a bit of the Colab environment just to get RAPIDS to install. What you propose is an interesting path to explore. We do encourage and work with RAPIDS community members in our Slack channel who try new methods and improve some of our community code, like the RAPIDS-Colab installation script.
Just remember, the RAPIDS + Google Colab effort was never meant to be more than a fun, easy way to "try RAPIDS out". For Google Cloud users, GCP is supposed to be the next step. While it's heartening to see the usage grow over time, Google would need to create a Colab instance that has RAPIDS preinstalled for what you want to happen. You should let them know you want this by:
1. Opening any Colab notebook
2. Going to the Help menu and selecting "Send feedback..."
In the meantime, if you need a ready-to-go instance, there are some inexpensive, RAPIDS-enabled, quick start options on the horizon.

Apache Beam Error: Unable to get file system for GCS

I'm trying to write to a GCS bucket via Beam (and TF Transform). But I keep getting the following error:
ValueError: Unable to get the Filesystem for path [...]
The answer here and some other sources suggest that I need to pip install apache-beam[gcp] to get a different variant of Apache Beam that works with GCP.
So, I tried changing the setup.py of my training package as:
REQUIRED_PACKAGES = ['apache_beam[gcp]==2.14.0', 'tensorflow-ranking', 'tensorflow_transform==0.14.0']
which didn't help. I also tried adding the following to the beginning of my code:
subprocess.check_call('pip uninstall apache-beam'.split())
subprocess.check_call('pip install apache-beam[gcp]'.split())
which didn't work either.
The logs of the failed GCP job are here. The traceback and the error message appear on row 276.
I should mention that running the same code using Beam's DirectRunner and writing the outputs to local disk runs fine. But I'm now trying to switch to DataflowRunner.
Thanks.
It turns out that you need to uninstall google-cloud-dataflow in addition to installing apache-beam with the gcp option. I guess this happens because google-cloud-dataflow is installed on GCP instances by default. Not sure if the same would be true on other platforms like AWS. But anyway, here are the commands I used:
pip uninstall -y google-cloud-dataflow
pip install apache-beam[gcp]
I noticed this in the very first cell of this notebook: https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive/10_recommend/wals_tft.ipynb
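If you need to apply the fix from inside a script, as the question attempted, note that pip uninstall prompts for confirmation and will hang in a non-interactive run unless you pass -y. A corrected sketch of the subprocess approach:
import subprocess

# -y answers pip's uninstall confirmation prompt non-interactively
subprocess.check_call('pip uninstall -y google-cloud-dataflow'.split())
subprocess.check_call('pip install apache-beam[gcp]'.split())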

How can I make this script run

I found this script (tutorial) on GitHub (https://github.com/amyoshino/Dash_Tutorial_Series/blob/master/ex4.py) and I am trying to run it on my local machine.
Unfortunately, I am getting an error.
I would really appreciate it if anyone could help me run this script.
Perhaps this is something easy, but I am new to coding.
Thank you!
You probably just need to pip install the dash-core-components library!
Take a look at the Dash Installation documentation. It currently recommends running these commands:
pip install dash==0.38.0 # The core dash backend
pip install dash-html-components==0.13.5 # HTML components
pip install dash-core-components==0.43.1 # Supercharged components
pip install dash-table==3.5.0 # Interactive DataTable component (new!)
pip install dash-daq==0.1.0 # DAQ components (newly open-sourced!)
For more info on using pip to install Python packages, see: Installing Packages.
If you have run those commands, and Flask still throws that error, you may be having a path/environment issue, and should provide more info in your question about your Python setup.
Also, just to give you a sense of how to interpret this error message:
It's often easiest to start at the bottom and work your way up.
Here, the bottommost message is a FileNotFoundError.
The program is looking for the file in your Python37/lib/site-packages folder, which tells you it's looking for a Python package: that is the directory Python packages get installed to when you use a tool like pip.
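Once the installs finish, a quick sanity check is importing the packages and printing their versions (a minimal sketch; the versions printed should match whatever pip installed):
import dash
import dash_core_components
import dash_html_components

print(dash.__version__)                  # e.g. 0.38.0
print(dash_core_components.__version__)  # e.g. 0.43.1
print(dash_html_components.__version__)  # e.g. 0.13.5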

How to install packages on a networked computer

I'm using an IPython 2.7 notebook to run some code. I recently discovered that all my data was corrupted and I need to do it all again (meaning I am very, very behind schedule). I figured I could halve the time required if I could run it on a second computer, so I've gone to a university computer cluster where the computers have Python 2.7 installed. I can open the notebook, but it won't run, as the first line is
import mlpy.wavelet
And it gives me an ImportError. I've tried downloading and installing it from SourceForge, but it seems to install to a Q: drive, which I don't have access to. I am completely lost on what to do here; I can't even remember how I first installed it on my laptop. I have a feeling I pip installed it, but I have no clue how to do this on a university computer.
Any rapid responses would be greatly appreciated
You can use pip to install packages in your user's home directory.
Run pip install --user mlpy to install mlpy and your other dependencies.
See this answer for reference.
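To see where --user installs end up, and to confirm the import works afterwards, you can check the user site-packages directory (a sketch; the exact path depends on your Python version and OS):
python -m site --user-site          # prints e.g. ~/.local/lib/python2.7/site-packages
pip install --user mlpy
python -c "import mlpy.wavelet"     # should now succeed with no ImportError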
