I am running some Theano code that makes use of tensor.advanced_subtensor, and I am getting the following error:
NotImplementedError: Could not import inplace_increment, so some advanced indexing features are disabled. They will be available if you update NumPy to version 1.8 or later, or to the latest development version. You may need to clear the cache (theano-cache clear) afterwards.
I have the latest version of Theano (0.6.0.dev-60b5ccc2bcabb1010714376764daf8a50722cee9) and NumPy (1.8.0). Why am I still getting this error? How can I resolve it? How do I clear the Theano cache?
The Theano cache is usually in ~/.theano/ if you are on a *nix system.
You need to clear the Theano cache. The cache lives in the ~/.theano/ folder.
Follow the steps below to clear it manually.
import theano
print(theano.config.compiledir)
# Then delete the directory printed above.
If you do not want to delete it manually, use the command below.
theano-cache purge
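If the theano-cache command is not on your PATH, here is a minimal Python sketch that removes the compilation cache directly; it assumes a standard Theano install and takes the cache location from theano.config.compiledir, as above:
import shutil
import theano

# theano.config.compiledir points at the per-platform compilation cache,
# usually somewhere under ~/.theano/.
compiledir = theano.config.compiledir
print("Removing Theano cache at: %s" % compiledir)

# Delete the whole cache directory; Theano recreates it on the next compile.
shutil.rmtree(compiledir, ignore_errors=True)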
I am trying to install TensorFlow in Python. I am getting the following error message; I tried uninstalling and re-installing NumPy, but I still get the same error. Can someone please help me resolve this issue?
AttributeError: module 'numpy' has no attribute 'typeDict'
I was trying to use the package pyensembl and ran into this same issue. I was able to work around it for now with
pip install numpy==1.21
which should suffice until some of these less active packages are able to update to the new API.
As we can see in the NumPy 1.21.0 release notes:
np.typeDict is a deprecated alias for np.sctypeDict and has been so
for over 14 years
(6689502).
A deprecation warning will now be issued whenever getting np.typeDict.
(gh-17586)
This means you are using a NumPy version that removed the deprecated aliases AND the library you are using wasn't updated to match that version (it still uses np.typeDict instead of np.sctypeDict).
You have at least three options now:
Report the issue and wait until it gets fixed by TensorFlow.
Use an older version of numpy (one before it started to issue the deprecation warning) and wait for it to be fixed.
Change np.typeDict to np.sctypeDict wherever it is being used (a minimal monkey-patch sketch is shown below).
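For the third option, if you cannot edit the offending library yourself, a hedged workaround is to restore the removed alias before that library is imported. This is only a sketch, assuming a NumPy version where np.sctypeDict still exists; whether it is enough depends on what else the library relies on:
import numpy as np

# np.typeDict was removed in newer NumPy releases; np.sctypeDict is the
# replacement named in the release notes. Restoring the alias lets older
# libraries that still reference np.typeDict import without the AttributeError.
if not hasattr(np, "typeDict"):
    np.typeDict = np.sctypeDict

# Import the affected library only after the alias is in place,
# e.g. import tensorflow or import pyensembl.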
I just installed the tensorboard profiler with
pip install -U tensorboard_plugin_profile
The plugin version is 2.3.
TensorFlow version: 2.3
TensorBoard version: 2.3
cudatoolkit version: 10.1.243
When I now try to open the Profile tab in TensorBoard, I see the profiler window normally but it is empty, and I get the error message:
DEM6561: Failed to load libcupti (is it installed and accessible?)
And the warning:
No step marker observed and hence the step time is unknown. This may happen if (1) training steps are not instrumented (e.g., if you are not using Keras) or (2) the profiling duration is shorter than the step time. For (1), you need to add step instrumentation; for (2), you may try to profile longer.
I think it has something to do with the environment paths and variables, but I don't know how they work with the virtual environments of Anaconda. (I don't have a CUDA folder I can link to.)
Has anyone had the same problem, or any ideas about what I can try?
Thanks in advance!
First, make sure that CUPTI has been added to your Path (via Environment Variables if you're using Windows); the entry should look like this:
%CUDA_PATH%\extras\CUPTI\lib64
Second, check whether TensorFlow is looking for the correct CUPTI DLL. I've encountered this exact same issue and, as far as I checked, TF 2.4 looks for cupti64_110.dll instead of cupti64_2020.1.1.dll. It is currently a known issue and will be addressed in TF 2.5; I'm not sure whether that is also the case with TF 2.3.
I basically resolved the issue by copying the DLL into the same directory and renaming it. Let me know if this helps!
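To check whether a CUPTI DLL is actually reachable from the environment TensorFlow sees, here is a small Windows-oriented sketch; the DLL names are the ones mentioned above, so adjust them for your setup:
import ctypes.util
import os

# CUPTI normally lives under %CUDA_PATH%\extras\CUPTI\lib64.
cuda_path = os.environ.get("CUDA_PATH", "")
cupti_dir = os.path.join(cuda_path, "extras", "CUPTI", "lib64")
print("Expected CUPTI directory:", cupti_dir)
print("Directory exists:", os.path.isdir(cupti_dir))

# find_library searches the DLL search path (PATH on Windows); a None result
# here usually means the profiler will fail to load CUPTI as well.
for name in ("cupti64_110", "cupti64_2020.1.1", "cupti"):
    print(name, "->", ctypes.util.find_library(name))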
I use Anaconda on a Windows 10 laptop with Python 2.7 and Spark 2.1. I built a deep learning model using the Sknn.mlp package and have completed the model. When I try to predict using the predict function, it throws an error. I run the same code on my Mac and it works just fine. I am wondering what is wrong with my Windows packages.
'NoneType' object is not callable
I verified the input data. It is a numpy.array and it does not have null values. Its dimensions are the same as the training data's and all attributes are the same. Not sure what it can be.
I don't work with Python on Windows, so this answer will be very vague, but maybe it will guide you in the right direction. Sometimes there are cross-platform errors due to one module still not being updated for the OS, frequently when another related module gets an update. I recall something happened to me with a django application which required somebody more familiar with Windows to fix it for me.
Maybe you could try with an environment using older versions of your modules until you find the culprit.
I finally solved the problem on Windows. Here is the solution in case you face it.
The Theano package was faulty. I installed the latest version from GitHub, and then it threw another error, shown below:
RuntimeError: To use MKL 2018 with Theano you MUST set "MKL_THREADING_LAYER=GNU" in your environment.
To solve this, I created a user environment variable named MKL_THREADING_LAYER and set its value to GNU. After restarting the kernel it was working.
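If you prefer not to touch the Windows environment variables at all, here is a small sketch that sets the flag from Python instead; the variable name comes straight from the error message, and it has to be set before Theano is imported:
import os

# Must be set before `import theano`, since the MKL check runs at import time.
os.environ["MKL_THREADING_LAYER"] = "GNU"

import theano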
Hope it helps!
I'm using Keras with the Theano backend on Ubuntu 16.04. My setup has been working without issues, however, all of a sudden I get the following error when I import Keras (import keras):
ValueError: You are trying to use the old GPU back-end. It was removed from Theano. Use device=cuda* now. See https://github.com/Theano/Theano/wiki/Converting-to-the-new-gpu-back-end%28gpuarray%29 for more information.
How do I resolve this?
You should change (or add) the environment variable called THEANO_FLAGS. If you set the variable so that it contains device=cuda instead of device=gpu, the error will be gone.
Also set the floating point precision to float32 when working on the GPU as that is usually much faster (THEANO_FLAGS='device=cuda,floatX=float32').
More info on this variable can be found in the Theano configuration documentation.
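If editing shell profiles is inconvenient, here is a minimal sketch that sets THEANO_FLAGS from inside the script, before Theano (and therefore Keras) is imported:
import os

# THEANO_FLAGS is read when Theano is imported, so it must be set first.
os.environ["THEANO_FLAGS"] = "device=cuda,floatX=float32"

import keras  # this pulls in the Theano backend with the new gpuarray device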
Go to the .theanorc file and change device=gpu to device=cuda.
For me, with no GPU, using the CPU worked:
export THEANO_FLAGS='mode=FAST_RUN,device=cpu,floatX=float32'
I am trying to debug a Fortran warning in some scikit-learn code that runs perfectly on my laptop... but after transferring to my desktop (a fresh Ubuntu 15.10, fresh PyCharm, and fresh Anaconda3), I get the following warning when running sklearn.cross_validation.cross_val_score:
/anaconda3/lib/python3.5/site-packages/sklearn/externals/joblib/hashing.py:197: DeprecationWarning: Changing the shape of non-C contiguous array by descriptor assignment is deprecated. To maintain the Fortran contiguity of a multidimensional Fortran array, use 'a.T.view(...).T' instead
  obj_bytes_view = obj.view(self.np.uint8)
The command I am submitting to cross_val_score is:
test_results = cross_val_score(learner(**learner_args),data,y=classes,n_jobs=n_jobs,scoring='accuracy',cv=LeaveOneOut(data.shape[0]))
where the iterator is the sklearn cross-validation object, and nothing else special is going on. What could be happening here? Am I missing some installation step?
Just for the record, for people like me who found this SO post via Google: this has been recorded as issue #6370 for scikit-learn.
As mentioned there:
This problem has been fixed in joblib master. It won't be fixed in scikit-learn until:
1) we do a new joblib release
2) we update scikit-learn master to have the new joblib release
3) if you are using released versions of scikit-learn, which I am guessing you are, you will have to wait until there is a new scikit-learn release
I was able to use the workaround from #bordeo:
import warnings
warnings.filterwarnings("ignore")
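If silencing every warning feels too broad, a narrower filter that targets only this DeprecationWarning might be preferable; the message prefix below is taken from the traceback above:
import warnings

# Ignore only the joblib Fortran-contiguity deprecation, not all warnings.
warnings.filterwarnings(
    "ignore",
    message="Changing the shape of non-C contiguous",
    category=DeprecationWarning,
)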