I have a problem with rdkit and spark. It gives me an error:
Depickling from a version number (9.0) that is higher than our version (7.2)
I am using conda. I tried to update pickle (I thought maybe that was the problem), but it seems that conda does not have a pickle package.
Can anyone please provide some direction on the source of the problem? Thanks.
I am working on an optimization process for which I need some data from the PySWMM output model, such as flood depths and pollutant mass.
Getting output data is only possible with the latest version of PySWMM.
When I install the latest version of PySWMM, I get this error:
(ImportError: DLL load failed while importing _solver: The specified module could not be found.)
I have already tried some suggestions regarding system environment and PATH changes, but they did not work.
I hope someone can help me with this.
PySWMM has now been updated to v1.2, which resolves this error. Sorry about that! And sorry for the delay here; I never thought to check S.O. until today!
Check out PyPI: https://pypi.org/project/pyswmm/
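If it helps, after upgrading (e.g. with pip install -U pyswmm) you can confirm that the new version is the one your interpreter actually sees; a minimal check, assuming Python 3.8+:

from importlib.metadata import version  # stdlib, Python 3.8+

print(version("pyswmm"))  # expect "1.2" or newer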
I am trying to initialize HDBSCAN for clustering in JupyterLab. I use Python 3.7.6.
import numpy as np
import pandas as pd
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
import hdbscan
The same error always appears (see the headline), and so far I do not know exactly where it comes from.
I have looked through several posts for solutions, but none has helped so far.
For example:
uninstalled and reinstalled numpy.
installed numpy >= 1.20.0
tried lines like pip install package --no-cache-dir --no-binary :all:
tried the following package version combination: hdbscan=0.8.19, matplotlib=3.2.2, numpy=1.15.4, pandas=0.23.4, scikit-learn=0.20.1, scipy=1.1.0, tensorflow=1.13.1.
I have also tried installing packages like tensorboard, but it did not help. Everything is installed via the terminal with pip.
I am starting to think that the problem might be deeper, but maybe I overlooked something important.
Can somebody help me to find the bug, please?
Best regards
Philipp
I guess you've probably seen this very long HDBSCAN GitHub issue where there still doesn't seem to be a clear solution. Unfortunately it seems to affect different systems in weird ways and there is a huge list of possible solutions and things that worked for other people (personally, just reinstalling numpy worked for me when I had a similar problem last week.)
The fact that you can try so many things and still have it not work seems suspicious. Maybe something else about your Python install or the way you're trying them is affecting the solutions? For instance, is JupyterLab definitely using the same Python environment that you're trying these solutions on? (You could test this by uninstalling HDBSCAN and seeing if the error changes instead to "package not found.")
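One quick, non-destructive way to check that (just a minimal sketch, run in a JupyterLab cell) is to print which interpreter and numpy build the kernel is actually using, and compare against what your terminal's pip reports:

import sys
import numpy

print(sys.executable)     # the Python that JupyterLab is really running
print(numpy.__version__)  # the numpy build that kernel sees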
Other than the many solutions in the GitHub issue (which it sounds like you've already tried), I really don't think there's much else you can try other than freshly reinstalling Python. Something about NumPy 1.20 and a change to the C API is causing this issue and it could be that something is lurking in your install every time you try these solutions.
Alternatively, you could make a new Python install/environment with a tool like pyenv or anaconda so that it doesn't break your existing install, and you can try and install just the bare minimum on this new install (i.e. just HDBSCAN.)
Upgrading the numpy library solved the problem.
My numpy version is 1.22.0 and sklearn version is 0.24.1.
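For reference, a quick sanity check that the runtime actually picked up the upgraded builds (the version numbers are simply what worked for me):

import numpy
import sklearn

print(numpy.__version__, sklearn.__version__)  # 1.22.0 and 0.24.1 here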
You should also look here: ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject
I have a 2021 Macbook pro M1.
It is a well-recognized problem that it conflicts with the TensorFlow library (see here or here); in particular, this last one was exactly the issue I experienced: when I tried to import TensorFlow in a Jupyter notebook via the command
import tensorflow as tf
then the message
The kernel appears to have died. It will restart automatically.
appeared. Then, searching in the discussions linked above, I got the feeling that the suggestion given at some point, which points to this link, SEEMS to be useful.
FIRST QUESTION: is this a/the solution for the M1-Tensorflow conflict?
I say "it seems" since before trying that I have been into the kind of tornado of desperate attempts leading a beginner like me to search for hints all around the web and then copy-paste commands taken here and there into the Terminal without understanding them all properly.
On one hand it sounds dumb, I admit, on the other the cost of understanding everything goes well beyond my humble intentions of learning some ML.
So the end result is that my computer is a complete mess; old libraries like numpy don't work anymore (when I import them in a Python 3 notebook opened with Jupyter via the command import numpy as np, the message
ModuleNotFoundError: No module named 'numpy'
appears); the pip command doesn't work either, and if I use pip3 to install, nothing changes. I read somewhere to use a virtual environment, and I followed the instructions even though I wasn't really aware of what I was doing; I downloaded Xcode, miniforge3...
Well, I guess that there is somebody out there who can relate with this.
SECOND PROBLEM: I would like to clean up everything dealing with Python/pip/anaconda and so on, and install everything from scratch, possibly following the above link to solve the M1-TensorFlow conflict... if it is correct. How can I do that?
Can somebody help me, please? Thanks
I just installed the tensorboard profiler with
pip install -U tensorboard_plugin_profile
The version is 2.3.
TensorFlow version: 2.3
TensorBoard version: 2.3
cudatoolkit version: 10.1.243
When I now try to open the Profile tab in TensorBoard, I see the Profiler window as usual, but it is empty, with the error message:
DEM6561: Failed to load libcupti (is it installed and accessible?)
And the warning:
No step marker observed and hence the step time is unknown. This may happen if (1) training steps are not instrumented (e.g., if you are not using Keras) or (2) the profiling duration is shorter than the step time. For (1), you need to add step instrumentation; for (2), you may try to profile longer.
I think it has something to do with the environment paths and variables, but I don't know how they work with the virtual environments of Anaconda. (I don't have a CUDA folder I can link to.)
Has anyone had the same problem, or any ideas of what I can try?
Thanks ahead!
First, make sure that CUPTI has been added to the Path (via Environment Variables if you're using Windows); the entry should look like this:
%CUDA_PATH%\extras\CUPTI\lib64
Second, check if TensorFlow is looking for the correct CUPTI DLL. I've encountered this exact same issue, and when I checked, it appeared that TF 2.4 is looking for cupti64_110.dll instead of cupti64_2020.1.1.dll. It is currently a known issue and will be addressed in TF 2.5. I'm not sure if that's also the case with TF 2.3.
I basically resolved the issue by copying the DLL into the same directory and renaming it. Let me know if this helps!
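If you want to verify which CUPTI DLL is actually findable before copying and renaming anything, a small sketch like this can help (the DLL names are the two mentioned above; adjust them to your CUDA version):

import ctypes
import os

print("CUDA_PATH =", os.environ.get("CUDA_PATH"))  # should point at your CUDA toolkit
for name in ("cupti64_110.dll", "cupti64_2020.1.1.dll"):
    try:
        ctypes.CDLL(name)
        print(name, "-> loadable")
    except OSError:
        print(name, "-> not found on PATH")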
I use Anaconda on a Windows 10 laptop with Python 2.7 and Spark 2.1. I built a deep learning model using the sknn.mlp package and have completed the model. When I try to predict using the predict function, it throws an error. I run the same code on my Mac and it works just fine. I'm wondering what is wrong with my Windows packages.
'NoneType' object is not callable
I verified the input data. It is a numpy.array and does not contain null values. Its dimensions are the same as the training data's and all attributes are the same. Not sure what it can be.
I don't work with Python on Windows, so this answer will be very vague, but maybe it will guide you in the right direction. Sometimes there are cross-platform errors due to one module still not being updated for the OS, frequently when another related module gets an update. I recall something happened to me with a django application which required somebody more familiar with Windows to fix it for me.
Maybe you could try with an environment using older versions of your modules until you find the culprit.
I finally solved the problem on Windows. Here is the solution in case you face it.
The Theano package was faulty. I installed the latest version from GitHub, and then it threw another error, shown below:
RuntimeError: To use MKL 2018 with Theano you MUST set "MKL_THREADING_LAYER=GNU" in your environment.
To solve this, I created a user environment variable named MKL_THREADING_LAYER and set it to GNU. I restarted the kernel, and it was working.
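If you prefer doing it from Python rather than through the Windows environment variables, a minimal sketch is to export the variable before Theano is imported:

import os

os.environ["MKL_THREADING_LAYER"] = "GNU"  # must be set before importing theano
import theano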
Hope it helps!