I have a Python script that uses scikit-learn, and it works completely fine when I run it manually, either in a Jupyter notebook or from the command line. However, it doesn't work when I schedule it in Windows Task Scheduler. After spending a lot of time on this, I realised the issue is the sklearn imports: if I comment them out, the script runs fine from the scheduler, but the moment I include a sklearn import, the scheduler doesn't execute a single line of the script. I have no idea what is causing this, and it's even more surprising that the script works like a charm when run manually. I've uninstalled and reinstalled the Anaconda distribution on my PC, but no result. Any help on how I can fix this, please?
I finally managed to fix this. It turned out to be a corrupted scipy package; uninstalling and reinstalling it fixed the issue.
The way I figured it out was by running my Python script from a batch file, which threw the following error:
from scipy.sparse.linalg import lsqr as sparse_lsqr
ImportError: DLL load failed: The specified module could not be found.
I never got this error when I ran my Python script from the Windows scheduler, a Jupyter notebook, or even the command line, which is very strange.
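For anyone stuck at the same point: the step that surfaced the traceback was wrapping the script in a batch file that redirects all output to a log (e.g. python script.py > log.txt 2>&1) and scheduling that instead. Along the same lines, here is a minimal sanity-check script you could schedule, just a sketch to confirm which interpreter and scipy build Task Scheduler actually picks up:

import sys
print("interpreter:", sys.executable)

# This is the import that failed with the DLL error; on a healthy
# install it succeeds and reports where scipy was loaded from.
import scipy
print("scipy", scipy.__version__, "from", scipy.__file__)
from scipy.sparse.linalg import lsqr  # raises ImportError on a broken build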
I've got a problem I can't seem to figure out. The first time I imported spaCy into a Jupyter notebook I had no problems; it just imported as I expected.
The second time I tried to import it (in a different notebook) I got:
ImportError: cannot import name 'prefer_gpu' from 'thinc.api' (C:\python-environments\nlp\lib\site-packages\thinc\api.py)
So I restarted the kernel and tried again (thinking that might be the issue). That did not solve it. Running the same cell that imported spaCy in the first notebook now throws the error too, even though it worked the first time.
It sounds like you have an old version of Thinc somewhere; try uninstalling and reinstalling Thinc.
Another thing to check is if you're running in the right Python environment. Sometimes Jupyter notebooks pull in a different environment than the one you're expecting in non-obvious ways. There was a thread in spaCy discussions about this recently. You can run this command to check which Python executable is being used in the notebook and make sure it's the one you think it is:
import sys
print(sys.executable)
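If the executable looks right, it can also be worth confirming which Thinc build spaCy is actually picking up. A quick check along these lines (just a sketch; it assumes thinc itself imports, which the traceback suggests it does):

import thinc
print(thinc.__version__)  # should fall in the version range your spaCy release requires
print(thinc.__file__)     # should live in the same environment as sys.executable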
I had a similar issue: I followed the GitHub link, created a new environment, installed all the required packages, and that resolved it. I'm using Visual Studio Code, so I had to install some other dependencies as well, since VS Code uses this conda environment as the base for my code.
I was trying to use CuPy inside a Jupyter notebook on Windows 10 and got this error:
---> from cupy_backends.cuda.libs import nvrtc
ImportError: DLL load failed while importing nvrtc: The specified procedure could not be found.
This is triggered by import cupy.
I know there are a bunch of threads about similar issues (DLLs not found by Jupyter under Windows), but every one of them relies on conda, which I'm not using anymore.
I checked os.environ['CUDA_PATH'], which is set to C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.6 and is the right path.
Also, os.environ['PATH'] contains C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.6\\bin which is where the DLL is located.
I fixed it once by running pip install -U notebook, but it started failing again after restarting Jupyter, and running the same command again (even with --force-reinstall) didn't help a second time.
I have no problems with CuPy when using a shell or a regular Python IDE. I could work around it by executing CuPy-based commands outside Jupyter myself, but that would defeat the point of using notebooks for pedagogy and examples, which is my main use for them.
Does anyone have a fix for this that doesn't rely on conda?
The error was showing up because I was importing PyTorch before CuPy.
The solution was to import cupy before torch.
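My hedged reading of why this works (I haven't confirmed it in CuPy's docs): importing torch first loads PyTorch's bundled CUDA DLLs, and cupy then resolves nvrtc against the wrong copies. So the working order looks like:

# Import order matters in this setup: cupy must come before torch,
# otherwise torch's bundled CUDA DLLs appear to shadow the toolkit's nvrtc.
import cupy
import torch

print(cupy.__version__, torch.__version__)  # sanity check that both load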
I have a MacBook Pro M1 running Big Sur with Python 3.8 and MATLAB R2020b, both running perfectly fine under Rosetta 2.
Since I need to use one MATLAB function in a Python script of mine, I wanted to use matlab.engine and followed the instructions (with sudo privileges and python3 instead of python) at:
https://www.mathworks.com/help/matlab/matlab_external/install-the-matlab-engine-for-python.html
Then I entered python3 in the terminal and tried import matlab.engine, which resulted in the error: No module named 'matlab.engine'; 'matlab' is not a package
My .zshrc file contains the path to my Python. I tried export PATH="/Users/flo/Library/Python/3.8/bin:$PATH" and, as that didn't work, I also tried export PATH="/Library/Python/3.8:$PATH"
MATLAB is also on my path, as I can call matlab from the terminal and it starts as expected. It's just matlab.engine that I can't get running.
Since the only thing I want to achieve is to call a script containing a function with 2 input and 7 output arguments (which I need in Python for further calculations), is there another way to do this without matlab.engine, in case I can't get it running?
Oh dear, I managed to resolve the issue that I'd had for quite a few days just minutes after posting the question.
It seems I ran pip3 install matlab some weeks or months ago and didn't remember. When I tried to import matlab.engine, Python thought I wanted to import engine from the matlab package installed by pip, and that's why I got the error that matlab is not a package.
Simply running pip3 uninstall matlab resolved the issue for me!
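A quick way to confirm which matlab package Python is actually resolving (just an illustrative check, nothing MathWorks-specific):

import matlab
# If this path points into pip's site-packages rather than the engine
# package installed by MATLAB itself, a stray pip "matlab" is shadowing it.
print(matlab.__file__)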
Hope this helps other people save the days I wasted on such a silly error.
So I recently updated my Anaconda environment, and now I can't import TensorFlow anymore. Every time I run a script containing it, the Spyder console runs for a while, then just stops and resets to In[1].
I tried to see how far the script gets, and it does everything fine until the import statement. Weirdly enough, autocomplete still works for tf, which suggests the installation itself should be fine. Reinstalling tensorflow also didn't do anything.
There are no error messages, because the interpreter dies a silent death every time I run the script. I've seen others describe a similar problem in Jupyter, but their fixes didn't work. (Running the script without Spyder just freezes Python.)
I'd greatly appreciate help
OK, so I did some black mathmagic and fixed it. What I did was downgrade the tensorflow version via pip (to 1.14, but I don't think that matters) and then upgrade my entire conda setup again with conda update --all. I have no idea why this works, and the Anaconda console screamed like it was being tortured during the entire update, but it works now and I don't think I'll touch it again. If I find a better fix, or if I run into problems with this one, I'll update this post.
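For anyone else debugging a silent crash like this, one sketch that can help before resorting to reinstalling everything: run the import under faulthandler from a plain terminal, which prints a traceback even when the interpreter dies on a native (segfault-style) crash:

import faulthandler
faulthandler.enable()  # dump a traceback even on hard crashes that skip Python's error handling

import tensorflow as tf  # if the crash happens here, faulthandler shows the frame
print(tf.__version__)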