I am trying to save a scikit-learn model on a Windows server using sklearn's joblib.dump, and then joblib.load the same file on a Linux server (CentOS 7.1). I get the error below:
ValueError: non-string names in Numpy dtype unpickling
This is what I have tried:
Tried both Python 2.7 and Python 3.5
Tried the built-in open() with 'wb' and 'rb' arguments
I really don't care how the file is moved; I just need to be able to move and load it in a reasonable amount of time.
Python pickles should work between Windows and Linux. There may be incompatibilities if:
the Python versions on the two hosts are different (if so, try installing the same version of Python on both hosts); and/or
one machine is 32-bit and the other is 64-bit (I don't know of any fix for this problem so far).
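A minimal sketch of a portable dump/load, assuming matching scikit-learn, joblib, and numpy versions on both hosts (a numpy mismatch is a common cause of dtype unpickling errors); protocol=2 keeps the pickle readable from both Python 2.7 and 3.5:

from sklearn.externals import joblib  # on newer versions: import joblib

# on the Windows box: model is the fitted estimator from the question
joblib.dump(model, 'model.joblib', protocol=2)

# on the CentOS box
model = joblib.load('model.joblib')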
Related
How do I import and use an .so file that I extracted from an APK?
I've used the ctypes library on Linux, but it gave me an error every way I tried it.
There are two versions of the .so file: arm64 and armeabi.
When I tried to import the armeabi version, which is 32-bit, it gave me
wrong ELF class: ELFCLASS32
so I tried the arm64 version, and somehow I got
cannot open shared object file: No such file or directory
I can assure you the path is not a typo; I copied the file using that same path. But I cannot import it because of the "no such file" error.
code:
import ctypes

def main():
    TestLib = ctypes.CDLL('/home/manalkaff/Desktop/arm64-v8a/nativelibrary.so')

if __name__ == '__main__':
    main()
Is this how I am supposed to do it? Or is there another way?
You can try to decompile and port your shared object to x86. To do this, load your binary in Ghidra and extract all the functions except utility ones (JNI initialization and so on, which the compiler inserts automatically when required). Then rebuild using the compiler and IDE of your choice, e.g. CLion + Clang. Don't forget to fix any errors and switch to the Windows API if the Android API was used. This will take some time and effort, depending on the number of functions and the size of the binary (support code excluded, once again).
You can't load and execute ARM code on an x86 CPU. You need a virtual machine that emulates an ARM CPU for that.
Even after loading the .so file on Linux ARM, you might still be missing some Android dependencies. Run ldd copied.so to see which ones.
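As a sanity check before calling CDLL, you can read the ELF header to see which architecture a .so was built for; a 32-bit library raises the ELFCLASS32 error on a 64-bit interpreter, and an ARM library can never be loaded by an x86 process. A minimal sketch (the offsets and machine codes come from the ELF spec; the path is the one from the question):

import struct

def elf_arch(path):
    """Report the class and machine of an ELF file from its header."""
    with open(path, 'rb') as f:
        header = f.read(20)
    if header[:4] != b'\x7fELF':
        raise ValueError('%s is not an ELF file' % path)
    ei_class = {1: '32-bit', 2: '64-bit'}.get(header[4], 'unknown')
    # e_machine is a uint16 at offset 18 (ARM and x86 ELFs are little-endian)
    e_machine = struct.unpack_from('<H', header, 18)[0]
    names = {0x03: 'x86', 0x28: 'ARM', 0x3e: 'x86-64', 0xb7: 'AArch64'}
    return ei_class, names.get(e_machine, hex(e_machine))

print(elf_arch('/home/manalkaff/Desktop/arm64-v8a/nativelibrary.so'))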
I want to run an MXNet module on the GPU.
I have a system running Ubuntu 18.04 with CUDA 10.0 installed. Apparently this is not covered yet by the prebuilt MXNet binaries, so I focused on installing two CUDA versions on my PC (see also here).
Anyway, I now have two CUDA toolkits on my PC, in different folders. I need a way to direct my system to use CUDA 9.2 when running from PyCharm. The funny thing is that from a normal console I can run it just fine (at least the MXNet loading part, that is).
In the module I want to run, the program gets stuck at:
import mxnet as mx
which leads to base.py in MXNet:
def _load_lib():
    """Load library by searching possible path."""
    lib_path = libinfo.find_lib_path()
    lib = ctypes.CDLL(lib_path[0], ctypes.RTLD_LOCAL)  # <- this is where it throws the error
    # DMatrix functions
    lib.MXGetLastError.restype = ctypes.c_char_p
    return lib
The strange thing is that lib_path[0] just points to the location of libmxnet.so (which is correct, by the way), yet it suddenly throws an error:
OSError: libcudart.so.9.2: cannot open shared object file: No such file or directory
Even if I follow the error trace, the last command is this:
self._handle = _dlopen(self._name, mode)
with self._name being the same libmxnet.so location.
I have tried to make it work by setting the environment variable with
os.environ["LD_LIBRARY_PATH"] = "/usr/local/cuda-9.2/lib64"
as the second line of the module (the first, of course, being import os!), but this does not seem to work; apparently it is not taken into account.
So, how can I get around this?
Any solution would be acceptable, whether on the MXNet side or the PyCharm side.
Well, to make this available to anyone facing the same problem, I will post my solution.
I managed to make it work by defining the environment variable inside PyCharm, in the run configuration menu (the one available from Run -> Run... or Alt+Shift+F10):
LD_LIBRARY_PATH: /usr/local/cuda-9.2/lib64
I am not sure why PyCharm works fine in that case while defining the same variable inside the code does not (any explanation welcome).
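One likely explanation (an assumption on my part, not verified against the PyCharm sources): the dynamic loader reads LD_LIBRARY_PATH once, when the process starts, so assigning to os.environ afterwards has no effect on where dlopen searches. PyCharm's run configuration sets the variable before the interpreter launches, which is why that works. If editing the run configuration is not an option, a sketch of an alternative workaround is to preload the CUDA runtime by its full path before importing MXNet (the exact .so path is an assumption based on the error message in the question):

import ctypes
# preload libcudart by absolute path so the later dlopen of libmxnet.so can resolve it
ctypes.CDLL('/usr/local/cuda-9.2/lib64/libcudart.so.9.2', mode=ctypes.RTLD_GLOBAL)
import mxnet as mx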
Here is my code. Line 65 is:
ma = cv2.imread(str(files1[x]), 1)
The result of cv2.imread() is always None.
I have done all the basic checks:
The file I'm trying to read exists.
There are no other variables or functions called cv2 or imread.
I'm using the same version of Python in all cases.
I have only one version of OpenCV installed.
I'm using Ubuntu 14.04, and the folder has read and write permissions for all users.
I have also tested it in PyCharm with a new Python file, and there it works.
I have this problem only with this program.
Please let me know if you have any ideas about this problem.
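For anyone hitting the same symptom, a minimal diagnostic sketch (files1 and x are the names from the question; everything else is illustrative):

import os
import cv2

path = str(files1[x])
print(repr(path))            # exposes hidden whitespace or odd characters in the path
print(os.path.exists(path))  # confirms the exact path cv2 is given really exists
img = cv2.imread(path, cv2.IMREAD_COLOR)  # same as flag 1 in the question
if img is None:
    # imread returns None instead of raising, so fail loudly with the offending path
    raise IOError('cv2.imread could not read %r' % path)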
Not sure how I can get around this... on the Cloudera Manager site it says their software requires either Python 2.6 or Python 2.7.
However, when I try to start the cloudera-scm-agent, it complains:
/opt/versioned_apps/cm-4.8.2/lib64/cmf/agent/build/env/bin/python: error while loading shared libraries: libpython2.6.so.1.0: cannot open shared object file: No such file or directory
I'm running CentOS 7 (which is not supported out of the box).
To make matters worse, I don't know anything about Python either (sorry)... so if I need to install anything, please provide step-by-step instructions :-)
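One quick check (a sketch; it only tells you whether the loader can find the library at all, not how to install it) is to ask ctypes where libpython2.6 resolves to:

import ctypes.util
# prints a soname such as 'libpython2.6.so.1.0' if ldconfig knows the library,
# or None if it is missing from the loader's search path (the error above)
print(ctypes.util.find_library('python2.6'))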
I'm getting
IOError: [Errno 22] Invalid argument
when I try to write a large bytestring to disk with f.write(), where f was opened in mode wb.
I've seen lots of people online getting this error when using a Windows network drive, but I'm on OS X (10.7 when I originally asked the question, 10.8 now, with a standard HFS+ local filesystem). I'm using Python 3.2.2 (it happens with both a python.org binary and a Homebrew install). I don't see this problem with the system Python 2.7.2.
I also tried mode w+b, based on this Windows bug workaround, but of course that didn't help.
The data is coming from a large numpy array (almost 4 GB of floats). It works fine if I manually loop over the string and write it out in chunks. But because I can't write it all in one pass, np.save and np.savez fail, since they just use f.write(ary.tostring()). I get a similar error when I try to save into an existing HDF5 file with h5py.
Note that I get the same problem when reading a file opened with file(filename, 'rb'): f.read() gives this IOError, while f.read(chunk_size) with a reasonable chunk_size works.
Any thoughts?
This appears to be a general OS X bug with fread / fwrite, and so isn't really fixable by a Python user. See numpy #3858, this torch7 commit, this SO question/answer, ....
Supposedly it was fixed in Mavericks, but I'm still seeing the issue.
Python 2 may have worked around this, or its io module may have always buffered large reads/writes; I haven't investigated thoroughly.
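For reference, a minimal sketch of the chunked-write workaround the question mentions (the 1 GiB chunk size is an arbitrary safe choice; ary stands for the numpy array from the question):

def write_in_chunks(f, data, chunk_size=1 << 30):
    """Write a large bytes-like object without passing more than chunk_size bytes to one write()."""
    view = memoryview(data)  # avoids copying slices of the big buffer
    for start in range(0, len(view), chunk_size):
        f.write(view[start:start + chunk_size])

with open('big.bin', 'wb') as f:
    write_in_chunks(f, ary.tobytes())  # tostring() on older numpy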
Perhaps try not opening with the b flag; I didn't think that was supported on all OSes / filesystems.