Python: writing to another process's memory under Linux

How do I write to another process's address space using Python under Ubuntu Linux?
My attempts:
1) Using the virtual file /proc/$PID/mem and seeking to the address. I have successfully used it to read memory, but attempting to write causes an IOError:
fd = open("/proc/" + pid + "/mem", "r+")
fd.seek(address, 0)
fd.write("ABC")
Output:
IOError: [Errno 22] Invalid argument
2) Attempting to use the python-ptrace library as suggested in other threads. However, I cannot find good documentation or example code.
Note: this is not a permissions issue; running as root produces the same behaviour.

Found a solution here: http://tito.googlecode.com/svn-history/r2/trunk/draft/fakefs.py
It uses the ctypes package to load libc, then calls libc.ptrace with the PTRACE_POKEDATA request to write the bytes.
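For reference, a minimal sketch of that approach, assuming an x86-64 Linux target: the request constants below are taken from <sys/ptrace.h>, and poke_word, pid, and address are placeholder names rather than code from the linked script:
import ctypes
import os

PTRACE_POKEDATA = 5
PTRACE_ATTACH = 16
PTRACE_DETACH = 17

libc = ctypes.CDLL("libc.so.6", use_errno=True)
libc.ptrace.restype = ctypes.c_long
libc.ptrace.argtypes = (ctypes.c_long, ctypes.c_long,
                        ctypes.c_void_p, ctypes.c_void_p)

def poke_word(pid, address, word):
    # Attach and wait for the target to stop before touching its memory.
    if libc.ptrace(PTRACE_ATTACH, pid, None, None) == -1:
        raise OSError(ctypes.get_errno(), "PTRACE_ATTACH failed")
    os.waitpid(pid, 0)
    try:
        # PTRACE_POKEDATA writes one machine word at the given address.
        if libc.ptrace(PTRACE_POKEDATA, pid, address, word) == -1:
            raise OSError(ctypes.get_errno(), "PTRACE_POKEDATA failed")
    finally:
        libc.ptrace(PTRACE_DETACH, pid, None, None)
Note that PTRACE_POKEDATA transfers a single word per call, so writing a longer payload means looping over word-sized chunks.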

Related

Python can't locate .so shared library with ctypes.CDLL - Windows

I am trying to run a C function in Python. Following examples online, I compiled the C source file into a .so shared library and passed it to the ctypes CDLL() constructor.
import ctypes
cFile = ctypes.CDLL("libchess.so")
At this point Python crashes with the message:
Could not find module 'C:\Users\user\PycharmProjects\project\libchess.so' (or one of its dependencies). Try using the full path with constructor syntax.
libchess.so is in the same directory as this Python file, so I don't see why there would be an issue finding it.
I read that shared libraries might be hidden from later versions of Python, but the suggested solutions I tried did not work. Most solutions also referred to fixes involving Linux environment variables, but I'm on Windows.
Things I've tried that have not worked:
changing "libchess.so" to "./libchess.so" or the full path
using cdll.LoadLibrary() instead of CDLL() (apparently both do the same thing)
adding the parent directory to system PATH variable
putting os.add_dll_directory(os.getcwd()) in the code before trying to load the file
Any more suggestions are appreciated.
Solved:
Detailed explanation here: https://stackoverflow.com/a/64472088/16044321
The issue is specific to how Python performs DLL/SO searches on Windows. While the ctypes docs do not spell this out, CDLL() requires the optional argument winmode=0 to work correctly on Windows when loading a .dll or .so. The issue is also specific to Python 3.8 and later.
Thus, simply changing the 2nd line to cFile = ctypes.CDLL("libchess.so", winmode=0) works as expected.
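For completeness, a sketch of the working load, combining winmode=0 with an absolute path; the absolute path is an extra precaution in case the working directory differs from the script's directory, not something the linked answer requires:
import ctypes
import os

# Resolve the library relative to this script rather than the working directory.
lib_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "libchess.so")
cFile = ctypes.CDLL(lib_path, winmode=0)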

Dump Python sklearn model in Windows and read it in Linux

I am trying to save a sklearn model on a Windows server using sklearn.joblib.dump, and then joblib.load the same file on a Linux server (CentOS 7.1). I get the error below:
ValueError: non-string names in Numpy dtype unpickling
This is what I have tried:
Tried both Python 2.7 and Python 3.5
Tried the built-in open() with 'wb' and 'rb' arguments
I really don't care how the file is moved, I just need to be able to move and load it in a reasonable amount of time.
Python pickles should be portable between Windows and Linux. There may be incompatibilities if:
the Python versions on the two hosts are different (if so, try installing the same version of Python on both hosts); and/or
one machine is 32-bit and the other is 64-bit (I don't know of any fix for this problem so far)
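A minimal round-trip sketch, assuming the same Python and library versions on both hosts; the estimator and file name are placeholders, and the standalone joblib package is used (older sklearn versions expose the same dump/load API as sklearn.externals.joblib):
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
import joblib

# Fit a small placeholder model to have something to serialize.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

joblib.dump(model, "model.joblib")   # run on the Windows host
model = joblib.load("model.joblib")  # run on the Linux host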

Disk space on Windows share over SMB from Linux

I'm developing an addon for XBMC that checks for files to delete based upon disk space availability. I am having trouble checking disk space on my Windows 7 share from Linux-based operating systems, including Raspberry Pi, Ubuntu, etc. XBMC uses Python 2.6, by the way.
My current code is as follows:
import os
loc1 = "smb://geert/e/Downloads/"
loc2 = "/home/"
loc = loc1  # switch to loc2 to test the local path
stats = os.statvfs(loc)
print stats
In the code above, loc1 is the path that gives trouble. It appears that os.statvfs cannot handle smb:// paths, because I keep getting the error below. When I try loc2, everything works fine. I have searched high and low for a definitive answer to this question, but could not find one.
/usr/bin/python2.7 statvfstest.py
Traceback (most recent call last):
File "statvfstest.py", line 8, in <module>
print os.statvfs(loc)
OSError: [Errno 2] No such file or directory: 'smb://geert/e/Downloads/'
Steps I have tried:
I have tried the psutil module as I have seen recommended frequently in other questions on this site, but it appears to merely be a wrapper for the statvfs method and thus results in the same error message, only with a longer stack trace.
Mounted the e share and used the proper credentials. The share is visible from the file manager and shows the correct contents.
I have added credentials to loc1 in the form smb://user:pass@geert/e/Downloads/.
Any combination of the above three methods, all to no avail.
Could you please help me reliably test the amount of free space available on my Windows share, from a Linux PC, using smb network paths? If at all possible I would like to refrain from adding extra modules to my addon just for this feature, to ensure it stays lightweight.
Thanks in advance,
Geert
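
One approach, sketched under the assumption that the share is already mounted (as in step 2 above): os.statvfs only understands kernel-visible paths, so point it at the local mount point instead of the smb:// URL. /mnt/geert_e is a placeholder for wherever the share is mounted, and the print statement matches XBMC's Python 2:
import os

mount_point = "/mnt/geert_e"  # local mount point of the e share
st = os.statvfs(mount_point)
# Space available to unprivileged users, in bytes.
free_bytes = st.f_bavail * st.f_frsize
print free_bytes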

Show native import attempts in python3

I wrote a Python 3 extension module in C but cannot seem to get Python to import it.
Is there any way to let Python print out which shared libraries (.so on Linux) it tries to load and why it fails?
Sadly, none of the docs I have read really help, since none describes the native import procedure concisely.
What I tried is:
ctypes.CDLL("libmydep1.so")
ctypes.CDLL("libmydep2.so")
try:
import my_main
print("Load python")
except:
ctypes.CDLL("libmylib.so")
print("Load shared object")
Which always prints Load shared object.
libmylib.so contains the python entry point but loading it as Python 3 extension does not seem to work, although loading as a shared library does.
EDIT:
Python does not honor the Linux naming convention, so you do not name the lib libmylib.so but mylib.so.
Even worse, import my_main only works when the .so is named my_main.so. So annoying.
Try looking at the /proc/<pid>/maps file.
Or use the lsof -p <PID> command in a shell.
Source of answer is this forum; see the lsof man page and also this answer.
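A quick sketch of the /proc idea from inside Python itself: after the import attempt, read /proc/self/maps to list which shared objects the interpreter actually mapped. The .so suffix filter is a rough heuristic and misses versioned names such as libc.so.6:
# Collect the unique pathnames of mapped files ending in ".so".
with open("/proc/self/maps") as maps:
    loaded = sorted({line.split()[-1] for line in maps
                     if line.rstrip().endswith(".so")})
for path in loaded:
    print(path)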

IOError: [Errno 22] Invalid argument when reading/writing large bytestring

I'm getting
IOError: [Errno 22] Invalid argument
when I try to write a large bytestring to disk with f.write(), where f was opened with mode wb.
I've seen lots of people online getting this error when using a Windows network drive, but I'm on OSX (10.7 when I originally asked the question but 10.8 now, with a standard HFS+ local filesystem). I'm using Python 3.2.2 (happens on both a python.org binary and a homebrew install). I don't see this problem with the system Python 2.7.2.
I also tried mode w+b based on this Windows bug workaround, but of course that didn't help.
The data is coming from a large numpy array (almost 4GB of floats). It works fine if I manually loop over the string and write it out in chunks. But because I can't write it all in one pass, np.save and np.savez fail -- since they just use f.write(ary.tostring()). I get a similar error when I try to save it into an existing HDF5 file with h5py.
Note that I get the same problem when reading a file opened with file(filename, 'rb'): f.read() gives this IOError, while f.read(chunk_size) for reasonable chunk_size works.
Any thoughts?
This appears to be a general OSX bug with fread / fwrite and so isn't really fixable by a Python user. See numpy #3858, this torch7 commit, this SO question/answer, ....
Supposedly it's been fixed in Mavericks, but I'm still seeing the issue.
Python 2 may have worked around this or its io module may have always buffered large reads/writes; I haven't investigated thoroughly.
Perhaps try not opening with the b flag; I didn't think that was supported on all OSes / filesystems.
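
A sketch of the chunked-write workaround the question mentions, with an arbitrary 1 GiB chunk size; ary stands in for the real array and out.bin is a placeholder path:
import numpy as np

CHUNK = 1 << 30  # 1 GiB per call, small enough to avoid the failing huge write

def write_in_chunks(f, data):
    # Slice the bytestring and write each piece with a separate call.
    for start in range(0, len(data), CHUNK):
        f.write(data[start:start + CHUNK])

ary = np.zeros(10, dtype=np.float64)  # placeholder array
with open("out.bin", "wb") as f:
    write_in_chunks(f, ary.tostring())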
