Sharing a path string between modules in Python

I am trying to make a GUI that displays a path to a file, and the user can change it at any time. My defaults are in my first script; the following is a simplified version without any of the GUI code. When the user pushes a button, it runs a different script (script2), which reads the information from the file.
script1:
import os
import sys
import subprocess
import multiprocessing as mp
import script2

specsfile = mp.Array('c', 1000, lock=True)
path_save = mp.Array('c', 1000, lock=True)
p = mp.Process(target=script2, args=(specsfile, path_save))
p.start()
specsfile = '//_an_excel_sheet_directory.xlsx'
path_save = '//path/to/my/directory/'
subprocess.call([sys.executable, 'script2.py'])
script2:
import multiprocessing as mp
import pandas as pd

from script1 import specsfile
from script1 import path_save

print(specsfile)
spec = pd.read_excel(specsfile)
When I run it, I get this error: PermissionError: [WinError 5] Access is denied
I'm not sure whether I'm initializing this wrong. I've never used multiprocessing, but I was reading about some recommendations for it when sharing data. Basically, I want to initialize a specsfile string and a path_save string, and when either changes, I want the change to be reflected and sent to script2.
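For reference, here is a minimal sketch of sharing a mutable string between processes with multiprocessing.Array; the worker function below is a hypothetical stand-in for script2, and the important part is writing into the shared buffer (via .value) instead of rebinding the name:
import multiprocessing as mp

def worker(specsfile, path_save):
    # .value returns the bytes currently stored in the shared buffer
    print(specsfile.value.decode())
    print(path_save.value.decode())

if __name__ == '__main__':
    specsfile = mp.Array('c', 1000, lock=True)
    path_save = mp.Array('c', 1000, lock=True)
    # Assign bytes to .value so the child process sees the change;
    # specsfile = '...' would only rebind the local name.
    specsfile.value = b'//_an_excel_sheet_directory.xlsx'
    path_save.value = b'//path/to/my/directory/'
    p = mp.Process(target=worker, args=(specsfile, path_save))
    p.start()
    p.join()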


Run external application on multiple processors

Currently I have a program that uses subprocess.Popen to open an executable file and pass an argument, equivalent to running ./path/to/file args on Linux.
This works very well, but I have to execute the file over 1000 times, and currently it is done one at a time on a single processor. I want to execute the file in sets of 8, for example, since I have an 8-core PC.
I have tried the following:
import os
import glob
from subprocess import Popen

cwd = os.getcwd()
bolsig = "/home/rdoyle/TEST_PROC/BOLSIG/bolsigminus"
infile_list = glob.glob(str(cwd)+"/BOLSIG Run Files/run*")
cmds_list = [[bolsig, infile] for infile in infile_list]
procs_list = [Popen(cmd) for cmd in cmds_list]
for proc in procs_list:
    proc.wait()
But this tries to execute all 1000 commands at the same time.
Anyone have any suggestions?
I like concurrent.futures for simple cases like this; it's so simple and yet so effective.
import os
import glob
from concurrent import futures
from subprocess import Popen

optim = "/usr/bin/jpegoptim"
img_path = os.path.join(os.path.abspath(os.path.curdir), 'images')
file_list = glob.glob(img_path + '/*jpg')

def compress(fname):
    # wait() blocks the worker thread until the subprocess exits,
    # so at most max_workers copies of jpegoptim run at once
    Popen([optim, fname, '-d', 'out/', '-f']).wait()

with futures.ThreadPoolExecutor(max_workers=8) as ex:
    ex.map(compress, file_list)
There's a great intro at Doug Hellmann's PyMOTW: https://pymotw.com/3/concurrent.futures/
You can also use the Python multiprocessing module and its multiprocessing.pool.Pool: https://docs.python.org/3/library/multiprocessing.html#module-multiprocessing
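A minimal sketch of that approach, reusing the paths from the question (the pool size of 8 matches the 8-core machine; treat this as an illustration, not a tested script):
import glob
import os
from multiprocessing import Pool
from subprocess import call

bolsig = "/home/rdoyle/TEST_PROC/BOLSIG/bolsigminus"

def run(infile):
    # call() blocks until this one invocation finishes
    return call([bolsig, infile])

if __name__ == '__main__':
    infile_list = glob.glob(os.path.join(os.getcwd(), "BOLSIG Run Files", "run*"))
    with Pool(processes=8) as pool:  # at most 8 commands run at once
        pool.map(run, infile_list)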

Python shutil module move exception handling

The destination path /tmp/abc is used by both Process 1 and Process 2.
Say there are N processes running; we need to retain the file generated by the latest one.
Process 1
import shutil
shutil.move(src_path, destination_path)
Process 2
import os
os.remove(destination_path)
Solution
1. Handle the case where the move fails with [Errno 2] No such file or directory.
Is this the correct solution? Is there a better way to handle this?
Useful link: A safe, atomic file-copy operation
You can catch the FileNotFoundError:
import shutil

try:
    shutil.move(src_path, destination_path)
except FileNotFoundError:
    print('File not found')
    # Add whatever logic you want to execute
except Exception:
    print('Some other error')
The primary solutions that come to mind are either:
1. Keep partial information in a per-process staging file and rely on the OS for atomic moves, or
2. Keep partial information in per-process memory and rely on interprocess communication / locks for atomic writes.
For the first, e.g.:
import tempfile
import os

FINAL = '/tmp/something'

def do_stuff():
    # Write to a per-process temporary file, then move it into place;
    # os.rename is atomic when source and destination are on the same filesystem
    fd, name = tempfile.mkstemp(suffix="-%s" % os.getpid())
    while keep_doing_stuff():
        os.write(fd, get_output())
    os.close(fd)
    os.rename(name, FINAL)

if __name__ == '__main__':
    do_stuff()
You can choose to invoke this individually from a shell or via some process wrapper (subprocess or multiprocessing would be fine); either way will work.
For the interprocess approach, you would probably want to spawn everything from a parent process:
from multiprocessing import Process, Lock
from io import StringIO  # cStringIO on Python 2

FINAL = '/tmp/something'

def do_stuff(lock):
    # Buffer everything in memory, then take the lock for one atomic write
    output = StringIO()
    while keep_doing_stuff():
        output.write(get_output())
    with lock:
        with open(FINAL, 'w') as f:
            f.write(output.getvalue())
    output.close()

if __name__ == '__main__':
    lock = Lock()
    for num in range(2):
        Process(target=do_stuff, args=(lock,)).start()

Run a Python script function on a remote machine

I have a script on a remote device, and I want to run a specific function from that Python script.
The remote device has the following script:
# connect.py
class ConnectDevice:
    def __init__(self, ip):
        connect.Device(ip)

    def get_devicestate(self):
        state = show.Device_State
        return state
Passwordless SSH is configured from the source machine to the remote machine.
The function get_devicestate returns up or down.
How do I get the get_devicestate output from the source machine? The source machine has the following script:
import os
import sys
import time
import getpass
import errno
import json
import subprocess
import threading
from subprocess import call
from subprocess import check_output

call(["ssh", "1.1.1.1", "python Connect.py"])  # Example of running the whole script remotely; I need help running the function get_devicestate and getting its value.
At first glance, it seems that connect.py has more code than you have pasted in your question. Anyway, assuming connect.py does not require any input parameters, simply use subprocess's check_output method to capture the stdout and store it in a variable for further use.
from subprocess import check_output

out = check_output(["ssh", "1.1.1.1", "python Connect.py"])
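If you want to run just get_devicestate rather than the whole file, one option is to have ssh execute a Python one-liner that imports the class and prints the result. A sketch, assuming connect.py is importable on the remote machine (the IP addresses are placeholders):
from subprocess import check_output

# Run the function remotely and print its result so it comes back on stdout
remote_cmd = ("python -c \"from connect import ConnectDevice; "
              "print(ConnectDevice('10.0.0.2').get_devicestate())\"")
out = check_output(["ssh", "1.1.1.1", remote_cmd])
state = out.decode().strip()  # 'up' or 'down'
print(state)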

Getting svchost path by PID using Python 2.7

I am trying to get the svchost.exe path by PID using Python. I tried psutil but got an access denied error.
Here is my code:
import psutil
p = psutil.Process(1832)
print p.exe()
You can get the process path using the wmi module.
Here is sample code: it finds all processes named svchost and prints the path of each one.
import wmi
import psutil

c = wmi.WMI()
process = psutil.Process(2276)
process_name = process.name()
for process in c.Win32_Process(name=process_name):
    if process.ExecutablePath:
        print(process.ExecutablePath)
Output
c:\windows\system32\svchost.exe
c:\windows\system32\svchost.exe
c:\windows\system32\svchost.exe
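If you only care about the one PID you started from, a variant sketch is to filter the WMI query by the PID itself (ProcessId is a standard Win32_Process property; the wmi module turns keyword arguments into a WHERE clause):
import wmi

c = wmi.WMI()
for process in c.Win32_Process(ProcessId=1832):
    print(process.ExecutablePath)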

Mounted-PATH-dependent import works through the CLI, but not through a script

I have a Python module called user_module, which resides in a mounted network location.
In a script I'm using, I need to import that module, but due to NFS issues, sometimes the path isn't available until we actually change directory to the relevant one and/or restart the autofs service.
To reproduce the issue and try to work around it, I manually stopped the autofs service and ran my script with my workaround (probably not the most elegant one):
import os
import sys
from subprocess import call

PATH = "/some/network/path"
sys.path.append(PATH)

try:
    os.chdir(PATH)
    import user_module
except:
    print "Unable to load user_module, trying to restart autofs service"
    call(['service', 'autofs', 'restart'])
    os.chdir(PATH)
    import user_module  # Throws ImportError!
But I still get an ImportError because the path is unavailable.
Now this is what I find weird: on the same machine, I tried executing the same operations as in my script, intentionally stopping the autofs service first, and it works perfectly:
[root@machine]: service autofs stop  # To reproduce the import error
[root@machine]: python
>>> import os
>>> import sys
>>> from subprocess import call
>>> PATH = "/some/network/path"
>>> sys.path.append(PATH)
>>> os.chdir(PATH)
# Raises an exception here: no such file or directory
>>> call(['service', 'autofs', 'restart'])
>>> os.chdir(PATH)  # No exception now after restarting the service
>>> import user_module  # No ImportError here
Can someone shed some light on this situation and explain why the same methodology works through the Python CLI but not through a script?
What is it that I don't know or am missing here?
Also, how can I overcome this?
Thanks
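For what it's worth, a sketch of a more defensive workaround is to retry the chdir/import in a loop after restarting autofs, rather than relying on a single try/except; the retry count and sleep interval below are arbitrary assumptions:
import os
import sys
import time
import importlib
from subprocess import call

PATH = "/some/network/path"
sys.path.append(PATH)

user_module = None
for attempt in range(3):
    try:
        os.chdir(PATH)
        user_module = importlib.import_module('user_module')
        break
    except (OSError, ImportError):
        call(['service', 'autofs', 'restart'])
        time.sleep(1)  # give autofs a moment to remount the path

if user_module is None:
    raise ImportError("user_module still unavailable after autofs restarts")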
