I have successfully created a Python Windows service using pywin32. While testing my application I attempted to have it print (which didn't work as I expected), and I also had it write to a file. It was able to write to a file, but the file ended up in the Python site-packages folder. This appears to be where the working directory is, though I'm not sure why. I would like to know the best way to specify what the working directory should be.
I could open files with full path names, or I could maybe use os.chdir()? What is the best practice?
Here are the two files which compose my Windows service.
import os
import sys

import win32service
import win32serviceutil

from twisted.internet import reactor

import xpress


class XPressService(win32serviceutil.ServiceFramework):
    _svc_name_ = 'XPress'
    _svc_display_name_ = 'XPress Longer Name'

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        reactor.callFromThread(reactor.stop)

    def SvcDoRun(self):
        xpress.main()
        reactor.run(installSignalHandlers=False)


if __name__ == "__main__":
    win32serviceutil.HandleCommandLine(XPressService)
Below is "xpress.py" which is imported by the above script.
import datetime


def main():
    with open('times', 'a') as f:
        print str(datetime.datetime.now())
        f.write(str(datetime.datetime.now()))


if __name__ == '__main__':
    main()
Both approaches work; it depends on your needs. For various reasons, it's probably best to use absolute paths to the file names; that way you don't have to worry about 'where' your app is working, you just know where the output will be (which is most important). On *nix, apps generally work in '/' when they don't have a specified working directory. If you do choose to work in another directory, use os.chdir(newDir), and do this before you call win32serviceutil.HandleCommandLine.
I don't know the Windows default, but you probably nailed it with the library's directory in site-packages.
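For illustration, a minimal sketch of both options (the 'times' name comes from the question; anchoring paths on __file__ is an assumption about where you want the output to live):

import os

# Option 1: build an absolute path instead of relying on the working directory.
# Here the output lands next to this script; adjust the anchor to taste.
LOG_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'times')

def log_line(text):
    with open(LOG_PATH, 'a') as f:
        f.write(text + '\n')

# Option 2: switch the working directory once, before the service framework
# takes over, so relative names like 'times' resolve predictably:
# os.chdir(os.path.dirname(os.path.abspath(__file__)))
# win32serviceutil.HandleCommandLine(XPressService)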
I need to get the location of the home directory of the current logged-on user. Currently, I've been using the following on Linux:
os.getenv("HOME")
However, this does not work on Windows. What is the correct cross-platform way to do this?
You want to use os.path.expanduser.
This will ensure it works on all platforms:
from os.path import expanduser
home = expanduser("~")
If you're on Python 3.5+ you can use pathlib.Path.home():
from pathlib import Path
home = str(Path.home())
I found that the pathlib module also supports this.
>>> from pathlib import Path
>>> Path.home()
WindowsPath('C:/Users/XXX')
I know this is an old thread, but I recently needed this for a large-scale project (Python 3.8). It had to work on any mainstream OS, so I went with the solution @Max wrote in the comments.
Code:
import os
print(os.path.expanduser("~"))
Output Windows:
PS C:\Python> & C:/Python38/python.exe c:/Python/test.py
C:\Users\mXXXXX
Output Linux (Ubuntu):
rxxx@xx:/mnt/c/Python$ python3 test.py
/home/rxxx
I also tested it on Python 2.7.17 and that works too.
This can be done using pathlib, which is part of the standard library and treats paths as objects with methods instead of strings.
Path.expanduser()
Path.home()
from pathlib import Path
home: str = str(Path('~').expanduser())
This doesn't really qualify for the question (it being tagged as cross-platform), but perhaps this could be useful for someone.
How to get the home directory for the effective user (Linux specific).
Let's imagine that you are writing an installer script or some other solution that requires you to perform certain actions under certain local users. You would most likely accomplish this in your installer script by changing the effective user, but os.path.expanduser("~") will still return /root.
The argument needs to have the desired user name:
os.path.expanduser(f"~{USERNAME}/")
Note that the above works fine without changing the EUID, but if the scenario described previously applies, the example below shows how this could be used:
import os
import pwd
import grp


class Identity():
    def __init__(self, user: str, group: str = None):
        self.uid = pwd.getpwnam(user).pw_uid
        if not group:
            self.gid = pwd.getpwnam(user).pw_gid
        else:
            self.gid = grp.getgrnam(group).gr_gid

    def __enter__(self):
        self.original_uid = os.getuid()
        self.original_gid = os.getgid()
        # Drop the group first: once the effective UID is non-root,
        # changing the effective GID is no longer permitted.
        os.setegid(self.gid)
        os.seteuid(self.uid)

    def __exit__(self, type, value, traceback):
        # Restore in reverse order: regain the root UID first, then the GID.
        os.seteuid(self.original_uid)
        os.setegid(self.original_gid)


if __name__ == '__main__':
    with Identity("hedy", "lamarr"):
        homedir = os.path.expanduser(f"~{pwd.getpwuid(os.geteuid())[0]}/")
        with open(os.path.join(homedir, "install.log"), "w") as file:
            file.write("Your home directory contents have been altered")
I'm currently facing an issue with Eel and multiprocessing: multiple Eel windows open when an external Python script that uses multiprocessing executes.
Folder Structure
Project [folder]
|__ start.py
|__ web [folder]
    |__ partials [folder]
        |__ partial.py
From start.py (the script that starts my app) I import partial.py, as I use its functionality in part of the application, which for speed requires the use of multiprocessing.
Code snippet(s)
START.PY (posting a snippet as it's way too long)
import eel
# import ... other imports (NO multiprocessing here)
sys.path.append(os.getcwd() + "/web/partials/")
import partials

# ... several non-relevant @eel.expose'd functions are here ...

APP = {}  # ... dict with all the keys seen below ...

if __name__ == "__main__":
    try:
        # Start the application and pass all initial params below from the APP dict
        eel.start(APP["start_html"], host=APP["host"], size=APP["size"], mode=APP["mode"], port=APP["port"])
    except (SystemExit, MemoryError, KeyboardInterrupt):
        # Handle bottle errors; in all cases call a function to close all remaining windows
        close_handler()
...
PARTIAL.PY
import eel
import os
import re
import socket
import ntc_templates
from mac_vendor_lookup import MacLookup, BaseMacLookup
import netmiko
from netmiko import Netmiko, NetMikoAuthenticationException, NetMikoTimeoutException
import concurrent.futures


def device_details_builder_mp():
    # ... code that will run with multiprocessing for every entry in device_data below ...


@eel.expose
def device_details_search(site, user, password, device_data):
    # Init list that will hold all the final data
    final_device_details = []
    with concurrent.futures.ProcessPoolExecutor() as executor_detail:
        results_details = [executor_detail.submit(device_details_builder_mp, device, site, user, password) for device in device_data]
        for result_detail in concurrent.futures.as_completed(results_details):
            final_device_details.append(result_detail.result())
Issue
So I think my issue is the way I'm importing partial.py into start.py. If I run start.py and use the app, the feature that uses multiprocessing (which gets called from the JavaScript side) just works.
The issue happens when I pack the app and execute it from an .exe: when I use the feature that uses multiprocessing, it doesn't work and it opens multiple Eel windows. I was reading that multiprocessing re-imports everything again, but I couldn't figure that out exactly. I saw this as well: https://github.com/samuelhwilliams/Eel/issues/224
Hoping someone can assist me in how to better import partial.py into start.py so it doesn't cause the issue with multiprocessing.
Thanks
Desktop
OS: Windows 10 1809
Browser: Brave
Version: EEL 0.12.4
That time when a single line does it.
Added multiprocessing.freeze_support() under the if __name__ == "__main__": guard, and after packing, multiprocessing works fine.
I wasn't looking in the right place; it was not an Eel issue but a PyInstaller one:
https://github.com/pyinstaller/pyinstaller/wiki/Recipe-Multiprocessing#additional-code-for-pyinstaller--33-and-windows
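For reference, a minimal sketch of where the call goes in start.py (reusing the APP dict and close_handler from the snippets above):

import multiprocessing
import eel

if __name__ == "__main__":
    # freeze_support() must run first under the guard; it is a no-op
    # outside a frozen (PyInstaller) Windows executable.
    multiprocessing.freeze_support()
    try:
        eel.start(APP["start_html"], host=APP["host"], size=APP["size"], mode=APP["mode"], port=APP["port"])
    except (SystemExit, MemoryError, KeyboardInterrupt):
        close_handler()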
I have a program consisting of several modules specifying the respective web application handlers, and one specifying the respective router.
The library I use can be found here.
Excerpt from webapp.service (there are more such modules):
from webapp.router import ROUTER


@ROUTER.route('/service/[id:int]')
class ServicePermissions(AuthenticatedService):
    """Handles service permissions."""

    NODE = 'services'
    NAME = 'services manager'
    DESCRIPTION = 'Manages services permissions'
    PROMOTE = False
webapp.router:
ROUTER = Router()
When I import the webapp.router module, the webapp.service module obviously does not run. Hence, the @ROUTER.route('/service/[id:int]') decorator is not run, and my web application will fail with the message that the respective route is not available.
What is the best practice in that case to run the code in webapp.service to "run" the decorators? I do not really need to import the module itself or any of its members.
As stated in the comments for the question, you simply have to import the modules. As for linter complaints, those are the lesser of your problems. Linters are there to help; if they get in the way, just don't listen to them.
So, the simple way just to get your things working is, at the end of your __main__.py or __init__.py, depending on your app structure, to import explicitly all the modules that make use of the view decorator.
If you have a linter, check how to silence it on the import lines - that is usually accomplished with a special comment on the import line.
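For instance, with flake8/pyflakes-style linters that special comment is typically noqa (a sketch; webapp.service stands in for whichever of your modules registers views):

import webapp.service  # noqa: F401 - imported only for its route-registering side effects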
Python's introspection is fantastic, but it can't find instances of a class, or subclasses, if those are defined in modules that are not imported: such a module is just a text file sitting on the disk, like any data file.
What some frameworks offer as an approach is to have a "discovery" utility that will silently import all "py" files in the project folders. That way your views can "come into existence" without explicit imports.
You could use a function like:
import importlib.util
import os

def discover(caller_file):
    """Import every .py file below the caller's folder, so that
    module-level decorators run as a side effect."""
    caller_folder = os.path.dirname(caller_file)
    for current, folders, files in os.walk(caller_folder):
        if os.path.basename(current) == "__pycache__":
            continue
        for file in files:
            if file.endswith(".py"):
                # __import__ expects a module name, not a file path,
                # so load each file explicitly via importlib instead.
                spec = importlib.util.spec_from_file_location(file[:-3], os.path.join(current, file))
                module = importlib.util.module_from_spec(spec)
                spec.loader.exec_module(module)
And call it on your main module with discover(__file__)
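A usage sketch, at the end of the package's __main__.py (the discovery module name is a placeholder for wherever the helper above lives):

from discovery import discover  # placeholder: wherever the helper above is defined

discover(__file__)  # imports every .py below this folder, running the route decorators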
As we all know, we need to protect the main() when running code with multiprocessing in Python, using if __name__ == '__main__'.
I understand that this is necessary in some cases to give access to functions defined in the main, but I do not understand why it is necessary in this case:
file2.py
import numpy as np
from multiprocessing import Pool


class Something(object):
    def get_image(self):
        return np.random.rand(64, 64)

    def mp(self):
        image = self.get_image()
        p = Pool(2)
        res1 = p.apply_async(np.sum, (image,))
        res2 = p.apply_async(np.mean, (image,))
        print(res1.get())
        print(res2.get())
        p.close()
        p.join()
main.py
from file2 import Something
s = Something()
s.mp()
All of the functions and imports necessary for Something to work are part of file2.py. Why does the subprocess need to re-run main.py?
I think the __name__ solution is not very nice, as it prevents me from distributing the code of file2.py: I can't make sure users protect their main.
Isn't there a workaround for Windows?
How do packages solve this (as I never encountered any problem not protecting my main with any package - are they just not using multiprocessing)?
edit:
I know that this is because fork() is not implemented on Windows. I was just asking if there is a hack to let the interpreter start at file2.py instead of main.py, as I can be sure that file2.py is self-sufficient.
When using the "spawn" start method, new processes are Python interpreters that are started from scratch. It's not possible for the new Python interpreters in the subprocesses to figure out what modules need to be imported, so they import the main module again, which in turn will import everything else. This means it must be possible to import the main module without any side effects.
If you are on a different platform than Windows, you can use the "fork" start method instead, and you won't have this problem.
That said, what's wrong with using if __name__ == "__main__":? It has a lot of additional benefits, e.g. documentation tools will be able to process your main module, and unit testing is easier etc, so you should use it in any case.
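For illustration, a minimal sketch of selecting the start method explicitly (the work function is a placeholder; "fork" is unavailable on Windows):

import multiprocessing

def work(x):
    return x * x

if __name__ == "__main__":
    # "fork" exists on POSIX only; on Windows "spawn" is the default.
    multiprocessing.set_start_method("fork")
    with multiprocessing.Pool(2) as pool:
        print(pool.map(work, range(4)))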
As others have mentioned, the "spawn" start method on Windows will re-import the code for each new instance of the interpreter. If the main module isn't protected, this import will execute your code again in the child process (and that child will create its own child, and so on).
A workaround is to pull the multiprocessing script into a separate file and then use subprocess to launch it from the main script.
I pass variables into the script by pickling them in a temporary directory, and I pass the temporary directory into the subprocess with argparse.
I then pickle the results into the temporary directory, where the main script retrieves them.
Here is an example file_hasher() function that I wrote:
main_program.py
import os, pickle, shutil, subprocess, sys, tempfile


def file_hasher(filenames):
    try:
        subprocess_directory = tempfile.mkdtemp()
        input_arguments_file = os.path.join(subprocess_directory, 'input_arguments.dat')
        with open(input_arguments_file, 'wb') as func_inputs:
            pickle.dump(filenames, func_inputs)
        current_path = os.path.dirname(os.path.realpath(__file__))
        file_hasher = os.path.join(current_path, 'file_hasher.py')
        python_interpreter = sys.executable
        proc = subprocess.call([python_interpreter, file_hasher, subprocess_directory],
                               timeout=60,
                               )
        output_file = os.path.join(subprocess_directory, 'function_outputs.dat')
        with open(output_file, 'rb') as func_outputs:
            hashlist = pickle.load(func_outputs)
    finally:
        shutil.rmtree(subprocess_directory)
    return hashlist
file_hasher.py
#! /usr/bin/env python
import argparse, hashlib, os, pickle
from multiprocessing import Pool


def file_hasher(input_file):
    with open(input_file, 'rb') as f:
        data = f.read()
        md5_hash = hashlib.md5(data)
        hashval = md5_hash.hexdigest()
        return hashval


if __name__ == '__main__':
    argument_parser = argparse.ArgumentParser()
    argument_parser.add_argument('subprocess_directory', type=str)
    subprocess_directory = argument_parser.parse_args().subprocess_directory
    arguments_file = os.path.join(subprocess_directory, 'input_arguments.dat')
    with open(arguments_file, 'rb') as func_inputs:
        filenames = pickle.load(func_inputs)
    hashlist = []
    p = Pool()
    for r in p.imap(file_hasher, filenames):
        hashlist.append(r)
    output_file = os.path.join(subprocess_directory, 'function_outputs.dat')
    with open(output_file, 'wb') as func_outputs:
        pickle.dump(hashlist, func_outputs)
There must be a better way...
The main module is imported (but with __name__ != '__main__', because Windows is trying to simulate forking-like behavior on a system that doesn't have forking). multiprocessing has no way to know that you didn't do anything important in your main module, so the import is done "just in case" to create an environment similar to the one in your main process. If it didn't do this, all sorts of stuff that happens by side effect in main (e.g. imports, configuration calls with persistent side effects, etc.) might not be properly performed in the child processes.
As such, if they're not protecting their __main__, the code is not multiprocessing-safe (nor is it unittest-safe, import-safe, etc.). The if __name__ == '__main__': protective wrapper should be part of all correct main modules. Go ahead and distribute it, with a note about requiring multiprocessing-safe main module protection.
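For illustration, a minimal sketch of the question's main.py with the wrapper in place:

from file2 import Something

def main():
    s = Something()
    s.mp()

if __name__ == '__main__':
    # Only the launching process runs this; children spawned on Windows
    # merely import the module, so no extra pools are created.
    main()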
The if __name__ == '__main__' guard is needed on Windows since Windows doesn't have a "fork" option for processes.
On Linux, for example, you can fork the process, so the parent process will be copied and the copy will become the child process (and it will have access to the already imported code you had loaded in the parent process).
Since you can't fork on Windows, Python simply imports, in the child process, all the code that was imported by the parent process. This creates a similar effect, but if you don't do the __name__ trick, this import will execute your code again in the child process (and this will make it create its own child, and so on).
So even in your example, main.py will be imported again (since all the files are imported again). Python can't guess which specific Python script the child process should import.
FYI, there are other limitations you should be aware of, like using globals; you can read about it here: https://docs.python.org/2/library/multiprocessing.html#windows
When changing directory with the os module, the change is made globally. Is there a way to change directory locally?
EDIT: Yeah, sorry. Locally, in another thread. The normal chdir changes the working directory across all threads when you use the thread module.
It's not very difficult to write a decorator/context manager to do this using contextlib2.contextmanager.
import contextlib2
import os


@contextlib2.contextmanager
def restore_chdir(dir_):
    orig = os.getcwd()
    os.chdir(dir_)
    try:
        yield
    finally:
        os.chdir(orig)
You can use this now as:
with restore_chdir('/foo'):
    ...
or as a function decorator.
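And a sketch of the decorator form, assuming the definition above (contextmanager-built context managers also work as function decorators):

@restore_chdir('/foo')
def list_foo():
    # Body runs with the working directory switched to /foo, restored on return.
    return os.listdir('.')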
You can simply define a function with the os module that restores the directory afterwards.
import os


def func():
    original = os.getcwd()
    os.chdir("Your Directory Name")
    ## Do something here
    os.chdir(original)