Printing PDFs using Python, win32api, and Acrobat Reader 9 - python

I have reports that I am sending to a system that requires the reports to be in a readable PDF format. I tried all of the free libraries and applications, and the only ones I found that worked were from Adobe's Acrobat family.
I wrote a quick script in Python that uses the win32api to print a PDF to my printer with the default registered application (Acrobat Reader 9), then kills the task upon completion, since Acrobat likes to leave the window open when called from the command line.
I compiled it into an executable and pass in the values through the command line
(for example printer.exe %OUTFILE% %PRINTER%); this is then called from a batch file:
import os,sys,win32api,win32print,time
# Command Line Arguments.
pdf = sys.argv[1]
tempprinter = sys.argv[2]
# Get Current Default Printer.
currentprinter = win32print.GetDefaultPrinter()
# Set Default printer to printer passed through command line.
win32print.SetDefaultPrinter(tempprinter)
# Print PDF using default application, AcroRd32.exe
win32api.ShellExecute(0, "print", pdf, None, ".", 0)
# Reset Default Printer to saved value
win32print.SetDefaultPrinter(currentprinter)
# Timer for application close
time.sleep(2)
# Kill application and exit script
os.system("taskkill /im AcroRd32.exe /f")
This seemed to work well for a large volume, ~2000 reports in a 3-4 hour period, but I have some that drop off and I'm not sure whether the script is getting overwhelmed or whether I should look into multithreading or something else.
The fact that it handles such a large volume otherwise without issue leads me to believe that the problem is not with the script itself, but I'm not sure whether it's an issue with the host system, Adobe Reader, or something else.
Any suggestions or opinions would be greatly appreciated.

Based on your feedback (win32api.ShellExecute() is probably not synchronous), your problem is the timeout: If your computer or the print queue is busy, the kill command can arrive too early.
If your script runs concurrently (i.e. you print all documents at once instead of one after the other), the kill command could even kill the wrong process (i.e. an acrobat process started by another invocation of the script).
So what you need is better synchronization. There are a couple of things you can try:
Convert this into a server script which starts Acrobat once, then sends many print commands to the same process and terminates afterwards.
Use a global lock to make sure that only a single instance of the script is ever running. I suggest creating a folder somewhere; creating a folder is an atomic operation on every file system, so if it already exists, another instance of the script is still active.
On top of that, you need to know when the job is finished. Use win32print.EnumJobs() for this.
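As a minimal sketch of both ideas (the folder lock plus polling the spooler queue with win32print.EnumJobs()); the lock path, poll interval, timeout and the wait_for_queue_empty helper are my own placeholders, not part of your script:

import os
import time
import win32print

LOCK_DIR = r"C:\temp\print_lock"   # hypothetical lock location

def wait_for_queue_empty(printer_name, poll=1.0, timeout=120):
    # Poll the Windows spooler until this printer's queue is empty
    # or the timeout expires.
    hprinter = win32print.OpenPrinter(printer_name)
    try:
        waited = 0
        while waited < timeout:
            if not win32print.EnumJobs(hprinter, 0, -1, 1):
                return True
            time.sleep(poll)
            waited += poll
        return False
    finally:
        win32print.ClosePrinter(hprinter)

try:
    os.mkdir(LOCK_DIR)              # atomic: fails if another instance holds the lock
except OSError:
    raise SystemExit("another print run is still active")
try:
    # ... ShellExecute the PDF as in the original script ...
    wait_for_queue_empty(tempprinter)
    os.system("taskkill /im AcroRd32.exe /f")
finally:
    os.rmdir(LOCK_DIR)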
If that fails, another solution could be to install a Linux server somewhere. You can run a Python server on this box which accepts print jobs that you send with the help of a small Python script on your client machine. The server can then print the PDFs for you in the background.
This approach allows you to add any kind of monitoring you like (sending a mail if something fails, or a status mail after all jobs have finished).

Related

Python & subprocess - Open terminal session as user and execute one/two commands

As much as I hate regurgitating questions, it's a necessary evil for getting to the next issue I'll present.
Using python3, tkinter and the subprocess package, my goal is to write a control panel to start and stop different terminal windows with a specific set of commands to run applications/sessions of the ROS application stack, including the core.
As such, the code would look like this per executable I wish to control:
import subprocess

class TestProc(object):
    def __init__(self):
        pass

    def start(self):
        self.process = subprocess.Popen(["gnome-terminal", "-c", "'cd /path/to/executable/script.sh; ./script.sh'"])
        print("Process started.")

    def stop(self):
        self.process.terminate()
        print("Process terminated.")
Currently, it is possible to start a terminal window and the assigned commands/processes, yet two issues persist:
gnome-terminal launches a terminal window, then relinquishes control to the processes inside; as such, I have no further control once it has started. A possible solution for this is to use xterm, yet that poses a slew of other issues: I am required to have variables from the user's .bashrc and/or exports available.
Certain "global commands", e.g. cd or roslaunch, would be unavailable to the terminal sessions, perhaps due to the order of execution (e.g. the commands are run before the bash profile is loaded), preventing any usable terminal at all.
Thus, the question rings: How would I be able to start and stop a new terminal window that would run up to two commands/processes in the user environment?
There are a couple of approaches you can take; the most flexible here is also the most complicated, so you'd want to consider whether you really need it.
If you only need to show the output of the script, you can simply pipe the output to a file or to a named pipe. You can then capture that output by reading/tailing the file. This is the simplest, as long as the script doesn't actually need any user interaction.
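A minimal sketch of that first option (the paths are placeholders) would redirect the script's output to a log file that the control panel can read or tail:

import subprocess

log = open("/tmp/script_output.log", "w")            # placeholder log path
proc = subprocess.Popen(["/path/to/executable/script.sh"],
                        stdout=log, stderr=subprocess.STDOUT)
# The control panel can now tail /tmp/script_output.log,
# and stop the script with proc.terminate() when needed.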
If you really only need to spawn a script that runs in the background, and you need to simulate user interaction but you don't actually need to accept real user input, you can use the expect approach (using the pexpect library).
If you need to actually allow the real user to interact with the program, then you have two approaches. The first is to embed the VTE widget into your application; this is the most seamless integration, as the terminal looks like part of your application, but it is also the heaviest.
Another approach is to start gnome-terminal as you've done here; this necessarily spawns a new window.
If you need to both script some interaction while also allowing some user input, you can do this by spawning your script in a tmux session. Use the tmux send-keys command to automate the non-interactive part, and then spawn a terminal emulator for the user to interact with via tmux attach. If you need to go back and forth between the automated part and the interactive part, you can combine this approach with expect.
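A rough sketch of the tmux route, assuming tmux is installed; the session name and script path are placeholders:

import subprocess

SESSION = "ros_panel"   # arbitrary session name

# Start the script inside a detached tmux session.
subprocess.run(["tmux", "new-session", "-d", "-s", SESSION,
                "/path/to/executable/script.sh"], check=True)

# Automate the scripted part, e.g. answer a prompt with "y".
subprocess.run(["tmux", "send-keys", "-t", SESSION, "y", "Enter"], check=True)

# Hand control to the user by attaching a terminal to the same session.
subprocess.Popen(["gnome-terminal", "--", "tmux", "attach", "-t", SESSION])

# Later, stopping everything is just:
# subprocess.run(["tmux", "kill-session", "-t", SESSION])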

Close console while process is running

I'm writing a small application that uses an "index file" to open folders in Explorer from just a few button presses. Anyway, I would like to update that index file in a "background process" every time the application shuts down. Updating the index file means scanning through our network, and for some remote users it could take a few minutes. That's why I would like to hide the console during the scanning process, in order to avoid the process being aborted by the user.
I tried several things similar to:
import multiprocessing
import subprocess

# these are just dummy lines
path = get_user_input()
subprocess.Popen(r'explorer "%s"' % path)
# Here I start my update process
multiprocessing.Process(target=update_index).start()
# end of script; now I want that process to continue until finished while the main console closes. I only seem to get one or the other.
I also tried using:
DETACHED_PROCESS = 0x00000008
CREATE_NO_WINDOW = 0x08000000
subprocess.Popen(command, shell=True, stdin=None, stdout=None,
                 stderr=None,
                 creationflags=DETACHED_PROCESS | CREATE_NO_WINDOW)
and managed to get a separate console window, but still no way of preventing the user from closing down the process.
Also keep in mind that I would like to distribute this script with something like py2exe later to make it accessible for those without Python, so I guess using pythonw.exe is out of the question, or is it?
That's not really the answer you're looking for, but you could redesign your system architecture: write your index updater as a server process that communicates with your actual application over sockets. Then you just have that server process run continuously (maybe even on another machine) and do all the time-consuming work.
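As a very rough sketch of that idea (the port and the one-word protocol are made up; update_index is the existing scan from your script):

import socket

def serve(host="127.0.0.1", port=5050):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        with conn:
            if conn.recv(1024).strip() == b"update":
                update_index()   # the existing long-running scan

# The main application only has to connect and send the command on shutdown:
# socket.create_connection(("127.0.0.1", 5050)).sendall(b"update")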
If you just want to perform background tasks that happen at certain intervals, then use cron. If you want to run a command in the background and keep it running even if you logout of the console, use nohup.

closing files of a killed process

python: 3.4
OS: win7 / win10
I want to kill a running process with python and close all the files it opened:
import io
import psutil

for proc in psutil.process_iter():
    if proc.name() == 'myprocess.exe':
        opened = proc.open_files()
        proc.kill()
        for i in opened:
            print(i.path)
            io.FileIO(i.path).close()
            print(io.FileIO(i.path).closed)
Somehow io.IOBase(i.path).close() does not work.
Explanation:
It's like I would like to kill Microsoft Word with python, but it leaves some files open. And I would like to close those files as well.
Microsoft Word is just an example. It is a self-written python programm. The opened files are:
fonts (.ttf)
clr.pyd
and .dll-s
How should I close these files?
You don't need to close any files that were opened by the process. That is done automatically:
Terminating a process has the following results:
Any remaining threads in the process are marked for termination.
Any resources allocated by the process are freed.
All kernel objects are closed.
The process code is removed from memory.
The process exit code is set.
The process object is signaled.
The important bit is "All kernel objects are closed." For every open file handle there is an associated kernel object; that's actually what a handle is, a mapping from a number to a kernel object. When the process exits, the kernel goes through and closes all associated file handles, sockets, and so on.
Additionally, your original approach has a few problems. First, the list of open files is only a snapshot of which ones were open at that time. Between asking for the list of open files and killing the process, the process could have opened many more, or closed and removed some as well. Second, the Python 3 docs say that the constructor for IOBase isn't public, so using it in this way is wrong:
class io.IOBase
The abstract base class for all I/O classes, acting on streams of bytes. There is no public constructor.
Generally, you'd use something like io.open(), which takes the path. This leads to the third issue: all you have to work with is the path, but in order to close a file you really need the handle. Those handles are process-specific. This means that in one process 0x5555AAAA may correspond to "file1.txt", while in another process it might correspond to "file2.txt", or maybe not to a file at all (it could be a socket or something else). So even if you had the kernel handle, you don't really have a way of saying "close this handle in the context of this other process"; that would violate some of the security goals of processes. It also means that what you're actually doing here is creating your own handle only to turn around and close it (or, in this case, possibly doing nothing at all, since the object wasn't created correctly).
So, if you're having a problem with files still being held, perhaps the issue is that the process hadn't actually died yet before you tried whatever work you needed to get done. You may need to wait for the process to exit before attempting to move on if there are files the process was using that you want to use again. It looks like you can use psutil.wait_procs() to do that.
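Something along these lines (a sketch; the timeout is arbitrary):

import psutil

procs = [p for p in psutil.process_iter() if p.name() == 'myprocess.exe']
for p in procs:
    p.kill()

# Block until the killed processes have really exited (or 10 seconds pass).
gone, alive = psutil.wait_procs(procs, timeout=10)
for p in alive:
    print("still running:", p.pid)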
Also, on Windows I find that anti-virus tools often get in the way. They hold open files accessed by applications making it look like a process is still holding onto them when it's actually the virus scanner doing its thing. I remember one instance of having to deal with this in Subversion. The code still exists today. So you might need to simply wait a bit and try again.
Update
Microsoft Word is just an example. It is a self-written python programm. The opened files are:
fonts (.ttf)
clr.pyd
and .dll-s
How should I close these files?
The answer is that you shouldn't need to. Just make sure the process has actually exited. Exiting is not an instantaneous operation, so there is some time between killing the process and it actually exiting during which it still retains the file handles.
Given that you've actually written the process being killed, I think a far better approach would be to introduce a way to launch that process, have it do its work, then exit gracefully. Then use subprocess.run() to run the script and wait for it to exit.
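Something like this, where myprocess_script.py is just a stand-in name for your own script:

import subprocess

# Run the worker and wait for it to exit cleanly instead of killing it later.
result = subprocess.run(["python", "myprocess_script.py"], timeout=600)
print("worker exited with code", result.returncode)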
It's like I would like to kill Microsoft Word with python, but it leaves some files open. And I would like to close those files as well.
There is some misunderstanding here. When you terminate Word with kill, all files are closed from a system point of view, but they will be closed dirty. When Word terminates normally, it flushes its internal buffers, removes any temporary files, and marks the files as clean. When it crashes or is abruptly terminated, none of that cleanup occurs: some modifications may not be written to disk, and the temp files are still there, so on the next execution Word will warn you that the files were not closed in an orderly way and have to be repaired.
So you do not want to kill Microsoft Word, but to close it, meaning posting a WM_QUIT message to its main window. Unfortunately, there is no clean and neat support in Python for that. There is an example of closing Excel with the win32com module here. The conversion for Word should be (beware, untested):
import win32com.client

wd = win32com.client.Dispatch("Word.Application")
wd.Quit()  # quit Word, as if the user hit the close button / clicked File -> Exit.
Take a look at the with statement syntax. There's a brief overview here

Sending commands from one xterm window to another with Python

So I have a Python app that starts different xterm windows, and in one window, after the operation is finished, it asks the user "Do you want to use these settings? y/n".
How can I send y to that xterm window, so that the user doesn't need to type anything?
Thanks
If you are on Linux (KDE) and you just want to control the xterms by sending commands between them, you could try using DCOP:
http://www.linuxjournal.com/content/start-and-control-konsole-dcop
http://www.riverbankcomputing.co.uk/static/Docs/PyKDE3/dcopext.html
Otherwise you would need to actually use an inter-process communication (IPC) method between the two scripts as opposed to controlling the terminals:
http://docs.python.org/library/xmlrpclib.html
http://docs.python.org/library/ipc.html
Some other IPC or RPC library
Simply listen on a basic socket and wait for ANYTHING. And then from the other app open a socket and write SOMETHING to signal.
Or at a very very basic level, you could have one script wait on file output from the other. So once your first xterm finishes, it could write a file that the other script sees.
These are all varying difficulties of solutions.
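For completeness, a rough sketch of the socket option above, assuming both scripts run on the same machine and port 6000 is free:

import socket

# Waiting side (the script that would otherwise ask "y/n"):
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 6000))
srv.listen(1)
conn, _ = srv.accept()           # blocks until the other script connects
answer = conn.recv(16) or b"y"   # anything received counts as a "yes"
conn.close()

# Signalling side (run from the other script):
# socket.create_connection(("127.0.0.1", 6000)).sendall(b"y")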

terminate script of another user

On a Linux box I've got a Python script that's always started by a predefined user. It may take a while for it to finish, so I want to allow other users to stop it from the web.
Using kill fails with Operation not permitted.
Can I somehow modify my long-running Python script so that it'll receive a signal from another user? Obviously, that other user is the one that runs the web server.
Maybe there's an entirely different way to approach this problem that I can't think of right now.
If you set up your Python script to run as a daemon (bottom of page under Unix Daemon) on your server (which sounds appropriate), and you give the apache user permission to execute the init.d script for the service, then you can control the service with PHP code similar to this (from here; the service script name in this case is 'otto2'):
<?
    $otto = "/usr/etc/init.d/otto2 ";

    if( $_GET["action"] ) {
        $ret = shell_exec( $otto.$_GET["action"] );
        // Check your ret value
    }
    else {
?>
        Start
        Stop
<?
    }
?>
The note on that is 'really basic untested code' :)
Off the top of my head, one solution would be threading the script and waiting for a kill signal in some form or another. Or, rather than threading, you could have a file that the script checks every N times through a loop; then you just write a kill signal to that file (which the web user of course has write permission for).
I'm not terribly familiar with kill, other than killing my own scripts, so there may be a better solution.
If you do not want to execute the kill command with the correct permissions, you can send any other signal to the other script. It is then the other script's responsibility to terminate. You cannot force it unless you have the permissions to do so.
This can happen with a network connection, or a 'kill' file whose existence is checked by the other script, or anything else the script is able to listen to.
You could use sudo to perform the kill command as root, but that is horrible practice.
How about having the long-running script check some condition every x seconds, for example the existence of a file like /tmp/stop-xyz.txt? If that file is found, the script terminates itself immediately.
(Or any other means of inter-process communication - it doesn't matter.)
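As a sketch of that stop-file idea; work_left and do_some_work are stand-ins for the script's real loop:

import os
import time

STOP_FILE = "/tmp/stop-xyz.txt"

while work_left():                 # the script's own loop condition
    do_some_work()                 # one unit of the long-running job
    if os.path.exists(STOP_FILE):  # any user with write access to /tmp can create this
        os.remove(STOP_FILE)       # clean up so the next run starts fresh
        break
    time.sleep(5)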
