How to create a new console in Python to print messages

I have a Python script running in the console, and I want to create another console for printing important messages, without running another Python script to do that.
I first tried to use win32console.AllocConsole() directly, but it failed with "Access is denied"
(seemingly because one process can attach to at most one console, according to the docs).
So I tried creating a new process using multiprocessing:
import sys, os
import win32api, win32con, win32console
import multiprocessing

def ShowConsole():
    win32console.FreeConsole()
    win32console.AllocConsole()
    sys.stdout = open("CONOUT$", "w")
    sys.stderr = open("CONOUT$", "w")
    print("Test")
    os.system("pause")

if __name__ == '__main__':
    p = multiprocessing.Process(target=ShowConsole)
    p.start()
But when I ran the code in PowerShell, it exited immediately with no message, and no new console was created.
None of the possible solutions I found on Stack Overflow work for me. What should I do?
Update: It turns out that multiprocessing.Process fails to call the ShowConsole function. I used multiprocessing.dummy.Process as an alternative and it works as expected.
The reason why multiprocessing.Process fails to call the target is still unclear.
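For reference, this is roughly what the workaround from the update looks like (a minimal sketch, not a definitive fix; note that multiprocessing.dummy runs the target in a thread of the same process, so FreeConsole/AllocConsole switch the whole script over to the new console rather than opening a second, independent one):

import sys, os
import win32console
from multiprocessing.dummy import Process  # thread-backed drop-in for multiprocessing.Process

def ShowConsole():
    win32console.FreeConsole()
    win32console.AllocConsole()
    sys.stdout = open("CONOUT$", "w")
    sys.stderr = open("CONOUT$", "w")
    print("Test")
    os.system("pause")

if __name__ == '__main__':
    p = Process(target=ShowConsole)
    p.start()
    p.join()  # keep the main thread around while the console is in use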

There's nothing wrong with your example above; it pops up the console as expected. I added a "hello" in the main section to differentiate.
But since you want to pass values from the first console to the second, here's a better example. Use put/get on a queue to pass the information from the first console to the second console.
import win32console
import multiprocessing
import time

def secondconsole(output):
    win32console.FreeConsole()
    win32console.AllocConsole()
    while True:
        print(output.get())

if __name__ == "__main__":
    output = multiprocessing.Queue()
    multiprocessing.Process(target=secondconsole, args=[output]).start()
    while True:
        print("Hello World")
        output.put("Hello to second console")  # here you will provide the data to the second console
        time.sleep(3)  # sleep for 3 seconds just for testing
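If you also want to be able to shut the second console down cleanly, one common pattern (not part of the original answer, just a sketch) is to push a sentinel value through the same queue and break on it:

def secondconsole(output):
    win32console.FreeConsole()
    win32console.AllocConsole()
    while True:
        message = output.get()
        if message is None:  # sentinel: the main process asked us to stop
            break
        print(message)

# in the main process, when you are done:
# output.put(None)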

It looks like the issue might be with the way you are trying to open the console using sys.stdout and sys.stderr. Try using the following code instead:
import sys, os
import win32api, win32con, win32console
import multiprocessing

def ShowConsole():
    win32console.FreeConsole()
    win32console.AllocConsole()
    os.dup2(win32console.GetStdHandle(win32console.STD_OUTPUT_HANDLE), sys.stdout.fileno())
    os.dup2(win32console.GetStdHandle(win32console.STD_ERROR_HANDLE), sys.stderr.fileno())
    print("Test")
    os.system("pause")

if __name__ == '__main__':
    p = multiprocessing.Process(target=ShowConsole)
    p.start()

Related

How to handle console exit and object destruction

Given this code:
from time import sleep

class TemporaryFileCreator(object):
    def __init__(self):
        print 'create temporary file'
        # create_temp_file('temp.txt')

    def watch(self):
        try:
            print 'watching temporary file'
            while True:
                # add_a_line_in_temp_file('temp.txt', 'new line')
                sleep(4)
        except (KeyboardInterrupt, SystemExit), e:
            print 'deleting the temporary file..'
            # delete_temporary_file('temp.txt')
            sleep(3)
            print str(e)

t = TemporaryFileCreator()
t.watch()
During t.watch(), I want to close this application from the console..
I tried using CTRL+C and it works.
However, if I click the console window's exit button, it doesn't work.. I checked many related questions about this, but it seems that I cannot find the right answer..
What I want to do:
The console can be exited while the program is still running.. To handle that, when the exit button is pressed, I want to do a cleanup of the objects (deleting the created temporary files), a rollback of temporary changes, etc..
Question:
How can I handle console exit?
How can I integrate it into object destructors (__exit__())?
Is it even possible? (What about py2exe?)
Note: the code will be compiled with py2exe.. (hoping that the effect is the same)
You may want to have a look at signals. When a *nix terminal is closed with a running process, this process receives a couple of signals. For instance, this code waits for the SIGHUP hangup signal and writes a final message. This code works under OS X and Linux. I know you are specifically asking about Windows, but you might want to give it a shot or investigate what signals a Windows command prompt emits during shutdown.
import signal
import sys
from time import sleep

def signal_handler(signal, frame):
    with open('./log.log', 'w') as f:
        f.write('event received!')

signal.signal(signal.SIGHUP, signal_handler)
print('Waiting for the final blow...')
# signal.pause()  # does not work under Windows
sleep(10)  # so let us just wait here
Quote from the documentation:
On Windows, signal() can only be called with SIGABRT, SIGFPE, SIGILL, SIGINT, SIGSEGV, or SIGTERM. A ValueError will be raised in any other case.
Update:
Actually, the closest thing in Windows is win32api.SetConsoleCtrlHandler (doc). This was already discussed here:
When using win32api.SetConsoleCtrlHandler(), I'm able to receive shutdown/logoff/etc. events from Windows, and cleanly shut down my app.
And if Daniel's code still works, this might be a nice way to use both (signals and CtrlHandler) for cross-platform purposes:
import os, sys

def set_exit_handler(func):
    if os.name == "nt":
        try:
            import win32api
            win32api.SetConsoleCtrlHandler(func, True)
        except ImportError:
            version = ".".join(map(str, sys.version_info[:2]))
            raise Exception("pywin32 not installed for Python " + version)
    else:
        import signal
        signal.signal(signal.SIGTERM, func)

if __name__ == "__main__":
    def on_exit(sig, func=None):
        print "exit handler triggered"
        import time
        time.sleep(5)

    set_exit_handler(on_exit)
    print "Press Enter to quit"
    raw_input()
    print "quit!"
If you use tempfile to create your temporary file, it will be automatically deleted when the Python process is killed.
Try it with:
>>> import tempfile
>>> foo = tempfile.NamedTemporaryFile()
>>> foo.name
'c:\\users\\blah\\appdata\\local\\temp\\tmpxxxxxx'
Now check that the named file is there. You can write to and read from this file like any other.
Now kill the Python window and check that the file is gone (it should be).
You can simply call foo.close() to delete it manually in your code.
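Applied to the class from the question, a rough sketch (assuming the temporary file only needs to live as long as the object does) might look like this:

import tempfile
from time import sleep

class TemporaryFileCreator(object):
    def __init__(self):
        print 'create temporary file'
        # delete-on-close temporary file; removed automatically when the
        # handle is closed or the interpreter dies
        self.temp = tempfile.NamedTemporaryFile(suffix='.txt')

    def watch(self):
        while True:
            self.temp.write('new line\n')
            self.temp.flush()
            sleep(4)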

python daemon not calling other python script

I am using python-daemon to check a particular table in MongoDB; if there is any value, it should call another Python script.
Below is the code I am trying, but it doesn't call the script.
Can somebody help me out?
import daemon
import time
import os
from pymongo import MongoClient

connection = MongoClient(IPADDRESS, PORT)
monitor_db = connection.testmongo.XYZ_monitoring

def interval_monitoring():
    while True:
        searchForm = monitor_db.find()
        for user in searchForm:
            user_id = user['user_id']
            for ids in user_id:
                path = "python XYZ.py " + ids
                os.system(path)
        time.sleep(60)

def run():
    print daemon.__file__
    with daemon.DaemonContext():
        interval_monitoring()

if __name__ == "__main__":
    run()
Yes, I got it. I am posting it as it may help someone.
Instead of using
os.system(path)
use:
subprocess.call(['python', '/Path_from_root_directory/XYZ.py', ids])  # ids is my argument to be passed
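For illustration, here is the loop from the question with that change; sys.executable is used as an assumption (not part of the original answer) so the child script runs under the same interpreter as the daemon:

import sys
import subprocess

def interval_monitoring():
    # monitor_db and time come from the original script above
    while True:
        for user in monitor_db.find():
            for ids in user['user_id']:
                # run XYZ.py with the id as its argument, without going through a shell
                subprocess.call([sys.executable, '/Path_from_root_directory/XYZ.py', ids])
        time.sleep(60)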

calling a script from daemon

I am trying to call a script from python-daemon but it's not working. This is what I am trying to do; is it correct?
I also want to pass a random argument to that script; currently I have hard-coded it.
import daemon
import time
import subprocess
import os

def interval_monitoring():
    print "Inside interval monitoring"
    while True:
        print "its working"
        # os.system("XYZ.py 5416ce0eac3d94693cf7dbd8")  # Tried this too but not working
        subprocess.Popen("XYZ.py 5416ce0eac3d94693cf7dbd8", shell=False)
        time.sleep(60)
        print "condition true"

def run():
    print daemon.__file__
    with daemon.DaemonContext():
        interval_monitoring()

if __name__ == "__main__":
    run()
If you didn't make XYZ.py executable and add #!/usr/bin/env python as its first line, you need to call it via python rather than directly. So your line would be something like this:
subprocess.check_output(["python", "XYZ.py", "5416ce0eac3d94693cf7dbd8"])
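Also note that python-daemon's DaemonContext changes the working directory (to / by default), so a bare relative path like "XYZ.py" may no longer resolve once the process is daemonized. A sketch using an absolute path (the placeholder path follows the earlier answer):

import subprocess

# inside the daemonized loop; an absolute path still resolves after
# DaemonContext has changed the working directory
subprocess.check_output(["python", "/Path_from_root_directory/XYZ.py", "5416ce0eac3d94693cf7dbd8"])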

Urwid: how to see errors?

I am building an application with an interactive console interface (like the htop and atop utilities) using the urwid library. My trouble is that, since the interface takes up all the space in the console window, I cannot see Python's errors. I tried this:
import sys
f = open("test_err", "w")
original_stderr = sys.stderr
sys.stderr = f
print a #a is undefined
sys.stderr = original_stderr
f.close()
It works when I don't use urwid, but not when I use it...
You could try redirecting errors to a file. After each run of the program you will need to refresh the file; most editors let you easily do this by pressing F5.
def main():
    # your code here
    print someError  # raises an error

try:  # run main function
    main()
except BaseException as err:  # catch all errors
    with open('errors.txt', 'a') as errors:  # open a file to write the errors to
        errors.write(err.message + '\n')  # write the error
Change the 'a' to 'w' in the open() call if you only want to see one error in the file at a time (instead of accumulating multiple errors in one file over a long period).
If you want to see the error right when it happens, you can make the error catcher open a window that displays the error.
def main():
    # your code here
    print someErr

try:  # run main function
    main()
except BaseException as err:  # catch all errors
    import Tkinter as tk  # imports the UI module
    root = tk.Tk()  # creates the root of the window
    # creates the text and attaches it to the root
    window = tk.Label(root, text=err.message)
    window.pack()
    # runs the window
    root.mainloop()
If you want to build your own window to catch errors, you can learn about Tkinter here. (It is built into Python; you don't have to install anything.)
Here's what I came up with. I'm taking advantage of the rxvt-unicode (urxvt) feature of being passed a file descriptor (-pty-fd). Of course this means you need to be developing this in an X environment, and not on a bare console.
from __future__ import print_function
import os
from datetime import datetime

_debugfile = None

def _close_debug(fo):
    fo.close()

def DEBUG(*obj):
    """Open a terminal emulator and write messages to it for debugging."""
    global _debugfile
    if _debugfile is None:
        import atexit
        masterfd, slavefd = os.openpty()
        pid = os.fork()
        if pid:  # parent
            os.close(masterfd)
            _debugfile = os.fdopen(slavefd, "w+", 0)
            atexit.register(_close_debug, _debugfile)
        else:  # child
            os.close(slavefd)
            os.execlp("urxvt", "urxvt", "-pty-fd", str(masterfd))
    print(datetime.now(), ":", ", ".join(map(repr, obj)), file=_debugfile)
This will open a new terminal window automatically when you call DEBUG for the first time and close it at exit. Then any messages passed to it are shown in this new window. This is your "debug window". So your main app works normally, without cluttering it up with messages, but you can still see debug output in this new terminal.
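Usage is then just a call wherever you would otherwise print; for example (widget and size here are placeholder names, not part of the original answer):

# somewhere inside the urwid event-handling code, instead of print():
DEBUG("render called", widget, size)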

python on script unload event (destructor)

I am using file locking in a Python script (to ensure only a single instance of it runs):
http://code.google.com/p/pylockfile/
I release the lock in a finally block.
But if the script is closed abruptly, for example by closing the terminal running it, the finally block doesn't execute and the file stays locked.
How can I catch a script "destructor" event in such cases?
See this blog post regarding this subject. It uses the win32api when under Windows, while under Linux the SIGTERM signal is caught. To verify it's working, it might be helpful to write something to a file in the on_exit handler, as done below. As the snippet is quite brief, I'll just include it (full props to the blog author):
import os, sys

def set_exit_handler(func):
    if os.name == "nt":
        try:
            import win32api
            win32api.SetConsoleCtrlHandler(func, True)
        except ImportError:
            version = '.'.join(map(str, sys.version_info[:2]))
            raise Exception('pywin32 not installed for Python ' + version)
    else:
        import signal
        signal.signal(signal.SIGTERM, func)

if __name__ == '__main__':
    def on_exit(sig, func=None):
        f = open('log.txt', 'w')
        f.write('shutdown...')
        f.close()
        sys.exit()

    set_exit_handler(on_exit)
    print 'Press Enter to quit'
    raw_input()
    print 'quit!'
If you close the terminal running that program, it will create the file, verifying that the callback works.
