I tried to find information on this, but only found the WM_QUERYENDSESSION message. How can I use it to intercept reboot / shutdown messages?
import win32gui, win32con
msg = win32gui.GetMessage(None, 0, 0)
if msg and msg.message == win32con.WM_QUERYENDSESSION:
    print('EXIT')
Here is an example of my code, but when I run it, it doesn't handle any actions and does not intercept shutdown messages.
According to WM_QUERYENDSESSION: The WM_QUERYENDSESSION message is sent when the user chooses to end the session or when an application calls one of the system shutdown functions. A window receives this message through its WindowProc function.
So this message is only received through a window procedure: you need to create a window (even a hidden one) and handle WM_QUERYENDSESSION in its WindowProc; polling with GetMessage alone will not see it.
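A minimal sketch of that approach with pywin32 (the window class name is made up, and the window is never shown; it exists only to receive messages):
import win32api
import win32con
import win32gui

def wnd_proc(hwnd, msg, wparam, lparam):
    if msg == win32con.WM_QUERYENDSESSION:
        print('EXIT')  # shutdown or reboot was requested
        return True    # returning True allows the session to end
    return win32gui.DefWindowProc(hwnd, msg, wparam, lparam)

wc = win32gui.WNDCLASS()
wc.hInstance = win32api.GetModuleHandle(None)
wc.lpszClassName = 'ShutdownListener'  # arbitrary class name
wc.lpfnWndProc = wnd_proc
class_atom = win32gui.RegisterClass(wc)
hwnd = win32gui.CreateWindow(class_atom, 'ShutdownListener', 0,
                             0, 0, 0, 0, 0, 0, wc.hInstance, None)
win32gui.PumpMessages()  # dispatches messages to wnd_proc until WM_QUIT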
Hi all, I have the following code, but for some reason I keep getting the error below, even though it seems to work on a colleague's PC. We can't figure out why it won't work on mine.
We have also double-checked that we're importing the same socketio using dir().
I've tried specifying the namespace both on sio.connect and in sio.emit, but still no luck!
socketio.exceptions.BadNamespaceError: / is not a connected namespace.
bearerToken = 'REDACT'
core = 'REDACT'
output = 'REDACT'
import socketio
import json

def getListeners(token, coreUrl, outputId):
    sio = socketio.Client(reconnection_attempts=5, request_timeout=5)
    sio.connect(url=coreUrl, transports='websocket')

    @sio.on('mwedge:batch:stats')
    def batchStats(data):
        if (outputId in data['outputStats']):
            listeners = data['outputStats'][outputId][16]
            print("Number of listeners ", len(listeners))
            ips = []
            for listener in listeners:
                ips.append(listener[1])
            print("Ips", ips)

    def authCallback(data):
        print(json.dumps(data))

    sio.emit(event='auth',
             data={
                 'token': token
             },
             callback=authCallback)

getListeners(bearerToken, core, output)
The Socket.IO connection involves a number of exchanges between the client and the server. The connect() function initiates this process, which then continues in the background. The connection setup ends when the handler for your connect event is invoked; only at that point can you emit.
The problem with your code is that you are not waiting until the connection handshake is complete, so your emit() call happens before the connection is established. The solution is to add a connect event handler and move your emit() call there.
As an additional note, I suggest you set up your event handlers before you call the connect() function.
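A minimal sketch of that restructuring, using the names from the question (untested against your server):
import socketio
import json

def getListeners(token, coreUrl, outputId):
    sio = socketio.Client(reconnection_attempts=5, request_timeout=5)

    def authCallback(data):
        print(json.dumps(data))

    @sio.event
    def connect():
        # The handshake is complete here, so it is now safe to emit.
        sio.emit(event='auth', data={'token': token}, callback=authCallback)

    @sio.on('mwedge:batch:stats')
    def batchStats(data):
        if outputId in data['outputStats']:
            print("Number of listeners ", len(data['outputStats'][outputId][16]))

    # All handlers are registered above, before connecting.
    sio.connect(url=coreUrl, transports='websocket')
    sio.wait()  # keep the client running so events can arrive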
Google Cloud Logging prints out this message when my (Python) program exits:
Program shutting down, attempting to send 1 queued log entries to Stackdriver Logging...
Waiting up to 5 seconds.
Sent all pending logs.
I would like to suppress that message. Is there a config setting that prevents it from being printed when the program exits? Thank you.
Use SyncTransport instead of the default BackgroundThreadTransport. The message comes from the background transport flushing its queue of pending log entries at exit; with SyncTransport each entry is sent immediately, so there is nothing to flush and no message is printed.
import google.cloud.logging
from google.cloud.logging_v2.handlers import CloudLoggingHandler
from google.cloud.logging_v2.handlers.transports import SyncTransport
..........
client = google.cloud.logging.Client()
handler = CloudLoggingHandler(client, name="your_log_name", transport=SyncTransport)
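The handler then plugs into the standard logging module as usual (the logger name here is just an example):
import logging

logger = logging.getLogger("example")
logger.addHandler(handler)
logger.error("sent synchronously; nothing queued, so no message at exit")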
I created an Azure Service Bus Queue Trigger Python Function with Visual Studio Code and I would like to return the message to the Service Bus Queue if my code fails.
import logging
import requests
import azure.functions as func
from requests.exceptions import HTTPError

def main(msg: func.ServiceBusMessage):
    message = msg.get_body().decode("utf-8")
    url = "http://..."
    # My code
    try:
        requests.post(url=url, params=message)
    except Exception as error:
        logging.error(error)
        # RETURN MESSAGE TO QUEUE HERE
I found some info about methods called unlock() and abandon(), but I don't know how to implement them. Here are the links to the docs:
unlock: https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-python-how-to-use-queues#handle-application-crashes-and-unreadable-messages
abandon: https://learn.microsoft.com/en-us/python/api/azure-servicebus/azure.servicebus.common.message.deferredmessage?view=azure-python#abandon--
I also found that the message is automatically returned to the queue if the function fails, but in that case, should I write a raise... to throw an exception in the function?
Also, is there a way to return the message to the queue and schedule a retry for later?
A Service Bus trigger Python function runs in PeekLock mode, so you don't have to call the unlock() or abandon() methods yourself. Check the PeekLock behavior description:
The Functions runtime receives a message in PeekLock mode. It calls Complete on the message if the function finishes successfully, or calls Abandon if the function fails. If the function runs longer than the PeekLock timeout, the lock is automatically renewed as long as the function is running.
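So letting the exception propagate out of main is enough to get the message abandoned and redelivered. A sketch of the question's function adjusted accordingly:
import logging
import requests
import azure.functions as func

def main(msg: func.ServiceBusMessage):
    message = msg.get_body().decode("utf-8")
    url = "http://..."
    try:
        requests.post(url=url, params=message)
    except Exception as error:
        logging.error(error)
        # Re-raising makes the function fail, so the runtime abandons
        # the message and it returns to the queue for redelivery.
        raise
Redelivery repeats until the queue's max delivery count is reached; for retrying at a specific later time you would have to send a new scheduled message yourself.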
I've been working with the example-minimal.py script from https://github.com/toddmedema/echo and need to alter it so that rather than printing the status changes to the terminal, it executes another script.
I'm a rank amateur but eager to learn and even more eager to get this project done.
Thanks in advance for any help you can provide!!
""" fauxmo_minimal.py - Fabricate.IO
This is a demo python file showing what can be done with the debounce_handler.
The handler prints True when you say "Alexa, device on" and False when you say
"Alexa, device off".
If you have two or more Echos, it only handles the one that hears you more clearly.
You can have an Echo per room and not worry about your handlers triggering for
those other rooms.
The IP of the triggering Echo is also passed into the act() function, so you can
do different things based on which Echo triggered the handler.
"""
import fauxmo
import logging
import time
from debounce_handler import debounce_handler
logging.basicConfig(level=logging.DEBUG)
class device_handler(debounce_handler):
    """Publishes the on/off state requested,
    and the IP address of the Echo making the request.
    """
    TRIGGERS = {"device": 52000}

    def act(self, client_address, state, name):
        print "State", state, "on ", name, "from client #", client_address
        return True

if __name__ == "__main__":
    # Startup the fauxmo server
    fauxmo.DEBUG = True
    p = fauxmo.poller()
    u = fauxmo.upnp_broadcast_responder()
    u.init_socket()
    p.add(u)

    # Register the device callback as a fauxmo handler
    d = device_handler()
    for trig, port in d.TRIGGERS.items():
        fauxmo.fauxmo(trig, u, p, None, port, d)

    # Loop and poll for incoming Echo requests
    logging.debug("Entering fauxmo polling loop")
    while True:
        try:
            # Allow time for a ctrl-c to stop the process
            p.poll(100)
            time.sleep(0.1)
        except Exception, e:
            logging.critical("Critical exception: " + str(e))
            break
I'm going to try and be helpful by going through that script and explaining what each bit does. This should help you understand what it's doing, and therefore what you need to change to get it to run something else:
import fauxmo
This is a library that allows whatever device is running the script to pretend to be a Belkin WeMo, a device that can be triggered by the Echo.
import logging
import time
from debounce_handler import debounce_handler
This is importing some more libraries that the script will need. logging will be used for logging things, which is useful for debugging; time will be used to make the script pause so that you can quit it by typing ctrl-c; and the debounce_handler library will be used to keep multiple Echos from reacting to the same voice command (which would cause a software bounce).
logging.basicConfig(level=logging.DEBUG)
Configures a logger that will allow events to be logged to assist in debugging.
class device_handler(debounce_handler):
    """Publishes the on/off state requested,
    and the IP address of the Echo making the request.
    """
    TRIGGERS = {"device": 52000}

    def act(self, client_address, state, name):
        print "State", state, "on ", name, "from client #", client_address
        return True
We've created a class called device_handler which contains a dictionary called TRIGGERS and a function called act.
act takes a number of variables as input: self (giving access to data in the class, such as our TRIGGERS dictionary), client_address, state, and name. We don't know exactly what these are yet, but the names are fairly self-explanatory, so we can guess that client_address is probably the IP address of the Echo, state is the on/off state it requested, and name will be the device's name. This is the function you're going to want to edit, since it is the final function triggered by the Echo. You can probably just put whatever function you want after the print statement. The act function returns True when called.
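For example, since you want to execute another script rather than print, a minimal sketch (the script paths are made up; substitute whatever you want to run):
import subprocess

class device_handler(debounce_handler):
    """Runs an external script on each state change."""
    TRIGGERS = {"device": 52000}

    def act(self, client_address, state, name):
        # Pick a script based on the requested on/off state.
        script = "/home/pi/on.sh" if state else "/home/pi/off.sh"
        subprocess.call([script])
        return True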
if __name__ == "__main__":
This will execute everything indented below it if you're running the script directly.
# Startup the fauxmo server
fauxmo.DEBUG = True
p = fauxmo.poller()
u = fauxmo.upnp_broadcast_responder()
u.init_socket()
p.add(u)
As the comment suggests, this starts the fake WeMo server. We enable debugging, which just prints any debug messages to the command line, create a poller, p, which can process incoming messages, and create a UPnP broadcast responder, u, which can handle UPnP device registration. We then tell u to initialise a socket, setting itself up on the network listening for UPnP devices, and add u to p so that we can respond when a broadcast is received.
# Register the device callback as a fauxmo handler
d = device_handler()
for trig, port in d.TRIGGERS.items():
    fauxmo.fauxmo(trig, u, p, None, port, d)
As the comment says, this sets up an instance of the device_handler class that we made earlier. We then for-loop through the items in the TRIGGERS dictionary of our device handler d and call fauxmo.fauxmo using the information found there. Looking at the dictionary definition in the class, we can see there's only one entry, a "device" trigger on port 52000. This does the bulk of the work, making the actual fake WeMo device talk to the Echo. If we look at the fauxmo.fauxmo function, we see that when it receives a suitable trigger it calls the act function in the device_handler class we defined before.
# Loop and poll for incoming Echo requests
logging.debug("Entering fauxmo polling loop")
while True:
    try:
        # Allow time for a ctrl-c to stop the process
        p.poll(100)
        time.sleep(0.1)
    except Exception, e:
        logging.critical("Critical exception: " + str(e))
        break
And here we enter the fauxmo polling loop. It loops indefinitely, checking whether we've received a message: it polls to see if anything has arrived, waits for a bit, then polls again. If it can't do that for some reason, the script breaks out of the loop and the error is logged so you can see what went wrong.
Just to clarify: if the fauxmo loop is running, then the script is fine, right?
I think the OP is not getting any connection between the Echo and the fake WeMo device. It can help to install the WeMo skill first, though you may initially need an original WeMo device.
I know these are old threads, but this might still help someone.
What is the easiest way to create a delay (or parking) queue with Python, Pika and RabbitMQ? I have seen similar questions, but none for Python.
I find this a useful pattern when designing applications, as it allows us to throttle messages that need to be re-queued.
There is always the possibility that you will receive more messages than you can handle; maybe the HTTP server is slow, or the database is under too much stress.
I also found it very useful in scenarios with zero tolerance for losing messages: while re-queuing messages that could not be handled may solve that, it can also cause problems where the message is re-queued over and over again, potentially causing performance issues and log spam.
I found this extremely useful when developing my applications, as it gives you an alternative to simply re-queuing your messages. This can easily reduce the complexity of your code, and it is one of many powerful hidden features in RabbitMQ.
Steps
First we need to set up two basic channels, one for the main queue and one for the delay queue. In my example at the end, I include a couple of additional flags that are not required but make the code more reliable, such as confirm delivery, delivery_mode and durable. You can find more information on these in the RabbitMQ manual.
After we have set up the channels, we add a binding to the main channel that we can use to send messages from the delay channel to our main queue.
channel.queue_bind(exchange='amq.direct',
                   queue='hello')
Next we need to configure our delay channel to forward messages to the main queue once they have expired.
delay_channel.queue_declare(queue='hello_delay', durable=True, arguments={
    'x-message-ttl': 5000,
    'x-dead-letter-exchange': 'amq.direct',
    'x-dead-letter-routing-key': 'hello'
})
x-message-ttl (Message Time To Live)
This is normally used to automatically remove old messages in the queue after a specific duration, but by adding two optional arguments we can change this behaviour, and instead have this parameter determine in milliseconds how long messages will stay in the delay queue.
x-dead-letter-routing-key
This variable allows us to transfer the message to a different queue once it has expired, instead of the default behaviour of removing it completely.
x-dead-letter-exchange
This variable determines which exchange is used to transfer the message from the hello_delay queue to the hello queue.
Publishing to the delay queue
When we are done setting up all the basic Pika parameters, you simply send a message to the delay queue using basic_publish.
delay_channel.basic_publish(exchange='',
                            routing_key='hello_delay',
                            body="test",
                            properties=pika.BasicProperties(delivery_mode=2))
Once you have executed the script you should see the hello and hello_delay queues created in your RabbitMQ management UI.
Example.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
    'localhost'))

# Create normal 'Hello World' type channel.
channel = connection.channel()
channel.confirm_delivery()
channel.queue_declare(queue='hello', durable=True)

# We need to bind this channel to an exchange, that will be used to transfer
# messages from our delay queue.
channel.queue_bind(exchange='amq.direct',
                   queue='hello')

# Create our delay channel.
delay_channel = connection.channel()
delay_channel.confirm_delivery()

# This is where we declare the delay, and routing for our delay channel.
delay_channel.queue_declare(queue='hello_delay', durable=True, arguments={
    'x-message-ttl': 5000,  # Delay until the message is transferred, in milliseconds.
    'x-dead-letter-exchange': 'amq.direct',  # Exchange used to transfer the message from A to B.
    'x-dead-letter-routing-key': 'hello'  # Name of the queue we want the message transferred to.
})

delay_channel.basic_publish(exchange='',
                            routing_key='hello_delay',
                            body="test",
                            properties=pika.BasicProperties(delivery_mode=2))

print(" [x] Sent")
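For completeness, a consumer on the hello queue would then see the message roughly 5 seconds after it was published. A sketch assuming pika 1.x (whose basic_consume takes on_message_callback):
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello', durable=True)

def callback(ch, method, properties, body):
    # Arrives only after the 5000 ms TTL in hello_delay has elapsed.
    print(" [x] Received %r" % body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue='hello', on_message_callback=callback)
channel.start_consuming()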
You can use the official RabbitMQ plugin: x-delayed-message.
First, download the plugin's .ez file and copy it into Your_rabbitmq_root_path/plugins.
Second, enable the plugin (no need to restart the server):
rabbitmq-plugins enable rabbitmq_delayed_message_exchange
Finally, publish your message with an "x-delay" header (shown here in Java):
headers.put("x-delay", 5000);
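Since this thread is about Python and Pika, here is a sketch of the same idea with pika (exchange and queue names are made up, and the plugin must already be enabled):
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# The plugin adds the 'x-delayed-message' exchange type; 'x-delayed-type'
# sets how messages are routed once their delay has elapsed.
channel.exchange_declare(exchange='delayed_exchange',
                         exchange_type='x-delayed-message',
                         arguments={'x-delayed-type': 'direct'})
channel.queue_declare(queue='hello')
channel.queue_bind(exchange='delayed_exchange', queue='hello',
                   routing_key='hello')

channel.basic_publish(exchange='delayed_exchange',
                      routing_key='hello',
                      body='test',
                      properties=pika.BasicProperties(headers={'x-delay': 5000}))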
Notice:
It does not guarantee your message's safety: if your message expires while your rabbitmq-server is down, the message is unfortunately lost. So be careful when you use this scheme.
Enjoy, and find more info at rabbitmq-delayed-message-exchange.
FYI, how to do this in Spring 3.2.x.
<rabbit:queue name="delayQueue" durable="true" queue-arguments="delayQueueArguments"/>

<rabbit:queue-arguments id="delayQueueArguments">
    <entry key="x-message-ttl">
        <value type="java.lang.Long">10000</value>
    </entry>
    <entry key="x-dead-letter-exchange" value="finalDestinationTopic"/>
    <entry key="x-dead-letter-routing-key" value="finalDestinationQueue"/>
</rabbit:queue-arguments>

<rabbit:fanout-exchange name="finalDestinationTopic">
    <rabbit:bindings>
        <rabbit:binding queue="finalDestinationQueue"/>
    </rabbit:bindings>
</rabbit:fanout-exchange>
A NodeJS implementation. Everything is pretty clear from the code; hope it saves somebody some time.
var ch = channel;

ch.assertExchange("my_intermediate_exchange", 'fanout', {durable: false});
ch.assertExchange("my_final_delayed_exchange", 'fanout', {durable: false});

// setup intermediate queue which will never be listened to.
// all messages are TTLed, so when they are "dead", they go to another exchange
ch.assertQueue("my_intermediate_queue", {
    deadLetterExchange: "my_final_delayed_exchange",
    messageTtl: 5000, // 5sec
}, function (err, q) {
    ch.bindQueue(q.queue, "my_intermediate_exchange", '');
});

ch.assertQueue("my_final_delayed_queue", {}, function (err, q) {
    ch.bindQueue(q.queue, "my_final_delayed_exchange", '');

    ch.consume(q.queue, function (msg) {
        console.log("delayed - [x] %s", msg.content.toString());
    }, {noAck: true});
});
A message in a Rabbit queue can be delayed in 2 ways
- using a queue TTL
- using a per-message TTL
If all messages in the queue are to be delayed for a fixed time, use a queue TTL.
If each message has to be delayed by a varying time, use a per-message TTL.
I have explained it below using Python 3 and the pika module.
The pika BasicProperties argument 'expiration' (in milliseconds) has to be set to delay a message in the delay queue.
After setting the expiration time, publish the message to a delayed_queue (not the actual queue where consumers are waiting to consume); once the message in delayed_queue expires, it will be routed to the actual queue using the 'amq.direct' exchange.
def delay_publish(self, messages, queue, headers=None, expiration=0):
    """
    Connect to RabbitMQ and publish messages to the queue
    Args:
        queue (string): queue name
        messages (list or single item): messages to publish to rabbit queue
        expiration(int): TTL in milliseconds for message
    """
    delay_queue = "".join([queue, "_delay"])
    logging.info('Publishing To Queue: {queue}'.format(queue=delay_queue))
    logging.info('Connecting to RabbitMQ: {host}'.format(
        host=self.rabbit_host))
    credentials = pika.PlainCredentials(
        RABBIT_MQ_USER, RABBIT_MQ_PASS)
    parameters = pika.ConnectionParameters(
        self.rabbit_host, RABBIT_MQ_PORT,
        RABBIT_MQ_VHOST, credentials, heartbeat_interval=0)
    connection = pika.BlockingConnection(parameters)

    channel = connection.channel()
    channel.queue_declare(queue=queue, durable=True)
    channel.queue_bind(exchange='amq.direct',
                       queue=queue)

    delay_channel = connection.channel()
    delay_channel.queue_declare(queue=delay_queue, durable=True,
                                arguments={
                                    'x-dead-letter-exchange': 'amq.direct',
                                    'x-dead-letter-routing-key': queue
                                })

    properties = pika.BasicProperties(
        delivery_mode=2, headers=headers, expiration=str(expiration))

    if type(messages) not in (list, tuple):
        messages = [messages]

    try:
        for message in messages:
            try:
                json_data = json.dumps(message)
            except Exception as err:
                logging.error(
                    'Error Jsonify Payload: {err}, {payload}'.format(
                        err=err, payload=repr(message)), exc_info=True
                )
                if (type(message) is dict) and ('data' in message):
                    message['data'] = {}
                    message['error'] = 'Payload Invalid For JSON'
                    json_data = json.dumps(message)
                else:
                    raise
            try:
                delay_channel.basic_publish(
                    exchange='', routing_key=delay_queue,
                    body=json_data, properties=properties)
            except Exception as err:
                logging.error(
                    'Error Publishing Data: {err}, {payload}'.format(
                        err=err, payload=json_data), exc_info=True
                )
                raise
    except Exception:
        raise
    finally:
        logging.info(
            'Done Publishing. Closing Connection to {queue}'.format(
                queue=delay_queue
            )
        )
        connection.close()
Depending on your scenario and needs, I would recommend the following approaches:
Using the official plugin, https://www.rabbitmq.com/blog/2015/04/16/scheduling-messages-with-rabbitmq/; but it has a capacity issue if the total count of delayed messages exceeds a certain number (https://github.com/rabbitmq/rabbitmq-delayed-message-exchange/issues/72), it has no high-availability option, and messages whose delay elapses during an MQ restart are lost.
Implementing a set of cascading delay queues, just as NServiceBus did (https://docs.particular.net/transports/rabbitmq/delayed-delivery), as sketched below.
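A minimal sketch of the fixed-tier flavour of that second approach in pika (queue names and tier durations are made up; NServiceBus additionally chains the tiers so arbitrary delays can be composed):
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='work', durable=True)

# One delay tier per supported duration; each tier dead-letters straight
# into the work queue when its TTL expires.
for seconds in (1, 2, 4, 8):
    channel.queue_declare(
        queue='delay_{0}s'.format(seconds),
        durable=True,
        arguments={
            'x-message-ttl': seconds * 1000,
            'x-dead-letter-exchange': '',         # default exchange
            'x-dead-letter-routing-key': 'work',  # final destination
        })

# Publish into the tier that matches the delay you want.
channel.basic_publish(exchange='', routing_key='delay_4s', body='test')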