I am writing a script in XCHAT and from reading other scripts I notice use of return xchat.EAT_ALL in most of them. Here is what the documentation for the XCHAT Python API says:
Callback return constants (EAT_)
When a callback is supposed to return one of the EAT_ macros, it can control how xchat will proceed after the callback returns. These are the available constants and their meanings:
EAT_PLUGIN
Don't let any other plugin receive this event.
EAT_XCHAT
Don't let xchat treat this event as usual.
EAT_ALL
Eat the event completely.
EAT_NONE
Let everything happen as usual.
Returning None is the same as returning EAT_NONE.
I am wondering why this is done. I really don't understand what this is saying, and there isn't much documentation for the XCHAT Python API. I am curious as to when to use which one of these.
Just from what you've pasted in:
Certain events occur in XChat, which you can register a function to handle. It is possible for there to be more than one callback function registered for each event - either by plugins or by XChat itself.
So after your function has done whatever it wants to do, it needs to decide whether to allow other callbacks to be triggered as well. As a simple example, let's say you're writing a script which filters incoming messages that have certain words. It's triggered whenever a message is received, and goes something like:
if any(word in swearwords for word in message.split()):
    return xchat.EAT_ALL   # the 'message received' event stops here
else:
    return xchat.EAT_NONE  # let it be handled normally
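A fuller sketch of such a filter, with the hook registration included. The `xchat` constants are stubbed with made-up values here so the logic reads standalone; inside XChat you would `import xchat` and return its real `EAT_*` constants instead:

```python
# Stand-ins for the real xchat constants (values here are made up --
# inside XChat, use xchat.EAT_NONE / xchat.EAT_ALL).
EAT_NONE, EAT_ALL = 0, 3

SWEARWORDS = {"badword", "worseword"}  # hypothetical filter list

def filter_cb(word, word_eol, userdata):
    # For a "Channel Message" print event, word[0] is the nick and
    # word[1] is the message text.
    message = word[1]
    if any(w in SWEARWORDS for w in message.lower().split()):
        return EAT_ALL   # swallow the event entirely
    return EAT_NONE      # let other plugins and xchat handle it as usual

# Inside XChat you would register it with something like:
# xchat.hook_print("Channel Message", filter_cb)
```

Returning `EAT_PLUGIN` or `EAT_XCHAT` instead would suppress only the other plugins or only xchat's own handling, respectively.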
This is my function that checks for a broadcast:
def broadcast1():
    if 'isLiveBroadcast' in contents:
        return True
And this is my function that should send messages to a specific text channel about the broadcast:
@bot.event
async def broadcast2(broadcast1):
    if broadcast1 is True:
        channel = bot.get_channel(1067083439263727646)
        await channel.send('ЖАБА СТРИМИТ')
I have written two functions that should work together and send a message to a text channel when the broadcast is live, but they don't work.
No errors; it just doesn't do anything.
There are quite a few fundamental issues with your code and the program logic.
Firstly, @bot.event is used for registering "listeners" to Discord events. There is no broadcast2 event to listen for, so the function will never be executed. You can look at the event reference here and see all the events that we can respond/react to.
Secondly, as the commenter above has pointed out, you've not actually invoked the function. Saying `if broadcast1 is True` doesn't work the way you think it does. By itself, `broadcast1` is a function object; it will never equal `True`. So even if we were listening for the right event, it would never do what you want. What we _should_ be doing is actually invoking the function: `broadcast1()`. That means the function's code actually gets run. There are lots of tutorials online about how functions in Python work; linked is just one example.
Thirdly, in the broadcast1 function, you're checking if "isLiveBroadcast" is in contents. What contents? Where is that defined? You might need to rethink that as well.
All in all, you should probably look at following a Python basics course and then some tutorials for putting together Discord bots. The docs are a good place to start - try following the quickstart material and then adapting as you go along.
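The call-vs-reference point can be seen in plain Python, with `contents` passed in explicitly since the original never defines it (the JSON-ish string below is purely illustrative):

```python
def broadcast1(contents):
    # hypothetical check: does the payload mention a live broadcast?
    return 'isLiveBroadcast' in contents

is_live = broadcast1('{"isLiveBroadcast": true}')  # invoked -> returns a bool
func_obj = broadcast1                              # just a reference, not a call

# `broadcast1 is True` compares the function object itself to True,
# which is never the case -- you must call it with ().
```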
I want to implement Erlang-like messaging, unless it already exists.
The idea is to create multiprocess application (I'm using Ray)
I can imagine how to do the send/recv :
import ray
from collections import deque

@ray.remote
class Module:
    def __init__(self):
        self.inbox = {}
    def recv(self, folder, msg):
        # deque has append(), not push()
        self.inbox.setdefault(folder, deque()).append(msg)
    def send(self, mod, folder, msg):
        mod.recv.remote(folder, msg)  # Ray actor calls go through .remote()
You call .send(), which remotely calls the target module's .recv() method.
My problem is that I don't know how to do the internal event loop that REACTS to messages.
It has to be lightweight too, because it runs in every process.
One idea is a while-loop with sleep, but that seems inefficient!
Probably, when a msg arrives it should trigger some registered FILTER-HOOK if the message matches? So maybe no event loop is needed, just routines triggered by FILTERS!
What I did for now is trigger a check routine every time I get a message, which goes through rules defined as (key: regex, method-to-call) filters.
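That push-driven approach can be sketched without Ray: each `recv()` both stores the message and runs it through the registered (regex, handler) filters, so no polling loop is needed. All names here are illustrative, not from any framework:

```python
import re
from collections import deque

class Module:
    def __init__(self):
        self.inbox = {}   # folder -> deque of messages
        self.hooks = []   # list of (compiled regex, handler) pairs

    def register(self, pattern, handler):
        self.hooks.append((re.compile(pattern), handler))

    def recv(self, folder, msg):
        self.inbox.setdefault(folder, deque()).append(msg)
        # push-driven dispatch: react immediately on arrival, no sleep loop
        for rx, handler in self.hooks:
            if rx.search(msg):
                handler(folder, msg)

mod = Module()
seen = []
mod.register(r"^ping\b", lambda folder, msg: seen.append((folder, msg)))
mod.recv("main", "ping hello")   # matches -> handler fires
mod.recv("main", "noise")        # no match -> only stored
```

Under Ray the same dispatch would simply live inside the actor's `recv`, so each actor reacts to its own mail without any busy-waiting.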
I am using Python C Api to embed a python in our application. Currently when users execute their scripts, we call PyRun_SimpleString(). Which runs fine.
I would like to extend this functionality to allow users to run scripts in "Debug" mode, where, like in a typical IDE, they would be allowed to set breakpoints and "watches", and generally step through their script.
I've looked at the API specs, googled for similar functionality, but did not find anything that would help much.
I did play with PyEval_SetTrace(), which returns all the information I need; however, we execute the Python on the same thread as our main application, and I have not found a way to "pause" Python execution when the trace callback hits a line number that contains a user-checked breakpoint - and to resume execution at a later point.
I also see that there are various "Frame" functions like PyEval_EvalFrame() but not a whole lot of places that demo the proper usage. Perhaps these are the functions that I should be using?
Any help would be much appreciated!
PyEval_SetTrace() is exactly the API that you need to use. Not sure why you need some additional way to "pause" the execution; when your callback has been called, the execution is already paused and will not resume until you return from the callback.
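The pause-while-in-callback behavior is easiest to see from `sys.settrace`, the Python-level counterpart of `PyEval_SetTrace`: the traced code cannot advance until the trace function returns, so a debugger simply blocks inside the callback at a breakpoint line. A minimal sketch (the tracer here just records line numbers instead of blocking):

```python
import sys

hits = []  # line numbers at which a debugger could choose to block

def tracer(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "target":
        # The traced code is suspended for exactly as long as this body
        # runs -- a real debugger would block here (e.g. waiting on a UI
        # event) until the user presses "step" or "continue".
        hits.append(frame.f_lineno)
    return tracer  # keep tracing inside this frame

def target():
    a = 1
    b = 2
    return a + b

sys.settrace(tracer)
result = target()
sys.settrace(None)
```

In the C API the shape is the same: your `Py_tracefunc` is entered on each `PyTrace_LINE` event, and execution of the script resumes only when you return from it.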
I am having difficulty figuring out how to receive events using pywin32. I have created code to do some OPC processing. According to the generated binding in the gen_py folder I should be able to register event handlers and it gives me the prototypes I should use... for example:
# Event Handlers
# If you create handlers, they should have the following prototypes:
# def OnAsyncWriteComplete(.......)
So I have written code that implements the handlers that I am interested in but have not the slightest idea how to get them attached to my client and can not seem to find examples that are making sense to me. Below I create my client and then add an object that should have events associated with it...
self.server = win32com.client.gencache.EnsureDispatch(driver)
# I can now call the methods on the server object like so....
new_group = self.server.OPCGroups.Add(group)
I want to attach my handler to the new_group object (perhaps to the self.server?) but I can not seem to understand how I should be doing that.
So my questions are:
How can I attach my handler code for the events? Any examples around I should look at?
Will the handler code have access to attributes stored on the client "self" in this case?
Any help would be greatly appreciated.
After quite a bit of digging, I was able to find a way to do this: I found that I could attach my event handler class to the group.
self.server = win32com.client.gencache.EnsureDispatch(driver)
# I can now call the methods on the server object like so....
new_group = self.server.OPCGroups.Add(group)
self._events[group] = win32com.client.WithEvents(new_group, GroupEvent)
Once I had that going it seemed to trigger the events, but the events would not run until the end of the script. In order to get it to process the events that were queued up, I call this, which seems to trigger the callbacks to execute:
pythoncom.PumpWaitingMessages()
Don't know if it will help anyone else but it seems to work for what I am doing.
Thanks for this, it was very helpful. To extend the question, I found I could simply register the driver:
import win32com.client

class MyEvents(object): pass

server = win32com.client.gencache.EnsureDispatch(driver)
win32com.client.WithEvents(server, MyEvents)
I discovered this by performing help(win32com.client.WithEvents)
I'm using the django_notification module. https://github.com/pinax/django-notification/blob/master/docs/usage.txt
This is what I do in my code to send an email to a user when something happens:
notification.send([to_user], "comment_received", noti_dict)
But, this seems to block the request. And it takes a long time to send it out. I read the docs and it says that it's possible to add it to a queue (asynchronous). How do I add it to an asynchronous queue?
I don't understand what the docs are trying to say. What is "emit_notices"? When do I call that? Do I have a script that calls that every 5 seconds? That's silly. What's the right way to do it asynchronously? What do I do?
Let's first break down what each does.
``send_now``
~~~~~~~~~~~~
This is a blocking call that will check each user for eligibility of the
notice and actually perform the send.
``queue``
~~~~~~~~~
This is a non-blocking call that will queue the call to ``send_now`` to
be executed at a later time. To later execute the call you need to use
the ``emit_notices`` management command.
``send``
~~~~~~~~
A proxy around ``send_now`` and ``queue``. It gets its behavior from a global
setting named ``NOTIFICATION_QUEUE_ALL``. By default it is ``False``. This
setting is meant to help control whether you want to queue any call to
``send``.
``send`` also accepts ``now`` and ``queue`` keyword arguments. By default
each is set to ``False`` so that the global setting is honored.
This enables you to override on a per call basis whether it should call
``send_now`` or ``queue``.
It looks like in your settings file you need to set
NOTIFICATION_QUEUE_ALL=True
And then you need to setup a cronjob (maybe every 10-30 seconds or something) to run something like,
django-admin.py emit_notices
This will periodically run and do the blocking call which sends out all the emails and whatever legwork the notification app needs. I'm sure if there is nothing to do it's not that intense of a workload.
And before you expand on your comment about this being silly you should think about it. It's not really silly at all. You don't want a blocking call to be tied to a web request otherwise the user will never get a response back from the server. Sending email is blocking in this sense.
Now, if you were just going to have the person receive this notification when they log in, then you probably wouldn't need to go this way, since no external call to sendmail (or whatever you're using to send email) would be involved. But in your case, sending emails, you should do it this way.
According to those docs, send is just wrapping send_now and queue. So if you want to send the notifications asynchronously instead of synchronously, you have two options:
Change your settings:
# This flag will make all messages default to async
NOTIFICATION_QUEUE_ALL = True
Use the ``queue`` keyword argument:
notification.send([to_user], "comment_received", noti_dict, queue=True)
If you queue the notifications you will have to run the emit_notices management command periodically. So you could put that in a cron job to run every couple of minutes.