In circuits 3.1.0, is there a way to set the channel for a handler at runtime?
A useful alternative would be to add a handler at runtime and specify its channel.
I've checked the Manager.addHandler implementation but couldn't make it work. I tried:
self._my_method.__func__.channel = _my_method_channel
self._my_method.__func__.names = ["event name"]
self.addHandler(self._my_method)
Yes, there is; however, it's not really a publicly exposed API.
Example (creating an event handler at runtime):
from circuits import Manager, handler

@handler("foo")
def on_foo(self):
    return "Hello World!"

def test_addHandler():
    m = Manager()
    m.start()

    m.addHandler(on_foo)
This is taken from tests.core.test_dynamic_handlers
NB: Every BaseComponent/Component subclass is also a subclass of Manager and has the .addHandler() and .removeHandler() methods. You can also apply the @handler() decorator dynamically like this:
def on_foo(...):
    ...

self.addHandler(handler("foo")(on_foo))
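If you keep a reference to the decorated function, you can detach it again later; as far as I can tell, removeHandler() takes the same handler object back (a quick sketch, not an official recipe):

fn = handler("foo")(on_foo)
self.addHandler(fn)
# ... later, when the handler is no longer needed ...
self.removeHandler(fn)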
You can also see a good example of this in the library itself with circuits.io.process where we dynamically create event handlers for stdin, stdout and stderr.
filter_ = (filters.me & ~filters.forwarded & ~filters.incoming & filters.via_bot & filters.command(".", ["ascii"]))

async def hello(client, message):
    await message.reply("HELLLO WORLD")

app.add_handler(hello, filter_)
app.start()
idle()
app.stop()
It just goes into a loop and nothing more.
It does not work; the client never replies.
What's wrong with it? Or am I doing something wrong?
You need to add a MessageHandler().
from pyrogram.handlers import MessageHandler
...
app.add_handler(MessageHandler(hello, filter_))
See Update Handler in the documentation for a reference.
While this is unrelated to your original question, I believe Decorators to be a better alternative, as they don't require an additional import or instantiation:
from pyrogram import Client
app = Client()
@app.on_message(filter_)
async def hello(client, message):
    await message.reply("hello")

app.run()  # app.run() takes care of app.start(), idle() and app.stop() for you
Edit to reply to the "answer" below:
For what you're testing you're using way too complicated filters.
filter_ = (
    filters.me                    # Messages that you sent
    & ~filters.forwarded          # Not messages that were forwarded
    & filters.incoming            # Messages this session received
    & ~filters.via_bot            # No "via @samplebot" (i.e. no inline bots)
    & filters.command(".", ["dict", "define", "meaning"])  # The crux of your issue.
)
The Command Filter takes three arguments: commands, prefixes, and case_sensitive. Since you're not using named arguments (arg=value), you need to keep them in that order.
Only the first argument is required, and it needs to be a single string or a list of strings (for multiple commands). If not specified, prefixes defaults to "/", so commands need to look like /this to trigger. Since you have the arguments in the other order, you're breaking the command filter.
You need to switch the arguments of your command filter around (see docs) or, better yet, start with the minimal example you were asked for when creating a question.
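For reference, the same filter with the command filter's arguments in the documented order would look something like this (just a sketch of the swap, nothing else changed):

filter_ = (
    filters.me
    & ~filters.forwarded
    & filters.incoming
    & ~filters.via_bot
    & filters.command(["dict", "define", "meaning"], prefixes=".")
)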
So, right now, I have a process that spawns multiple threads, each with its own instance data. I need to inject context-specific information into each of the logging statements that are called throughout the various methods inside the derived thread class (in this case, the context-specific info is the email of the person who triggered the thread to spawn).
Here is the filter I am currently using:
class InjectionFilter(logging.Filter):
    def __init__(self, runner):
        self.runner = runner

    def filter(self, record):
        record.email = self.runner.authorEmail
        return (record.threadName == self.runner.getName())
"Runner" in this case is a class that is a subclass of Thread, hence the ability to call getName().
Now, for the filter: every time a new thread is created, I instantiate a new filter and add it to the logging instance in the __init__ method of the runner class.
class ThreadRunner(Thread):
    def __init__(self, other_info):
        # ... other things set here ...
        ifilter = InjectionFilter(self)
        _log.addFilter(ifilter)
Where _log is my global logging instance for all of these threads.
And the filter adds perfectly fine!
I can call _log.filters and see each of the individual filters.
That is working totally fine.
What isn't working: when the logging statements run, only the first filter is actually being checked.
I added debug statements to the filter to see what was going on inside of it (eprint is just a helper that prints to sys.stderr):
def filter(self, record):
    record.email = self.runner.authorEmail
    eprint("Record threadname is %s" % record.threadName)
    eprint("Runner threadname is %s" % self.runner.getName())
    eprint("Runner equals Record: %s" % (record.threadName == self.runner.getName()))
    return (record.threadName == self.runner.getName())
When I start the manager and spawn multiple threads, I get the same filter check every single time, always the first filter that was created.
Sample log output
Record threadname is Thread-52
Runner threadname is Thread-52
Runner equals Record: True
...
Record threadname is Thread-53
Runner threadname is Thread-52
Runner equals Record: False
...
Record threadname is Thread-54
Runner threadname is Thread-52
Runner equals Record: False
It only ever compares it to Thread-52, which is the first filter that was created. But if I print out all of the filters applied to the logger
for fil in _log.filters:
    print(fil.runner.getName())
I get
Thread-52
Thread-53
Thread-54
So I KNOW that all of the filters are being applied to the logger, but they aren't all being compared for some reason. I get False for the filter compare statement on every single log statement after the first thread.
Does Python only check the first filter? Am I setting something up wrong? Am I missing something here?
I feel like this should be pretty straightforward, but Python's logging documentation doesn't make the most sense to me.
If you need more context, or if I'm unclear, please let me know. I want to get this done. Haha
Figured it out, in case anyone ever gets stuck with something like this in the future (or maybe you won't, because you're a better programmer than I am, haha).
With Python's logging, a record has to pass every filter attached to the logger; if a single filter fails, the entire log message is dropped. I thought it would check the filters and pass the record if at least one of them matched, but I realize now how that thought process was broken.
So, to get around this, I instantiated a new handler for each thread that spawned, and applied a new instance of the filter to that handler, and then I applied the handler to the _log instance.
Now, _log has many handlers and each one of those handlers has a single filter. The logging statement will check through each one of the handlers and only the one with the proper filter will be sent through. :)
It works!
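A minimal sketch of that layout, in case it helps someone (the StreamHandler is only illustrative; use whatever handler type you already have):

import logging
from threading import Thread

_log = logging.getLogger("runners")

class ThreadRunner(Thread):
    def __init__(self, other_info):
        super(ThreadRunner, self).__init__()
        # ... other things set here ...
        handler = logging.StreamHandler()          # one handler per thread
        handler.addFilter(InjectionFilter(self))   # only this thread's records pass
        _log.addHandler(handler)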
I want to change an urwid.Edit's text from within its "change" signal handler. However, it doesn't do anything. Minimal working example:
import urwid
input_line = urwid.Edit(multiline=True)
def input_change(widget, text):
    if text.endswith("\n"):
        input_line.set_edit_text('')
urwid.connect_signal(input_line, 'change', input_change)
urwid.MainLoop(urwid.Filler(input_line)).run()
If you press enter, it will actually call .set_edit_text(), but the text remains the same. How do I achieve what I want?
As you can see in the source, the set_edit_text method emits your "change" event, and then immediately afterward, it sets the _edit_text to the actual value.*
You can also verify this by, e.g., logging input_line.edit_text immediately after your set_edit_text to see that you did successfully change it.
What you need to do here is subclass the Edit widget, and override set_edit_text,** not handle the "change" signal. Then it's easy.
For example:
class MyEdit(urwid.Edit):
    def set_edit_text(self, text):
        if text.endswith('\n'):
            super().set_edit_text('')
        else:
            super().set_edit_text(text)
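Wiring it up is then just a matter of using the subclass in place of urwid.Edit (a small sketch, reusing the names from the question):

input_line = MyEdit(multiline=True)
urwid.MainLoop(urwid.Filler(input_line)).run()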
As mentioned above, there's a very good reason for a GUI framework to have events that fire before the change is applied: that gives your event handler a way to see both the current value and the new value.
Of course there's also a very good reason for a GUI framework to have events that fire after the change is applied.
Some frameworks provide both. For example, in Cocoa, you usually get a fooWillChange: message before the change, and a fooDidChange: message afterward. Also, in some frameworks, the "before" event gives you a way to influence how the event gets handled (replace one of its values, swallow the event so it doesn't get passed up the chain, etc.). And then there's Tkinter, which provides some way to do all of these different things, but they're all completely different from each other, and different from widget to widget…
Is it a bug for a framework not to have all of the possible options? Well, there's a downside to a framework being too big and too general: it's harder to develop and maintain, and, worse, harder to learn. I think urwid made a reasonable choice here, especially since it's written in relatively simple pure Python with a class hierarchy that makes it easy to override any behavior you don't like.
However, you could maybe call it a documentation bug that urwid doesn't tell you which kind of signal logic it uses (immutable "before" events), and offers very little guidance on what to override to customize behavior.
* It's also worth noting that your change handler is getting called in the middle of set_edit_text. In urwid, calling set_edit_text from this handler isn't a problem, but in many other UI libraries it could lead to infinite recursion or bizarre behavior.
** You could of course monkeypatch Edit instead of subclassing, but unless you have a particular reason to do that, I wouldn't.
Here is another way to do it by overriding "keypress" and defining your own "done" signal that is emitted when you press enter:
class CustomEdit(urwid.Edit):
    __metaclass__ = urwid.signals.MetaSignals
    signals = ['done']

    def keypress(self, size, key):
        if key == 'enter':
            # Drop the 3rd argument if you don't need a reference to the CustomEdit instance.
            urwid.emit_signal(self, 'done', self, self.get_edit_text())
            super(CustomEdit, self).set_edit_text('')
            return
        elif key == 'esc':
            super(CustomEdit, self).set_edit_text('')
            return
        return urwid.Edit.keypress(self, size, key)
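Connecting to the custom signal then looks roughly like this (a sketch; on_done is just a placeholder callback):

def on_done(widget, text):
    # handle the submitted text here
    pass

input_line = CustomEdit(multiline=True)
urwid.connect_signal(input_line, 'done', on_done)
urwid.MainLoop(urwid.Filler(input_line)).run()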
Background:
I am currently writing a process monitoring tool (Windows and Linux) in Python and implementing unit test coverage. The process monitor hooks into the Windows API function EnumProcesses on Windows and monitors the /proc directory on Linux to find current processes. The process names and process IDs are then written to a log which is accessible to the unit tests.
Question:
When I unit test the monitoring behavior I need a process to start and terminate. I would love if there would be a (cross-platform?) way to start and terminate a fake system process that I could uniquely name (and track its creation in a unit test).
Initial ideas:
I could use subprocess.Popen() to open any system process but this runs into some issues. The unit tests could falsely pass if the process I'm using to test is run by the system as well. Also, the unit tests are run from the command line and any Linux process I can think of suspends the terminal (nano, etc.).
I could start a process and track it by its process ID but I'm not exactly sure how to do this without suspending the terminal.
These are just thoughts and observations from initial testing and I would love it if someone could prove me wrong on either of these points.
I am using Python 2.6.6.
Edit:
Get all Linux process IDs:
try:
    processDirectories = os.listdir(self.PROCESS_DIRECTORY)
except IOError:
    return []
return [pid for pid in processDirectories if pid.isdigit()]
Get all Windows process IDs:
import ctypes, ctypes.wintypes
import sys

self.Psapi = ctypes.WinDLL('Psapi.dll')
self.EnumProcesses = self.Psapi.EnumProcesses
self.EnumProcesses.restype = ctypes.wintypes.BOOL

count = 50
while True:
    # Build arguments to EnumProcesses
    processIds = (ctypes.wintypes.DWORD * count)()
    size = ctypes.sizeof(processIds)
    bytes_returned = ctypes.wintypes.DWORD()
    # Call EnumProcesses to find all processes
    if self.EnumProcesses(ctypes.byref(processIds), size, ctypes.byref(bytes_returned)):
        if bytes_returned.value < size:
            return processIds
        else:
            # We weren't able to get all the processes, so double our size and try again
            count *= 2
    else:
        print "EnumProcesses failed"
        sys.exit()
Windows code is from here
edit: this answer is getting long :), but some of my original answer still applies, so I leave it in :)
Your code is not so different from my original answer. Some of my ideas still apply.
When you are writing Unit Test, you want to only test your logic. When you use code that interacts with the operating system, you usually want to mock that part out. The reason being that you don't have much control over the output of those libraries, as you found out. So it's easier to mock those calls.
In this case, there are two calls that interact with the system: os.listdir and EnumProcesses. Since you didn't write them, we can easily fake them to return what we need, which in this case is a list.
But wait, in your comment you mentioned:
"The issue I'm having with it however is that it really doesn't test
that my code is seeing new processes on the system but rather that the
code is correctly monitoring new items in a list."
The thing is, we don't need to test the code that actually monitors the processes on the system, because it's third-party code. What we need to test is that your code logic handles the returned processes, because that's the code you wrote. The reason we test against a list is that that's what your logic operates on: os.listdir and EnumProcesses return a list of pids (numeric strings and integers, respectively), and your code acts on that list.
I'm assuming your code is inside a class (you are using self in your code). I'm also assuming these snippets are isolated inside their own methods (you are using return). So this will be roughly what I suggested originally, except with actual code :) I don't know if they are in the same class or in different classes, but it doesn't really matter.
Linux method
Now, testing your Linux process function is not that difficult. You can patch os.listdir to return a list of pids.
def getLinuxProcess(self):
    try:
        processDirectories = os.listdir(self.PROCESS_DIRECTORY)
    except IOError:
        return []
    return [pid for pid in processDirectories if pid.isdigit()]
Now for the test.
import unittest
from fudge import patched_context
import os
import LinuxProcessClass # class that contains getLinuxProcess method
def test_LinuxProcess(self):
    """Test the logic of our getLinuxProcess.

    We patch os.listdir and return our own list, because os.listdir
    returns a list. We do this so that we can control the output
    (we test *our* logic, not a built-in library's functionality).
    """
    # Test we can parse our pids
    fakeProcessIds = ['1', '2', '3']
    with patched_context(os, 'listdir', lambda path: fakeProcessIds):
        myClass = LinuxProcessClass()
        # ...
        result = myClass.getLinuxProcess()
        expected = ['1', '2', '3']
        self.assertEqual(result, expected)

    # Test we can handle IOError
    def raise_ioerror(path):
        raise IOError
    with patched_context(os, 'listdir', raise_ioerror):
        myClass = LinuxProcessClass()
        # ...
        result = myClass.getLinuxProcess()
        expected = []
        self.assertEqual(result, expected)

    # Test we only keep pids (digit strings)
    fakeProcessIds = ['1', '2', '3', 'do', 'not', 'parse']
    # ...
Windows method
Testing your Window's method is a little trickier. What I would do is the following:
def prepareWindowsObjects(self):
    """Create and set up the objects needed to get the Windows processes."""
    # ...
    self.Psapi = ctypes.WinDLL('Psapi.dll')
    EnumProcesses = self.Psapi.EnumProcesses
    EnumProcesses.restype = ctypes.wintypes.BOOL
    self.EnumProcesses = EnumProcesses
    # ...

def getWindowsProcess(self):
    count = 50
    while True:
        # ... build arguments to EnumProcesses and call EnumProcesses ...
        if self.EnumProcesses(ctypes.byref(processIds), ...):
            ...
        else:
            return []
I separated the code into two methods to make it easier to read (I believe you are already doing this). Here is the tricky part: EnumProcesses uses pointers, and they are not easy to play with. Another thing is that I don't know how to work with pointers in Python, so I couldn't tell you an easy way to mock that out =P
What I can tell you is to simply not test it. Your logic there is very minimal: besides increasing the size of count, everything else in that function is creating the space the EnumProcesses pointers will use. Maybe you could add a limit to the count size, but other than that, this method is short and sweet. It returns the Windows processes and nothing more, just what I was asking for in my original comment :)
So leave that method alone. Don't test it. Make sure, though, that anything that uses getWindowsProcess and getLinuxProcess gets mocked out as per my original suggestion.
Hopefully this makes more sense :) If it doesn't let me know and maybe we can have a chat session or do a video call or something.
original answer
I'm not exactly sure how to do what you are asking, but whenever I need to test code that depends on some outside force (external libraries, popen or in this case processes) I mock out those parts.
Now, I don't know how your code is structured, but maybe you can do something like this:
def getWindowsProcesses(self, ...):
    '''Call the Windows API function EnumProcesses and
    return the list of processes.
    '''
    # ... call EnumProcesses ...
    return listOfProcesses

def getLinuxProcesses(self, ...):
    '''Look in the /proc dir and return the list of processes.'''
    # ... look in /proc ...
    return listOfProcesses
These two methods only do one thing, get the list of processes. For Windows, it might just be a call to that API and for Linux just reading the /proc dir. That's all, nothing more. The logic for handling the processes will go somewhere else. This makes these methods extremely easy to mock out since their implementations are just API calls that return a list.
Your code can then easily call them:
def getProcesses(...):
    '''Get the processes running.'''
    isLinux = ...  # logic for determining the OS
    if isLinux:
        processes = getLinuxProcesses(...)
    else:
        processes = getWindowsProcesses(...)
    # ... do something with processes, write to log file, etc ...
In your test, you can then use a mocking library such as Fudge. You mock out these two methods to return what you expect them to return.
This way you'll be testing your logic since you can control what the result will be.
from fudge import patched_context
# ...

def test_getProcesses(self, ...):
    monitor = MonitorTool(..)
    # Patch the method that gets the processes. Whenever it gets called, return
    # our predetermined list.
    originalProcesses = [....pids...]
    with patched_context(monitor, "getLinuxProcesses", lambda x: originalProcesses):
        monitor.getProcesses()
        # ... assert logic is right ...

    # Let's "add" some new processes and test that our logic realizes new
    # processes were added.
    newProcesses = [...]
    updatedProcesses = originalProcesses + newProcesses
    with patched_context(monitor, "getLinuxProcesses", lambda x: updatedProcesses):
        monitor.getProcesses()
        # ... assert logic caught new processes ...

    # Let's "kill" our new processes and test that our logic can handle it
    with patched_context(monitor, "getLinuxProcesses", lambda x: originalProcesses):
        monitor.getProcesses()
        # ... assert logic caught that processes were 'killed' ...
Keep in mind that if you test your code this way, you won't get 100% code coverage (since your mocked methods won't be run), but this is fine. You're testing your code and not third party's, which is what matters.
Hopefully this might be able to help you. I know it doesn't answer your question, but maybe you can use this to figure out the best way to test your code.
Your original idea of using subprocess is a good one. Just create your own executable and name it something that identifies it as a testing thing. Maybe make it do something like sleep for a while.
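Something along these lines, for example (just a sketch; the inline sleep command is a stand-in for whatever uniquely named executable you create):

import subprocess
import sys

# Launch a dummy process that just sleeps, so the monitor has something to find.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(30)"])
print proc.pid  # the pid your monitor should pick up

# ... run the monitoring assertions here ...

proc.terminate()
proc.wait()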
Alternately, you could actually use the multiprocessing module. I've not used python in windows much, but you should be able to get process identifying data out of the Process object you create:
import multiprocessing
import time

p = multiprocessing.Process(target=time.sleep, args=(30,))
p.start()
pid = p.pid
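Since you also need the process to terminate, cleanup on the same object is just terminate() and join(); the assertion in the middle is only a placeholder for however your monitor exposes its results:

try:
    # ... assert that `pid` shows up in your monitor's log / process list ...
    pass
finally:
    p.terminate()
    p.join()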
Context:
Imagine that you have a standard CherryPy hello world app:
def index(self):
    return "Hello world!"
index.exposed = True
and you would like to do some post-processing, i.e. record request processing or just log the fact that we were called from specific IP. What you would do is probably:
def index(self):
    self.RunMyPostProcessing()
    return "Hello world!"
index.exposed = True
However, that will add to your request processing time. (By the way, you would probably use decorators, or some even more sophisticated mechanism, if you wanted to call it on every handler.)
Question:
Is there a way of creating a global, threading-aware queue (buffer) to which each request can write messages (events) that need to be logged, while some magic function grabs them and post-processes them? Would you know a pattern for such a thing?
I bet that CherryPy supports something like that :-)
Thank you in advance...
The "global threading aware queue" is called Queue.Queue.
As I was looking for this and it's now outdated, I found it useful to provide the correct (2012-ish) answer. Simply add this at the beginning of the function that handles your URL:
cherrypy.request.hooks.attach('on_end_request', mycallbackfunction)
There's more info on hooks in the documentation, but it's not very clear to me.
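Put together, that looks roughly like this (a sketch; log_request is a placeholder for whatever post-processing you want to run):

import cherrypy

def log_request():
    # Runs after the response for this request has been sent.
    pass

def index(self):
    cherrypy.request.hooks.attach('on_end_request', log_request)
    return "Hello world!"
index.exposed = True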