Pyrogram handler not working when trying to call the function - python

filter_ = (filters.me & ~filters.forwarded & ~filters.incoming & filters.via_bot & filters.command(".", ["ascii"]))

async def hello(client, message):
    await message.reply("HELLLO WORLD")

app.add_handler(hello, filter_)
app.start()
idle()
app.stop()
It just always goes into a loop, nothing more.
It does not work; there is no reply from the client.
What's wrong with it? Or am I doing something wrong?

You need to add a MessageHandler().
from pyrogram.handlers import MessageHandler
...
app.add_handler(MessageHandler(hello, filter_))
See Update Handler in the documentation for a reference.
While this is unrelated to your original question, I believe Decorators to be a better alternative, as they don't require an additional import or instantiation:
from pyrogram import Client

app = Client()

@app.on_message(filter_)
async def hello(client, message):
    await message.reply("hello")

app.run()  # app.run() also takes care of app.start(), idle() and app.stop()
Edit to reply to the "answer" below:
For what you're testing, you're using way too complicated a filter.
filter_ = (
    filters.me                # Messages that you sent
    & ~filters.forwarded      # Not messages that were forwarded
    & filters.incoming        # Messages this session received
    & ~filters.via_bot        # No "via @samplebot" (i.e. no inline bots)
    & filters.command(".", ["dict", "define", "meaning"])  # The crux of your issue.
)
The Command Filter takes three arguments: commands, prefixes, and case_sensitive. Since you're not using named arguments (arg=value), you need to keep them in that order.
Only the first argument is required, and it needs to be a single string or a list of strings (for multiple commands). If not specified, prefixes defaults to "/", so commands need to look like /this to trigger. Since you have the arguments in the other order, you're breaking the command filter.
You need to swap the arguments of your command filter around (see the docs) or, better yet, start with the minimal example you were asked for when creating a question.
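To see why the order matters, here is a stand-in function that merely mirrors the parameter order of filters.command (it is hypothetical, purely to illustrate how positional arguments get bound; it is not Pyrogram code):

```python
def command(commands, prefixes="/", case_sensitive=False):
    # Stand-in mirroring filters.command's parameter order, for illustration only.
    if isinstance(commands, str):
        commands = [commands]
    return {"commands": commands, "prefixes": prefixes, "case_sensitive": case_sensitive}

# Wrong: "." is bound to `commands`, and the list ends up as `prefixes`.
wrong = command(".", ["dict", "define", "meaning"])

# Right: swap the positional arguments...
right = command(["dict", "define", "meaning"], ".")

# ...or use keyword arguments, so the order no longer matters.
also_right = command(prefixes=".", commands=["dict", "define", "meaning"])
```

With keyword arguments the filter reads unambiguously, which is usually worth the few extra characters.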

Related

Is there a way to take input instead of the "TEXT" in this async function?

I'm trying to make a GUI application to control my ELK-BLEDOM LED lights, and I was wondering if this async function, which is used to send commands to the Bluetooth controller, can take user-inputted commands instead of having them typed in the code itself. I'm very new to this, so please bear with me.
import asyncio
from bledom.device import BleLedDevice
from bleak import BleakScanner, BleakClient

async def main():
    for device in await BleakScanner.discover():
        client = BleakClient(device)
        await client.connect()
        device = await BleLedDevice.new(client)
        await "TEXT"

asyncio.run(main())
I've tried to use the tkinter Entry, but all I get is a Timeout error.
Although I don't know exactly how asyncio works, I do know you can pass an argument to a function: take the user input, set it to a variable (say, on a button click), and pass that variable to your function.
Passing an argument can be done like this:

# In the () is the name of the arg you'll use inside the function
def main(a):
    # Then use "a" in the function for something
    do_stuff_with(a)

When you want to run the function with your argument, you can do that like this:

# Set the argument or user input to a variable
my_arg = "some command"
# Run the function, passing the argument
main(my_arg)
# my_arg then becomes `a` inside your function

Thingamabobs also has a good point, though: it should be run on a different thread to keep the main UI from "freezing" while it's running a computation or something of that nature. I don't know anything about the Bluetooth stuff, but if it does cause stutters or freezing, look into running this in a background thread; there's a wealth of write-ups on that. Hopefully this can help you get at least in the right direction. =)
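Tying that back to the question's code, here is a minimal sketch of passing user input into an async function. The command names and dispatch are hypothetical; in the real app the branches would call methods on your BleLedDevice instance, and the device discovery/connection from the question would go where indicated:

```python
import asyncio

async def run_command(command):
    # Hypothetical dispatch: replace these branches with calls on your
    # BleLedDevice instance (the command names here are made up).
    if command == "on":
        return "powering on"
    if command == "off":
        return "powering off"
    return f"unknown command: {command}"

async def main(command):
    # In the real app, discover and connect to the device here, as in the
    # question, then act on the user's command instead of the hard-coded "TEXT".
    return await run_command(command)

result = asyncio.run(main("on"))
print(result)  # -> powering on
```

The key change from the question's code is that main() takes a parameter, so the string can come from a tkinter Entry (read on a button click) rather than being baked into the source.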

Python Flask: How to wait for webhook to be executed?

I am working on a Python Flask app whose main method, start(), calls an external API (third_party_api_wrapper()). That external API has an associated webhook (webhook()) that receives the output of that external API call (note that the output webhook() receives is actually different from the response returned by third_party_api_wrapper()).
The main method start() needs the result of webhook(). How do I make start() wait for webhook() to be executed? And how do we pass the returned value of webhook() back to start()?
Here is a minimal code snippet to capture the scenario.
@app.route('/webhook', methods=['POST'])
def webhook():
    return "webhook method has executed"

# this method has a webhook that calls webhook() after this method has executed
def third_party_api_wrapper():
    url = 'https://api.thirdparty.com'
    response = requests.post(url)
    return response

# this is the main entry point
@app.route('/start', methods=['POST'])
def start():
    third_party_api_wrapper()
    # The rest of this code depends on the output of webhook().
    # How do we wait until webhook() is called, and how do we access the returned value?
The answer to this question really depends on how you plan on running your app in production. It's much simpler if we make the assumption that you only plan to have a single instance of your app running at once (as opposed to multiple behind a load balancer, for example), so I'll make that assumption first to give you a place to start, and comment on a more "production-ready" solution afterwards.
A big thing to keep in mind when writing a web application is that you have to understand how you want the outside world to interact with your app. Do you expect to have the /start endpoint called only once at the beginning of your app's lifetime, or is this a generic endpoint that may start any number of background processes that you want the caller of each to wait for? Or, do you want the behavior where any caller after the first one will wait for the same process to complete as the first one? I can't answer these questions for you, it depends on the use-case you're trying to implement. I'll give you a relatively simple solution that you should be able to modify to fulfill any of the ones I mentioned though.
This solution will use the Event class from the threading standard library module; I added some comments to clarify which parts you may have to change depending on the specifics of the API you're calling and stuff like that.
import threading
import uuid
from typing import Any

import requests
from flask import Flask, Response, request

# The base URL for your app. If you're running it locally this should be fine;
# however, external providers can't communicate with your `localhost`, so you'll
# need to change this for your app to work end-to-end.
BASE_URL = "http://localhost:5000"

app = Flask(__name__)


class ThirdPartyProcessManager:
    def __init__(self) -> None:
        self.events = {}
        self.values = {}

    def wait_for_request(self, request_id: str) -> Any:
        event = threading.Event()
        actual_event = self.events.setdefault(request_id, event)
        if actual_event is not event:
            raise ValueError(f"Request {request_id} already exists.")

        event.wait()
        return self.values.pop(request_id)

    def finish_request(self, request_id: str, value: Any) -> None:
        event = self.events.pop(request_id, None)
        if event is None:
            raise ValueError(f"Request {request_id} does not exist.")

        self.values[request_id] = value
        event.set()


MANAGER = ThirdPartyProcessManager()

# This is assuming that you can specify the callback URL per-request; otherwise
# you may have to get the request ID from the body of the request or something.
@app.route('/webhook/<request_id>', methods=['POST'])
def webhook(request_id: str) -> Response:
    MANAGER.finish_request(request_id, request.json)
    return "webhook method has executed"

# Somehow in here you need to create or generate a unique identifier for this
# request--this may come from the third-party provider, or you can generate one
# yourself. There are two main paths I see here:
# - If you can specify the callback/webhook URL in each call, you can just pass them
#   <base>/webhook/<request_id> and use that to identify which request is being
#   responded to in the webhook.
# - If the provider gives you a request ID, you can return it from this function,
#   then retrieve it from the request body in the webhook route.
# For now, I'll assume the first situation, but you should be able to implement the
# second with minimal changes.
def third_party_api_wrapper() -> str:
    request_id = uuid.uuid4().hex
    url = 'https://api.thirdparty.com'
    # Just an example, I don't know how the third party API you're working with works
    response = requests.post(
        url,
        json={"callback_url": f"{BASE_URL}/webhook/{request_id}"}
    )
    # NOTE: unrelated to the problem at hand, you should always check for errors
    # in HTTP responses. This method is an easy way provided by requests to raise
    # for non-success status codes.
    response.raise_for_status()
    return request_id


@app.route('/start', methods=['POST'])
def start() -> Response:
    request_id = third_party_api_wrapper()
    result = MANAGER.wait_for_request(request_id)
    return result
If you want to run the example fully locally to test it, do the following:
1. Comment out the lines in third_party_api_wrapper() that actually make the external API call.
2. Add a print statement after the request ID is generated, so that you can get the ID of the "in flight" request, e.g. print("Request ID", request_id).
3. In one terminal, run the app by pasting the above code into an app.py file and running flask run in that directory.
4. In another terminal, start the process via:
   curl -XPOST http://localhost:5000/start
5. Copy the request ID that will be logged in the first terminal that's running the server.
6. In a third terminal, complete the process by calling the webhook:
   curl -XPOST http://localhost:5000/webhook/<your_request_id> -H Content-Type:application/json -d '{"foo":"bar"}'
You should see {"foo":"bar"} as the response in the second terminal that made the /start request.
I hope that's enough to help you get started w/ whatever problem you're trying to solve.
There are a few design-y comments I have based on the information provided as well:
- As I mentioned before, this will not work if you have more than one instance of the app running at once. This works by storing the state of in-flight requests in a global state inside your Python process, so if you have more than one process, they won't all be working with and modifying the same state. If you need to run more than one instance of your process, I would use a similar approach with some database backend to store the shared state (assuming your requests are pretty short-lived, Redis might be a good choice here, but once again it'll depend on exactly what you're trying to do).
- Even if you do only have one instance of the app running, Flask is capable of being run in a variety of different server contexts--for example, the server might be using threads (the default), greenlets via gevent or a similar library, or multiple processes, or maybe some other approach entirely in order to handle multiple requests concurrently. If you're using an approach that creates multiple processes, you should be able to use the utilities provided by the multiprocessing module to implement the same approach as I've given above.
- This approach probably will work just fine for something where the difference in time between the API call and the webhook response is small (on the order of a couple of seconds at most, I'd say), but you should be wary of using this approach for something where the difference in time can be quite large. If the connection between the client and your server fails, they'll have to make another request and run the long-running process that your third party is completing for you again. Some proxies and load balancers may also have timeout behavior that could terminate the request after a certain amount of time, even if nothing goes wrong in the connection between your server and the client making a request to it. An alternative approach would be for your /start endpoint to return quickly and give the client a request_id that they could poll for updates. As an example, AWS Athena's API is structured like this--there is a StartQueryExecution method, and separate GetQueryExecution and GetQueryResults methods that the client uses to check the status of a query and retrieve the results, respectively (there are also other methods like StopQueryExecution and GetQueryRuntimeStatistics available as well). You can check out the documentation here.
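A minimal sketch of that polling-style alternative, stripped of Flask for brevity (the function names map onto hypothetical /start, /webhook/<id> and /status/<id> routes; like the solution above, this in-process store only works for a single app instance):

```python
import uuid

# request_id -> result; None means "still pending".
RESULTS = {}

def start_request():
    # Would kick off the third-party call here, then immediately hand the
    # client an ID it can poll, instead of blocking until the webhook fires.
    request_id = uuid.uuid4().hex
    RESULTS[request_id] = None
    return request_id

def webhook(request_id, value):
    # Called by the third party when the work is done; just record the result.
    RESULTS[request_id] = value

def get_status(request_id):
    # The client polls this until the status flips to "done".
    # (For simplicity, an unknown ID also reads as pending here.)
    result = RESULTS.get(request_id)
    if result is None:
        return {"status": "pending"}
    return {"status": "done", "result": result}

rid = start_request()
print(get_status(rid))        # -> {'status': 'pending'}
webhook(rid, {"foo": "bar"})
print(get_status(rid))        # -> {'status': 'done', 'result': {'foo': 'bar'}}
```

Since no request ever blocks on a threading.Event here, this variant also avoids the proxy/load-balancer timeout issue described above, at the cost of the client having to poll.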
I know that's a lot of info, but I hope it helps. Happy to update the answer w/ more specific info if you'll provide some more details about your use-case.

TwitchIO bot without blocking

I'd like to use TwitchIO to talk to Twitch chat inside another program, without needing to hijack the main loop with Bot's run().
The official documentation here (https://twitchio.readthedocs.io/en/latest/quickstart.html) shows the code being run like:
from twitchio.ext import commands


class Bot(commands.Bot):

    def __init__(self):
        # Initialise our Bot with our access token, prefix and a list of channels to join on boot...
        # prefix can be a callable, which returns a list of strings or a string...
        # initial_channels can also be a callable which returns a list of strings...
        super().__init__(token='ACCESS_TOKEN', prefix='?', initial_channels=['...'])

    async def event_ready(self):
        # Notify us when everything is ready!
        # We are logged in and ready to chat and use commands...
        print(f'Logged in as | {self.nick}')

    @commands.command()
    async def hello(self, ctx: commands.Context):
        # Here we have a command hello, we can invoke our command with our prefix and command name
        # e.g ?hello
        # We can also give our commands aliases (different names) to invoke with.
        # Send a hello back!
        # Sending a reply back to the channel is easy... Below is an example.
        await ctx.send(f'Hello {ctx.author.name}!')


bot = Bot()
bot.run()
# bot.run() is blocking and will stop execution of any code below here until stopped or closed.
But as that last line says, run() will block execution.
Is there some other way of running it that doesn't block? Something like (made up)
bot.poll()
That would need to be run periodically in my program's main loop?
Are you adding any more code that uses the Bot class? If not, I would suggest just making two processes.
The simplest way to do this is to create two Python files and run both of them at the same time.
If you really must run them both in the same program, I would look into parallel processing. The next time you post a question, I would suggest putting that "other program's" code into the question so people don't have to make assumptions.
P.S. If you need to run them in the same program, edit your question to show the code you need to run together and I'll take another look.
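That said, for libraries built on asyncio the usual non-blocking pattern is to schedule the client's long-running coroutine as a task on a running event loop instead of calling the blocking entry point. TwitchIO exposes such a coroutine in recent versions (check your version's docs for the exact name; which coroutine to await is an assumption here). The general shape, with a stand-in coroutine in place of the bot:

```python
import asyncio

async def bot_main():
    # Stand-in for the bot's long-running coroutine (with TwitchIO you would
    # await the connect/start coroutine your version provides -- check the docs).
    await asyncio.sleep(0.1)
    return "bot finished"

async def my_program():
    # Schedule the bot as a background task without blocking...
    bot_task = asyncio.create_task(bot_main())

    # ...and keep running your own main loop alongside it.
    ticks = 0
    while not bot_task.done():
        ticks += 1
        await asyncio.sleep(0.01)
    return ticks, await bot_task

ticks, result = asyncio.run(my_program())
print(result)  # -> bot finished
```

The caveat is that your program's main loop must itself be async (or run the event loop in a dedicated thread); a plain synchronous poll() call doesn't fit the asyncio model.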

Tornado Coroutine : Return value and one time execution

I'm new to Tornado, so I wanted to know if the code below is a correct way to approach the problem or if there is a better one. It works, but I'm not sure about its efficiency.
The code is based on the documentation here
In the middle of my script, I need to run HTTP Requests (10-50). Apparently, it is possible to do this in parallel this way :
@gen.coroutine
def parallel_fetch_many(urls):
    responses = yield [http_client.fetch(url) for url in urls]
    # responses is a list of HTTPResponses in the same order
How do I access responses after the coroutine is done ? Can I just add return responses ?
Also, as I only need to use an async process once in my code, I start the IOLoop this way :
# run_sync() doesn't take arguments, so we must wrap the
# call in a lambda.
IOLoop.current().run_sync(lambda: parallel_fetch_many(googleLinks))
Is it correct to do it this way? Or should I just start the IOLoop at the beginning of the script and stop it at the end, even though I only use an async process once?
Basically, my question is : Is the code below correct ?
@gen.coroutine
def parallel_fetch_many(urls):
    responses = yield [http_client.fetch(url) for url in urls]
    return responses

googleLinks = [url1, url2, ..., urln]
responses = IOLoop.current().run_sync(lambda: parallel_fetch_many(googleLinks))
do_something(responses)
Yes, your code looks correct to me.
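For reference, on modern Tornado (5+) the IOLoop runs on top of asyncio, and the same fan-out can be written with native async def and asyncio.gather, which also preserves input order. A sketch with a stand-in fetch coroutine (AsyncHTTPClient.fetch would take its place in real code):

```python
import asyncio

async def fetch(url):
    # Stand-in for AsyncHTTPClient.fetch(url); returns a fake "response".
    await asyncio.sleep(0.01)
    return f"response for {url}"

async def parallel_fetch_many(urls):
    # gather runs the coroutines concurrently and returns results
    # in the same order as the input, like yielding a list of futures.
    return await asyncio.gather(*(fetch(url) for url in urls))

urls = ["https://a.example", "https://b.example"]
responses = asyncio.run(parallel_fetch_many(urls))
print(responses)
```

asyncio.run() plays the same one-shot role as IOLoop.run_sync() does in the question: it spins up a loop, runs the coroutine to completion, and tears the loop down.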

Add/Change channel for handler at runtime

In circuits 3.1.0, is there a way to set the channel for a handler at runtime?
A useful alternative would be to add a handler at runtime and specify the channel.
I've checked the Manager.addHandler implementation but couldn't make it work. I tried:
self._my_method.__func__.channel = _my_method_channel
self._my_method.__func__.names = ["event name"]
self.addHandler(self._my_method)
Yes, there is; however, it's not really a publicly exposed API.
Example (creating an event handler at runtime):

from circuits import handler, Manager

@handler("foo")
def on_foo(self):
    return "Hello World!"

def test_addHandler():
    m = Manager()
    m.start()
    m.addHandler(on_foo)

This is taken from tests.core.test_dynamic_handlers.
NB: Every BaseComponent/Component subclass is also a subclass of Manager and has the .addHandler() and .removeHandler() methods. You can also apply the @handler() decorator dynamically, like this:

def on_foo(...):
    ...

self.addHandler(handler("foo")(on_foo))
You can also see a good example of this in the library itself with circuits.io.process where we dynamically create event handlers for stdin, stdout and stderr.