I'm attempting to write a quick load-test script for our ejabberd cluster that simply logs into a chat room, posts a couple of random messages, then exits.
We had attempted this particular test with tsung, but according to the authors, the MUC functionality did not make it into the current release.
pyxmpp seems to have this functionality, but darned if I can figure out how to make it work. Here's hoping someone has a quick explanation of how to build the client and join/post to the MUC.
Thanks!
Hey, I stumbled over your question a few times while trying to do the same thing.
Here is my answer:
Using http://pyxmpp.jajcus.net/svn/pyxmpp/trunk/examples/echobot.py as a quickstart, all you have to do is import the MUC pieces:
from pyxmpp.jabber.muc import MucRoomState, MucRoomManager, MucRoomHandler
And once your Client is connected, you can connect to your room:
def session_started(self):
    """Handle session started event. May be overridden in derived classes.
    This one requests the user's roster and sends the initial presence."""
    print u'SESSION STARTED'
    self.request_roster()
    p = Presence()
    self.stream.send(p)
    print u'ConnectToParty'
    self.connectToMUC()
def connectToMUC(self):
    self.roomManager = MucRoomManager(self.stream)
    self.roomHandler = MucRoomHandler()
    self.roomState = self.roomManager.join(
        room=JID('room@conference.server.domain'),
        nick='PartyBot',
        handler=self.roomHandler,
        history_maxchars=0,
        password=None)
    self.roomManager.set_handlers()
To send a message, all you have to do is call self.roomState.send_message("Sending this Message").
To do stuff, inherit from MucRoomHandler and react to events, as in the sketch below. Note the set_handlers() call on the roomManager, though: it is important, otherwise the callbacks will not be called.
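For example, here is a rough sketch of such a handler (the method names follow pyxmpp's muc module as I recall it, so double-check them against your pyxmpp version):

from pyxmpp.jabber.muc import MucRoomHandler

class PartyRoomHandler(MucRoomHandler):
    """React to room events; pass an instance of this to MucRoomManager.join()."""

    def user_joined(self, user, stanza):
        print u'joined:', user.nick

    def message_received(self, user, stanza):
        # 'user' may be None for messages generated by the room itself
        if user is not None:
            print u'%s said: %s' % (user.nick, stanza.get_body())

Passing PartyRoomHandler() instead of the bare MucRoomHandler() in connectToMUC() above should be enough to start receiving callbacks, and self.roomState.send_message(u"...") posts to the room.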
I am working on a Python Flask app, and the main method start() calls an external API (third_party_api_wrapper()). That external API has an associated webhook (webhook()) that receives the output of that external API call (note that the output webhook() receives is actually different from the response returned by third_party_api_wrapper()).
The main method start() needs the result of webhook(). How do I make start() wait for webhook() to be executed? And how do we pass the returned value of webhook() back to start()?
Here's a minimal code snippet to capture the scenario.
import requests
from flask import Flask

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def webhook():
    return "webhook method has executed"

# this method has a webhook that calls webhook() after this method has executed
def third_party_api_wrapper():
    url = 'https://api.thirdparty.com'
    response = requests.post(url)
    return response

# this is the main entry point
@app.route('/start', methods=['POST'])
def start():
    third_party_api_wrapper()
    # The rest of this code depends on the output of webhook().
    # How do we wait until webhook() is called, and how do we access the returned value?
The answer to this question really depends on how you plan on running your app in production. It's much simpler if we make the assumption that you only plan to have a single instance of your app running at once (as opposed to multiple behind a load balancer, for example), so I'll make that assumption first to give you a place to start, and comment on a more "production-ready" solution afterwards.
A big thing to keep in mind when writing a web application is that you have to understand how you want the outside world to interact with your app. Do you expect to have the /start endpoint called only once at the beginning of your app's lifetime, or is this a generic endpoint that may start any number of background processes that you want the caller of each to wait for? Or, do you want the behavior where any caller after the first one will wait for the same process to complete as the first one? I can't answer these questions for you, it depends on the use-case you're trying to implement. I'll give you a relatively simple solution that you should be able to modify to fulfill any of the ones I mentioned though.
This solution will use the Event class from the threading standard library module; I added some comments to clarify which parts you may have to change depending on the specifics of the API you're calling and stuff like that.
import threading
import uuid
from typing import Any

import requests
from flask import Flask, Response, request

# The base URL for your app. If you're running it locally this should be fine,
# however external providers can't communicate with your `localhost`, so you'll
# need to change this for your app to work end-to-end.
BASE_URL = "http://localhost:5000"

app = Flask(__name__)


class ThirdPartyProcessManager:
    def __init__(self) -> None:
        self.events = {}
        self.values = {}

    def wait_for_request(self, request_id: str) -> Any:
        event = threading.Event()
        actual_event = self.events.setdefault(request_id, event)
        if actual_event is not event:
            raise ValueError(f"Request {request_id} already exists.")
        event.wait()
        return self.values.pop(request_id)

    def finish_request(self, request_id: str, value: Any) -> None:
        event = self.events.pop(request_id, None)
        if event is None:
            raise ValueError(f"Request {request_id} does not exist.")
        self.values[request_id] = value
        event.set()


MANAGER = ThirdPartyProcessManager()


# This is assuming that you can specify the callback URL per-request; otherwise
# you may have to get the request ID from the body of the request or something.
@app.route('/webhook/<request_id>', methods=['POST'])
def webhook(request_id: str) -> Response:
    MANAGER.finish_request(request_id, request.json)
    return "webhook method has executed"


# Somehow in here you need to create or generate a unique identifier for this
# request--this may come from the third-party provider, or you can generate one
# yourself. There are two main paths I see here:
# - If you can specify the callback/webhook URL in each call, you can just pass them
#   <base>/webhook/<request_id> and use that to identify which request is being
#   responded to in the webhook.
# - If the provider gives you a request ID, you can return it from this function,
#   then retrieve it from the request body in the webhook route.
# For now, I'll assume the first situation, but you should be able to implement the
# second with minimal changes.
def third_party_api_wrapper() -> str:
    request_id = uuid.uuid4().hex
    url = 'https://api.thirdparty.com'
    # Just an example, I don't know how the third party API you're working with works.
    response = requests.post(
        url,
        json={"callback_url": f"{BASE_URL}/webhook/{request_id}"}
    )
    # NOTE: unrelated to the problem at hand, you should always check for errors
    # in HTTP responses. This method is an easy way provided by requests to raise
    # for non-success status codes.
    response.raise_for_status()
    return request_id


@app.route('/start', methods=['POST'])
def start() -> Response:
    request_id = third_party_api_wrapper()
    result = MANAGER.wait_for_request(request_id)
    return result
If you want to run the example fully locally to test it, do the following:
1. Comment out the requests.post(...) call and response.raise_for_status() inside third_party_api_wrapper (keep the return request_id line), since the external API URL is just a placeholder.
2. Add a print statement right after request_id = uuid.uuid4().hex, so that you can get the ID of the "in flight" request. E.g. print("Request ID", request_id)
3. In one terminal, run the app by pasting the above code into an app.py file and running flask run in that directory.
4. In another terminal, start the process via:
   curl -XPOST http://localhost:5000/start
5. Copy the request ID that will be logged in the first terminal that's running the server.
6. In a third terminal, complete the process by calling the webhook:
   curl -XPOST http://localhost:5000/webhook/<your_request_id> -H Content-Type:application/json -d '{"foo":"bar"}'
You should see {"foo":"bar"} as the response in the second terminal that made the /start request.
I hope that's enough to help you get started w/ whatever problem you're trying to solve.
There are a couple of design-y comments I have based on the information provided as well:
As I mentioned before, this will not work if you have more than one instance of the app running at once. This works by storing the state of in-flight requests in a global state inside your python process, so if you have more than one process, they won't all be working and modifying the same state. If you need to run more than one instance of your process, I would use a similar approach with some database backend to store the shared state (assuming your requests are pretty short-lived, Redis might be a good choice here, but once again it'll depend on exactly what you're trying to do).
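For example, here is a rough sketch of that idea with Redis (assuming a redis-py client; none of this is in the code above): the webhook handler pushes the payload onto a per-request list, and the waiting endpoint blocks on it with BLPOP, which works even if the two requests land on different app instances.

import json
import redis

r = redis.Redis(host="localhost", port=6379)

def finish_request(request_id, value):
    # Called from the webhook route in whichever instance receives the callback.
    r.rpush(f"webhook-result:{request_id}", json.dumps(value))

def wait_for_request(request_id, timeout=30):
    # Called from the /start route; blocks until a result arrives or times out.
    item = r.blpop(f"webhook-result:{request_id}", timeout=timeout)
    if item is None:
        raise TimeoutError(f"No webhook received for request {request_id}")
    _key, payload = item
    return json.loads(payload)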
Even if you do only have one instance of the app running, flask is capable of being run in a variety of different server contexts--for example, the server might be using threads (the default), greenlets via gevent or a similar library, or multiple processes, or maybe some other approach entirely in order to handle multiple requests concurrently. If you're using an approach that creates multiple processes, you should be able to use the utilities provided by the multiprocessing module to implement the same approach as I've given above.
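As a very rough, untested sketch of that direction (whether the Manager state is actually shared depends on how your worker processes are created, so treat this as an illustration only):

import multiprocessing
import time

# Created once, before worker processes are forked, so all of them talk to the
# same manager server process.
manager = multiprocessing.Manager()
results = manager.dict()  # request_id -> webhook payload

def finish_request(request_id, value):
    results[request_id] = value

def wait_for_request(request_id, poll_interval=0.5):
    # Poll the shared dict until the webhook handler (possibly running in a
    # different worker process) has stored a result for this request.
    while request_id not in results:
        time.sleep(poll_interval)
    return results.pop(request_id)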
This approach probably will work just fine for something where the difference in time between the API call and the webhook response is small (on the order of a couple of seconds at most I'd say), but you should be wary of using this approach for something where the difference in time can be quite large. If the connection between the client and your server fails, they'll have to make another request and run the long-running process that your third party is completing for you again. Some proxies and load balancers may also have time out behavior that could terminate the request after a certain amount of time even if nothing goes wrong in the connection between your server and the client making a request to it. An alternative approach would be for your /start endpoint to return quickly and give the client a request_id that they could poll for updates. As an example, AWS Athena's API is structured like this--there is a StartQueryExecution method, and separate GetQueryExecution and GetQueryResults methods that the client makes requests to check the status of a query and retrieve the results respectively (there are also other methods like StopQueryExecution and GetQueryRuntimeStatistics available as well). You can check out the documentation here.
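A bare-bones sketch of that polling shape (the /status route, the RESULTS dict, and the JSON shapes are my own illustration, not part of the answer above; it would replace the /start and /webhook routes from the earlier code and is single-instance only):

RESULTS = {}  # request_id -> webhook payload

@app.route('/start', methods=['POST'])
def start():
    request_id = third_party_api_wrapper()
    # Return immediately; the client polls /status/<request_id> for the result.
    return {"request_id": request_id, "status": "PENDING"}

@app.route('/webhook/<request_id>', methods=['POST'])
def webhook(request_id):
    RESULTS[request_id] = request.json
    return "ok"

@app.route('/status/<request_id>', methods=['GET'])
def status(request_id):
    if request_id not in RESULTS:
        return {"request_id": request_id, "status": "PENDING"}
    return {"request_id": request_id, "status": "DONE", "result": RESULTS[request_id]}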
I know that's a lot of info, but I hope it helps. Happy to update the answer w/ more specific info if you'll provide some more details about your use-case.
Is there a Python Client-Side API for Discord?
I don't need much, just to listen to events like getting a call or a message.
Is it possible?
Note that selfbots are against TOS, and you could be banned without warning.
Sounds like you want a selfbot.
What you might be looking for is discord.py; many selfbots are written with it, such as:
https://github.com/appu1232/Discord-Selfbot
If you would rather not get banned, discord.py is still good for scripting bots for servers.
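If you do go the official-bot route, a minimal discord.py sketch that just listens for messages looks roughly like this (the intents setup and the token are placeholders you configure in the Discord developer portal; this reflects newer discord.py 2.x, so adjust for older versions):

import discord

intents = discord.Intents.default()
intents.message_content = True  # needed to read message text in newer discord.py versions

client = discord.Client(intents=intents)

@client.event
async def on_ready():
    print(f"Logged in as {client.user}")

@client.event
async def on_message(message):
    # Ignore the bot's own messages so it doesn't react to itself.
    if message.author == client.user:
        return
    if message.content == "ping":
        await message.channel.send("pong")

client.run("YOUR_BOT_TOKEN")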
OK, late answer, but maybe someone can benefit from it, so here goes.
Never use discord.py for selfbots. discord.py was created to work with bot accounts, not user accounts, so a lot of things in discord.py will flag your account.
If you want, you can use what I'm currently developing with Merubokkusu: discum, a Discord selfbot API wrapper.
Here's the classic ping-pong example:
import discum
bot = discum.Client(token="yourtoken")

@bot.gateway.command
def pingpong(resp):
    if resp.event.message:
        m = resp.parsed.auto()
        if m['content'] == 'ping':
            bot.sendMessage(m['channel_id'], 'pong')

bot.gateway.run()
Here's a ping-pong example where you don't reply to yourself:
import discum
bot = discum.Client(token="yourtoken")

@bot.gateway.command
def pingpong(resp):
    if resp.event.message:
        m = resp.parsed.auto()
        if m['author']['id'] != bot.gateway.session.user['id']:
            if m['content'] == 'ping':
                bot.sendMessage(m['channel_id'], 'pong')

bot.gateway.run()
Here's another example; this one appends live messages to a list:
import discum
bot = discum.Client(token="yourtoken")

messagelist = []

@bot.gateway.command
def pingpong(resp):
    if resp.event.message:
        messagelist.append(resp.raw)

bot.gateway.run()
Also, if you're just doing this in the terminal and don't want to reinitialize your gateway every time, you can just clear the commands you've set:
bot.gateway.clearCommands()
and clear the current (gateway) session variables:
bot.gateway.resetSession()
Discum is intended to be a raw wrapper in order to give the developer maximum freedom. It's also written to be relatively simple, easy to build on, and easy to use. Hope this helps someone! Happy coding!
I want to create a bot with telepot that asks users a series of questions.
For example, it first asks 'What's your name?', the user replies with their name, then it asks 'How old are you?', the user replies with their age, and so on.
I had written some code for this chat between user and bot, but sometimes I get errors. Please guide me on how I can make this bot with telepot.
I want to make conversation between bot and users with telepot
I am no longer maintaining this library. Thanks for considering telepot.
- the maintainer, nickoala
What you're looking for is DelegatorBot.
Consider this tutorial.
Consider this scenario. A bot wants to have an intelligent conversation with a lot of users, and if we could only use a single line of execution to handle messages (like what we have done so far), we would have to maintain some state variables about each conversation outside the message-handling function(s). On receiving each message, we first have to check whether the user already has a conversation started, and if so, what we have been talking about. To avoid such mundaneness, we need a structured way to maintain “threads” of conversation.
DelegatorBot provides you with one instance of your bot for every user, so you don't have to think about what happens when multiple users talk to it. (If it helps you, feel free to have a look at how I am using it.)
The tutorial's example is a simple counter of how many messages the user has sent:
import sys
import time
import telepot
from telepot.loop import MessageLoop
from telepot.delegate import pave_event_space, per_chat_id, create_open

class MessageCounter(telepot.helper.ChatHandler):
    def __init__(self, *args, **kwargs):
        super(MessageCounter, self).__init__(*args, **kwargs)
        self._count = 0

    def on_chat_message(self, msg):
        self._count += 1
        self.sender.sendMessage(self._count)

TOKEN = sys.argv[1]  # get token from command-line

bot = telepot.DelegatorBot(TOKEN, [
    pave_event_space()(
        per_chat_id(), create_open, MessageCounter, timeout=10),
])
MessageLoop(bot).run_as_thread()

while 1:
    time.sleep(10)
This code creates an instance of MessageCounter for every individual user.
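To get closer to what the question asks for (a fixed series of questions), here is a hedged sketch of my own, not from the telepot docs, that swaps MessageCounter for a handler which walks through a question list using the same ChatHandler pattern as above:

import telepot

class QuestionBot(telepot.helper.ChatHandler):
    # Replace MessageCounter with this class in the DelegatorBot setup above.
    QUESTIONS = ["What's your name?", "How old are you?"]

    def __init__(self, *args, **kwargs):
        super(QuestionBot, self).__init__(*args, **kwargs)
        self._index = 0       # which question we are on
        self._answers = []    # answers collected so far

    def open(self, initial_msg, seed):
        # Ask the first question as soon as the conversation starts.
        self.sender.sendMessage(self.QUESTIONS[self._index])
        return True  # treat the initial message as handled

    def on_chat_message(self, msg):
        content_type, chat_type, chat_id = telepot.glance(msg)
        if content_type != 'text':
            self.sender.sendMessage('Please answer with text.')
            return
        self._answers.append(msg['text'])
        self._index += 1
        if self._index < len(self.QUESTIONS):
            self.sender.sendMessage(self.QUESTIONS[self._index])
        else:
            self.sender.sendMessage('Thanks! You said: ' + ', '.join(self._answers))
            self.close()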
I had written some code for this chat between user and bot, but sometimes I get errors.
If your question was rather about the errors you're getting than about how to keep a conversation with state, you need to provide more information about what errors you're getting, and when those appear.
I've been having issues with Wit.ai where my Python bot will retain the context after ending a conversation. This behaviour is the same in the Facebook client and the pywit interactive client.
The conversation starts with a simple 'Hi' and can end at different points within different branches if a user taps a 'Thanks, bye' quick reply after a successful query.
If the conversation is then started with 'Hi' once again, the session state is saved from before which leads to wrong responses. What is the best way to delete the context after the user has said goodbye?
I tried creating a goodbye function that triggers after the bot has sent its final message, but it didn't work, e.g.
def goodbye(request):
    del request['context']  # or request.clear()
    return request
The documentation (https://wit.ai/docs/http/20160526#post--converse-link) suggests you clear the session_id and generate a new one but gives no hints as to how.
You can generate new session IDs using uuid. The session ID can be any text that is unique; it could even be the system date, but I suggest you use uuid.
Check here for how to generate it.
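For example, with Python's standard uuid module:

import uuid

# uuid4 gives a random unique ID; uuid1 is time-based
session_id = uuid.uuid4().hex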
I was confronted with the same issue and I solved it in the following way.
I first created a simple end_session action, to be called at the end of each conversation path:
def end_session(request):
    return {'end_session': True}
Then I inserted the following code just after returning from run_actions:
if 'end_session' in context:
    context = {}
    session_hash = uuid.uuid1().hex
As you see, in addition to clearing the context, as you do, I also recreate a new session id (as per Swapnesh Khare's suggestion).
I'm not sure this is the best solution, but it works for me.
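For context, here is roughly where that snippet fits in the conversation loop, assuming the older pywit client that still exposes run_actions (ACCESS_TOKEN and the actions dict, which includes end_session, are placeholders you would already have):

import uuid
from wit import Wit

client = Wit(access_token=ACCESS_TOKEN, actions=actions)

session_hash = uuid.uuid1().hex
context = {}

while True:
    message = raw_input('> ')  # input() on Python 3
    context = client.run_actions(session_hash, message, context)
    # If a conversation path ended with the end_session action, start fresh.
    if 'end_session' in context:
        context = {}
        session_hash = uuid.uuid1().hex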
I am using the standard asynchronous publisher example, and I noticed that the publisher will keep publishing the same message in a loop forever.
So I commented out the schedule_next_message call from publish_message to stop that loop.
But what I really want is for the publisher to start and publish only when a user gives it a "message_body" and "key";
basically, for the publisher to publish the user's inputs.
I was not able to find any examples or hints of how to make the publisher take inputs from the user in real time.
I am new to RabbitMQ, pika, Python, etc.
Here is the snippet of code I am talking about:
def publish_message(self):
    """If the class is not stopping, publish a message to RabbitMQ,
    appending a list of deliveries with the message number that was sent.
    This list will be used to check for delivery confirmations in the
    on_delivery_confirmations method.

    Once the message has been sent, schedule another message to be sent.
    The main reason I put scheduling in was just so you can get a good idea
    of how the process is flowing by slowing down and speeding up the
    delivery intervals by changing the PUBLISH_INTERVAL constant in the
    class.
    """
    if self._stopping:
        return

    message = {"service": "sendgrid", "sender": "nutshi@gmail.com",
               "receiver": "nutshi@gmail.com",
               "subject": "test notification", "text": "sample email"}
    routing_key = "email"

    properties = pika.BasicProperties(app_id='example-publisher',
                                      content_type='application/json',
                                      headers=message)

    self._channel.basic_publish(self.EXCHANGE, routing_key,
                                json.dumps(message, ensure_ascii=False),
                                properties)
    self._message_number += 1
    self._deliveries.append(self._message_number)
    LOGGER.info('Published message # %i', self._message_number)
    #self.schedule_next_message()
    #self.stop()

def schedule_next_message(self):
    """If we are not closing our connection to RabbitMQ, schedule another
    message to be delivered in PUBLISH_INTERVAL seconds.
    """
    if self._stopping:
        return
    LOGGER.info('Scheduling next message for %0.1f seconds',
                self.PUBLISH_INTERVAL)
    self._connection.add_timeout(self.PUBLISH_INTERVAL,
                                 self.publish_message)

def start_publishing(self):
    """This method will enable delivery confirmations and schedule the
    first message to be sent to RabbitMQ.
    """
    LOGGER.info('Issuing consumer related RPC commands')
    self.enable_delivery_confirmations()
    self.schedule_next_message()
The site does not let me add the solution as an answer, but I was able to solve my issue using raw_input().
Thanks
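For anyone landing here later, a rough sketch of that idea with a plain BlockingConnection (the 'message' exchange name and the JSON body are my own assumptions, not from the original example):

import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

while True:
    body = raw_input('message body (empty to quit): ')
    if not body:
        break
    key = raw_input('routing key: ')
    channel.basic_publish(exchange='message',
                          routing_key=key,
                          body=json.dumps({"text": body}, ensure_ascii=False),
                          properties=pika.BasicProperties(
                              content_type='application/json'))

connection.close()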
I know I'm a bit late to answer the question but have you looked at this one?
Seems to be a bit more related to what you need than using a full async publisher. Normally you use those with a Python Queue to pass messages between threads.
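To illustrate the Queue idea in the context of the asynchronous example (a rough sketch, not drop-in code; work_queue and the input thread are my own additions): an input thread puts user-supplied messages onto a thread-safe queue, and the publisher's scheduled publish_message callback drains it instead of generating messages itself.

import json
import Queue      # 'queue' on Python 3
import threading

work_queue = Queue.Queue()

def read_user_input():
    """Run in a separate thread: collect message bodies from the user."""
    while True:
        body = raw_input('message body (empty to quit): ')
        if not body:
            break
        work_queue.put({"text": body})

threading.Thread(target=read_user_input).start()

# Inside the example's publisher class, publish_message could then poll the
# queue on its PUBLISH_INTERVAL timer instead of building a message itself:
def publish_message(self):
    if self._stopping:
        return
    try:
        message = work_queue.get_nowait()
    except Queue.Empty:
        message = None
    if message is not None:
        self._channel.basic_publish(self.EXCHANGE, 'email',
                                    json.dumps(message, ensure_ascii=False))
        self._message_number += 1
        self._deliveries.append(self._message_number)
        LOGGER.info('Published message # %i', self._message_number)
    self.schedule_next_message()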