I'm trying to use websockets with Django for small parts of my application.
I'm trying the first example from django-websocket-redis, which broadcasts a message:
from ws4redis.publisher import RedisPublisher
redis_publisher = RedisPublisher(facility='foobar', broadcast=True)
redis_publisher.publish_message('Hello World')
The subscribed clients do receive the message, but I'm also getting this error:
wrong number of arguments for 'set' command
[...]
Exception location: my_virtualenv/local/lib/python2.7/site-packages/redis/connection.py in read_response, line 344 (traced back from the publish_message() call)
My versions:
Django==1.6.2
django-websocket-redis==0.4.0
redis==2.9.1
Can someone help me debug this?
Looks like it's a bug.
Fix:
In ws4redis.redis_store.RedisStore.publish_message, change
self._connection.set(channel, message, ex=expire)
to
self._connection.setex(channel, expire, message)
The Redis SET command on older servers does not accept an expiry argument (the EX option was only added in Redis 2.6.12), so the extra arguments trigger the error. What was evidently intended was to store a value that expires after a number of seconds, which is the Redis SETEX command. The redis-py setex method is called as setex(name, time, value).
This resolves the "Wrong number of argument for 'set'" error.
ref: https://github.com/jrief/django-websocket-redis/pull/30
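To make the difference concrete, here is a small sketch of both calls using redis-py's StrictRedis (host and key are illustrative):

import redis

r = redis.StrictRedis(host='localhost', port=6379)

# set() with ex= sends "SET key value EX 10"; Redis servers older than
# 2.6.12 reject the extra arguments with "wrong number of arguments".
r.set('foobar', 'Hello World', ex=10)

# setex() sends the dedicated SETEX command, supported since Redis 2.0,
# so it works on old servers too. Note the argument order: name, time, value.
r.setex('foobar', 10, 'Hello World')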
As a workaround, I finally set the expiration time to 0:
WS4REDIS_EXPIRE = 0
This prevents ws4redis from storing anything in Redis.
Fixed since 0.4.1
I am interested in getting real-time data using the GDAX (Coinbase) WebSocket feed. I'm a total noob, so I am inspecting the example GDAX posted in their documentation:
import gdax, time

class myWebsocketClient(gdax.WebsocketClient):
    def on_open(self):
        self.url = "wss://ws-feed.gdax.com/"
        self.products = ["LTC-USD"]
        self.message_count = 0
        print("Lets count the messages!")

    def on_message(self, msg):
        self.message_count += 1
        if 'price' in msg and 'type' in msg:
            print("Message type:", msg["type"],
                  "\t# {}.3f".format(float(msg["price"])))

    def on_close(self):
        print("-- Goodbye! --")

wsClient = myWebsocketClient()
wsClient.start()
print(wsClient.url, wsClient.products)
while (wsClient.message_count < 500):
    print("\nmessage_count =", "{} \n".format(wsClient.message_count))
    time.sleep(1)
wsClient.close()
The output is:
...
Message type: received # 50.78.3f
Message type: open # 50.78.3f
Message type: done # 51.56.3f
Message type: received # 51.59.3f
Message type: open # 51.59.3f
Message type: done # 51.51.3f
Message type: done # 51.17.3f
Message type: done # 51.66.3f
Kernel died, restarting
I have a few questions regarding this code and output:
What does the message type (received, open, done, match) mean, which type is used for doing calculations, and why are some types skipped?
Why does running the code always end in 'Kernel died, restarting'?
The documentation states that this code is for illustration only. Does that mean that this isn't a proper way of getting real-time data in order to do stuff with it?
If you know some good articles or books that can teach a noob how to work with WebSockets, I would love to hear about them!
1) See the full documentation for each message type here.
2) Everything I can find related to that issue stems from environment setup, whether that is library dependencies not being installed properly or other environmental factors.
3) It's a proper way to set up a connection to the WebSocket, but they don't provide any error handling or other logic. The disclaimer is usually there to cover themselves legally and to lower expectations for the sample code (i.e. when someone hits errors like this, they aren't liable to fix, update, or help).
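To tie (1) back to the skipped messages in the output: handlers typically branch on the type field before trusting the other keys, since not every message carries a price. A sketch (field names per the GDAX full-channel docs):

def on_message(self, msg):
    msg_type = msg.get("type")
    if msg_type == "match":
        # "match" means a trade actually executed, so price and size are real fills
        print("trade:", msg["price"], msg["size"])
    elif msg_type in ("received", "open", "done"):
        # Order lifecycle events; e.g. "done" for a market order has no price,
        # which is why the sample code's 'price' in msg check skips some messages.
        pass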
The Python interpreter (3.6.2, 64-bit Windows) crashed on close() for me too.
Here is a fix (from https://github.com/danpaquin/gdax-python/issues/152):
client.stop = True
client.thread.join()
client.ws.close()
I just added these lines in the on_close method and have had no more crashes (so far).
PS: the linked issue says this should be fixed in the latest gdax version, but the latest pip release (1.0.6) still crashed for me.
There are quite a few pitfalls to be aware of when building a full, real-time level 3 order book that I cannot document here, but you may be interested to learn that GDAX now offers a level 2 channel with (almost real-time) updates that sends you the changed prices for the order book. It is probably much simpler to implement.
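For instance, a minimal level 2 subscription needs no client library at all, only the websocket-client package. A sketch (the channel name and message shapes follow the GDAX docs):

import json
from websocket import create_connection

# Open the feed and subscribe to the level2 channel for one product
ws = create_connection("wss://ws-feed.gdax.com")
ws.send(json.dumps({
    "type": "subscribe",
    "product_ids": ["LTC-USD"],
    "channels": ["level2"],
}))

# The first reply is a full book snapshot; later l2update messages
# carry only the price levels that changed.
while True:
    msg = json.loads(ws.recv())
    if msg.get("type") == "l2update":
        print(msg["changes"])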
Using the code below I'm sending an email on error. I'm trying to include a link to the Cloud Console logs in the email but the request ID seems to be wrong about 30% of the time.
When I find the request that the wrong ID points at, it's almost a perfect match, except the last three characters are 0 (in the Stackdriver console) instead of 101 (returned from the env variable); it's always the same substitution. Is this a bug in the Cloud Console, or am I trying to use these IDs wrong?
The code (stripped down version):
class ErrorAlertMiddleware(object):
    def process_response(self, request, response):
        if response.status_code == 500:
            logger.info(os.environ.get('REQUEST_LOG_ID'))
            msg = 'Link to logs: https://console.cloud.google.com/logs/viewer?' + '&'.join((
                'project=%s' % MY_APP_ID,
                'expandAll=true',
                'filters=request_id:%s' % os.environ.get('REQUEST_LOG_ID'),
                'resource=gae_app',
            ))
            # this is a utility func that simply sends email
            sendemail(ERROR_RECIPIENT, msg)
        return response
Note that I've also logged the REQUEST_LOG_ID to ensure it isn't being encoded or otherwise altered; the log output matches what shows up in the link.
Instead of os.environ.get('REQUEST_LOG_ID'), use request.environ.get('REQUEST_LOG_ID').
It may be that os.environ['REQUEST_LOG_ID'] changes between the start of the current request and the time you access it, but request.environ['REQUEST_LOG_ID'] should not change once the request is initialized. The docs state that if one request ID is greater than another, then it occurred later than the other. This implies that the request ID in the Stackdriver console was generated before the one in your email link, which makes me think that somewhere along the line os.environ['REQUEST_LOG_ID'] is being updated from '....000' to '....101' before you access it, while the copy in request.environ['REQUEST_LOG_ID'] remains unchanged.
For more info on request.environ, take a look at the source code in google.appengine.runtime.request_environment.py. I haven't really found documentation on it, but that code led me to believe that os.environ is not as safe to access as request.environ.
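Applied to the middleware in the question, the change is confined to where the ID is read. A sketch (MY_APP_ID, sendemail, and ERROR_RECIPIENT are as in the original):

class ErrorAlertMiddleware(object):
    def process_response(self, request, response):
        if response.status_code == 500:
            # Read the ID snapshotted when the request was initialized,
            # not the mutable process-wide os.environ copy.
            request_log_id = request.environ.get('REQUEST_LOG_ID')
            msg = 'Link to logs: https://console.cloud.google.com/logs/viewer?' + '&'.join((
                'project=%s' % MY_APP_ID,
                'expandAll=true',
                'filters=request_id:%s' % request_log_id,
                'resource=gae_app',
            ))
            sendemail(ERROR_RECIPIENT, msg)
        return response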
I'm experimenting with Django's send_mass_mail function. The code below keeps raising a "Too many values to unpack" error, and I can't figure out why. I'm following the docs (https://docs.djangoproject.com/en/1.5/topics/email/#send-mass-mail), which seem pretty straightforward, so what am I doing wrong? If it matters, the sender address is made up, but I can't see that mattering.
if matching_record.level == 1:
    users = self._get_users_to_be_notified(matching_record.category)
    email_recipients = [str(user.email) for user in users if user.email]
    message = 'Here is your requested notification that the service "%s" is having technical difficulties and has been set to "Critical".' % matching_record.name
    mail_tuple = ('Notification',
                  message,
                  'notifications#service.com',
                  email_recipients)
    send_mass_mail(mail_tuple)
send_mass_mail's first argument is a tuple of message tuples, but you are passing a single message tuple. Change the call as below and check whether it works:
send_mass_mail((mail_tuple,))
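For contrast, a two-message call looks like this (addresses are made up; the inner tuples follow the docs' (subject, message, from_email, recipient_list) format):

from django.core.mail import send_mass_mail

message1 = ('Notification', 'Service "Foo" is Critical.',
            'notifications#service.com', ['admin@example.com'])
message2 = ('Notification', 'Service "Bar" has recovered.',
            'notifications#service.com', ['ops@example.com'])

# One tuple per message; all messages go out over a single connection
send_mass_mail((message1, message2))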
I am using the boto library in Python to get Amazon SQS messages. In exceptional cases I don't delete messages from the queue, in order to give a couple more chances to recover from temporary failures. But I don't want to keep receiving failed messages forever. What I would like to do is either delete messages after they have been received more than 3 times, or skip receiving any message whose receive count is more than 3.
What is the most elegant way of doing it?
There are at least a couple of ways of doing this.
When you read a message in boto, you receive a Message object or some subclass thereof. The Message object has an "attributes" field that is a dict containing all message attributes known by SQS. One of the things SQS tracks is the approximate number of times the message has been read. So you could use this value to determine whether the message should be deleted, but you would have to be comfortable with the "approximate" nature of the value.
Alternatively, you could record message ID's in some sort of database and increment a count field in the database each time you read the message. This could be done in a simple Python dict if the messages are always being read within a single process or it could be done in something like SimpleDB if you need to record readings across processes.
Hope that helps.
Here's some example code:
>>> import boto.sqs
>>> c = boto.sqs.connect_to_region('us-east-1')  # substitute your queue's region
>>> q = c.lookup('myqueue')
>>> messages = c.receive_message(q, num_messages=1, attributes='All')
>>> messages[0].attributes
{u'ApproximateFirstReceiveTimestamp': u'1365474374620',
u'ApproximateReceiveCount': u'2',
u'SenderId': u'419278470775',
u'SentTimestamp': u'1365474360357'}
>>>
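If you go the counting route instead, a minimal in-process sketch looks like this (the threshold and the process() handler are illustrative):

# Per-process read counter keyed on the boto Message id
read_counts = {}

def handle(queue, message, max_reads=3):
    count = read_counts.get(message.id, 0) + 1
    read_counts[message.id] = count
    if count > max_reads:
        queue.delete_message(message)  # give up permanently
    else:
        process(message)               # hypothetical handler; delete on success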
Another way would be to append an extra identifier to the end of each message in your SQS queue, which keeps a count of the number of times the message has been read.
Also, if you don't want your service to poll these messages again and again, you can create one more queue, say a "dead message queue", and transfer the messages that have crossed the threshold to that queue.
AWS has built-in support for this; just follow the steps below:
create a dead letter queue
enable Redrive policy for the source queue by checking "Use Redrive Policy"
select the dead letter queue you created in step#1 for "Dead Letter Queue"
Set "Maximum Receives" as "3" or any value between 1 and 1000
How it works: whenever a message is received by a worker, its receive count increments. Once it reaches the "Maximum Receives" count, the message is pushed to the dead letter queue. Note that the receive count increments even if you view the message via the AWS console.
Source: Using Amazon SQS Dead Letter Queues
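If you prefer to configure this from code rather than the console, the same policy can be attached with boto. A sketch (the region, queue name, and ARN are illustrative):

import json
import boto.sqs

conn = boto.sqs.connect_to_region('us-east-1')
source_q = conn.get_queue('myqueue')

# Messages received more than maxReceiveCount times move to the DLQ
redrive_policy = json.dumps({
    'maxReceiveCount': 3,
    'deadLetterTargetArn': 'arn:aws:sqs:us-east-1:123456789012:my-dlq',
})
conn.set_queue_attribute(source_q, 'RedrivePolicy', redrive_policy)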
Get the ApproximateReceiveCount attribute from each message you read. If the count has crossed your threshold, move the message to another queue (where you can manage error messages) or just delete it.
foreach (var message in response.Messages)
{
    try
    {
        // Parse succeeded: handle the message, then remove it from the queue
        var notifyMessage = JsonConvert.DeserializeObject<NotificationMessage>(message.Body);
        Global.Sqs.DeleteMessageFromQ(message.ReceiptHandle);
    }
    catch (Exception ex)
    {
        // Parse failed: give up and delete after more than three reads.
        // ApproximateReceiveCount is only present if the receive request
        // asked for that attribute (or "All").
        var receiveMessageCount = int.Parse(message.Attributes["ApproximateReceiveCount"]);
        if (receiveMessageCount > 3)
            Global.Sqs.DeleteMessageFromQ(message.ReceiptHandle);
    }
}
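The same pattern in Python with boto, for reference (a sketch; the region, queue name, threshold, and process() handler are illustrative):

import boto.sqs

conn = boto.sqs.connect_to_region('us-east-1')
q = conn.get_queue('myqueue')

# The attribute must be requested explicitly for it to appear on the message
for message in q.get_messages(num_messages=10, attributes='ApproximateReceiveCount'):
    try:
        process(message.get_body())   # hypothetical handler
        q.delete_message(message)
    except Exception:
        # After more than three failed reads, drop the message for good
        if int(message.attributes['ApproximateReceiveCount']) > 3:
            q.delete_message(message)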
It can be done in a few steps.
Create the SQS connection:
sqsconnrec = SQSConnection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
Create the queue object:
request_q = sqsconnrec.create_queue("queue_Name")
Load the queue messages:
messages = request_q.get_messages()
Now you have an array of message objects; to find the total number of messages, just do len(messages).
Should work like a charm.
I am in the process of upgrading an older legacy system that is using Biztalk, MSMQs, Java, and python.
Currently, I am trying to upgrade a particular piece of the project which when complete will allow me to begin an in-place replacement of many of the legacy systems.
What I have done so far is recreate the legacy system in a newer version of Biztalk (2010) and on a machine that isn't on its last legs.
Anyway, the problem I am having is that there is a piece of Python code that picks up a message from an MSMQ and places it on another server. This code has been in place on our legacy system since 2004 and has worked ever since; as far as I know, it has never been changed.
Now when I rebuilt this, I started getting errors in the remote server and, after checking a few things out and eliminating many possible problems, I have established that the error occurs somewhere around the time the Python code is picking up from the MSMQ.
The error can be reproduced using just two messages. Please note that I am using sample XMLs here, as the actual ones are pretty long.
Message one:
<xml>
<field1>Text 1</field1>
<field2>Text 2</field2>
</xml>
Message two:
<xml>
<field1>Text 1</field1>
</xml>
Now if I submit message one followed by message two to the MSMQ, they both appear correctly on the queue. If I then call the Python script, message one is returned correctly but message two gains extra characters.
Post-Python message two:
<xml>
<field1>Text 1</field1>
</xml>1>Te
I thought at first that there might have been scoping problems within the Python code, but I have gone through it as well as I can and found none. However, I must admit that this project is the first time I've looked seriously at Python code.
The Python code first peeks at a message and then receives it. I have been able to see the message when the script peeks, and it shows the same corruption as when it receives.
Also, this error only shows up when going from a longer message to a shorter message.
I would welcome any suggestions of things that might be wrong, or things I could do to identify the problem.
I have googled and searched and gone a little crazy. This is holding up an entire project, as we can't begin replacing the older systems until this piece is in place to act as the new bridge.
Thanks for taking the time to read through my problem.
Edit: Here's the relevant Python code:
import sys
import pythoncom
from win32com.client import gencache

msmq = gencache.EnsureModule('{D7D6E071-DCCD-11D0-AA4B-0060970DEBAE}', 0, 1, 0)

def Peek(queue):
    qi = msmq.MSMQQueueInfo()
    qi.PathName = queue
    myq = qi.Open(msmq.constants.MQ_PEEK_ACCESS, 0)
    if myq.IsOpen:
        # Don't lose this pythoncom.Empty thing (it took a while)
        tmp = myq.Peek(pythoncom.Empty, pythoncom.Empty, 1)
        myq.Close()
        return tmp
This function is called from the main script. I don't have access to the calling code until Monday, but the call is basically:
msg = MSMQ.Peek()
2nd edit: I am attaching the first half of the script. This basically loops around:
import base64, xmlrpclib, time
import MSMQ, Config, Logger
import XmlRpcExt, os, whrandom

QueueDetails = Config.InQueueDetails
sleeptime = Config.SleepTime
XMLRPCServer = Config.XMLRPCServer
usingBase64 = Config.base64ing
version = Config.version
verbose = Config.verbose
LogO = Logger.Logger()

def MSMQToIAMS():
    # moved svr cons out of daemon loop
    LogO.LogP(version)
    svr = xmlrpclib.Server(XMLRPCServer, XmlRpcExt.getXmlRpcTransport())
    while 1:
        GotOne = 0
        for qd in QueueDetails:
            queue, agency, messagetype = qd
            #LogO.LogD('['+version+"] Searching queue %s for messages"%queue)
            try:
                msg = MSMQ.Peek(queue)
            except Exception, e:
                LogO.LogE("Peeking at \"%s\" : %s" % (queue, e))
                continue
            if msg:
                try:
                    msg = msg.__call__().encode('utf-8')
                except:
                    LogO.LogE("Could not convert message on \"%s\" to a string, leaving it on queue" % queue)
                    continue
                if verbose:
                    print "++++++++++++++++++++++++++++++++++++++++"
                    print msg
                    print "++++++++++++++++++++++++++++++++++++++++"
                LogO.LogP("Found Message on \"%s\" : \"%s...\"" % (queue, msg[:40]))
                try:
                    rv = svr.accept(msg, agency, messagetype)
                    if rv[0] != "OK":
                        raise Exception, rv[0]
                    LogO.LogP('Message has been sent successfully to IAMS from %s' % queue)
                    MSMQ.Receive(queue)
                    GotOne = 1
                    StoreMsg(msg)
                except Exception, e:
                    LogO.LogE("%s" % e)
        if GotOne == 0:
            time.sleep(sleeptime)
        else:
            gotOne = 0
This is the full code that calls MSMQ. It creates a little program that watches MSMQ and, when a message arrives, picks it up and sends it off to another server.
This sounds really Python-specific (of which I know nothing) rather than MSMQ-specific. Isn't this just a case of a memory variable being used twice without being cleared in between? The second message is shorter than the first, so there are characters from the first that aren't being overwritten. What do the relevant parts of the Python code look like?
[[21st April]]
The code just shows that you are populating the tmp variable with a message. What happens to tmp before the next message is accessed? I'm assuming it is not cleared.
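One way to test that hypothesis is to copy the body out the moment it is peeked, truncated to the length MSMQ reports, so stale bytes from a longer earlier message cannot trail a shorter one. A sketch (assuming the COM object exposes the standard MSMQMessage Body and BodyLength properties; BodyLength is in bytes, so a Unicode body may need the value halved):

def PeekCopy(queue):
    # Hypothetical variant of Peek() that defensively copies the body
    qi = msmq.MSMQQueueInfo()
    qi.PathName = queue
    myq = qi.Open(msmq.constants.MQ_PEEK_ACCESS, 0)
    if not myq.IsOpen:
        return None
    tmp = myq.Peek(pythoncom.Empty, pythoncom.Empty, 1)
    myq.Close()
    if tmp is None:
        return None
    # Slice to the declared length so reused buffer contents are dropped
    return unicode(tmp.Body)[:tmp.BodyLength]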