Handling / Bouncing Incoming Email on Errors with App Engine Python

When an incoming email generates an error, what is the best way to bounce the message? For example, you store a file in a db.BlobProperty, but an email comes in that exceeds the 1MB limit. Some kind of bounce error needs to be returned to the request so the email doesn't keep hitting the server and increasing the billing every 15 minutes. (Don't ask me how I know :-P ... it's actually a separate but related issue I posted in another question, here.)
But that other error made it clear I need to deal with this before I get an email with multiple attachments that nails me for 1GB of data.
Normally the mail server handles the bounce, as when you send to a bad address and an error is returned to the client/server. I have searched and haven't found anything helpful on this.
Is there an undocumented function? What is the proper response to return so that the originating server stops resending?

There's no way to bounce a message once it arrives at your App Engine app. You have two options:
Send the user a 'bounce message' yourself using the outgoing email API
Silently discard the message
In either case, you should install a top-level exception handler (frameworks like webapp and webapp2 have support for this) that logs the exception, performs the appropriate action, and then returns status code 200 instead of 500, so the message won't be redelivered repeatedly.
In your specific case, I'd also start storing the attachments in the Blobstore instead of a blob property, to avoid the 1MB limit.
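A minimal sketch of that pattern, using the webapp mail_handlers API (store_message, the sender address, and the notification text are placeholders for your own code):

import logging
from google.appengine.api import mail
from google.appengine.ext import webapp
from google.appengine.ext.webapp.mail_handlers import InboundMailHandler
from google.appengine.ext.webapp.util import run_wsgi_app

class ReceiveMailHandler(InboundMailHandler):
    def receive(self, mail_message):
        try:
            store_message(mail_message)  # your existing processing (hypothetical name)
        except Exception:
            logging.exception('Could not process mail from %s', mail_message.sender)
            # Option 1: send the 'bounce message' yourself via the outgoing mail API.
            mail.send_mail(sender='noreply@example.com',
                           to=mail_message.sender,
                           subject='Your message could not be processed',
                           body='Sorry, your message was too large to store.')
            # Option 2: do nothing here and silently discard the message.
        # Falling through returns a 200, so the message is not redelivered.

application = webapp.WSGIApplication([ReceiveMailHandler.mapping()])

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()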

Related

QuickFIX logon trouble: multiple rapid fire logon attempts being sent

I'm using QuickFIX with FIX 4.4 in Python 2.7.
Once I call initiator.start(), a connection is made and a logon message is sent. However, I never see the ACK and session-status message that the broker is sending back (all the overridden Application methods are just supposed to print out what they receive).
QuickFIX immediately retries the logon (according to the broker's log files), and the same thing happens, but according to the server I am already logged in.
QuickFIX then issues a Logout command, which the server complies with.
I have tried entering timeout values in the settings file, but to no avail. (Do I need to explicitly reference these values in the code to have them used, or will the engine see them and act accordingly automatically?)
Any ideas what is going on here?
Sounds like you do not have message logs enabled. If a message is rejected below the application level (for example because the sequence number is wrong or the message is malformed), it is discarded before your custom message handlers ever see it.
If you are starting your Initiator with a ScreenLogFactory (or no log factory at all), change it to a FileLogFactory. This will create a log file containing every message sent and received on the session, valid or not. Dollars to donuts you'll see your Logon acks in there, along with some transport-layer rejections.
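A minimal sketch of that change, assuming the standard quickfix Python bindings (the config file name and the Application subclass are illustrative; FileLogFactory also needs a FileLogPath entry in the settings file):

import quickfix as fix

settings = fix.SessionSettings("initiator.cfg")   # your existing settings file
app = MyApplication()                             # your fix.Application subclass
store_factory = fix.FileStoreFactory(settings)
log_factory = fix.FileLogFactory(settings)        # instead of fix.ScreenLogFactory
initiator = fix.SocketInitiator(app, store_factory, settings, log_factory)
initiator.start()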
Solved! I think there was something wrong with my data dictionary (FIX44.xml) file. I had seen a problem in it before, but thought I had fixed it. I downloaded a fresh copy, dropped it in, and now everything seems to be working. Maybe the bad dictionary was keeping QuickFIX from accepting the logon response?

What's the preferred method for throttling websocket connections?

I have a web app where I am streaming model changes to a Backbone collection in a Chrome client. There are a few Backbone views that may or may not render parts of the page depending on the type of update and what is being looked at. For example, some changes to a model result in the view for the collection being re-rendered, and there may or may not be a detail panel view open for the model that's being updated. These model changes can happen very quickly, as the server-side workflow involves quite verbose and rapid changes to the model.
Here's the problem: I'm getting a large number of errno 32 (broken pipe) errors in the web server's process when sending messages to the client, although the websocket connection is still up and its readyState is still 1 (OPEN).
What I suspect is happening is that the various views haven't finished rendering in the onmessage callback by the time the next message comes in. After I get these tracebacks in stdout, the websocket connection still works and the UI still updates.
If I put eventlet.sleep(0.02) in the loop that reads model changes off the message queue and sends them on the websocket, the broken-pipe errors go away, but this isn't a real solution and feels like a nasty hack.
Has anyone had similar problems with a websocket's onmessage function trying to do too much work and still being busy when the next message comes in? Does anyone have a solution?
I think the most efficient way to do this is for the client app to tell the server what it is displaying. The server keeps track of this and sends changes only for the objects currently being viewed, and only to the clients concerned.
One way to do this is with a "who watches what" list of items.
Items are indexed in two ways: by client ID, and with an isViewedBy chain-list inside each data object (I know it doesn't look clean to mix it with the data, but it is very efficient).
You'll also need a last-update timestamp on each data object.
When a client changes view, it sends an "I'm viewing this, and I have the version -timestamp-" message to the server. The server checks the timestamp and sends back the object if required. It also removes obsolete "who watches what" items (accessing them by client ID) and creates the new ones.
When a data object is updated, loop through that object's isViewedBy chain-list to find out which clients should be updated. Put the updates in a message buffer for each client and flush those buffers manually (so that if you update several items at the same time, they go out as one big message).
This is a lot of work, but your app will be efficient and scale gracefully, even with lots of objects and lots of clients. It sends only useful messages, and it is very unlikely that there will be too many of them.
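A minimal sketch of that bookkeeping in Python (all names are illustrative; a real implementation would also need locking and the per-client message buffers):

import time

watchers_by_object = {}   # object id -> set of client ids currently viewing it
objects_by_client = {}    # client id -> set of object ids that client views
last_update = {}          # object id -> timestamp of the latest change

def client_changes_view(client_id, viewed):
    """viewed maps object id -> the timestamp of the client's copy."""
    # Drop the client's old watch entries, then register the new ones.
    for obj_id in objects_by_client.pop(client_id, set()):
        watchers_by_object.get(obj_id, set()).discard(client_id)
    stale = []
    for obj_id, client_ts in viewed.items():
        objects_by_client.setdefault(client_id, set()).add(obj_id)
        watchers_by_object.setdefault(obj_id, set()).add(client_id)
        if last_update.get(obj_id, 0) > client_ts:
            stale.append(obj_id)    # these need to be re-sent to the client
    return stale

def object_changed(obj_id):
    last_update[obj_id] = time.time()
    return watchers_by_object.get(obj_id, set())   # clients to notify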
For your onMessage problem, I would store the incoming data in a queue and process it asynchronously.
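On the server side, a minimal sketch of that idea with eventlet: each client gets its own outgoing queue, and a dedicated green thread drains it, so a slow send never blocks the code producing the model changes (the handler name and the subscription step are illustrative):

import eventlet
from eventlet import websocket
from eventlet.queue import Queue

@websocket.WebSocketWSGI
def handle(ws):
    outgoing = Queue()   # per-client buffer of model changes

    def sender():
        while True:
            ws.send(outgoing.get())   # drains the buffer in its own green thread

    eventlet.spawn(sender)

    # ... subscribe to the message queue here and call outgoing.put(change)
    # for each model change this client should see ...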

Google App Engine Locking

I'm just wondering if any of you have come across this. I'm playing around with the Python mail API on Google App Engine, and I created an app that accepts a message body and address via POST, creates an entity in the datastore, and then a cron job runs every minute, grabs 200 entities, sends out the emails, and deletes the entities.
I ran an experiment with 1,500 emails: 1,500 entities were created in the datastore and 1,500 emails were sent out. I then looked at my stats and saw that approximately 45,000 recipients were counted against the quota. How is that possible?
So my question is: at which point does the "Recipients Emailed" quota actually count, when I create a mail object or when I actually send() it? I was hoping for the latter, but the quota seems to show something different. I do pass the mail object around between crons and tasks, etc. Does anybody have any info on this?
Thanks.
Update: it turns out I actually was sending out 45k emails from a queue of only 1,500. It seems that a new cron job starts before the previous one has finished and works on the same entities. So the question changes to: "How do I lock the entities and make sure nobody else selects them before the emails are sent?"
Thanks again!
Use tasks to send the email.
Create a task that takes a key as an argument, retrieves the stored entity for that key, then sends the email.
When your handler receives the body and address, store that as you do now but then enqueue a task to do the send and pass the key of your datastore object to the task so it knows which object to send an email for.
You may find that the body and address are small enough that you can simply pass them as arguments to a task and have the task send the email without having to store anything directly in the datastore.
This also has the advantage that if you want to impose a limit on the number of emails sent within a given amount of time (a quota), you can set up a task queue with that rate.
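A minimal sketch of that approach with webapp and the taskqueue API (the URLs, model, and field names are illustrative; to throttle the send rate, point taskqueue.add at a named queue whose rate is set in queue.yaml):

from google.appengine.api import mail, taskqueue
from google.appengine.ext import db, webapp

class QueuedMessage(db.Model):
    address = db.StringProperty()
    body = db.TextProperty()

class ReceiveHandler(webapp.RequestHandler):
    def post(self):
        msg = QueuedMessage(address=self.request.get('address'),
                            body=self.request.get('body'))
        msg.put()
        # Enqueue exactly one task per message, identified by its key.
        taskqueue.add(url='/tasks/send', params={'key': str(msg.key())})

class SendHandler(webapp.RequestHandler):
    def post(self):
        msg = db.get(db.Key(self.request.get('key')))
        if msg is None:
            return   # already sent and deleted; nothing to do
        mail.send_mail(sender='noreply@example.com', to=msg.address,
                       subject='Your message', body=msg.body)
        msg.delete()

application = webapp.WSGIApplication([('/receive', ReceiveHandler),
                                      ('/tasks/send', SendHandler)])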
Instantiating an email object certainly does not count against your "recipients emailed" quota. Like other App Engine services, you consume quota when you trigger an RPC, i.e. call send().
If you intended to email 1500 recipients and App Engine says you emailed 45,000, your code has a bug.

Google App Engine (Python) - Strange behaviour of REMOTE_ADDR

In order to make the registration process on my website easy, I allow users to enter their email address, which I will send a verification code to, or alternatively they can solve a captcha.
The problem is that, in order to prevent robots from registering accounts (with fake emails), I limit the number of registrations allowed per IP address, and if this limit is exceeded I trigger a warning in the logs.
However... what seems to be happening is that I am using os.environ['REMOTE_ADDR'] to check the remote address, but I am triggering warnings on addresses owned by Google (66.249.65.XXX). It is possible that this only happens after I change the version (not confirmed). Does anyone know how or why this might be happening? Shouldn't REMOTE_ADDR return the address of the client computer (and hopefully do so in all cases)?
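For reference, a simplified sketch of such a per-IP check using memcache (the limit and key naming are illustrative, and a real version would use a time window):

import logging
import os
from google.appengine.api import memcache

REGISTRATION_LIMIT = 5

def registration_allowed():
    ip = os.environ.get('REMOTE_ADDR', 'unknown')
    count = memcache.incr('reg-count:' + ip, initial_value=0)
    if count is not None and count > REGISTRATION_LIMIT:
        logging.warning('Registration limit exceeded for %s', ip)
        return False
    return True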
I am curious whether there is some behind-the-scenes redirection going on, and whether this is a normal event or only happens when a new version is deployed (perhaps when a new version is deployed the original server proxies the user to the new server, creating the illusion that the IP address is an internal one?).
I believe I have figured out the reason for seeing so many warnings from Google server IP addresses. It seems that immediately after a new user registers, the Google crawlers visit the same (registration) webpage (which I send information to as a GET instead of a POST, for reasons I won't get into). Of course, since many users are registering but only a few crawler machines are checking for periodic updates to my website, I trigger warnings that a particular (Google) IP is accessing the registration area repeatedly.

Bounced email on Google App Engine

I'm developing an application for Google App Engine (Python) which needs not only to send emails, but also to know which ones bounce back.
I created a special account for my domain, noreply@example.com, added it as an app admin, and send messages from it.
The problem is (and it is described here: http://code.google.com/p/googleappengine/issues/detail?id=1800) that GAE sets the Return-Path to some internal email address, which prevents receiving the bounced messages.
Is anyone aware of a possible workaround for this? Thanks.
It looks like someone bypassed this problem by switching to Yahoo's Mail API, which uses OAuth and can be used over HTTP. Until Google fixes the issue, this looks like a viable solution.
Until the issue is resolved, the workaround for my project is using TyphoonAE and binding its mail service to the app's Gmail account over SMTP (to send messages from noreply@example.com). When sending this way, noreply@example.com receives the bounced messages.
Google has actually since added a method for receiving bounced messages via an HTTP Request. It requires adding to your app.yaml:
inbound_services:
- mail_bounce
This will cause a request to hit /_ah/bounce each time a bounce is received. You can then handle the bounce by adding a handler for that URL; see the Handling Bounce Notifications section of the docs for details on how to extract additional information from those requests.
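A rough sketch of such a handler (the handler class is from google.appengine.ext.webapp.mail_handlers; the fields read from the notification are an assumption, check the docs for the exact structure, and app.yaml also needs a handler entry mapping /_ah/bounce with login: admin):

import logging
from google.appengine.ext import webapp
from google.appengine.ext.webapp.mail_handlers import BounceNotificationHandler
from google.appengine.ext.webapp.util import run_wsgi_app

class LogBounceHandler(BounceNotificationHandler):
    def receive(self, bounce_notification):
        # The notification carries the original headers plus the bounce text.
        logging.warning('Bounce received for: %s',
                        bounce_notification.original.get('to'))

application = webapp.WSGIApplication([('/_ah/bounce', LogBounceHandler)])

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()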
You could use a third-party "email marketing" API like CampaignMonitor that keeps track of the bounced addresses:
http://www.campaignmonitor.com/api/method/subscribers-getbounced/
But you'd have to send mail through them, and sync your user list with theirs through their API.
