Django send_mail "lock already in place. quitting." - python

I am having an issue with send_mail in Django. It had been working fine until I changed some email settings to use an EMAIL_HOST_USER and EMAIL_HOST_PASSWORD. After that, my test emails were not sent.
I reverted to my old email settings, which did work, and it is still not sending emails. I have restarted things to make sure that my current settings are in effect. When I run the command 'python manage.py send_mail' to test, I get this response:
acquiring lock...
lock already in place. quitting.
In looking at the code in mailer.engine:
def send_all():
    """
    Send all eligible messages in the queue.
    """
    lock = FileLock("send_mail")

    logging.debug("acquiring lock...")
    try:
        lock.acquire(LOCK_WAIT_TIMEOUT)
    except AlreadyLocked:
        logging.debug("lock already in place. quitting.")
        return
    except LockTimeout:
        logging.debug("waiting for the lock timed out. quitting.")
        return
    logging.debug("acquired.")

    start_time = time.time()

    dont_send = 0
    deferred = 0
    sent = 0
It appears that because it gets the AlreadyLocked exception, it exits without sending the emails. If this is truly the case, how do I break this lock and start over?

Most probably there is a stale lock file (named after "send_mail") left in your working directory from a previous run that crashed or was killed - just remove it.
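As a sketch of that cleanup, assuming the lock files are named `send_mail.lock*` and live in the directory where `send_all()` ran (the exact name depends on the lockfile backend, so check your directory first):

```python
import glob
import os

def clear_stale_locks(directory="."):
    """Remove leftover send_mail lock files so send_all() can acquire the lock.

    Only run this when you are sure no other send_mail process is active,
    otherwise you defeat the purpose of the lock.
    """
    removed = []
    for path in glob.glob(os.path.join(directory, "send_mail.lock*")):
        os.remove(path)
        removed.append(path)
    return removed
```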

Related

IMAP IDLE is not returning any information despite new emails whilst running

So I'm trying to set up a very simple IMAP IDLE script (Python 3.10). I've seen examples that all contain identical syntax, and it compiles fine; however, the mail server doesn't appear to return any information when a new email is received during IDLE.
I've tried using IMAPClient and imap_tools, both of which have successfully accessed the inbox and displayed subjects, dates, etc.
Neither of them, however, returns any data during IDLE for received emails.
Is this what IDLE is meant to do? (I'm starting to question myself)
Perhaps there are settings I need to enable for it to work? (Microsoft Outlook)
Or is the syntax somehow incorrect?
Edit: Alternatively, if there are better libraries/modules to read new emails, download them and their attachments, please point me towards them.
For test data I'm sending emails to myself from 3 different accounts (gmail.com, yahoo.com, outlook.com) and none of them are working.
NOTE: I need to download attachments after I've got this working, and to my knowledge IMAPClient does not support this, so I need to use imap_tools or imaplib
General Vars
# Naturally, these are not actual values used
HOST = 'imap.outlook.com'
USER = 'address@domain.com'
PASS = 'password'
Using IMAPClient:
from imapclient import IMAPClient

server = IMAPClient(HOST)
server.login(USER, PASS)
server.select_folder("INBOX")

# Start IDLE Mode
server.idle()
while True:
    try:
        responses = server.idle_check(timeout=30)  # IDLE for 30 seconds
        print("Server sent:", responses if responses else "nothing")
    except KeyboardInterrupt:
        break
server.idle_done()
print("\nIDLE mode done")
server.logout()
Using imap_tools: Example 1
from imap_tools import MailBox, A

with MailBox(HOST).login(USER, PASS) as mailbox:
    mailbox.idle.start()
    responses = mailbox.idle.poll(timeout=60)  # IDLE for 60 seconds
    mailbox.idle.stop()
    if responses:
        for msg in mailbox.fetch(A(seen=False)):
            print(msg.date, msg.subject)
    else:
        print('No new emails')
Using imap_tools: Example 2
import time, socket, imaplib, traceback
from imap_tools import A, MailBox, MailboxLoginError, MailboxLogoutError

done = False
while not done:
    connection_start_time = time.monotonic()
    connection_live_time = 0.0
    try:
        with MailBox('imap.outlook.com').login(USER, PASS, 'INBOX') as mailbox:
            print('## new connection', time.asctime())
            while connection_live_time < 29 * 60:
                try:
                    responses = mailbox.idle.wait(timeout=3 * 60)
                    print(time.asctime(), 'IDLE responses:', responses)
                    if responses:
                        for msg in mailbox.fetch(A(seen=False)):
                            print('->', msg.date, msg.subject)
                except KeyboardInterrupt:
                    print('~KeyboardInterrupt')
                    done = True
                    break
                connection_live_time = time.monotonic() - connection_start_time
    except (TimeoutError, ConnectionError, imaplib.IMAP4.abort, MailboxLoginError, MailboxLogoutError, socket.herror, socket.gaierror, socket.timeout) as e:
        print(f'## Error\n{e}\n{traceback.format_exc()}\nReconnecting in a minute. . .')
        time.sleep(60)
Maybe switching to another mail server will get everything working fine...
Outlook's IDLE feature worked well for me until August 12; then it failed to trigger properly.
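For the attachment step mentioned in the question, once a raw message has been fetched (e.g. via imaplib), the standard library's email module can extract the payloads without any third-party package. A minimal sketch (the function name is ours, not from any library):

```python
import email

def extract_attachments(raw_bytes):
    """Return {filename: payload_bytes} for attachments in a raw RFC 822 message."""
    msg = email.message_from_bytes(raw_bytes)
    out = {}
    for part in msg.walk():
        filename = part.get_filename()
        if filename and part.get_content_disposition() == "attachment":
            # decode=True reverses the transfer encoding (e.g. base64)
            out[filename] = part.get_payload(decode=True)
    return out
```

With imap_tools you get the same information more directly via `msg.attachments` (each attachment exposes `filename` and `payload`), so this helper is mainly useful with plain imaplib.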

Flask redirect from a child process - make a waiting page using only python

Today I am trying to make a "waiting page" using Flask.
I mean a client makes a request, I want to show him a page like "wait, the process can take a few minutes", and when the process ends on the server, display the result. I want to display "wait" before my function manageBill.teste, but redirect only works when it has returned, right?
@application.route('/teste', methods=['POST', 'GET'])
def test_conf():
    if request.method == 'POST':
        if request.form.get('confList') != None:
            conf_file = request.form.get('confList')
            username = request.form.get('username')
            password = request.form.get('password')
            date = request.form.get('date')
            if date == '' or conf_file == '' or username == '' or password == '':
                return "You forget to provide information"

            newpid = os.fork()
            if newpid == 0:  # in child process
                print('A new child ', os.getpid())
                error = manageBill.teste(conf_file, username, password, date)
                print("Error :" + error)
                return redirect('/tmp/' + error)
            else:  # in parent process
                return redirect('/tmp/wait')
        return error
    return manageBill.manageTest()
My /tmp route:
@application.route('/tmp/<wait>')
def wait_teste(wait):
    return "The process can take a few minutes, you will be redirected when the teste is done.<br>" + wait
If you are using the WSGI server (the default), requests are handled by threads. This is likely incompatible with forking.
But even if it wasn't, you have another fundamental issue. A single request can only produce a single response. Once you return redirect('/tmp/wait') that request is done. Over. You can't send anything else.
To support such a feature you have a few choices:
The most common approach is to have AJAX make the request to start a long running process. Then setup an /is_done flask endpoint that you can check (via AJAX) periodically (this is called polling). Once your endpoint returns that the work is done, you can update the page (either with JS or by redirecting to a new page).
Have /is_done be a page instead of an API endpoint that is queried from JS. Set an HTTP refresh on it (with some short timeout like 10 seconds). Then your server can send a redirect for the /is_done endpoint to the results page once the task finishes.
Generally you should strive to serve web requests as quickly as possible. You shouldn't leave connections open (to wait for a long task to finish) and you should offload these long running tasks to a queue system running separately from the web process. In this way, you can scale your ability to handle web requests and background processes separately (and one failing does not bring the other down).
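The polling approach described above can be sketched roughly as follows. This is a minimal illustration, assuming a single-process dev server with an in-memory dict; the `/start` and `/is_done` route names and `long_task` stand-in are placeholders, and a real deployment would use a task queue (Celery, RQ) instead:

```python
import threading
import uuid

from flask import Flask, jsonify

app = Flask(__name__)
results = {}  # task_id -> result; in-memory only, lost on restart

def long_task(task_id):
    # Stand-in for the long-running work (manageBill.teste in the question).
    results[task_id] = "done"

@app.route("/start", methods=["POST"])
def start():
    # Kick off the work in the background and return immediately.
    task_id = str(uuid.uuid4())
    results[task_id] = None  # mark as pending
    threading.Thread(target=long_task, args=(task_id,)).start()
    return jsonify({"task_id": task_id})

@app.route("/is_done/<task_id>")
def is_done(task_id):
    # The page's JS polls this endpoint until it reports done.
    return jsonify({"done": results.get(task_id) is not None})
```

The browser first POSTs to `/start`, renders the "please wait" page, then polls `/is_done/<task_id>` until it returns true and redirects to the results.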

Executing script from Alexa trigger

I've been working with the example-minimal.py script from https://github.com/toddmedema/echo and need to alter it so that rather than printing the status changes to the terminal, it executes another script.
I'm a rank amateur but eager to learn and even more eager to get this project done.
Thanks in advance for any help you can provide!!
""" fauxmo_minimal.py - Fabricate.IO
This is a demo python file showing what can be done with the debounce_handler.
The handler prints True when you say "Alexa, device on" and False when you say
"Alexa, device off".
If you have two or more Echos, it only handles the one that hears you more clearly.
You can have an Echo per room and not worry about your handlers triggering for
those other rooms.
The IP of the triggering Echo is also passed into the act() function, so you can
do different things based on which Echo triggered the handler.
"""
import fauxmo
import logging
import time
from debounce_handler import debounce_handler

logging.basicConfig(level=logging.DEBUG)

class device_handler(debounce_handler):
    """Publishes the on/off state requested,
    and the IP address of the Echo making the request.
    """
    TRIGGERS = {"device": 52000}

    def act(self, client_address, state, name):
        print "State", state, "on ", name, "from client #", client_address
        return True

if __name__ == "__main__":
    # Startup the fauxmo server
    fauxmo.DEBUG = True
    p = fauxmo.poller()
    u = fauxmo.upnp_broadcast_responder()
    u.init_socket()
    p.add(u)

    # Register the device callback as a fauxmo handler
    d = device_handler()
    for trig, port in d.TRIGGERS.items():
        fauxmo.fauxmo(trig, u, p, None, port, d)

    # Loop and poll for incoming Echo requests
    logging.debug("Entering fauxmo polling loop")
    while True:
        try:
            # Allow time for a ctrl-c to stop the process
            p.poll(100)
            time.sleep(0.1)
        except Exception, e:
            logging.critical("Critical exception: " + str(e))
            break
I'm going to try and be helpful by going through that script and explaining what each bit does. This should help you understand what it's doing, and therefore what you need to do to get it running something else:
import fauxmo
This is a library that allows whatever device is running the script to pretend to be a Belkin WeMo; a device that is triggerable by the Echo.
import logging
import time
from debounce_handler import debounce_handler
This is importing some more libraries that the script will need. Logging will be used for logging things, which is useful for debugging, time will be used to cause the script to pause so that you can quit it by typing ctrl-c, and the debounce_handler library will be used to keep multiple Echos from reacting to the same voice command (which would cause a software bounce).
logging.basicConfig(level=logging.DEBUG)
Configures a logger that will allow events to be logged to assist in debugging.
class device_handler(debounce_handler):
    """Publishes the on/off state requested,
    and the IP address of the Echo making the request.
    """
    TRIGGERS = {"device": 52000}

    def act(self, client_address, state, name):
        print "State", state, "on ", name, "from client #", client_address
        return True
We've created a class called device_handler which contains a dictionary called TRIGGERS and a function called act.
act takes a number of variables as input: self (any data structures in the class, such as our TRIGGERS dictionary), client_address, state, and name. We don't know what these are yet, but the names are quite self-explanatory, so we can guess that client_address is probably going to be the IP address of the Echo, state the state that it is in, and name will be its name. This is the function that you're going to want to edit, since it is the final function triggered by the Echo. You can probably just put whatever function you want after the print statement. The act function returns True when called.
if __name__ == "__main__":
This will execute everything indented below it if you're running the script directly. More detail about that here if you want it.
# Startup the fauxmo server
fauxmo.DEBUG = True
p = fauxmo.poller()
u = fauxmo.upnp_broadcast_responder()
u.init_socket()
p.add(u)
As the comment suggests, this starts the fake WeMo server. We enable debugging, which just prints any debug messages to the command line, create a poller, p, which can process incoming messages, and create a upnp broadcast responder, u, which can handle UPnP device registration. We then tell u to initialise a socket, setting itself up on the network listening for UPnP devices, and add u to p so that we can respond when a broadcast is received.
# Register the device callback as a fauxmo handler
d = device_handler()
for trig, port in d.TRIGGERS.items():
    fauxmo.fauxmo(trig, u, p, None, port, d)
As the comment says, this sets up an instance of the device handler class that we made earlier. We then for-loop through the items in the TRIGGERS dictionary of our device handler d and call fauxmo.fauxmo using the information found in the dictionary. If we look at the dictionary definition in the class we can see that there's only one entry, a "device" trigger on port 52000. This essentially does the bulk of the work, making the actual fake WeMo device talk to the Echo. When fauxmo.fauxmo receives a suitable trigger, it calls the act function in the device_handler class we defined before.
# Loop and poll for incoming Echo requests
logging.debug("Entering fauxmo polling loop")
while True:
    try:
        # Allow time for a ctrl-c to stop the process
        p.poll(100)
        time.sleep(0.1)
    except Exception, e:
        logging.critical("Critical exception: " + str(e))
        break
And here we enter the fauxmo polling loop. This loops indefinitely, checking to see if we've received a message: it polls for messages to see if it's received anything, waits for a bit, then polls again. If it can't do that for some reason, the script breaks out of the loop and the error is logged so you can see what went wrong.
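To execute another script instead of printing, act() can shell out with subprocess. A sketch in Python 3 syntax (the original script above is Python 2); the device_on.py / device_off.py script names are placeholders for whatever you want to run:

```python
import subprocess
import sys

def run_script_for_state(state,
                         on_cmd=("device_on.py",),
                         off_cmd=("device_off.py",)):
    """Run the on- or off-script and report whether it exited cleanly.

    state is the True/False value fauxmo passes to act(); on_cmd/off_cmd are
    argument tuples appended after the Python interpreter.
    """
    script = on_cmd if state else off_cmd
    completed = subprocess.run([sys.executable, *script])
    return completed.returncode == 0
```

Inside act() you would simply `return run_script_for_state(state)` after (or instead of) the print statement.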
Just to clarify: if the fauxmo loop is running, then the script is fine, right?
I think the OP is not getting any connection between the Echo and the fake WeMo device. It can help to install the WeMo skill first. You may need an original WeMo device initially, though.
I know these are old threads, but it might still help someone.

Producing persistent message using stompest for python

I am not able to send a persistent message to an AMQ queue using stompest and Python. I don't know what header to use.
Below is source code
from stompest.config import StompConfig
from stompest.sync import Stomp
import os

CONFIG = StompConfig('tcp://localhost:61613')
QUEUE = '/queue/myQueue'

if __name__ == '__main__':
    try:
        client = Stomp(CONFIG)
        client.connect({'login': '#####', 'passcode': '#####'})
        for i in range(10):
            msg = "Test Message" + str(i)
            client.send(QUEUE, msg)
        client.disconnect()
    except Exception, e:
        print e
If you go persistent, you may also want to send your message in a transaction (StompSpec comes from stompest.protocol):
from stompest.protocol import StompSpec

with client.transaction(receipt='important') as transaction:
    client.send(QUEUE, 'test', {'persistent': 'true', StompSpec.TRANSACTION_HEADER: transaction})
This way, you can ensure all or none of a set of messages ends up on a queue. If there is an error raised within the transaction block, the message(s) won't be committed to the queue. The same goes for reading messages.
You have to change the send line to this:
client.send(QUEUE, msg, headers={'persistent': 'true'})

Python/Django: sending emails in the background

Imagine a situation in which a user performs an action on a website and admins are notified.
Imagine there are 20 admins to notify. By using normal methods for sending emails with Django the user will have to wait until all the emails are sent before being able to proceed.
How can I send all the emails in a separate process so the user doesn't have to wait? Is it possible?
Use celery as a task queue and django-celery-email, which is a Django e-mail backend that dispatches e-mail sending to a celery task.
Another option is django-mailer. It queues up mail in a database table and then you use a cron job to send them.
https://github.com/pinax/django-mailer
A thread may be a possible solution. I use threads intensively in my application for heavy tasks.
# This Python file uses the following encoding: utf-8

# threading
from threading import Thread
...

class afegeixThread(Thread):
    def __init__(self, usuari, parameter=None):
        Thread.__init__(self)
        self.parameter = parameter
        ...

    def run(self):
        errors = []
        try:
            if self.parameter:
                ....
        except Exception, e:
            ...
...
n = afegeixThread('p1')
n.start()
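The thread pattern above reduces to a small sketch: hand the notification work to a background thread so the request handler returns immediately. The helper below is our own illustration, not a Django API; `send_func` stands in for whatever actually sends the mail (e.g. Django's `send_mail`):

```python
import threading

def notify_in_background(send_func, recipients, message):
    """Start a daemon thread that calls send_func(recipient, message) for each admin.

    Returns the Thread so callers (e.g. tests) can join(); a web view would
    just start it and return its response right away.
    """
    def worker():
        for recipient in recipients:
            send_func(recipient, message)

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t
```

Note that threads die with the web process and give you no retries, which is why celery or django-mailer is the more robust choice for anything important.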
