How to auto-refresh requested data and refresh variables shown in HTML? - python

So basically I am using a Python requests POST to grab data about the next bus arrival time, and then I used WebSocket and Tornado to make an HTML webpage that shows the data I grabbed.
However, I really don't know how to auto-refresh the data and the HTML. I tried to use this:
import threading

def getETA():
    threading.Timer(5.0, getETA).start()
    ...

and the Python file ends with:

if __name__ == "__main__":
    app.listen(8888)
    ioloop.IOLoop.instance().start()
    getETA()
So the code runs and shows this error:
Exception in thread Thread-25:
Traceback (most recent call last):
  File "C:\Python34\lib\threading.py", line 920, in _bootstrap_inner
    self.run()
  File "C:\Python34\lib\threading.py", line 1186, in run
    self.function(*self.args, **self.kwargs)
  File "C:\Users\Tan\Desktop\DIP\DIP WEB INTERFACE\testing.py", line 183, in getETA
    app.listen(8888)
  File "C:\Python34\lib\site-packages\tornado\web.py", line 1788, in listen
    server.listen(port, address)
  File "C:\Python34\lib\site-packages\tornado\tcpserver.py", line 126, in listen
    sockets = bind_sockets(port, address=address)
  File "C:\Python34\lib\site-packages\tornado\netutil.py", line 187, in bind_sockets
    sock.bind(sockaddr)
OSError: [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted
Please suggest the correct way, or another method, to make my HTML page on localhost show the refreshed values.
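One approach that avoids re-binding the port is to let Tornado's own IOLoop drive the refresh instead of threading.Timer, and to call app.listen() exactly once. A minimal sketch, assuming app is the Tornado application from the question and getETA() only fetches and pushes data (it must not call app.listen() again):

import tornado.ioloop
import tornado.web

def getETA():
    # fetch the next-bus arrival time here and push it to the connected
    # WebSocket clients; never bind the port inside this function
    pass

app = tornado.web.Application([])  # your page/WebSocket handlers go here

if __name__ == "__main__":
    app.listen(8888)                                        # bind the port exactly once
    tornado.ioloop.PeriodicCallback(getETA, 5000).start()   # call getETA() every 5000 ms
    tornado.ioloop.IOLoop.instance().start()

The WinError 10048 in the traceback comes from getETA() calling app.listen(8888) on every timer tick, which tries to bind port 8888 again while it is already in use.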

Related

How to debug a stuck asyncio coroutine in Python?

There are lots of coroutines in my production code which get stuck at an unknown position while processing a request. I attached gdb with the Python support extension to the process, but it doesn't show the exact line in the coroutine where the process is stuck, only the primary stack trace. Here is a minimal example:
import asyncio

async def hello():
    await asyncio.sleep(30)
    print('hello world')

asyncio.run(hello())
(gdb) py-bt
Traceback (most recent call first):
  File "/usr/lib/python3.8/selectors.py", line 468, in select
    fd_event_list = self._selector.poll(timeout, max_ev)
  File "/usr/lib/python3.8/asyncio/base_events.py", line 2335, in _run_once
  File "/usr/lib/python3.8/asyncio/base_events.py", line 826, in run_forever
    None, getaddr_func, host, port, family, type, proto, flags)
  File "/usr/lib/python3.8/asyncio/base_events.py", line 603, in run_until_complete
    self.run_forever()
  File "/usr/lib/python3.8/asyncio/runners.py", line 299, in run
  File "main.py", line 7, in <module>
GDB shows a trace that ends on line 7, but the code is obviously stuck on line 4. How can I make it show a more complete trace with nested coroutines?
You can use aiodebug.log_slow_callbacks.enable(0.05).
For more details see: https://pypi.org/project/aiodebug/
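For context, a minimal sketch of how that call is usually wired in, assuming the aiodebug package is installed. Note that it logs event-loop callbacks that block for longer than the threshold; it does not print nested coroutine stacks:

import asyncio
import logging
from aiodebug import log_slow_callbacks  # pip install aiodebug

logging.basicConfig(level=logging.WARNING)   # make the slow-callback warnings visible
log_slow_callbacks.enable(0.05)              # warn when one event-loop step takes > 50 ms

async def hello():
    await asyncio.sleep(30)   # awaiting a sleep does not block the loop, so it is not reported
    print('hello world')

asyncio.run(hello())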

Downloading second file from ftp fails

I want to download multiple files from FTP in Python. My code works when I just download one file, but it does not work for more than one!
import urllib
urllib.urlretrieve('ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_package/00/00/PMC1790863.tar.gz', 'file1.tar.gz')
urllib.urlretrieve('ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_package/00/00/PMC2329613.tar.gz', 'file2.tar.gz')
The error says:
Traceback (most recent call last):
  File "/home/ehsan/dev_center/bigADEVS-bknd/daemons/crawler/ftp_oa_crawler.py", line 3, in <module>
    urllib.urlretrieve('ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_package/00/00/PMC2329613.tar.gz', 'file2.tar.gz')
  File "/usr/lib/python2.7/urllib.py", line 98, in urlretrieve
    return opener.retrieve(url, filename, reporthook, data)
  File "/usr/lib/python2.7/urllib.py", line 245, in retrieve
    fp = self.open(url, data)
  File "/usr/lib/python2.7/urllib.py", line 213, in open
    return getattr(self, name)(url)
  File "/usr/lib/python2.7/urllib.py", line 558, in open_ftp
    (fp, retrlen) = self.ftpcache[key].retrfile(file, type)
  File "/usr/lib/python2.7/urllib.py", line 906, in retrfile
    conn, retrlen = self.ftp.ntransfercmd(cmd)
  File "/usr/lib/python2.7/ftplib.py", line 334, in ntransfercmd
    host, port = self.makepasv()
  File "/usr/lib/python2.7/ftplib.py", line 312, in makepasv
    host, port = parse227(self.sendcmd('PASV'))
  File "/usr/lib/python2.7/ftplib.py", line 830, in parse227
    raise error_reply, resp
IOError: [Errno ftp error] 200 Type set to I
What should I do?
It is a bug in urllib in Python 2.7, reported here. The reason behind it is explained here:
Now, when a user tries to download the same file or another file from same directory, the key (host, port, dirs) remains the same so open_ftp() skips ftp initialization. Because of this skipping, previous FTP connection is reused and when new commands are sent to the server, server first sends the previous ACK. This causes a domino effect and each response gets delayed by one and we get an exception from parse227()
A possible solution is to clear the cache that may have been built up by previous calls. You can call urllib.urlcleanup() between your urlretrieve calls to do this, as mentioned here.
Hope this helps!
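A minimal sketch of that workaround, assuming Python 2.7 and the same two archives as in the question:

import urllib  # Python 2.7

urls = [
    'ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_package/00/00/PMC1790863.tar.gz',
    'ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_package/00/00/PMC2329613.tar.gz',
]

for i, url in enumerate(urls, start=1):
    urllib.urlretrieve(url, 'file%d.tar.gz' % i)
    urllib.urlcleanup()  # clear urllib's cached FTP connection before the next download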

Rabbit MQ python script. Socket closed when connection was open

Need some help! While running a Python script that uses RabbitMQ RPC, I am getting a "Socket 104, Socket closed when connection was open" error. Below is the Python traceback and some code:
Traceback (most recent call last):
  File "./server.py", line 34, in <module>
    channel.start_consuming()
  File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1681, in start_consuming
    self.connection.process_data_events(time_limit=None)
  File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 656, in process_data_events
    self._dispatch_channel_events()
  File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 469, in _dispatch_channel_events
    impl_channel._get_cookie()._dispatch_events()
  File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1310, in _dispatch_events
    evt.body)
  File "./server.py", line 30, in on_request
    body=json.dumps(DEVICE_INFO))
  File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1978, in basic_publish
    mandatory, immediate)
  File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 2065, in publish
    self._flush_output()
  File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1174, in _flush_output
    *waiters)
  File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 395, in _flush_output
    raise exceptions.ConnectionClosed()
pika.exceptions.ConnectionClosed
Apologies, as I am unable to comment due to low reputation. Could you provide a little more information on how you are opening your connection? Is it really open?
It might be because of a loss of connection with the RabbitMQ server, since pika doesn't deal with disconnects and this often results in a similar stacktrace.
I also had a similar problem. In my case it was because my pika connection was dropping after some time, and my colleague was able to deal with this by adding a wait time for mq:port_number.
We were using a Docker container, so we added the following line to our invoke.sh to wait for mq:
filename.py --wait-secs 30 --port-wait mq:5672
I hope you are able to resolve this after doing that.
Otherwise it would be better to check whether the connection is being dropped by pika before your Python script runs, or to provide more information on how you are invoking it.
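If it helps, one way to implement that kind of wait in plain Python before opening the pika connection is sketched below. The host name mq, port 5672 and the 30-second timeout are taken from the example above, and wait_for_port is a hypothetical helper, not part of pika:

import socket
import time

def wait_for_port(host, port, timeout_secs=30):
    # Poll until host:port accepts TCP connections, or give up after timeout_secs.
    deadline = time.time() + timeout_secs
    while time.time() < deadline:
        try:
            socket.create_connection((host, port), timeout=2).close()
            return True
        except socket.error:
            time.sleep(1)
    return False

if not wait_for_port('mq', 5672):
    raise RuntimeError('RabbitMQ is not reachable on mq:5672')
# only now create the pika connection and call channel.start_consuming()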

Connection to remote MySQL db from Python 3.4

I'm trying to connect to two MySQL databases (one local, one remote) at the same time using Python 3.4 but I'm really struggling. Splitting the problem into three:
Step 1: connect to the local DB. This is working fine using PyMySQL. (MySQLdb isn't compatible with Python 3.4, of course.)
Step 2: connect to the remote DB (which needs to use SSH). I can get it to work from the Linux command prompt but not from Python... see below.
Step 3: connect to both at the same time. I think I'm supposed to use a different port for the remote database so that I can have both connections at the same time, but I'm out of my depth here! If it's relevant then the two DBs will have different names. And if this question isn't directly related, please tell me and I'll post it separately.
Unfortunately I'm not really starting in the right place for a newbie... once I can get this working I can happily go back to basic Python and SQL but hopefully someone will take pity on me and give me a hand to get started!
For Step 2, my code is below. It seems to be quite close to the sshtunnel example which answers this question Python - SSH Tunnel Setup and MySQL DB Access - though that uses MySQLdb. For the moment I'm embedding the connection parameters – I'll move them to the config file once it's working properly.
import dropbox, pymysql, shlex, shutil, subprocess
from sshtunnel import SSHTunnelForwarder
import iot_config as cfg

def CloseLocalDB():
    localcur.close()
    localdb.close()

def CloseRemoteDB():
    # Disconnect from the database
    # remotecur.close()
    # remotedb.close()
    # Close the SSH tunnel
    # ssh.close()
    print("end of CloseRemoteDB function")

def OpenLocalDB():
    global localcur, localdb
    localdb = pymysql.connect(host=cfg.localdbconn['host'], user=cfg.localdbconn['user'], passwd=cfg.localdbconn['passwd'], db=cfg.localdbconn['db'])
    localcur = localdb.cursor()

def OpenRemoteDB():
    global remotecur, remotedb
    with SSHTunnelForwarder(
            ('my_remote_site', 22),
            ssh_username = "my_ssh_username",
            ssh_private_key = "/etc/ssh/my_private_key.ppk",
            ssh_private_key_password = "my_private_key_password",
            remote_bind_address = ('127.0.0.1', 3308)) as server:
        remotedb = None
        # Following line gives an error if uncommented
        # remotedb = pymysql.connect(host='127.0.0.1', user='remote_db_user', passwd='remote_db_password', db='remote_db_name', port=server.local_bind_port)
        # remotecur = remotedb.cursor()

# Main program starts here
OpenLocalDB()
CloseLocalDB()
OpenRemoteDB()
CloseRemoteDB()
This is the error I'm getting:
2016-04-21 19:13:33,487 | ERROR | Secsh channel 0 open FAILED: Connection refused: Connect failed
2016-04-21 19:13:33,553 | ERROR | In #1 <-- ('127.0.0.1', 60591) to ('127.0.0.1', 3308) failed: ChannelException(2, 'Connect failed')
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 60591)
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/sshtunnel.py", line 286, in handle
    src_address)
  File "/usr/local/lib/python3.4/dist-packages/paramiko/transport.py", line 834, in open_channel
    raise e
paramiko.ssh_exception.ChannelException: (2, 'Connect failed')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.4/socketserver.py", line 613, in process_request_thread
    self.finish_request(request, client_address)
  File "/usr/lib/python3.4/socketserver.py", line 344, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/lib/python3.4/socketserver.py", line 669, in __init__
    self.handle()
  File "/usr/local/lib/python3.4/dist-packages/sshtunnel.py", line 296, in handle
    raise HandlerSSHTunnelForwarderError(msg)
sshtunnel.HandlerSSHTunnelForwarderError: In #1 <-- ('127.0.0.1', 60591) to ('127.0.0.1', 3308) failed: ChannelException(2, 'Connect failed')
----------------------------------------
Traceback (most recent call last):
  File "/home/pi/Documents/iot_pm2/iot_ssh_example_for_help.py", line 38, in <module>
    OpenRemoteDB()
  File "/home/pi/Documents/iot_pm2/iot_ssh_example_for_help.py", line 32, in OpenRemoteDB
    remotedb = pymysql.connect(host='127.0.0.1', user='remote_db_user', passwd='remote_db_password', db='remote_db_name', port=server.local_bind_port)
  File "/usr/local/lib/python3.4/dist-packages/pymysql/__init__.py", line 88, in Connect
    return Connection(*args, **kwargs)
  File "/usr/local/lib/python3.4/dist-packages/pymysql/connections.py", line 678, in __init__
    self.connect()
  File "/usr/local/lib/python3.4/dist-packages/pymysql/connections.py", line 889, in connect
    self._get_server_information()
  File "/usr/local/lib/python3.4/dist-packages/pymysql/connections.py", line 1190, in _get_server_information
    packet = self._read_packet()
  File "/usr/local/lib/python3.4/dist-packages/pymysql/connections.py", line 945, in _read_packet
    packet_header = self._read_bytes(4)
  File "/usr/local/lib/python3.4/dist-packages/pymysql/connections.py", line 981, in _read_bytes
    2013, "Lost connection to MySQL server during query")
pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')
Thanks in advance.
Answering my own question because, with a lot of help from J.M. Fernández on GitHub, I have a solution: the example that I copied at the beginning uses port 3308, but port 3306 is the standard. Once I'd changed this it started working.
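For reference, a sketch of OpenRemoteDB with the standard port, based on the code in the question and assuming the same imports; the host name and credentials are still the placeholders from above:

def OpenRemoteDB():
    global remotecur, remotedb
    with SSHTunnelForwarder(
            ('my_remote_site', 22),
            ssh_username="my_ssh_username",
            ssh_private_key="/etc/ssh/my_private_key.ppk",
            ssh_private_key_password="my_private_key_password",
            remote_bind_address=('127.0.0.1', 3306)) as server:  # 3306, not 3308
        remotedb = pymysql.connect(host='127.0.0.1',
                                   user='remote_db_user',
                                   passwd='remote_db_password',
                                   db='remote_db_name',
                                   port=server.local_bind_port)
        remotecur = remotedb.cursor()
        # run the remote queries here: the tunnel only lives inside this 'with' block

Note that the tunnel, and with it the connection, closes when the with block exits, so either do the remote work inside it or keep the forwarder open explicitly.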

Python Streamhandler over ftp doesn't work after second import

I have the following problem:
I wrote an FTPHandler(StreamHandler) which connects to a server via transport = paramiko.Transport(...) and transport.connect(...) and opens an SFTP connection with SFTPClient.from_transport(...).
I am importing this handler in a class named 'JUS_Logger.py', which is my module for logging. This 'JUS_Logger' is imported by another class, 'JUS_Reader'.
The problem is that if I start 'JUS_Reader', the transport is initialized but the connection fails. There is no exception; the program just hangs. If I kill it, I get this stack trace:
^CTraceback (most recent call last):
  File "./JUS_Reader.py", line 24, in <module>
    from JUS_Logger import logger
  File "/<home>/.../JUS_Logger.py", line 74, in <module>
    ftpHandler=FTPHandler(ftpOut,10)
  File "/<home>/FTPHandler.py", line 21, in __init__
    self.transport.connect(username=ftpOut['user'].decode('base64'),password=ftpOut['passwd'].decode('base64'))
  File "/usr/lib/python2.7/dist-packages/paramiko/transport.py", line 1004, in connect
    self.auth_password(username, password)
  File "/usr/lib/python2.7/dist-packages/paramiko/transport.py", line 1165, in auth_password
    return self.auth_handler.wait_for_response(my_event)
  File "/usr/lib/python2.7/dist-packages/paramiko/auth_handler.py", line 158, in wait_for_response
    event.wait(0.1)
  File "/usr/lib/python2.7/threading.py", line 403, in wait
    self.__cond.wait(timeout)
  File "/usr/lib/python2.7/threading.py", line 262, in wait
    _sleep(delay)
However, if I run 'JUS_Logger.py' by itself, everything works: the transport connection is established and the SFTPClient connects as well.
Any ideas? Or further questions?
