Starting app.py, then killing the database and hitting /api/foo gives me:
peewee.OperationalError: could not connect to server: Connection refused
Bringing the database back up and hitting /api/foo gives me:
peewee.OperationalError: terminating connection due to administrator command
SSL connection has been closed unexpectedly
And hitting /api/foo again gives me:
peewee.InterfaceError: connection already closed
Test case
test_case/__init__.py
#!/usr/bin/env python

from os import environ

from bottle import Bottle, request, response
from playhouse.db_url import connect

bottle_api = Bottle()
db = connect(environ['RDBMS_URI'])

from test_case.foo.models import Foo

db.connect()  # Not needed, but we do want to throw errors ASAP
db.create_tables([Foo], safe=True)  # Create tables (if they don't exist)

from test_case.foo.routes import foo_api

bottle_api.merge(foo_api)
bottle_api.catchall = False

@bottle_api.hook('before_request')
def _connect_db():
    print 'Connecting to db'
    db.connect()

@bottle_api.hook('after_request')
def _close_db():
    print 'Closing db'
    if not db.is_closed():
        db.close()

def error_catcher(environment, start_response):
    try:
        return bottle_api.wsgi(environment, start_response)
    except Exception as e:
        environment['PATH_INFO'] = '/api/error'
        environment['api_error'] = e
        return bottle_api.wsgi(environment, start_response)

@bottle_api.route('/api/error')
def global_error():
    response.status = 500
    return {'error': (lambda res: res[res.find("'") + 1:res.rfind("'")])(
        str(request.environ['api_error'].__class__)),
        'error_message': request.environ['api_error'].message}
test_case/__main__.py
from __init__ import bottle_api
# Or `from __init__ import error_catcher`; `from bottle import run`;
# then `run(error_catcher, port=5555)`
bottle_api.run(port=5555)
test_case/foo/__init__.py
test_case/foo/models.py
from peewee import Model, CharField

from test_case import db

class Foo(Model):
    id = CharField(primary_key=True)

    class Meta(object):
        database = db
test_case/foo/routes.py
from bottle import Bottle
from playhouse.shortcuts import model_to_dict
from test_case.foo.models import Foo
foo_api = Bottle()
@foo_api.get('/api/foo')
def retrieve_foos():
    return {'foos': tuple(model_to_dict(foo) for foo in Foo.select())}
Github gist for easy cloning.
Update:
I believe the problem lies in how you've structured your imports and the way Python loads and caches modules (keyed by import name in sys.modules).
I think that one of your modules is being imported and loaded twice, and different parts of the codebase use different instances of the module.
Thus, the views in foo.routes are using one instance of the database object, while the connection hooks are using another.
Instead of from __init__, what about trying from test_case import bottle_api? That is the one import statement that jumps out at me as a possible culprit.
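To make that concrete, here is a minimal, hypothetical demonstration (my own construction, not the original layout; mypkg and value are placeholder names) of how the same file imported under two names yields two distinct module objects, and therefore two db instances:
# mypkg/__init__.py contains a single line:  value = object()
# Run this script from inside mypkg/ so both import names resolve.
import sys
sys.path.insert(0, '.')     # makes `import __init__` find mypkg/__init__.py
sys.path.insert(0, '..')    # makes `import mypkg` find the package

import __init__ as direct   # cached in sys.modules as '__init__'
import mypkg as package     # same file loaded again, cached as 'mypkg'

print direct is package              # False: two module objects
print direct.value is package.value  # False: module-level state duplicated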
I added the following to your code so I could run it from the command-line:
if __name__ == '__main__':
    api.run()
Then I made a request to /api/foo and saw some fake data. I stopped the Postgresql server and got this error:
Traceback (most recent call last):
File "/usr/lib64/python2.7/wsgiref/handlers.py", line 85, in run
self.result = application(self.environ, self.start_response)
File "/home/charles/tmp/scrap/bottlez/lib/python2.7/site-packages/bottle.py", line 979, in __call__
return self.wsgi(environ, start_response)
File "/home/charles/tmp/scrap/bottlez/lib/python2.7/site-packages/bottle.py", line 954, in wsgi
out = self._cast(self._handle(environ))
File "/home/charles/tmp/scrap/bottlez/lib/python2.7/site-packages/bottle.py", line 857, in _handle
self.trigger_hook('before_request')
File "/home/charles/tmp/scrap/bottlez/lib/python2.7/site-packages/bottle.py", line 640, in trigger_hook
return [hook(*args, **kwargs) for hook in self._hooks[__name][:]]
File "bt.py", line 31, in _connect_db
db.connect()
File "/home/charles/tmp/scrap/bottlez/src/peewee/peewee.py", line 2967, in connect
self.initialize_connection(self.__local.conn)
File "/home/charles/tmp/scrap/bottlez/src/peewee/peewee.py", line 2885, in __exit__
reraise(new_type, new_type(*exc_value.args), traceback)
File "/home/charles/tmp/scrap/bottlez/src/peewee/peewee.py", line 2965, in connect
**self.connect_kwargs)
File "/home/charles/tmp/scrap/bottlez/src/peewee/peewee.py", line 3279, in _connect
conn = psycopg2.connect(database=database, **kwargs)
File "/home/charles/tmp/scrap/bottlez/lib/python2.7/site-packages/psycopg2/__init__.py", line 164, in connect
conn = _connect(dsn, connection_factory=connection_factory, async=async)
OperationalError: could not connect to server: Connection refused
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
When I restarted the server and made a subsequent request I got a normal response with my test data.
So, in short, I'm not sure what I may be missing but the code seems to be working correctly to me.
Postgresql 9.4, psycopg2 2.6, python 2.7.9, peewee 2.6.0
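One defensive tweak worth considering (my suggestion, not part of the answer above): have the WSGI-level error catcher from the question discard the broken connection before re-dispatching to /api/error, so the next request's before_request hook opens a fresh connection instead of reusing one that psycopg2 has already marked closed. A sketch:
def error_catcher(environment, start_response):
    try:
        return bottle_api.wsgi(environment, start_response)
    except Exception as e:
        try:
            if not db.is_closed():
                db.close()  # drop the dead connection so the next request reconnects
        except Exception:
            pass  # closing an already-broken handle can itself raise
        environment['PATH_INFO'] = '/api/error'
        environment['api_error'] = e
        return bottle_api.wsgi(environment, start_response)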
Related
I'm a novice trying to spin up my first webapp with a combination of Fly.io, Django, and a postgres DB but I'm having some trouble and can't find an answer in walkthroughs or Q&A.
I've set up a simple "Hello world" Django app (models.py is empty so far) and I'm trying to get all the components up and running before I build it out any further.
I've successfully deployed my app on Fly.io with no errors
I've created a postgres cluster on Fly.io using the instructions here: https://fly.io/docs/postgres/
I've attached the cluster to my app, which generates a DB and sets an environment variable with the appropriate details (username, password, port, host, dbname)
I've updated my settings.py file:
DATABASES = {}
DATABASES["default"] = dj_database_url.config(conn_max_age=600, ssl_require=True)
I've added to my fly.toml:
[[services]]
internal_port = 5432 # Postgres instance
protocol = "tcp"
# Open port 10000 for plaintext connections.
[[services.ports]]
handlers = []
port = 10000
I've confirmed I can get into the psql shell with flyctl postgres connect -a MYAPP-pg
But unfortunately when I run python manage.py migrate to check that everything is working, I get the following error:
File "<my_path>\venv\lib\site-packages\django\db\backends\base\base.py", line 282, in ensure_connection
self.connect()
File "<my_path>\venv\lib\site-packages\django\utils\asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "<my_path>\venv\lib\site-packages\django\db\backends\base\base.py", line 263, in connect
self.connection = self.get_new_connection(conn_params)
File "<my_path>\venv\lib\site-packages\django\utils\asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "<my_path>\venv\lib\site-packages\django\db\backends\postgresql\base.py", line 215, in get_new_connection
connection = Database.connect(**conn_params)
File "<my_path>\venv\lib\site-packages\psycopg2\__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "top2.nearest.of.MYAPP-pg.internal" to address: Unknown host
Any ideas what might be happening? Any help would be very much appreciated!
I use the PyMySQL library and Flask in my program. My view function accesses the database every time it is called. After some calls it breaks and raises InterfaceError(0, ''). All subsequent requests also raise InterfaceError (on any db query, specifically).
Traceback (most recent call last):
(several files of mine and Flask)
File "/home/maxim/.local/lib/python3.7/site-packages/pymysql/cursors.py", line 170, in execute
result = self._query(query)
File "/home/maxim/.local/lib/python3.7/site-packages/pymysql/cursors.py", line 328, in _query
conn.query(q)
File "/home/maxim/.local/lib/python3.7/site-packages/pymysql/connections.py", line 516, in query
self._execute_command(COMMAND.COM_QUERY, sql)
File "/home/maxim/.local/lib/python3.7/site-packages/pymysql/connections.py", line 750, in _execute_command
raise err.InterfaceError("(0, '')")
pymysql.err.InterfaceError: (0, '')
I read the PyMySQL library code and saw that this error occurs if the connection's _sock variable is None (I think that means the connection is closed). But why does that happen?
I use one connection object for all view functions (i.e. it is defined outside the functions). Is that right, or must I make a new connection for every request? Or do I need to do something else to get rid of this error?
My code: https://pastebin.com/sy3xKtgB
Full traceback: https://pastebin.com/iTU75FUi
I solved my problem by creating a new connection to the db on every request.
def get_db():
    return pymysql.connect(
        'ip',
        'user',
        'password',
        'db_name',
        cursorclass=pymysql.cursors.DictCursor
    )
I call this function on every request.
from flask import Flask, request

from my_utils import get_db

app = Flask(__name__)

@app.route('/get', methods=['POST'])
def get():
    conn = get_db()
    with conn.cursor() as cur:
        pass
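A possible refinement (a sketch on my part, assuming the same get_db() from my_utils): stash the per-request connection on flask.g and close it in a teardown handler, so each request gets a fresh connection and none are leaked.
from flask import Flask, g

from my_utils import get_db

app = Flask(__name__)

def db_conn():
    # Open one connection per request, reused within that request
    if 'db' not in g:
        g.db = get_db()
    return g.db

@app.teardown_appcontext
def close_db(exc):
    # Runs after every request, even when the view raised
    conn = g.pop('db', None)
    if conn is not None:
        conn.close()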
I am trying to connect to a remote MongoDB where the SSH access has one username and password and MongoDB itself has a different username and password.
I tried passing the SSH username and password to the SSH tunnel server and the MongoDB credentials to the client, but I am getting an error stating:
pymongo.errors.ServerSelectionTimeoutError: 127.0.0.1:27017: [Errno 111] Connection refused
The SSH connection succeeds, but MongoDB does not get connected.
def Collect_Pid_DB():
    MONGO_DB = "mydatabasename"
    server = SSHTunnelForwarder(
        (MONGO_HOST, 22),
        ssh_username=username,
        ssh_password=password,
        remote_bind_address=('127.0.0.1', 27017)
    )
    server.start()
    # print(server)
    uri = "mongodb://admin:" + urllib.quote("p#ssW0$3") + "@127.0.0.1:27017"
    client = pymongo.MongoClient(uri, server.local_bind_port)
    db = client[MONGO_DB]
    print(db)
    print(json.dumps(db.collection_names(), indent=2))
    server.stop()
Actual results:
Database(MongoClient(host=['127.0.0.1:27017'], document_class=dict, tz_aware=False, connect=True), u'MissingPatches')
Traceback (most recent call last):
File "duplicate.py", line 7, in <module>
class MyClass:
File "duplicate.py", line 41, in MyClass
Collect_Pid_DB('192.142.123.142','root','password','mydatabasename')
File "duplicate.py", line 35, in Collect_Pid_DB
print(json.dumps(db.collection_names(), indent=2))
File "/usr/local/lib/python2.7/dist-packages/pymongo/database.py", line 787, in collection_names
nameOnly=True, **kws)]
File "/usr/local/lib/python2.7/dist-packages/pymongo/database.py", line 722, in list_collections
read_pref) as (sock_info, slave_okay):
File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/usr/local/lib/python2.7/dist-packages/pymongo/mongo_client.py", line 1135, in _socket_for_reads
server = topology.select_server(read_preference)
File "/usr/local/lib/python2.7/dist-packages/pymongo/topology.py", line 226, in select_server
address))
File "/usr/local/lib/python2.7/dist-packages/pymongo/topology.py", line 184, in select_servers
selector, server_timeout, address)
File "/usr/local/lib/python2.7/dist-packages/pymongo/topology.py", line 200, in _select_servers_loop
self._error_message(selector))
pymongo.errors.ServerSelectionTimeoutError: 127.0.0.1:27017: [Errno 111] Connection refused
The following is the working code for the question above. The issue was that the local bind port and the URI were not combined into the proper format, so authentication against the tunnelled port failed, which is why the code in the question did not work.
The working code to connect to MongoDB when the SSH and Mongo credentials are different:
def Collect_Pid_DB(hostname, user, password, accountid):
    server = SSHTunnelForwarder(
        (MONGO_HOST, 22),
        ssh_username=MONGO_USER,
        ssh_password=MONGO_PASS,
        remote_bind_address=('127.0.0.1', 27017)
    )
    host_name = "'primary_host_name': 'win-3auvcutkp34'"
    patch_name = "'patch_name': '[\\& A-Za-z0-9+.,\\-]+'"
    server.start()
    client = pymongo.MongoClient(host='127.0.0.1',
                                 port=server.local_bind_port,
                                 username='admin',
                                 password='P#ssW0Rd')
    db = client[MONGO_DB]
    print(db)
    print(json.dumps(db.collection_names(), indent=2))
Hope the above answer is helpful to someone, as I couldn't find one anywhere when I needed it :P
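An alternative sketch (my assumption of what "parsing the port into the URL" could look like, reusing the question's placeholders): interpolate the tunnel's local bind port into the URI itself rather than passing it as a separate positional argument to MongoClient.
uri = "mongodb://admin:%s@127.0.0.1:%d" % (
    urllib.quote("p#ssW0$3"),   # percent-encode the password's special characters
    server.local_bind_port)     # the tunnel's local end, not the remote 27017
client = pymongo.MongoClient(uri)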
Try pip install ssh-pymongo, then:
from ssh_pymongo import MongoSession
session = MongoSession(MONGO_HOST)
db = session.connection['mydatabasename']
# perform your queries #
session.stop()
If you have more complex settings, check the docs for your case, e.g.:
from ssh_pymongo import MongoSession

session = MongoSession(
    MONGO_HOST,
    port=22,
    user='ssh_username',
    password='ssh_password',
    uri='mongodb://admin:p#ssW0$3@127.0.0.1:27017')

db = session.connection['mydatabasename']
print(db.collection_names())
session.stop()
I'm trying to connect to two MySQL databases (one local, one remote) at the same time using Python 3.4, but I'm really struggling. Splitting the problem into three:
Step 1: connect to the local DB. This is working fine using PyMySQL. (MySQLdb isn't compatible with Python 3.4, of course.)
Step 2: connect to the remote DB (which needs to use SSH). I can get it to work from the Linux command prompt but not from Python... see below.
Step 3: connect to both at the same time. I think I'm supposed to use a different port for the remote database so that I can have both connections at the same time, but I'm out of my depth here! If it's relevant, the two DBs will have different names. And if this question isn't directly related, please tell me and I'll post it separately.
Unfortunately I'm not really starting in the right place for a newbie... once I can get this working I can happily go back to basic Python and SQL but hopefully someone will take pity on me and give me a hand to get started!
For Step 2, my code is below. It seems to be quite close to the sshtunnel example which answers this question Python - SSH Tunnel Setup and MySQL DB Access - though that uses MySQLdb. For the moment I'm embedding the connection parameters – I'll move them to the config file once it's working properly.
import dropbox, pymysql, shlex, shutil, subprocess
from sshtunnel import SSHTunnelForwarder
import iot_config as cfg

def CloseLocalDB():
    localcur.close()
    localdb.close()

def CloseRemoteDB():
    # Disconnect from the database
    # remotecur.close()
    # remotedb.close()
    # Close the SSH tunnel
    # ssh.close()
    print("end of CloseRemoteDB function")

def OpenLocalDB():
    global localcur, localdb
    localdb = pymysql.connect(host=cfg.localdbconn['host'], user=cfg.localdbconn['user'], passwd=cfg.localdbconn['passwd'], db=cfg.localdbconn['db'])
    localcur = localdb.cursor()

def OpenRemoteDB():
    global remotecur, remotedb
    with SSHTunnelForwarder(
            ('my_remote_site', 22),
            ssh_username = "my_ssh_username",
            ssh_private_key = "/etc/ssh/my_private_key.ppk",
            ssh_private_key_password = "my_private_key_password",
            remote_bind_address = ('127.0.0.1', 3308)) as server:
        remotedb = None
        # Following line gives an error if uncommented
        # remotedb = pymysql.connect(host='127.0.0.1', user='remote_db_user', passwd='remote_db_password', db='remote_db_name', port=server.local_bind_port)
        # remotecur = remotedb.cursor()

# Main program starts here
OpenLocalDB()
CloseLocalDB()
OpenRemoteDB()
CloseRemoteDB()
This is the error I'm getting:
2016-04-21 19:13:33,487 | ERROR | Secsh channel 0 open FAILED: Connection refused: Connect failed
2016-04-21 19:13:33,553 | ERROR | In #1 <-- ('127.0.0.1', 60591) to ('127.0.0.1', 3308) failed: ChannelException(2, 'Connect failed')
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 60591)
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/sshtunnel.py", line 286, in handle
src_address)
File "/usr/local/lib/python3.4/dist-packages/paramiko/transport.py", line 834, in open_channel
raise e
paramiko.ssh_exception.ChannelException: (2, 'Connect failed')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.4/socketserver.py", line 613, in process_request_thread
self.finish_request(request, client_address)
File "/usr/lib/python3.4/socketserver.py", line 344, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/usr/lib/python3.4/socketserver.py", line 669, in __init__
self.handle()
File "/usr/local/lib/python3.4/dist-packages/sshtunnel.py", line 296, in handle
raise HandlerSSHTunnelForwarderError(msg)
sshtunnel.HandlerSSHTunnelForwarderError: In #1 <-- ('127.0.0.1', 60591) to ('127.0.0.1', 3308) failed: ChannelException(2, 'Connect failed')
----------------------------------------
Traceback (most recent call last):
File "/home/pi/Documents/iot_pm2/iot_ssh_example_for_help.py", line 38, in <module>
OpenRemoteDB()
File "/home/pi/Documents/iot_pm2/iot_ssh_example_for_help.py", line 32, in OpenRemoteDB
remotedb = pymysql.connect(host='127.0.0.1', user='remote_db_user', passwd='remote_db_password', db='remote_db_name', port=server.local_bind_port)
File "/usr/local/lib/python3.4/dist-packages/pymysql/__init__.py", line 88, in Connect
return Connection(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/pymysql/connections.py", line 678, in __init__
self.connect()
File "/usr/local/lib/python3.4/dist-packages/pymysql/connections.py", line 889, in connect
self._get_server_information()
File "/usr/local/lib/python3.4/dist-packages/pymysql/connections.py", line 1190, in _get_server_information
packet = self._read_packet()
File "/usr/local/lib/python3.4/dist-packages/pymysql/connections.py", line 945, in _read_packet
packet_header = self._read_bytes(4)
File "/usr/local/lib/python3.4/dist-packages/pymysql/connections.py", line 981, in _read_bytes
2013, "Lost connection to MySQL server during query")
pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')
Thanks in advance.
Answering my own question because, with a lot of help from J.M. Fernández on GitHub, I have a solution: the example that I copied at the beginning uses port 3308, but port 3306 is the standard. Once I'd changed this, it started working.
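For anyone landing here, a minimal corrected sketch based on that fix (names are the placeholders from the question): bind the tunnel to MySQL's default port 3306 and connect through the tunnel's local end.
with SSHTunnelForwarder(
        ('my_remote_site', 22),
        ssh_username='my_ssh_username',
        ssh_private_key='/etc/ssh/my_private_key.ppk',
        ssh_private_key_password='my_private_key_password',
        remote_bind_address=('127.0.0.1', 3306)) as server:  # 3306, not 3308
    remotedb = pymysql.connect(host='127.0.0.1',
                               user='remote_db_user',
                               passwd='remote_db_password',
                               db='remote_db_name',
                               port=server.local_bind_port)
    remotecur = remotedb.cursor()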
I ran into an error that was painful to track down, so I thought I'd add the cause + "solution" here.
The setup:
Devbox - Running Google App Engine listening on all ports ("--address=0.0.0.0"), serving a URL that launches a task.
Client - A client (Python requests library) that queries the callback URL.
App Engine code:
class StartTaskCallback(webapp.RequestHandler):
    def post(self):
        param = self.request.get('param')
        logging.info('STARTTASK: %s' % param)
        # launch a task
        taskqueue.add(url='/tasks/mytask',
                      queue_name='myqueue',
                      params={'param': param})

class MyTask(webapp.RequestHandler):
    def post(self):
        param = self.request.get('param')
        logging.info('MYTASK: param = %s' % param)
When I queried the callback with my browser, everything worked, but the same query from the remote client gave me the following error:
ERROR 2012-03-23 21:18:27,351 taskqueue_stub.py:1858] An error occured while sending the task "task1" (Url: "/tasks/mytask") in queue "myqueue". Treating as a task error.
Traceback (most recent call last):
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/taskqueue/taskqueue_stub.py", line 1846, in ExecuteTask
connection.endheaders()
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 868, in endheaders
self._send_output()
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 740, in _send_output
self.send(msg)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 699, in send
self.connect()
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 683, in connect
self.timeout)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 498, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
gaierror: [Errno 8] nodename nor servname provided, or not known
This error would just spin in a loop as the task retried. Though oddly, I could go to Admin -> Task Queues and click 'Run' to get the task to complete successfully.
At first I thought this was an error with the binding. I would not get an error if I queried the StartTaskCallback via the browser or if I ran the client locally.
Finally I noticed that App Engine is using the 'host' field of the request in order to build an absolute URL for the task. In /Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/taskqueue/taskqueue_stub.py (1829):
connection_host, = header_dict.get('host', [self._default_host])
if connection_host is None:
    logging.error('Could not determine where to send the task "%s" '
                  '(Url: "%s") in queue "%s". Treating as an error.',
                  task.task_name(), task.url(), queue.queue_name)
    return False
connection = httplib.HTTPConnection(connection_host)
In my case, I was using a special name + hosts file on the remote client to access the server.
192.168.1.208 devbox
So the 'host' for the remote client looked like 'devbox:8085' which the local server could not resolve.
To fix the issue, I simply added devbox to my AppEngine server's hosts file, but it sure would have been nice if the gaierror exception had printed the name it failed to resolve, or if App Engine didn't use the 'host' of the incoming request to build a URL for task creation.
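For what it's worth, the failure is plain name resolution on the server, which you can check directly (a small illustration of mine, using the same 'devbox' name and port from above):
import socket
# Raises socket.gaierror ("nodename nor servname provided, or not known")
# until 'devbox' is resolvable on the dev server, e.g. via its hosts file.
socket.getaddrinfo('devbox', 8085, 0, socket.SOCK_STREAM)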