I want to use pdb to step into some flask-restful code. I have an endpoint which returns a token. I then use the token to access another endpoint which returns the required data. I would like to view the result of a database query. How do I go about this?
I tried setting a breakpoint inside the class, but it does not get triggered when I send a request using the requests library.
from flask_restful import Resource
from sqlalchemy import create_engine, MetaData, select

class FetchData(Resource):
    @jwt_required
    def get(self, args):
        engine = create_engine('mysql+pymysql://')
        conn = engine.connect()
        tablemeta = MetaData()
        tablemeta.reflect(bind=engine)
        keydate = tablemeta.tables['KEYDATE']
        coefficient = tablemeta.tables['COEFFICIENT']
        vessel = tablemeta.tables['VESSEL']
        update_dict = {}
        s = select([coefficient])
        s = s.where(coefficient.c.updated_date >= args["dt"])
        rp = conn.execute(s)
        result = []
        for r in rp:
            j = coefficient.join(vessel, r['idvessel'] == vessel.c.idvessel)
            import pdb
            pdb.set_trace()
            vdm_id = select([vessel.c.vessel_id]).select_from(j)
            vdm_id = conn.execute(vdm_id).scalar()
            intermediate = []
            intermediate.append({"vdm_id": vdm_id})
            intermediate.append([dict(r)])
            result.append(intermediate)
Or possibly there's another debugger I should be using?
Put your pdb.set_trace() before the loop: if the query returns no rows, the loop body never executes, so a breakpoint inside it is never hit.
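For example, a minimal sketch of the relevant lines of the handler above, with the breakpoint moved up:

rp = conn.execute(s)

# Break before iterating; a breakpoint inside the loop is never
# reached when the query returns no rows.
import pdb
pdb.set_trace()

rows = rp.fetchall()  # lets you inspect the whole result in the debugger
result = []
for r in rows:
    ...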
I have been using pdb with Flask for the last few years without any problems. Alternatively, just print the variable you want to inspect with print(); for a quick look at a query result this is often faster.
Hello, I think I have a problem with my client and member config, and I am not sure which config to use. As you can see, I am inserting JSON as data. When I call get_data it returns with no problem, but when I try to use predicate-sql it gives me the error: "hazelcast.errors.HazelcastSerializationError: Exception from server: com.hazelcast.nio.serialization.HazelcastSerializationException: There is no suitable de-serializer for type -120. This exception is likely caused by differences in the serialization configuration between members or between clients and members."
import hazelcast
from flask import Flask, request, jsonify, abort
from hazelcast.predicate import and_, sql  # hazelcast-python-client 4.x

# app (the Flask app), bp (a Blueprint), and url (the cluster member
# address) are defined elsewhere in my application.

@app.route('/insert_data/<database_name>/<collection_name>', methods=['POST'])
def insert_data(database_name, collection_name):
    client = hazelcast.HazelcastClient(cluster_members=[url])
    dbname_map = client.get_map(f"{database_name}-{collection_name}").blocking()
    if request.json:
        received_json_data = request.json
        received_id = received_json_data["_id"]
        del received_json_data["_id"]
        dbname_map.put(received_id, received_json_data)
        client.shutdown()
        return jsonify()
    else:
        client.shutdown()
        abort(400)

@app.route('/get_data/<database_name>/<collection_name>', methods=['GET'])
def get_all_data(database_name, collection_name):
    client = hazelcast.HazelcastClient(cluster_members=[url])
    dbname_map = client.get_map(f"{database_name}-{collection_name}").blocking()
    entry_set = dbname_map.entry_set()
    datas = []
    for key, value in entry_set:
        value['_id'] = key
        datas.append(value)
    client.shutdown()
    return jsonify({"Result": datas})

@bp.route('/get_query/<database_name>/<collection_name>/<name>', methods=['GET'])
def get_query_result(database_name, collection_name, name):
    client = hazelcast.HazelcastClient(cluster_members=[url])
    predicate_map = client.get_map(f"{database_name}-{collection_name}").blocking()
    predicate = and_(sql(f"name like {name}%"))
    entry_set = predicate_map.values(predicate)
    # entry_set = predicate_map.entry_set(predicate)
    send_all_data = ""
    for x in entry_set:
        send_all_data += x.to_string()
        send_all_data += "\n"
    print(send_all_data)
    return jsonify()
I tried to change hazelcast.xml according to hazelcast-full-example.xml, but I can't start Hazelcast after the changes. Do I really have to use serialization? Hazelcast version: 4.1, Python: 3.9.
This is most likely happening because you are putting dictionary values into the map. You didn't specify a serializer for them, so the client falls back to its default serializer, pickle. Pickle serialization is Python-specific, so the server-side members cannot deserialize the values and throw this exception.
There are several possible solutions; see the https://hazelcast.readthedocs.io/en/stable/serialization.html chapter for details.
I think the most appropriate solution for your use case would be Portable serialization, which requires no configuration change or code on the server side. See https://hazelcast.readthedocs.io/en/stable/serialization.html#portable-serialization
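A minimal sketch of what that might look like, assuming hazelcast-python-client 4.x; the Document class and the factory/class IDs are hypothetical and must match on every client:

import hazelcast
from hazelcast.serialization.api import Portable

FACTORY_ID = 1  # hypothetical IDs; every client must use the same values
CLASS_ID = 1

class Document(Portable):
    def __init__(self, name=None, payload=None):
        self.name = name
        self.payload = payload

    def write_portable(self, writer):
        # On pre-4.0 clients these methods are named write_utf/read_utf.
        writer.write_string("name", self.name)
        writer.write_string("payload", self.payload)

    def read_portable(self, reader):
        self.name = reader.read_string("name")
        self.payload = reader.read_string("payload")

    def get_factory_id(self):
        return FACTORY_ID

    def get_class_id(self):
        return CLASS_ID

client = hazelcast.HazelcastClient(
    cluster_members=["localhost:5701"],
    portable_factories={FACTORY_ID: {CLASS_ID: Document}},
)

Because the members understand the Portable format, a predicate such as sql("name like foo%") can then be evaluated against the name field on the server side without any Python-specific deserialization.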
BTW, client objects are quite heavyweight, so you shouldn't be creating them on demand like this. Construct the client once in your application and share it across your endpoints and business logic; it is thread-safe. The same applies to the map proxy you get from the client: it can also be reused.
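For instance, a sketch with one shared client (the address and map name are placeholders):

import hazelcast
from flask import Flask, jsonify

app = Flask(__name__)

# Created once at startup; both the client and the map proxy are
# thread-safe and shared by all endpoints.
client = hazelcast.HazelcastClient(cluster_members=["localhost:5701"])
data_map = client.get_map("mydb-mycollection").blocking()

@app.route('/get_data', methods=['GET'])
def get_all_data():
    return jsonify({"Result": [value for _, value in data_map.entry_set()]})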
I am writing my Python API using Flask. This API accepts only one parameter, called questionID. I would like it to accept a second parameter called lastDate. I tried to look around for how to add this parameter but couldn't find a good method. My current code looks as follows:
from flask import Flask, request
from flask_restful import Resource, Api, reqparse
from sqlalchemy import create_engine
from json import dumps
from flask_jsonpify import jsonify
import psycopg2
from pandas import read_sql

connenction_string = "<DB credentials>"

app = Flask(__name__)
api = Api(app)

class GetUserAnswers(Resource):
    def get(self, questionID):
        conn = psycopg2.connect(connenction_string)
        cursor = conn.cursor()
        userAnswers = read_sql(
            "select * from <tablename> where questionid = '" + questionID +
            "' order by timesansweredincorrectly desc limit 15", con=conn)
        conn.commit()
        conn.close()
        result = {}
        for index, row in userAnswers.iterrows():
            result[index] = dict(row)
        return jsonify(result)

api.add_resource(GetUserAnswers, '/GetUserAnswers/<questionID>')

if __name__ == '__main__':
    app.run(port='5002')
Question 1: I'm guessing I can accept the second parameter in the get definition. If this is not true, how should I accept the second parameter?
Question 2: How do I modify the api.add_resource() call to accept the second parameter?
Question 3: I currently use http://localhost:5002/GetUserAnswers/<some question ID> to call this API from the browser. How would this call change with a second parameter?
I have never developed an API before, so any help would be much appreciated.
If you want to add multiple parameters within the URL path, for example:
http://localhost:5002/GetUserAnswers/<question_id>/answers/<answer_id>
then you need to add multiple parameters to your get method:

def get(self, question_id, answer_id):
    # your code here
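The route registration has to declare both placeholders as well. This isn't in the original code, but with flask-restful it would look something like:

api.add_resource(GetUserAnswers,
                 '/GetUserAnswers/<string:question_id>/answers/<string:answer_id>')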
But if you instead want to add multiple query parameters to the URL, for example:
http://localhost:5002/GetUserAnswers/<question_id>?lastDate=2020-01-01&totalCount=10
then you can use request arguments:

def get(self, question_id):
    lastDate = request.args.get('lastDate')
    totalCount = request.args.get('totalCount')
    # your code here
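Since the question already imports reqparse, here is a sketch of the same thing with flask-restful's request parser (the argument names are taken from the question):

from flask_restful import reqparse

parser = reqparse.RequestParser()
parser.add_argument('lastDate', type=str, location='args')
parser.add_argument('totalCount', type=int, location='args')

def get(self, question_id):
    args = parser.parse_args()
    lastDate = args['lastDate']
    totalCount = args['totalCount']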
Consider several adjustments to your code:
Use Flask's route decorators for a simpler implementation, avoiding the need to define and register a Resource class;
Use parameterization in SQL and avoid the potentially dangerous and messy string concatenation;
Avoid the heavy data-analytics library pandas and its inefficient row-by-row iterrows loop. Instead, handle everything with the cursor object, specifically psycopg2's DictCursor;
Refactored Python code (adjust the assumption of how lastDate is used):

from flask import Flask, request, jsonify
import psycopg2
import psycopg2.extras  # needed for DictCursor
# ... leave out the heavy pandas ...

app = Flask(__name__)

@app.route('/GetUserAnswers', methods=['GET'])
def GetUserAnswers():
    questionID = request.args.get('questionID', None)
    lastDate = request.args.get('lastDate', None)
    conn = psycopg2.connect(connenction_string)
    cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
    userAnswers = '''SELECT * FROM <tablename>
                     WHERE questionid = %s
                       AND lastdate = %s
                     ORDER BY timesansweredincorrectly DESC
                     LIMIT 15
                  '''
    # EXECUTE SQL WITH PARAMS
    cur.execute(userAnswers, (questionID, lastDate))
    # SAVE TO LIST OF DICTIONARIES
    result = [dict(row) for row in cur.fetchall()]
    cur.close()
    conn.close()
    return jsonify(result)

if __name__ == '__main__':
    app.run(port='5002')
Browser Call
http://localhost:5002/GetUserAnswers?questionID=8888&lastDate=2020-01-08
I'd like to get the list of all test cases assigned to a specific user in TestLink. I'm using the TestLink Python API, version 0.6.4. I've been reading the JetMore docs for TestLink.
So far I have the following code:
from testlink import TestlinkAPIGeneric, TestLinkHelper
import testlink

TESTLINK_API_PYTHON_SERVER_URL = '<my testlink ip>'
TESTLINK_API_PYTHON_DEVKEY = '<my testlink devkey>'

my_test_link = TestLinkHelper(TESTLINK_API_PYTHON_SERVER_URL,
                              TESTLINK_API_PYTHON_DEVKEY).connect(TestlinkAPIGeneric)

test_project_id = my_test_link.getTestProjectByName('<project name>')['id']
plans = my_test_link.getProjectTestPlans(test_project_id)

# I've also tried ['globalRoleID']
userid = int(my_test_link.getUserByLogin('<user name>')[0]['dbID'])

for plan in plans:
    print(my_test_link.getTestCasesForTestPlan(testplanid=plan['id'],
                                               assignedto=userid, details='full'))
This produces an empty list for every test plan in my project:
[]
[]
[]
...
Any help is appreciated.
assignedto is an optional parameter of getTestCasesForTestPlan(); the function returns all the test cases associated with the test plan regardless. Pass the login name (a string) rather than the numeric user ID; it will still return all the test cases.
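A sketch of that call with the login name in place of the dbID (the placeholder user name is from the question):

for plan in plans:
    cases = my_test_link.getTestCasesForTestPlan(
        testplanid=plan['id'],
        assignedto='<user name>',  # the login string, not the numeric dbID
        details='full',
    )
    print(cases)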
To get the test cases for a specific user you have to modify the API code (xmlrpc.class.php) in the TestLink codebase.
I initialize the cache object as follows:
import os
from os import path
from flask_cache import Cache

pageCache = Cache()
cacheDir = os.path.join(path.dirname(path.dirname(__file__)), 'pageCache')
pageCache.init_app(flaskApp, config={'CACHE_TYPE': 'filesystem', 'CACHE_THRESHOLD': 1 << 10 << 10, 'CACHE_DIR': cacheDir})
I use pageCache as follows:
class CodeList:
    """
    show code list
    """
    @pageCache.cached(timeout=60)
    def GET(self):
        i = web.input()
        sort = i.get('sort', 'newest')
        pageNo = int(i.get('page', '1'))
        if i.get('pageSize'):
            pageSize = int(i.get('pageSize'))
        else:
            pageSize = DEFAULT_LIST_PAGE_SIZE
        if pageSize > 50:
            pageSize = 50
        items = csModel.getCodeList(sort=sort, pageNo=pageNo, pageSize=pageSize)
        totalCount = csModel.getCodeCount()
        pageInfo = (pageNo, pageSize, totalCount)
        return render.code.list(items, pageInfo)
When I request this page, I get an exception:

<type 'exceptions.RuntimeError'> at /code-snippet/
working outside of request context
Python C:\Python27\lib\site-packages\flask-0.9-py2.7.egg\flask\globals.py in _lookup_object, line 18
Flask-Cache is, as the name suggests, a Flask extension, so you cannot properly use it if you do not use Flask.
You can use werkzeug.contrib.cache instead; Flask-Cache uses it under the hood. However, depending on your needs it might be a better idea to use e.g. memcached directly: with a wrapper such as werkzeug's cache you lose the advanced features of your caching engine, because it is wrapped in a rather simple/minimalistic API.
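A minimal sketch using werkzeug's filesystem backend directly (era-appropriate for Flask 0.9 / Python 2.7; werkzeug.contrib was removed in Werkzeug 1.0). cacheDir is the directory from the question, and render_page is a hypothetical stand-in for the existing render call:

from werkzeug.contrib.cache import FileSystemCache

pageCache = FileSystemCache(cacheDir, threshold=1 << 10 << 10, default_timeout=60)

def cached_code_list():
    rendered = pageCache.get('code-list')
    if rendered is None:
        rendered = render_page()  # hypothetical: the existing render call
        pageCache.set('code-list', rendered, timeout=60)
    return rendered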
My app connects to multiple databases using a technique similar to this. It works so long as I don't try to access different databases in the same request. Having looked back to the above script I see they have written a comment to this end:
SQLAlchemy integration for CherryPy,
such that you can access multiple databases,
but only one of these databases per request or thread.
My app now requires me to fetch data from Database A and Database B. Is it possible to do this in a single request?
Please see below for sources and examples:
Working Example 1:
from model import meta

my_object_instance = meta.main_session().query(MyObject).filter(
    MyObject.id == 1
).one()
Working Example 2:
from model import meta

my_user = meta.user_session().query(User).filter(
    User.id == 1
).one()
Error Example:
from model import meta

my_object_instance = meta.main_session().query(MyObject).filter(
    MyObject.id == 1
).one()

my_user = meta.user_session().query(User).filter(
    User.id == 1
).one()
This errors with:
(sqlalchemy.exc.ProgrammingError) (1146, "Table 'main_db.user' doesn't exist")
Sources:
# meta.py
import cherrypy
import sqlalchemy
from sqlalchemy import MetaData
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.ext.declarative import declarative_base
# Return an Engine
def create_engine(defaultschema=True, schema="", **kwargs):
    # A blank DB is the same as no DB, so to get a non-schema-specific
    # connection just override with defaultschema=False
    connectionString = 'mysql://%s:%s@%s/%s?charset=utf8' % (
        store['application'].config['main']['database-server-config-username'],
        store['application'].config['main']['database-server-config-password'],
        store['application'].config['main']['database-server-config-host'],
        store['application'].config['main']['database-server-config-defaultschema'] if defaultschema else schema
    )
    # Create the engine object; **kwargs is passed through so this call can be extended
    return sqlalchemy.create_engine(connectionString, echo=True, pool_recycle=10, echo_pool=True, encoding='utf-8', **kwargs)
# Engines
main_engine = create_engine()
user_engine = None
# Sessions
_main_session = None
_user_session = None
# Metadata
main_metadata = MetaData()
main_metadata.bind = main_engine
user_metadata = MetaData()
# No idea what bases are/do but nothing works without them
main_base = declarative_base(metadata = main_metadata)
user_base = declarative_base(metadata = user_metadata)
# An easy collection of user database connections
engines = {}
# Each thread gets a session based on this object
GlobalSession = scoped_session(sessionmaker(autoflush=True, autocommit=False, expire_on_commit=False))
def main_session():
    _main_session = cherrypy.request.main_dbsession
    _main_session.configure(bind=main_engine)
    return _main_session

def user_session():
    _user_session = cherrypy.request.user_dbsession
    _user_session.configure(bind=get_user_engine())
    return _user_session

def get_user_engine():
    # Get the dburi from the user's instance
    dburi = cherrypy.session['auth']['user'].instance.database
    # Store this engine for future use
    if dburi in engines:
        engine = engines.get(dburi)
    else:
        engine = engines[dburi] = create_engine(defaultschema=False, schema=dburi)
    # Return the engine
    return engine

def get_user_metadata():
    user_metadata.bind = get_user_engine()
    return user_metadata
# Open a new session for the life of the request
def open_dbsession():
    cherrypy.request.user_dbsession = cherrypy.thread_data.scoped_session_class
    cherrypy.request.main_dbsession = cherrypy.thread_data.scoped_session_class
    return

# Close the session for this request
def close_dbsession():
    if hasattr(cherrypy.request, "user_dbsession"):
        try:
            cherrypy.request.user_dbsession.flush()
            cherrypy.request.user_dbsession.remove()
            del cherrypy.request.user_dbsession
        except:
            pass
    if hasattr(cherrypy.request, "main_dbsession"):
        try:
            cherrypy.request.main_dbsession.flush()
            cherrypy.request.main_dbsession.remove()
            del cherrypy.request.main_dbsession
        except:
            pass
    return

# Initialize the session factory class for the selected thread
def connect(thread_index):
    cherrypy.thread_data.scoped_session_class = scoped_session(sessionmaker(autoflush=True, autocommit=False))
    return

# Add the hooks to CherryPy
cherrypy.tools.dbsession_open = cherrypy.Tool('on_start_resource', open_dbsession)
cherrypy.tools.dbsession_close = cherrypy.Tool('on_end_resource', close_dbsession)
cherrypy.engine.subscribe('start_thread', connect)
You could also choose an ORM that is designed from the ground up for multiple databases, like Dejavu.
Take a look at this:
http://pythonhosted.org/Flask-SQLAlchemy/binds.html
Basically, it suggests that you use a bind parameter for each connection. That said, this seems to be a bit of a hack.
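Note that in the meta.py above, open_dbsession() assigns the same scoped_session class to both main_dbsession and user_dbsession, so the two configure(bind=...) calls keep rebinding one shared factory, and whichever session was created first in the thread keeps its original engine. That matches the error shown, where the user table is looked up in main_db. A minimal sketch with two independent session factories (the URIs are placeholders; MyObject and User are the question's mapped classes):

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

# One engine and one session factory per database.
main_engine = create_engine('mysql://user:pass@host/main_db?charset=utf8')
user_engine = create_engine('mysql://user:pass@host/user_db?charset=utf8')

MainSession = scoped_session(sessionmaker(bind=main_engine, autoflush=True, autocommit=False))
UserSession = scoped_session(sessionmaker(bind=user_engine, autoflush=True, autocommit=False))

# Both databases can now be queried in the same request:
my_object_instance = MainSession.query(MyObject).filter(MyObject.id == 1).one()
my_user = UserSession.query(User).filter(User.id == 1).one()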
This question has a lot more detail in the answer:
With sqlalchemy how to dynamically bind to database engine on a per-request basis
That said, both this question and the one referenced aren't the newest, and SQLAlchemy has probably moved on since then.