Unit testing a Bottle.py application that uses the request body results in KeyError: 'wsgi.input' - python

When unit testing a Bottle.py route function:
from bottle import request, run, post

@post("/blah/<boo>")
def blah(boo):
    body = request.body.readline()
    return "body is %s" % body

blah("booooo!")
The following exception is raised:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in blah
File "bottle.py", line 1197, in body
self._body.seek(0)
File "bottle.py", line 166, in __get__
if key not in storage: storage[key] = self.getter(obj)
File "bottle.py", line 1164, in _body
read_func = self.environ['wsgi.input'].read
KeyError: 'wsgi.input'
The code works if run as a server via Bottle's run function; the exception is raised only when I call the route as a normal Python function, e.g. in a unit test.
What am I missing? How can I invoke it as a normal Python function inside a unit test?

I eventually worked out what the problem is. I needed to "fake" the request environment for bottle to play nicely:
from bottle import request, run, post, tob
from io import BytesIO
body = "abc"
request.environ['CONTENT_LENGTH'] = str(len(tob(body)))
request.environ['wsgi.input'] = BytesIO()
request.environ['wsgi.input'].write(tob(body))
request.environ['wsgi.input'].seek(0)
# Now call your route function and assert
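For context, a minimal sketch of how this looks inside an actual test method, assuming blah() is importable from your application module (the module name myapp here is hypothetical):
import unittest
from io import BytesIO
from bottle import request, tob
from myapp import blah  # hypothetical module containing the route function

class BlahTest(unittest.TestCase):
    def test_blah_reads_body(self):
        body = "abc"
        # Fake the WSGI environment exactly as described above
        request.environ['CONTENT_LENGTH'] = str(len(tob(body)))
        request.environ['wsgi.input'] = BytesIO(tob(body))  # already positioned at 0
        # Loose assertion so it holds whether body reads as str (Py2) or bytes (Py3)
        self.assertIn("abc", str(blah("booooo!")))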

Another issue is that Bottle uses thread locals and reads the BytesIO object you put into request.environ the first time you access the body property on request. Therefore if you run multiple tests with post data, e.g. in a TestCase, when you come to read it in your request callback it will only return the value it was initially given, not your updated value.
The solution is to scrub all the values stored on the request object before each test, so in your setUp(self) you can do something like this:
class MyTestCase(TestCase):
    def setUp(self):
        # Flush any values cached on the thread-local request
        request.bind({})

Check out https://pypi.python.org/pypi/boddle. In your test you could do:
from bottle import request, run, post
from boddle import boddle

@post("/blah/<boo>")
def blah(boo):
    body = request.body.readline()
    return "body is %s" % body

with boddle(body='woot'):
    print blah("booooo!")
Which would print body is woot.
Disclaimer: I authored. (Wrote it for work.)

Related

RuntimeError: Working outside of application context Error in Flask Server

I am trying to use the Flask server, but I ran into an error. While debugging, I removed a lot of code to simplify things, and finally reached the following error:
Traceback (most recent call last):
File "C:\Code\SportsPersonClassifier\server\server.py", line 18, in <module>
print(classify_image())
File "C:\Code\SportsPersonClassifier\server\server.py", line 10, in classify_image
response = jsonify(util.classify_image(util.get_b64_for_virat()))
File "E:\Users\Acer\anaconda3\lib\site-packages\flask\json\__init__.py", line 358, in jsonify
if current_app.config["JSONIFY_PRETTYPRINT_REGULAR"] or current_app.debug:
File "E:\Users\Acer\anaconda3\lib\site-packages\werkzeug\local.py", line 436, in __get__
obj = instance._get_current_object()
File "E:\Users\Acer\anaconda3\lib\site-packages\werkzeug\local.py", line 565, in _get_current_object
return self.__local() # type: ignore
File "E:\Users\Acer\anaconda3\lib\site-packages\flask\globals.py", line 52, in _find_app
raise RuntimeError(_app_ctx_err_msg)
RuntimeError: Working outside of application context.
This typically means that you attempted to use functionality that needed
to interface with the current application object in some way. To solve
this, set up an application context with app.app_context(). See the
documentation for more information.
This is my code in server.py:
from flask import Flask, request, jsonify
import util

app = Flask(__name__)

@app.route('/classify_image', methods=['GET', 'POST'])
def classify_image():
    response = jsonify(util.classify_image(util.get_b64_for_image()))
    return response

if __name__ == "__main__":
    print("Starting Python Flask Server For Sports Celebrity Image Classification")
    util.load_saved_artifacts()
    print(classify_image())
But the exact same code works without any error if I just remove jsonify() from the classify_image() function, like this:
def classify_image():
    response = util.classify_image(util.get_b64_for_image())
    return response
If I write the classify_image function without jsonify, it works as expected. I tried to solve the problem by reading several Stack Overflow answers, but none of them worked for my code. Please help me solve the problem with jsonify. Thank you.
As the error suggests, and as required by jsonify:
This requires an active request or application context
Calling the function this way via __main__ will not be within a Flask app context. Instead use app.run() to start the dev server, and then navigate to your route.
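If you do want to call the view function directly (say, for a quick local check) rather than starting the server, you can push an application context yourself so jsonify() can find the app, as the error message suggests. A minimal sketch:
if __name__ == "__main__":
    print("Starting Python Flask Server For Sports Celebrity Image Classification")
    util.load_saved_artifacts()
    # jsonify() needs an active application context; push one explicitly
    with app.app_context():
        print(classify_image())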

coreapi: cannot get schema

Python 3.7
I have a Python app for which I run tests:
$ python -m unittest
The package code goes like this:
import coreapi
from coreapi import codecs

class myClient():
    myApiUrl = None
    client = None

    def __init__(self, myApiUrl, authenticationToken):
        self.myApiUrl = myApiUrl
        auth = coreapi.auth.TokenAuthentication(
            scheme='Token',
            token=authenticationToken
        )
        decoders = [
            codecs.CoreJSONCodec(),
            codecs.JSONCodec()
        ]
        self.client = coreapi.Client(auth=auth, decoders=decoders)

    def getSomething(self):
        # ...at this point self.client.decoders are present...
        schema = self.client.get(self.myApiUrl)
        # ...blah-blah...
The test run ends with this error:
ERROR: test_doFirstTest (myclient.tests.SomeTestClass)
Traceback (most recent call last):
  File "/usr/src/app/myclient/tests.py", line 82, in test_doFirstTest
    output = client.test_doFirstTest()
  File "/usr/src/app/myclient/myClient.py", line 42, in getSomething
    schema = self.client.get(self.myApiUrl)
  File "/opt/conda/lib/python3.7/site-packages/coreapi/client.py", line 136, in get
    return transport.transition(link, decoders, force_codec=force_codec)
  File "/opt/conda/lib/python3.7/site-packages/coreapi/transports/http.py", line 380, in transition
    result = _decode_result(response, decoders, force_codec)
  File "/opt/conda/lib/python3.7/site-packages/coreapi/transports/http.py", line 284, in _decode_result
    codec = utils.negotiate_decoder(decoders, content_type)
  File "/opt/conda/lib/python3.7/site-packages/coreapi/utils.py", line 207, in negotiate_decoder
    raise exceptions.NoCodecAvailable(msg)
coreapi.exceptions.NoCodecAvailable: Unsupported media in Content-Type header 'text/html'
I realise it's telling me that it received text/html instead of JSON (maybe an empty string?), but why? I am not making any request yet; I am only doing the preparation step of getting the schema object.
And this is not a connectivity issue; when it cannot connect at all, it gives a different error.
Thanks
OK, this did not have anything to do with coreapi, codecs, unittest, requests, or anything I could possibly think of. The reason for the error was that one of the Docker containers involved was exiting silently after starting, because another Docker container that it depends on was not started. So the manifestation just happened to be indecipherable.

How to extract a variable from within a function in Python?

I'm making a program which scrapes bus information from a server and sends it to a user via Facebook messenger. It works fine, but I'm trying to add functionality which splits really long timetables into separate messages. To do this, I had to make an if statement that detects really long timetables, splits them and calls the send_message function from my main file, app.py
Here is the part of the function in app.py with the variable I need to extract:
for messaging_event in entry["messaging"]:
    if messaging_event.get("message"):  # someone sent us a message
        sender_id = messaging_event["sender"]["id"]  # the facebook ID of the person sending you the message
        recipient_id = messaging_event["recipient"]["id"]  # the recipient's ID, which should be your page's facebook ID
        message_text = messaging_event["message"]["text"]  # the message's text
        tobesent = messaging_event["message"]["text"]
        send_message(sender_id, fetch.fetchtime(tobesent))
and here is the if statement in fetch which detects long messages, splits them and calls the send_message function from the other file, app.py:
if len(info["results"]) > 5:
    for i, chunk in enumerate(chunks(info, 5), 1):
        app.send_message((USER ID SHOULD BE HERE, 'Listing part: {}\n \n{}'.format(i, chunk)))
I'm trying to call the send_message function from app.py, but it requires two arguments, sender_id and the message text. How can I go about getting the sender_id variable from this function and using it in fetch? I've tried returning it and calling the function, but it doesn't work for me.
EDIT: the error:
Traceback (most recent call last):
File "bus.py", line 7, in <module>
print fetch.fetchtime(stopnum)
File "/home/ryan/fb-messenger-bot-master/fetch.py", line 17, in fetchtime
send_message((webhook(),'Listing part: {}\n \n{}'.format(i, chunk)))
File "/home/ryan/fb-messenger-bot-master/app.py", line 30, in webhook
data = request.get_json()
File "/usr/local/lib/python2.7/dist-packages/werkzeug/local.py", line 343, in __getattr__
return getattr(self._get_current_object(), name)
File "/usr/local/lib/python2.7/dist-packages/werkzeug/local.py", line 302, in _get_current_object
return self.__local()
File "/usr/local/lib/python2.7/dist-packages/flask/globals.py", line 37, in _lookup_req_object
raise RuntimeError(_request_ctx_err_msg)
RuntimeError: Working outside of request context.
This typically means that you attempted to use functionality that needed
an active HTTP request. Consult the documentation on testing for
information about how to avoid this problem.
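This thread has no posted fix, but one common way out (a sketch, not from the original thread) is to pass sender_id down into fetchtime as an argument, so fetch.py never needs to call back into app.py's request handling (which is what triggers the "Working outside of request context" error above). The helpers lookup_timetable and format_timetable below are hypothetical stand-ins for whatever currently builds info and the short reply:
# fetch.py -- sketch: fetchtime accepts the sender's id (an assumed signature change)
import app

def fetchtime(tobesent, sender_id):
    info = lookup_timetable(tobesent)  # hypothetical helper: however `info` is built today
    if len(info["results"]) > 5:
        for i, chunk in enumerate(chunks(info, 5), 1):
            # send each part straight to the user whose id was passed down
            app.send_message(sender_id, 'Listing part: {}\n \n{}'.format(i, chunk))
        return None
    return format_timetable(info)  # hypothetical helper: the existing short-reply path

# app.py -- pass the id through at the call site
result = fetch.fetchtime(tobesent, sender_id)
if result is not None:  # long timetables were already sent in parts above
    send_message(sender_id, result)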

Upload file to GCS from App Engine (Endpoints): AttributeError

I'm trying to upload a file to GCS from App Engine Endpoints, using Python. When the file finishes uploading, it shows the error "AttributeError: 'str' object has no attribute 'ToMessage'".
If I go to the GCS file explorer in the browser, I can see the recently uploaded filename, but its size is 0K.
This is my model:
class File(EndpointsModel):
    _message_fields_schema = ('blob', 'url')
    blob = ndb.BlobKeyProperty()  # stored in GCS
    url = ndb.StringProperty()
    enable = ndb.BooleanProperty(default=True)

def create_file(filename):
    file_info = blobstore.FileInfo(filename)
    filename = '/gs' + str(file_info.filename.blob)
    gcs.open(secrets.BUCKET_NAME + '/' + filename, 'w').close()
    return blobstore.create_gs_key(filename)
So, what do I need to do to correctly upload a file to GCS from App Engine Endpoints?
Traceback:
ERROR 2014-11-25 20:35:22,654 service.py:191] Encountered unexpected error from ProtoRPC method implementation: AttributeError ('str' object has no attribute 'ToMessage')
Traceback (most recent call last):
File "/home/alpocr/workspace/google_appengine/lib/protorpc-1.0/protorpc/wsgi/service.py", line 181, in protorpc_service_app
response = method(instance, request)
File "/home/alpocr/workspace/google_appengine/lib/endpoints-1.0/endpoints/api_config.py", line 1332, in invoke_remote
return remote_method(service_instance, request)
File "/home/alpocr/workspace/google_appengine/lib/protorpc-1.0/protorpc/remote.py", line 412, in invoke_remote_method
response = method(service_instance, request)
File "/home/alpocr/workspace/mall4g-backend/libs/endpoints_proto_datastore/ndb/model.py", line 1429, in EntityToRequestMethod
response = response.ToMessage(fields=response_fields)
AttributeError: 'str' object has no attribute 'ToMessage'
It sounds like you have defined the return type correctly for your endpoints method, and it's expecting to turn the result into a Message object, but the endpoints method code is actually returning a string. Can you post the endpoints method that is called when this error occurs?
Either that or endpoints proto model is acting weird when you (somewhere in your code) assign a string value to one of its properties. When it tries to convert it to a Message (and thus recursively to turn its properties into Messages), it finds the String and bugs out. It's hard to tell without seeing the affected endpoint method's code.
UPDATE: Also, checking the source of endpoints_proto_datastore, we see the following comment above the offending line:
# If developers using a custom request message class with
# response_fields to create a response message class for them, it is
# up to them to return an instance of the current EndpointsModel
# class. If not, their API users will receive a 503 from an uncaught
# exception.
Could this apply to you?
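For reference, a hedged sketch (not the asker's actual code) of the shape endpoints_proto_datastore expects: the method decorated with File.method, inside the API service class, must return an instance of the EndpointsModel subclass rather than a str:
# Inside your @endpoints.api service class (sketch only)
@File.method(path='file', http_method='POST', name='file.insert')
def InsertFile(self, my_file):
    my_file.put()
    return my_file   # returning the File (EndpointsModel) instance lets ToMessage succeed
    # return 'done'  # returning a str would raise: 'str' object has no attribute 'ToMessage'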

example urllib3 and threading in python

I am trying to use urllib3 in a simple thread pool to fetch several wiki pages.
The script will create one connection for every thread (I don't understand why) and hang forever.
Any tip, advice, or simple example of urllib3 and threading?
import threadpool
from urllib3 import connection_from_url

HTTP_POOL = connection_from_url(url, timeout=10.0, maxsize=10, block=True)

def fetch(url, fields):
    kwargs = {'retries': 6}
    return HTTP_POOL.get_url(url, fields, **kwargs)

pool = threadpool.ThreadPool(5)
requests = threadpool.makeRequests(fetch, iterable)
[pool.putRequest(req) for req in requests]
@Lennart's script got this error:
http://en.wikipedia.org/wiki/2010-11_Premier_LeagueTraceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/threadpool.py", line 156, in run
http://en.wikipedia.org/wiki/List_of_MythBusters_episodeshttp://en.wikipedia.org/wiki/List_of_Top_Gear_episodes http://en.wikipedia.org/wiki/List_of_Unicode_characters result = request.callable(*request.args, **request.kwds)
File "crawler.py", line 9, in fetch
print url, conn.get_url(url)
AttributeError: 'HTTPConnectionPool' object has no attribute 'get_url'
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/threadpool.py", line 156, in run
result = request.callable(*request.args, **request.kwds)
File "crawler.py", line 9, in fetch
print url, conn.get_url(url)
AttributeError: 'HTTPConnectionPool' object has no attribute 'get_url'
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/threadpool.py", line 156, in run
result = request.callable(*request.args, **request.kwds)
File "crawler.py", line 9, in fetch
print url, conn.get_url(url)
AttributeError: 'HTTPConnectionPool' object has no attribute 'get_url'
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/threadpool.py", line 156, in run
result = request.callable(*request.args, **request.kwds)
File "crawler.py", line 9, in fetch
print url, conn.get_url(url)
AttributeError: 'HTTPConnectionPool' object has no attribute 'get_url'
After adding import threadpool; import urllib3 and tpool = threadpool.ThreadPool(4), @user318904's code got this error:
Traceback (most recent call last):
File "crawler.py", line 21, in <module>
tpool.map_async(fetch, urls)
AttributeError: ThreadPool instance has no attribute 'map_async'
Here is my take, a more current solution using Python 3 and concurrent.futures.ThreadPoolExecutor.
import urllib3
from concurrent.futures import ThreadPoolExecutor

urls = ['http://en.wikipedia.org/wiki/2010-11_Premier_League',
        'http://en.wikipedia.org/wiki/List_of_MythBusters_episodes',
        'http://en.wikipedia.org/wiki/List_of_Top_Gear_episodes',
        'http://en.wikipedia.org/wiki/List_of_Unicode_characters',
        ]

def download(url, cmanager):
    response = cmanager.request('GET', url)
    if response and response.status == 200:
        print("+++++++++ url: " + url)
        print(response.data[:1024])

connection_mgr = urllib3.PoolManager(maxsize=5)
thread_pool = ThreadPoolExecutor(5)
for url in urls:
    thread_pool.submit(download, url, connection_mgr)
Some remarks
My code is based on a similar example from the Python Cookbook by Beazley and Jones.
I particularly like the fact that you only need a standard module besides urllib3.
The setup is extremely simple, and if you are only going for side-effects in download (like printing, saving to a file, etc.), there is no additional effort in joining the threads.
If you want something different, ThreadPoolExecutor.submit actually returns whatever download would return, wrapped in a Future (see the sketch after these remarks).
I found it helpful to align the number of threads in the thread pool with the number of HTTPConnection objects in the connection pool (via maxsize). Otherwise you might encounter (harmless) warnings when all threads try to access the same server (as in the example).
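A minimal sketch of collecting those Futures, reusing thread_pool, download, connection_mgr, and urls from the example above:
# Collect the Futures so results (and any raised exceptions) can be inspected
futures = [thread_pool.submit(download, url, connection_mgr) for url in urls]
for fut in futures:
    fut.result()  # blocks until that download finishes; re-raises any exception it threw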
Obviously it will create one connection per thread; how else should each thread be able to fetch a page? And you try to use the same connection, made from one URL, for all URLs. That can hardly be what you intended.
This code worked just fine:
import threadpool
from urllib3 import connection_from_url

def fetch(url):
    kwargs = {'retries': 6}
    conn = connection_from_url(url, timeout=10.0, maxsize=10, block=True)
    print url, conn.get_url(url)
    print "Done!"

pool = threadpool.ThreadPool(4)
urls = ['http://en.wikipedia.org/wiki/2010-11_Premier_League',
        'http://en.wikipedia.org/wiki/List_of_MythBusters_episodes',
        'http://en.wikipedia.org/wiki/List_of_Top_Gear_episodes',
        'http://en.wikipedia.org/wiki/List_of_Unicode_characters',
        ]
requests = threadpool.makeRequests(fetch, urls)
[pool.putRequest(req) for req in requests]
pool.wait()
Thread programming is hard, so I wrote workerpool to make exactly what you're doing easier.
More specifically, see the Mass Downloader example.
To do the same thing with urllib3, it looks something like this:
import urllib3
import workerpool
pool = urllib3.connection_from_url("foo", maxsize=3)
def download(url):
r = pool.get_url(url)
# TODO: Do something with r.data
print "Downloaded %s" % url
# Initialize a pool, 5 threads in this case
pool = workerpool.WorkerPool(size=5)
# The ``download`` method will be called with a line from the second
# parameter for each job.
pool.map(download, open("urls.txt").readlines())
# Send shutdown jobs to all threads, and wait until all the jobs have been completed
pool.shutdown()
pool.wait()
For more sophisticated code, have a look at workerpool.EquippedWorker (and the tests here for example usage). You can make the pool be the toolbox you pass in.
I use something like this:
# excluding setup for threadpool etc.
upool = urllib3.HTTPConnectionPool('en.wikipedia.org', block=True)

urls = ['/wiki/2010-11_Premier_League',
        '/wiki/List_of_MythBusters_episodes',
        '/wiki/List_of_Top_Gear_episodes',
        '/wiki/List_of_Unicode_characters',
        ]

def fetch(path):
    # add error checking
    return upool.get_url(path).data

tpool = ThreadPool()
tpool.map_async(fetch, urls)
# either wait on the result object or give map_async a callback function for the results
