Using PyTest with hug and base_url - python

I have an API that is set up with
import hug

api = hug.API(__name__)
api.http.base_url = '/api'

@hug.get('/hello-world', versions=1)
def hello_world(response):
    return hug.HTTP_200
and I'm trying to test it with pytest. My test for the route looks like this:
import pytest
import hug

import myapp
...

def test_hello_world_route():
    result = hug.test.get(myapp, 'v1/hello-world')
    assert result.status == hug.HTTP_200
How can I test hug routes that have http.base_url configured?
I get a 404 error regardless of the route path. I've tried
/api/v1/hello-world
api/v1/hello-world
v1/hello-world
/v1/hello-world
If I remove the hug.API().http.base_url setting, then v1/hello-world works fine, but my requirement is to have a base_url set up.
I've reviewed the documentation on the official hug GitHub repo and various online sources such as ProgramTalk, but I haven't had much success.
Any recommendations?

You should send your module (myapp) as the first argument to hug.test.get().
Then you can use the full path /api/v1/hello-world as the second argument.
Here's a minimal working example:
# myapp.py
import hug

api = hug.API(__name__)
api.http.base_url = '/api'

@hug.get('/hello-world', versions=1)
def hello_world(response):
    return hug.HTTP_200
# tests.py
import hug

import myapp

def test_hello_world_route():
    result = hug.test.get(myapp, '/api/v1/hello-world')
    assert result.status == hug.HTTP_200
# run in shell
pytest tests.py


Is there a way to fix the PYTHONPATH when using AWS SAM?

I'm trying to import a custom module that I created, but it breaks my API just to import it.
Directory structure:
src/
  order/
    __init__.py
    app.py
    validator.py
    requirements.txt
  __init__.py
In my app.py I have this code:
import json
from .validator import validate

def handler(event, context):
    msg = ''
    if event['httpMethod'] == 'GET':
        msg = "GET"
    elif event['httpMethod'] == 'POST':
        pass  # msg = validate(json.loads(event['body']))
    return {
        "statusCode": 200,
        "body": json.dumps({
            "message": msg,
        }),
    }
I get this error:
Unable to import module 'app': attempted relative import with no known parent package
However, if I remove line 2 (from .validator import validate) from my code, it works fine, so the problem is with that import, and honestly, I can't figure out what is going on. I have tried to import using:
from src.order.validator import validate
but it doesn't work either.
I was able to solve my issue by generating a build with the sam build command, zipping the output, and putting it in the root folder inside .aws-sam. It's not a great solution because I have to rebuild after every small change, but at least it's a workaround for now.
It seems app.py has not been loaded as part of the package hierarchy (i.e. the src and order packages have not been loaded). You should be able to run
from src.order import app
from the parent directory of src, and your code will work. If you run python app.py from the terminal, which I assume is what you did, app.py is run as a standalone script, not as part of a package.
However, I do not believe you need the .validator in your case since both modules are in the same directory. You should be able to do
from validator import validate
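For illustration, here is a minimal sketch of app.py using the plain import, assuming the function's CodeUri points at src/order so that app.py and validator.py are deployed side by side (the validate call for POST is uncommented purely for the example):
import json
from validator import validate  # plain import; both modules share a directory

def handler(event, context):
    msg = ''
    if event['httpMethod'] == 'GET':
        msg = "GET"
    elif event['httpMethod'] == 'POST':
        msg = validate(json.loads(event['body']))
    return {
        "statusCode": 200,
        "body": json.dumps({"message": msg}),
    }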

Azure Functions - Unable to import other Python modules in called scripts

I have created a simple HTTP-trigger-based Azure Function in Python which calls another Python script to create a sample file in Azure Data Lake Gen 1. My solution structure is given below.
requirements.txt contains the following packages:
azure-functions
azure-mgmt-resource
azure-mgmt-datalake-store
azure-datalake-store
__init__.py
import logging, os, sys
import azure.functions as func
import json

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')

    if name:
        full_path_to_script = os.path.join(os.path.dirname(__file__) + '/Test.py')
        logging.info(f"Path: - {full_path_to_script}")
        os.system(f"python {full_path_to_script}")
        return func.HttpResponse(f"Hello {name}!")
    else:
        return func.HttpResponse(
            "Please pass a name on the query string or in the request body",
            status_code=400
        )
Test.py
import json
from azure.datalake.store import core, lib, multithread

directoryId = ''
applicationKey = ''
applicationId = ''

adlsCredentials = lib.auth(tenant_id=directoryId, client_secret=applicationKey, client_id=applicationId)
adlsClient = core.AzureDLFileSystem(adlsCredentials, store_name='')

with adlsClient.open('stage1/largeFiles/TestFile.json', 'rb') as input_file:
    data = json.load(input_file)

with adlsClient.open('stage1/largeFiles/Result.json', 'wb') as responseFile:
    responseFile.write(data)
Test.py fails with an error that no module named azure.datalake.store was found.
Why are the other required modules not available to Test.py, since it is in the same directory?
pip freeze output:
adal==1.2.2
azure-common==1.1.23
azure-datalake-store==0.0.48
azure-functions==1.0.4
azure-mgmt-datalake-nspkg==3.0.1
azure-mgmt-datalake-store==0.5.0
azure-mgmt-nspkg==3.0.2
azure-mgmt-resource==6.0.0
azure-nspkg==3.0.2
certifi==2019.9.11
cffi==1.13.2
chardet==3.0.4
cryptography==2.8
idna==2.8
isodate==0.6.0
msrest==0.6.10
msrestazure==0.6.2
oauthlib==3.1.0
pycparser==2.19
PyJWT==1.7.1
python-dateutil==2.8.1
requests==2.22.0
requests-oauthlib==1.3.0
six==1.13.0
urllib3==1.25.6
Problem
os.system(f"python {full_path_to_script}") from your functions project is causing the issue.
The Azure Functions runtime sets up the environment, modifying process-level state such as sys.path, so that your function can load any dependencies you may have. When you create a sub-process like that, not all of this information flows through. Additionally, you will face issues with logging: logs from Test.py would not show up properly unless explicitly handled.
The import works locally because you have all your requirements.txt modules installed and available to Test.py. This is not the case in Azure. After the remote build that happens as part of publishing, your modules are included in the published code package; they are not "installed" globally in the Azure environment per se.
Solution
You shouldn't have to run your script like that. In the example above, you could import Test.py from your __init__.py file, and that should behave as if it were called with python Test.py (at least in the case above). Is there a reason you'd want to run python Test.py in a sub-process rather than importing it?
Here's the official guide on how you'd want to structure your app to import shared code -- https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python#folder-structure
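As a minimal sketch of that suggestion, assuming Test.py's module-level code is moved into a run() helper (a hypothetical name) so that importing the module doesn't immediately perform the work:
# Test.py (sketch)
import json
from azure.datalake.store import core, lib

def run():
    directoryId = ''
    applicationKey = ''
    applicationId = ''
    adlsCredentials = lib.auth(tenant_id=directoryId, client_secret=applicationKey, client_id=applicationId)
    adlsClient = core.AzureDLFileSystem(adlsCredentials, store_name='')
    with adlsClient.open('stage1/largeFiles/TestFile.json', 'rb') as input_file:
        return json.load(input_file)

# __init__.py (sketch)
import azure.functions as func
from . import Test  # relative import inside the function package

def main(req: func.HttpRequest) -> func.HttpResponse:
    data = Test.run()  # runs in-process, so installed packages and logging behave normally
    return func.HttpResponse("done")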
Side-Note
I think once you get through the import issue, you may also face problems with adlsClient.open('stage1/largeFiles/TestFile.json', 'rb'). We recommend following the developer guide above to structure your project and using __file__ to get the absolute path (reference).
For example:
import pathlib

with open(pathlib.Path(__file__).parent / 'stage1' / 'largeFiles' / 'TestFile.json') as input_file:
    ...
Now, if you really want to make os.system(f"python {full_path_to_script}") work, we have workarounds for the import issue. But I'd rather not recommend such an approach unless you have a really compelling need for it. :)

Python nose test reporting in TeamCity

I have a script called run_test.py; here's the content:
import sys
import nose

if __name__ == '__main__':
    nose.main(argv=sys.argv)
Running all my tests is as simple as doing this:
run_test.py unittests/test_*.py
I'm now trying to incorporate the output reporting for this into TeamCity.
I'm referring to this https://github.com/JetBrains/teamcity-messages
I tried changing all my unittests/test_*.py programs following the documentation. It works when running a test individually, like this:
unittests/test_one.py
But it does not work when running it through nose, like this:
run_test.py unittests/test_one.py
According to the documentation link, nose reporting is enabled automatically under a TeamCity build. I don't quite get what that means.
Is there anything that I'm missing here?
Any help is greatly appreciated. Thanks.
Have a look at the xunit plugin of nose. It will generate an XML file with the results, which Jenkins and TeamCity can consume.
There is some documentation for TeamCity.
This post tells you how to enable the plugin in your test script:
import sys
import nose

if __name__ == '__main__':
    argv = sys.argv[:]
    argv.insert(1, "--with-xunit")
    nose.main(argv=argv)
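With the flag inserted, the invocation stays the same; by default the plugin writes the results to nosetests.xml in the working directory (the file name can be changed with --xunit-file), which TeamCity's XML report processing can then pick up:
python run_test.py unittests/test_*.py
# results are written to ./nosetests.xml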
I finally found a way to achieve this. Here's what I modified in my run_test.py:
#!/usr/bin/env python
import sys
import unittest

from teamcity import is_running_under_teamcity
from teamcity.unittestpy import TeamcityTestRunner

loader = unittest.TestLoader()
start_dir = sys.argv[1]
suite = loader.discover(start_dir, pattern='test_*.py')

if is_running_under_teamcity():
    runner = TeamcityTestRunner(verbosity=2)
else:
    runner = unittest.TextTestRunner(verbosity=2)
runner.run(suite)
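The script takes the discovery start directory as its first argument, so it can be run like this:
python run_test.py unittests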

pytest returns ModuleNotFoundError when a module imported in the test file imports another module from the same directory

I am sorry if the title takes some time to understand. So here is the folder structure:
falcon_tut/
    falcon_tut/
        app.py
        images.py
        __init__.py
    tests/
        test_app.py
        __init__.py
And some code:
####################
# app.py
####################
import falcon

from images import Resource

images = Resource()

api = application = falcon.API()
api.add_route('/images', images)
# ... a few more lines
####################
# test_app.py
####################
import falcon
from falcon import testing
import ujson
import pytest

from falcon_tut.app import api

@pytest.fixture
def client():
    return testing.TestClient(api)

def test_list_images(client):
    doc = {
        'images': [
            {
                'href': '/images/1eaf6ef1-7f2d-4ecc-a8d5-6e8adba7cc0e.png'
            }
        ]
    }

    response = client.simulate_get('/images')
    result_doc = ujson.loads(response.content)

    assert result_doc == doc
    assert response.status == falcon.HTTP_OK
It works fine when running it with python falcon_tut/app.py and curling it: a 200 response with the images payload.
But running pytest tests/ from the project root outputs this:
ImportError while importing test module '../falcon_tut/tests/test_app.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
tests/test_app.py:6: in <module>
    from falcon_tut.app import api
E   ModuleNotFoundError: No module named 'falcon_tut'
I tried creating __init__.py at the project root, but it still outputs the same error above.
Python 3.7.0, with falcon 1.4.1, Cython 0.28.5, pytest 3.7.3, and, instead of gunicorn, I am using bjoern 2.2.2.
I am trying out the Python Falcon framework and ran into this error at the testing part.
==========UPDATE===========
The reason pytest could not find the module is that sys.path does not contain ../falcon_tut/falcon_tut.
When I ran pytest after editing those two files to print out sys.path, it only had [../falcon_tut/tests, ../falcon_tut, ..]. The workaround is to append the path to the package to sys.path. Here is the edited app.py:
#############
# app.py
#############
import sys

# This line is just an example; rewrite it properly if you want to use this workaround.
# sys.path[1] only applied to my situation; again, this is just an example to show that it works.
# The idea is to make sure the path to your module exists in sys.path.
# In this case, I prepend ../falcon_tut/falcon_tut to sys.path,
# so that ../falcon_tut/falcon_tut/images.py can be found by pytest.
sys.path.insert(0, '{}/falcon_tut'.format(sys.path[1]))

# body code...
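The same idea can live on the test side instead of in app.py, so the application code stays untouched. A minimal sketch, assuming a tests/conftest.py (which pytest loads before the test modules) and using a path relative to the file rather than a sys.path index:
# tests/conftest.py (sketch)
import os
import sys

# Prepend the inner falcon_tut/ directory so that `from images import Resource`
# inside app.py resolves when pytest imports falcon_tut.app.
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'falcon_tut'))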

No api proxy found for service "taskqueue"

When I try to run the python manage.py changepassword command, I get this error:
AssertionError: No api proxy found for service "taskqueue"
Here's what I have in my PYTHONPATH:
$ echo $PYTHONPATH
lib/:/usr/local/google_appengine
And my DJANGO_SETTINGS_MODULE points to the settings file that I use for GAE:
$ echo $DJANGO_SETTINGS_MODULE
settings.dev
There's a package for taskqueue in the App Engine api folder:
/usr/local/google_appengine/google/appengine/api/taskqueue$ ls
__init__.py __init__.pyc taskqueue.py taskqueue.pyc taskqueue_service_pb.py taskqueue_service_pb.pyc taskqueue_stub.py taskqueue_stub.pyc
What could I be missing here?
I assume manage.py is executing SDK methods without starting a local dev_appserver. dev_appserver.py sets up stubs to emulate the services that are available once your application is deployed. When you execute code locally, outside of the running app server, you need to initialize those stubs yourself.
The App Engine docs have a section on testing that tells you how to initialize those stubs. It isn't the exact solution to your issue, but it can point you to the stubs you need to set up.
import unittest

from google.appengine.api import taskqueue
from google.appengine.ext import deferred
from google.appengine.ext import testbed

class TaskQueueTestCase(unittest.TestCase):
    def setUp(self):
        self.testbed = testbed.Testbed()
        self.testbed.activate()

        # root_path must be set to the location of queue.yaml.
        # Otherwise, only the 'default' queue will be available.
        self.testbed.init_taskqueue_stub(root_path='tests/resources')
        self.taskqueue_stub = self.testbed.get_stub(
            testbed.TASKQUEUE_SERVICE_NAME)

    def tearDown(self):
        self.testbed.deactivate()

    def testTaskAddedToQueue(self):
        taskqueue.Task(name='my_task', url='/url/of/my/task/').add()
        tasks = self.taskqueue_stub.get_filtered_tasks()
        assert len(tasks) == 1
        assert tasks[0].name == 'my_task'
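Applying the same idea outside a test case, a hypothetical wrapper around manage.py could activate the task queue stub before handing control to Django (manage_local.py and the root_path value here are assumptions for illustration, not part of the original setup):
# manage_local.py (sketch)
import sys

from google.appengine.ext import testbed

tb = testbed.Testbed()
tb.activate()
# Point root_path at the directory containing queue.yaml;
# otherwise only the 'default' queue is available.
tb.init_taskqueue_stub(root_path='.')

from django.core.management import execute_from_command_line
execute_from_command_line(sys.argv)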
