I have a unit test that's failing in an assertion that passes in another test in the same test case class.
Here's the passing test:
def test_home(self):
    c = Client()
    resp = c.get('/')
    self.assertEqual(resp.status_code, 200)
    self.assertTrue('a_formset' in resp.context)
Here's the failing test:
def test_number_initial_number_of_forms(self):
    c = Client()
    resp = c.get('/')
    self.assertEqual(resp.context['a_formset'].total_form_count(), 1)
In the second test, I get the error TypeError: 'NoneType' object has no attribute '__getitem__'.
If I execute the second test as
def test_number_initial_number_of_forms(self):
    c = Client()
    resp = c.get('/')
    self.assertTrue('a_formset' in resp.context)
    self.assertEqual(resp.context['a_formset'].total_form_count(), 1)
I get the error TypeError: argument of type 'NoneType' is not iterable. I've confirmed via print statements in the second test that the response.content contains the page I expect to get, that the status code is correct, and that the template is correct. But the response's context is consistently None in the second test.
I'm running my Django unit tests through the standard "python manage.py test ..." interface, so I don't believe I'm running into the "context is empty from the shell" issue.
What's going on with this?
Edit:
If I add print type(resp.context['a_formset']) to each test, for the working test I get <class 'django.forms.formsets.AFormFormSet'>. For the non-working test, I get TypeError: 'NoneType' object has no attribute '__getitem__' again.
It's because you ran into some error, exited the shell, and restarted it, but forgot to set up the test environment first:
>>> from django.test.utils import setup_test_environment
>>> setup_test_environment()
That was my problem. Hope it works for you.
Today I ran into the same issue: the second test gets the same page but has nothing in response.context.
I did some research and found that:
1) the test client uses signals to populate the context,
2) my view method is not called for the second test.
I turned on a debugger and found that the culprit is the cache middleware. Knowing that, I found this ticket and this SO question (the latter has a solution).
So, in short: the second request is served from the cache, not from a view. The view is never executed, so the test client doesn't get the signal and has no way to populate the context.
I cannot disable the cache middleware for my project, so I added the following hack to my settings:
import sys

if 'test' in sys.argv:
    CACHE_MIDDLEWARE_SECONDS = 0
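If you'd rather not touch global settings, the same change can be scoped to a single test case with Django's override_settings decorator. A minimal sketch (the test class name is illustrative):

from django.test import TestCase, override_settings

@override_settings(CACHE_MIDDLEWARE_SECONDS=0)
class FormsetTests(TestCase):
    def test_number_initial_number_of_forms(self):
        # with caching disabled, every request hits the view,
        # so the context is populated as expected
        resp = self.client.get('/')
        self.assertEqual(resp.context['a_formset'].total_form_count(), 1)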
Hope this helps someone
You can also clear the cache manually by calling cache.clear() inside a test method:
from django.core.cache import cache
import pytest

class TestPostView:
    @pytest.mark.django_db(transaction=True)
    def test_index_post(self, client, post):
        cache.clear()
        response = client.get('/')
I'm working on code that retrieves information from Twilio's Flow system through their API. That part of the code functions fine, but when I try to mock it for unit testing, it's throwing an error from the mocked api response.
Here is the code being tested:
from twilio.rest import Client

class FlowChecker:
    def __init__(self, twilio_sid, twilio_auth_token):
        self.twilio_SID = twilio_sid
        self.twilio_auth_token = twilio_auth_token
        self.client = Client(self.twilio_SID, self.twilio_auth_token)
        self.calls = self.client.calls.list()
        self.flows = self.client.studio.v2.flows

    def get_active_executions(self):
        active_executions = []
        for flow in self.flows.list():
            executions = self.client.studio.v2.flows(flow.sid).executions.list()
            for execution in executions:
                if execution._properties['status'] != 'ended':
                    active_executions.append({'flow_sid': flow.sid, 'execution': execution})
        return active_executions
And here is my unit test code that's throwing the error:
import unittest
from unittest.mock import Mock, patch
from flows.twilio_flows import FlowChecker

class FlowCheckerTest(unittest.TestCase):
    @patch('flows.twilio_flows.Client')
    def test_get_active_flows(self, mock_client):
        flow_checker = FlowChecker('fake_sid', 'fake_auth_token')
        mock_call = Mock()
        mock_flow = Mock()
        mock_flow.sid = 0
        mock_execution = Mock()
        mock_execution._properties = {'status': 'ended'}
        mock_client.calls.list().return_value = [mock_call]
        mock_client.studio.v2.flows = [mock_flow]
        mock_client.studio.v2.flows(mock_flow.sid).executions.list().return_value = [mock_execution]
        self.assertEqual(flow_checker.get_active_executions(), [])
And here is the error traceback:
Ran 2 tests in 0.045s
FAILED (errors=1)
Error
Traceback (most recent call last):
File "C:\Users\Devon\AppData\Local\Programs\Python\Python310\lib\unittest\mock.py", line 1369, in patched
return func(*newargs, **newkeywargs)
File "C:\Users\Devon\PycharmProjects\Day_35\tests\twilio_flows_test'.py", line 19, in test_get_active_flows_when_empty
mock_client.studio.v2.flows(mock_flow.sid).executions.list().return_value = [mock_execution]
TypeError: 'list' object is not callable
Process finished with exit code 1
As you can see, "mock_client.calls.list().return_value = [mock_call]" doesn't throw any errors during init, and the first code block runs fine. It's only the mocked executions.list() that's throwing the error in the test.
Can anyone clear this up?
Thank you!
I've tried researching this specific issue and was unable to find information addressing it. It's a very specific, deeply nested function in a vendor-supplied client that I need to test, so I don't know what to try.
The problem isn't with .list(), it's with .flows().
mock_client.studio.v2.flows = [mock_flow]
mock_client.studio.v2.flows(mock_flow.sid).executions.list().return_value = [mock_execution]
You assign .flows to be a list, and then you try to call it like a function, which causes the error.
I think maybe you intended to say .flows[mock_flow.sid] instead of .flows(mock_flow.sid)?
Although even that doesn't make sense. .flows is a one-element list, so you would use .flows[0] to access the first (and only) item.
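For what it's worth, here is one way the nested attributes could be wired up so the test drives FlowChecker as written. This is only a sketch, assuming the patch target from the question; note that the patched Client is a class, so the instance your code talks to is its return_value:

import unittest
from unittest.mock import Mock, patch
from flows.twilio_flows import FlowChecker

class FlowCheckerTest(unittest.TestCase):
    @patch('flows.twilio_flows.Client')
    def test_get_active_flows(self, mock_client_cls):
        # FlowChecker calls Client(...), so configure the instance mock
        instance = mock_client_cls.return_value

        mock_flow = Mock()
        mock_flow.sid = 0
        mock_execution = Mock()
        mock_execution._properties = {'status': 'ended'}

        instance.calls.list.return_value = []
        # .flows is used both as .flows.list() and as .flows(sid),
        # so it must stay a Mock rather than become a list
        instance.studio.v2.flows.list.return_value = [mock_flow]
        instance.studio.v2.flows.return_value.executions.list.return_value = [mock_execution]

        flow_checker = FlowChecker('fake_sid', 'fake_auth_token')
        # the only execution is 'ended', so nothing should be active
        self.assertEqual(flow_checker.get_active_executions(), [])

Also note the difference between .list.return_value = [...] (configures what calling .list() returns) and .list().return_value = [...] (calls the mock immediately and configures a further call), which is why the original assignments never took effect.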
I have a function that makes an http request and throws an error if the response is not a 200. It looks like this:
import requests

# CustomError is defined elsewhere in the project
def put_request(param1, param2):
    url = f"api/v1/some/route"
    response = requests.put(
        url,
        json=param2,
        verify="test",
    )
    if response.status_code != 200:
        raise CustomError()
    return response.json()
I want to test that the exception is correct so my test code looks like:
def test_put_request_error(mocker):
    requests_mock = mocker.patch("path.to.file.requests")
    requests_mock.put.return_value.status_code = 500
    with pytest.raises(CustomError) as error:
        put_request(param1=param1, param2={some data})
    assert error.value.code == 500
Problem is, the error raised in the code stops execution and never makes it back to the assertion in my test. I could use some advice on fixing this!
This pattern seems to work for my other test cases, so I'm not sure what the problem is here!
EDIT: The issue was in the imports! The imports in my tests needed to match the ones in my actual code, meaning both need to be absolute or both relative!
The issue here was in the imports. After MrBean Bremen asked to see the imports, I realized my code used relative paths and my test used absolute paths. Once I made them both the same, everything worked!
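For illustration, here is a minimal sketch of the pitfall (module names are hypothetical): mock patches by module path, so the path in the patch string must refer to the same module object the code under test actually uses.

# myapp/api.py -- the code under test
import requests

def put_request(param1, param2):
    ...

# tests/test_api.py
from myapp.api import put_request  # import style must match the patch target

def test_put_request_error(mocker):
    # patch 'requests' as an attribute of the module under test
    requests_mock = mocker.patch("myapp.api.requests")
    requests_mock.put.return_value.status_code = 500

If the test imports the module under one path while the patch string uses another, a package can end up imported twice under two names, and mock patches a different module object than the one your code actually calls.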
I have a deployment package in the following structure:
my-project.zip
--- my-project.py
------ lambda_handler()
Then I define the handler path in the configuration as:
my-project.lambda_handler
I get this error:
'handler' missing on module
I can't understand what's wrong.
There are several issues that can cause this error.
Issue#1:
The very first issue you're going to run into is naming the file incorrectly, which gives this error:
Unable to import module 'lambda_function': No module named lambda_function
If you name the function incorrectly you get this error:
Handler 'handler' missing on module 'lambda_function_file': 'module' object has no attribute 'handler'
On the dashboard, make sure the handler field is entered as function_filename.actual_function_name and make sure they match up in your deployment package.
If only the messages were a bit more instructive, that would have been a simpler step.
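As a minimal sketch with the default Python naming, a file lambda_function.py containing

# lambda_function.py
def lambda_handler(event, context):
    # file name supplies the first half of the handler setting,
    # function name the second: lambda_function.lambda_handler
    return {'statusCode': 200, 'body': 'ok'}

pairs with a handler setting of lambda_function.lambda_handler.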
Resource Link:
No lambda_function?
Issue#2:
adrian_praja solved the issue in the AWS forum. He answered the following:
I believe your index.js should contain:
exports.createThumbnailHandler = function(event, context) {}
Issue#3:
Solution: Correctly specify the method call
This happens when the method Lambda is configured to call does not match anything in your code. Please review the specification of the method to call.
In the case of the above error message, Lambda attempted to call the handler method of index.js, but no such method could be found.
The method to call is set in the "Handler" field on the configuration tab.
For example, to call the handler method of index.js, the Handler field would be set to index.handler.
Resource Link:
http://qiita.com/kazuqqfp/items/ac8d93918d0030b31aad
AWS Lambda Function is returning Handler 'handler' missing on module 'index'
I had this issue and had to make sure I had a function called handler in my file, e.g.:
# this just takes whatever is sent to the API gateway and sends it back
def handler(event, context):
    try:
        return response(event, 200)
    except Exception as e:
        return response('Error: ' + str(e), 400)

def response(message, status_code):
    return message
I'm new to the mocking library, and so far it's been giving me trouble. I'm trying to test a URL-parsing method that takes a response from an initialUrl, which is then parsed in the method. I set autospec=True, so I think the mock should have access to all methods in the requests library (including response.url). I'm trying to mock both get and the response, though I'm not sure if that's needed.
My getUrl method that takes a response and returns its parsed contents:
def getUrl(response):
    if response.history:
        destination = urllib.parse.urlsplit(response.url)
        baseUrlTuple = destination._replace(path="", query="")
        return urllib.parse.urldefrag(urllib.parse.urlunsplit(baseUrlTuple)).url
    raise RuntimeError("No redirect")
Test method:
def testGetUrl(self):
    initialUrl = 'http://www.initial-url.com'
    expectedUrl = 'http://www.some-new-url.com'
    mock_response = Mock(spec=requests, autospec=True)
    mock_response.status_code = 200
    mock_get = Mock(return_value=mock_response)
    # mock_get.return_value.history = True
    resp = mock_get(self.initialUrl)
    mock_response.history = True
    resultUrl = getBaseUrl(resp)
    self.assertEqual(resultUrl, expectedUrl)
When I run the test, I get
raise AttributeError("Mock object has no attribute %r" % name)
AttributeError: Mock object has no attribute 'url'
First I would fix the code in your question so that it actually runs.
You have several options, the easiest being just adding url to the list of attributes you're mocking:
mock_response.url = <your URL>
But it's also important to understand that you're trying to use the requests library itself as the specification for the mock, when you should be using requests.Response() if you want the url attribute to be generated automatically. You still have to assign whatever URL you want to use to it, though, or you'll be comparing a Mock object against a real string in your assertion.
Take a look at the documentation involving spec if you want to learn more:
https://docs.python.org/3/library/unittest.mock-examples.html
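Putting that together, a minimal sketch of a test that should pass, reusing the URLs from the question (the path and query here are made up; getUrl strips them off, so the assertion compares against the bare base URL):

import requests
from unittest.mock import Mock

def testGetUrl(self):
    expectedUrl = 'http://www.some-new-url.com'
    # spec on a Response instance, so only real Response attributes exist
    mock_response = Mock(spec=requests.Response())
    mock_response.history = [Mock()]  # a truthy history simulates a redirect
    mock_response.url = 'http://www.some-new-url.com/some/path?query=1'
    self.assertEqual(getUrl(mock_response), expectedUrl)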
Let's say I want to display my own 404 & 500 pages. I've found two possibilities so far:
1: Using cherrypy.config.update
def error_page_404(status, message, traceback, version):
    return 'Error 404 Page not found'

def error_page_500(status, message, traceback, version):
    return 'Error:'

cherrypy.config.update({'error_page.404': error_page_404, 'error_page.500': error_page_500})
2: Using _cp_config:
from cherrypy import _cperror

def handle_error():
    cherrypy.response.status = 500
    cherrypy.log("handle_error() called. Alarm!", "WEBAPP")
    cherrypy.response.body = ['Sorry, an error occurred. The admin has been notified']
    error = _cperror.format_exc()

def error_page(status, message, traceback, version):
    cherrypy.log("error_page() called. Probably not very important.", "WEBAPP")
    return "Sorry, an error occurred."

class Root:
    _cp_config = {
        'error_page.default': error_page,
        'request.error_response': handle_error
    }
But is there a difference, or a recommendation on which one is preferable?
request.error_response allows you to set a handler for processing unexpected errors, like your own exceptions raised from HTTP handlers.
The callable that you'll set for this option will receive no arguments at all and you'll have to inspect sys.exc_info() for the details, to find out what happened.
You'll also have to set cherrypy.response.status and cherrypy.response.body by yourself, explicitly in your error handler.
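For example, a minimal sketch of such a handler (the message text is illustrative):

import sys
import cherrypy

def handle_error():
    # invoked with no arguments; the active exception is in sys.exc_info()
    exc_type, exc_value, _tb = sys.exc_info()
    cherrypy.response.status = 500
    cherrypy.response.body = [
        ('Sorry, an error occurred: %s' % exc_value).encode('utf-8')
    ]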
If you want to modify the error response for HTTP error codes (when instances of cherrypy.HTTPError are raised, like raise cherrypy.NotFound), you can use error_page.default (catch-all) or error_page.404 (error-specific) for handling those errors.
error_page options support both file path and callable values. In case of using a file path, the HTML template file can use the following substitution patterns: %(status)s, %(message)s, %(traceback)s, and %(version)s.
If you opt in to using a function, it'll receive those same values as arguments (callback(status, message, traceback, version)). The return value of this callable is then used as the HTTP response payload.
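For example, both forms could be configured like this (the template path is hypothetical):

import cherrypy

# file-path form: the template may use %(status)s, %(message)s,
# %(traceback)s and %(version)s placeholders
cherrypy.config.update({'error_page.404': '/path/to/error_404.html'})

# callable form: receives the same values as arguments
def error_page_500(status, message, traceback, version):
    return "Sorry, an error occurred: %s - %s" % (status, message)

cherrypy.config.update({'error_page.500': error_page_500})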
As you can see, these approaches have different implications and different levels of flexibility and usability. Choose whatever works for you. Internally, the default request.error_response uses error_page settings to figure out what to return. So if you redefine request.error_response, it'll not use error_page.* settings unless you explicitly make it do so.
See the docstring with some explanation here.