Python (Flask / marshmallow) ValueError: too many values to unpack (expected 2)

I am working on a Flask project and I am using marshmallow to validate user input.
Below is a code snippet:
def create_user():
    in_data = request.get_json()
    data, errors = Userschema.load(in_data)
    if errors:
        return (errors), 400
    fname = data.get('fname')
    lname = data.get('lname')
    email = data.get('email')
    password = data.get('password')
    cpass = data.get('cpass')
When I eliminate the errors part, the code works perfectly. When I run it as it is, I get the following error:
builtins.ValueError
ValueError: too many values to unpack (expected 2)

Traceback (most recent call last):
  File "/home/..project-details.../venv3/lib/python3.6/site-packages/flask/app.py", line 2000, in __call__
    return self.wsgi_app(environ, start_response)
Note: The var in_data is a dict.
Any ideas??

I recommend you check your dependency versions.
Per the marshmallow API reference for Schema.load:
Changed in version 3.0.0b7: This method returns the deserialized data rather than a (data, errors) duple. A ValidationError is raised if invalid data are passed.
I suspect Python is trying to unpack the dict (now returned as a single object) into two variables. Unpacking a dict iterates over its keys, so with more than two keys there are too many values to fit into data and errors. The snippet below reproduces the error:
d = dict()
d['fname'] = 'John'
d['lname'] = 'Doe'
d['email'] = 'john@example.com'
data, errors = d  # ValueError: too many values to unpack (expected 2)

According to the documentation, in its most recent version (3.17.1) the way to handle validation errors is as follows:
from marshmallow import ValidationError

try:
    result = UserSchema().load({"name": "John", "email": "foo"})
except ValidationError as err:
    print(err.messages)    # => {"email": ['"foo" is not a valid email address.']}
    print(err.valid_data)  # => {"name": "John"}
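Applied to the view in the question, a minimal sketch under marshmallow 3 might look like the following (assuming Userschema from the question is a marshmallow Schema class and that jsonify is imported from Flask):

from flask import request, jsonify
from marshmallow import ValidationError

def create_user():
    in_data = request.get_json()
    try:
        # marshmallow 3: load() returns only the deserialized data
        data = Userschema().load(in_data)
    except ValidationError as err:
        # err.messages holds the per-field validation errors
        return jsonify(err.messages), 400
    fname = data.get('fname')
    lname = data.get('lname')
    email = data.get('email')
    # ... continue as in the original view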


TypeError: Object of type TypeError is not JSON serializable Python

So I have JSON request with format like this:
{
"item_ids": [635,10,692,194,9412],
"gender": "male",
"number_results": 5
}
I'm trying to parse the array in "item_ids", but I got an error message like in the title. This is my code:
resto_id = json.loads['item_ids']
data = json.dumps(resto_id)
I also tried this:
response = requests.get("http://127.0.0.1:8520/recommend_multi")
users = json.loads(response.text)
data = users['item_ids']
But it gave me an error:
TypeError: Object of type JSONDecodeError is not JSON serializable
Edit: Maybe this will help:
@app.route('/recommend_multi', methods=['POST'])
def recommend_multi():
    dct = {}
    new_user = 'newusername'
    try:
        e = ""
        resto_id = json.loads['item_ids']
        data = json.dumps(resto_id)
        # response = requests.get("http://127.0.0.1:8520/recommend_multi")
        # users = json.loads(response.text)
        # data = users['item_ids']
        gender = request.json['gender']
        resto_rec = float(request.json['number_results'])
        input_dict = {'id_resto': data,
                      'gender': [gender, gender, gender, gender, gender],
                      'username': [new_user, new_user, new_user, new_user, new_user]}
        dct = {"items": input_dict}
        dct2 = {"data": dct, "message": "sukses", "success": True}
    except Exception as e:
        dct2 = {"data": dct, "message": e, "success": False}
    return jsonify(dct2)
And this is the traceback:
I run it with Docker, and for the request I'm using Insomnia.
The problem is in this snippet:
except Exception as e:
    dct2 = {"data": dct, "message": e, "success": False}
You are basically trying to JSON-serialize the exception e, which is not possible. You need to use something that is JSON serializable, such as the string representation of the exception, for example by using str(e):
except Exception as e:
    dct2 = {"data": dct, "message": str(e), "success": False}
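A quick self-contained illustration of the difference (standard library only; the exception instance here is just an example):

import json

err = TypeError("something went wrong")

try:
    json.dumps({"message": err})  # the exception object itself is not serializable
except TypeError as exc:
    print(exc)  # Object of type TypeError is not JSON serializable

print(json.dumps({"message": str(err)}))  # {"message": "something went wrong"}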
First, thanks to @bdbd for continuing to respond to me. The exception fix helped me clean up my code, but in the end I managed to debug it and found the solution that resolved my problem with retrieving the array from the JSON object. So instead of this:
resto_id = json.loads['item_ids']
data = json.dumps(resto_id)
I needed to get the data from the request first:
resto_id = request.json
data = json.dumps(resto_id)
data2 = json.loads(data)
#Then retrieve the array
data2["item_ids"]

Flask SQLAlchemy: filter_by() takes 1 positional argument but 2 were given

I have the following lines in my Python API, which delete a function created by a user from the Postgres DB upon request.
@func_app.route("/delete", methods=["POST"])
def delete_func():
    try:
        JSON = request.get_json()
        user_func = function_table.query.filter_by(
            created_by=token_payload["email"], functionid=JSON["funcId"]
        ).all()
        functionid = JSON["funcId"]
        func_detail = function_table.query.filter_by(functionid).first()
        user = users_table.query.filter_by(email=token_payload["username"]).first()
        if len(user_func) == 0:
            log_metric(g.request_id + " " + "No functions found")
            return make_response(
                jsonify("User does not have any functions. Please try again later."),
                204,
            )
        else:
            function_table.query.filter_by(functionid).delete()
            db.session.commit()
    except Exception as err:
        db.session.rollback()
        log_metric(g.request_id + " " + str(err))
        return make_response(
            jsonify(
                "Unable to process your request at the moment. Please try again later."
            ),
            400,
        )
    finally:
        db.session.close()
I have used filter_by similarly before and didn't run into any issue there. Can anyone help me figure out what went wrong?
Thanks!
You need to pass the column name together with the value in filter_by; it takes **kwargs (keyword) arguments, not *args-style positional input.
You need to change
func_detail = function_table.query.filter_by(functionid).first()
to
func_detail = function_table.query.filter_by(id=functionid).first()
considering id is the column name (in the code above the column appears to be functionid, so filter_by(functionid=functionid) would be the equivalent).
For more information, see the SQLAlchemy documentation.
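For reference, here is a minimal sketch of the two query styles, assuming a model class function_table with a functionid column as in the question:

# filter_by takes keyword arguments of the form column_name=value
func_detail = function_table.query.filter_by(functionid=functionid).first()

# filter takes SQL expressions built from the model's columns
func_detail = function_table.query.filter(function_table.functionid == functionid).first()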

forward return value of x-apikeyInfoFunc to operation

I'm using OpenAPI to define my API. I just added this security scheme:
securitySchemes:
  api_key:
    type: apiKey
    name: X-Auth
    in: header
    x-apikeyInfoFunc: apikey_auth
where apikey_auth is defined like this:
def apikey_auth(token, required_scopes):
    decrypted_token = None
    try:
        decrypted_token = mydecrypter.decrypt(token)
    except InvalidToken:
        raise OAuthProblem('Invalid token')
    return {'decrypted_token': decrypted_token}
Now I'd like to use this authentication for my actual endpoints, which are defined in OpenAPI like this:
/myendpoint:
  get:
    operationId: operation
    # more stuff
    security:
      - api_key: []
When calling myendpoint now, the authentication is done and works as expected. What I would like is the return value of apikey_auth to be passed into the call of operation, so I can access decrypted_token in operation like this:
def operation(decrypted_token):
    data = get_data_for_token(decrypted_token)
    return data
Does anyone have an idea if this is possible somehow, without having an extra parameter in the endpoint definition?
Solved. For whatever reason, with these changes it works:
def apikey_auth(token, required_scopes):
    decrypted_token = None
    try:
        decrypted_token = mydecrypter.decrypt(token)
    except InvalidToken:
        raise OAuthProblem('Invalid token')
    return {'sub': decrypted_token}

def operation(user):
    data = get_data_for_token(user)
    return data
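If the spec is being served by Connexion (the framework that defines the x-apikeyInfoFunc extension, so this is an assumption about your setup), the dict returned by the security function is exposed to the operation: the value under the 'sub' key is passed as the user argument, and the whole dict can be received as token_info if the view function declares that parameter. A minimal sketch:

def operation(user, token_info):
    # user is token_info['sub']; token_info is the full dict returned by apikey_auth
    data = get_data_for_token(user)
    return data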

try except not catching on function?

I am getting this valid error while preprocessing some data:
9:46:56.323 PM default_model Function execution took 6008 ms, finished with status: 'crash'
9:46:56.322 PM default_model Traceback (most recent call last):
  File "/user_code/main.py", line 31, in default_model
    train, endog, exog, _, _, rawDf = preprocess(ledger, apps)
  File "/user_code/Wrangling.py", line 73, in preprocess
    raise InsufficientTimespanError(args=(appDf, locDf))
That's occurring here:
async def default_model(request):
    request_json = request.get_json()
    if not request_json:
        return '{"error": "empty body." }'
    if 'transaction_id' in request_json:
        transaction_id = request_json['transaction_id']
    apps = []  # array of apps whose predictions we want, or empty for all
    if 'apps' in request_json:
        apps = request_json['apps']
    modelUrl = None
    if 'files' in request_json:
        try:
            files = request_json['files']
            modelUrl = getModelFromFiles(files)
        except:
            return package(transaction_id, error="no model to execute")
    else:
        return package(transaction_id, error="no model to execute")
    if 'ledger' in request_json:
        ledger = request_json['ledger']
        try:
            train, endog, exog, _, _, rawDf = preprocess(ledger, apps)
            # ...
        except InsufficientTimespanError as err:
            return package(transaction_id, error=err.message, appDf=err.args[0], locDf=err.args[1])
And preprocess is correctly throwing my custom error:
def preprocess(ledger, apps=[]):
    """
    convert ledger from the server, which comes in as an array of csv entries.
    normalize/resample timeseries, returning dataframes
    """
    appDf, locDf = splitLedger(ledger)
    if len(appDf) < 3 or len(locDf) < 3:
        raise InsufficientDataError(args=(appDf, locDf))
    endog = appDf['app_id'].unique().tolist()
    exog = locDf['location_id'].unique().tolist()
    rawDf = normalize(appDf, locDf)
    trainDf = cutoff(rawDf.copy(), apps)
    rawDf = cutoff(rawDf.copy(), apps, trim=False)
    # TODO - uncomment when on realish data
    if len(trainDf) < 2 * WEEKS:
        raise InsufficientTimespanError(args=(appDf, locDf))
The thing is, it is in a try/except block precisely because I want to trap the error and return a payload describing it, rather than crashing with a 500 error. But it's crashing on my custom error, inside the try block, anyway, right on that line calling preprocess.
This must be a failure on my part to conform to proper Python code, but I'm not sure what I am doing wrong. The environment is Python 3.7.
Here's where that error is defined, in Wrangling.py:
class WranglingError(Exception):
    """Base class for other exceptions"""
    pass

class InsufficientDataError(WranglingError):
    """insufficient data to make a prediction"""
    def __init__(self, message='insufficient data to make a prediction', args=None):
        super().__init__(message)
        self.message = message
        self.args = args

class InsufficientTimespanError(WranglingError):
    """insufficient timespan to make a prediction"""
    def __init__(self, message='insufficient timespan to make a prediction', args=None):
        super().__init__(message)
        self.message = message
        self.args = args
And here is how main.py declares (imports) it:
from Wrangling import preprocess, InsufficientDataError, InsufficientTimespanError, DataNotNormal, InappropriateValueToPredict
Is your deployed preprocess function (or something it calls) declared async? If so, the code in it isn't actually run where you call preprocess; it only runs when the returned coroutine is eventually awaited or handed to an event loop (like asyncio.run). Because the place where it runs is no longer inside the try block in default_model, the exception is not caught there.
You could fix this in a few ways:
make preprocess not async
make default_model await preprocess (default_model is already declared async, so it can).
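To illustrate the async point, here is a small self-contained sketch (hypothetical names, not the code from the question) showing that an exception raised inside an async function surfaces where the coroutine is awaited, not where it is called:

import asyncio

class CustomError(Exception):
    pass

async def preprocess_async():
    raise CustomError("raised inside the coroutine")

def call_without_await():
    try:
        coro = preprocess_async()  # only creates a coroutine object; the body has not run yet
    except CustomError:
        print("never reached")
    return coro

async def main():
    try:
        await call_without_await()  # the body runs here, so this is where it raises
    except CustomError as err:
        print("caught where awaited:", err)

asyncio.run(main())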
Do the line numbers in the error match up with the line numbers in your code? If not, is it possible that you are seeing the error from a version of the code that predates the try...except?

Trapping a custom error in python

I'm trying to trap the following error in a try/except block, but it is a custom module that generates the error, not a standard error such as ValueError. What is the correct way to catch such errors?
Here is my code:
try:
    obj = IPWhois(ip_address)
except Exception(IPDefinedError):
    results = {}
else:
    results = obj.lookup()
The most obvious way:
except IPDefinedError:
gives:
NameError: name 'IPDefinedError' is not defined
The error returned that I want to check for is:
ipwhois.exceptions.IPDefinedError
ipwhois.exceptions.IPDefinedError: IPv4 address '127.0.0.1' is already defined as 'Private-Use Networks' via 'RFC 1918'.
The issue here is the import!
I had the import as
from ipwhois import IPWhois
but I also needed
import ipwhois
So the following works:
try:
    obj = IPWhois(ip_address)
except ipwhois.exceptions.IPDefinedError:
    results = {}
else:
    results = obj.lookup()
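A small variation on the same fix (assuming the package layout shown in the error message, where the exception lives in ipwhois.exceptions) is to import the exception class directly, which keeps the except clause short:

from ipwhois import IPWhois
from ipwhois.exceptions import IPDefinedError

try:
    obj = IPWhois(ip_address)
except IPDefinedError:
    results = {}
else:
    results = obj.lookup()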
Here is a quick recap. Yes, the error in your question did look like it was likely related to an import issue (as per my comment ;) ).
from pprint import pprint as pp

class IPDefinedError(Exception):
    """Mock IPDefinedError implementation
    """
    pass

class IPWhois(object):
    """Mock IPWhois implementation
    """
    def __init__(self, ip_address):
        if ip_address == "127.0.0.1":
            raise IPDefinedError(
                "IPv4 address '127.0.0.1' is already defined as 'Private-Use Networks' via 'RFC 1918'.")
        self._ip_address = ip_address

    def lookup(self):
        return "RESULT"

def lookup(ip_address):
    """ calculates IPWhois lookup result or None if unsuccessful
    :param ip_address:
    :return: IPWhois lookup result or None if unsuccessful
    """
    result = None
    try:
        obj = IPWhois(ip_address)
        result = obj.lookup()
    except IPDefinedError as e:
        msg = str(e)
        print("Error received: {}".format(msg))  # do something with msg
    return result

if __name__ == '__main__':
    results = map(lookup, ["192.168.1.1", "127.0.0.1"])
    pp(list(results))  # ['RESULT', None]
