Problem:
I'm parsing a config file, and one of its fields should be parsed into an Enum. Let's say there are 3 possible values:
`foo`, `bar`, `baz`
And I want to parse them into the following enum's values:
class ConfigField(Enum):
    FOO = 'foo'
    BAR = 'bar'
    BAZ = 'baz'
What have I tried so far:
I have written the following function to do that:
def parse_config_field(x: str) -> ConfigField:
    for key, val in ConfigField.__members__.items():
        if x == val.value:
            return val
    else:
        raise ValueError('invalid ConfigField: ' + str(x))
But I think this is ugly and too complicated for something so simple and straightforward.
I also considered just matching against the lowercased enum members' names:
def parse_config_field2(x: str) -> ConfigField:
    value = ConfigField.__members__.get(x.upper(), None)
    if value is None:
        raise ValueError('invalid ConfigField: ' + str(x))
    else:
        return value
Which is slightly shorter, but still pretty ugly and, what's worse, it creates a direct dependency between the config values (in some text config file) and the ConfigField member names, which I'm not comfortable with.
I just recently switched from Python 2 to Python 3, so I'm not 100% familiar with Enums, and I'm hoping there may be a better way to do this.
Question:
Is there a better way to do this? I'm looking for a simpler (and perhaps more "built-in" / "pythonic") solution than my parse_config_field - perhaps something like:
value = ConfigField.get_by_value(cfg_string)
I believe you are looking for lookup by value: calling the enum class, ConfigField("foo"), gives <ConfigField.FOO: 'foo'>.
A ValueError will be raised when parsing a value that doesn't exist:
>>> ConfigField("not foo")
ValueError: 'not foo' is not a valid ConfigField
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "C:\Program Files\Python38\lib\enum.py", line 309, in __call__
return cls.__new__(cls, value)
File "C:\Program Files\Python38\lib\enum.py", line 600, in __new__
raise exc
File "C:\Program Files\Python38\lib\enum.py", line 584, in __new__
result = cls._missing_(value)
File "C:\Program Files\Python38\lib\enum.py", line 613, in _missing_
raise ValueError("%r is not a valid %s" % (value, cls.__name__))
ValueError: 'not foo' is not a valid ConfigField
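A runnable sketch of that lookup, re-declaring the enum from the question, plus a thin wrapper for anyone who wants to keep the original error message:

```python
from enum import Enum

class ConfigField(Enum):
    FOO = 'foo'
    BAR = 'bar'
    BAZ = 'baz'

# Calling the enum class performs the by-value lookup.
print(ConfigField('foo'))  # ConfigField.FOO

# Optional thin wrapper, if you prefer your own error message.
def parse_config_field(x: str) -> ConfigField:
    try:
        return ConfigField(x)
    except ValueError:
        raise ValueError('invalid ConfigField: ' + str(x)) from None
```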
Related
I am trying to assign values when a user belongs to a project (in this case the project name and the manager he is assigned to), but the compute method gives an error. I've tried logging the values in case one of them came through empty, but they definitely have the required data.
We can see in the logs:
root: Staff Augmentation - project_task.project_id.display_name
root: 120 - project_task.project_id.delivery_director.id
@api.depends('user_id')
@api.model
def _get_assigned_project(self):
    today = fields.Date.today()
    for employee in self:
        project_task = self.env['project.task'].search([
            ('user_id', '=', employee.user_id.id),
            ('status', '=', 'assigned'),
            ('date_end', '>', today),
            ('project_id.original_project_id', '=', False)],
            order='date_end desc', limit=1)
        if project_task:
            employee.project_name = project_task.project_id.display_name
            if project_task.project_id.role_id.name == 'Delivery Manager' and employee.base_manager != 0:
                employee.parent_id = project_task.project_id.delivery_director.id
            else:
                employee.parent_id = project_task.project_id.delivery_manager.id
        else:
            employee.project_name = ""
            if employee.base_manager != 0:
                employee.parent_id = employee.base_manager
This is the error:
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/odoo/src/odoo/odoo/http.py", line 640, in _handle_exception
return super(JsonRequest, self)._handle_exception(exception)
File "/home/odoo/src/odoo/odoo/http.py", line 316, in _handle_exception
raise exception.with_traceback(None) from new_cause
ValueError: Compute method failed to assign hr.employee(266,).project_name
You need to assign employee.parent_id in your compute method; if your if/else branches fail to assign a value to the field, this error comes up.
So check your method, put an else branch after your if condition, and make an assignment there like employee.parent_id = False.
If the compute method fails to assign a value in any case where we think it should, Odoo throws this error.
In short: provide a default value for the computed field in case it doesn't fulfil any of the assignment conditions.
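To illustrate the point outside Odoo (the Employee class and the dict-shaped task below are made up for the sketch, not Odoo API): assigning defaults before the branches guarantees the compute sets every field on every path:

```python
class Employee:
    """Hypothetical stand-in for an hr.employee record."""
    pass

def compute_assigned_project(employee, project_task=None):
    # Assign defaults first: every field the compute is responsible for
    # gets a value even when no branch below matches.
    employee.project_name = ""
    employee.parent_id = False
    if project_task:
        employee.project_name = project_task["name"]
        employee.parent_id = project_task["manager_id"]

e = Employee()
compute_assigned_project(e)  # no task found: fields are still assigned
print(e.project_name, e.parent_id)
```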
I coded a field (a) on class A which is meant to automatically take the contents of another field (b) on another class B.
After updating the module I am developing, I tried to fill in a form on Tryton and then save it.
But there was an error:
Traceback (most recent call last):
File "/trytond/wsgi.py", line 104, in dispatch_request
return endpoint(request, **request.view_args)
File "/trytond/protocols/dispatcher.py", line 48, in rpc
request, database_name, *request.rpc_params)
File "/trytond/wsgi.py", line 72, in auth_required
return wrapped(*args, **kwargs)
File "/trytond/protocols/wrappers.py", line 131, in wrapper
return func(request, pool, *args, **kwargs)
File "/trytond/protocols/dispatcher.py", line 197, in _dispatch
result = rpc.result(meth(*c_args, **c_kwargs))
File "/trytond/model/modelsql.py", line 832, in read
getter_results = field.get(ids, cls, field_list, values=result)
File "/trytond/model/fields/function.py", line 106, in get
return dict((name, call(name)) for name in names)
File "/trytond/model/fields/function.py", line 106, in <genexpr>
return dict((name, call(name)) for name in names)
File "/trytond/model/fields/function.py", line 101, in call
return dict((r.id, method(r, name)) for r in records)
File "/trytond/model/fields/function.py", line 101, in <genexpr>
return dict((r.id, method(r, name)) for r in records)
File "/trytond/modules/module_designing/design.py", line 15702, in On_change_design
('Description', '=', self.id),
ValueError: not enough values to unpack (expected 1, got 0)
The method mentioned in the error is this (I use this method on my field (b) on class B to fetch another field (a) on another class A):
def On_change_design(self, Name):
    Design = Pool().get('design.classA')
    design, = Design.search([
        ('classB', '=', self.id),
    ])
    return design.id
field(b) = fields.Function(fields.Many2One('design.classA', 'test'), 'On_change_design')
field (b) is the one that will take the content of field (a).
This is how I coded field (a):
field(a) = fields.Function(fields.Char('area '),'on_change_parameters')
Any help will be appreciated; I want to know what's wrong and what I should do.
Or can anyone tell me how to code the onchange method so that field (b) automatically takes the contents of field (a) from class A?
Function fields are computed after you save. In your function you are performing a search on a related table and unpacking the result. This is no problem when the search returns a single record, but in your case the search does not return any record, which makes the code crash.
You should use safer code that tests whether the search returns any result before unpacking. Something like this:
def on_change_design(self, Name):
    Design = Pool().get('design.classA')
    designs = Design.search([
        ('classB', '=', self.id),
    ], limit=1)
    if designs:
        design, = designs
        return design.id
    return None
Note that I also added a limit on the search to ensure at most one record is returned. This will also prevent a crash when multiple records are returned, though you may want a different behaviour in that case. I also added an explicit None return to make it clear that the function returns None when no record is found.
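The one-element unpacking this answer guards against can be reproduced in plain Python:

```python
# Unpacking one target from an empty sequence raises exactly the error
# from the traceback above.
try:
    record, = []
except ValueError as e:
    print(e)  # not enough values to unpack (expected 1, got 0)

record, = ["only"]  # succeeds when exactly one element is present
print(record)
```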
I'm attempting to add typing to a method that returns a generator. Whenever I run this program with the return type specified, a TypeError is raised.
Adding quotes or removing the typing fixes the error, but this seems like a hack. Surely there is a correct way of doing this.
def inbox_files(self) -> "Generator[RecordsFile]":
    ...

# OR

def inbox_files(self):
    ...
from typing import Generator, List
from .records_file import RecordsFile

class Marshaller:
    ...
    def inbox_files(self) -> Generator[RecordsFile]:
        return self._search_directory(self._inbox)

    def _search_directory(self, directory: str) -> RecordsFile:
        for item_name in listdir(directory):
            item_path = path.join(item_name, directory)
            if path.isdir(item_path):
                yield from self._search_directory(item_path)
            elif path.isfile(item_path):
                yield RecordsFile(item_path)
            else:
                print(f"[WARN] Unknown item found: {item_path}")
The following stack trace is produced:
Traceback (most recent call last):
File "./bin/data_marshal", line 8, in <module>
from src.app import App
File "./src/app.py", line 9, in <module>
from .marshaller import Marshaller
File "./src/marshaller.py", line 9, in <module>
class Marshaller:
File "./src/marshaller.py", line 29, in Marshaller
def inbox_files(self) -> Generator[RecordsFile]:
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/typing.py", line 254, in inner
return func(*args, **kwds)
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/typing.py", line 630, in __getitem__
_check_generic(self, params)
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/typing.py", line 208, in _check_generic
raise TypeError(f"Too {'many' if alen > elen else 'few'} parameters for {cls};"
TypeError: Too few parameters for typing.Generator; actual 1, expected 3
¯\_(ツ)_/¯
You have to explicitly specify the send type and the return type, even if both are None.
def inbox_files(self) -> Generator[RecordsFile, None, None]:
    return self._search_directory(self._inbox)
Note that the yield type is what you might think of as the return type. The send type is the type of value you can pass to the generator's send method. The return type is the type of value that could be embedded in the StopIteration exception raised by next after all possible values have been yielded. Consider:
def foo():
    yield 3
    return "hi"

f = foo()
The first call to next(f) will return 3; the second will raise StopIteration("hi").
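That behaviour is easy to check directly; the generator's return value rides along on the StopIteration exception's .value attribute:

```python
def foo():
    yield 3
    return "hi"

f = foo()
print(next(f))        # 3
try:
    next(f)
except StopIteration as exc:
    print(exc.value)  # hi
```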
A generator that you cannot send into or get a return value from is simply an iterable or an iterator (either, apparently, can be used).
def inbox_files(self) -> Iterable[RecordsFile]:  # Or Iterator[RecordsFile]
    return self._search_directory(self._inbox)
_search_directory itself also returns a generator/iterable, not an instance of RecordsFile:
def _search_directory(self, directory: str) -> Iterable[RecordsFile]:
This answer was useful, but I was confused since I was sure I had used Generator[] with just one parameter in the past and it worked.
I traced it back to using "from __future__ import annotations". Only one parameter seems to be required in that case.
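That matches PEP 563 semantics: with the future import, the annotation is stored as a plain string and never evaluated at definition time, so the parameter-count check inside typing never runs. A small check (on the 3.7/3.8 interpreters from the tracebacks above, removing the future import makes the def raise the same TypeError):

```python
from __future__ import annotations  # PEP 563: postponed evaluation of annotations
from typing import Generator

def inbox_files() -> Generator[int]:  # annotation not evaluated, so no TypeError at def time
    yield 1

# The annotation survives as the literal source string, unevaluated.
print(inbox_files.__annotations__["return"])  # Generator[int]
```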
I have a Scrapy script that works locally, but when I deploy it to Scrapinghub, it throws errors. Upon debugging, the error comes from yielding the item.
This is the error I get:
ERROR [scrapy.utils.signal] Error caught on signal handler: <bound method ?.item_scraped of <sh_scrapy.extension.HubstorageExtension object at 0x7fd39e6141d0>>
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
result = f(*args, **kw)
File "/usr/local/lib/python2.7/site-packages/pydispatch/robustapply.py", line 55, in robustApply
return receiver(*arguments, **named)
File "/usr/local/lib/python2.7/site-packages/sh_scrapy/extension.py", line 45, in item_scraped
item = self.exporter.export_item(item)
File "/usr/local/lib/python2.7/site-packages/scrapy/exporters.py", line 304, in export_item
result = dict(self._get_serialized_fields(item))
File "/usr/local/lib/python2.7/site-packages/scrapy/exporters.py", line 75, in _get_serialized_fields
value = self.serialize_field(field, field_name, item[field_name])
File "/usr/local/lib/python2.7/site-packages/scrapy/exporters.py", line 284, in serialize_field
return serializer(value)
File "/usr/local/lib/python2.7/site-packages/scrapy/exporters.py", line 290, in _serialize_value
return dict(self._serialize_dict(value))
File "/usr/local/lib/python2.7/site-packages/scrapy/exporters.py", line 300, in _serialize_dict
key = to_bytes(key) if self.binary else key
File "/usr/local/lib/python2.7/site-packages/scrapy/utils/python.py", line 117, in to_bytes
'object, got %s' % type(text).__name__)
TypeError: to_bytes must receive a unicode, str or bytes object, got int
It doesn't specify the field with issues, but by process of elimination, I came to realize it's this part of the code:
try:
    item["media"] = {}
    media_index = 0
    media_content = response.xpath("//audio/source/@src").extract_first()
    if media_content is not None:
        item["media"][media_index] = {}
        preview = item["media"][media_index]
        preview["Media URL"] = media_content
        preview["Media Type"] = "Audio"
        media_index += 1
except IndexError:
    print "Index error for media " + item["asset_url"]
I cleaned some parts up to make it easier to tackle, but basically this part is the issue; there's something it doesn't like about the item media.
I'm a beginner in both Python and Scrapy, so sorry if this turns out to be a silly basic Python mistake. Any idea?
EDIT: So after getting the answer from ThunderMind, the solution was to simply use str(media_index) for the key.
Yeah, right here:
item["media"][media_index] = {}
media_index is an int. An int is actually a perfectly valid dict key in Python; the real problem is that the exporter serializes every key with to_bytes, which only accepts text, as the traceback says: "to_bytes must receive a unicode, str or bytes object, got int". Use a string key instead, e.g. str(media_index).
Read the Python docs on dict to know what can be used as keys.
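A minimal stand-in (no Scrapy involved) for what the EDIT describes: keep the nested keys as text so a serializer that only accepts text keys doesn't choke. The URL is made up for the sketch:

```python
item = {"media": {}}
media_index = 0

# int keys are legal for a Python dict, but an exporter that runs
# to_bytes() over each key only accepts text, so store the index as a string.
item["media"][str(media_index)] = {
    "Media URL": "http://example.com/a.mp3",  # hypothetical URL
    "Media Type": "Audio",
}
print(item["media"]["0"]["Media Type"])  # Audio
```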
I have an operation using a dictionary that I want to parallelize, but multiprocessing.map is causing me headaches.
def dict_segmentor(dictionnary, n_processes):
    print("len dictionnary")
    print(len(dictionnary))
    if len(dictionnary) < n_processes:
        seg_size = len(dictionnary)
    else:
        seg_size = len(dictionnary) // n_processes
    print("segmenting dictionnary")
    print("seg_size " + str(seg_size))
    print("len(dictionnary) " + str(len(dictionnary)))
    itemlist = list(dictionnary.items())
    seg_ranges = [dict(itemlist[s:s+seg_size]) for s in range(1, len(dictionnary)+1, seg_size)]
    print("finished")
    return seg_ranges
def multiprocess_calc(n_processes, dictionnary, struc):
    dictionnary = dictionnary
    struc = struc
    seg_ranges1 = dict_segmentor(dictionnary, n_processes)
    # this is invoked to break the dict into a list of dicts. Works as expected
    print("seg_range_check")
    print("seg_ranges1 {}".format(type(seg_ranges1)))     # Returns a list as expected
    print("seg_ranges1 {}".format(type(seg_ranges1[0])))  # Returns a dict as expected
    print("seg_ranges1 {}".format(len(seg_ranges1)))      # Returns expected len=1
    print("seg_ranges1 {}".format(len(seg_ranges1[0])))   # Returns expected len
    processes = multiprocessing.Pool(n_processes)
    print("Mapping Building")
    processes.map(Builder, seg_ranges1, 1)
def main():
    file_multiprocess = 'pref_multiprocess.csv'
    n_CPUs = multiprocessing.cpu_count()
    n_processes = n_CPUs - 1
    print("\nNumber of CPUs detected:", n_CPUs)
    multiprocess_calc(n_processes, file_multiprocess, struc)

if __name__ == '__main__':
    main()
Here is the complete traceback:
Traceback (most recent call last):
File "<ipython-input-37-d0279721826c>", line 1, in <module>
runfile('Pyscript.py', wdir='C:/Python Scripts')
File "C:\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 714, in runfile
execfile(filename, namespace)
File "C:\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 89, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "Pyscript.py", line 1033, in <module>
main()
File "Pyscript.py", line 1025, in main
multiprocess_calc(n_processes, dictionnary,struc)
File "Pyscript.py", line 911, in multiprocess_calc
processes.map(Builder, seg_ranges1,1)
File "C:\Anaconda3\lib\multiprocessing\pool.py", line 260, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "C:\Anaconda3\lib\multiprocessing\pool.py", line 608, in get
raise self._value
TypeError: unorderable types: list() > int()
I don't understand, even after reading this carefully (https://docs.python.org/3.5/library/multiprocessing.html#module-multiprocessing).
Each chunk is a dict, so it should be sent through map to the builder.
But instead I get that error, and the traceback doesn't help. I looked into the code of pool.py, but no luck, and my builder is not involved, since its first operation (a control print) is not even displayed. The Builder function seems to be totally ignored (and there is not even a syntax error).
So I conclude this is a problem with map.
In case I have misunderstood the multiprocessing.map function, and it first makes a chunk and then iterates over it, applying the function to each sub-element: what function of multiprocessing could I use instead? apply uses only one thread. Would that mean I should do this manually?
Please feel free to correct my code and give me some insights. Thanks in advance.
Edit: here is the builder function:
def Builder(dictionary, struc=struc):
#def Builder(keys, dictionary=dictionary, struc=struc): #Alternative
    # Note, I even tried to use only the keys, passing the dictionary from a global variable, but it didn't work
    print("Building Tree")  # Not even displayed
    print("type dictionary" + str(type(dictionary)))
    frags = 0
    try:
        if True:
            print("Building")
            #for id in keys: #Alternative
            for id in dictionary:
                seq = dictionary[id]
                for i in range(3):
                    frags += 1
                    if len(seq) <= 3:
                        break
                    seq = seq[i:-i]
                    struc.append(seq)
            print("Number of frags found {}".format(frags))
    except TypeError as e:
        print(e)
        print("error in Builder")
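For what it's worth, Pool.map does pass each dict of such a list to the worker whole; that dispatch can be sanity-checked with a toy builder (using the thread-backed multiprocessing.dummy.Pool so the sketch runs anywhere; the function and data are made up, not the asker's):

```python
from multiprocessing.dummy import Pool  # thread pool with the same map API as multiprocessing.Pool

def builder(segment):
    # Each task receives one whole dict from the list.
    return sum(len(v) for v in segment.values())

segments = [{"a": "xx"}, {"b": "yyy", "c": "z"}]
with Pool(2) as pool:
    results = pool.map(builder, segments, 1)  # chunksize=1: one dict per task
print(results)  # [2, 4]
```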