Python deadlock error - Trying to retrieve key value from shelve

Here is my code where I am updating a record in the shelve:
def updateRecord(db, form):
    if not 'key' in form:
        fields = dict.fromkeys(fieldnames, '?')
        fields['key'] = 'Missing key input'
    else:
        key = form['key'].value
        if key in db:
            record = db[key]
        else:
            from person import Person
            record = Person(name='?', age='?')
        for field in fieldnames:
            setattr(record, field, eval(form[field].value))
        db[key] = record
        fields = record.__dict__
        fields['key'] = key
    return fields
When I try to retrieve the value from the shelve I get this error:
>>> import shelve
>>> db = shelve.open('class-shelve')
>>> db['sue'].name
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/shelve.py", line 121, in __getitem__
f = StringIO(self.dict[key])
File "/usr/lib/python2.7/bsddb/__init__.py", line 270, in __getitem__
return _DeadlockWrap(lambda: self.db[key]) # self.db[key]
File "/usr/lib/python2.7/bsddb/dbutils.py", line 68, in DeadlockWrap
return function(*_args, **_kwargs)
File "/usr/lib/python2.7/bsddb/__init__.py", line 270, in <lambda>
return _DeadlockWrap(lambda: self.db[key]) # self.db[key]
KeyError: 'sue'
Any insights into what's going on?

Assuming that the db variable in the first snippet is a 'shelf' object, then although the line...
db[key] = record
...will add the new key/value pair to the 'shelf', it won't necessarily flush the contents to disk, so it won't be available to other processes sharing the same 'shelf file'.
You can force the 'shelf file' to be written to disk by adding the line...
db.sync()
...after adding the new key/value pair, but it can be quite slow when your 'shelf file' gets large, so you may not want to call it too frequently.
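For example, a minimal sketch of that write-then-sync pattern (the 'sue' key and the Person values here are just illustrative):
import shelve
from person import Person  # the same class updateRecord() imports

db = shelve.open('class-shelve')
db['sue'] = Person(name='Sue', age=30)  # illustrative record
db.sync()   # force the pending write out to the shelf file now
db.close()  # close() also flushes and releases the underlying database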

Related

ValueError: not enough values to unpack (expected 1, got 0)

I coded a field(a) on classA which should automatically take the contents of another field(b) on another class, classB.
After updating the module I am developing, I tried to fill in a form on Tryton and then save it.
But there was an error:
Traceback (most recent call last):
File "/trytond/wsgi.py", line 104, in dispatch_request
return endpoint(request, **request.view_args)
File "/trytond/protocols/dispatcher.py", line 48, in rpc
request, database_name, *request.rpc_params)
File "/trytond/wsgi.py", line 72, in auth_required
return wrapped(*args, **kwargs)
File "/trytond/protocols/wrappers.py", line 131, in wrapper
return func(request, pool, *args, **kwargs)
File "/trytond/protocols/dispatcher.py", line 197, in _dispatch
result = rpc.result(meth(*c_args, **c_kwargs))
File "/trytond/model/modelsql.py", line 832, in read
getter_results = field.get(ids, cls, field_list, values=result)
File "/trytond/model/fields/function.py", line 106, in get
return dict((name, call(name)) for name in names)
File "/trytond/model/fields/function.py", line 106, in <genexpr>
return dict((name, call(name)) for name in names)
File "/trytond/model/fields/function.py", line 101, in call
return dict((r.id, method(r, name)) for r in records)
File "/trytond/model/fields/function.py", line 101, in <genexpr>
return dict((r.id, method(r, name)) for r in records)
File "/trytond/modules/module_designing/design.py", line 15702, in On_change_design
('Description', '=', self.id),
ValueError: not enough values to unpack (expected 1, got 0)
The method mentioned in the error is this (I used this method on field(b) of class B to call another field(a) on another class A):
def On_change_design(self, Name):
    Design = Pool().get('design.classA')
    design, = Design.search([
        ('classB', '=', self.id),
        ])
    return design.id
field(b) = fields.Function(fields.Many2One('design.classA', 'test'), 'On_change_design')
field(b) is the field which will take the contents of field(a).
This is how I coded field(a):
field(a) = fields.Function(fields.Char('area '),'on_change_parameters')
Any help will be appreciated; I want to know what's wrong and what I should do.
Or can anyone tell me how to code the on_change method so that field(b) automatically takes the contents of field(a) from the other class A?
Function fields are computed after you save. In your function you are performing a search on a related table and unpacking the result. This is no problem when the search returns a single record, but in your case the search does not return any record, which makes the code crash.
You should use safer code that tests whether the search returned any result before unpacking. Something like this:
def on_change_design(self, Name):
    Design = Pool().get('design.classA')
    designs = Design.search([
        ('classB', '=', self.id),
        ], limit=1)
    if designs:
        design, = designs
        return design.id
    return None
Note that I also added a limit on the search to ensure a maximum of one record is returned. This will also prevent a crash when multiple records are returned, but you may want a different behaviour there. I also added an explicit None return to make it clear that the function returns None when no record is found.
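If you adopt the lowercase method name from this snippet, the Function field declaration on class B must point at the new name too. A minimal sketch using the question's placeholder names (field_b stands in for the real attribute name):
field_b = fields.Function(
    fields.Many2One('design.classA', 'test'),  # same target model and label as in the question
    'on_change_design')                        # getter renamed to match the method above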

Scrapinghub: getting "Error caught on signal handler: <bound method ?" on yield

I have a Scrapy script that works locally, but when I deploy it to Scrapinghub it throws errors for every item. Upon debugging, the error comes from yielding the item.
This is the error I get.
ERROR [scrapy.utils.signal] Error caught on signal handler: <bound method ?.item_scraped of <sh_scrapy.extension.HubstorageExtension object at 0x7fd39e6141d0>>
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
result = f(*args, **kw)
File "/usr/local/lib/python2.7/site-packages/pydispatch/robustapply.py", line 55, in robustApply
return receiver(*arguments, **named)
File "/usr/local/lib/python2.7/site-packages/sh_scrapy/extension.py", line 45, in item_scraped
item = self.exporter.export_item(item)
File "/usr/local/lib/python2.7/site-packages/scrapy/exporters.py", line 304, in export_item
result = dict(self._get_serialized_fields(item))
File "/usr/local/lib/python2.7/site-packages/scrapy/exporters.py", line 75, in _get_serialized_fields
value = self.serialize_field(field, field_name, item[field_name])
File "/usr/local/lib/python2.7/site-packages/scrapy/exporters.py", line 284, in serialize_field
return serializer(value)
File "/usr/local/lib/python2.7/site-packages/scrapy/exporters.py", line 290, in _serialize_value
return dict(self._serialize_dict(value))
File "/usr/local/lib/python2.7/site-packages/scrapy/exporters.py", line 300, in _serialize_dict
key = to_bytes(key) if self.binary else key
File "/usr/local/lib/python2.7/site-packages/scrapy/utils/python.py", line 117, in to_bytes
'object, got %s' % type(text).__name__)
TypeError: to_bytes must receive a unicode, str or bytes object, got int
It doesn't specify the field with issues, but by process of elimination, I came to realize it's this part of the code:
try:
    item["media"] = {}
    media_index = 0
    media_content = response.xpath("//audio/source/@src").extract_first()
    if media_content is not None:
        item["media"][media_index] = {}
        preview = item["media"][media_index]
        preview["Media URL"] = media_content
        preview["Media Type"] = "Audio"
        media_index += 1
except IndexError:
    print "Index error for media " + item["asset_url"]
I trimmed some parts to make it easier to tackle, but basically this part is the issue; there is something it doesn't like about the item's media.
I'm a beginner in both Python and Scrapy, so sorry if this turns out to be a silly basic Python mistake. Any ideas?
EDIT: After getting the answer from ThunderMind, the solution was simply to use str(media_index) for the key.
Yeah, right here:
item["media"][media_index] = {}
media_index is an int, and the exporter used on Scrapinghub serializes dict keys with to_bytes(), which only accepts unicode, str or bytes (exactly what the traceback says). An int key therefore fails; use a string key such as str(media_index) instead.
Read up on Python dicts to see what is commonly used as keys.
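As a sketch, the inner block from the question would then look something like this (same response, item and media_index variables as above; the only real change is the string key):
media_content = response.xpath("//audio/source/@src").extract_first()
if media_content is not None:
    item["media"][str(media_index)] = {   # string key keeps the exporter's to_bytes() happy
        "Media URL": media_content,
        "Media Type": "Audio",
    }
    media_index += 1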

How to assign a value to a sliced output signal?

I'm a beginner with MyHDL.
I'm trying to translate the following Verilog code to MyHDL:
module ModuleA(data_in, data_out, clk);
    input data_in;
    output reg data_out;
    input clk;
    always @(posedge clk) begin
        data_out <= data_in;
    end
endmodule

module ModuleB(data_in, data_out, clk);
    input [1:0] data_in;
    output [1:0] data_out;
    input clk;
    ModuleA instance1(data_in[0], data_out[0], clk);
    ModuleA instance2(data_in[1], data_out[1], clk);
endmodule
Currently, I have this code:
import myhdl

@myhdl.block
def ModuleA(data_in, data_out, clk):
    @myhdl.always(clk.posedge)
    def logic():
        data_out.next = data_in
    return myhdl.instances()

@myhdl.block
def ModuleB(data_in, data_out, clk):
    instance1 = ModuleA(data_in(0), data_out(0), clk)
    instance2 = ModuleA(data_in(1), data_out(1), clk)
    return myhdl.instances()

# Create signals
data_in = myhdl.Signal(myhdl.intbv()[2:])
data_out = myhdl.Signal(myhdl.intbv()[2:])
clk = myhdl.Signal(bool())

# Instantiate the DUT
dut = ModuleB(data_in, data_out, clk)

# Convert the DUT to Verilog
dut.convert()
But it doesn't work, because signal slicing produces a read-only shadow signal (cf. MEP-105).
So what is the proper way in MyHDL to get a writable slice of a signal?
Edit:
This is the error I get
$ python demo.py
Traceback (most recent call last):
File "demo.py", line 29, in <module>
dut.convert()
File "/home/killruana/.local/share/virtualenvs/myhdl_sandbox-dYpBu4o5/lib/python3.6/site-packages/myhdl-0.10-py3.6.egg/myhdl/_block.py", line 342, in convert
File "/home/killruana/.local/share/virtualenvs/myhdl_sandbox-dYpBu4o5/lib/python3.6/site-packages/myhdl-0.10-py3.6.egg/myhdl/conversion/_toVerilog.py", line 177, in __call__
File "/home/killruana/.local/share/virtualenvs/myhdl_sandbox-dYpBu4o5/lib/python3.6/site-packages/myhdl-0.10-py3.6.egg/myhdl/conversion/_analyze.py", line 170, in _analyzeGens
File "/usr/lib/python3.6/ast.py", line 253, in visit
return visitor(node)
File "/home/killruana/.local/share/virtualenvs/myhdl_sandbox-dYpBu4o5/lib/python3.6/site-packages/myhdl-0.10-py3.6.egg/myhdl/conversion/_analyze.py", line 1072, in visit_Module
File "/home/killruana/.local/share/virtualenvs/myhdl_sandbox-dYpBu4o5/lib/python3.6/site-packages/myhdl-0.10-py3.6.egg/myhdl/conversion/_misc.py", line 148, in raiseError
myhdl.ConversionError: in file demo.py, line 4:
Signal has multiple drivers: data_out
You can use an intermediate list of Signal(bool()) signals as placeholders:
@myhdl.block
def ModuleB(data_in, data_out, clk):
    tsig = [myhdl.Signal(bool(0)) for _ in range(len(data_in))]
    instances = []
    for i in range(len(data_in)):
        instances.append(ModuleA(data_in(i), tsig[i], clk))

    @myhdl.always_comb
    def assign():
        for i in range(len(data_out)):
            data_out.next[i] = tsig[i]

    return myhdl.instances()
A quick (probably non-fulfilling) comment: the intbv is treated as a single entity that can't have multiple drivers. Two references that might help shed some light:
http://jandecaluwe.com/hdldesign/counting.html
http://docs.myhdl.org/en/stable/manual/structure.html#converting-between-lists-of-signals-and-bit-vectors
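For what it's worth, here is a small, untested simulation sketch (assuming the MyHDL 0.10 block API and the ModuleB with intermediate bool signals shown above; the testbench names are mine) that checks data_out follows data_in:
import myhdl

@myhdl.block
def tb():
    data_in = myhdl.Signal(myhdl.intbv(0)[2:])
    data_out = myhdl.Signal(myhdl.intbv(0)[2:])
    clk = myhdl.Signal(bool(0))
    dut = ModuleB(data_in, data_out, clk)  # the fixed ModuleB from this answer

    @myhdl.always(myhdl.delay(5))
    def clkgen():
        clk.next = not clk

    @myhdl.instance
    def stimulus():
        for value in (0b01, 0b10, 0b11):
            data_in.next = value
            yield clk.posedge
            yield clk.posedge
            print(data_in, data_out)  # by now data_out should equal data_in
        raise myhdl.StopSimulation

    return myhdl.instances()

tb().run_sim()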

Unable to access mongo entry by "id"

I have a Mongo document with some fields (_id, id, name, status, etc.). I wrote the document as a class (like a model):
class mod(Document):
    id = fields.IntField()
    name = fields.StringField()
    status = fields.StringField()
    description_summary = fields.StringField()
    _id = fields.ObjectIdField()
And with this model, I tried to access them:
>>> from mongoengine import *
>>> from api.models import *
>>> connect('doc')
MongoClient(host=['localhost:27017'], document_class=dict, tz_aware=False, connect=True, read_preference=Primary())
I tried to fetch all the entries in the "mod" document: It Worked! I can get all the fields of all the entries (id, name, etc...)
>>> mod_ = mod.objects.all()
>>> mod_[0].name
'Name of entry'
>>> mod_[0].id
102
I tried to filter and return all the entries with the field status = "Incomplete": it works, just like before. I tried to filter on other fields: it works too.
>>> mod_ = mod.objects(status="Incomplete")
>>> mod_[0].name
'Name of entry'
>>> mod_[0].id
102
But when I try to filter on the id field, I don't get a result:
>>> mod_ = mod.objects(id=102)
>>> mod_[0].name
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/.../lib/python3.4/site-packages/mongoengine/queryset/base.py", line 193, in __getitem__
return queryset._document._from_son(queryset._cursor[key],
File "/.../lib/python3.4/site-packages/pymongo/cursor.py", line 570, in __getitem__
raise IndexError("no such item for Cursor instance")
IndexError: no such item for Cursor instance
>>> mod_[0].id
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/.../lib/python3.4/site-packages/mongoengine/queryset/base.py", line 193, in __getitem__
return queryset._document._from_son(queryset._cursor[key],
File "/.../lib/python3.4/site-packages/pymongo/cursor.py", line 570, in __getitem__
raise IndexError("no such item for Cursor instance")
IndexError: no such item for Cursor instance
So I tried with mod.objects.get(id=102)
>>> mod_ = mod.objects.get(id=102)
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/.../lib/python3.4/site-packages/mongoengine/queryset/base.py", line 271, in get
raise queryset._document.DoesNotExist(msg)
api.models.DoesNotExist: mod matching query does not exist.
"mod matching query does not exist", so it doesn't recognize the id field; yet when I write mod_[0].id I do get a result, so what can be wrong?
EDIT: I believe that when writing mod.objects(id=102), the id field is interpreted as _id. How can I specify that I want to query by id and not _id? My document is already written, so I cannot change the field names.
So the problem does not come from the difference between _id and id, as @HourGlass said. I had assumed the values of the id field were stored as integers, so I declared fields.IntField() for the id field and called mod.objects(id=102) (without quotes).
But for some reason I have to declare it as fields.StringField() and call mod.objects(id='102').
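A minimal sketch of the query side under that resolution (it assumes the mod class from the question, with id switched to fields.StringField(), importable from api.models):
from mongoengine import connect
from api.models import mod  # the document class shown above, with id = fields.StringField()

connect('doc')

entry = mod.objects(id='102').first()  # pass the value as a string, per the resolution above
if entry is not None:
    print(entry.name, entry.status)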

Serializing twisted.protocols.amp.AmpList for testing

I have a command as follows:
class AddChatMessages(Command):
    arguments = [
        ('messages', AmpList([('message', Unicode()), ('type', Integer())]))]
And I have a responder for it in a controller:
def add_chat_messages(self, messages):
    for i, m in enumerate(messages):
        messages[i] = (m['message'], m['type'])
    self.main.add_chat_messages(messages)
    return {}

commands.AddChatMessages.responder(add_chat_messages)
I am writing a unit test for it. This is my code:
class AddChatMessagesTest(ProtocolTestMixin, unittest.TestCase):
    command = commands.AddChatMessages
    data = {'messages': [{'message': 'hi', 'type': 'None'}]}

    def assert_callback(self, unused):
        pass
Where ProtocolTestMixin is as follows:
class ProtocolTestMixin(object):
    def setUp(self):
        self.protocol = client.CommandProtocol()

    def assert_callback(self, unused):
        raise NotImplementedError("Has to be implemented!")

    def test_responder(self):
        responder = self.protocol.lookupFunction(
            self.command.commandName)
        d = responder(self.data)
        d.addCallback(self.assert_callback)
        return d
It works if AmpList is not involved, but when it is, I get the following error:
======================================================================
ERROR: test_responder
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/<username>/Projects/space/env/lib/python2.7/site-packages/twisted/internet/defer.py", line 139, in maybeDeferred
result = f(*args, **kw)
File "/Users/<username>/Projects/space/env/lib/python2.7/site-packages/twisted/internet/utils.py", line 203, in runWithWarningsSuppressed
reraise(exc_info[1], exc_info[2])
File "/Users/<username>/Projects/space/env/lib/python2.7/site-packages/twisted/internet/utils.py", line 199, in runWithWarningsSuppressed
result = f(*a, **kw)
File "/Users/<username>/Projects/space/tests/client_test.py", line 32, in test_responder
d = responder(self.data)
File "/Users/<username>/Projects/space/env/lib/python2.7/site-packages/twisted/protocols/amp.py", line 1016, in doit
kw = command.parseArguments(box, self)
File "/Users/<username>/Projects/space/env/lib/python2.7/site-packages/twisted/protocols/amp.py", line 1717, in parseArguments
return _stringsToObjects(box, cls.arguments, protocol)
File "/Users/<username>/Projects/space/env/lib/python2.7/site-packages/twisted/protocols/amp.py", line 2510, in _stringsToObjects
argparser.fromBox(argname, myStrings, objects, proto)
File "/Users/<username>/Projects/space/env/lib/python2.7/site-packages/twisted/protocols/amp.py", line 1209, in fromBox
objects[nk] = self.fromStringProto(st, proto)
File "/Users/<username>/Projects/space/env/lib/python2.7/site-packages/twisted/protocols/amp.py", line 1465, in fromStringProto
boxes = parseString(inString)
File "/Users/<username>/Projects/space/env/lib/python2.7/site-packages/twisted/protocols/amp.py", line 2485, in parseString
return cls.parse(StringIO(data))
TypeError: must be string or buffer, not list
Which makes sense, but how do I serialize a list in AddChatMessagesTest.data?
The responder expects to be called with a serialized box. It will then deserialize it, dispatch the objects to application code, take the object the application code returns, serialize it, and then return that serialized form.
For a few AMP types, most notably String, the serialized form is the same as the deserialized form, so it's easy to overlook this.
I think that you'll want to pass your data through Command.makeArguments in order to produce an object suitable to pass to a responder.
For example:
>>> from twisted.protocols.amp import Command, Integer
>>> class Foo(Command):
...     arguments = [("bar", Integer())]
...
>>> Foo.makeArguments({"bar": 17}, None)
AmpBox({'bar': '17'})
>>>
If you do this with a Command that uses AmpList, I think you'll find that makeArguments returns an encoded string for the value of that argument, and that the responder is happy to accept and parse that kind of string.
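So a sketch of an adapted test_responder (my adaptation, not tested; note that with Integer() the 'type' values in data need to be actual ints, e.g. 0 rather than the string 'None') would serialize self.data first and hand the resulting box to the responder:
def test_responder(self):
    responder = self.protocol.lookupFunction(self.command.commandName)
    # Serialize the test data into an AmpBox first, then hand that to the responder.
    box = self.command.makeArguments(self.data, self.protocol)
    d = responder(box)
    d.addCallback(self.assert_callback)
    return d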
