Python import (could be Python or Lego Mindstorms libs)

I'll start this by saying I'm not the most familiar with python, and this issue could be a more general python thing that I don't get (i.e. a glaringly obvious duplicate).
In the python bindings for the ev3, a motor is referenced like this:
# hardware.py #
import ev3dev.ev3 as ev3
motor = ev3.LargeMotor('outA')
motor.connected
Where 'outA' is the output port on the robot that the motor is connected to.
If I then do:
$:python hardware.py
I get no issues and I can use the motor normally. However, if I write a new file
# do_something.py #
from hardware import *
I get an error:
Exception TypeError: "'NoneType' object is not callable" in <bound method LargeMotor.__del__ of <ev3dev.core.LargeMotor object at 0xb67d2fd0>> ignored
Does anyone know why this is happening? Is it a python thing or an ev3 thing?
My reason for wanting to import in this way is so that I can do all of the hardware setup in one file (a sizeable chunk of code) and then import this to the files that actually make the robot perform tasks.
I know that NoneType is the type of None in Python; I just don't know why running the file directly works but importing it doesn't.
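For what it's worth, here is a general-Python sketch (nothing ev3-specific; the class and function names are made up) of how a __del__ method can hit exactly this kind of ignored TypeError when it runs during interpreter shutdown:

class Gadget(object):
    def __del__(self):
        cleanup()  # looks up the module global 'cleanup' at call time

def cleanup():
    pass

g = Gadget()
# If g is still alive when the interpreter exits, module globals may already
# have been set to None by the time __del__ runs, so the call above raises
# "'NoneType' object is not callable", which Python reports as "ignored"
# rather than as a normal traceback.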
1st Edit:
Ok, so I ran it as:
$:python hardware.py do_something.py
$:python do_something.py
And this gave no errors.
However, upon request, I've added more code; hardware.py is the same:
# do_something.py #
from hardware import *

counter = 0
while True:
    if counter >= 1000:
        break
    motor.run_direct(duty_cycle_sp=20)
    counter += 1
I.e. run the motor at a duty cycle of 20 until we've been through a thousand loop iterations. This works, and runs until the loop breaks and the script ends. The same NoneType error is then given, and the motor continues to run even though the script has finished. The behaviour is the same with a KeyboardInterrupt. There is no traceback given, just that error message.
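As an aside, one way to guarantee the motor is stopped however the loop exits would be a try/finally block; a minimal sketch, assuming ev3dev's Motor.stop() method:

# do_something.py #
from hardware import *

counter = 0
try:
    while counter < 1000:
        motor.run_direct(duty_cycle_sp=20)
        counter += 1
finally:
    motor.stop()  # runs on normal exit and on KeyboardInterrupt alike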

First of all, Python is a language whose code is written as words, while the Lego Mindstorms "language" is built from simple graphical blocks. Logically, then, the two languages cannot be mixed together and have nothing in common. Having considerable experience with both, I have never found anything they share.

Can you use mock_open to simulate serial connections?

Morning folks,
I'm trying to get a few unit tests going in Python to confirm my code is working, but I'm having a real hard time getting a Mock anything to fit into my test cases. I'm new to Python unit testing, so this has been a trying week thus far.
In summary, I'm attempting serial control of a commercial monitor I got my hands on, and I thought I'd use it as a chance to finally use Python for something rather than falling back on one of the other languages I know. I've got pyserial going, but before I start shoving a ton of commands out to the TV, I'd like to learn the unittest part so I can write tests for my expected outputs and inputs.
I've tried using a library called dummyserial, but it didn't seem to be recognising the output I was sending. I thought I'd give mock_open a try as I've seen it works like a standard IO as well, but it just isn't picking up on the calls either. Samples of the code involved:
def testSendCmd(self):
    powerCheck = '{0}{1:>4}\r'.format(SharpCodes['POWER'], SharpCodes['CHECK']).encode('utf-8')
    read_text = 'Stuff\r'
    mo = mock_open(read_data=read_text)
    mo.in_waiting = len(read_text)
    with patch('__main__.open', mo):
        with open('./serial', 'a+b') as com:
            tv = SharpTV(com=com, TVID=999, tvInput='DVI')
            tv.sendCmd(SharpCodes['POWER'], SharpCodes['CHECK'])
            com.write(b'some junk')
    print(mo.mock_calls)
    mo().write.assert_called_with('{0}{1:>4}\r'.format(SharpCodes['POWER'], SharpCodes['CHECK']).encode('utf-8'))
And in the SharpTV class, the function in question:
def sendCmd(self, type, msg):
    sent = self.com.write('{0}{1:>4}\r'.format(type, msg).encode('utf-8'))
    print('{0}{1:>4}\r'.format(type, msg).encode('utf-8'))
Obviously, I'm attempting to control a Sharp TV. I know the commands are correct; that isn't the issue. The issue is just the testing. According to the documentation on the mock_open page, calling mo.mock_calls should return some record that a call was made, but I'm getting just an empty list [], even in spite of the blatantly wrong com.write(b'some junk'), and mo().write.assert_called_with(...) fails with an assertion error because it isn't detecting the write from within sendCmd. What's really bothering me is that I can run the examples from the mock_open section in interactive mode and they work as expected.
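For comparison, this is roughly the docs-style mock_open pattern that does record calls when I try it (a minimal sketch, assuming Python 3's unittest.mock and that the snippet runs in the __main__ module):

from unittest.mock import mock_open, patch

m = mock_open(read_data='Stuff\r')
with patch('__main__.open', m):
    with open('./serial', 'a+b') as com:
        com.write(b'some junk')
print(m.mock_calls)  # shows call('./serial', 'a+b'), __enter__, write and __exit__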
I'm missing something, I just don't know what. I'd like help getting either dummyserial working, or mock_open.
To answer one part of my question, I figured out the functionality of dummyserial. The following works now:
def testSendCmd(self):
    powerCheck = '{0}{1:>4}\r'.format(SharpCodes['POWER'], SharpCodes['CHECK'])
    com = dummyserial.Serial(
        port='COM1',
        baudrate=9600,
        ds_responses={powerCheck: powerCheck}
    )
    tv = SharpTV(com=com, TVID=999, tvInput='DVI')
    tv.sendCmd(SharpCodes['POWER'], SharpCodes['CHECK'])
    self.assertEqual(tv.recv(), powerCheck)
Previously I was encoding the dictionary values as utf-8. The dummyserial library decodes whatever you write(...) to it, so the lookup is a straight string-to-string comparison. It also encodes whatever you read() back out as latin1.
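In other words (a minimal sketch of the behaviour described above; the command strings are made up):

com = dummyserial.Serial(port='COM1', baudrate=9600,
                         ds_responses={'PING\r': 'PONG\r'})  # plain strings, not bytes
com.write('PING\r'.encode('utf-8'))  # decoded internally before the lookup
assert com.read(5) == 'PONG\r'.encode('latin1')  # response comes back latin1-encoded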

How to correctly pass (mutable) objects using ROS-SMACH FSM

I've been struggling with this question for a while now. My question is very specific so please don't post a link to the ROS-tutorial page with the "!Note:" paragraph showing how to pass mutable objects (unless you show me something I have missed). I would like to know if anyone has been able to correctly pass mutable objects back and forth in SMACH states, without encountering any errors.
I wrote a deliberately simple, useless program to illustrate what I am attempting to do. I could post the code of this example (unfortunately, as anyone who has used SMACH before will expect, it is a long piece of code), so for now I will just do my best to explain it and include a [link] to an image of my example. I created two Python scripts. Each script contains a single class and an object of that class (with some basic methods). I create a publisher and subscriber in each script; one script sends messages (talks) while the other listens to (hears) the messages. At the end, the talker flags both FSMs to shut down. If anyone would like the full code example, let me know...
Code snippet below showing smach states and transitions:
# begin sm
with sm:
    smach.StateMachine.add('LOAD', loadFSM(),
                           transitions={'LOADED':'SENDMSG'})
    smach.StateMachine.add('SENDMSG', startMSG(),
                           transitions={'SENT':'SENDMSG',
                                        'ENDING':'END'})
    smach.StateMachine.add('END', stopFSM(),
                           transitions={'DONE':'complete',
                                        'ERRED':'incomplete'})
Code snippet below showing a smach state (loadFSM):
class loadFSM(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['LOADED'],
                             output_keys=['talker_obj'],
                             input_keys=['talker_obj'])
        # Initialise our talker object
        self.talker = Talk()

    def execute(self, userdata):
        rospy.loginfo("talker state: Loading fsm")
        self.talker.init_publish()
        self.talker.init_subscribe()
        userdata.talker_obj = self.talker
        return 'LOADED'
The errors I receive (using Ubuntu 14.04, ROS Indigo and Python 2.7; I'm not certain, but I believe the same errors occur in Kinetic as well) only occur during state transitions, and of course the introspection server does not work (it does not show state transitions). The errors are:
1. "Exception in thread sm_introViewer:status_publisher:"
2. "Could not execute transition callback: Traceback (most recent call last):
   File "/opt/ros/indigo/lib/python2.7/dist-packages/smach/container.py", line 175, in call_transition_cbs
     cb(self.userdata, self.get_active_states(), *args)"
I should also add that my simple finite-state-machine example actually works and completes successfully; even my project's two larger FSMs complete. However, when an FSM has many states, as in my project, my simulations sometimes fail. I would like to know from anyone who has used SMACH extensively whether they think these errors are the cause, or whether they know for a fact that I am not passing the object correctly between states.
Thanks in Advance,
Tiz
I had similar problems in the past. If I remember correctly, userdata can only handle basic types (int, string, list, dict, ...) but not arbitrary objects.
I solved it by passing the objects to the constructor of state classes, instead of using the userdata, i.e. something like:
class myState(smach.State):
    def __init__(self, obj):
        smach.State.__init__(self, outcomes=['foo'], ...)
        self.obj = obj
    ...
and then initialize it as follows:
obj = SomeClass()
with sm:
    smach.StateMachine.add('MY_STATE', myState(obj),
                           transitions={'foo':'bar'})
It is not as nice, since it circumvents the userdata concept, but it works.
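For completeness, a fuller sketch of the same pattern with execute filled in (the shared class and its method are made up):

import smach

class Talk(object):
    # stands in for whatever object you want to share between states
    def ping(self):
        return 'pong'

class myState(smach.State):
    def __init__(self, obj):
        smach.State.__init__(self, outcomes=['foo'])
        self.obj = obj       # held on the state itself, no userdata involved

    def execute(self, userdata):
        self.obj.ping()      # use the shared object directly
        return 'foo'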

Why is multiprocessing copying my data if I don't touch it?

I was tracking down an out of memory bug, and was horrified to find that python's multiprocessing appears to copy large arrays, even if I have no intention of using them.
Why is Python (on Linux) doing this? I thought copy-on-write would protect me from any extra copying. I imagine that whenever I reference the object, some kind of trap is invoked and only then is the copy made.
Is the correct way to solve this problem for an arbitrary data type, like a 30-gigabyte custom dictionary, to use a Monitor? Is there some way to build Python so that it doesn't have this nonsense?
import numpy as np
import psutil
from multiprocessing import Process

mem = psutil.virtual_memory()
large_amount = int(0.75*mem.available)

def florp():
    print("florp")

def bigdata():
    return np.ones(large_amount, dtype=np.int8)

if __name__ == '__main__':
    foo = bigdata()  # Allocated 0.75 of the ram, no problems
    p = Process(target=florp)
    p.start()  # Out of memory because bigdata is copied?
    print("Wow")
    p.join()
Running:
[ebuild R ] dev-lang/python-3.4.1:3.4::gentoo USE="gdbm ipv6 ncurses readline ssl threads xml -build -examples -hardened -sqlite -tk -wininst" 0 KiB
I'd expect this behavior -- when you pass code to Python to compile, anything that's not guarded behind a function or object is immediately executed for evaluation.
In your case, bigdata=np.ones(large_amount,dtype=np.int8) has to be evaluated -- unless your actual code has different behavior, florp() not being called has nothing to do with it.
To see an immediate example:
>>> f = 0/0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ZeroDivisionError: integer division or modulo by zero
>>> def f():
... return 0/0
...
>>>
To apply this to your code, put bigdata = np.ones(large_amount, dtype=np.int8) behind a function and call it as you need it; otherwise, Python is trying to be helpful by having that variable available to you at runtime.
If bigdata doesn't change, you could write a function that gets or sets it on an object that you keep around for the duration of the process.
edit: Coffee just started working. When you make a new process, Python will need to copy all objects into that new process for access. You can avoid this by using threads, or by a mechanism that lets you share memory between processes, such as shared memory maps or shared ctypes.
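For example, a minimal sketch of the shared-ctypes option: multiprocessing.Array places the buffer in shared memory, so the child process reads it without a per-process copy.

from multiprocessing import Process, Array

def worker(buf):
    print(buf[0])  # reads the shared buffer directly

if __name__ == '__main__':
    shared = Array('b', 10)  # 10 signed bytes in shared, lock-protected memory
    shared[0] = 42
    p = Process(target=worker, args=(shared,))
    p.start()
    p.join()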
The problem was that, by default, Linux checks for the worst-case memory usage, which can indeed exceed memory capacity. This is true even if the Python language doesn't expose the variables. You need to turn off "overcommit" system-wide to achieve the expected COW behavior.
sysctl vm.overcommit_memory=2
See https://www.kernel.org/doc/Documentation/vm/overcommit-accounting

Python: Can I replace an object in a dictionary?

I'm using the pi3d lib in Python on a Raspberry Pi. What I'm trying to do is dynamically create screen objects and replace them.
First I want to fill an array of dictionaries, whose values can be integers, strings or pi3d sprite objects.
I have the following test code:
import pi3d

DISPLAY = pi3d.Display.create(x=0, y=0)
shader = pi3d.Shader("uv_flat")
CAMERA = pi3d.Camera(is_3d=False)

screen_items = []
for item_number in range(5):
    screen_item = {}
    screen_item['type'] = 'second_rotation_stepped'
    screen_item['text_type'] = 'static'
    screen_item['visible'] = 'always'
    screen_item['image_sprite'] = pi3d.ImageSprite(pi3d.Texture("textures/PATRN.PNG", blend=True), shader, w=100.0, h=100.0, z=5.0, x=0, y=120*item_number)
    screen_items.append(screen_item)

screen_items[0]['image_sprite'] = pi3d.ImageSprite(pi3d.Texture("textures/altimeter.png", blend=True), shader, w=50.0, h=50.0, z=5.0, x=0, y=-200)
screen_items[0]['visible'] = 'never'
screen_items[2]['image_sprite'].rotateToZ(45)

mykeys = pi3d.Keyboard()
while DISPLAY.loop_running():
    for drawitem in screen_items:
        drawitem['image_sprite'].draw()
    if mykeys.read() == 27:
        mykeys.close()
        DISPLAY.destroy()
        break
Everything works as I expected, but it gives me an error/warning:
“couldn't set to delete”
This error won't come up if I comment out the line:
screen_items[0]['image_sprite']=pi3d.ImageSprite(pi3d.Texture("textures/altimeter.png", blend=True), shader, w=50.0, h=50.0, z=5.0 ,x=0,y=-200)
I don’t get this error with the code:
screen_items[0]['visible']='never'
So I guess I can replace a string in a dictionary, but cannot replace an object?
Like I said, everything works fine; the object does get replaced (and drawn on the screen), but somehow the "old" object isn't deleted. Is it some kind of pointer problem?
The "error" you're seeing is actually a debug message by pi3d when trying to delete the initial Texture object created in the for loop in your script.
When you re-assign the new ImageSprite object to screen_items[0]['image_sprite'], the old Texture object gets garbage collected, invoking it's __del__ method, causing this debug message.
Reference: https://github.com/tipam/pi3d/blob/master/pi3d/Texture.py#L87
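A plain-Python illustration of that mechanism (nothing pi3d-specific; the class is made up):

class Resource(object):
    def __del__(self):
        print("cleaning up")  # pi3d's Texture does its GPU-side cleanup here

d = {'image_sprite': Resource()}
d['image_sprite'] = Resource()  # the old object loses its last reference;
                                # CPython collects it straight away and runs
                                # __del__, which in pi3d emits the debug message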
This problem originates because the python garbage collector doesn't tidy up the gpu memory allocated when the Texture instance is created.
In Python it is possible to use a __del__() method as a destructor, but it doesn't necessarily get executed immediately. When the GPU buffer memory leak was first observed, it was confounded by a bug in the VideoCore due to the implementation of EGL, so pi3d ended up with a "belt and braces" system whereby references to all texture, vertex and element buffers were kept in the Display object; if, at shutdown, any of these had not been released by the Python objects' __del__() methods, there was a final clear-out.
However, under certain circumstances, very occasionally, when the program was shutting down, the references pointed to objects that had been released before they could remove the references to themselves. As the bug had no impact on running code and would be messy to sort out, it had been put on a back-burner.
PS see the raspberry pi forum thread t=79383 for my solutions to #satoer's problems generally

Python mmap ctypes - read only

I think I have the opposite problem as described here. I have one process writing data to a log, and I want a second process to read it, but I don't want the 2nd process to be able to modify the contents. This is potentially a large file, and I need random access, so I'm using python's mmap module.
If I create the mmap as read/write (for the second process), I have no problem creating a ctypes object as a "view" of the mmap object using from_buffer. From a cursory look at the C code, it looks like this is a cast, not a copy, which is what I want. However, this breaks if I make the mmap ACCESS_READ, throwing an exception that from_buffer requires write privileges.
I think I want to use ctypes from_address() method instead, which doesn't appear to need write access. I'm probably missing something simple, but I'm not sure how to get the address of the location within an mmap. I know I can use ACCESS_COPY (so write operations show up in memory, but aren't persisted to disk), but I'd rather keep things read only.
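A minimal reproduction of the restriction (the file name is made up, and the exact exception text varies between Python versions):

import ctypes
import mmap

with open('log.bin', 'rb') as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    ctypes.c_int.from_buffer(mm)  # fails: the underlying buffer is not writable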
Any suggestions?
I ran into a similar issue (being unable to set up a read-only mmap), but I was using only the Python mmap module: Python mmap 'Permission denied' on Linux
I'm not sure it is of any help to you since you don't want the mmap to be private?
Ok, from looking at the mmap C code, I don't believe it supports this use case. Also, I found that the performance pretty much sucks for my use case. I'd be curious what kind of performance others see, but I found that it took about 40 seconds to walk through a 500 MB binary file in Python. This is creating an mmap, turning the location into a ctypes object with from_buffer(), and using the ctypes object to work out the size of the object so I could step to the next one. I tried doing the same thing directly in C++ with MSVC. Obviously there I could cast directly into an object of the correct type, and it was fast: less than a second (this is with a Core 2 Quad and an SSD).
I did find that I could get a pointer with the following:
firstHeader = CEL_HEADER.from_buffer(map, 0)  # CEL_HEADER is a ctypes Structure
pHeader = pointer(firstHeader)
# Now I can use pHeader[ind] to get a CEL_HEADER object
# at an arbitrary point in the file
This doesn't get around the original problem -- the mmap isn't read-only, since I still need to use from_buffer for the first call. In this configuration it still took around 40 seconds to process the whole file, so it looks like the conversion from a pointer into ctypes structs is killing the performance. That's just a guess, but I don't see a lot of value in tracking it down further.
I'm not sure my plan will help anyone else, but I'm going to try to create a c module specific to my needs based on the mmap code. I think I can use the fast c-code handling to index the binary file, then expose only small parts of the file at a time through calls into ctypes/python objects. Wish me luck.
Also, as a side note, Python 2.7.2 was released today (6/12/11), and one of the changes is an update to the mmap code so that you can use a python long to set the file offset. This lets you use mmap for files over 4GB on 32-bit systems. See Issue #4681 here
Ran into this same problem: we needed the from_buffer interface and wanted read-only access. From the Python docs (https://docs.python.org/3/library/mmap.html): "Assignment to an ACCESS_COPY memory map affects memory but does not update the underlying file."
If it's acceptable for you to use an anonymous file backing, you can use ACCESS_COPY.
An example: open two cmd.exe windows or terminals, and in one of them run:
import ctypes
import mmap

mm_file_write = mmap.mmap(-1, 4096, access=mmap.ACCESS_WRITE, tagname="shmem")
mm_file_read = mmap.mmap(-1, 4096, access=mmap.ACCESS_COPY, tagname="shmem")
write = ctypes.c_int.from_buffer(mm_file_write)
read = ctypes.c_int.from_buffer(mm_file_read)
try:
    while True:
        value = int(input('enter an integer using mm_file_write: '))
        write.value = value
        print('updated value')
        value = int(input('enter an integer using mm_file_read: '))
        # read.value assignment doesn't update the anonymous backed file
        read.value = value
        print('updated value')
except KeyboardInterrupt:
    print('got exit event')
In the other terminal do:
import mmap
import struct
import time

mm_file = mmap.mmap(-1, 4096, access=mmap.ACCESS_WRITE, tagname="shmem")
i = None
try:
    while True:
        new_i = struct.unpack('i', mm_file[:4])
        if i != new_i:
            print('i: {} => {}'.format(i, new_i))
            i = new_i
        time.sleep(0.1)
except KeyboardInterrupt:
    print('Stopped . . .')
And you will see that the second process does not receive updates when the first process writes using ACCESS_COPY.
