Pywin32 - Initialize pvarResult by calling VariantInit() - Python

I'm using Pywin32 to communicate with Bloomberg through its COM library. This works rather well! However, I have stumbled upon a problem which I consider pretty complex. If I set the QueueEvents property of the COM object to True, the program fails. The documentation has a section regarding this:
If your QueueEvents property is set to True and you are performing low-level instantiation of the data control using C++, then in your data event handler (invoke) you will be required to initialize pvarResult by calling the VariantInit() function. This will prevent your application from receiving duplicate ticks.
session = win32com.client.DispatchWithEvents(comobj, EventHandler)
session.QueueEvents = True  # <-- this triggers some strange "bugs" in execution
                            #     if pvarResult is not initialized
I think I understand the theoretical aspect here: you need to initialize a data structure before the COM object can write to it. However, how do you do this from Pywin32? I have no clue, and would appreciate any ideas or pointers(!) on how it can be done.
None of the tips below helped. My program doesn't throw an exception, it just returns the same message from the COM object again and again and again...
From the documentation:
If your QueueEvents property is set to True and you are performing low-level instantiation of the data control using C++, then in your data event handler (invoke) you will be required to initialize pvarResult by calling the VariantInit() function. This will prevent your application from receiving duplicate ticks. If this variable is not set then the data control assumes that you have not received data yet, and it will then attempt to resend it. In major containers, such as MFC and Visual Basic, this flag will automatically be initialized by the container. Keep in mind that this only pertains to applications which set the QueueEvents property to True.

I'm not sure if this will help with your issue, but to have working COM events in Python you shouldn't forget about:

Setting the COM apartment to free-threaded at the beginning of the script file. This can be done with the following lines:
import sys
sys.coinit_flags = 0

Generating the wrapper for the COM library before the first call to DispatchWithEvents:
from win32com.client.makepy import GenerateFromTypeLibSpec
GenerateFromTypeLibSpec("ComLibName 1.0 Type Library")
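For illustration, here is a minimal sketch that puts both tips together with DispatchWithEvents. The type library name, ProgID, and event method are placeholders, not the actual Bloomberg names:

import sys
sys.coinit_flags = 0  # must run before pythoncom/win32com gets imported

import win32com.client
from win32com.client.makepy import GenerateFromTypeLibSpec

# Generate the early-bound wrapper once, before the first DispatchWithEvents.
GenerateFromTypeLibSpec("ComLibName 1.0 Type Library")  # placeholder library name

class EventHandler:
    def OnData(self, *args):  # placeholder event name; depends on the type library
        print("event received:", args)

session = win32com.client.DispatchWithEvents("ComObj.ProgId", EventHandler)  # placeholder ProgID
session.QueueEvents = True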
If you could post how the program fails (whether the COM object fails or Python throws some exceptions), maybe I could advise more.

Related

Making python COM client testable

I have read about unit testing in Python, but all the examples I found are based on trivial cases of mocking objects. I have no idea how to implement tests for this project in a way that it can be tested without accessing COM. The project is mainly used to control another application via its COM API, plus evaluating data produced by that app and making reports, so let's say it is a higher-abstraction-level API made as a Python library.

The COM interface is non-trivial, with a main object that exposes some managers, the managers exposing containers of objects that hold references to other objects: a web of connected objects responsible for different parts of the controlled application. The architecture of the Python library roughly follows the COM structure. During library import the main COM object is dispatched and stored in a central module; manager-level modules, on import, get references to the COM manager objects from the central module, and the manager-level methods then use these COM manager objects. Examples for better understanding:
# __init__.py
from managera import ManagerA
from managerb import ManagerB
from centralobject import CentralObject

central_object = CentralObject()
manager_a = ManagerA()  # here all initialisation needed to make everything work,
manager_b = ManagerB()  # so the full COM object is needed just to import the package
...
# centralobject.py
from win32com.client import Dispatch

class CentralObject():
    def __init__(self):
        self.com = Dispatch("application")
...
# managera.py
from project import central_object

class ManagerA():
    def __init__(self):
        self.com = central_object.com.manager_a

    def manager_a_method1(self, x):
        foo = self.com.somecontainer[x].somemethod()
        foo.configure(3)
        return foo
...
In its current state it is hard to test. It is not even possible to import it without a connection to the COM app. The Dispatch call could be moved into some init function that is executed after the import, but I don't see how that would make testing possible. One solution would be to make a test double with a structure similar to the original application, but to my mind, inexperienced in testing, it seems a bit of an overkill to do this in each method test. Maybe it is not, and that's how it should be done; I am asking for somebody's more experienced advice.
A second solution that came to my mind is to dispatch the COM object every time any COM call is made, and then mock it, but that seems like a lot of dispatches and I don't see how it makes the code better.
I also tried with the managers not being defined as classes but as modules; however, that seemed even harder to test, as the COM managers were then referenced during module import, not object instantiation.
What should be done to make testing possible and nice without accessing COM? I'm interested in everything: a solution, advice, or just a topic that I should read more about.
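As an aside, here is a minimal sketch of the test-double idea mentioned above, using unittest.mock to stand in for the COM objects. It assumes the module-level Dispatch has been moved out of import time (as suggested above) so that managera can be imported without the COM app; all module and attribute names are hypothetical and just mirror the example code:

import unittest
from unittest.mock import MagicMock

from managera import ManagerA  # hypothetical module from the example above

class ManagerAMethod1Test(unittest.TestCase):
    def test_configures_and_returns_foo(self):
        manager = ManagerA.__new__(ManagerA)  # skip __init__ so no Dispatch happens
        manager.com = MagicMock()             # fake COM manager object
        fake_foo = manager.com.somecontainer.__getitem__.return_value.somemethod.return_value

        result = manager.manager_a_method1(3)

        fake_foo.configure.assert_called_once_with(3)
        self.assertIs(result, fake_foo)

if __name__ == "__main__":
    unittest.main()

The point is that ManagerA only needs something that looks like the COM manager on self.com, so a MagicMock (or a small hand-written fake) is enough to exercise manager_a_method1 without any COM connection.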

Fabric - Python 3 - What is context and what does it have to contain and why do I need to pass it?

This is my Fabric code:
from fabric import Connection, task

server = Connection(host="username@server.com:22", connect_kwargs={"password": "mypassword"})

@task
def dostuff(somethingmustbehere):
    server.run("uname -a")
This code works just fine. When I execute fab dostuff it does what I want it to do.
When I remove somethingmustbehere, however, I get this error message:
raise TypeError("Tasks must have an initial Context argument!")
TypeError: Tasks must have an initial Context argument!
I never defined somethingmustbehere anywhere in my code. I just put it in and the error is gone and everything works. But why? What is this variable? Why do I need it? Why is it so important? And if it is so important why can it just be empty? I am really lost here. Yes it works, but I cannot run code that I don't understand. It drives me insane. :-)
Please be aware that I'm talking about the Python 3(!) version of Fabric!
The Fabric version is 2.4.0
To be able to run an @task you need a context argument. Fabric uses invoke's task(), which expects to see a context object. Normally we name the variable c or ctx (which I always use to make it clearer). I prefer not to use c because I normally use it for a connection.
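In other words, the example from the question would conventionally be written like this (a minimal sketch; the host and password are placeholders):

from fabric import Connection, task

@task
def dostuff(ctx):
    # ctx is the invoke Context that Fabric passes in as the first argument
    server = Connection(host="username@server.com:22",
                        connect_kwargs={"password": "mypassword"})
    server.run("uname -a")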
Check this line on GitHub from the invoke package repo: you will see that it raises an exception when the context argument is not present, but it doesn't explain why!
To learn more about the Context object, what it is and why we need it, you can read the following on the pyinvoke site:
Aside: what exactly is this ‘context’ arg anyway? A common problem task runners face is transmission of “global” data - values loaded from configuration files or other configuration vectors, given via CLI flags, generated in ‘setup’ tasks, etc.
Some libraries (such as Fabric 1.x) implement this via module-level attributes, which makes testing difficult and error prone, limits concurrency, and increases implementation complexity.
Invoke encapsulates state in explicit Context objects, handed to tasks when they execute. The context is the primary API endpoint, offering methods which honor the current state (such as Context.run) as well as access to that state itself.
Check both of these links:
Context
what exactly is this ‘context’ arg anyway?
To be honest, I wasted a lot of time figuring out what context is and why my code wouldn't run without it. But at some point I just gave up and started using it to make my code run without errors.

Passing more QiChat variables to a Python function

I'm experiencing some issues with the NAOqi SDK and Choregraphe. I need to synchronously pass two or more variables from the QiChat module to a Python function:
u:(Is someone in _~lab lab working on _~themes) $lab=$1 $themes=$2
or better:
u:(Is someone in _* lab working on _*) $lab=$1 $themes=$2
I have not found anything online, can someone help me?
Thanks in advance
QiChat raises ALMemory events when a variable is set, but processing ALMemory events is asynchronous, therefore you cannot rely on them in your case.
However, QiChat provides a way to make synchronous calls to any API exposed in NAOqi, using the ^call keyword. You can take advantage of this to call a method that you expose in a Python service you write yourself. In QiChat you would have something like this:
u:(_$myConcept): alright ^call(MyService.myMethod($1))
I suppose you write your program using Choregraphe, so please note that you already have access to a valid Qi Session in every Python box, by calling self.session().
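For illustration, here is a minimal sketch of such a service registered from a Choregraphe Python box. MyService and myMethod are the hypothetical names used in the QiChat rule above, and the registration details may vary with your NAOqi version:

class MyService:
    def myMethod(self, value):
        # Called synchronously by QiChat through ^call(MyService.myMethod($1))
        print("Got variable from QiChat:", value)
        return "ok"

class MyClass(GeneratedClass):  # the usual Choregraphe Python box skeleton
    def onLoad(self):
        # Register the service on the session the box already has access to
        self.serviceId = self.session().registerService("MyService", MyService())

    def onUnload(self):
        self.session().unregisterService(self.serviceId)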

explain python windows service example (win32serviceutil.serviceframework)

Most python windows service examples based on the win32serviceutil.ServiceFramework use the win32event for synchronization.
For example:
http://tools.cherrypy.org/wiki/WindowsService (the example for cherrypy 3.0)
(sorry, I don't have the reputation to post more links, but many similar examples can be found by googling)
Can somebody clearly explain why the win32events are necessary (self.stop_event in the above example)?
I guess it's necessary to use win32event because different threads call SvcStop and SvcDoRun? But I'm getting confused; there are so many other things happening: the split between python.exe and pythonservice.exe, system vs. local threads (?), the Python GIL...
From the top of PythonService.cpp:
PURPOSE: An executable that hosts Python services.
This source file is used to compile 2 discrete targets:
* servicemanager.pyd - A Python extension that contains
all the functionality.
* PythonService.exe - This simply loads servicemanager.pyd, and
calls a public function. Note that PythonService.exe may one
day die - it is now possible for python.exe to directly host
services.
What exactly do you mean by system threads vs local threads? You mean threads created directly from C outside the GIL?
PythonService.cpp just relates the names to callable Python objects and a bunch of properties, like the accepted methods.
For example, the accepted controls from the ServiceFramework:
def GetAcceptedControls(self):
    # Setup the service controls we accept based on our attributes. Note
    # that if you need to handle controls via SvcOther[Ex](), you must
    # override this.
    accepted = 0
    if hasattr(self, "SvcStop"):
        accepted = accepted | win32service.SERVICE_ACCEPT_STOP
    if hasattr(self, "SvcPause") and hasattr(self, "SvcContinue"):
        accepted = accepted | win32service.SERVICE_ACCEPT_PAUSE_CONTINUE
    if hasattr(self, "SvcShutdown"):
        accepted = accepted | win32service.SERVICE_ACCEPT_SHUTDOWN
    return accepted
I suppose the events are recommended because that way you can interrupt the interpreter from outside the GIL, even if Python is in a blocking call on the main thread (e.g. time.sleep(10)): you can interrupt at those points outside the GIL and avoid having an unresponsive service.
Most of the win32 service calls are wrapped between the Python C macros:
Py_BEGIN_ALLOW_THREADS / Py_END_ALLOW_THREADS
It may be that, being examples, they don't have anything otherwise interesting to do in SvcDoRun. SvcStop will be called from another thread, so using an event is just an easy way to do the cross-thread communication to have SvcDoRun exit at the appropriate time.
If there were some service-like functionality that blocks in SvcDoRun, they wouldn't necessarily need the events. Consider the second example in the CherryPy page that you linked to. It starts the web server in blocking mode, so there's no need to wait on an event.
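To make the event pattern concrete, here is a minimal sketch of a ServiceFramework subclass that waits on a stop event in SvcDoRun; the service name and (empty) behaviour are placeholders, not taken from the CherryPy example:

import win32event
import win32service
import win32serviceutil

class MinimalService(win32serviceutil.ServiceFramework):
    _svc_name_ = "MinimalService"             # placeholder service name
    _svc_display_name_ = "Minimal example service"

    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        # Event used for cross-thread signalling; SvcStop runs on another thread.
        self.stop_event = win32event.CreateEvent(None, 0, 0, None)

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        win32event.SetEvent(self.stop_event)

    def SvcDoRun(self):
        # Block here until SvcStop signals the event from the SCM's thread.
        win32event.WaitForSingleObject(self.stop_event, win32event.INFINITE)

if __name__ == "__main__":
    win32serviceutil.HandleCommandLine(MinimalService)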

set timeout to http response read method in python

I'm building a download manager in Python for fun, and sometimes the connection to the server is still open but the server doesn't send me any data, so the read method (of HTTPResponse) blocks forever. This happens, for example, when I download from a server located outside my country that limits the bandwidth to other countries.
How can I set a timeout for the read method (2 minutes for example)?
Thanks, Nir.
If you're stuck on some Python version < 2.6, one (imperfect but usable) approach is to do
import socket
socket.setdefaulttimeout(10.0) # or whatever
before you start using httplib. The docs are here, and clearly state that setdefaulttimeout is available since Python 2.3 -- every socket made from the time you do this call, to the time you call the same function again, will use that timeout of 10 seconds. You can use getdefaulttimeout before setting a new timeout, if you want to save the previous timeout (including none) so that you can restore it later (with another setdefaulttimeout).
These functions and idioms are quite useful whenever you need to use some older higher-level library which uses Python sockets but doesn't give you a good way to set timeouts (of course it's better to use updated higher-level libraries, e.g. the httplib version that comes with 2.6 or the third-party httplib2 in this case, but that's not always feasible, and playing with the default timeout setting can be a good workaround).
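For illustration, the save-and-restore idiom described above might look like this (a sketch assuming the Python 2-era httplib; the same socket functions exist in Python 3):

import socket
import httplib  # http.client in Python 3

previous = socket.getdefaulttimeout()   # remember the current default (may be None)
socket.setdefaulttimeout(10.0)          # every new socket now times out after 10 s
try:
    conn = httplib.HTTPConnection("example.com")
    conn.request("GET", "/")
    data = conn.getresponse().read()
finally:
    socket.setdefaulttimeout(previous)  # restore whatever was there before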
You have to set it during HTTPConnection initialization.
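For example, a minimal sketch (Python 2.6+ httplib accepts a timeout argument; in Python 3 the module is http.client):

import httplib  # http.client in Python 3

# The timeout (in seconds) applies to the underlying socket operations,
# including blocking read() calls on the response.
conn = httplib.HTTPConnection("example.com", timeout=120)
conn.request("GET", "/somefile")
response = conn.getresponse()
data = response.read()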
Note: in case you are using an older version of Python, you can install httplib2; it is considered by many to be a superior alternative to httplib, and it does support timeouts.
I've never used it, though, and I'm just reporting what documentation and blogs are saying.
Setting the default timeout might abort a download early if it's large, as opposed to only aborting if it stops receiving data for the timeout value. HTTPlib2 is probably the way to go.
5 years later but hopefully this will help someone else...
I was racking my brain trying to figure this out. My problem was a server returning corrupt content and thus giving back less data than it claimed it had.
I came up with a nasty solution that seems to be working properly. Here it goes:
# NOTE: directly disabling blocking is not necessary, but it represents
# an important piece of the problem so I am leaving it here.
# http_response.fp._sock.socket.setblocking(0)
http_response.fp._sock.settimeout(read_timeout)
http_response.read(chunk_size)
NOTE: This solution also works for the Python requests library, or ANY library built on normal Python sockets (which should be all of them?). You just have to go a few levels deeper:
# resp.raw._fp.fp._sock.socket.setblocking(0)  # optional, see the note above
resp.raw._fp.fp._sock.settimeout(read_timeout)
resp.raw.read(chunk_size)
As of this writing, I have not tried the following but in theory it should work:
resp = requests.get(some_url, stream=True)
# resp.raw._fp.fp._sock.socket.setblocking(0)  # optional, see the note above
resp.raw._fp.fp._sock.settimeout(read_timeout)
for chunk in resp.iter_content(chunk_size):
    pass  # do stuff with the chunk here
Explanation
I stumbled upon this approach while reading this SO question about setting a timeout on socket.recv.
At the end of the day, any HTTP request has a socket. For httplib that socket is located at resp.raw._fp.fp._sock.socket. resp.raw._fp.fp._sock is a socket._fileobj (which I honestly didn't look far into), and I imagine its settimeout method internally sets the timeout on the socket attribute.
