I am using LLDB's Python scripting support to add custom variable formatting for a complex C++ class type in Xcode.
This works well for simple situations, but I have hit a wall when I need to call a method that takes a pass-by-reference parameter, which it populates with results. That would require me to create a variable to pass in, but I can't find a way to do so.
I have tried using the target's CreateValueFromData method, as below, but this doesn't seem to work.
import lldb

def MyClass(valobj, internal_dict):
    class2_type = valobj.target.FindFirstType('class2')
    process = valobj.process
    class2Data = [0]
    data = lldb.SBData.CreateDataFromUInt32Array(process.GetByteOrder(), process.GetAddressByteSize(), class2Data)
    valobj.target.CreateValueFromData("testClass2", data, class2_type)
    valobj.EvaluateExpression("getType(testClass2)")
    class2Val = valobj.frame.FindVariable("testClass2")
    if not class2Val.error.success:
        return class2Val.error.description
    return class2Val.GetValueAsUnsigned()
Is there some way to achieve what I'm trying to do?
SBValue names are just labels for the SBValue; they aren't guaranteed to exist as symbols in the target. For instance, if the value you are formatting is an ivar of some other object, its name will be the ivar name... And lldb does not inject new SBValues' names into the symbol table - that would end up causing lots of name collisions. So they don't exist in the namespace the expression evaluator queries when looking up names.
If the variable you are formatting is a pointer, you can get the pointer value and cons up an expression that casts the pointer value to the appropriate type for your getType function, and pass that to your function. If the value is not a pointer, you can still use SBValue.AddressOf to get the memory location of the value. If the value exists only in lldb (AddressOf will return an invalid address) then you would have to push it to the target with SBProcess.AllocateMemory/WriteMemory, but that should only happen if you have another data formatter that makes these objects out of whole cloth for its own purposes.
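For instance, here is a sketch of the cast-the-address approach (hedged: it assumes getType takes a class2 by reference, as in the question, and that this runs inside the formatter function):

# Inside the formatter function from the question:
addr = valobj.AddressOf().GetValueAsUnsigned(lldb.LLDB_INVALID_ADDRESS)
if addr == lldb.LLDB_INVALID_ADDRESS:
    return "<value has no address>"
# Build an expression that casts the raw address back to the right type
expr = "getType(*(class2 *)%#x)" % addr
result = valobj.frame.EvaluateExpression(expr)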
It's better not to call functions in formatters if you can help it. But if you really must call a function in your data formatter, you should do so judiciously.
Function calls can cause performance problems: if you have an array of 100 elements of this type, your formatter will require 100 function calls in the target to render the array, for every step operation. That's 200 context switches between your process and the debugger, plus a bunch of memory reads and writes.
Also, since you can't ensure that the data in your value is correct (it might represent a variable that has not been initialized yet, or already deallocated) you either need to have your function handle bad data, or at least be prepared for the expression to crash. lldb can clean up the stack and suppress the exception from crashes, but it can't undo any side-effects the expression might have had before crashing.
For instance, if the function you called took some lock before crashing that it was expecting to release on the way out, your formatter will damage the state of the program. So you have to be careful what you call...
And by default, EvaluateExpression will allow all threads to run so that expressions don't deadlock against a lock held by another thread. You probably don't want that to happen, since it means looking at the locals of one thread will "change" the state of another thread. So you really should only call functions you are sure don't take locks. And use the version of EvaluateExpression that takes an SBExpressionOptions, on which you call SetStopOthers(True) and SetTryAllThreads(False).
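In code, that looks something like this (a sketch, reusing the expression built above):

options = lldb.SBExpressionOptions()
options.SetStopOthers(True)      # keep the other threads stopped
options.SetTryAllThreads(False)  # don't fall back to running all threads
result = valobj.frame.EvaluateExpression(expr, options)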
Problem
I have a function make_pipeline that accepts an arbitrary number of functions, which it then calls to perform sequential data transformations. The resulting call chain performs transformations on a pandas.DataFrame. Some, but not all, of the functions it may call need to operate on a sub-array of the DataFrame, and I have written multiple selector functions for that. However, at present each member function of the chain has to be explicitly given the user-selected selector/filter function. This is VERY error-prone, and accessibility is very important, as the end code is addressed to non-specialists (possibly with no Python/programming knowledge), so it must be "batteries included". This entire project is written in a functional style (that's what's always worked for me).
Sample Code
filter_func = simple_filter()

# The API looks like this
make_pipeline(
    load_data("somepath", header=[1, 0]),
    transform1(arg1, arg2),
    transform2(arg1, arg2, data_filter=filter_func),  # This function needs access to the user-defined filter function
    transform3(arg1, arg2, data_filter=filter_func),  # This function needs access to the user-defined filter function
    transform4(arg1, arg2),
)
Expected API
filter_func = simple_filter()

# The API looks like this
make_pipeline(
    load_data("somepath", header=[1, 0]),
    transform1(arg1, arg2),
    transform2(arg1, arg2),
    transform3(arg1, arg2),
    transform4(arg1, arg2),
)
Attempted
I thought that if the data_filter alias is available in the caller's namespace, it would also become available (something similar to a closure) to all the functions it calls. This seems to happen in some toy examples, but it won't work in this case (it raises an UnboundLocalError).
What's a good way to make a function defined in one place available to certain interested functions in the call chain? I'm trying to avoid global.
Notes/Clarification
I've had problems with OOP and mutable states in the past, and functional programming has worked quite well. Hence I've set a goal for myself to NOT use classes (to the extent that Python enables me to anyways). So no classes.
I should probably have clarified this initially: in the pipeline, the output of every function is a DataFrame, and the input of every function (except load_data, obviously) is a DataFrame. The functions are decorated with a wrapper that calls functools.partial, because we want the user to supply the args to each function but not execute it. The actual execution is done by a for loop in make_pipeline.
Each function accepts df: pandas.DataFrame plus all arguments that are specific to that function. The statement seen above, transform1(arg1, arg2, ...), actually calls the decorated transform1, which returns functools.partial(transform1, arg1, arg2, ...), which now has a signature like transform1(df: pandas.DataFrame).
load_data is just a convenience function to load the initial DataFrame so that all the other functions can begin operating on it. It just felt more intuitive to users to have it be part of the chain rather than a separate call.
The problem is this: I need a way for a filter function to be initialized (called) in only one place, such that every function in the call chain that needs access to the filter function gets it without it being explicitly passed as an argument. If you're wondering why, it's because I feel that end users will find the explicit passing unintuitive and arbitrary: some functions need it, some don't. I'm also pretty certain that they will make all kinds of errors, like passing different filters, forgetting it sometimes, etc.
(Update) I've also tried inspect.signature() in make_pipeline to check whether each function accepts a data_filter argument and pass it on. However, this raises an incorrect-function-signature error for some unclear reason (likely because of the decorators/partial calls). If signature could return the non-partial function's signature, this would solve the issue, but I couldn't find much info in the docs.
Turns out it was pretty easy. The solution is inspect.signature.
import inspect
from typing import Any, Callable, Optional

def make_pipeline(*args, data_filter: Optional[Callable[..., Any]] = None):
    d = args[0]
    for arg in args[1:]:
        # inspect.signature understands functools.partial objects, so this
        # sees the remaining parameters of each pre-bound transform
        if "data_filter" in inspect.signature(arg).parameters:
            d = arg(d, data_filter=data_filter)
        else:
            d = arg(d)
    return d
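A hypothetical usage sketch (transform2 here stands in for one of the decorated transforms):

import functools
import inspect

def transform2(df, arg1, arg2, data_filter=None):
    return data_filter(df) if data_filter else df

# After the decorator's functools.partial call, data_filter is still part
# of the remaining signature, so make_pipeline knows to pass it along:
step = functools.partial(transform2, arg1=1, arg2=2)
print("data_filter" in inspect.signature(step).parameters)  # True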
Leaving this here mostly for reference, because I think this is a mini design pattern. I've also seen function.__closure__ mentioned on an unrelated subject; that may also work, but would likely be more complicated.
Suppose I have a module PyFoo.py that has a function bar. I want bar to print all of the local variables associated with the namespace that called it.
For example:
#! /usr/bin/env python
import PyFoo as pf
var1 = 'hi'
print locals()
pf.bar()
The last two lines would give the same output. So far I've tried defining bar as such:
def bar(x=locals):
    print x()

def bar(x=locals()):
    print x
But neither works. The first ends up being what's local to bar's namespace (which I guess is because that's when it's evaluated), and the second is as if I passed in globals (which I assume is because it's evaluated during import).
Is there a way I can have the default value of argument x of bar be all variables in the namespace which called bar?
EDIT 2018-07-29:
As has been pointed out, what was given was an XY Problem; as such, I'll give the specifics.
The module I'm putting together will allow the user to create various objects that represent different aspects of a numerical problem (e.g., various topology definitions, boundary conditions, constitutive models, etc.) and define how any given object interacts with any other object(s). The idea is for the user to import the module, define the various model entities that they need, and then call a function which will take all objects passed to it, make the adjustments needed to ensure compatibility between them, and then write out a file that represents the entire numerical problem as a text file.
The module has a function generate that accepts each of the various types of aspects of the numerical problem. The default value for all arguments is an empty list. If a non-empty list is passed, then generate will use those instances for generating the completed numerical problem. If an argument is an empty list, then I'd like it to take in all instances in the namespace that called generate (which I will then parse out the appropriate instances for the argument).
EDIT 2018-07-29:
Sorry for any lack of understanding on my part (I'm not that strong a programmer), but I think I might understand what you're saying with respect to an instance being declared or registered.
From my limited understanding, could this be done by creating some sort of registry dataset (like a list or dict) in the module, created when the module is imported, with all module classes taking this registry object by default? During class initialization, self can be appended to said dataset, and then the generate function can take the registry as the default value for one of its arguments.
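To illustrate, a minimal sketch of that registry idea (all names here are hypothetical):

_registry = []

class BoundaryCondition:
    def __init__(self, value):
        self.value = value
        _registry.append(self)  # self-register at construction time

def generate(boundary_conditions=None):
    # Default to every registered instance of the right type; an explicit
    # list passed by the user still overrides the registry.
    if boundary_conditions is None:
        boundary_conditions = [e for e in _registry
                               if isinstance(e, BoundaryCondition)]
    return boundary_conditions  # stand-in for writing out the problem file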
There's no way you can do what you want directly.
locals just returns the local variables in whatever namespace it's called in. As you've seen, you have access to the namespace the function is defined in at the time of definition, and you have access to the namespace of the function itself from within the function, but you don't have access to any other namespaces.
You can do what you want indirectly… but it's almost certainly a bad idea. At least this smells like an XY problem, and whatever it is you're actually trying to do, there's probably a better way to do it.
But occasionally it is necessary, so in case you have one of those cases:
The main good reason to want to know the locals of your caller is for some kind of debugging or other introspection function. And the way to do introspection is almost always through the inspect library.
In this case, what you want to inspect is the interpreter call stack. The calling function will be the first frame on the call stack behind your function's own frame.
You can get the raw stack frame:
inspect.currentframe().f_back
… or you can get a FrameInfo representing it:
inspect.stack()[1]
As explained at the top of the inspect docs, a frame object's local namespace is available as:
frame.f_locals
Note that this has all the same caveats that apply to getting your own locals with locals: what you get isn't the live namespace, but a mapping that, even if it is mutable, can't be used to modify the namespace (or, worse in 2.x, one that may or may not modify the namespace, unpredictably), and that has all cell and free variables flattened into their values rather than their cell references.
Also, see the big warning in the docs about not keeping frame objects alive unnecessarily (or calling their clear method if you need to keep a snapshot but not all of the references, but I think that only exists in 3.x).
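Putting those pieces together, a sketch of bar (using print as a function so it works in 3.x):

import inspect

def bar():
    frame = inspect.currentframe().f_back  # the caller's frame
    try:
        print(frame.f_locals)              # snapshot of the caller's namespace
    finally:
        del frame                          # don't keep the frame alive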
I've been running multiple threads (one per "symbol", below) but have encountered a weird issue where there appears to be a potential memory leak depending on which gets processed first. I believe the issue is due to my using the same field name / array name in each thread.
Below is an example of the code I am running to assign values to an array:
for i in range(level+1):
    accounting_price[i] = yahoo_prices[j]['accg'][i][0]
It works fine, but when I query multiple "symbols" and run a thread for each symbol, I sometimes get symbol A's "accounting_price[i]" returned in symbol C's results and vice versa. I'm not sure if this could be a memory leak from one thread to the other, but the only quick solution I have is to make "accounting_price[i]" unique to each symbol. Would it be correct if I implemented the below?
symbol = "AAPL"
d = {}
for i in range(level+1):
d['accounting_price_{}'.format(symbol)][i] = yahoo_prices[j]['accg'][i][0]
When I run it, an error is thrown.
I would be extremely grateful for a solution for dynamically creating arrays unique to each thread. Alternatively, a solution to the "memory leak".
Thanks!
If you think there’s a race here causing conflicting writes to the dict, using a lock is both the best way to rule that out, and probably the best solution if you’re right.
I should point out that in general, thanks to the global interpreter lock, simple assignments to dict and list members are already thread-safe. But it’s not always easy to prove that your case is one of the “in general” ones.
Anyway, if you have a mutable global object that’s being shared, you need to have a global lock that’s shared along with it, and you need to acquire that lock around every access (read and write) to the object.
If at all possible, you should do this using a with statement to ensure that it’s impossible to abandon the lock (which could cause other threads to block forever waiting for the same lock).
It’s also important to make sure you don’t do any expensive work, like downloading and parsing a web page, with the lock acquired (which could cause all of your threads to end up serialized instead of usefully running in parallel).
So, at the global level, where you create accounting_price, create a corresponding lock:

import threading

accounting_price = [...etc...]
accounting_price_lock = threading.Lock()
Then, inside the thread, wherever you use it:
with accounting_price_lock:
    setup_info = accounting_price[...]

yahoo_prices = do_expensive_stuff(setup_info)

with accounting_price_lock:
    for i in range(level+1):
        accounting_price[i] = yahoo_prices[j]['accg'][i][0]
If you end up often having lots of reads and few writes, this can cause excessive and unnecessary contention, but you can fix that by just replacing the generic lock with a read-write lock. They’re a bit slower in general, but a lot faster if a bunch of threads want to read in parallel.
The error is presumably a KeyError, right? It's because you're indexing two levels into your dictionary when only one exists. Try this:
symbol = "AAPL"
d = {}
for i in range(level+1):
name = 'accounting_price_{}'.format(symbol)
d[name] = {}
d[name][i] = yahoo_prices[j]['accg'][i][0]
The title is pretty self-explanatory. I'm doing something like:
gen = obj  # some running generator instance
frame = gen.gi_frame
prevframe = frame.f_back
But I always get None for prevframe. Why is this the case? Also, is there some workaround for this?
CONTEXT: I'm trying to write a simple call-stack method to determine what called a particular function. I'm using Twisted's manhole and telnetting into a running process, where I then execute these commands, but I can't seem to access the previous frames.
To the best of my knowledge, this is both intentional and cannot be worked around. The code in cpython responsible for it is here, which indicates that the reference to the previous frame is broken as soon as the generator yields (or excepts out) in order to prevent issues with reference counting. It also appears that the intended behavior is that the generator's previous frame is swapped out every time it's entered, so while it's not running, the notion of "the parent frame" doesn't make much sense.
The correct way to do this, at least in the post-mortem context, is to use traceback objects, which have their frame lists linked in the reverse order, tb_next instead of f_back.
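For example, a sketch of walking a traceback in calling order from inside an except block:

import sys

def print_call_chain():
    # Frames are linked oldest-to-newest via tb_next, not f_back
    tb = sys.exc_info()[2]
    while tb is not None:
        print(tb.tb_frame.f_code.co_name)
        tb = tb.tb_next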
My question is rather a design question.
In Python, if code in your "constructor" fails, the object ends up not being defined. Thus:
someInstance = MyClass("test123")  # let's say the constructor throws an exception
someInstance.doSomething()         # will fail: name 'someInstance' is not defined
I do have a situation, though, where a lot of code duplication would occur if I removed the error-prone code from my constructor. Basically, my constructor fills a few attributes (via IO, where a lot can go wrong) that can be accessed with various getters. If I removed the code from the constructor, I'd have 10 getters with copy-paste code, something like:
is the attribute really set?
do some IO actions to fill the attribute
return the contents of the variable in question
I dislike that, because all my getters would contain a lot of code. Instead, I perform my IO operations in a central location, the constructor, and fill in all my attributes there.
What's the proper way of doing this?
There is a difference between a constructor in C++ and an __init__ method in Python. In C++, the task of a constructor is to construct an object. If it fails, no destructor is called. Therefore if any resources were acquired before an exception was thrown, the cleanup should be done before exiting the constructor. Thus, some prefer two-phase construction, with most of the construction done outside the constructor (ugh).
Python has a much cleaner two-phase construction (construct, then initialize). However, many people confuse an __init__ method (initializer) with a constructor. The actual constructor in Python is called __new__. Unlike in C++, it does not take an instance, but returns one. The task of __init__ is to initialize the created instance. If an exception is raised in __init__, the destructor __del__ (if any) will be called as expected, because the object was already created (even though it was not properly initialized) by the time __init__ was called.
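A small illustration of the two phases, and of __del__ running even when __init__ raises:

class Demo(object):
    def __new__(cls, *args):
        print("__new__: constructing")
        return super(Demo, cls).__new__(cls)

    def __init__(self, path):
        print("__init__: initializing")
        raise IOError("cannot read %s" % path)

    def __del__(self):
        print("__del__: runs even though __init__ failed")

try:
    Demo("missing.txt")
except IOError:
    pass  # the instance was constructed, then became unreachable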
Answering your question:
In Python, if code in your "constructor" fails, the object ends up not being defined.
That's not precisely true. If __init__ raises an exception, the object is created but not initialized properly (e.g., some attributes are not assigned). But at the time the exception is raised, you probably don't have any references to this object, so the fact that the attributes are not assigned doesn't matter. Only the destructor (if any) needs to check whether the attributes actually exist.
Whats a proper way of doing this?
In Python, initialize objects in __init__ and don't worry about exceptions.
In C++, use RAII.
Update [about resource management]:
In garbage-collected languages, if you are dealing with resources, especially limited ones such as database connections, it's better not to release them in the destructor. This is because objects are destroyed in a non-deterministic way, and if you happen to have a loop of references (which is not always easy to tell), and at least one of the objects in the loop has a destructor defined, they will never be destroyed. Garbage-collected languages have other means of dealing with resources; in Python, it's the with statement.
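For example, a file handle tied to a with block is released deterministically (data.txt here is a hypothetical file):

with open("data.txt") as f:   # closed on exit from the block,
    contents = f.read()       # even if an exception is raised inside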
In C++ at least, there is nothing wrong with putting failure-prone code in the constructor - you simply throw an exception if an error occurs. If the code is needed to properly construct the object, there really is no alternative (although you can abstract the code into subfunctions, or better, into the constructors of subobjects). Worst practice is to half-construct the object and then expect the user to call other functions to complete the construction somehow.
It is not bad practice per se.
But I think you may be after something different here. In your example, the doSomething() method will not be called when the MyClass constructor fails. Try the following code:
class MyClass:
    def __init__(self, s):
        print s
        raise Exception("Exception")

    def doSomething(self):
        print "doSomething"

try:
    someInstance = MyClass("test123")
    someInstance.doSomething()
except:
    print "except"
It should print:
test123
except
For your software design you could ask the following questions:
What should the scope of the someInstance variable be? Who are its users? What are their requirements?
Where and how should the error be handled for the case that one of your 10 values is not available?
Should all 10 values be cached at construction time or cached one-by-one when they are needed the first time?
Can the I/O code be refactored into a helper method, so that doing something similar 10 times does not result in code repetition?
...
I'm not a Python developer, but in general, it's best to avoid complex/error-prone operations in your constructor. One way around this would be to put a "LoadFromFile" or "Init" method in your class to populate the object from an external source. This load/init method must then be called separately after constructing the object.
One common pattern is two-phase construction, also suggested by Andy White.
First phase: Regular constructor.
Second phase: Operations that can fail.
Integration of the two: add a factory method to do both phases, and make the constructor protected/private to prevent instantiation outside the factory method.
Oh, and I'm not a Python developer either.
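For what it's worth, a sketch of that factory pattern in Python might look like this (Python has no private constructors, so keeping __init__ trivial is by convention; all names here are hypothetical):

class Config(object):
    def __init__(self, values):
        self._values = values  # phase one: trivial, cannot fail

    @classmethod
    def from_file(cls, path):
        # Phase two: the failure-prone I/O lives here, not in __init__
        with open(path) as f:
            values = dict(line.strip().split("=", 1) for line in f)
        return cls(values)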
If the code to initialise the various values is really extensive enough that copying it is undesirable (which it sounds like it is in your case), I would personally opt for putting the required initialisation into a private method, adding a flag to indicate whether the initialisation has taken place, and making all accessors call the initialisation method if it has not yet taken place.
In threaded scenarios you may have to add extra protection in case initialisation is only allowed to occur once for valid semantics (which may or may not be the case since you are dealing with a file).
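A sketch of that accessor pattern in Python, with a lock for the threaded case (names hypothetical):

import threading

class LazyAttrs(object):
    def __init__(self):
        self._loaded = False
        self._lock = threading.Lock()

    def _ensure_loaded(self):
        with self._lock:           # initialisation happens at most once
            if not self._loaded:
                self._attr = self._read_from_file()
                self._loaded = True

    def _read_from_file(self):
        return "contents"          # stand-in for the failure-prone I/O

    @property
    def attr(self):
        self._ensure_loaded()
        return self._attr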
Again, I've got little experience with Python; however, in C# it's better to try to avoid having a constructor that throws an exception. An example of why springs to mind: if you want to place your constructor at a point where it's not possible to surround it with a try {} catch {} block, for example the initialisation of a field in a class:
class MyClass
{
    MySecondClass mySecondClass = new MySecondClass();

    // Rest of class
}
If the constructor of MySecondClass throws an exception that you wish to handle inside MyClass, then you need to refactor the above - it's certainly not the end of the world, but a nice-to-have.
In this case my approach would probably be to move the failure-prone initialisation logic into an initialisation method, and have the getters call that initialisation method before returning any values.
As an optimisation you should have the getter (or the initialisation method) set some sort of "IsInitialised" boolean to true, to indicate that the (potentially costly) initialisation does not need to be done again.
In pseudo-code (C# because I'll just mess up the syntax of Python):
class MyClass
{
    private bool IsInitialised = false;
    private string myString;

    public void Init()
    {
        // Put initialisation code here
        this.IsInitialised = true;
    }

    public string MyString
    {
        get
        {
            if (!this.IsInitialised)
            {
                this.Init();
            }
            return myString;
        }
    }
}
This is of course not thread-safe, but I don't think multithreading is used that commonly in Python, so this is probably a non-issue for you.
Seems Neil had a good point: my friend just pointed me to this:
http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization
which is basically what Neil said...