Let's examine this.
class SomeObject:
    testList = []

    def setList(self, data):
        self.testList = data

    def getList(self):
        return self.testList

class UtilClass:
    def populateList(self, foreign, str):
        data = []
        data.append(str)
        foreign.setList(data)

def main():
    data = SomeObject()
    data2 = SomeObject()
    util = UtilClass()
    util.populateList(data, "Test 1")
    util.populateList(data2, "Test 2")
    util.populateList(data, "Test 3")
    print(data.getList())
    print(data2.getList())

if __name__ == "__main__":
    main()
What does the data object inside of main now contain: a copy of the list constructed inside of populateList(), or a reference to it?
Should I do something like list() or copy.copy() or copy.deepcopy()? The data object should go out of scope inside of populateList, right?
What happens when I pass another list to the util object? Does the list in data get altered or overwritten?
If the data inside of populateList goes out of scope, why is the list still valid after the second call to populateList?
What does the data object inside of main now contain: a copy of the list constructed inside of populateList(), or a reference to it?
A reference to it. But since you do not pass the reference on to any other object, data is the only object holding a reference to that list.
Should I do something like list() or copy.copy() or copy.deepcopy()? The data object should go out of scope inside of populateList, right?
No, only if you want to construct a copy (for instance, if you want to alter the list independently later). The list will not be garbage collected just because the local name goes out of scope: as long as at least one live reference to it exists, it stays alive.
What happens when I pass another list to the util object? Does the list in data get altered or overwritten?
No. populateList builds a brand-new list on each call and hands it to foreign; util itself keeps no reference to either list, so the old list lives on independently. If no other object references the old list, it will eventually be garbage collected.
If the data inside of populateList go out of scope, why is it valid after the second call to populateList?
"Out of scope" just means that the name data inside populateList no longer refers to the list once the function returns. The list itself still lives in memory; any other objects that hold a reference to it can still read and modify it.
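The behaviour in these answers is easy to check directly. Here is a minimal, self-contained sketch (a trimmed-down variant of the question's classes, with populate standing in for populateList) showing that the list built inside the function survives the call, and that a later call binds a new list without touching the old one:

```python
class SomeObject:
    def __init__(self):
        self.testList = []

    def setList(self, data):
        self.testList = data

    def getList(self):
        return self.testList

def populate(obj, text):
    data = [text]       # the local name 'data' goes out of scope on return...
    obj.setList(data)   # ...but obj now holds a reference to the same list

holder = SomeObject()
populate(holder, "Test 1")
print(holder.getList())     # ['Test 1'] -- the list outlived the call

old = holder.getList()
populate(holder, "Test 3")  # binds a brand-new list to holder
print(old)                  # ['Test 1'] -- the old list is untouched
print(holder.getList())     # ['Test 3']
```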
Related
I have the following code:
def test(a):
    a.append(3)

test_list = [1, 2]
test(test_list)
In the above code, when I pass test_list as an argument to test function, does python send the whole list object/data (which is a big byte size) or just the reference to the list (which would be much smaller since its just a pointer)?
From what I understood by reading about Python's pass-by-object-reference semantics, it only sends the reference, but I do not know how to verify that this is indeed the case.
It's passing an alias to the function; for the life of the function, or until the function intentionally rebinds a to point to something else (with a = something), the function's a is an alias to the same list bound to the caller's test_list.
The straightforward ways to confirm this are:
Print test_list after the call; if the list were actually copied, the append in the function would only affect the copy and the caller wouldn't see it. In fact, test_list will have a new element appended, so it was the same list inside the function.
Print id(test_list) outside the function and id(a) inside the function. ids are required to be unique at any given point in time, so the only way two different lists could have the same id is if one were destroyed before the other was created. Since test_list continues to exist before and after the function call, if a has the same id, it is by definition the same list.
Python passes function arguments by object reference (sometimes called "call by sharing"): the name a inside the function test refers to the same object as test_list does in the calling code. After the function returns, test_list will contain [1, 2, 3].
I'm new to programming so sorry for the basic question. I am trying to write a search algorithm for a class, and I thought creating a class for each search node would be helpful.
class Node(object):
    def __init__(self, path_to_node, search_depth, current_state):
        self.path_to_node = path_to_node
        self.search_depth = search_depth
        self.current_state = current_state
    ...
With some functions too. I am now trying to define a function outside of the class to create children nodes of a node and add them to a queue. node.current_state is a list
def bfs_expand(node, queuey, test_states):
    # Node Queue List -> Queue List
    # If legal move and not already in test states, create and put children nodes
    # into the queue and their state into test_states. Return queue and test states.

    # Copy original path, depth, and state to separate variables
    original_path = node.path_to_node
    original_depth = node.search_depth
    original_state = node.current_state

    # Check if up is legal; if so, add new node to queue and state to test_states
    if node.is_legal_move('Up'):
        up_state = original_state
        a = up_state.index(0)
        b = a - 3
        up_state[a], up_state[b] = up_state[b], up_state[a]
        if up_state not in test_states:
            test_states.append(up_state)
            up_node = Node(original_path + ['Up'], original_depth + 1, up_state)
            queuey.put(up_node)
    print(test_states)
    print(original_state)
I then try to proceed through down, left and right with similar if statements, but they get messed up because original_state has changed. When I print the original state after that up statement, it returns the up_state created in the if statement. I realize (well, I think) that this is because original_state, and therefore up_state, still refer to node.current_state rather than storing the list in a separate variable. How should I get the state out of a node so I can manipulate it independently? Should I even be using a class for something like this, or maybe a dictionary? I don't need code written for me, but a conceptual nudge would be greatly appreciated!
You should use copy.deepcopy if you want to avoid modifying the original
original_path = copy.deepcopy(node.path_to_node)
original_depth = copy.deepcopy(node.search_depth)
original_state = copy.deepcopy(node.current_state)
Or essentially whichever object you want to use as a "working copy" should be a deep copy of the original if you don't want to modify the original version of it.
Expanding a bit on @CoryKramer's answer: In Python, objects have reference semantics, which means that saying
a = b
where a and b are objects, makes both a and b references to the same object, meaning that changing a property through a will be visible through b as well. In order to actually get a new object with the same contents as the old one, you can use copy.deepcopy as already stated. Be aware, though, that deepcopy recursively copies everything the object references, which can be expensive for large structures; it does handle reference cycles safely, by keeping a memo of objects it has already copied.
If you only need a new top-level object, there is also copy.copy, which does not follow the object references contained in the object being copied.
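For a flat state list like the one in the question, the difference is easy to see in a few lines; this sketch contrasts a plain alias, copy.copy, and copy.deepcopy:

```python
import copy

state = [1, 0, 2, 3]

alias = state                 # no copy at all: two names, one list
shallow = copy.copy(state)    # new outer list (enough for a flat list of ints)
deep = copy.deepcopy(state)   # new list, with nested objects copied too

# swap two elements in place, as bfs_expand does
state[0], state[1] = state[1], state[0]

print(alias)    # [0, 1, 2, 3] -- the alias sees the swap
print(shallow)  # [1, 0, 2, 3] -- unaffected
print(deep)     # [1, 0, 2, 3] -- unaffected
```

For a flat list of ints, state[:] or list(state) would serve just as well as copy.copy; deepcopy only starts to matter once the list contains mutable elements of its own.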
To clarify, I'm reading from a file and sending each line to a function (1) where its relevant elements are put into a list. That list is then sent to another function (2) and added to a dictionary, with one element of that list being the key and the other(s) put inside another list as the value. So, basically {key: (value(, value))}.
Problem is, whenever I send the list from (1) to (2), the newly created dictionary is overwritten. I'm new to Python, but I'm pretty sure I can add multiple keys and values to one dictionary, right? So, is there a way to save the elements of the dictionary each time (2) is called? That is, if it's called once, it has tokens(a) in the dictionary. When it's called again, it still has tokens(a), and now tokens(b) is added, and so forth.
If you need code I can include it.
MCVE:
def file_stuff(file_name):
    # opens and reads the file using 'with open(...) as thing' ... then
    for line in thing:
        function1(line)

def function1(line):
    # Breaks down line using regex; variable name var for this example
    # list the relevant components of the line will go into
    element = list()
    for x in var:
        element.append(x)
    function2(element)

def function2(element):
    # Another list is made along with a dictionary
    webster = dict()
    values = list()
    for x in range(len(element)):
        # inserts into dictionary ... the problem being, however, that I need
        # this dictionary to "remember" what was already stored inside
In your current code, webster is a local variable in function2 that gets bound to a dictionary. However, you never return or do anything else with the dictionary that would allow code outside that function to see it, so when the function ends, there are no further references to the dictionary and it will be garbage collected.
If you want each call to function2 to use the same dictionary, you need to change the function so that it accesses the dictionary differently. Exactly what way is best will depend on the larger design of your program.
One option would be to make webster a global variable, which function2 can modify in place. This is very easy to do, but it has some pretty severe limitations, since a module has just the one global namespace. Working with multiple files that should have their data put into multiple different dictionaries would be very tough.
It would look something like this:
webster = {}

def function2(element):
    ...
    webster[some_key] = some_value
Another option would be to make the dictionary an argument to the function, so that the calling code is responsible for creating and holding a reference to it in between calls. This is probably a better approach than using a global variable, but it's harder for me to demonstrate since it's not really clear to me where the dictionary should live in your example (maybe in function1, or maybe it needs to be passed all the way through from file_stuff).
It might look something like:
def caller():
    the_dict = {}
    for item in some_sequence():
        function2(item, the_dict)

def function2(item, webster):
    ...
    webster[some_key] = some_value
A final option would be to have function2 still be in charge of creating the dictionary, but for it to return the dictionary to its caller, who could do something with it (such as merging its contents with the dictionaries from previous calls). I'm not even going to attempt to demonstrate this one, since the merging process would depend a lot on what exactly you're putting in your dictionary. (A related option would be to return some other non-dictionary value (or a tuple of values) which could then be inserted in a dictionary by the calling code. This might be easier than dealing with an intermediate dictionary in some situations.)
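For completeness, here is a hedged sketch of that final option, with an invented key/value layout (first element as the key, the rest as the value list) since the real layout isn't shown in the question:

```python
def function2(element):
    # invented layout: first item is the key, the rest form the value list
    return {element[0]: element[1:]}

def caller(elements):
    merged = {}
    for element in elements:
        merged.update(function2(element))   # merge each call's result
    return merged

result = caller([["a", 1, 2], ["b", 3]])
print(result)   # {'a': [1, 2], 'b': [3]}
```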
I want to match my time-series data against metadata from a given file.
In my code, the main function calls the "create_match()" function every minute. Inside "create_match()", a "list_from_file()" function reads data from the file and stores it in lists to perform the matching.
The problem is that my code is not efficient, since every minute it re-reads the file and rewrites the same lists. I want to read the file only once (to initialize the lists only once) and skip the "list_from_file()" call after that. I do not want to just move this task to the main function and pass the lists through as arguments.
Does Python have a special variable like the static variable in C?
Python does not have a static variable declaration; however, there are several fairly standard programming patterns in python to do something similar. The easiest way is to just use a global variable.
Global Variables
Define a global variable and check it before running your initialize function. If the global variable has already been set (i.e. the lists you're reading in), just return them.
CACHE = None

def function():
    global CACHE
    if CACHE is None:
        CACHE = initialize_function()
    return CACHE
You can use a class:
class Match(object):
    def __init__(self):
        self.data = list_from_file()

    def create_match(self):
        # do something with `self.data` here
        ...
Make an instance:
match = Match()
This calls list_from_file().
Now, you can call create_match() repeatedly with access to self.data
import time

for x in range(10):
    match.create_match()
    time.sleep(60)
There are lots of ways.
You can make a variable part of a class - not a member of the object, but of the class itself. It is initialized when the class is defined.
Similarly, you can put a variable at the outer level of a module. It will belong to the module, and will be initialized when the module is imported for the first time.
Finally, there's the hack of defining a mutable object as a default parameter to a function. The default is created once, when the function is defined, and belongs to the function. You will only be able to access it through the parameter name, and the caller can override it by passing an argument.
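A hedged sketch of that last hack, with a hard-coded stand-in for the question's list_from_file():

```python
def get_lists(cache=[]):
    # The default list is created once, when the function is defined,
    # and the same object is reused on every subsequent call.
    if not cache:
        cache.extend(["line1", "line2"])   # stand-in for list_from_file()
        print("initialized")
    return cache

a = get_lists()   # prints "initialized" and fills the cache
b = get_lists()   # silent: the cached list is reused
print(a is b)     # True -- both names refer to the single default list
```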
We have a tree, and each node is an object.
The tree supports three functions: add(x), getmin(), and getmax().
The tree works perfectly; for example, if I write
a = Heap()
a.add(5)
a.add(15)
a.add(20)
a.getmin()
a.getmax()
the internal list looks like [5, 15, 20]; now if I call getmin() it will print min element = 5 and the list will look like [15, 20], and so on.
The problem comes now.
The professor asked us to submit two files which are already created: main.py and minmaxqueue.py.
main.py starts with from minmaxqueue import add, getmin, getmax, and then it already has a list of function calls of the kind
add(5)
add(15)
add(20)
getmin()
getmax()
In order to make my script work, I had to do a = Heap() and then always call a.add(x). Since the TAs are going to run the script from a common file, I can't modify main.py so that it creates an object a = Heap(). It should run directly with add(5), not a.add(5).
Is there a way to fix this?
You can modify your module to create a global Heap instance, and define functions that forward everything to that global instance. Like this:
class Heap(object):
    # all of your existing code
    ...

_heap = Heap()

def add(n):
    return _heap.add(n)

def getmin():
    return _heap.getmin()

def getmax():
    return _heap.getmax()
Or, slightly more briefly:
_heap = Heap()
add = _heap.add
getmin = _heap.getmin
getmax = _heap.getmax
If you look at the standard library, there are modules that do exactly this, like random. If you want to create multiple Random instances, you can; if you don't care about doing that, you can just call random.choice and it works on the hidden global instance.
Of course for Random it makes sense; for Heap, it's a lot more questionable. But if that's what the professor demands, what can you do?
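The random module illustrates the pattern directly: module-level functions forward to a hidden global Random instance, while separate instances with the same API remain available for anyone who wants independent state:

```python
import random

# Module-level function: operates on the hidden global Random instance.
print(random.choice([1, 2, 3]))

# Your own instance, with the same API, if you need independent state.
r = random.Random()
print(r.choice([1, 2, 3]))
```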
You can use this function to do that more quickly:
def make_attrs_global(obj):
    for attr in dir(obj):
        if not attr.startswith('__'):
            globals()[attr] = getattr(obj, attr)
It makes all attributes of obj defined in global scope.
Just put this code at the end of your minmaxqueue.py file:
a = Heap()
make_attrs_global(a)
Now you should be able to call add directly, without the a. prefix. This is ugly, but well...