And how do I get references to each element in the ndarrays?
The goal is to take a list of variably shaped ndarrays containing primitives, call ravel() on each of the ndarrays, and merge them into a single list whose elements are all references to the elements in the list of ndarrays.
Here is my attempt at this. To be clear, weights is a list of ndarrays, and I want weights_unraveled to be a one dimensional list containing references to every element in each ndarray.
def unravel_weights(weights):
    weights_unraveled = []
    for w in weights:
        weights_unraveled.extend(w.ravel())
    return weights_unraveled
I know that w.ravel() returns a one-dimensional array with references to the elements of w, and that assigning it to another variable does not make a copy; I have tested this. Does passing the value to a function make a copy? Does returning a value from a function make a copy? I doubt it, because assignment did not.
If extend() is making copies, how do I stop it, or how do I merge the list of references returned by w.ravel() into a single list containing references to every element in each ndarray?
This is the code that manifests the undesired behavior.
flat_child_weights = unravel_weights(child_weights)
flat_parent_weights = unravel_weights(weights1)
for i in range(len(flat_child_weights)):
    change = random.choice([True, False])
    if change:
        flat_child_weights[i] = flat_parent_weights[i]
Is a copy being made when assigning the return value of unravel_weights? I also doubt it.
Debugging has verified that flat_child_weights and flat_parent_weights are indeed one dimensional lists, but they are copies of child_weights and weights1 respectively. To be clear, flat_child_weights[i] = flat_parent_weights[i] does change the elements of flat_child_weights, but the changes are not reflected in child_weights.
How can I get the elements of child_weights to reflect changes in flat_child_weights?
You get references instead of copies from your NumPy calls because they return views, not copies, unlike list.extend(), which copies the values out into a plain Python list. Here is a method that will do the same, though it requires some upfront work. You deleted your old question, so I cannot reference that code.
If I recall, you started with a list of ndarrays, which could not be directly turned into a 2D ndarray because the ndarrays are different lengths. This will replace your unravel_weights.
Create a 1D ndarray with all the weights flattened:
flat_child_weights = np.concatenate([weights.ravel() for weights in child_weights])
Key step to link your output to the work on the flattened weights: recreate child_weights from views of flat_child_weights, something like:
child_weights_new = []
index = 0
for weights in child_weights:
    child_weights_new.append(flat_child_weights[index:index + weights.size].reshape(weights.shape))
    index += weights.size
Now you can run through your work on the flat_child_weights, and child_weights_new will be updated (but not child_weights), so return child_weights_new.
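Putting the two steps together, a minimal sketch might look like this (the function name flatten_with_views and the sample shapes are mine, not from your original code):
import numpy as np

def flatten_with_views(weight_arrays):
    # One contiguous 1-D buffer holding every weight value
    flat = np.concatenate([w.ravel() for w in weight_arrays])
    rebuilt, index = [], 0
    for w in weight_arrays:
        # Slicing a 1-D ndarray returns a view, and reshaping a view keeps the link to flat
        rebuilt.append(flat[index:index + w.size].reshape(w.shape))
        index += w.size
    return flat, rebuilt

child_weights = [np.zeros((2, 3)), np.zeros(4)]
flat_child_weights, child_weights_new = flatten_with_views(child_weights)
flat_child_weights[0] = 99.0
print(child_weights_new[0][0, 0])  # 99.0 -- edits to the flat buffer show up in the views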
I ran into a bug caused by using multiple sets of brackets to index an array, i.e. using a[i][j] (for various reasons, mostly laziness, which I've now fixed properly). While attempting to assign an element of the array, I found that I was unable to, but I didn't receive any kind of error to tell me why. I am confused as to why I can't do this:
>>> import numpy as np
>>> x = np.arange(0,50,10)
>>> idx = np.array([1,3,4])
>>> x[idx]
array([10, 30, 40])
>>> x[idx][1]
30
>>> x[idx][1] = 10 #this assignment doesn't work
>>> x[idx][1]
30
However, if I instead index the idx array inside the brackets, then it seems to work:
>>> x[idx[1]]
30
>>> x[idx[1]] = 100
>>> x[idx[1]]
100
Can someone explain to me why?
Another way to explain this is that each [] translates into a __getitem__() call, and each []= into a __setitem__() call.
x[idx][1] = 2
is then
x.__getitem__(idx).__setitem__(1, 2)
If x.__getitem__(idx) produces a new array with its own data buffer, the set changes that, not x. If, on the other hand, the get produces a view, then the change will appear in x.
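To actually modify x through a fancy index, do the assignment in one step so that __setitem__ is called on x itself. A small illustrative session (not from the question):
import numpy as np

x = np.arange(0, 50, 10)       # array([ 0, 10, 20, 30, 40])
idx = np.array([1, 3, 4])

# A single call, x.__setitem__(idx, [100, 300, 400]), writes into x's own buffer
x[idx] = [100, 300, 400]
print(x)                       # [  0 100  20 300 400]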
It's important, when working with arrays, that you understand the difference between a view and a copy.
Because:
x[idx]
Creates a new array object with an independent underlying buffer.
So then you index into that:
[1] = 10
Which does work, but you don't keep that new array around; it is discarded immediately.
Whereas:
x[idx[1]] = 100
Assigns to some particular index in the existing array object.
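A short session illustrating view versus copy behavior (a sketch, not part of the original question):
import numpy as np

x = np.arange(0, 50, 10)   # [ 0 10 20 30 40]

# Basic slicing returns a view that shares x's buffer, so the chained assignment sticks
x[1:3][0] = 99
print(x)                   # [ 0 99 20 30 40]

# Fancy indexing returns a copy with its own buffer, so the chained assignment is lost
idx = np.array([1, 3, 4])
x[idx][1] = 7
print(x)                   # [ 0 99 20 30 40]  (unchanged)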
In Python, I am trying to change the values of a NumPy array inside a function:
import numpy as np

def function(array):
    array = array + 1

array = np.zeros((10, 1))
function(array)
Since array is passed to the function, it should be a reference, and I should be able to modify its contents inside the function.
array = array + 1 performs an elementwise operation that adds one to every element in the array, so it changes the values inside.
But the array does not actually change after the function call. I am guessing that the program thinks I am trying to rebind the reference itself rather than change the content of the array, because of the syntax of the elementwise operation. Is there any way to get the intended behavior? I don't want to loop through individual elements or make the function return the new array.
This line:
array = array + 1
… does perform an elementwise operation, but the operation it performs is creating a new array with each element incremented. Assigning that array back to the local variable array doesn't do anything useful, because that local variable is about to go away, and you haven't done anything to change the global variable of the same name.
On the other hand, this line:
array += 1
… performs the elementwise operation of incrementing all of the elements in-place, which is probably what you want here.
In Python, mutable collections are only allowed, not required, to handle the += statement this way; they could handle it the same way as array = array + 1 (as immutable types like str do). But builtin types like list, and most popular third-party types like np.array, do what you want.
Another solution if you want to change the content of your array is to use this:
array[:] = array + 1
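A small sketch of how the three variants behave from the caller's point of view (the function names are illustrative):
import numpy as np

def rebind(a):
    a = a + 1        # builds a new array; the caller's array is untouched

def add_inplace(a):
    a += 1           # in-place: modifies the caller's array

def assign_slice(a):
    a[:] = a + 1     # writes the new values into the existing buffer

arr = np.zeros((3, 1))
rebind(arr)
print(arr.sum())     # 0.0
add_inplace(arr)
print(arr.sum())     # 3.0
assign_slice(arr)
print(arr.sum())     # 6.0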
Consider numpy arrays of the object dtype. I can shove anything I want in there.
A common use case for me is to put strings in them. However, for very large arrays, this may use up a lot of memory, depending on how the array is constructed. For example, if you assign a long string (e.g. "1234567890123456789012345678901234567890") to a variable, and then assign that variable to each element in the array, everything is fine:
arr = np.zeros((100000,), dtype=object)
arr[:] = "1234567890123456789012345678901234567890"
The interpreter now has one large string in memory, and an array full of pointers to this one object.
However, we can also do it wrong:
arr2 = np.zeros((100000,), dtype=object)
for idx in range(100000):
    arr2[idx] = str(1234567890123456789012345678901234567890)
Now, the interpreter has a hundred thousand copies of my long string in memory. Not so great.
(Naturally, in the above example, the generation of a new string each time is contrived - in real life, imagine reading a string from each line in a file.)
What I want to do is, instead of assigning each element to the string, first check if it's already in the array, and if it is, use the same object as the previous entry, rather than the new object.
Something like:
arr = np.zeros((100000,), dtype=object)
seen = []
for idx, string in enumerate(file):  # Length of file is exactly 100000
    if string in seen:
        arr[idx] = seen[seen.index(string)]
    else:
        arr[idx] = string
        seen.append(string)
(Apologies for not posting fully running code. Hopefully you get the idea.)
Unfortunately this requires a large number of superfluous operations on the seen list. I can't figure out how to make it work with sets either.
Suggestions?
Here's one way to do it, using a dictionary whose values are equal to its keys:
seen = {}
for idx, string in enumerate(file):
    arr[idx] = seen.setdefault(string, string)
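For example, with a small in-memory list of lines standing in for the file:
import numpy as np

# Three distinct str objects with equal values (a stand-in for lines read from a file)
lines = [prefix + "pam" for prefix in ("s", "s", "s")]
print(lines[0] is lines[1])          # False: separate objects

arr = np.zeros((len(lines),), dtype=object)
seen = {}
for idx, string in enumerate(lines):
    # setdefault returns the already-stored key if present, else stores and returns string
    arr[idx] = seen.setdefault(string, string)

print(arr[0] is arr[1])              # True: duplicates now share a single object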
How do I declare an array in Python?
variable = []
Now variable refers to an empty list*.
Of course this is an assignment, not a declaration. There's no way to say in Python "this variable should never refer to anything other than a list", since Python is dynamically typed.
*The default built-in Python type is called a list, not an array. It is an ordered container of arbitrary length that can hold a heterogeneous collection of objects (their types do not matter and can be freely mixed). This should not be confused with the array module, which offers a type closer to the C array type; the contents must be homogeneous (all of the same type), but the length is still dynamic.
This is a surprisingly complex topic in Python.
Practical answer
Arrays are represented by the list class (see the reference, and do not confuse them with generators).
Check out usage examples:
# empty array
arr = []
# init with values (can contain mixed types)
arr = [1, "eels"]
# get item by index (can be negative to access end of array)
arr = [1, 2, 3, 4, 5, 6]
arr[0] # 1
arr[-1] # 6
# get length
length = len(arr)
# supports append and insert
arr.append(8)
arr.insert(6, 7)
Theoretical answer
Under the hood, Python's list is a wrapper around a real array which contains references to the items. Also, the underlying array is allocated with some extra space.
Consequences of this are:
random access is really cheap (arr[6653] costs the same as arr[0])
the append operation is 'for free' (amortized constant time) while there is extra space left
the insert operation is expensive (see the timing sketch below)
Check this awesome table of operation complexity.
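A rough way to see the append/insert difference yourself, using timeit (timings vary by machine; this is only a sketch):
import timeit

# Amortized O(1): appending at the end
print(timeit.timeit("lst.append(0)", setup="lst = []", number=10_000))

# O(n): inserting at the front shifts every existing element
print(timeit.timeit("lst.insert(0, 0)", setup="lst = []", number=10_000))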
Also, please see this picture, where I've tried to show the most important differences between an array, an array of references, and a linked list:
You don't actually declare things, but this is how you create an array in Python:
from array import array
intarray = array('i')
For more info see the array module: http://docs.python.org/library/array.html
Now, possibly you don't want an array but a list, but others have answered that already. :)
I think you meant that you want a list with the first 30 cells already filled.
So:
f = []
for i in range(30):
    f.append(0)
An example of where this could be used is the Fibonacci sequence.
See problem 2 in Project Euler
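For instance, a pre-filled list of 30 cells can hold the first 30 Fibonacci numbers (a minimal sketch, not any particular Project Euler solution):
f = [0] * 30          # first 30 cells already filled
f[1] = 1
for i in range(2, 30):
    f[i] = f[i - 1] + f[i - 2]
print(f[:10])         # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]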
This is how:
my_array = [1, 'rebecca', 'allard', 15]
For calculations, use numpy arrays like this:
import numpy as np
a = np.ones((3,2)) # a 2D array with 3 rows, 2 columns, filled with ones
b = np.array([1,2,3]) # a 1D array initialised using a list [1,2,3]
c = np.linspace(2,3,100) # an array with 100 points between (and including) 2 and 3
print(a*1.5) # all elements of a times 1.5
print(a.T+b) # b added to the transpose of a
These NumPy arrays can be saved to and loaded from disk (even compressed), and complex calculations with large numbers of elements are C-like fast.
They are much used in scientific environments. See here for more.
JohnMachin's comment should be the real answer.
All the other answers are just workarounds in my opinion!
So:
array = [0] * element_count
A couple of contributions suggested that arrays in Python are represented by lists. This is incorrect. Python has an independent implementation of arrays in the standard library module array ("array.array()"), so it is incorrect to conflate the two. Lists are lists in Python, so be careful with the nomenclature used.
list_01 = [4, 6.2, 7-2j, 'flo', 'cro']
list_01
Out[85]: [4, 6.2, (7-2j), 'flo', 'cro']
There is one very important difference between list and array.array(). While both of these objects are ordered sequences, array.array() is an ordered homogeneous sequence, whereas a list is a non-homogeneous sequence.
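A quick illustration of that difference, sketched with the array module:
from array import array

mixed = [4, 6.2, 7 - 2j, 'flo', 'cro']      # a list happily holds mixed types

homogeneous = array('d', [4.0, 6.2, 7.5])   # every element must match the typecode
try:
    homogeneous.append('cro')
except TypeError as err:
    print(err)                              # e.g. "must be real number, not str"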
You don't declare anything in Python. You just use it. I recommend you start out with something like http://diveintopython.net.
I would normally just do a = [1, 2, 3], which is actually a list, but for arrays look at this formal definition.
To add to Lennart's answer, an array may be created like this:
from array import array
float_array = array("f",values)
where values can take the form of a tuple, list, np.array, or even another array of the same typecode:
values = [1,2,3]
values = (1,2,3)
values = np.array([1,2,3],'f')
values = array('f',[1,2,3])
# 'i' will work here too, but if the array is 'i' then the values have to be ints
and the output will still be the same:
print(float_array)
print(float_array[1])
print(isinstance(float_array[1],float))
# array('f', [1.0, 2.0, 3.0])
# 2.0
# True
Most methods for list work with array as well, common
ones being pop(), extend(), and append().
Judging from the answers and comments, it appears that the array
data structure isn't that popular. I like it though, the same
way as one might prefer a tuple over a list.
The array structure has stricter rules than a list or np.array, and this can
reduce errors and make debugging easier, especially when working with numerical
data.
Attempts to insert/append a float to an int array will throw a TypeError:
values = [1,2,3]
int_array = array("i",values)
int_array.append(float(1))
# or int_array.extend([float(1)])
# TypeError: integer argument expected, got float
Keeping values which are meant to be integers (e.g. list of indices) in the array
form may therefore prevent a "TypeError: list indices must be integers, not float", since arrays can be iterated over, similar to np.array and lists:
int_array = array('i',[1,2,3])
data = [11,22,33,44,55]
sample = []
for i in int_array:
    sample.append(data[i])
Annoyingly, appending an int to a float array will cause the int to become a float, without throwing an exception.
np.array retains the same data type for its entries too, but instead of giving an error it will change its data type to fit new entries (usually to double or str):
import numpy as np
numpy_int_array = np.array([1,2,3],'i')
for i in numpy_int_array:
    print(type(i))
# <class 'numpy.int32'>
numpy_int_array_2 = np.append(numpy_int_array,int(1))
# still <class 'numpy.int32'>
numpy_float_array = np.append(numpy_int_array,float(1))
# <class 'numpy.float64'> for all values
numpy_str_array = np.append(numpy_int_array,"1")
# <class 'numpy.str_'> for all values
data = [11,22,33,44,55]
sample = []
for i in numpy_int_array_2:
    sample.append(data[i])
# no problem here, but TypeError for the other two
This is true during assignment as well. If the data type is specified, np.array will, wherever possible, transform the entries to that data type:
int_numpy_array = np.array([1,2,float(3)],'i')
# 3 becomes an int
int_numpy_array_2 = np.array([1,2,3.9],'i')
# 3.9 gets truncated to 3 (same as int(3.9))
invalid_array = np.array([1,2,"string"],'i')
# ValueError: invalid literal for int() with base 10: 'string'
# Same error as int('string')
str_numpy_array = np.array([1,2,3],'str')
print(str_numpy_array)
print([type(i) for i in str_numpy_array])
# ['1' '2' '3']
# <class 'numpy.str_'>
or, in essence:
data = [1.2,3.4,5.6]
list_1 = np.array(data,'i').tolist()
list_2 = [int(i) for i in data]
print(list_1 == list_2)
# True
while array will simply give:
invalid_array = array([1,2,3.9],'i')
# TypeError: integer argument expected, got float
Because of this, it is not a good idea to use np.array for type-specific commands. The array structure is useful here. list preserves the data type of the values.
And for something I find rather pesky: the data type is specified as the first argument in array(), but (usually) the second in np.array(). :|
The relation to C is referred to here:
Python List vs. Array - when to use?
Have fun exploring!
Note: The typed and rather strict nature of array leans more towards C rather than Python, and by design Python does not have many type-specific constraints in its functions. Its unpopularity also creates a positive feedback in collaborative work, and replacing it mostly involves an additional [int(x) for x in file]. It is therefore entirely viable and reasonable to ignore the existence of array. It shouldn't hinder most of us in any way. :D
How about this...
>>> a = list(range(12))
>>> a
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
>>> a[7]
7
Following on from Lennart, there's also numpy which implements homogeneous multi-dimensional arrays.
Python calls them lists. You can write a list literal with square brackets and commas:
>>> [6,28,496,8128]
[6, 28, 496, 8128]
I had an array of strings and needed an array of the same length of booleans initialized to True. This is what I did:
strs = ["Hi","Bye"]
bools = [ True for s in strs ]
You can create lists and convert them into arrays, or you can create an array using the numpy module. Below are a few examples to illustrate this. Numpy also makes it easier to work with multi-dimensional arrays.
import numpy as np
a = np.array([1, 2, 3, 4])
#For custom inputs
a = np.array([int(x) for x in input().split()])
You can also reshape this array into a 2x2 matrix using the reshape function, which takes the dimensions of the matrix as input.
mat = a.reshape(2, 2)
# This creates a list of 5000 zeros
a = [0] * 5000
You can read and write to any element in this list with a[n] notation, in the same way as you would with an array.
It does seem to have the same random access performance as an array. I cannot say how it allocates memory, because it also supports a mix of different types, including strings and objects, if you need it to.