To generate a probability density function, perhaps a million observations need to be considered. When I worked with a NumPy array, I ran into a size limit of 32.
Is that really the limit?
If so, how can we store more than 32 elements without splitting them across different columns, or nesting arrays inside arrays?
import numpy

my_list = []
for i in range(0, 100):
    my_list.append(i)
np_arr = numpy.ndarray(my_list)  # ValueError: sequence too large; cannot be greater than 32
When you create an array with numpy.ndarray, the first argument is the shape of the array, so your 100-element list is interpreted as 100 separate dimensions, and NumPy caps arrays at 32 dimensions. The limit is on dimensions, not on elements. If you just want to turn the list into an array, you want numpy.array:
import numpy

my_list = []
for i in range(0, 100):
    my_list.append(i)
np_arr = numpy.array(my_list)
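For the stated goal, a density estimate over a large sample, a 1-D array of a million elements is no problem at all; only the dimension count is capped. A minimal sketch (the normal distribution here is just an illustrative stand-in for your data):

```python
import numpy as np

# One million observations in a single 1-D array -- well within limits;
# only the number of *dimensions* is capped at 32.
rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=1_000_000)

# Empirical density: a histogram normalized so the bin areas sum to 1.
density, edges = np.histogram(samples, bins=100, density=True)
print(samples.shape)   # (1000000,)
print(density.shape)   # (100,)
```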
I have an array arr which is not a NumPy type, and I would like to find its size. A simple search shows options like len(arr), and arr.shape if it is a NumPy array. len seems to work only for the first dimension. If I do arr = numpy.array(arr), then I can use arr.shape. Is there a direct way to get the shape of arr?
The array arr is returned from a function and I do not know the operations inside it. A simple print of arr gives [array([2, 3, 1, ... ]), array([5, 2, 9, ... ])], and type(arr) results in <class 'list'>.
That being the case, how do I find the size of arr "directly"? Something like (2, N).
You have a list of numpy arrays, so you can get the sizes with:
sizes = [a.size for a in arr]
If you assume that all the arrays are the same shape, you can get the size of the first element and combine that with the length of the list.
size = (len(arr), *arr[0].shape)
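Putting the two approaches together, a quick check (the array contents here are made up for illustration):

```python
import numpy as np

# A list of NumPy arrays, as described in the question.
arr = [np.array([2, 3, 1, 4]), np.array([5, 2, 9, 7])]

sizes = [a.size for a in arr]       # per-array element counts
shape = (len(arr), *arr[0].shape)   # combined shape, assuming equal lengths

print(sizes)   # [4, 4]
print(shape)   # (2, 4)
```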
I'd like to make a function that rounds integers or floats homogeneously and smartly. For example, if I have an array like:
[0.672, 0.678, 0.672]
my output would be:
[0.67, 0.68, 0.67]
but also if I have this kind of input:
[17836.982, 160293.673, 103974.287]
my output would be:
[17836, 160293, 103974]
But at the same time, if my array only has close-together values, such as:
[17836.987, 17836.976, 17836.953]
The output would be:
[17836.99, 17836.98, 17836.95]
An automated way is to compute all pairwise absolute differences, take the minimum, and from it work out the number of decimal places needed to preserve a representative difference.
This doesn't give the exact output you want, but it follows the general logic.
Here, using NumPy to help with the computation; the algorithm is O(n**2):
import numpy as np

def auto_round(l, round_int_part=False):
    a = np.array(l)
    # Matrix of pairwise absolute differences.
    b = abs(a - a[:, None])
    np.fill_diagonal(b, float('inf'))
    # Number of decimals needed to preserve the smallest difference.
    n = int(np.ceil(-np.log10(b.min())))
    # print(f'rounding to {n} decimals')  # uncomment to get info
    if n < 0:
        if not round_int_part:
            return a.astype(int).tolist()
        return np.round(a, decimals=n).astype(int).tolist()
    return np.round(a, decimals=n).tolist()
auto_round([17836.987, 17836.976, 17836.953])
# [17836.99, 17836.98, 17836.95]
auto_round([0.6726, 0.6785, 0.6723])
# [0.6726, 0.6785, 0.6723]
auto_round([17836.982, 160293.673, 103974.287])
# [ 17836, 160293, 103974]
auto_round([17836.982, 160293.673, 103974.287], round_int_part=True)
# [20000, 160000, 100000]
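If performance matters for large inputs, the full pairwise-difference matrix can be avoided: after sorting, the minimum absolute difference is always between adjacent elements, which brings the cost down to O(n log n). A sketch of that variant (auto_round_sorted is a name of my own, not from the answer above; like the original, it assumes no exact duplicates in the input):

```python
import numpy as np

def auto_round_sorted(l, round_int_part=False):
    # Sort once; the smallest gap must be between neighbors in sorted order.
    a = np.sort(np.asarray(l, dtype=float))
    min_gap = np.diff(a).min()
    n = int(np.ceil(-np.log10(min_gap)))
    out = np.asarray(l, dtype=float)
    if n < 0:
        if not round_int_part:
            return out.astype(int).tolist()
        return np.round(out, decimals=n).astype(int).tolist()
    return np.round(out, decimals=n).tolist()

print(auto_round_sorted([17836.987, 17836.976, 17836.953]))
# [17836.99, 17836.98, 17836.95]
```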
I'm facing a problem. I have two elements, as follows:
[array([130.05297852, 159.25004578, 140.36545944]),
array([115.27301025, 160.63392258, 132.83247375])]
and
[39.44091796875,
52.175140380859375]
and I would like to have something like that :
[array([130.05297852, 159.25004578, 140.36545944, 39.44091796875]),
array([115.27301025, 160.63392258, 132.83247375, 52.175140380859375])]
How can I manage to do this? Thanks!
You can append elements with the list append method.

for i in range(len(small_array)):
    bigger_array[i].append(small_array[i])

This appends the first element to the first array and the second element to the second array.
EDIT:
With NumPy arrays you can adapt the previous method, but note that np.append returns a new array rather than modifying its argument in place, so assign the result back:

for i in range(len(small_array)):
    bigger_array[i] = np.append(bigger_array[i], small_array[i])
import numpy as np

a = [
    np.array([130.05297852, 159.25004578, 140.36545944]),
    np.array([115.27301025, 160.63392258, 132.83247375])
]
add_to_a = np.array([39.44091796875, 52.175140380859375])

result = []
for i, j in zip(a, add_to_a):
    final = np.append(i, j)  # returns a new array with j appended
    result.append(final)

print(result)              # if you need a normal list of arrays
result = np.array(result)  # making an ND array
print(result)
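If the inner arrays all have the same length, the loop can also be replaced by stacking into a 2-D matrix and appending the new values as a column (a hedged alternative sketch, not what the answers above do):

```python
import numpy as np

a = [
    np.array([130.05297852, 159.25004578, 140.36545944]),
    np.array([115.27301025, 160.63392258, 132.83247375]),
]
add_to_a = [39.44091796875, 52.175140380859375]

# vstack turns the list into a (2, 3) matrix; column_stack then
# appends the 1-D values as a final column, giving shape (2, 4).
stacked = np.column_stack([np.vstack(a), np.array(add_to_a)])
print(stacked.shape)  # (2, 4)
```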
I have a 100x100 array and another 50x50 array. How do I assign the whole 50x50 array to a slice of the larger array?
try this:
larger[:50, :50] = smaller
it will assign the whole smaller array to a slice of the larger array.
If your "array" is a two dimensional list it's not possible to do this with one simple statement (you could create a list comprehension but I think it would be unreadable), this solution iterates over the smaller list and replaces one row/slice at a time. The following assumes the "slice" fits within the larger array. You should add checks for this otherwise you will get IndexErrors when you try to write outside of the limits of the larger array
def replace_2d_list_slice(larger_list, smaller_list, row_start, column_start):
    for i, row in enumerate(smaller_list, start=row_start):
        larger_list[i][column_start:column_start + len(row)] = row

replace_2d_list_slice(larger_list, smaller_list, 10, 10)
Example:

import numpy as np

larger = np.ones((100, 100))
smaller = np.zeros((50, 50))
larger[20:70, 40:90] = smaller  # that 50x50 block of larger is now all 0s
Sparse array objects will have a fixed size n set when they are created; attempting to set or get elements beyond the size of the array should raise an IndexError.
Use SciPy sparse matrices, e.g. a COO sparse matrix, where C holds the data values and A, B the row and column indices:

from scipy import sparse
matrix = sparse.coo_matrix((C, (A, B)), shape=(5, 5))
Or you can use a pandas SparseArray (pd.SparseArray in older versions; pd.arrays.SparseArray in modern pandas):

import numpy as np
import pandas as pd

arr = np.random.randn(10)
arr[2:5] = np.nan
arr[7:8] = np.nan
sparr = pd.arrays.SparseArray(arr)
I'd bet these are already implemented in numpy or scipy.
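If the exercise requires implementing the behavior yourself rather than using a library, a minimal dictionary-backed sketch could look like this (the class name and details are my own, not from NumPy, SciPy, or pandas):

```python
class SparseArray:
    """Fixed-size sparse array: unset elements read back as 0."""

    def __init__(self, n):
        self.n = n
        self._data = {}  # index -> value, stored only for set entries

    def _check(self, i):
        # Enforce the fixed size: out-of-range access raises IndexError.
        if not 0 <= i < self.n:
            raise IndexError(f"index {i} out of range for size {self.n}")

    def __getitem__(self, i):
        self._check(i)
        return self._data.get(i, 0)

    def __setitem__(self, i, value):
        self._check(i)
        self._data[i] = value

s = SparseArray(5)
s[2] = 7
print(s[2])  # 7
print(s[0])  # 0
```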