How to convert np.array into pd.DataFrame - python

I have loaded the load_iris toy dataset from the scikit-learn library; printed, it looks like this:
{'data': array([[5.1, 3.5, 1.4, 0.2],
[4.9, 3. , 1.4, 0.2],
[4.7, 3.2, 1.3, 0.2],
...,
[6.5, 3. , 5.2, 2. ],
[6.2, 3.4, 5.4, 2.3],
[5.9, 3. , 5.1, 1.8]]),
'target': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]),
'frame': None,
'target_names': array(['setosa', 'versicolor', 'virginica'], dtype='<U10'),
'DESCR': '.. _iris_dataset:\n\nIris plants dataset\n--------------------\n\n**Data Set Characteristics:**\n\n :Number of Instances: 150 (50 in each of three classes)\n :Number of Attributes: 4 numeric, predictive attributes and the class\n :Attribute Information:\n - sepal length in cm\n - sepal width in cm\n - petal length in cm\n - petal width in cm\n - class:\n - Iris-Setosa\n - Iris-Versicolour\n - Iris-Virginica\n \n :Summary Statistics:\n\n ============== ==== ==== ======= ===== ====================\n Min Max Mean SD Class Correlation\n ============== ==== ==== ======= ===== ====================\n sepal length: 4.3 7.9 5.84 0.83 0.7826\n sepal width: 2.0 4.4 3.05 0.43 -0.4194\n petal length: 1.0 6.9 3.76 1.76 0.9490 (high!)\n petal width: 0.1 2.5 1.20 0.76 0.9565 (high!)\n ============== ==== ==== ======= ===== ====================\n\n :Missing Attribute Values: None\n :Class Distribution: 33.3% for each of 3 classes.\n :Creator: R.A. Fisher\n :Donor: Michael Marshall (MARSHALL%PLU#io.arc.nasa.gov)\n :Date: July, 1988\n\nThe famous Iris database, first used by Sir R.A. Fisher. The dataset is taken\nfrom Fisher\'s paper. Note that it\'s the same as in R, but not as in the UCI\nMachine Learning Repository, which has two wrong data points.\n\nThis is perhaps the best known database to be found in the\npattern recognition literature. Fisher\'s paper is a classic in the field and\nis referenced frequently to this day. (See Duda & Hart, for example.) The\ndata set contains 3 classes of 50 instances each, where each class refers to a\ntype of iris plant. One class is linearly separable from the other 2; the\nlatter are NOT linearly separable from each other.\n\n.. topic:: References\n\n - Fisher, R.A. "The use of multiple measurements in taxonomic problems"\n Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to\n Mathematical Statistics" (John Wiley, NY, 1950).\n - Duda, R.O., & Hart, P.E. (1973) Pattern Classification and Scene Analysis.\n (Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.\n - Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System\n Structure and Classification Rule for Recognition in Partially Exposed\n Environments". IEEE Transactions on Pattern Analysis and Machine\n Intelligence, Vol. PAMI-2, No. 1, 67-71.\n - Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule". IEEE Transactions\n on Information Theory, May 1972, 431-433.\n - See also: 1988 MLC Proceedings, 54-64. Cheeseman et al"s AUTOCLASS II\n conceptual clustering system finds 3 classes in the data.\n - Many, many more ...',
'feature_names': ['sepal length (cm)',
'sepal width (cm)',
'petal length (cm)',
'petal width (cm)'],
'filename': 'iris.csv',
'data_module': 'sklearn.datasets.data'}
I wish to convert this dataset, which is in array form, into a data frame, but I am unable to do so with the following command, which returns the first 4 columns completely filled with NaN:
y = pd.DataFrame(datasets.load_iris(),columns = ['sepal length (cm)','sepal width (cm)','petal length (cm)','petal width (cm)','target'])
The command gives the following table, which is not correct
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) target
0 NaN NaN NaN NaN 0
1 NaN NaN NaN NaN 0
2 NaN NaN NaN NaN 0
3 NaN NaN NaN NaN 0
4 NaN NaN NaN NaN 0
... ... ... ... ... ...
145 NaN NaN NaN NaN 2
146 NaN NaN NaN NaN 2
147 NaN NaN NaN NaN 2
148 NaN NaN NaN NaN 2
149 NaN NaN NaN NaN 2
How do I do it? How do I correctly convert the data from an np.array into a pd.DataFrame?

The NaNs appear because pd.DataFrame treats the return value of load_iris() (a Bunch, which behaves like a dict) as a mapping of column names to values: none of the feature names you pass in columns are keys of that dict, so those columns are filled with NaN, while 'target' is a key and does get filled. The simplest fix is the as_frame=True option:
df = datasets.load_iris(as_frame=True)['data']
output:
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm)
0 5.1 3.5 1.4 0.2
1 4.9 3.0 1.4 0.2
2 4.7 3.2 1.3 0.2
3 4.6 3.1 1.5 0.2
4 5.0 3.6 1.4 0.2
.. ... ... ... ...
145 6.7 3.0 5.2 2.3
146 6.3 2.5 5.0 1.9
147 6.5 3.0 5.2 2.0
148 6.2 3.4 5.4 2.3
149 5.9 3.0 5.1 1.8
[150 rows x 4 columns]
If you also want the target:
iris = datasets.load_iris(as_frame=True)
df = iris['data']
df['target'] = iris['target']
output:
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) target
0 5.1 3.5 1.4 0.2 0
1 4.9 3.0 1.4 0.2 0
2 4.7 3.2 1.3 0.2 0
3 4.6 3.1 1.5 0.2 0
4 5.0 3.6 1.4 0.2 0
.. ... ... ... ... ...
145 6.7 3.0 5.2 2.3 2
146 6.3 2.5 5.0 1.9 2
147 6.5 3.0 5.2 2.0 2
148 6.2 3.4 5.4 2.3 2
149 5.9 3.0 5.1 1.8 2
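Alternatively, you can build the DataFrame directly from the arrays in the Bunch, which also answers the title question literally and works on scikit-learn versions older than 0.23 (where, as far as I know, as_frame was introduced):
import pandas as pd
from sklearn import datasets

iris = datasets.load_iris()
# iris['data'] is a (150, 4) np.array; iris['feature_names'] holds the column labels.
df = pd.DataFrame(iris['data'], columns=iris['feature_names'])
df['target'] = iris['target']
print(df.head())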

Related

Pushing one element after every iteration in an array including the column length

I have a 2D array A where I am adding one row to B after every iteration. The problem is that my code behaves as if B were a 1D array: when I append rows of the 2D array, the result gets flattened.
For example:
import numpy as np
test = np.array([
    [1, 5, 4, 2, 2, 2.3, 1.27, 1.22, 1, 1.14],
    [2, 3.01, 7.7, 9.6, 2.8, 5.4, 2.1, 7.47, 1, 4],
    [3, 8, 6.7, 7.1, 5.1, 4.7, 5.9, 4.7, 3.8, 3.05],
    [4, 6, 9.7, 3.3, 5.64, 8.41, 2.16, 3.38, 5.3, 8.5],
    [5, 4.25, 5.28, 1.8, 2.24, 2.79, 7.68, 9.56, 1.1, 1.47],
    [6, 5.18, 6.95, 2.63, 3.60, 4.83, 1.34, 1.86, 2.50, 3.64]])
A = test[0:6, 0:10]
print(A)
B = A[0:3, :]
for i in A[3:]:
    B = np.append(B, i)
    print(B.shape)
The output is:
(40,)
(50,)
(60,)
What I want to do is add one line (sample) per iteration while keeping the column length of 10; the expected output would be:
[1, 5, 4, 2, 2, 2.3, 1.27, 1.22, 1, 1.14],
[2, 3.01, 7.7, 9.6, 2.8, 5.4, 2.1, 7.47, 1, 4],
[3, 8, 6.7, 7.1, 5.1, 4.7, 5.9, 4.7, 3.8, 3.05],
[1, 5, 4, 2, 2, 2.3, 1.27, 1.22, 1, 1.14],
[2, 3.01, 7.7, 9.6, 2.8, 5.4, 2.1, 7.47, 1, 4],
[3, 8, 6.7, 7.1, 5.1, 4.7, 5.9, 4.7, 3.8, 3.05],
[4, 6, 9.7, 3.3, 5.64, 8.41, 2.16, 3.38, 5.3, 8.5],
[1, 5, 4, 2, 2, 2.3, 1.27, 1.22, 1, 1.14],
[2, 3.01, 7.7, 9.6, 2.8, 5.4, 2.1, 7.47, 1, 4],
[3, 8, 6.7, 7.1, 5.1, 4.7, 5.9, 4.7, 3.8, 3.05],
[4, 6, 9.7, 3.3, 5.64, 8.41, 2.16, 3.38, 5.3, 8.5],
[5, 4.25, 5.28, 1.8, 2.24, 2.79, 7.68, 9.56, 1.1, 1.47],
[[1. 5. 4. 2. 2. 2.3 1.27 1.22 1. 1.14]
[2. 3.01 7.7 9.6 2.8 5.4 2.1 7.47 1. 4. ]
[3. 8. 6.7 7.1 5.1 4.7 5.9 4.7 3.8 3.05]
[4. 6. 9.7 3.3 5.64 8.41 2.16 3.38 5.3 8.5 ]
[5. 4.25 5.28 1.8 2.24 2.79 7.68 9.56 1.1 1.47]
[6. 5.18 6.95 2.63 3.6 4.83 1.34 1.86 2.5 3.64]]
But what the code actually outputs:
[1. 5. 4. 2. 2. 2.3 1.27 1.22 1. 1.14 2. 3.01 7.7 9.6
2.8 5.4 2.1 7.47 1. 4. 3. 8. 6.7 7.1 5.1 4.7 5.9 4.7
3.8 3.05 4. 6. 9.7 3.3 5.64 8.41 2.16 3.38 5.3 8.5 5. 4.25
5.28 1.8 2.24 2.79 7.68 9.56 1.1 1.47 6. 5.18 6.95 2.63 3.6 4.83
1.34 1.86 2.5 3.64]
If I understand correctly, what you want is:
B = np.append(B, i[None, :], 0)
which adds a leading dimension to i and appends along the first axis (axis=0). But appending to a NumPy array is costly and discouraged, since every append copies the whole array; I suggest collecting rows in a list and converting to a NumPy array at the end.
Output of your code with this fix:
(4, 10)
(5, 10)
(6, 10)
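For completeness, here is a minimal sketch of the list-based approach suggested above (with a small stand-in array so the snippet is self-contained):
import numpy as np

A = np.arange(60, dtype=float).reshape(6, 10)  # stand-in for the 6x10 array above

rows = list(A[:3])                 # start from the first three rows
for row in A[3:]:
    rows.append(row)               # O(1) list append, no full-array copy
    print(np.array(rows).shape)    # (4, 10), (5, 10), (6, 10)

B = np.vstack(rows)                # single conversion back to a 2D array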

Checking and deleting duplicate neighbor values in a series of rows for a DataFrame

I have a set of rows in a dataframe that contain duplicate neighboring values; the duplicates sit at the same column positions in every row. It looks like this:
row_data = pd.DataFrame({0: [1.1, 1.2, 1.2, 1.3, 1.4, 1.5, 1.5, 1.6],
                         1: [2.3, 2.2, 2.2, 2.3, 2.4, 2.5, 2.5, 2.6],
                         2: [2.4, 2.2, 2.2, 2.3, 2.4, 2.6, 2.6, 2.7],
                         3: [7.1, 7.2, 7.2, 7.3, 7.4, 7.5, 7.5, 7.6]}).T
As stated above the (1.2, 1.2) in row 0 is in the same position as (2.2, 2.2) in row 1, (2.2, 2.2) in row 2, and (7.2, 7.2) in row 3, etc...
I want to first check whether there are duplicate neighbors within every row, remove the duplicates leaving only the first instance, and get a count of how many total duplicates were removed.
I've tried iterating over each row but that is much too time intensive as this dataframe is very large (36 rows by 260,000 columns). The pseudo code I'd like to have would follow this logic:
count_dup = 0
for index in range(0, len(row_data.columns)):
    if row_data[index] == row_data[index + 1]:
        count_dup = count_dup + 1
        row_data[index] = np.nan
My pseudocode obviously does not work, but the remaining step would be to remove the NaNs by dropping the duplicates from all columns.
The output would be:
row_data_dropped = pd.DataFrame({0 : [1.1, 1.2, 1.3, 1.4, 1.5, 1.6],
1 : [2.3, 2.2, 2.3, 2.4, 2.5, 2.6],
2 : [2.4, 2.2, 2.3, 2.4, 2.6, 2.7],
3 : [7.1, 7.2, 7.3, 7.4, 7.5, 7.6]}).T
total_dropped_neighbors = 8
Is there any way I can do this?
IIUC, here's what I would try:
non_dups = row_data.ne(row_data.shift(1,axis=1)).any()
row_data.loc[:,non_dups]
Output:
0 1 3 4 5 7
0 1.1 1.2 1.3 1.4 1.5 1.6
1 2.3 2.2 2.3 2.4 2.5 2.6
2 2.4 2.2 2.3 2.4 2.6 2.7
3 7.1 7.2 7.3 7.4 7.5 7.6
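The question also asks for a count of how many duplicates were removed. Reusing the same mask, that count falls out directly; a sketch building on the answer above:
import pandas as pd

row_data = pd.DataFrame({0: [1.1, 1.2, 1.2, 1.3, 1.4, 1.5, 1.5, 1.6],
                         1: [2.3, 2.2, 2.2, 2.3, 2.4, 2.5, 2.5, 2.6],
                         2: [2.4, 2.2, 2.2, 2.3, 2.4, 2.6, 2.6, 2.7],
                         3: [7.1, 7.2, 7.2, 7.3, 7.4, 7.5, 7.5, 7.6]}).T

# True for columns where at least one row differs from the column before it.
non_dups = row_data.ne(row_data.shift(1, axis=1)).any()
row_data_dropped = row_data.loc[:, non_dups]

# Each dropped column removes one value from every row.
total_dropped_neighbors = (~non_dups).sum() * len(row_data)
print(total_dropped_neighbors)  # 8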

is there a parameter to set the precision for numpy.linspace?

I am trying to check if a numpy array contains a specific value:
>>> x = np.linspace(-5,5,101)
>>> x
array([-5. , -4.9, -4.8, -4.7, -4.6, -4.5, -4.4, -4.3, -4.2, -4.1, -4. ,
-3.9, -3.8, -3.7, -3.6, -3.5, -3.4, -3.3, -3.2, -3.1, -3. , -2.9,
-2.8, -2.7, -2.6, -2.5, -2.4, -2.3, -2.2, -2.1, -2. , -1.9, -1.8,
-1.7, -1.6, -1.5, -1.4, -1.3, -1.2, -1.1, -1. , -0.9, -0.8, -0.7,
-0.6, -0.5, -0.4, -0.3, -0.2, -0.1, 0. , 0.1, 0.2, 0.3, 0.4,
0.5, 0.6, 0.7, 0.8, 0.9, 1. , 1.1, 1.2, 1.3, 1.4, 1.5,
1.6, 1.7, 1.8, 1.9, 2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6,
2.7, 2.8, 2.9, 3. , 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7,
3.8, 3.9, 4. , 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8,
4.9, 5. ])
>>> -5. in x
True
>>> a = 0.2
>>> a
0.2
>>> a in x
False
I assigned the constant 0.2 to the variable a. It seems that the value of a does not exactly match the corresponding element in the NumPy array generated by np.linspace().
I've searched the docs, but didn't find anything about this.
This is not a question of the precision of np.linspace, but rather of the type of the elements in the generated array.
np.linspace generates elements which, conceptually, equally divide the input range between them. However, these elements are then stored as floating point numbers with limited precision, which makes the generation process itself appear to lack precision.
By passing the dtype argument to np.linspace, you can specify the precision of the floating point type used to store its result, which can increase the apparent precision of the generation process.
Nevertheless, you should not use the equality operator to compare floating point numbers. Instead, use np.isclose in conjunction with np.ndarray.any, or some equivalent:
>>> floats_64 = np.linspace(-5, 5, 101, dtype='float64')
>>> floats_128 = np.linspace(-5, 5, 101, dtype='float128')
>>> print(0.2 in floats_64)
False
>>> print(floats_64[52])
0.20000000000000018
>>> print(np.isclose(0.2, floats_64).any()) # check if any element in floats_64 is close to 0.2
True
>>> print(0.2 in floats_128)
False
>>> print(floats_128[52])
0.20000000000000017764
>>> print(np.isclose(0.2, floats_128).any()) # check if any element in floats_128 is close to 0.2
True
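If you need this membership test repeatedly, you can wrap the np.isclose pattern in a small helper (the function name and tolerance values here are my own choices):
import numpy as np

def contains(arr, value, rtol=1e-9, atol=1e-12):
    # Tolerance-based membership test instead of exact float equality.
    return bool(np.isclose(arr, value, rtol=rtol, atol=atol).any())

x = np.linspace(-5, 5, 101)
print(contains(x, 0.2))    # True
print(contains(x, 0.25))   # False: 0.25 is not a point of this mesh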

How to refine a mesh in python quickly

I have a numpy array([1.0, 2.0, 3.0]), which is actually a mesh in 1 dimension in my problem. What I want to do is refine the mesh to get this: array([0.8, 0.9, 1, 1.1, 1.2, 1.8, 1.9, 2, 2.1, 2.2, 2.8, 2.9, 3, 3.1, 3.2]).
The actual array is very large and this procedure costs a lot of time. How can I do this quickly (perhaps vectorized) in Python?
Here's a vectorized approach -
(a[:,None] + np.arange(-0.2,0.3,0.1)).ravel() # a is input array
Sample run -
In [15]: a = np.array([1.0, 2.0, 3.0]) # Input array
In [16]: (a[:,None] + np.arange(-0.2,0.3,0.1)).ravel()
Out[16]:
array([ 0.8, 0.9, 1. , 1.1, 1.2, 1.8, 1.9, 2. , 2.1, 2.2, 2.8,
2.9, 3. , 3.1, 3.2])
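A note on how the one-liner works, plus a float-safety tweak of my own: a[:, None] turns the (3,) array into a (3, 1) column, adding the (5,) offset vector broadcasts to a (3, 5) grid of refined points, and ravel() flattens it row by row. Because np.arange with a float step can occasionally yield one element too many or too few, np.linspace with an explicit count is a safer way to build the offsets:
import numpy as np

a = np.array([1.0, 2.0, 3.0])
offsets = np.linspace(-0.2, 0.2, 5)        # exactly 5 offsets, no float-step surprises

refined = (a[:, None] + offsets).ravel()   # (3, 1) + (5,) broadcasts to (3, 5), then flattens
print(refined)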
Here are a few options (Python 3):
Option 1:
np.array([j for i in arr for j in np.arange(i - 0.2, i + 0.25, 0.1)])
# array([ 0.8, 0.9, 1. , 1.1, 1.2, 1.8, 1.9, 2. , 2.1, 2.2, 2.8,
# 2.9, 3. , 3.1, 3.2])
Option 2:
np.array([j for x, y in zip(arr - 0.2, arr + 0.25) for j in np.arange(x,y,0.1)])
# array([ 0.8, 0.9, 1. , 1.1, 1.2, 1.8, 1.9, 2. , 2.1, 2.2, 2.8,
# 2.9, 3. , 3.1, 3.2])
Option 3:
np.array([arr + i for i in np.arange(-0.2, 0.25, 0.1)]).T.ravel()
# array([ 0.8, 0.9, 1. , 1.1, 1.2, 1.8, 1.9, 2. , 2.1, 2.2, 2.8,
# 2.9, 3. , 3.1, 3.2])
Timing on a larger array:
arr = np.arange(100000)
arr
# array([ 0, 1, 2, ..., 99997, 99998, 99999])
%timeit np.array([j for i in arr for j in np.arange(i-0.2, i+0.25, 0.1)])
# 1 loop, best of 3: 615 ms per loop
%timeit np.array([j for x, y in zip(arr - 0.2, arr + 0.25) for j in np.arange(x,y,0.1)])
# 1 loop, best of 3: 250 ms per loop
%timeit np.array([arr + i for i in np.arange(-0.2, 0.25, 0.1)]).T.ravel()
# 100 loops, best of 3: 1.93 ms per loop

Python: How do I fill an array with a range of numbers?

So I have an array of 100 elements:
a = np.empty(100)
How do I fill it with a range of numbers? I want something like this:
b = a.fill(np.arange(1, 4, 0.25))
So I want it to keep filling a with the values of that range, over and over, until it reaches the size of the array.
Thanks
np.put places values from b into a at the target indices ind. If b is shorter than ind, its values are repeated cyclically as necessary:
import numpy as np
a = np.empty(100)
b = np.arange(1, 4, 0.25)
ind = np.arange(len(a))
np.put(a, ind, b)
print(a)
yields
[ 1. 1.25 1.5 1.75 2. 2.25 2.5 2.75 3. 3.25 3.5 3.75
1. 1.25 1.5 1.75 2. 2.25 2.5 2.75 3. 3.25 3.5 3.75
1. 1.25 1.5 1.75 2. 2.25 2.5 2.75 3. 3.25 3.5 3.75
1. 1.25 1.5 1.75 2. 2.25 2.5 2.75 3. 3.25 3.5 3.75
1. 1.25 1.5 1.75 2. 2.25 2.5 2.75 3. 3.25 3.5 3.75
1. 1.25 1.5 1.75 2. 2.25 2.5 2.75 3. 3.25 3.5 3.75
1. 1.25 1.5 1.75 2. 2.25 2.5 2.75 3. 3.25 3.5 3.75
1. 1.25 1.5 1.75 2. 2.25 2.5 2.75 3. 3.25 3.5 3.75
1. 1.25 1.5 1.75]
Such a simple feat can be achieved in plain python, too
>>> size = 100
>>> b = [v/4 for v in range(4,16)]
>>> a = (b * (size // len(b) + 1))[:size]
>>> a
[1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5, 2.75, 3.0, 3.25, 3.5, 3.75,
1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5, 2.75, 3.0, 3.25, 3.5, 3.75,
1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5, 2.75, 3.0, 3.25, 3.5, 3.75,
1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5, 2.75, 3.0, 3.25, 3.5, 3.75,
1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5, 2.75, 3.0, 3.25, 3.5, 3.75,
1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5, 2.75, 3.0, 3.25, 3.5, 3.75,
1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5, 2.75, 3.0, 3.25, 3.5, 3.75,
1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5, 2.75, 3.0, 3.25, 3.5, 3.75,
1.0, 1.25, 1.5, 1.75]
with these exact conditions it's about 3 times faster than numpy.
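For what it's worth, NumPy can also do this cyclic fill in a single call: np.resize repeats the input array until the requested size is reached, so no index bookkeeping is needed:
import numpy as np

b = np.arange(1, 4, 0.25)    # the 12 values 1.0, 1.25, ..., 3.75
a = np.resize(b, 100)        # repeats b cyclically until 100 elements are filled
print(a[:14])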
