Convert string to dictionary containing numpy arrays
I have a string which has this structure:
{0: [array([5.1, 3.5, 1.4, 0.2]), array([4.9, 3. , 1.4, 0.2]), 1: [array([7. , 3.2, 4.7, 1.4]), array([6.4, 3.2, 4.5, 1.5]), 2: [array([6.3, 3.3, 6. , 2.5]), array([7.1, 3. , 5.9, 2.1])]}
It is in the form of a python dictionary containing numpy arrays.
How can I turn this string into a python dictionary containing numpy arrays?
Thanks for the replies so far. I can't simply change the string, because it is the output of one python script that I want to pipe into another. I could go and change the format of that output, but I was rather hoping to avoid that.
In an interactive ipython session I created an alias:
In [7]: array = np.array
and edited a copy-n-paste of your sample. I had to add a couple of closing ]:
In [8]: data = {0: [array([5.1, 3.5, 1.4, 0.2]), array([4.9, 3. , 1.4, 0.2])], 1
...: : [array([7. , 3.2, 4.7, 1.4]), array([6.4, 3.2, 4.5, 1.5])], 2: [array(
...: [6.3, 3.3, 6. , 2.5]), array([7.1, 3. , 5.9, 2.1])]}
Now the pasted expression evaluates and the variable can be displayed directly:
In [9]: data
Out[9]:
{0: [array([5.1, 3.5, 1.4, 0.2]), array([4.9, 3. , 1.4, 0.2])],
1: [array([7. , 3.2, 4.7, 1.4]), array([6.4, 3.2, 4.5, 1.5])],
2: [array([6.3, 3.3, 6. , 2.5]), array([7.1, 3. , 5.9, 2.1])]}
or with a string (corrected):
In [10]: astr="{0: [array([5.1, 3.5, 1.4, 0.2]), array([4.9, 3. , 1.4, 0.2])], 1
...: : [array([7. , 3.2, 4.7, 1.4]), array([6.4, 3.2, 4.5, 1.5])], 2: [array
...: ([6.3, 3.3, 6. , 2.5]), array([7.1, 3. , 5.9, 2.1])]}"
In [11]: astr
Out[11]: '{0: [array([5.1, 3.5, 1.4, 0.2]), array([4.9, 3. , 1.4, 0.2])], 1: [array([7. , 3.2, 4.7, 1.4]), array([6.4, 3.2, 4.5, 1.5])], 2: [array([6.3, 3.3, 6. , 2.5]), array([7.1, 3. , 5.9, 2.1])]}'
I can do an exec or eval:
In [15]: eval(astr)
Out[15]:
{0: [array([5.1, 3.5, 1.4, 0.2]), array([4.9, 3. , 1.4, 0.2])],
1: [array([7. , 3.2, 4.7, 1.4]), array([6.4, 3.2, 4.5, 1.5])],
2: [array([6.3, 3.3, 6. , 2.5]), array([7.1, 3. , 5.9, 2.1])]}
Use of eval is often discouraged because it will execute whatever code it is given, malicious or not. But the safer ast.literal_eval only handles literals (dicts, lists, tuples, strings and numbers), not calls like np.array. One way or another you have to edit the string so that it is a valid Python/numpy expression.
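If you do stick with eval, one common mitigation is to hand it an explicit namespace so that only the names you expect can resolve. A minimal sketch (the astr here is a shortened stand-in for the corrected string above; this narrows, but does not eliminate, eval's attack surface on untrusted input):

import numpy as np

astr = '{0: [array([5.1, 3.5, 1.4, 0.2]), array([4.9, 3. , 1.4, 0.2])]}'
# No builtins available, and the only resolvable name is array -> np.array.
data = eval(astr, {'__builtins__': {}}, {'array': np.array})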
I changed how the string was being produced so that it was in the form of a dictionary containing python lists instead of numpy arrays.
With this I could just use ast.literal_eval() to deserialize it.
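A minimal sketch of that round trip (the list-based string below is a shortened stand-in for the real piped output):

import ast
import numpy as np

raw = '{0: [[5.1, 3.5, 1.4, 0.2], [4.9, 3.0, 1.4, 0.2]]}'
parsed = ast.literal_eval(raw)  # safe: parses literals only, never runs code
data = {k: [np.array(v) for v in vals] for k, vals in parsed.items()}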
Related
How to convert np.array into pd.DataFrame
I have loaded the 'load_iris' toy dataset from the scikit-learn library. Printing it gives a dictionary of numpy arrays (abbreviated here; the full dump lists all 150 rows and the dataset description):

{'data': array([[5.1, 3.5, 1.4, 0.2],
                [4.9, 3. , 1.4, 0.2],
                ...,
                [5.9, 3. , 5.1, 1.8]]),
 'target': array([0, 0, 0, ..., 2, 2, 2]),
 'frame': None,
 'target_names': array(['setosa', 'versicolor', 'virginica'], dtype='<U10'),
 'DESCR': '.. _iris_dataset:\n\nIris plants dataset\n...',
 'feature_names': ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'],
 'filename': 'iris.csv',
 'data_module': 'sklearn.datasets.data'}

I wish to convert this dataset, which is in array form, into a data frame, but I am unable to do so with the following command, which returns the first 4 columns completely filled with NaN:

y = pd.DataFrame(datasets.load_iris(),
                 columns=['sepal length (cm)', 'sepal width (cm)',
                          'petal length (cm)', 'petal width (cm)', 'target'])

The command gives the following table, which is not correct:

     sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)  target
0                  NaN               NaN                NaN               NaN       0
1                  NaN               NaN                NaN               NaN       0
2                  NaN               NaN                NaN               NaN       0
3                  NaN               NaN                NaN               NaN       0
4                  NaN               NaN                NaN               NaN       0
..                 ...               ...                ...               ...     ...
145                NaN               NaN                NaN               NaN       2
146                NaN               NaN                NaN               NaN       2
147                NaN               NaN                NaN               NaN       2
148                NaN               NaN                NaN               NaN       2
149                NaN               NaN                NaN               NaN       2

How do I get the data correctly converted from np.array into pd.DataFrame?
The NaNs appear because load_iris() returns a dict-like Bunch whose keys are 'data', 'target', 'feature_names' and so on; pandas matches your columns list against those keys, finds only 'target', and fills everything else with NaN. Use the as_frame=True option instead:

df = datasets.load_iris(as_frame=True)['data']

output:

     sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
0                  5.1               3.5               1.4               0.2
1                  4.9               3.0               1.4               0.2
2                  4.7               3.2               1.3               0.2
3                  4.6               3.1               1.5               0.2
4                  5.0               3.6               1.4               0.2
..                 ...               ...               ...               ...
145                6.7               3.0               5.2               2.3
146                6.3               2.5               5.0               1.9
147                6.5               3.0               5.2               2.0
148                6.2               3.4               5.4               2.3
149                5.9               3.0               5.1               1.8

[150 rows x 4 columns]

If you also want the target:

iris = datasets.load_iris(as_frame=True)
df = iris['data']
df['target'] = iris['target']

output:

     sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)  target
0                  5.1               3.5               1.4               0.2       0
1                  4.9               3.0               1.4               0.2       0
2                  4.7               3.2               1.3               0.2       0
3                  4.6               3.1               1.5               0.2       0
4                  5.0               3.6               1.4               0.2       0
..                 ...               ...               ...               ...     ...
145                6.7               3.0               5.2               2.3       2
146                6.3               2.5               5.0               1.9       2
147                6.5               3.0               5.2               2.0       2
148                6.2               3.4               5.4               2.3       2
149                5.9               3.0               5.1               1.8       2
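If you are on a scikit-learn version older than 0.23 (which lacks as_frame), the same frame can be assembled by hand from the Bunch's arrays; a sketch:

from sklearn import datasets
import pandas as pd

iris = datasets.load_iris()
# Build the frame from the raw data array, reusing the bundled column names.
df = pd.DataFrame(iris['data'], columns=iris['feature_names'])
df['target'] = iris['target']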
Is there a parameter to set the precision for numpy.linspace?
I am trying to check if a numpy array contains a specific value:

>>> x = np.linspace(-5, 5, 101)
>>> x
array([-5. , -4.9, -4.8, -4.7, -4.6, -4.5, -4.4, -4.3, -4.2, -4.1,
       -4. , -3.9, -3.8, -3.7, -3.6, -3.5, -3.4, -3.3, -3.2, -3.1,
       -3. , -2.9, -2.8, -2.7, -2.6, -2.5, -2.4, -2.3, -2.2, -2.1,
       -2. , -1.9, -1.8, -1.7, -1.6, -1.5, -1.4, -1.3, -1.2, -1.1,
       -1. , -0.9, -0.8, -0.7, -0.6, -0.5, -0.4, -0.3, -0.2, -0.1,
        0. ,  0.1,  0.2,  0.3,  0.4,  0.5,  0.6,  0.7,  0.8,  0.9,
        1. ,  1.1,  1.2,  1.3,  1.4,  1.5,  1.6,  1.7,  1.8,  1.9,
        2. ,  2.1,  2.2,  2.3,  2.4,  2.5,  2.6,  2.7,  2.8,  2.9,
        3. ,  3.1,  3.2,  3.3,  3.4,  3.5,  3.6,  3.7,  3.8,  3.9,
        4. ,  4.1,  4.2,  4.3,  4.4,  4.5,  4.6,  4.7,  4.8,  4.9,
        5. ])
>>> -5. in x
True
>>> a = 0.2
>>> a
0.2
>>> a in x
False

I assigned a constant to variable a. It seems that the precision of a is not compatible with the elements in the numpy array generated by np.linspace(). I've searched the docs, but didn't find anything about this.
This is not a question of the precision of np.linspace, but rather of the type of the elements in the generated array. np.linspace generates elements which, conceptually, equally divide the input range between them. However, these elements are then stored as floating point numbers with limited precision, which makes the generation process itself appear to lack precision.

By passing the dtype argument to np.linspace, you can specify the precision of the floating point type used to store its result, which can increase the apparent precision of the generation process. Nevertheless, you should not use the equality operator to compare floating point numbers. Instead, use np.isclose in conjunction with np.ndarray.any, or some equivalent:

>>> floats_64 = np.linspace(-5, 5, 101, dtype='float64')
>>> floats_128 = np.linspace(-5, 5, 101, dtype='float128')
>>> print(0.2 in floats_64)
False
>>> print(floats_64[52])
0.20000000000000018
>>> print(np.isclose(0.2, floats_64).any())  # check if any element in floats_64 is close to 0.2
True
>>> print(0.2 in floats_128)
False
>>> print(floats_128[52])
0.20000000000000017764
>>> print(np.isclose(0.2, floats_128).any())  # check if any element in floats_128 is close to 0.2
True
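For the curious, here is a hand-worked sketch of why the literal 0.2 goes missing; it reconstructs the value stored at index 52 (where 0.2 "should" sit) using plain Python doubles, which mirrors the start + i*step arithmetic:

>>> step = 10 / 100        # the double nearest to 0.1, not 0.1 exactly
>>> -5 + 52 * step         # the value actually stored at index 52
0.20000000000000018
>>> 0.2 == -5 + 52 * step  # hence `0.2 in x` is False
False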
How to import from a data file a numpy structured array
I'm trying to create an array which has 5 columns imported from a data file. Four of them are floats and the last one is a string. The data file looks like this:

5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
4.6,3.1,1.5,0.2,Iris-setosa
5.0,3.6,1.4,0.2,Iris-setosa
5.4,3.9,1.7,0.4,Iris-setosa
4.6,3.4,1.4,0.3,Iris-setosa
5.0,3.4,1.5,0.2,Iris-setosa

I tried these:

data = np.genfromtxt(filename, dtype="float,float,float,float,str", delimiter=",")
data = np.loadtxt(filename, dtype="float,float,float,float,str", delimiter=",")

but both import only the first column. Why? How can I fix this? Ty for your time! :)
You must specify the str type correctly: "U20", for example, for at most 20 characters. (A bare "str" in a numpy dtype string becomes a zero-width string field, so nothing can be stored in that column.)

data = np.loadtxt('data.txt', dtype="float,"*4 + "U20", delimiter=",")

seems to work:

array([( 5.1, 3.5, 1.4, 0.2, 'Iris-setosa'),
       ( 4.9, 3. , 1.4, 0.2, 'Iris-setosa'),
       ( 4.7, 3.2, 1.3, 0.2, 'Iris-setosa'),
       ( 4.6, 3.1, 1.5, 0.2, 'Iris-setosa'),
       ( 5. , 3.6, 1.4, 0.2, 'Iris-setosa'),
       ( 5.4, 3.9, 1.7, 0.4, 'Iris-setosa'),
       ( 4.6, 3.4, 1.4, 0.3, 'Iris-setosa'),
       ( 5. , 3.4, 1.5, 0.2, 'Iris-setosa')],
      dtype=[('f0', '<f8'), ('f1', '<f8'), ('f2', '<f8'), ('f3', '<f8'), ('f4', '<U20')])

Another method, using pandas, gives you an object array, but this slows down further computations:

In [336]: pd.read_csv('data.txt', header=None).values
Out[336]:
array([[5.1, 3.5, 1.4, 0.2, 'Iris-setosa'],
       [4.9, 3.0, 1.4, 0.2, 'Iris-setosa'],
       [4.7, 3.2, 1.3, 0.2, 'Iris-setosa'],
       [4.6, 3.1, 1.5, 0.2, 'Iris-setosa'],
       [5.0, 3.6, 1.4, 0.2, 'Iris-setosa'],
       [5.4, 3.9, 1.7, 0.4, 'Iris-setosa'],
       [4.6, 3.4, 1.4, 0.3, 'Iris-setosa'],
       [5.0, 3.4, 1.5, 0.2, 'Iris-setosa']], dtype=object)
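If you would rather not hard-code a string width at all, np.genfromtxt can infer the field types for you; a sketch (dtype=None turns on per-column type guessing, and passing encoding makes the text column a unicode field):

data = np.genfromtxt('data.txt', dtype=None, delimiter=',', encoding='utf-8')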
How to refine a mesh in python quickly
I have a numpy array([1.0, 2.0, 3.0]), which is actually a mesh in 1 dimension in my problem. What I want to do is to refine the mesh to get this: array([0.8, 0.9, 1, 1.1, 1.2, 1.8, 1.9, 2, 2.1, 2.2, 2.8, 2.9, 3, 3.1, 3.2]). The actual array is very large and this procedure costs a lot of time. How can I do this quickly (maybe vectorized) in python?
Here's a vectorized approach:

(a[:,None] + np.arange(-0.2, 0.3, 0.1)).ravel()  # a is the input array

Sample run:

In [15]: a = np.array([1.0, 2.0, 3.0])  # Input array

In [16]: (a[:,None] + np.arange(-0.2, 0.3, 0.1)).ravel()
Out[16]:
array([ 0.8,  0.9,  1. ,  1.1,  1.2,  1.8,  1.9,  2. ,  2.1,  2.2,  2.8,
        2.9,  3. ,  3.1,  3.2])
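One caveat worth noting (my aside, not part of the answer above): np.arange with a float step can gain or drop an endpoint to rounding, so a slightly more robust variant builds the offsets from integers; a sketch:

import numpy as np

a = np.array([1.0, 2.0, 3.0])
offsets = np.arange(-2, 3) / 10.0         # exactly five offsets: -0.2 ... 0.2
refined = (a[:, None] + offsets).ravel()  # same broadcasting trick as above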
Here are a few options (python 3):

Option 1:

np.array([j for i in arr for j in np.arange(i - 0.2, i + 0.25, 0.1)])
# array([ 0.8,  0.9,  1. ,  1.1,  1.2,  1.8,  1.9,  2. ,  2.1,  2.2,  2.8,
#         2.9,  3. ,  3.1,  3.2])

Option 2:

np.array([j for x, y in zip(arr - 0.2, arr + 0.25) for j in np.arange(x, y, 0.1)])
# array([ 0.8,  0.9,  1. ,  1.1,  1.2,  1.8,  1.9,  2. ,  2.1,  2.2,  2.8,
#         2.9,  3. ,  3.1,  3.2])

Option 3:

np.array([arr + i for i in np.arange(-0.2, 0.25, 0.1)]).T.ravel()
# array([ 0.8,  0.9,  1. ,  1.1,  1.2,  1.8,  1.9,  2. ,  2.1,  2.2,  2.8,
#         2.9,  3. ,  3.1,  3.2])

Timing on a larger array:

arr = np.arange(100000)
arr
# array([    0,     1,     2, ..., 99997, 99998, 99999])

%timeit np.array([j for i in arr for j in np.arange(i - 0.2, i + 0.25, 0.1)])
# 1 loop, best of 3: 615 ms per loop

%timeit np.array([j for x, y in zip(arr - 0.2, arr + 0.25) for j in np.arange(x, y, 0.1)])
# 1 loop, best of 3: 250 ms per loop

%timeit np.array([arr + i for i in np.arange(-0.2, 0.25, 0.1)]).T.ravel()
# 100 loops, best of 3: 1.93 ms per loop
Compare adjacent values in numpy array
I have a Numpy one-dimensional array of data, something like this:

a = [1.9, 2.3, 2.1, 2.5, 2.7, 3.0, 3.3, 3.2, 3.1]

I want to create a new array, where the values are composed of the greater of the adjacent values. For the above example, the output would be:

b = [2.3, 2.3, 2.5, 2.7, 3.0, 3.3, 3.3, 3.2]

I can do this by looping through the input array and comparing the neighbouring values, e.g.:

import numpy as np

a = np.array([1.9, 2.3, 2.1, 2.5, 2.7, 3.0, 3.3, 3.2, 3.1])
b = np.zeros(len(a) - 1)
for i in range(len(a) - 1):
    if a[i] > a[i+1]:
        b[i] = a[i]
    else:
        b[i] = a[i+1]

but I'd like to do this in a more elegant, "pythonic", vectorised fashion. I've searched and read about np.zip, np.where, np.diff etc. but haven't yet found a way to do this (or, more likely, I haven't understood what is possible). Any suggestions?
You want the element-wise maximum of a[1:] and a[:-1]:

>>> a
array([ 1.9,  2.3,  2.1,  2.5,  2.7,  3. ,  3.3,  3.2,  3.1])
>>> a[1:]
array([ 2.3,  2.1,  2.5,  2.7,  3. ,  3.3,  3.2,  3.1])
>>> a[:-1]
array([ 1.9,  2.3,  2.1,  2.5,  2.7,  3. ,  3.3,  3.2])
>>> np.maximum(a[1:], a[:-1])
array([ 2.3,  2.3,  2.5,  2.7,  3. ,  3.3,  3.3,  3.2])
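As a footnote (my addition; it assumes numpy >= 1.20): for windows wider than two elements, the same idea generalizes with sliding windows:

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

a = np.array([1.9, 2.3, 2.1, 2.5, 2.7, 3.0, 3.3, 3.2, 3.1])
b3 = sliding_window_view(a, 3).max(axis=1)  # maximum over every length-3 window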