I'm using matplotlib to draw a trend line for stock data.
import pandas as pd
import matplotlib.pyplot as plt

# load the daily price history for each ticker, indexed by date
A = pd.read_csv('daily/A.csv', index_col=[0])
print(A)
AAL = pd.read_csv('daily/AAL.csv', index_col=[0])
print(AAL)

# plot both closing-price series on the same axes
A['Close'].plot()
AAL['Close'].plot()
plt.show()
The result is:
High Low Open Close Volume Adj Close
Date
1999-11-18 35.77 28.61 32.55 31.47 62546300.0 27.01
1999-11-19 30.76 28.48 30.71 28.88 15234100.0 24.79
1999-11-22 31.47 28.66 29.55 31.47 6577800.0 27.01
1999-11-23 31.21 28.61 30.40 28.61 5975600.0 24.56
1999-11-24 30.00 28.61 28.70 29.37 4843200.0 25.21
... ... ... ... ... ... ...
2020-06-24 89.08 86.32 89.08 86.56 1806600.0 86.38
2020-06-25 87.35 84.80 86.43 87.26 1350100.0 87.08
2020-06-26 87.56 85.52 87.23 85.90 2225800.0 85.72
2020-06-29 87.36 86.11 86.56 87.29 1302500.0 87.29
2020-06-30 88.88 87.24 87.33 88.37 1428931.0 88.37
[5186 rows x 6 columns]
High Low Open Close Volume Adj Close
Date
2005-09-27 21.40 19.10 21.05 19.30 961200.0 18.19
2005-09-28 20.53 19.20 19.30 20.50 5747900.0 19.33
2005-09-29 20.58 20.10 20.40 20.21 1078200.0 19.05
2005-09-30 21.05 20.18 20.26 21.01 3123300.0 19.81
2005-10-03 21.75 20.90 20.90 21.50 1057900.0 20.27
... ... ... ... ... ... ...
2020-06-24 13.90 12.83 13.59 13.04 140975500.0 13.04
2020-06-25 13.24 12.18 12.53 13.17 117383400.0 13.17
2020-06-26 13.29 12.13 13.20 12.38 108813000.0 12.38
2020-06-29 13.51 12.02 12.57 13.32 114650300.0 13.32
2020-06-30 13.48 12.88 13.10 13.07 68669742.0 13.07
[3715 rows x 6 columns]
Yes, the start dates of the two stocks are different, but the end date is the same.
The plot I get looks like this:
stockplot
This does not look normal compared with other charts.
Could anyone advise me on how to draw a normal trend line for the two stocks?
You can try making two separate plots with the same limits and then overlaying one on the other for comparison.
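The overlay idea can be sketched with a secondary y-axis via twinx. This is only an illustration: the two Series below are hypothetical stand-ins for A['Close'] and AAL['Close'] from the question.

```python
import pandas as pd
import matplotlib.pyplot as plt

# hypothetical stand-ins for A['Close'] and AAL['Close'] in the question
a = pd.Series([31.47, 28.88, 31.47, 87.29, 88.37])
aal = pd.Series([19.30, 20.50, 20.21, 13.32, 13.07])

fig, ax1 = plt.subplots()
ax2 = ax1.twinx()  # second y-axis sharing the same x-axis

a.plot(ax=ax1, color='tab:blue', label='A')
aal.plot(ax=ax2, color='tab:orange', label='AAL')
ax1.set_ylabel('A Close')
ax2.set_ylabel('AAL Close')
plt.show()
```

An alternative is to normalize each series to its first value (s / s.iloc[0]) and plot both on one axis, so the trends start at the same point and are directly comparable.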
I am a bit new to Python. I have been trying to graph this quarterly data like the chart produced in "How do you create a line chart with quarter and year labels with monthly ticks?". Sorry for a naïve question, but any help would be appreciated. I am not actually able to build the string series of quarters. Thanks.
Date        A        B       C
3/31/2013   7.16333  0.6     0.6982
6/30/2013   7.87967  0.6     0.6726
9/30/2013   7.26133  0.6     0.7771
12/31/2013  6.66667  0.6     0.9108
3/31/2014   7.06267  0.6     0.8292
6/30/2014   7.41867  0.6     0.7069
9/30/2014   7.85617  0.6     0.6246
12/31/2014  6.93353  0.8305  0.5752
3/31/2015   6.40496  0.987   0.586
6/30/2015   6.93939  0.8629  0.575
9/30/2015   6.12374  1.0991  0.5794
12/31/2015  6.12928  1.0922  0.5806
3/31/2016   5.37414  1.2744  0.6523
6/30/2016   5.8968   1.1046  0.5851
9/30/2016   5.84815  1.0991  0.5884
12/31/2016  5.59963  1.1397  0.5901
3/31/2017   5.68668  1.0695  0.5815
6/30/2017   5.0588   1.2126  0.5957
9/30/2017   5.07095  1.2178  0.5888
12/31/2017  5.07308  1.1371  0.593
3/31/2018   5.06668  1.1697  0.6059
6/30/2018   5.30797  0.9936  0.6167
9/30/2018   5.47215  0.8294  0.6733
12/31/2018  4.30104  1.0148  0.903
3/31/2019   4.77011  0.8565  0.924
6/30/2019   6.23133  0.6     1.0592
9/30/2019   6.1635   0.6     1.0556
12/31/2019  6.02583  0.6     1.0303
3/31/2020   7.53533  0.6     1.085
6/30/2020   6.42743  1.0238  0.5975
9/30/2020   7.33954  0.7784  0.6923
12/31/2020  8.62803  0.6514  0.6792
3/31/2021   8.28196  0.6359  0.7152
6/30/2021   7.63684  0.6347  0.7456
9/30/2021   6.92014  0.7851  0.6163
12/31/2021  7.04538  0.8175  0.5068
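One way to build the quarter-label strings is with pandas' to_period. A minimal sketch, using only the first few rows of the data above:

```python
import pandas as pd
import matplotlib.pyplot as plt

# first few rows of the quarterly data above
df = pd.DataFrame({
    "Date": ["3/31/2013", "6/30/2013", "9/30/2013", "12/31/2013"],
    "A": [7.16333, 7.87967, 7.26133, 6.66667],
})
df["Date"] = pd.to_datetime(df["Date"])
# convert each date to a quarterly period, rendered as "2013Q1", "2013Q2", ...
df["Quarter"] = df["Date"].dt.to_period("Q").astype(str)

ax = df.plot(x="Quarter", y="A", marker="o")
plt.show()
```

Plotting against the string Quarter column makes matplotlib use the quarter labels directly as x-tick labels.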
I have an issue calculating the rolling mean for a column I added in the code. For some reason it doesn't work on the column I added, but it works on a column from the original CSV.
Original dataframe from the CSV is as follows:
Open High Low Last Change Volume Open Int
Time
09/20/19 98.50 99.00 98.35 98.95 0.60 3305.0 0.0
09/19/19 100.35 100.75 98.10 98.35 -2.00 17599.0 0.0
09/18/19 100.65 101.90 100.10 100.35 0.00 18258.0 121267.0
09/17/19 103.75 104.00 100.00 100.35 -3.95 34025.0 122453.0
09/16/19 102.30 104.95 101.60 104.30 1.55 21403.0 127447.0
import numpy as np
import pandas as pd

Ticker = pd.read_csv('\\......\Historical data\kcz19 daily.csv',
                     index_col=0, parse_dates=True)
Ticker['Return'] = np.log(Ticker['Last'] / Ticker['Last'].shift(1)).fillna('')
Ticker['ret20'] = Ticker['Return'].rolling(window=20, win_type='triang').mean()
print(Ticker.head())
Open High Low ... Open Int Return ret20
Time ...
09/20/19 98.50 99.00 98.35 ... 0.0
09/19/19 100.35 100.75 98.10 ... 0.0 -0.00608213 -0.00608213
09/18/19 100.65 101.90 100.10 ... 121267.0 0.0201315 0.0201315
09/17/19 103.75 104.00 100.00 ... 122453.0 0 0
09/16/19 102.30 104.95 101.60 ... 127447.0 0.0386073 0.0386073
The ret20 column should contain the rolling mean of the Return column, so it should show data starting from row 21, whereas here it is only a copy of the Return column.
If I replace Return with the Last column, it works.
Below is the result using the Last column:
Open High Low ... Open Int Return ret20
Time ...
09/20/19 98.50 99.00 98.35 ... 0.0 NaN
09/19/19 100.35 100.75 98.10 ... 0.0 -0.00608213 NaN
09/18/19 100.65 101.90 100.10 ... 121267.0 0.0201315 NaN
09/17/19 103.75 104.00 100.00 ... 122453.0 0 NaN
09/16/19 102.30 104.95 101.60 ... 127447.0 0.0386073 NaN
09/13/19 103.25 103.60 102.05 ... 128707.0 -0.0149725 NaN
09/12/19 102.80 103.85 101.15 ... 128904.0 0.00823848 NaN
09/11/19 102.00 104.70 101.40 ... 132067.0 -0.00193237 NaN
09/10/19 98.50 102.25 98.00 ... 135349.0 -0.0175614 NaN
09/09/19 97.00 99.25 95.30 ... 137347.0 -0.0335283 NaN
09/06/19 95.35 97.30 95.00 ... 135399.0 -0.0122889 NaN
09/05/19 96.80 97.45 95.05 ... 136142.0 -0.0171477 NaN
09/04/19 95.65 96.95 95.50 ... 134864.0 0.0125002 NaN
09/03/19 96.00 96.60 94.20 ... 134685.0 -0.0109291 NaN
08/30/19 95.40 97.20 95.10 ... 134061.0 0.0135137 NaN
08/29/19 97.05 97.50 94.75 ... 132639.0 -0.0166584 NaN
08/28/19 97.40 98.15 95.95 ... 130573.0 0.0238601 NaN
08/27/19 97.35 98.00 96.40 ... 129921.0 -0.00410889 NaN
08/26/19 95.55 98.50 95.25 ... 129003.0 0.0035962 NaN
08/23/19 96.90 97.40 95.05 ... 130268.0 -0.0149835 98.97775
Appreciate any help.
The .fillna('') is creating a string in the first row, which then causes errors in the rolling calculation for Ticker['ret20'].
Delete it and the code will run fine:
Ticker['Return'] = np.log(Ticker['Last'] / Ticker['Last'].shift(1))
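To see why, here is a minimal sketch on toy prices (not the question's CSV): fillna('') turns the float Series into an object Series, which rolling cannot aggregate, whereas leaving the leading NaN in place keeps the dtype numeric.

```python
import numpy as np
import pandas as pd

# toy price series standing in for Ticker['Last']
last = pd.Series([100.0, 101.0, 99.0, 102.0, 103.0])

ret_bad = np.log(last / last.shift(1)).fillna('')  # '' makes the dtype object
ret_ok = np.log(last / last.shift(1))              # leading NaN keeps dtype float64

print(ret_bad.dtype)                    # object -> rolling mean cannot aggregate
print(ret_ok.rolling(window=3).mean())  # NaN until the window fills, then means
```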
I have been trying to collect current data for several stocks from the WSE (Warsaw Stock Exchange). I am focused on Stooq, as Yahoo doesn't cover the Polish stock market.
When I try the code from pandas-datareader for Stooq, I see the error below:
"StooqDailyReader request returned no data;
check URL for invalid inputs: https://stooq.com/q/d/l/"
Under that link there is a CSV file telling me the ticker provided is wrong, but the same ticker works well on the Stooq website directly.
Do you know what may be wrong here?
import pandas_datareader.data as web
prices = web.DataReader('KGH', 'stooq')
print(prices)
Do you want something like this?
import quandl
mydata = quandl.get("WSE/WIG20TR")
print(mydata)
That produces the following:
Open High Low Close % Change Turnover (1000s)
Date
2016-10-03 2911.76 2919.84 2911.76 2919.84 0.93 401910.61
2016-10-04 2940.43 2968.24 2940.43 2968.24 1.66 707134.92
2016-10-05 2970.79 2982.08 2970.79 2982.08 0.47 713899.62
2016-10-06 2972.15 2981.52 2972.15 2981.52 -0.02 522210.82
2016-10-07 2972.68 2974.87 2964.30 2964.30 -0.58 456252.80
2016-10-10 2977.32 2988.68 2977.32 2988.39 0.81 472514.88
2016-10-11 2987.31 2987.31 2973.30 2973.30 -0.50 457603.07
2016-10-12 2969.48 2979.04 2969.48 2979.04 0.19 447665.23
2016-10-13 2954.50 2954.50 2923.40 2923.40 -1.87 720287.00
2016-10-14 2937.36 2937.36 2910.02 2910.02 -0.46 498163.77
2016-10-17 2919.28 2919.28 2900.82 2900.82 -0.32 621362.01
2016-10-18 2922.67 2922.67 2901.12 2912.68 0.41 548288.70
2016-10-19 2919.90 2956.99 2919.90 2952.13 1.35 676115.64
2016-10-20 2963.88 2965.74 2959.68 2959.68 0.26 440663.31
2016-10-21 2975.40 2975.40 2965.20 2965.20 0.19 580519.75
2016-10-24 2991.65 3017.78 2991.65 3016.87 1.74 470453.44
2016-10-25 3042.78 3042.78 3018.19 3021.62 0.16 549692.97
2016-10-26 3016.35 3016.35 3004.60 3011.71 -0.33 503003.10
2016-10-27 3035.19 3038.10 3034.83 3038.10 0.88 699662.42
2016-10-28 3031.84 3076.10 3031.84 3076.10 1.25 532073.92
2016-10-31 3085.46 3085.46 3070.81 3070.81 -0.17 681466.77
2016-11-02 3026.11 3026.11 2987.83 2987.83 -2.70 928538.72
2016-11-03 2998.86 2998.86 2990.79 2990.79 0.10 656489.77
2016-11-04 2975.22 2980.33 2975.22 2975.30 -0.52 385598.43
2016-11-07 3005.59 3005.59 2981.35 2981.35 0.20 484758.97
2016-11-08 3013.01 3017.61 3008.44 3017.61 1.22 445944.20
2016-11-09 2988.07 3030.20 2988.07 3030.20 0.42 759370.28
2016-11-10 3084.03 3084.03 3040.50 3040.50 0.34 1199406.31
2016-11-14 3031.86 3031.86 2967.93 2967.93 -2.39 972806.47
2016-11-15 2989.57 2989.57 2964.69 2968.20 0.01 604846.49
... ... ... ... ... ...
2019-05-31 3908.15 3969.56 3893.10 3969.56 0.80 816765.70
2019-06-03 3960.51 3984.63 3949.97 3966.72 -0.07 532820.53
2019-06-04 3965.50 3974.06 3946.34 3959.69 -0.18 675309.03
2019-06-05 3967.09 3970.77 3935.23 3941.11 -0.47 655383.13
2019-06-06 3943.94 4027.33 3941.65 4007.89 1.69 995862.14
2019-06-07 4011.61 4050.60 4011.61 4042.75 0.87 616987.47
2019-06-10 4062.75 4068.64 4029.45 4046.13 0.08 651731.30
2019-06-11 4049.73 4078.42 4039.98 4069.34 0.57 1015106.95
2019-06-12 4052.77 4062.90 4013.92 4046.65 -0.56 769784.26
2019-06-13 4040.89 4091.55 4040.53 4077.50 0.76 811131.81
2019-06-14 4075.39 4075.39 4049.28 4053.14 -0.60 620163.19
2019-06-17 4058.67 4062.29 4026.71 4037.13 -0.40 426200.13
2019-06-18 4036.36 4123.23 4030.44 4123.23 2.13 949164.98
2019-06-19 4124.82 4124.82 4108.94 4113.58 -0.23 593252.42
2019-06-21 4124.86 4155.76 4085.95 4093.37 -0.49 1697240.27
2019-06-24 4113.01 4136.59 4099.92 4133.72 0.99 521246.30
2019-06-25 4123.47 4128.56 4073.93 4084.48 -1.19 692376.39
2019-06-26 4095.28 4109.06 4081.32 4109.06 0.60 635235.61
2019-06-27 4119.07 4160.37 4119.07 4140.91 0.78 682904.34
2019-06-28 4141.09 4144.04 4125.12 4132.41 -0.21 592757.96
2019-07-01 4195.52 4195.52 4130.92 4136.08 0.09 527728.12
2019-07-02 4152.31 4156.24 4101.91 4156.24 0.49 719797.10
2019-07-03 4145.92 4170.18 4141.94 4164.17 0.19 670327.79
2019-07-04 4165.05 4184.44 4152.69 4183.57 0.47 490173.23
2019-07-05 4186.73 4186.73 4145.94 4157.86 -0.61 516459.10
2019-07-08 4140.77 4167.74 4131.49 4152.64 -0.13 598552.93
2019-07-09 4148.48 4148.48 4109.20 4125.50 -0.65 679278.48
2019-07-10 4124.36 4174.02 4113.97 4125.75 0.01 764583.50
2019-07-11 4146.63 4167.49 4121.76 4133.21 0.18 598836.12
2019-07-12 4144.03 4145.02 4129.71 4130.38 -0.07 535157.11
[691 rows x 6 columns]
https://www.quandl.com/data/WSE-Warsaw-Stock-Exchange-GPW?keyword=KGH
I know this is how you normally get daily stock price quotes using pandas. But I'm wondering if it's possible to get monthly or weekly quotes; is there maybe a parameter I can pass to get monthly quotes?
from pandas.io.data import DataReader  # note: pandas.io.data has since moved to the pandas_datareader package
from datetime import datetime
ibm = DataReader('IBM', 'yahoo', datetime(2000,1,1), datetime(2012,1,1))
print(ibm['Adj Close'])
Monthly closing prices from Yahoo! Finance...
import pandas_datareader.data as web
data = web.get_data_yahoo('IBM','01/01/2015',interval='m')
where you can replace the interval input as required ('d', 'w', 'm', etc).
Using the yfinance package, it is possible to get stock prices via the interval option, using '1mo' instead of 'm', as shown:
# Library
import yfinance as yf
from datetime import datetime
# Load stock prices
df = yf.download("IBM", start=datetime(2000,1,1), end=datetime(2012,1,1), interval='1mo')
df
The other possible interval options are: 1m, 2m, 5m, 15m, 30m, 60m, 90m, 1h, 1d, 5d, 1wk, 1mo, 3mo.
try this:
In [175]: from pandas_datareader.data import DataReader
In [176]: ibm = DataReader('IBM', 'yahoo', '2001-01-01', '2012-01-01')
UPDATE: show average for Adj Close only (month start)
In [12]: ibm.groupby(pd.Grouper(freq='MS'))['Adj Close'].mean()  # pd.TimeGrouper was removed in pandas 1.0; use pd.Grouper
Out[12]:
Date
2001-01-01 79.430605
2001-02-01 86.625519
2001-03-01 75.938913
2001-04-01 81.134375
2001-05-01 90.460754
2001-06-01 89.705042
2001-07-01 83.350254
2001-08-01 82.100543
2001-09-01 74.335789
2001-10-01 79.937451
...
2011-03-01 141.628553
2011-04-01 146.530774
2011-05-01 150.298053
2011-06-01 146.844772
2011-07-01 158.716834
2011-08-01 150.690990
2011-09-01 151.627555
2011-10-01 162.365699
2011-11-01 164.596963
2011-12-01 167.924676
Freq: MS, Name: Adj Close, dtype: float64
show average for Adj Close only (month end)
In [13]: ibm.groupby(pd.Grouper(freq='M'))['Adj Close'].mean()
Out[13]:
Date
2001-01-31 79.430605
2001-02-28 86.625519
2001-03-31 75.938913
2001-04-30 81.134375
2001-05-31 90.460754
2001-06-30 89.705042
2001-07-31 83.350254
2001-08-31 82.100543
2001-09-30 74.335789
2001-10-31 79.937451
...
2011-03-31 141.628553
2011-04-30 146.530774
2011-05-31 150.298053
2011-06-30 146.844772
2011-07-31 158.716834
2011-08-31 150.690990
2011-09-30 151.627555
2011-10-31 162.365699
2011-11-30 164.596963
2011-12-31 167.924676
Freq: M, Name: Adj Close, dtype: float64
monthly averages (all columns):
In [179]: ibm.groupby(pd.Grouper(freq='M')).mean()
Out[179]:
Open High Low Close Volume Adj Close
Date
2001-01-31 100.767857 103.553571 99.428333 101.870357 9474409 79.430605
2001-02-28 111.193160 113.304210 108.967368 110.998422 8233626 86.625519
2001-03-31 97.366364 99.423637 95.252272 97.281364 11570454 75.938913
2001-04-30 103.990500 106.112500 102.229501 103.936999 11310545 81.134375
2001-05-31 115.781363 117.104091 114.349091 115.776364 7243463 90.460754
2001-06-30 114.689524 116.199048 113.739523 114.777618 6806176 89.705042
2001-07-31 106.717143 108.028095 105.332857 106.646666 7667447 83.350254
2001-08-31 105.093912 106.196521 103.856522 104.939999 6234847 82.100543
2001-09-30 95.138667 96.740000 93.471334 94.987333 12620833 74.335789
2001-10-31 101.400870 103.140000 100.327827 102.145217 9754413 79.937451
2001-11-30 113.449047 114.875715 112.510952 113.938095 6435061 89.256046
2001-12-31 120.651001 122.076000 119.790500 121.087999 6669690 94.878736
2002-01-31 116.483334 117.509524 114.613334 115.994762 9217280 90.887920
2002-02-28 103.194210 104.389474 101.646316 102.961579 9069526 80.764672
2002-03-31 105.246500 106.764499 104.312999 105.478499 7563425 82.756873
... ... ... ... ... ... ...
2010-10-31 138.956188 140.259048 138.427142 139.631905 6537366 122.241844
2010-11-30 144.281429 145.164762 143.385241 144.439524 4956985 126.878319
2010-12-31 145.155909 145.959545 144.567273 145.251819 4245127 127.726929
2011-01-31 152.595000 153.950499 151.861000 153.181501 5941580 134.699880
2011-02-28 163.217895 164.089474 162.510002 163.339473 4687763 144.050847
2011-03-31 160.433912 161.745652 159.154349 160.425651 5639752 141.628553
2011-04-30 165.437501 166.587500 164.760500 165.978500 5038475 146.530774
2011-05-31 169.657144 170.679046 168.442858 169.632857 5276390 150.298053
2011-06-30 165.450455 166.559093 164.691819 165.593635 4792836 146.844772
2011-07-31 178.124998 179.866502 177.574998 178.981500 5679660 158.716834
2011-08-31 169.734350 171.690435 166.749567 169.360434 8480613 150.690990
2011-09-30 169.752858 172.034761 168.109999 170.245714 6566428 151.627555
2011-10-31 181.529525 183.597145 180.172379 182.302381 6883985 162.365699
2011-11-30 184.536668 185.950952 182.780477 184.244287 4619719 164.596963
2011-12-31 188.151428 189.373809 186.421905 187.789047 4925547 167.924676
[132 rows x 6 columns]
weekly averages (all columns):
In [180]: ibm.groupby(pd.Grouper(freq='W')).mean()
Out[180]:
Open High Low Close Volume Adj Close
Date
2001-01-07 89.234375 94.234375 87.890625 91.656250 11060200 71.466436
2001-01-14 93.412500 95.062500 91.662500 93.412500 7470200 72.835824
2001-01-21 100.250000 103.921875 99.218750 102.250000 13851500 79.726621
2001-01-28 109.575000 111.537500 108.675000 110.600000 8056720 86.237303
2001-02-04 113.680000 115.465999 111.734000 113.582001 6538080 88.562436
2001-02-11 113.194002 115.815999 111.639999 113.884001 7269320 88.858876
2001-02-18 113.960002 116.731999 113.238000 115.106000 7225420 89.853021
2001-02-25 109.525002 111.375000 105.424999 107.977501 10722700 84.288436
2001-03-04 103.390001 106.052002 100.386000 103.228001 11982540 80.580924
2001-03-11 105.735999 106.920000 103.364002 104.844002 9226900 81.842391
2001-03-18 95.660001 97.502002 93.185997 94.899998 13863740 74.079992
2001-03-25 90.734000 92.484000 88.598000 90.518001 11382280 70.659356
2001-04-01 95.622000 97.748000 94.274000 96.106001 10467580 75.021411
2001-04-08 95.259999 97.360001 93.132001 94.642000 12312580 73.878595
2001-04-15 98.350000 99.520000 95.327502 97.170000 10218625 75.851980
... ... ... ... ... ... ...
2011-09-25 170.678003 173.695996 169.401996 171.766000 6358100 152.981582
2011-10-02 176.290002 178.850000 174.729999 176.762000 7373680 157.431216
2011-10-09 175.920001 179.200003 174.379999 177.792001 7623560 158.348576
2011-10-16 185.366000 187.732001 184.977997 187.017999 5244180 166.565614
2011-10-23 180.926001 182.052002 178.815997 180.351999 9359200 160.628611
2011-10-30 183.094003 184.742001 181.623996 183.582001 5743800 163.505379
2011-11-06 184.508002 186.067999 183.432004 184.716003 4583780 164.515366
2011-11-13 185.350000 186.690002 183.685999 185.508005 4180620 165.750791
2011-11-20 187.600003 189.101999 185.368002 186.738000 5104420 166.984809
2011-11-27 181.067497 181.997501 178.717499 179.449997 4089350 160.467733
2011-12-04 185.246002 187.182001 184.388000 186.052002 5168720 166.371376
2011-12-11 191.841998 194.141998 191.090002 192.794000 4828580 172.400204
2011-12-18 191.085999 191.537998 187.732001 188.619998 6037220 168.667729
2011-12-25 183.810001 184.634003 181.787997 183.678000 5433360 164.248496
2012-01-01 185.140003 185.989998 183.897499 184.750000 3029925 165.207100
[574 rows x 6 columns]
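In modern pandas, resample is the usual way to express this kind of frequency-based grouping. A sketch on dummy daily prices (a random walk standing in for the Yahoo download above):

```python
import numpy as np
import pandas as pd

# dummy daily closing prices standing in for the downloaded data
idx = pd.date_range("2011-01-01", "2011-12-31", freq="D")
rng = np.random.default_rng(0)
prices = pd.Series(100 + rng.standard_normal(len(idx)).cumsum(), index=idx)

monthly_mean = prices.resample("MS").mean()  # month-start labeled averages
weekly_mean = prices.resample("W").mean()    # week-end (Sunday) labeled averages
print(monthly_mean)
```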
Get it from Quandl:
import pandas as pd
import quandl
quandl.ApiConfig.api_key = 'xxxxxxxxxxxx' # Optional
quandl.ApiConfig.api_version = '2015-04-09' # Optional
ibm = quandl.get("WIKI/IBM", start_date="2000-01-01", end_date="2012-01-01", collapse="monthly", returns="pandas")
I am trying to produce a map in Basemap using values extracted from meteorological data. Sample code:
import matplotlib.pyplot as plt

y = [2.56422, 3.77284, 3.52623, 3.51468, 3.02199]
z = [0.15, 0.3, 0.45, 0.6, 0.75]
n = [58, 651, 393, 203, 123]

fig, ax = plt.subplots()
ax.scatter(z, y)
for i, txt in enumerate(n):
    ax.annotate(txt, (z[i], y[i]))
The data I am using is a NumPy array. I don't know how to loop through each array to plot the kind of map similar to the above. I would like to plot only the values (i.e., no contour or contourf).
Initially I was trying to plot the float values using the pylab.plot function. However, it returned this error:
ValueError: third arg must be a format string
Then I tried to convert the NumPy array to a string and plot it with this command:
temperature = np.array2string(data, precision=2)
and the print statement shows a modified string:
print temperature
[[ 19.69 21.09 21.57 21.45 20.59 20.53 20.93 20.63 20.64 21.26
21.29 20.63 20.98 21.01 20.84 20.81 20.55 20.33 20.52 20.23
19.84]
[ 20.77 21.35 20.81 20.64 20.9 20.78 20.79 23.57 20.11 21.07
21.06 21.33 21.48 21.18 21.4 21.09 20.5 20.31 20.12 19.8
19.97]
[ 21.51 21.23 20.55 20.08 20.05 20.78 21.17 24.77 21.17 20.95
21.43 21.47 21.46 21.77 21.69 21.13 20.47 20.04 20.08 20.37
20.14]
[ 21.29 21.1 20.63 20.32 20.22 20.37 24.4 23.82 22.23 21.03
22.11 22.62 22.71 22.37 21.73 21.35 21.03 20.67 20.58 20.89
20.93]
[ 21.24 21.04 20.68 20.56 20.76 20.91 24.26 23.75 23.28 21.26
21.48 22. 21.94 21.78 21.36 21.14 20.96 20.92 21.1 21.19
21.31]
[ 20.83 20.88 20.6 20.87 21.01 21.91 22.33 22.21 21.74 20.66
20.76 20.73 21.04 21.09 20.83 20.7 20.72 20.71 21.23 21.04
20.73]
[ 20.32 20.41 20.19 20.05 20.68 22.17 21.82 20.67 19.85 19.02
18.91 19.6 20.15 20.64 20.64 20.09 19.81 19.76 19.9 19.94
19.46]
[ 19.68 20.37 20.56 20.68 20.93 21.28 21.24 20.33 20.7 20.
18.72 18.94 19.56 19.57 19.83 19.74 19.17 18.53 18.1 18.72
19.12]
[ 18.88 19.71 20.77 20.81 20.32 21.58 20.96 21.33 21.2 20.17
19.95 22.05 19.72 19.85 19.3 18.75 18.69 18.44 17.57 17.2
18.22]
[ 19.11 19.19 20.13 20.78 21.25 21.98 21.15 20.96 20.66 20.14
20.51 21.92 20.36 20.27 19. 18.22 17.81 17.58 17.16 16.67
17.46]
[ 18.5 19.28 19.57 20.01 21.16 21.01 21.06 20.93 20.62 19.89
20.3 20.7 19.7 19.76 18.24 17. 16.36 16.63 17.62 17.32
17.38]
[ 17.6 18.33 20.27 19.97 20.63 20.51 21.09 21.39 20.81 19.55
20. 18.3 17.32 18.24 17.57 17.15 16.42 15.76 16.14 16.45
21.95]
[ 17.04 17.55 18.16 18.32 21.23 20.5 20.41 19.82 20.7 20.55
20.41 18.47 18.05 17.63 17.11 15.6 16.02 15.46 14.29 13.88
23.04]]
Finally, I get this error when I try to plot the above value on a map with this line:
pylab.plot(x, y, temperature)
'Unrecognized character %c in format string' % c)
ValueError: Unrecognized character [ in format string
The problem seems to be with the ndarray-to-string conversion.
Any help with this issue is appreciated.
Your original solution with ax.annotate is perfectly fine for the more general case. The only thing to change is that with 2-D arrays you need to flatten them before looping, using np.ravel() (which is also a method of the ndarray class).
However, in your specific case you can avoid explicit indexing and ravel() by broadcasting the three arrays you need to plot:
import numpy as np
import matplotlib.pyplot as plt
# generate some dummy data
rng = np.random.default_rng()
z, y = np.mgrid[:3, :3]
n = rng.integers(low=50, high=500, size=z.shape)
fig, ax = plt.subplots()
ax.scatter(z, y)
for zz, yy, txt in np.broadcast(z, y, n):
    ax.annotate(txt, (zz, yy))
Note that the result of np.broadcast is the same as if we'd used zip(z.ravel(), y.ravel(), n.ravel()).
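That equivalence can be checked directly on small dummy arrays:

```python
import numpy as np

# small dummy grid and annotation values
z, y = np.mgrid[:2, :2]
n = np.array([[10, 20], [30, 40]])

pairs_broadcast = list(np.broadcast(z, y, n))
pairs_zip = list(zip(z.ravel(), y.ravel(), n.ravel()))
print(pairs_broadcast == pairs_zip)  # True
```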