Is there any efficient way to reshape a dataframe from:
(A1, A2, A3, B1, B2, B3, C1, C2, C3, TT, YY and ZZ are columns)
A1 A2 A3 B1 B2 B3 C1 C2 C3 TT YY ZZ
11 22 33 44 55 66 77 88 99 23 24 25
11 22 33 44 55 66 77 88 99 23 24 25
11 22 33 44 55 66 77 88 99 23 24 25
11 22 33 44 55 66 77 88 99 23 24 25
11 22 33 44 55 66 77 88 99 23 24 25
11 22 33 44 55 66 77 88 99 23 24 25
TO:
HH JJ KK TT YY ZZ
11 22 33 23 24 25
11 22 33 23 24 25
11 22 33 23 24 25
11 22 33 23 24 25
11 22 33 23 24 25
11 22 33 23 24 25
44 55 66 23 24 25
44 55 66 23 24 25
44 55 66 23 24 25
44 55 66 23 24 25
44 55 66 23 24 25
44 55 66 23 24 25
77 88 99 23 24 25
77 88 99 23 24 25
77 88 99 23 24 25
77 88 99 23 24 25
77 88 99 23 24 25
77 88 99 23 24 25
HH, JJ and KK are new columns formed by stacking the A, B and C column groups vertically, while TT, YY and ZZ stay alongside each block horizontally:
A1 A2 A3 TT YY ZZ
B1 B2 B3 TT YY ZZ
C1 C2 C3 TT YY ZZ
Thanks for your help
You can use column splitting and concatenation:
import pandas as pd

df = pd.read_clipboard()
# split the leading columns into groups of three; the last three stay aside
col_sets = [df.columns[i:i + 3] for i in range(0, len(df.columns) - 3, 3)]
last_cols = df.columns[-3:]
# the double transpose with reset_index(drop=True) discards the original
# column labels so the pieces align on position before concatenation
new_df = pd.concat([df[col_set].join(df[last_cols]).T.reset_index(drop=True).T
                    for col_set in col_sets])
new_df.columns = ['HH', 'JJ', 'KK', 'TT', 'YY', 'ZZ']
Out:
HH JJ KK TT YY ZZ
0 11 22 33 23 24 25
1 11 22 33 23 24 25
2 11 22 33 23 24 25
3 11 22 33 23 24 25
4 11 22 33 23 24 25
5 11 22 33 23 24 25
0 44 55 66 23 24 25
1 44 55 66 23 24 25
2 44 55 66 23 24 25
3 44 55 66 23 24 25
4 44 55 66 23 24 25
5 44 55 66 23 24 25
0 77 88 99 23 24 25
1 77 88 99 23 24 25
2 77 88 99 23 24 25
3 77 88 99 23 24 25
4 77 88 99 23 24 25
5 77 88 99 23 24 25
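The same reshape can also be sketched with plain NumPy stacking; the sample frame below is reconstructed from the data posted in the question:

```python
import numpy as np
import pandas as pd

# sample frame reconstructed from the question
df = pd.DataFrame([[11, 22, 33, 44, 55, 66, 77, 88, 99, 23, 24, 25]] * 6,
                  columns=['A1', 'A2', 'A3', 'B1', 'B2', 'B3',
                           'C1', 'C2', 'C3', 'TT', 'YY', 'ZZ'])

# stack the A/B/C trios vertically, repeat TT/YY/ZZ to match
v = df.iloc[:, :9].to_numpy()
stacked = np.vstack([v[:, i:i + 3] for i in range(0, 9, 3)])
right = pd.concat([df[['TT', 'YY', 'ZZ']]] * 3, ignore_index=True)
out = pd.DataFrame(stacked, columns=['HH', 'JJ', 'KK']).join(right)
```

This avoids the transpose trick entirely; the vstack works because each trio of columns has the same width.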
A bit longer than the previous solution:
# extract columns ending with digits
abc = df.filter(regex=r'\d$')
# group the columns by their first letter
from itertools import groupby
from operator import itemgetter
cols = sorted(abc.columns, key=itemgetter(0))
filtered_columns = [list(g) for k, g in groupby(cols, key=itemgetter(0))]
# iterate through the column groups and stack them vertically
abc_stack = pd.concat([abc.filter(col)
                          .set_axis(['HH', 'JJ', 'KK'], axis='columns')
                       for col in filtered_columns],
                      ignore_index=True)
# filter for columns ending with letters
tyz = df.filter(regex='[A-Z]$')
# repeat the dataframe so it is the same length as abc_stack
tyz_stack = pd.concat([tyz] * len(filtered_columns), ignore_index=True)
# combine both dataframes
res = pd.concat([abc_stack, tyz_stack], axis=1)
res
HH JJ KK TT YY ZZ
0 11 22 33 23 24 25
1 11 22 33 23 24 25
2 11 22 33 23 24 25
3 11 22 33 23 24 25
4 11 22 33 23 24 25
5 11 22 33 23 24 25
6 44 55 66 23 24 25
7 44 55 66 23 24 25
8 44 55 66 23 24 25
9 44 55 66 23 24 25
10 44 55 66 23 24 25
11 44 55 66 23 24 25
12 77 88 99 23 24 25
13 77 88 99 23 24 25
14 77 88 99 23 24 25
15 77 88 99 23 24 25
16 77 88 99 23 24 25
17 77 88 99 23 24 25
UPDATE: 2021-01-08
The reshaping process can be abstracted with the pivot_longer function from pyjanitor; at the moment you have to install the latest development version from GitHub.
The data you shared has patterns (some columns end with 1, others with 2, and the rest with 3); we can use these patterns to reshape the data:
# install latest dev version
# pip install git+https://github.com/ericmjl/pyjanitor.git
import janitor

(df.pivot_longer(names_to=("HH", "JJ", "KK"),
                 names_pattern=("1$", "2$", "3$"),
                 index=("TT", "YY", "ZZ"))
 .sort_index(axis="columns"))
Basically, what it does is look for columns that end with 1 and aggregate them into one column ("HH"), then do the same for 2 ("JJ") and 3 ("KK").
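For readers who would rather avoid the pyjanitor dependency, the same idea can be sketched with plain pandas; the sample frame is again reconstructed from the question:

```python
import pandas as pd

df = pd.DataFrame([[11, 22, 33, 44, 55, 66, 77, 88, 99, 23, 24, 25]] * 6,
                  columns=['A1', 'A2', 'A3', 'B1', 'B2', 'B3',
                           'C1', 'C2', 'C3', 'TT', 'YY', 'ZZ'])

# take each prefix's trio plus the shared columns, rename, and stack
pieces = []
for prefix in ['A', 'B', 'C']:
    part = df[[prefix + '1', prefix + '2', prefix + '3', 'TT', 'YY', 'ZZ']].copy()
    part.columns = ['HH', 'JJ', 'KK', 'TT', 'YY', 'ZZ']
    pieces.append(part)
res = pd.concat(pieces, ignore_index=True)
```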
I'm trying to find the % of total for each value within its respective index level; however, my current attempt produces NaN values.
df = pd.DataFrame(
    {"one": np.arange(0, 20), "two": np.arange(20, 40)},
    index=[np.concatenate([np.zeros(10), np.ones(10)]), np.arange(80, 100)],
)
DataFrame:
one two
0.0 80 0 20
81 1 21
82 2 22
83 3 23
84 4 24
85 5 25
86 6 26
87 7 27
88 8 28
89 9 29
1.0 90 10 30
91 11 31
92 12 32
93 13 33
94 14 34
95 15 35
96 16 36
97 17 37
98 18 38
99 19 39
Aim:
To see the % total of a column 'one' within its respective level.
Excel example:
Current attempted code:
for loc in df.index.get_level_values(0):
    df.loc[loc, 'total'] = df.loc[loc, :] / df.loc[loc, :].sum()
IIUC, use:
df['total'] = df['one'].div(df.groupby(level=0)['one'].transform('sum'))
output:
one two total
0 80 0 20 0.000000
81 1 21 0.022222
82 2 22 0.044444
83 3 23 0.066667
84 4 24 0.088889
85 5 25 0.111111
86 6 26 0.133333
87 7 27 0.155556
88 8 28 0.177778
89 9 29 0.200000
1 90 10 30 0.068966
91 11 31 0.075862
92 12 32 0.082759
93 13 33 0.089655
94 14 34 0.096552
95 15 35 0.103448
96 16 36 0.110345
97 17 37 0.117241
98 18 38 0.124138
99 19 39 0.131034
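The transform approach can be checked on a tiny made-up frame (illustrative numbers, not the question's data): transform('sum') broadcasts each level-0 group's sum back to every row in the group, so the division lines up by index.

```python
import pandas as pd

df = pd.DataFrame(
    {"one": [1, 3, 2, 2]},
    index=pd.MultiIndex.from_arrays([[0, 0, 1, 1], [80, 81, 90, 91]]),
)
# divide each value by the sum of its level-0 group
df["total"] = df["one"].div(df.groupby(level=0)["one"].transform("sum"))
```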
I have a pandas dataframe which looks like this -
Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec Marks
0 30 31 29 15 30 30 30 50 30 30 30 26 Student1
1 45 45 45 45 41 45 35 45 45 45 37 45 Student2
2 21 11 21 21 21 21 21 21 21 21 17 21 Student3
3 30 30 33 30 30 30 50 30 30 30 22 30 Student4
4 39 34 34 34 34 34 23 34 40 34 34 34 Student5
5 41 41 41 28 41 56 41 41 41 41 41 41 Student6
If I transpose the data like below, I am able to plot a line graph
Marks Student1 Student2 Student3 Student4 Student5 Student6
0 Jan 30 45 21 30 39 41
1 Feb 31 45 11 30 34 41
2 Mar 29 45 21 33 34 41
3 Apr 15 45 21 30 34 28
4 May 30 41 21 30 34 41
5 Jun 30 45 21 30 34 56
6 Jul 30 35 21 50 23 41
7 Aug 50 45 21 30 34 41
8 Sep 30 45 21 30 40 41
9 Oct 30 45 21 30 34 41
10 Nov 30 37 17 22 34 41
11 Dec 26 45 21 30 34 41
However, my original data is huge, and transposing it is taking too long. Is there some other way to achieve this?
Please note - this is just a dummy dataframe I created for the sake of simplicity, my original data is quite complex and huge.
If your data is huge, you're probably not going to be able to see anything on the line plot anyway...
import matplotlib.pyplot as plt
import pandas as pd
from io import StringIO
import numpy as np
df = pd.read_table(StringIO(""" Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec Marks
0 30 31 29 15 30 30 30 50 30 30 30 26 Student1
1 45 45 45 45 41 45 35 45 45 45 37 45 Student2
2 21 11 21 21 21 21 21 21 21 21 17 21 Student3
3 30 30 33 30 30 30 50 30 30 30 22 30 Student4
4 39 34 34 34 34 34 23 34 40 34 34 34 Student5
5 41 41 41 28 41 56 41 41 41 41 41 41 Student6"""), sep=r'\s+')
x = df.columns.tolist()[:-1]
y = df.iloc[:, :-1].values
for i, j in enumerate(y):
    plt.plot(x, j, label=df['Marks'].iloc[i])
plt.ylim(bottom=0)
plt.legend(loc='upper right')
plt.show()
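If the per-row loop gets slow with many students, matplotlib can also draw all the lines in a single call by passing a 2-D array: each column of y.T becomes one line. A small sketch with shortened, made-up data:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for the sketch
import matplotlib.pyplot as plt
import numpy as np

months = ["Jan", "Feb", "Mar", "Apr"]    # shortened for the sketch
y = np.array([[30, 31, 29, 15],          # one row per student
              [45, 45, 45, 45],
              [21, 11, 21, 21]])

lines = plt.plot(months, y.T)            # each column of y.T becomes a line
plt.legend(lines, ["Student1", "Student2", "Student3"], loc="upper right")
```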
I am comparing the contours of letters and have several cases of unexpected results. The most confusing to me is how X and N are being identified as best matches.
In the images below, yellow represents the unknown shape and blue represents candidate shapes. The white numbers are the result returned by cv.matchShapes using CONTOURS_MATCH_I3. (I've tried the other matching methods and just get similar odd results but with a different set of letters.)
Below, the unknown X matches candidate N better than candidate X:
Below, the unknown N matches candidate X better than candidate N:
At the end of the post are the raw data, and below is a chart of the data.
I can't come up with a rotation, scale, or skew to show that this is an optical illusion. I'm not suggesting there is an issue in matchShapes but rather an issue in my understanding of Hu moments.
I'd appreciate if someone would take a moment (pun intended) and explain how cv.matchShapes is producing these results.
--- edited ----
The images below are the result of using poly-filled shapes. I am still baffled how these letters match better than the correct ones.
target_letter
33 23
32 24
30 24
28 26
28 30
29 31
29 32
31 34
31 35
33 37
33 38
36 41
36 42
38 44
38 47
35 50
35 51
33 53
33 54
30 57
30 58
28 60
28 61
27 62
27 67
29 69
34 69
38 65
38 64
40 62
40 61
42 59
42 58
46 54
47 54
49 56
49 57
51 59
51 60
53 62
53 63
56 66
56 67
58 69
63 69
65 67
65 60
63 58
63 57
60 54
60 53
58 51
58 50
55 47
55 44
57 42
57 41
61 37
61 36
64 33
64 32
65 31
65 25
64 24
62 24
61 23
60 24
58 24
55 27
55 28
52 31
52 32
50 34
50 35
47 38
45 36
45 35
41 31
41 30
40 29
40 28
38 26
38 25
37 24
35 24
34 23
candidateLetter N
10 3
9 4
7 4
6 5
5 5
5 6
4 7
4 9
3 10
3 44
4 45
4 47
6 49
12 49
13 48
13 47
14 46
14 23
15 22
17 24
17 25
21 29
21 30
24 33
24 34
27 37
27 38
31 42
31 43
34 46
34 47
35 48
36 48
37 49
43 49
45 47
45 6
43 4
38 4
36 6
36 8
35 9
35 27
36 28
36 29
34 31
33 30
33 29
31 27
31 26
27 22
27 21
24 18
24 17
21 14
21 13
18 10
18 9
13 4
11 4
candidateLetter X
10 2
9 3
7 3
6 4
6 6
5 7
5 8
6 9
6 11
8 13
8 14
10 16
10 17
14 21
14 22
16 24
16 25
13 28
13 29
10 32
10 33
7 36
7 37
5 39
5 40
4 41
4 46
6 48
11 48
15 44
15 43
17 41
17 40
19 38
19 37
21 35
21 34
23 32
26 35
26 36
28 38
28 39
30 41
30 42
33 45
33 46
35 48
40 48
42 46
42 39
40 37
40 36
37 33
37 32
34 29
34 28
32 26
32 23
34 21
34 20
37 17
37 16
41 12
41 11
42 10
42 4
41 3
39 3
38 2
37 3
35 3
32 6
32 7
29 10
29 11
27 13
27 14
24 17
21 14
21 13
18 10
18 9
17 8
17 7
15 5
15 4
14 3
12 3
11 2
def main():
    l = []
    for i in range(1981, 2018):
        df = pd.read_csv("ftp://ftp.cpc.ncep.noaa.gov/htdocs/degree_days/weighted/daily_data/"
                         + str(i) + "/Population.Heating.txt")
        print(df[12:])
I am trying to download and read the "CONUS" row in Population.Heating.txt from 1981 to 2017.
My code seems to get the CONUS parts, but how can I actually read it as CSV data delimited by |?
Thank you!
Try this:
def main():
    l = []
    url = "ftp://ftp.cpc.ncep.noaa.gov/htdocs/degree_days/weighted/daily_data/{}/Population.Heating.txt"
    for i in range(1981, 2018):
        df = pd.read_csv(url.format(i), sep=r'\|', skiprows=3, engine='python')
        print(df[12:])
Demo:
In [14]: url = "ftp://ftp.cpc.ncep.noaa.gov/htdocs/degree_days/weighted/daily_data/{}/Population.Heating.txt"
In [15]: i = 2017
In [16]: df = pd.read_csv(url.format(i), sep=r'\|', skiprows=3, engine='python')
In [17]: df
Out[17]:
Region 20170101 20170102 20170103 20170104 20170105 20170106 20170107 20170108 20170109 ... 20171222 20171223 \
0 1 30 36 31 25 37 39 47 51 55 ... 40 32
1 2 28 32 28 23 39 41 46 49 51 ... 31 25
2 3 34 30 26 43 52 58 57 54 44 ... 29 32
3 4 37 34 37 57 60 62 59 54 43 ... 39 45
4 5 15 11 9 10 20 21 27 36 33 ... 12 7
5 6 16 9 7 22 31 38 45 44 35 ... 9 9
6 7 8 5 9 23 23 34 37 32 17 ... 9 19
7 8 30 32 34 33 36 42 42 31 23 ... 36 33
8 9 25 25 24 23 22 25 23 15 17 ... 23 20
9 CONUS 24 23 21 26 33 38 40 39 34 ... 23 22
20171224 20171225 20171226 20171227 20171228 20171229 20171230 20171231
0 32 34 43 53 59 59 57 59
1 30 33 43 49 54 53 50 55
2 41 47 58 62 60 54 54 60
3 47 55 61 64 57 54 62 68
4 12 20 21 22 27 26 24 29
5 22 33 31 35 37 33 32 39
6 19 24 23 28 28 23 19 27
7 34 30 32 29 26 24 27 30
8 18 17 17 15 13 11 12 15
9 26 30 34 37 38 35 34 40
[10 rows x 366 columns]
def main():
    l = []
    for i in range(1981, 2018):
        l.append(pd.read_csv("ftp://ftp.cpc.ncep.noaa.gov/htdocs/degree_days/weighted/daily_data/"
                             + str(i) + "/Population.Heating.txt",
                             sep='|', skiprows=3))
Files look like:
Product: Daily Heating Degree Days
Regions: Regions::CensusDivisions
Weights: Population
[... data ...]
so you need to skip the first 3 rows. Afterwards you have one DataFrame per year in your list l for further processing.
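The parsing step can be checked offline with a small stand-in for the file; the three header lines below mimic the real format, and the numbers are made up:

```python
import pandas as pd
from io import StringIO

# stand-in for Population.Heating.txt: 3 metadata lines, then |-delimited data
sample = """Product: Daily Heating Degree Days
Regions: Regions::CensusDivisions
Weights: Population
Region|20170101|20170102
1|30|36
CONUS|24|23
"""

df = pd.read_csv(StringIO(sample), sep='|', skiprows=3)
conus = df[df['Region'] == 'CONUS']
```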
I have a question using matplotlib and imshow. I want to plot in the same figure four "matrices", using imshow, and I need the gradient to be between [0, 1]. I also need to normalize the data with the following formula:
data_norm = data * 2/400
So far I have this:
from matplotlib import pyplot
import numpy as np

zvals = np.loadtxt("sharedGradient.txt")
img = pyplot.imshow(zvals, interpolation='nearest')
pyplot.colorbar(img)
pyplot.show()
The data is in .txt files, but this is a sample of data:
61 62 63 64 65 66 67 6 5 83 82 81 28 29 30 33 34 35 36 37
60 13 12 11 10 9 8 7 4 3 2 7 27 76 31 32 69 42 41 38
59 14 15 16 17 18 69 12 11 10 1 0 26 75 74 73 70 43 40 39
58 57 56 41 40 19 70 71 72 73 4 3 25 79 133 72 71 44 61 62
160 161 55 42 39 20 21 107 114 0 1 2 24 51 52 47 46 45 60 108
62 61 54 43 38 37 22 35 38 37 36 35 23 50 49 48 57 58 59 0
63 64 53 44 25 24 23 34 31 32 33 34 22 51 56 55 56 108 107 1
203 65 52 45 26 31 24 33 30 33 34 20 21 52 53 54 55 109 106 2
202 66 51 46 27 30 25 28 29 17 18 19 38 37 36 35 111 110 105 3
156 199 50 47 28 29 26 27 28 16 30 54 50 51 52 34 112 103 104 4
121 120 49 48 28 29 46 45 27 15 39 55 49 54 53 33 113 102 6 5
114 113 112 109 27 30 31 12 13 14 40 41 46 55 31 32 120 101 7 8
3 4 5 6 15 0 10 11 25 35 40 42 45 48 30 29 28 100 99 9
2 1 0 3 2 1 2 77 32 33 34 45 46 57 67 68 27 26 25 10
9 6 5 0 1 7 80 81 31 30 35 44 60 58 59 69 70 23 24 11
10 2 3 4 5 6 79 82 83 29 36 43 42 41 60 65 66 22 21 12
11 1 11 10 21 20 23 67 66 28 37 38 39 40 61 64 67 92 20 13
12 0 14 15 20 70 7 6 26 27 80 77 76 73 62 63 68 91 19 14
13 15 51 18 19 71 8 5 4 3 2 82 83 84 71 70 69 90 18 15
14 14 13 12 11 10 9 128 129 0 1 146 147 85 86 87 88 89 17 16
My issue is that I can't get the gradient to be between [0, 1] and I can't put different plots in the same figure. Hope somebody can help.
After you normalize the data, the color scale is already adjusted from 0 to 1.
To separate the imshow graphs, simply add subplots to the figure: plt.subplot(number_of_rows, number_of_columns, graph_number).
import matplotlib.pyplot as plt
import numpy as np
zvals = np.loadtxt("sharedGradient.txt")
zvals = zvals/200
plt.subplot(2,2,1)
img = plt.imshow(zvals,interpolation='nearest')
plt.colorbar(img)
plt.subplot(2,2,2)
img = plt.imshow(zvals)
plt.colorbar(img)
plt.subplot(2,2,3)
img = plt.imshow(zvals)
plt.colorbar(img)
plt.subplot(2,2,4)
img = plt.imshow(zvals)
plt.colorbar(img)
plt.show()
If you're also trying to make the axes range from 0 to 1, pass extent=(0, 1, 0, 1) to imshow().
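One caveat: dividing by 200 does not guarantee the data tops out at 1 (the sample above contains a 203). Pinning the color scale with vmin/vmax keeps all four colorbars identical regardless of the data; a sketch with stand-in values:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for the sketch
import matplotlib.pyplot as plt
import numpy as np

zvals = np.array([[0, 50], [100, 203]]) / 200.0   # stand-in for the file data

fig, axes = plt.subplots(2, 2)
for ax in axes.flat:
    # vmin/vmax fix the color scale to [0, 1]; extent makes the axes run 0..1
    im = ax.imshow(zvals, vmin=0, vmax=1, interpolation='nearest',
                   extent=(0, 1, 0, 1))
    fig.colorbar(im, ax=ax)
```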