I have a DataFrame:
import numpy as np
import pandas as pd

df = pd.DataFrame({"sym": ["a", "a", "aa", "aa", "aa", "a", "ab", "ab", "ab"],
                   "id_h": [2.1, 2.2, 2.5, 3.1, 2.5, 3.8, 2.5, 5, 6],
                   "pm_h": [np.nan, 2.3, np.nan, 2.8, 2.7, 3.7, 2.4, 4.9, np.nan]})
I want to fill the NaN values in pm_h with the max id_h value of each "sym" group, i.e. (a, aa, ab).
Required output:
df1 = pd.DataFrame({"sym": ["a", "a", "aa", "aa", "aa", "a", "ab", "ab", "ab"],
                    "id_h": [2.1, 2.2, 2.5, 3.1, 2.5, 3.8, 2.5, 5, 6],
                    "pm_h": [3.8, 2.3, 3.1, 2.8, 2.7, 3.7, 2.4, 4.9, 6]})
Use Series.fillna with GroupBy.transform, which returns the per-group maxima as a new Series aligned to the original index:
df['pm_h'] = df['pm_h'].fillna(df.groupby('sym')['id_h'].transform('max'))
print (df)
sym id_h pm_h
0 a 2.1 3.8
1 a 2.2 2.3
2 aa 2.5 3.1
3 aa 3.1 2.8
4 aa 2.5 2.7
5 a 3.8 3.7
6 ab 2.5 2.4
7 ab 5.0 4.9
8 ab 6.0 6.0
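An equivalent alternative, as a small sketch (not from the original answer): mask the missing values explicitly with the per-group maxima.
group_max = df.groupby('sym')['id_h'].transform('max')
df['pm_h'] = df['pm_h'].mask(df['pm_h'].isna(), group_max)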
In Python (pandas), there is a method DataFrame.info(). It gives you information about a dataset such as data types, memory usage, number of entries, etc.
You can look up more information about the .info() method in the pandas documentation.
Is there also a function in R that gives me this kind of information?
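For reference, this is the kind of call I mean (a minimal, made-up example):
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3], "y": ["a", "b", "c"]})
df.info()  # prints dtypes, non-null counts and memory usage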
So here we have a few options.
Base R
Within base R there are a few options for getting this kind of information about your data:
str
You can use str to see the structure of a data frame
str(iris)
'data.frame': 150 obs. of 5 variables:
$ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...
$ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...
$ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...
$ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...
$ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
summary
Additionally, there is the summary function, which computes a five-number summary (plus the mean) for each numeric column, and counts for factors:
summary(iris)
Sepal.Length Sepal.Width Petal.Length Petal.Width Species
Min. :4.300 Min. :2.000 Min. :1.000 Min. :0.100 setosa :50
1st Qu.:5.100 1st Qu.:2.800 1st Qu.:1.600 1st Qu.:0.300 versicolor:50
Median :5.800 Median :3.000 Median :4.350 Median :1.300 virginica :50
Mean :5.843 Mean :3.057 Mean :3.758 Mean :1.199
3rd Qu.:6.400 3rd Qu.:3.300 3rd Qu.:5.100 3rd Qu.:1.800
Max. :7.900 Max. :4.400 Max. :6.900 Max. :2.500
dplyr
dplyr provides glimpse, which is similar to str and shows the data types:
library(dplyr)
glimpse(iris)
Observations: 150
Variables: 5
$ Sepal.Length <dbl> 5.1, 4.9, 4.7, 4.6, 5.0, 5.4, 4.6, 5.0, 4.4, 4.9, 5.4, 4.8, 4.8, 4.3, 5.8, 5.7, 5.4, 5.1, 5.7, 5...
$ Sepal.Width <dbl> 3.5, 3.0, 3.2, 3.1, 3.6, 3.9, 3.4, 3.4, 2.9, 3.1, 3.7, 3.4, 3.0, 3.0, 4.0, 4.4, 3.9, 3.5, 3.8, 3...
$ Petal.Length <dbl> 1.4, 1.4, 1.3, 1.5, 1.4, 1.7, 1.4, 1.5, 1.4, 1.5, 1.5, 1.6, 1.4, 1.1, 1.2, 1.5, 1.3, 1.4, 1.7, 1...
$ Petal.Width <dbl> 0.2, 0.2, 0.2, 0.2, 0.2, 0.4, 0.3, 0.2, 0.2, 0.1, 0.2, 0.2, 0.1, 0.1, 0.2, 0.4, 0.4, 0.3, 0.3, 0...
$ Species <fct> setosa, setosa, setosa, setosa, setosa, setosa, setosa, setosa, setosa, setosa, setosa, setosa, ...
skimr
Finally, the skimr package provides an enhanced summary including little histograms
library(skimr)
skim(iris)
-- Data Summary ------------------------
Values
Name iris
Number of rows 150
Number of columns 5
_______________________
Column type frequency:
factor 1
numeric 4
________________________
Group variables None
-- Variable type: factor -------------------------------------------------------
skim_variable n_missing complete_rate ordered n_unique top_counts
1 Species 0 1 FALSE 3 set: 50, ver: 50, vir: 50
-- Variable type: numeric ------------------------------------------------------
skim_variable n_missing complete_rate mean sd p0 p25 p50 p75 p100 hist
1 Sepal.Length 0 1 5.84 0.828 4.3 5.1 5.8 6.4 7.9 ▆▇▇▅▂
2 Sepal.Width 0 1 3.06 0.436 2 2.8 3 3.3 4.4 ▁▆▇▂▁
3 Petal.Length 0 1 3.76 1.77 1 1.6 4.35 5.1 6.9 ▇▁▆▇▂
4 Petal.Width 0 1 1.20 0.762 0.1 0.3 1.3 1.8 2.5 ▇▁▇▅▃
Between those functions you can get a pretty good look at your data!
It's not a single function, but the first three things I always do are
library(tidyverse)
# Shows top 6 rows
iris %>% head()
# Gives dimensions of data.frame
iris %>% dim()
# Gives the classes of the data in each column (e.g. numeric, character etc)
iris %>% sapply(class)
The best package I use, which I haven't seen mentioned above, is inspectdf (mentioned by Niels in a comment). It does much of the summarising you see in skimr in @MDEWITT's answer via specific function calls; for instance, inspect_cat and inspect_num for categorical and numerical variable summaries, respectively.
What my answer adds is that inspectdf has two additional functions, inspect_imb and inspect_cor, which respectively look at the most common value per column and the correlation between numerical columns. I find these tremendously useful for data cleaning/pre-processing.
I have an array that contains numbers that are distances, and another that represents certain values at that distance. How do I calculate the standard error of all the data at a fixed value of the distance?
The standard error is the standard deviation divided by the square root of the number of observations.
e.g distances(d):
[1 1 14 6 1 12 14 6 6 7 4 3 7 9 1 3 3 6 5 8]
e.g data corresponding to the entry of the distances:
therefore value=3.3 at d=1; value=2.1 at d=1; value=3.5 at d=14; etc.
[3.3 2.1 3.5 2.5 4.6 7.4 2.6 7.8 9.2 10.11 14.3 2.5 6.7 3.4 7.5 8.5 9.7 4.3 2.8 4.1]
For example, at distance d=6 I should calculate the standard error of 2.5, 7.8, 9.2 and 4.3 which would be the standard deviation of these values divided by the square root of the total number of values (4 in this case).
I've used the following code, which works, but I don't know how to divide the result by the square root of the total number of values at each distance:
import numpy as np

result = []
for d in set(key):
    result.append(np.std([dist[i] for i in range(len(key)) if key[i] == d]))
Any help would be greatly appreciated. Thanks!
Does this help?
for d in set(key):
    vals = [dist[i] for i in range(len(key)) if key[i] == d]
    result.append(np.std(vals) / np.sqrt(len(vals)))
I'm having a bit of a hard time telling exactly how you want things structured, but I would recommend a dictionary, so that you can know which result is associated with which key value. If your data is like this:
>>> key
array([ 1, 1, 14, 6, 1, 12, 14, 6, 6, 7, 4, 3, 7, 9, 1, 3, 3,
6, 5, 8])
>>> values
array([ 3.3 , 2.1 , 3.5 , 2.5 , 4.6 , 7.4 , 2.6 , 7.8 , 9.2 ,
10.11, 14.3 , 2.5 , 6.7 , 3.4 , 7.5 , 8.5 , 9.7 , 4.3 ,
2.8 , 4.1 ])
You can set up a dictionary along these lines with a dict comprehension:
result = {f'distance_{i}':np.std(values[key==i]) / np.sqrt(sum(key==i)) for i in set(key)}
>>> result
{'distance_1': 1.0045988005169029, 'distance_3': 1.818424226264781, 'distance_4': 0.0, 'distance_5': 0.0, 'distance_6': 1.3372079120316331, 'distance_7': 1.2056170619230633, 'distance_8': 0.0, 'distance_9': 0.0, 'distance_12': 0.0, 'distance_14': 0.3181980515339463}
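For comparison, a hedged alternative sketch that computes the same per-distance standard errors with a pandas groupby (ddof=0 to match np.std), assuming key and values are the arrays shown above:
import numpy as np
import pandas as pd

s = pd.Series(values, index=key)
sem = s.groupby(level=0).apply(lambda g: g.std(ddof=0) / np.sqrt(len(g)))
print(sem)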
I have a large dataset with values ranging from 1 to 25 with a resolution of 0.1. The distribution is arbitrary in nature with a mode value of 1. A sample dataset looks like:
1,
1,
23.05,
19.57,
1,
1.56,
1,
23.53,
19.74,
7.07,
1,
22.85,
1,
1,
7.78,
16.89,
12.75,
15.32,
7.7,
14.26,
15.41,
1,
16.34,
8.57,
15,
14.97,
1.18,
14.15,
1.94,
14.61,
1,
15.49,
1,
9.18,
1.71,
1,
10.4,
How do I count the values falling in different ranges (0-0.5, 0.5-1, etc.) and find their frequency mean in pandas (Python)?
The expected output can be:
values   ranges(f)   occurrence(n)   f*n
1
2.2      1-2         2               3
2.8      2-3         3               7.5
3.7      3-4         2               7
5.5      4-5         1               4.5
5.8      5-6         3               16.5
4.3
2.7      sum         11              38.5
3.5
1.8      frequency mean              3.5
5.9
You need cut for binning, then convert the resulting CategoricalIndex to an IntervalIndex to get the mid values, multiply the columns with mul, sum, and finally divide the scalars:
import numpy as np
import pandas as pd

df = pd.DataFrame({'col': [1, 2.2, 2.8, 3.7, 5.5, 5.8, 4.3, 2.7, 3.5, 1.8, 5.9]})
print (df)
col
0 1.0
1 2.2
2 2.8
3 3.7
4 5.5
5 5.8
6 4.3
7 2.7
8 3.5
9 1.8
10 5.9
binned = pd.cut(df['col'], np.arange(1, 7), include_lowest=True)
df1 = df.groupby(binned).size().reset_index(name='val')
df1['mid'] = pd.IntervalIndex(df1['col']).mid
df1['mul'] = df1['val'].mul(df1['mid'])
print (df1)
col val mid mul
0 (0.999, 2.0] 2 1.4995 2.999
1 (2.0, 3.0] 3 2.5000 7.500
2 (3.0, 4.0] 2 3.5000 7.000
3 (4.0, 5.0] 1 4.5000 4.500
4 (5.0, 6.0] 3 5.5000 16.500
a = df1.sum()
print (a)
val 11.0000
mid 17.4995
mul 38.4990
dtype: float64
b = a['mul'] / a['val']
print (b)
3.49990909091
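Equivalently, a small sketch using the df1 built above: the frequency mean is just the count-weighted average of the interval midpoints.
freq_mean = np.average(df1['mid'], weights=df1['val'])
print(freq_mean)  # 3.4999...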
I have a task that is kind of like an SQL join. I have a "table" which contains the following 1D arrays (about 1 million elements), identified by ID1:
ID1, z, e, PA, n
Another "table" which contains the following 1D arrays (about 1.5 million elements) identified by ID2:
ID2, RA, DEC
I want to match ID1 and ID2 to find the common ones and form another "table" which contains ID, z, e, PA, n, RA, DEC. Most elements in ID1 can be found in ID2, but not all; otherwise I could just use numpy.in1d(ID1, ID2) to accomplish it. Does anyone have a fast way to accomplish this task?
For example:
ID1, z, e, PA, n
101, 1.0, 1.2, 1.5, 1.8
104, 1.5, 1.8, 2.2, 3.1
105, 1.4, 2.0, 3.3, 2.8
ID2, RA, DEC
101, 4.5, 10.5
107, 90.1, 55.5
102, 30.5, 3.3
103, 60.1, 40.6
104, 10.8, 5.6
The output should be
ID, z, e, PA, n, RA, DEC
101, 1.0, 1.2, 1.5, 1.8, 4.5, 10.5
104, 1.5, 1.8, 2.2, 3.1, 10.8, 5.6
Well, you can use np.in1d on the first columns of the two arrays/tables in both directions, giving two masks to index into the arrays for selection. Then simply stack the results -
mask1 = np.in1d(a[:,0], b[:,0])
mask2 = np.in1d(b[:,0], a[:,0])
out = np.column_stack(( a[mask1], b[mask2,1:] ))
Sample run -
In [44]: a
Out[44]:
array([[ 101. , 1. , 1.2, 1.5, 1.8],
[ 104. , 1.5, 1.8, 2.2, 3.1],
[ 105. , 1.4, 2. , 3.3, 2.8]])
In [45]: b
Out[45]:
array([[ 101. , 4.5, 10.5],
[ 102. , 30.5, 3.3],
[ 103. , 60.1, 40.6],
[ 104. , 10.8, 5.6],
[ 107. , 90.1, 55.5]])
In [46]: mask1 = np.in1d(a[:,0], b[:,0])
In [47]: mask2 = np.in1d(b[:,0], a[:,0])
In [48]: np.column_stack(( a[mask1], b[mask2,1:] ))
Out[48]:
array([[ 101. , 1. , 1.2, 1.5, 1.8, 4.5, 10.5],
[ 104. , 1.5, 1.8, 2.2, 3.1, 10.8, 5.6]])
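One caveat worth adding (not in the original answer): np.column_stack pairs the masked rows positionally, so this assumes the common IDs appear in the same relative order in both arrays, as they do in the sample. A small hedged sketch that makes the pairing explicit by sorting each piece on the ID column:
a_common = a[mask1]
b_common = b[mask2]
out = np.column_stack((a_common[np.argsort(a_common[:, 0])],
                       b_common[np.argsort(b_common[:, 0])][:, 1:]))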
Assuming your second table, table B, is sorted, you can do a sorted lookup, then check if the indexed element is actually found:
idx = np.searchsorted(B[:-1, 0], A[:, 0])
found = A[:, 0] == B[idx, 0]
np.hstack((A[found, :], B[idx[found], 1:]))
Result:
array([[ 101. , 1. , 1.2, 1.5, 1.8, 4.5, 10.5],
[ 104. , 1.5, 1.8, 2.2, 3.1, 10.8, 5.6]])
The last element of B's ID column is excluded from the search to handle the case where an item in A is beyond the final element in B. Without this, the returned index could equal the length of B and cause an indexing error.
Use pandas:
import pandas as pd
id1 = pd.read_csv('id1.txt')
id2 = pd.read_csv('id2.txt')
df = id1.merge(id2.sort_values(by='ID2').drop_duplicates('ID2').rename(columns={'ID2':'ID1'}))
print(df)
Produces:
ID1 z e PA n RA DEC
0 101 1.0 1.2 1.5 1.8 4.5 10.5
1 104 1.5 1.8 2.2 3.1 10.8 5.6
With large datasets you may need to do things in place:
# [Optional] sort locations and drop duplicates
id2.sort_values(by='ID2', inplace=True)
id2.drop_duplicates('ID2', inplace=True)
# columns that you are merging must have the same name
id2.rename(columns={'ID2':'ID1'}, inplace=True)
# perform the merge
df = id1.merge(id2)
Without drop_duplicates you get one output row for each matching row in id2, so duplicated IDs produce duplicated rows:
df = id1.merge(id2.rename(columns={'ID2':'ID1'}))
print(id2)
print(df)
Giving:
ID2 RA DEC
0 101 4.5 10.5
1 107 90.1 55.5
2 102 30.5 3.3
3 103 60.1 40.6
4 104 10.8 5.6
5 103 60.1 40.6
6 104 10.9 5.6
ID1 z e PA n RA DEC
0 101 1.0 1.2 1.5 1.8 4.5 10.5
1 104 1.5 1.8 2.2 3.1 10.8 5.6
2 104 1.5 1.8 2.2 3.1 10.9 5.6
Note that this solution preserves the different types for the columns:
>>> id1.ID1.dtype
dtype('int64')
>>> id1[' z'].dtype
dtype('float64')
Since there are spaces after the commas in the header row, those spaces become part of the column names, hence the need to refer to the second column as id1[' z']. By modifying the read statement, this is no longer necessary:
>>> id1 = pd.read_csv('id1.txt', skipinitialspace=True)
>>> id1.z.dtype
dtype('float64')
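For a self-contained test, here is a minimal hedged sketch that builds the two sample tables from the question in memory (no CSV files) and performs the same merge:
import pandas as pd

id1 = pd.DataFrame({'ID1': [101, 104, 105],
                    'z': [1.0, 1.5, 1.4],
                    'e': [1.2, 1.8, 2.0],
                    'PA': [1.5, 2.2, 3.3],
                    'n': [1.8, 3.1, 2.8]})
id2 = pd.DataFrame({'ID2': [101, 107, 102, 103, 104],
                    'RA': [4.5, 90.1, 30.5, 60.1, 10.8],
                    'DEC': [10.5, 55.5, 3.3, 40.6, 5.6]})
df = id1.merge(id2.rename(columns={'ID2': 'ID1'}), on='ID1')
print(df)  # rows for ID 101 and 104 only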
I have a dataframe df1
pid stat h1 h2 h3 h4 h5 h6 ... h20
1 a 3.2 3.5 6.2 7.1 1.2 2.3 ... 3.2
1 b 3.3 1.5 4.2 7.7 4.2 3.5 ... 8.4
1 a 3.1 3.8 2.2 1.1 6.2 5.3 ... 9.2
1 b 3.7 1.2 8.2 4.7 3.2 8.5 ... 2.4
: : : : : : : : : :
2 a 2.2 3.8 6.2 7.3 1.3 4.3 ... 3.2
2 b 4.3 1.3 4.2 5.7 2.2 3.1 ... 2.4
2 a 2.1 3.7 2.4 1.6 6.4 9.3 ... 9.6
2 b 3.8 1.3 8.7 3.7 7.2 8.3 ... 9.4
: : : : : : : : : :
3 a 2.2 3.8 6.2 7.3 1.3 4.3 ... 3.2
3 b 4.3 1.3 4.2 5.7 2.2 3.1 ... 2.4
3 a 2.1 3.7 2.4 1.6 6.4 9.3 ... 9.6
3 b 3.8 1.3 8.7 3.7 7.2 8.3 ... 9.4
: : : : : : : : : :
I would like to obtain groups indexed on pid and stat, and then subtract the h values of one group from the h values of another to get a final dataframe (df2). This final dataframe needs to be reindexed with numbers from 0 to len(groups). This should be repeated iteratively for all permutations of pid, like 1-2, 1-3, 1-4, 2-1, 2-3, etc. I need to perform other calculations on the final dataframe df2 (the values in the df2 below are not the exact differences, just a representation):
pid(string) stat h1p1-h1p2 h2p1-h2p2 h3p1-h3p2 h4p1-h4p2 h5p1-h5p2 h6p1-h6p2 ... h20p1-h2p2
1-2 a 3.2 3.5 6.2 7.1 1.2 2.3 ... 3.2
1-2 b 3.3 1.5 4.2 7.7 4.2 3.5 ... 8.4
1-2 a 3.1 3.8 2.2 1.1 6.2 5.3 ... 9.2
1-2 b 3.7 1.2 8.2 4.7 3.2 8.5 ... 2.4
1-3 ....
I looked at options of;
for (pid, stat), group in df1.groupby(['pid', 'stat']):
    print('pid = %s stat = %s' % (pid, stat))
    print(group)
This gives me the groups, but I am not sure how to access the dataframes from this for loop and use them for subtracting from other groups. I also tried:
df_grouped = df.groupby(['pid', 'stat']).groups()
I'm still not sure how to access the new dataframes of the groups and perform operations on them. I would like to know if this can be done using groupby, or if there is a better approach. Thanks in advance!
I implemented a generator and ignored the stat column because it makes no difference in any of the groups according to your sample. Please tell me if I got that wrong.
import pandas as pd
from itertools import permutations
def subtract_group(df, col):
    pid = df['pid'].unique()
    # select the piece with pid == i
    segment = lambda df, i: df[df['pid'] == i].reset_index()[col]
    for x, y in permutations(pid, 2):
        result_df = pd.DataFrame(segment(df, x) - segment(df, y))
        # rename columns
        result_df.columns = ["%sp%d-%sp%d" % (c, x, c, y) for c in col]
        # insert pid column
        result_df.insert(0, 'pid', '-'.join([str(x), str(y)]))
        yield result_df
You can test it with:
# column names in your case
columns = ['h' + str(i + 1) for i in range(20)]
print(next(subtract_group(df1, columns)))
Hope it helps.
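One possible way to consume the generator, as a hedged sketch assuming df1 and columns as defined above: collect each permutation's frame into a dict keyed by the pid pair.
results = {piece['pid'].iloc[0]: piece for piece in subtract_group(df1, columns)}
print(results['1-2'].head())  # assumes pids 1 and 2 exist in df1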