I have a CSV file looking like this (part of the data):
X and Y are my pixel coordinates.
I need to filter the ADC column for TDC values (this column also contains 0 values), and after this I need to sum up the Energy value for every unique pixel, so for x=0 y=0, x=0 y=1, x=0 y=2, ... up to x=127 y=127. In another column I need the number of duplicates of each pixel coordinate (i.e. the number of rows from which the summation in the Energy column is made).
I don't know how to write the appropriate conditions for this kind of task. I would appreciate any kind of help.
The following StackOverflow question and answers might help you out:
Group dataframe and get sum AND count?
But here is some code for your case which might be useful, too:
# import the pandas package, for doing data analysis and manipulation
import pandas as pd
# create a dummy dataframe using data of the type you are using (I hope)
df = pd.DataFrame(
    data={
        "X": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        "Y": [0, 0, 1, 1, 1, 1, 1, 1, 2, 2],
        "ADC": ["TDC", "TDC", "TDC", "TDC", "TDC", 0, 0, 0, "TDC", "TDC"],
        "Energy": [0, 0, 1, 1, 1, 2, 2, 2, 3, 3],
        "Time": [1.2, 1.2, 2.3, 2.3, 3.6, 3.61, 3.62, 0.66, 0.67, 0.68],
    }
)
# use pandas' groupby method and aggregation methods to get the sum of the energy in every unique combination of X and Y, and the number of times those combinations appear
df[df["ADC"] == "TDC"].groupby(by=["X","Y"]).agg({"Energy": ['sum','count']}).reset_index()
The result I get from this in my dummy example is:
   X  Y Energy
           sum count
0  0  0      0     2
1  0  1      3     3
2  0  2      6     2
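If you would rather have flat column names (e.g. a separate count column, as described in the question), one option is pandas' named aggregation. This is just a sketch; the names Energy_sum and pixel_count are made up for the example:
# Same filtering and grouping as above, but with flat, named output columns.
result = (
    df[df["ADC"] == "TDC"]
    .groupby(["X", "Y"], as_index=False)
    .agg(Energy_sum=("Energy", "sum"), pixel_count=("Energy", "count"))
)
print(result)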
I am trying to make a simple Tetris AI in Python (no genetic algorithms).
I want to count the gaps in the grid and make the best choice depending on that.
By a gap I mean a cell where you won't be able to place a piece without clearing some lines.
My grid is something like this:
[0, 0, 0, 0, 0]
['#ff0000', ....]
[...]
0 represents a blank space, while the hex code means the cell is covered by a block.
I have tried to calculate gaps like this:
def grid_gaps(grid):
    gaps = 0
    for x in range(len(grid[0])):
        for y in range(len(grid)):
            if grid[y][x] == 0 and \
                    (y > 0 and grid[y - 1][x] != 0):
                gaps += 1
    return gaps
It works well when the grid is like this:
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[1, 1, 1, 0, 0],
[0, 0, 0, 1, 0]
1 is some color; it correctly tells me that there are 3 gaps, but when the grid is something like this:
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[1, 1, 1, 0, 0],
[0, 0, 0, 1, 0],
[0, 0, 0, 1, 0]
It again returns 3 but I want it to return 6.
I think the problem is that and grid[y - 1][x] != 0 is only looking at the cell directly above the current cell, so your bottom 3 cells in the second example aren't being counted.
One quick fix I can think of is to set a gap cell to some non-zero value once it's counted, that way the gap cells below will be counted too. (Then set them back to 0 after you're done, if you're using the same grid and not a copy for the rest of the game.)
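For example, a minimal sketch of that marking idea (assuming the same grid layout as in the question; the helper name and the placeholder value are my own):
def grid_gaps_marked(grid):
    gaps = 0
    marked = []  # remember which cells we changed so we can restore them
    for x in range(len(grid[0])):
        for y in range(len(grid)):
            if grid[y][x] == 0 and y > 0 and grid[y - 1][x] != 0:
                gaps += 1
                grid[y][x] = "gap"  # any non-zero placeholder works
                marked.append((y, x))
    for y, x in marked:  # set the marked cells back to 0 for the rest of the game
        grid[y][x] = 0
    return gaps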
The problem is that you're looking "up" to see whether there's a blocker, but you're only looking up one row. I think you want to reorganize this so you iterate over columns, and for each column, iterate down until you hit a 1, and then continue iterating and add to the gap count for each 0 that's encountered.
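A rough sketch of that column-by-column idea (again assuming 0 means empty and anything else means filled; the function name is my own):
def grid_gaps_by_column(grid):
    gaps = 0
    for x in range(len(grid[0])):      # walk each column
        blocked = False                # becomes True once we pass a filled cell
        for y in range(len(grid)):     # walk down the column
            if grid[y][x] != 0:
                blocked = True
            elif blocked:              # an empty cell somewhere below a filled cell
                gaps += 1
    return gaps
On the two example grids above this gives 3 and 6, respectively.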
I have a numpy array where 0 denotes empty space and 1 denotes that a location is filled. I am trying to find a quick method of scanning the numpy array for where there are multiple values of zero adjacent to each other and return the location of the central zero.
For example, if I had the following array:
[0 1 0 1]
[0 0 0 1]
[0 1 0 1]
[1 1 1 1]
I want to return the locations for which there is an adjacent zero on either side of a central zero, e.g.
[1,1]
as this is the centre of 3 zeros, i.e. there is a zero on either side of the zero at this location.
I'm aware that this can be calculated using if statements, but I wondered whether there is a more Pythonic way of doing this.
Any help is greatly appreciated
The desired output here for arbitrary inputs is not exhaustively specified in the question, but here is a possible approach that might be useful for this kind of problem, and adapted to the details of the desired output. It uses np.cumsum, np.bincount, np.where, and np.median to find the middle index for groups of consecutive zeros along rows of a 2D array:
import numpy as np
def find_groups(x, min_size=3, value=0):
    # Compute a sequential label for groups in each row.
    xc = (x != value).cumsum(1)
    # Count the number of occurrences per group in each row.
    counts = np.apply_along_axis(
        lambda x: np.bincount(x, minlength=1 + xc.max()),
        axis=1, arr=xc)
    # Filter by minimum number of occurrences.
    i, j = np.where(counts >= min_size)
    # Compute the median index of each group.
    return [
        (ii, int(np.ceil(np.median(np.where(xc[ii] == jj)[0]))))
        for ii, jj in zip(i, j)
    ]
x = np.array([[0, 1, 0, 1],
              [0, 0, 0, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 1]])
print(find_groups(x))
# [(1, 1)]
It should work properly even for multiple rows with groups of varying sizes, and even multiple groups per row:
x2 = np.array([[0, 1, 0, 1, 1, 1, 1],
               [0, 0, 0, 1, 0, 0, 0],
               [0, 1, 0, 0, 0, 0, 1],
               [0, 0, 0, 0, 0, 0, 0]])
print(find_groups(x2))
# [(1, 1), (1, 5), (2, 3), (3, 3)]
Hi, please help me either: speed up this dictionary comprehension; offer a better way to do it; or help me gain a better understanding of why it is so slow internally (for example, does the calculation slow down as the dictionary grows in memory?). I'm sure there must be a quicker way without learning some C!
classes = {i : [1 if x in df['column'].str.split("|")[i] else 0 for x in df['column']] for i in df.index}
with the output:
{1:[0,1,0...0],......, 4000:[0,1,1...0]}
from a df like this:
data_ = {'drugbank_id': ['DB06605', 'DB06606', 'DB06607', 'DB06608', 'DB06609'],
         'drug-interactions': ['DB06605|DB06695|DB01254|DB01609|DB01586|DB0212',
                               'DB06605|DB06695|DB01254|DB01609|DB01586|DB0212',
                               'DB06606|DB06607|DB06608|DB06609',
                               'DB06606|DB06607',
                               'DB06608']
         }
pd.DataFrame(data=data_, index=range(0, 5))
I am performing it on a df with 4000 rows; the column df['column'] contains a string of IDs separated by |. The number of IDs in each row that needs splitting varies from 1 to 1000, and this is done for all 4000 indexes. I tested it on the head of the df and it seemed quick enough, but now the comprehension has been running for 24 hours. So maybe it is just the sheer size of the job, but I feel like I could speed it up, and at this point I want to stop it and re-engineer. However, I am scared that will set me back without much increase in speed, so before I do that I wanted to get some thoughts, ideas and suggestions.
Beyond the 4000x4000 size, I suspect that using the Series and Index objects is another problem and that I would be better off using lists, but given the size of the task I am not sure how much speed that will gain. Maybe I am better off using some other method such as pd.apply(df, f(write line by line to json)). I am not sure - any help and education appreciated, thanks.
Here is one approach:
import pandas as pd
# create data frame
df = pd.DataFrame({'idx': [1, 2, 3, 4], 'col': ['1|2', '1|2|3', '2|3', '1|4']})
# split on '|' to convert string to list
df['col'] = df['col'].str.split('|')
# explode to get one row for each list element
df = df.explode('col')
# create dummy ID (this will become True in the final result)
df['dummy'] = 1
# use pivot to create dense matrix
df = (df.pivot(index='idx', columns='col', values='dummy')
        .fillna(0)
        .astype(int))
# convert each row to a list
df['test'] = df.apply(lambda x: x.to_list(), axis=1)
print(df)
col  1  2  3  4          test
idx
1    1  1  0  0  [1, 1, 0, 0]
2    1  1  1  0  [1, 1, 1, 0]
3    0  1  1  0  [0, 1, 1, 0]
4    1  0  0  1  [1, 0, 0, 1]
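If you also need the dictionary form asked about in the question, one way (just a sketch building on the frame above) is to take that 'test' column and call to_dict on it:
# Maps each original index to its 0/1 membership list.
classes = df['test'].to_dict()
# {1: [1, 1, 0, 0], 2: [1, 1, 1, 0], 3: [0, 1, 1, 0], 4: [1, 0, 0, 1]}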
The output you want can be achieved using dummies. We split the column, stack, and use max to turn the result into dummy indicators based on the original index. Then we use reindex to get the columns in the order you want, based on the 'drugbank_id' column.
Finally, to get the dictionary you want, we transpose and use to_dict:
classes = (pd.get_dummies(df['drug-interactions'].str.split('|', expand=True).stack())
             .max(level=0)
             .reindex(df['drugbank_id'], axis=1)
             .fillna(0, downcast='infer')
             .T.to_dict('list'))
print(classes)
{0: [1, 0, 0, 0, 0],  # Has DB06605, No DB06606, No DB06607, No DB06608, No DB06609
 1: [1, 0, 0, 0, 0],
 2: [0, 1, 1, 1, 1],
 3: [0, 1, 1, 0, 0],
 4: [0, 0, 0, 1, 0]}
I am looking to quickly combine columns that are genetic complements of each other. I have a large data frame with counts and want to combine columns where the column names are complements. I currently have a system that:
Gets the complement of a column name
Checks the column names for the complement
Adds together the columns if there is a match
Then deletes the complement column
However, this is slow (checking every column name) and gives different column names depending on the ordering of the columns (i.e. it deletes different complement columns between runs). I was wondering if there was a way to incorporate a dictionary of key:value pairs to speed up the process and keep the output consistent. I have an example dataframe below with the desired result (ATTG|TAAC and CGGG|GCCC are complements).
df = pd.DataFrame({"ATTG": [3, 6, 0, 1],"CGGG" : [0, 2, 1, 4],
"TAAC": [0, 1, 0, 1], "GCCC" : [4, 2, 0, 0], "TTTT": [2, 1, 0, 1]})
## Current Pseudocode
for item in df.columns:
    if compliment(item) in df.columns:
        df[item] = df[item] + df[compliment(item)]
        del df[compliment(item)]
## Desired Result
df_result = pd.DataFrame({"ATTG": [3, 7, 0, 2],"CGGG" : [4, 4, 1, 4], "TTTT": [2, 1, 0, 1]})
Translate the columns, then assign each column whichever of the translation and the original sorts first. This allows you to group complements.
import numpy as np
mytrans = str.maketrans('ATCG', 'TAGC')
df.columns = np.sort([df.columns, [x.translate(mytrans) for x in df.columns]], axis=0)[0, :]
df.groupby(level=0, axis=1).sum()
#   AAAA  ATTG  CGGG
# 0    2     3     4
# 1    1     7     4
# 2    0     0     1
# 3    1     2     4
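A small variation, shown only as a sketch: if you prefer an explicit dictionary mapping (which also ties in with the key:value idea from the question), you can build one column-rename dict and group on the renamed columns. This assumes you start from the original df, before the in-place column reassignment above:
# Map every column name to the lexicographically smaller of itself and its complement.
mapping = {c: min(c, c.translate(mytrans)) for c in df.columns}
df_result = df.rename(columns=mapping).groupby(level=0, axis=1).sum()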
Hello everyone
For a school project, I am stuck on the duration of an operation on a Pandas DataFrame.
I have one dataframe df whose shape is (250 000 000, 200). This dataframe contains the values of variables describing the behaviour of sensors on a machine.
They are organized by 'Cycle' (every time the machine begins a new cycle, this variable is incremented by one), and within a cycle, 'CycleTime' describes the position of the row within the 'Cycle'.
In the 'mean' DataFrame, I compute the mean of each variable grouped by 'CycleTime'.
The 'anomaly_matrix' DataFrame represents the global anomaly of each cycle, which is the sum of the squared differences between each row belonging to the cycle and the mean of the corresponding CycleTime.
An example of my code is below:
df = pd.DataFrame({'Cycle': [0, 0, 0, 1, 1, 1, 2, 2],
                   'CycleTime': [0, 1, 2, 0, 1, 2, 0, 1],
                   'variable1': [0, 0.5, 0.25, 0.3, 0.4, 0.1, 0.2, 0.25],
                   'variable2': [1, 2, 1, 1, 2, 2, 1, 2],
                   'variable3': [100, 5000, 200, 900, 100, 2000, 300, 300]})
mean = df.drop(['Cycle'], axis = 1).groupby("CycleTime").agg('mean')
anomali_matrix = df.drop(['CycleTime'], axis = 1).groupby("Cycle").agg('mean')
anomaly_matrix = anomali_matrix - anomali_matrix
for index, row in df.iterrows():
    cycle = row["Cycle"]
    time = row["CycleTime"]
    anomaly_matrix.loc[cycle] += (row - mean.loc[time])**2
>>> anomaly_matrix
       variable1  variable2     variable3
Cycle
0       0.047014       0.25  1.116111e+07
1       0.023681       0.25  3.917778e+06
2       0.018889       0.00  2.267778e+06
This calculation for my (250 000 000, 200) DataFrame lasts 6 hours; it is due to anomaly_matrix.loc[cycle] += (row - mean.loc[time])**2.
I tried to improve it by using an apply function, but I did not succeed in bringing the other DataFrame into that apply function. The same goes for trying to vectorize with pandas.
Do you have any idea how to accelerate this process? Thanks
You can use:
df1 = df.set_index(['Cycle', 'CycleTime'])
mean = df1.sub(df1.groupby('CycleTime').transform('mean'))**2
df2 = mean.groupby('Cycle').sum()
print(df2)
       variable1  variable2     variable3
Cycle
0       0.047014       0.25  1.116111e+07
1       0.023681       0.25  3.917778e+06
2       0.018889       0.00  2.267778e+06
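As an optional sanity check (my addition, not part of the original answer): if you have also run the loop from the question on this small example, the vectorized result should match the anomaly_matrix it produced:
# Compare the vectorized result with the loop-based result from the question.
pd.testing.assert_frame_equal(df2, anomaly_matrix, check_dtype=False)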