Reset fmax.accumulate when it crosses another array (Python NumPy)

I'm trying to take the cumulative max of array1 until it reaches the level of array2; at those points the accumulation should restart. So (RsiMa and DeltaFastAtrRsi are arrays):
long = (RsiMa - DeltaFastAtrRsi)
longband = np.fmax.accumulate(long)
but at the points where longband >= RsiMa:
longband = long
and then the max accumulation should restart from those points.
UPPER LINE = RsiMa (array2)
LOWER LINE = longband (array1)
I NEED TO DO THIS WITHOUT LOOPS!!! (NUMPY)
EDIT EXAMPLE:
                          0  1  2  3  4  5  6
RsiMa                     4  4  4  3  5  2  1
long                      1  2  3  2  4  1  0.5
np.fmax.accumulate(long)  1  2  3  3  4  4  4
expected output           1  2  3  2  4  1  0.5
                                   ^        ^  ^
At the highlighted points (2, 1, 0.5) the expected output is >= RsiMa, so the output equals the long value, and the accumulation restarts from there.
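For reference, here's a plain loop that reproduces the expected row above; this sequential recurrence (each output depends on the previous one) is exactly what I want to vectorize away. The names are mine; long_ stands in for long, which shadows a Python 2 builtin:

import numpy as np

RsiMa = np.array([4, 4, 4, 3, 5, 2, 1], dtype=float)
long_ = np.array([1, 2, 3, 2, 4, 1, 0.5])

longband = np.empty_like(long_)
longband[0] = long_[0]
for i in range(1, len(long_)):
    cand = max(longband[i - 1], long_[i])          # running maximum so far
    # reset to the raw value wherever the band reaches RsiMa
    longband[i] = long_[i] if cand >= RsiMa[i] else cand

print(longband)   # [1.  2.  3.  2.  4.  1.  0.5]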

Related

Shuffling an array except the first and the last element in Python

I am generating a normal distribution while keeping the mean and std exactly the same by using np.random.seed(0). I am trying to shuffle r, except the first and the last elements of the array, but it keeps the remaining elements at the same locations in the array, as shown in the current output. I also present the expected output.
import numpy as np
np.random.seed(0)
mu, sigma = 50, 2.0 # mean and standard deviation
Nodes=10
r = np.random.normal(mu, sigma, Nodes)
sort_r = np.sort(r)
r1=sort_r[::-1]
r1=r1.reshape(1,Nodes)
r2 = r.copy()
np.random.shuffle(r2.ravel()[1:])
r2=r2.reshape(1,Nodes) #actual radius values in mu(m)
maximum = r2.max()
indice1 = np.where(r2 == maximum)
r2[indice1] = r2[0][0]
r2[0][0] = maximum
r2[0][Nodes-1] = maximum #+0.01*maximum
print("r2 with max at (0,0)=",[r2])
The current output for many runs is
r2 with max at (0,0)= [array([[54.4817864 , 51.90017684, 53.52810469, 53.73511598, 48.04544424,
51.95747597, 50.80031442, 50.821197 , 49.7935623 , 54.4817864 ]])]
The expected output is (shuffling all elements randomly except the first and the last element):
Run 1: r2 with max at (0,0)= [array([[54.4817864 , 53.52810469, 51.90017684, 53.73511598, 48.04544424, 49.7935623 , 50.80031442, 50.821197  , 51.95747597, 54.4817864 ]])]
Run 2: r2 with max at (0,0)= [array([[54.4817864 , 51.90017684, 53.52810469, 48.04544424, 53.73511598, 51.95747597, 49.7935623 , 50.80031442, 50.821197  , 54.4817864 ]])]
It's not clear from your question what you include in a run.
If, as it seems, you're initializing the distribution and the seed every time, shuffling once will always give you the same result. It must be like that: the random state is fixed, so just as you want your random numbers to be predictable, the shuffle operation will also return the same result every time.
Let me show you what I mean with some simpler code than yours:
# reinit distribution and seed at each run
for run in range(5):
    np.random.seed(0)
    a = np.random.randint(10, size=10)
    np.random.shuffle(a)
    print(f'{run}:{a}')
Which will print
0:[2 3 9 0 3 7 4 5 3 5]
1:[2 3 9 0 3 7 4 5 3 5]
2:[2 3 9 0 3 7 4 5 3 5]
3:[2 3 9 0 3 7 4 5 3 5]
4:[2 3 9 0 3 7 4 5 3 5]
What you want is to initialize your distribution once and shuffle it at each run:
# init distribution once and just shuffle it at each run
np.random.seed(0)
a = np.random.randint(10, size=10)
for run in range(5):
    np.random.shuffle(a)
    print(f'{run}:{a}')
Which will print:
0:[2 3 9 0 3 7 4 5 3 5]
1:[9 0 3 4 2 5 7 3 3 5]
2:[2 0 3 3 3 5 7 5 4 9]
3:[5 3 5 3 0 2 7 4 9 3]
4:[3 9 3 2 5 7 3 4 0 5]
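To also keep the last element fixed (the question's r2.ravel()[1:] only protects the first one), shuffle just the interior slice. A minimal sketch:

import numpy as np

np.random.seed(0)
r2 = np.random.normal(50, 2.0, 10)               # same setup as r in the question
r2[1:-1] = np.random.permutation(r2[1:-1])       # endpoints stay in place
print(r2)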

How to delete a matrix cell's neighbors which are the same value with it

I have a matrix as shown below (read from a txt file passed as an argument), and every cell has neighbors. Once you pick a cell, that cell and all neighboring cells containing the same number disappear.
1 0 4 7 6 8
0 5 4 4 5 5
2 1 4 4 4 6
4 1 3 7 4 4
I've tried to do this using recursion. I split the function into four parts: up(), down(), left() and right(). But I got an error message: RecursionError: maximum recursion depth exceeded in comparison
cmd = input("Row,column:")
cmdlist = cmd.split(",")
row, column = int(cmdlist[0]), int(cmdlist[1])
num = lines[row-1][column-1]

def up(x, y):
    if lines[x-2][y-1] == num and x > 1:
        left(x, y)
        right(x, y)
        lines[x-2][y-1] = None

def left(x, y):
    if lines[x-1][y-2] == num and y > 1:
        up(x, y)
        down(x, y)
        lines[x-1][y-2] = None

def right(x, y):
    if lines[x-1][y] == num and y < len(lines[row-1]):
        up(x, y)
        down(x, y)
        lines[x-1][y] = None

def down(x, y):
    if lines[x][y-1] == num and x < len(lines):
        left(x, y)
        right(x, y)
        lines[x][y-1] = None

up(row, column)
down(row, column)
for i in lines:
    print(str(i).strip("[]").replace(",", "").replace("None", " "))
When I give the input (3,3), which selects one of the "4"s, the output should be:
1 0   7 6 8
0 5     5 5
2 1       6
4 1 3 7
I don't need fixed code, just the main idea will be enough. Thanks a lot.
A recursion error happens when your recursion does not terminate.
You can solve this without recursion by using sets of indexes:
search all indexes that contain the looked-for number into all_num_idx
add the index you are currently at (your input) to a set tbd (to be deleted)
loop over tbd and add all indexes from all_num_idx that differ by only -1/+1 in row or column from any index that's already in the set
repeat until tbd no longer grows
then delete all collected indexes:
t = """4 0 4 7 6 8
0 5 4 4 5 5
2 1 4 4 4 6
4 1 3 7 4 4"""

data = [k.strip().split() for k in t.splitlines()]
row, column = map(int, input("Row,column:").strip().split(","))
num = data[row][column]
len_r = len(data)
len_c = len(data[0])

# all coordinates that hold the picked number
all_num_idx = set((r, c) for r in range(len_r) for c in range(len_c) if data[r][c] == num)

tbd = set([(row, column)])   # initial field
tbd_size = 0                 # different size to enter while
done = set()                 # we processed those already

while len(tbd) != tbd_size:  # loop while growing
    tbd_size = len(tbd)
    for t in tbd:
        if t in done:
            continue
        # only 4-piece neighbourhood: +1 or -1 in one direction
        poss_neighbours = set([(t[0]+1, t[1]), (t[0], t[1]+1),
                               (t[0]-1, t[1]), (t[0], t[1]-1)])
        # 8-way neighbourhood with diagonals:
        # poss_neighbours = set((t[0]+a, t[1]+b) for a in range(-1,2) for b in range(-1,2))
        tbd = tbd.union(poss_neighbours & all_num_idx)
        # reduce all_num_idx by all those that we already added
        all_num_idx -= tbd
        done.add(t)

# delete the indexes we collected
for r, c in tbd:
    data[r][c] = None

# output
for line in data:
    print(*(c or " " for c in line), sep=" ")
Output:
Row,column: 3,4
4 0   7 6 8
0 5     5 5
2 1       6
4 1 3 7
This is a variant of a "flood fill" algorithm, flooding only cells of a certain value. See https://en.wikipedia.org/wiki/Flood_fill
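As a side note, if the grid is loaded into a NumPy array, a library routine can do the flood fill for you. A sketch assuming SciPy is available; scipy.ndimage.label uses 4-connectivity in 2D by default, matching the code above:

import numpy as np
from scipy import ndimage

grid = np.array([[1, 0, 4, 7, 6, 8],
                 [0, 5, 4, 4, 5, 5],
                 [2, 1, 4, 4, 4, 6],
                 [4, 1, 3, 7, 4, 4]])
row, col = 2, 2                           # 0-based pick; grid[2, 2] == 4
mask = grid == grid[row, col]             # cells holding the picked value
labels, _ = ndimage.label(mask)           # number the connected regions
picked = labels == labels[row, col]       # the region containing the pick
print(np.where(picked, " ", grid.astype(str)))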
Maybe you should replace
def right(x, y):
    if lines[x-1][y] == num and y < len(lines[row-1]):
        up(x, y)
        down(x, y)
        lines[x-1][y] = None
by
def right(x, y):
    if lines[x-1][y] == num and y < len(lines[row-1]):
        lines[x-1][y] = None
        up(x - 1, y)
        down(x - 1, y)
        right(x - 1, y)
and do the same for all the other functions.
Setting lines[x-1][y] = None first ensures that your algorithm stops, and changing the indices ensures that the next step of your algorithm starts from the neighbouring cell.
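Putting that advice together, a minimal sketch of a single terminating flood function (it assumes lines and num are set up as in the question; coordinates here are 0-based):

def flood(x, y):
    # stop at the edges of the grid
    if not (0 <= x < len(lines) and 0 <= y < len(lines[0])):
        return
    # stop at cells holding a different value (or already deleted ones)
    if lines[x][y] != num:
        return
    lines[x][y] = None        # mark first, so the recursion terminates
    flood(x - 1, y)           # up
    flood(x + 1, y)           # down
    flood(x, y - 1)           # left
    flood(x, y + 1)           # right

flood(row - 1, column - 1)    # the question's input is 1-based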

Identify clusters linked by delta to the left and different delta to the right

Consider the sorted array a:
a = np.array([0, 2, 3, 4, 5, 10, 11, 11, 14, 19, 20, 20])
If I specified left and right deltas,
delta_left, delta_right = 1, 1
Then this is how I'd expect the clusters to be assigned:
# a = [0  2  3  4  5  10  11  11  14  19  20  20]
#
# intervals [x-1, x+1] around each value:
#   [-1, 1] | [1, 3] [2, 4] [3, 5] [4, 6] | [9, 11] [10, 12] [10, 12]
#   | [13, 15] | [18, 20] [19, 21] [19, 21]
#
# Cluster 1: {0}    Cluster 2: {2, 3, 4, 5}    Cluster 3: {10, 11, 11}
# Cluster 4: {14}   Cluster 5: {19, 20, 20}
NOTE: Although the interval [-1, 1] shares an edge with [1, 3], neither interval contains the other's point, so their respective clusters are not joined.
Assuming the cluster assignments were stored in an array named clusters, I'd expect the results to look like this
print(clusters)
[1 2 2 2 2 3 3 3 4 5 5 5]
However, suppose I change the left and right deltas to be different:
delta_left, delta_right = 2, 1
This means that for a value of x it should be combined with any other point in the interval [x - 2, x + 1]
# a = [0  2  3  4  5  10  11  11  14  19  20  20]
#
# intervals [x-2, x+1] around each value:
#   [-2, 1] [0, 3] [1, 4] [2, 5] [3, 6] | [8, 11] [9, 12] [9, 12]
#   | [12, 15] | [17, 20] [18, 21] [18, 21]
#
# Cluster 1: {0, 2, 3, 4, 5}    Cluster 2: {10, 11, 11}
# Cluster 3: {14}               Cluster 4: {19, 20, 20}
NOTE: Although the interval [9, 12] shares an edge with [12, 15], neither interval contains the other's point, so their respective clusters are not joined.
Assuming the cluster assignments were stored in an array named clusters, I'd expect the results to look like this:
print(clusters)
[1 1 1 1 1 2 2 2 3 4 4 4]
We will leverage np.searchsorted and logic to find cluster edges.
First, let's take a closer look at what np.searchsorted does:
Find the indices into a sorted array a such that, if the corresponding elements in v were inserted before the indices, the order of a would be preserved.
What I'll do is execute np.searchsorted on a with the values a - delta_left. Let's look at that for delta_left = 1:
# a =
# [ 0 2 3 4 5 10 11 11 14 19 20 20]
#
# a - delta_left
# [-1 1 2 3 4 9 10 10 13 18 19 19]
-1 would get inserted at position 0 to maintain order
1 would get inserted at position 1 to maintain order
2 would get inserted at position 1 as well, indicating that 2 might be in the same cluster as 1
3 would get inserted at position 2 indicating that 3 might be in the same cluster as 2
so on and so forth
What we notice is that a new cluster starts exactly where an element minus the delta would be inserted at that element's own current position.
We do this again for the right side, with one difference: by default, when several elements are equal, np.searchsorted inserts in front of the run of identical values. To identify the ends of clusters, I want to insert after the identical elements, so I'll use the parameter side='right':
If ‘left’, the index of the first suitable location found is given. If ‘right’, return the last such index. If there is no suitable index, return either 0 or N (where N is the length of a).
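To make those intermediate arrays concrete, here is a quick check for the example a with delta_left = delta_right = 1:

import numpy as np

a = np.array([0, 2, 3, 4, 5, 10, 11, 11, 14, 19, 20, 20])
print(a.searchsorted(a - 1))
# [0 1 1 2 3 5 5 5 8 9 9 9]      equal to its own index at 0, 1, 5, 8, 9
print(a.searchsorted(a + 1, side='right'))
# [1 3 4 5 5 8 8 8 9 12 12 12]   shifted right by one, this marks the ends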
Now the logic. A cluster can only begin if a prior cluster has ended, with the exception of the first cluster. We'll then consider a shifted version of the results of our second np.searchsorted
Let's now define our function
def delta_cluster(a, dleft, dright):
    # use to track whether searchsorted results are at correct positions
    rng = np.arange(len(a))
    edge_left = a.searchsorted(a - dleft)
    starts = edge_left == rng
    # we append 0 to shift
    edge_right = np.append(0, a.searchsorted(a + dright, side='right')[:-1])
    ends = edge_right == rng
    return (starts & ends).cumsum()
demonstration
with left, right deltas equal to 1 and 1
print(delta_cluster(a, 1, 1))
[1 2 2 2 2 3 3 3 4 5 5 5]
with left, right deltas equal to 2 and 1
print(delta_cluster(a, 2, 1))
[1 1 1 1 1 2 2 2 3 4 4 4]
Extra Credit
What if a isn't sorted?
I'll utilize information learned from this post
def delta_cluster(a, dleft, dright):
    s = a.argsort()
    size = s.size
    if size > 1000:
        y = np.empty(s.size, dtype=np.int64)
        y[s] = np.arange(s.size)
    else:
        y = s.argsort()
    a = a[s]
    rng = np.arange(len(a))
    edge_left = a.searchsorted(a - dleft)
    starts = edge_left == rng
    edge_right = np.append(0, a.searchsorted(a + dright, side='right')[:-1])
    ends = edge_right == rng
    return (starts & ends).cumsum()[y]
demonstration
b = np.random.permutation(a)
print(b)
[14 10 3 11 20 0 19 20 4 11 5 2]
print(delta_cluster(a, 2, 1))
[1 1 1 1 1 2 2 2 3 4 4 4]
print(delta_cluster(b, 2, 1))
[3 2 1 2 4 1 4 4 1 2 1 1]
print(delta_cluster(b, 2, 1)[b.argsort()])
[1 1 1 1 1 2 2 2 3 4 4 4]

How do you triangulate a number in python?

I have to do this for school and I don't know how.
Write a function print_triangular_numbers(n) that prints out the first n triangular numbers (n is an input). A call to print_triangular_numbers(5) would produce the following output:
n result
1 1
2 3
3 6
4 10
5 15
A triangular number can be expressed as
n(n+1)/2
Thus, you need to build a simple loop, starting at 1 and going through your passed parameter:
def print_triangular_numbers(n):
    for x in range(1, n + 1):
        print(x, x * (x + 1) // 2)
The for loop starts at 1 and runs to n + 1 because range excludes its end point; integer division (//) keeps the result a whole number.
This outputs:
1 1
2 3
3 6
4 10
5 15

Numpy, problem with long arrays

I have two arrays (a and b) with n integer elements in the range (0,N).
typo: arrays with 2^n integers where the largest integer takes the value N = 3^n
I want to calculate the sum of every combination of elements in a and b (sum_ij = a_i + b_j for all i, j), then take the modulus N (sum_ij = sum_ij % N), and finally calculate the frequency of the different sums.
In order to do this fast with numpy, without any loops, I tried to use the meshgrid and the bincount function.
A,B = numpy.meshgrid(a,b)
A = A + B
A = A % N
A = numpy.reshape(A,A.size)
result = numpy.bincount(A)
Now, the problem is that my input arrays are long, and meshgrid gives me a MemoryError already for inputs with 2^13 elements. I would like to calculate this for arrays with 2^15-2^20 elements.
that is, n in the range 15 to 20
Are there any clever tricks to do this with numpy?
Any help will be highly appreciated.
Try chunking it. Your meshgrid is an n-by-n matrix; block it up into a 10-by-10 grid of (n/10)-by-(n/10) submatrices, compute the bincount of each of the 100 blocks, and add them up at the end. This uses only ~1% as much memory as doing the whole thing.
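A minimal sketch of that chunking idea (the function name and chunk size are illustrative, not from the original post):

import numpy as np

def chunked_bincount(a, b, N, chunk=4096):
    # accumulate bincounts of (a_i + b_j) % N one block of rows at a time,
    # so at most a chunk-by-len(b) array is ever materialized
    result = np.zeros(N, dtype=np.int64)
    for i in range(0, len(a), chunk):
        block = (a[i:i + chunk, None] + b[None, :]) % N
        result += np.bincount(block.ravel(), minlength=N)
    return result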
Edit in response to jonalm's comment:
jonalm: N~3^n not n~3^N. N is max element in a and n is number of
elements in a.
n is ~ 2^20. If N is ~ 3^n then N is ~ 3^(2^20) > 10^(500207).
Scientists estimate (http://www.stormloader.com/ajy/reallife.html) that there are only around 10^87 particles in the universe. So there is no (naive) way a computer can handle an int of size 10^(500207).
jonalm: I am however a bit curious about the pv() function you define. (I did not manage to run it, as text.find() is not defined; I guess it's in another module.) How does this function work and what is its advantage?
pv is a little helper function I wrote to debug the value of variables. It works like print(), except that pv(x) prints the literal variable name (or expression string), then a colon, then the variable's value.
If you put
#!/usr/bin/env python
import traceback

def pv(var):
    (filename, line_number, function_name, text) = traceback.extract_stack()[-2]
    print('%s: %s' % (text[text.find('(')+1:-1], var))

x = 1
pv(x)
in a script you should get
x: 1
The modest advantage of using pv over print is that it saves you typing. Instead of having to write
print('x: %s'%x)
you can just slap down
pv(x)
When there are multiple variables to track, it's helpful to label the variables.
I just got tired of writing it all out.
The pv function works by using the traceback module to peek at the line of code used to call the pv function itself. (See http://docs.python.org/library/traceback.html#module-traceback) That line of code is stored as a string in the variable text.
text.find() is a call to the usual string method find(). For instance, if
text='pv(x)'
then
text.find('(') == 2 # The index of the '(' in string text
text[text.find('(')+1:-1] == 'x' # Everything in between the parentheses
I'm assuming n ~ N^3 (the code below sets N = n^(1/3)), and n ~ 2**20.
The idea is to work modulo N. This cuts down on the size of the arrays.
The second idea (important when n is huge) is to use numpy ndarrays of 'object' dtype, because with an integer dtype you run the risk of overflowing the maximum allowed integer.
#!/usr/bin/env python
import traceback
import numpy as np

def pv(var):
    (filename, line_number, function_name, text) = traceback.extract_stack()[-2]
    print('%s: %s' % (text[text.find('(')+1:-1], var))
You can change n to be 2**20, but below I show what happens with small n so the output is easier to read.
n=100
N=int(np.exp(1./3*np.log(n)))
pv(N)
# N: 4
a=np.random.randint(N,size=n)
b=np.random.randint(N,size=n)
pv(a)
pv(b)
# a: [1 0 3 0 1 0 1 2 0 2 1 3 1 0 1 2 2 0 2 3 3 3 1 0 1 1 2 0 1 2 3 1 2 1 0 0 3
# 1 3 2 3 2 1 1 2 2 0 3 0 2 0 0 2 2 1 3 0 2 1 0 2 3 1 0 1 1 0 1 3 0 2 2 0 2
# 0 2 3 0 2 0 1 1 3 2 2 3 2 0 3 1 1 1 1 2 3 3 2 2 3 1]
# b: [1 3 2 1 1 2 1 1 1 3 0 3 0 2 2 3 2 0 1 3 1 0 0 3 3 2 1 1 2 0 1 2 0 3 3 1 0
# 3 3 3 1 1 3 3 3 1 1 0 2 1 0 0 3 0 2 1 0 2 2 0 0 0 1 1 3 1 1 1 2 1 1 3 2 3
# 3 1 2 1 0 0 2 3 1 0 2 1 1 1 1 3 3 0 2 2 3 2 0 1 3 1]
wa holds the number of 0s, 1s, 2s, 3s in a
wb holds the number of 0s, 1s, 2s, 3s in b
wa=np.bincount(a)
wb=np.bincount(b)
pv(wa)
pv(wb)
# wa: [24 28 28 20]
# wb: [21 34 20 25]
result=np.zeros(N,dtype='object')
Think of a 0 as a token or chip. Similarly for 1,2,3.
Think of wa=[24 28 28 20] as meaning there is a bag with 24 0-chips, 28 1-chips, 28 2-chips, 20 3-chips.
You have a wa-bag and a wb-bag. When you draw a chip from each bag, you "add" them together and form a new chip. You "mod" the answer (modulo N).
Imagine taking a 1-chip from the wb-bag and adding it with each chip in the wa-bag.
1-chip + 0-chip = 1-chip
1-chip + 1-chip = 2-chip
1-chip + 2-chip = 3-chip
1-chip + 3-chip = 4-chip = 0-chip (we are mod'ing by N=4)
Since there are 34 1-chips in the wb bag, when you add them against all the chips in the wa=[24 28 28 20] bag, you get
34*24 1-chips
34*28 2-chips
34*28 3-chips
34*20 0-chips
This is just the partial count due to the 34 1-chips. You also have to handle the other
types of chips in the wb-bag, but this shows you the method used below:
for i, count in enumerate(wb):
    partial_count = count * wa
    pv(partial_count)
    shifted_partial_count = np.roll(partial_count, i)
    pv(shifted_partial_count)
    result += shifted_partial_count
# partial_count: [504 588 588 420]
# shifted_partial_count: [504 588 588 420]
# partial_count: [816 952 952 680]
# shifted_partial_count: [680 816 952 952]
# partial_count: [480 560 560 400]
# shifted_partial_count: [560 400 480 560]
# partial_count: [600 700 700 500]
# shifted_partial_count: [700 700 500 600]
pv(result)
# result: [2444 2504 2520 2532]
This is the final result: 2444 0s, 2504 1s, 2520 2s, 2532 3s.
# This is a test to make sure the result is correct.
# This uses a very memory intensive method.
# c is too huge when n is large.
if n > 1000:
    print('n is too large to run the check')
else:
    c = (a[:] + b[:, np.newaxis])
    c = c.ravel()
    c = c % N
    result2 = np.bincount(c)
    pv(result2)
    assert(all(r1 == r2 for r1, r2 in zip(result, result2)))
# result2: [2444 2504 2520 2532]
Check your math; that's a lot of space you're asking for:
2^20 * 2^20 = 2^40 = 1,099,511,627,776
If each of your elements were just one byte, that's already one terabyte of memory.
Add a loop or two. This problem is not suited to maxing out your memory and minimizing your computation.
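For scale, a quick back-of-the-envelope check (assuming int64, NumPy's usual default integer dtype):

n = 2**20
pairs = n * n                # 1_099_511_627_776 pairwise sums
print(pairs * 8 / 2**40)     # 8.0 -> about 8 TiB for one full int64 meshgrid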
