Thanks for taking the time to read this :) I'm implementing my own version of quicksort in Python, and I'm trying to get it to work within some restrictions from a previous school assignment. Note that I've avoided using `in` because it wasn't allowed in the project I worked on (not sure why :3).
It was working fine for integers and strings, but I can't manage to adapt it for my CounterList(), which is a list of nodes each containing an arbitrary integer and string, even though I'm only sorting by the integers contained in those nodes.
Pastebins:
My QuickSort: http://pastebin.com/mhAm3YYp
The CounterList and CounterNode code: http://pastebin.com/myn5xuv6
from classes_1 import CounterNode, CounterList
def bulk_append(array1, array2):
    # Takes all the items in array2 and appends them to array1.
    itr = 0
    array = array1
    while itr < len(array2):
        array.append(array2[itr])
        itr += 1
    return array
def quickSort(array):
    lss = CounterList()
    eql = CounterList()
    mre = CounterList()
    if len(array) <= 1:
        return array  # Base case.
    else:
        pivot = array[len(array)-1].count  # Pivoting on the last item.
        itr = 0
        while itr < len(array)-1:
            # Essentially editing "for i in array:" to handle CounterLists.
            if array[itr].count < pivot:
                lss.append(array[itr])
            elif array[itr].count > pivot:
                mre.append(array[itr])
            else:
                eql.append(array[itr])
            itr += 1
        # Recursive step and combining separate lists.
        lss = quickSort(lss)
        eql = quickSort(eql)
        mre = quickSort(mre)
        fnl = bulk_append(lss, eql)
        fnl = bulk_append(fnl, mre)
        return fnl
I know it is probably quite straightforward, but I just can't seem to see the issue (pivoting on the last item).
Here is the test I'm using:
a = CounterList()
a.append(CounterNode("ack", 11))
a.append(CounterNode("Boo", 12))
a.append(CounterNode("Cah", 9))
a.append(CounterNode("Doh", 7))
a.append(CounterNode("Eek", 5))
a.append(CounterNode("Fuu", 3))
a.append(CounterNode("qck", 1))
a.append(CounterNode("roo", 2))
a.append(CounterNode("sah", 4))
a.append(CounterNode("toh", 6))
a.append(CounterNode("yek", 8))
a.append(CounterNode("vuu", 10))
x = quickSort(a)
print("\nFinal List: \n", x)
And the resulting CounterList:
['qck': 1, 'Fuu': 3, 'Eek': 5, 'Doh': 7, 'Cah': 9, 'ack': 11]
Which, as you can tell, is missing multiple values.
Either way, thanks for any advice and your time.
There are two mistakes in the code:
You don't need the "eql = quickSort(eql)" line, because eql contains only equal values, so there is nothing left to sort.
In every recursive call you lose the pivot (the reason for the missing entries), because you never append it to any list. You need to append it to eql. So after this line:
pivot = array[len(array)-1].count
insert this one:
eql.append(array[len(array)-1])
Also remove the line below from your code, as it can otherwise exceed the maximum recursion depth (once the pivot is in eql, recursing on eql never shrinks the list when the pivot value repeats):
eql = quickSort(eql)
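Putting both fixes together, the function would look roughly like this (a sketch of the corrected version, not tested against your CounterList implementation):

def quickSort(array):
    lss = CounterList()
    eql = CounterList()
    mre = CounterList()
    if len(array) <= 1:
        return array  # Base case.
    pivot = array[len(array)-1].count
    eql.append(array[len(array)-1])  # keep the pivot itself
    itr = 0
    while itr < len(array)-1:
        if array[itr].count < pivot:
            lss.append(array[itr])
        elif array[itr].count > pivot:
            mre.append(array[itr])
        else:
            eql.append(array[itr])
        itr += 1
    lss = quickSort(lss)  # no recursion on eql
    mre = quickSort(mre)
    fnl = bulk_append(lss, eql)
    return bulk_append(fnl, mre)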
Related
Suppose I have a list that goes like:
[1,2,3,4,9,10,11,20]
I need the result to be like:
[[4,9],[11,20]]
I have defined a function that goes like this:
def get_range(lst):
    i = 0
    seqrange = []
    for new in lst:
        a = []
        start = new
        end = new
        if i == 0:
            i = 1
            old = new
        else:
            if new - old > 1:
                a.append(old)
                a.append(new)
            old = new
        if len(a):
            seqrange.append(a)
    return seqrange
Is there any other easier and more efficient way to do it? I need to do this for lists in the range of millions of elements.
You can use numpy arrays and the diff function that comes along with them. Numpy is so much more efficient than looping when you have millions of rows.
Slight aside:
Why are numpy arrays so fast? Because they are arrays of data instead of arrays of pointers to data (which is what Python lists are), because they offload a whole bunch of computations to a backend written in C, and because they leverage the SIMD paradigm to run a Single Instruction on Multiple Data simultaneously.
Now back to the problem at hand:
The diff function gives us the difference between consecutive elements of the array. Pretty convenient, given that we need to find where this difference is greater than a known threshold!
import numpy as np
threshold = 1
arr = np.array([1,2,3,4,9,10,11,20])
deltas = np.diff(arr)
# There's a gap wherever the delta is greater than our threshold
gaps = deltas > threshold
gap_indices = np.argwhere(gaps)
gap_starts = arr[gap_indices]
gap_ends = arr[gap_indices + 1]
# Finally, stack the two arrays horizontally
all_gaps = np.hstack((gap_starts, gap_ends))
print(all_gaps)
# Output:
# [[ 4 9]
# [11 20]]
You can access all_gaps like a 2D matrix: all_gaps[0, 1] would give you 9, for example. If you really need the answer as a list-of-lists, simply convert it like so:
all_gaps_list = all_gaps.tolist()
print(all_gaps_list)
# Output: [[4, 9], [11, 20]]
Comparing the runtime of the iterative method from #happydave's answer with the numpy method:
import random
import timeit
import numpy as np

def gaps1(arr, threshold):
    deltas = np.diff(arr)
    gaps = deltas > threshold
    gap_indices = np.argwhere(gaps)
    gap_starts = arr[gap_indices]
    gap_ends = arr[gap_indices + 1]
    all_gaps = np.hstack((gap_starts, gap_ends))
    return all_gaps

def gaps2(lst, thr):
    seqrange = []
    for i in range(len(lst)-1):
        if lst[i+1] - lst[i] > thr:
            seqrange.append([lst[i], lst[i+1]])
    return seqrange

test_list = [i for i in range(100000)]
for i in range(100):
    test_list.remove(random.randint(0, len(test_list) - 1))
test_arr = np.array(test_list)

# Make sure both give the same answer:
assert np.all(gaps1(test_arr, 1) == gaps2(test_list, 1))

t1 = timeit.timeit('gaps1(test_arr, 1)', setup='from __main__ import gaps1, test_arr', number=100)
t2 = timeit.timeit('gaps2(test_list, 1)', setup='from __main__ import gaps2, test_list', number=100)
print(f"t1 = {t1}s; t2 = {t2}s; Numpy gives ~{t2 // t1}x speedup")
On my laptop, this gives:
t1 = 0.020834800001466647s; t2 = 1.2446780000027502s; Numpy gives ~59.0x speedup
My word that's fast!
Here is an iterator-based solution; it allows you to get the intervals one by one:
flist = [1,2,3,4,9,10,11,20]

def get_range(lst):
    start_idx = lst[0]
    for current_idx in lst[1:]:  # iterate over the argument, not the global flist
        if current_idx > start_idx + 1:
            yield [start_idx, current_idx]
        start_idx = current_idx

for interval in get_range(flist):
    print(interval)
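Since get_range is a generator, the intervals are produced lazily, which matters for millions of values: for example, next(get_range(flist)) returns just the first gap, [4, 9], without building the whole result list.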
I don't think there's anything inefficient about the solution, but you can clean up the code quite a bit:
seqrange = []
for i in range(len(lst)-1):
    if lst[i+1] - lst[i] > 1:
        seqrange.append([lst[i], lst[i+1]])
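The same cleanup as a single comprehension over zipped neighbors (equivalent, just more compact):

seqrange = [[a, b] for a, b in zip(lst, lst[1:]) if b - a > 1]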
I think this could be more efficient and a bit cleaner.
def func(lst):
    ans = 0
    final = []
    sol = []
    for i in range(1, lst[-1]+1):
        if i not in lst:
            ans += 1
            final.append(i)
        elif i in lst and ans > 0:
            final = [final[0]-1, i]
            sol.append(final)
            ans = 0
            final = []
        else:
            final = []
    return sol
I'm trying to solve the following simplified problem. I have an array with data and two arrays with start and end indices stored. What I would like is to double the values of the dataset that fall between the indices (so between and including start[0], end[0] and start[1], end[1] etc). I tried a nested loop as follows:
data = np.array([0,1,2,8,4,5,6,5,4,5,6,7,8])
start = np.array([0,5,7])
end = np.array([3,6,9])
new_data = np.zeros(len(data))
for i in range(len(start)):
    for j in range(len(data)):
        if (j >= start[i]) & (j <= end[i]):
            new_data[j] = data[j]*2
        else:
            new_data[j] = data[j]
The result should be [0,2,4,16,4,10,12,10,8,10,6,7,8], and yet the code returns:
[ 0. 1. 2. 8. 4. 5. 6. 10. 8. 10. 6. 7. 8.]
Only the part between the last indices is correct. Any ideas why? And what if I want to triple the values not satisfying the if statement?
You're repeatedly assigning to new_data, overwriting previous changes: on each pass through the outer loop, the else branch resets entries that an earlier interval had already doubled.
I.e.:
new_data[j] = data[j]*2  # won't work
data[j] = data[j]*2  # will work
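A minimal sketch of that fix, keeping your loop shape but starting new_data from a copy of data, so untouched entries keep their original values and the else disappears (assuming the intervals don't overlap):

new_data = data.copy()
for i in range(len(start)):
    for j in range(len(data)):
        if start[i] <= j <= end[i]:
            new_data[j] *= 2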
I'd suggest using Python's zip function in order to iterate over multiple lists in parallel.
Code explanation:
Since the data points which don't belong to any interval remain the same, I used numpy's copy method to create a new copy of your original data. Then I multiplied your desired slices by two.
import numpy as np
data = np.array([0, 1, 2, 8, 4, 5, 6, 5, 4, 5, 6, 7, 8])
start = np.array([0, 5, 7])
end = np.array([3, 6, 9])
new_data = data.copy() #Not to mess up our original dataset.
for s, e in zip(start, end):
    new_data[s:e+1] *= 2  # because it's an inclusive range
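Printing new_data then gives exactly the expected result from the question:

print(new_data)
# [ 0  2  4 16  4 10 12 10  8 10  6  7  8]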
Initialize new_data with data and remove the else.
Edit:
This should answer your question, and it's more efficient: your version iterates through the whole array once per interval, which will be slow for large inputs, whereas here you only visit the indices inside each interval, so you effectively go through the array once.
And if you want to triple the other elements, a naive solution is to multiply new_data by 3 before the for loop, i.e. new_data = np.copy(data)*3.
import numpy as np

data = np.array([0,1,2,8,4,5,6,5,4,5,6,7,8])
start = np.array([0,5,7])
end = np.array([3,6,9])

new_data = np.copy(data)  # change to new_data = np.copy(data)*3 to triple the other elements
for i in range(len(start)):
    for j in range(start[i], end[i]+1):
        new_data[j] = data[j]*2
Another note on your code: (j >= start[i]) & (j <= end[i]) can be simplified to the chained comparison start[i] <= j <= end[i], which is probably also a bit faster.
I am still learning about nested loops in Python.
Below is my code. I want to make it simpler, since it takes so much time to produce the result when I run it.
I have a list which contains 1000 values:
Brake_index_values = [ 44990678, 44990679, 44990680, 44990681, 44990682, 44990683,
44997076, 44990684, 44997077, 44990685,
...
44960673, 8195083, 8979525, 100107546, 11089058, 43040161,
43059162, 100100533, 10180192, 10036189]
I store element no. 1 in another list:
original_top_brake_index = [Brake_index_values[0]]
I created a temporary list called temp and a numpy array for iteration through the loop:
temp =[]
arr = np.arange(0,1000,1)
The loop operation:
for i in range(1, len(Brake_index_values)):
    if top_15_brake <= 15:
        a1 = Brake_index_values[i]
        #a2 = Brake_index_values[j]
        a3 = arr[:i]
        for j in a3:
            a2 = range(Brake_index_values[j] - 30000, Brake_index_values[j] + 30000)
            if a1 in a2:
                pass
            else:
                temp.append(a1)
        if len(temp) == len(a3):
            original_top_brake_index.append(a1)
            top_15_brake += 1
            del temp[:]
        else:
            del temp[:]
            continue
I am checking whether Brake_index_values[1] falls within the range of 30000 before and after Brake_index_values[0], that is range(Brake_index_values[0]-30000, Brake_index_values[0]+30000).
If Brake_index_values[1] is inside that range, I should ignore it and move on to the next element, Brake_index_values[2], repeating the same comparison against Brake_index_values[0] and Brake_index_values[1].
If it is outside the range, I store the value in original_top_brake_index through an append operation.
In other words:
Take three values a, b and c. I check whether b is in the range (a-30000 to a+30000). Possibility 1: if b is between (a-30000 and a+30000), neglect that element (here I store it inside a temporary list), and the same process continues with c (the next element). Possibility 2: if b is not between that range, put b in another list called original_top_brake_index.
(This other list is the actual result I need.)
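In plain Python, the logic I'm after looks roughly like this (a sketch, ignoring the exact range boundaries):

original_top_brake_index = [Brake_index_values[0]]
top_15_brake = 0
for i in range(1, len(Brake_index_values)):
    v = Brake_index_values[i]
    # keep v only if it is more than 30000 away from every earlier value
    if all(abs(v - prev) > 30000 for prev in Brake_index_values[:i]):
        original_top_brake_index.append(v)
        top_15_brake += 1
        if top_15_brake > 15:
            break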
The result I get:
It works, but it takes so much time to complete the operation, and sometimes it shows a MemoryError.
I just want my code to be simpler and more efficient, using simple operations.
Try this code (with numpy):
import numpy as np

original_top_brake_index = [Brake_index_values[0]]
top_15_brake = 0
Brake_index_values = np.array(Brake_index_values)
for i, a1 in enumerate(Brake_index_values[1:], 1):
    if top_15_brake > 15:
        break
    m = (Brake_index_values[:i] - a1)
    if np.logical_or(m > 30000, m < -30000).all():
        original_top_brake_index.append(a1)
        top_15_brake += 1
Note: you can probably make it even more efficient, but this should already reduce the number of operations significantly (and it doesn't change the logic of your original code much).
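For instance, with a made-up short list (values invented for illustration):

Brake_index_values = [44990678, 44990679, 45990680, 8195083]
# 44990679 falls within 30000 of 44990678, so the loop above keeps
# [44990678, 45990680, 8195083]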
We can use the bisect module to cut down on the elements we actually have to look at, by finding the closest element less than, and the closest element greater than, the current value. We will use the find_lt/find_gt recipes from the bisect documentation.
Let's look at this example:
from bisect import bisect_left, bisect_right

def find_lt(a, x):
    'Find rightmost value less than x'
    i = bisect_left(a, x)
    if i:
        return a[i-1]
    return

def find_gt(a, x):
    'Find leftmost value greater than x'
    i = bisect_right(a, x)
    if i != len(a):
        return a[i]
    return

vals = [44990678, 44990679, 44990680, 44990681, 44990682, 589548954, 493459734, 3948305434, 34939349534]
vals.sort()  # we have to sort the values for bisect to work

passed = []
originals = []
for val in vals:
    passed.append(val)
    l = find_lt(passed, val)
    m = find_gt(passed, val)
    cond1 = (l and l + 30000 >= val)
    cond2 = (m and m - 30000 <= val)
    if not l and not m:
        originals.append(val)
        continue
    elif cond1 or cond2:
        continue
    else:
        originals.append(val)
Which gives us:
print(originals)
[44990678, 493459734, 589548954, 3948305434, 34939349534]
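Since vals is sorted up front and passed therefore grows in sorted order, each find_lt/find_gt call is a single O(log n) bisect, so the whole pass costs O(n log n) instead of the quadratic rescan in the original loop.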
There might be another, more mathematical way to do this, but this should at least simplify your code.
So I have a problem that might be super duper simple.
I have these numpy ndarrays that I allocated, and I want to assign values to them via indices returned as lists. It might be easier if I showed you some example code. The questionable code is at the bottom, and in my testing (before actually taking this to scale) I keep getting syntax errors :'(
EDIT: edited to make it easier to troubleshoot, with some example code at the bottom.
import numpy as np

def do_stuff(index, mask):
    # this is where the calculations are made
    magic = sum(mask)
    return index, magic

def foo(full_index, comparison_dims, *xargs):
    # I have this function executed in parallel, since I'm using a machine with 36 cores
    # per node and can access up to 16 cores for each script #blessed
    # figure out how many dimensions there are, and how big they are
    parent_dims = []
    parent_diffs = []
    for j in xargs:
        parent_dims += [len(j)]
        parent_diffs += [j[1] - j[0]]  # this is used to find a mask
    index = []  # this is where the individual dimension indices will be stored
    dim_n = 0
    # loop through the dimensions
    while dim_n < len(parent_dims):
        dim_index = full_index % parent_dims[dim_n]
        index += [dim_index]
        if dim_n == 0:
            mask = (comparison_dims[dim_n] > xargs[dim_n][dim_index] - parent_diffs[dim_n]/2) * \
                   (comparison_dims[dim_n] <= xargs[dim_n][dim_index] + parent_diffs[dim_n]/2)
        else:
            mask *= (comparison_dims[dim_n] > xargs[dim_n][dim_index] - parent_diffs[dim_n]/2) * \
                    (comparison_dims[dim_n] <= xargs[dim_n][dim_index] + parent_diffs[dim_n]/2)
        full_index //= parent_dims[dim_n]
        dim_n += 1
    return do_stuff(index, mask)

def bar(comparison_dims, *xargs):
    if len(xargs) == comparison_dims.shape[0]:
        pass
    elif len(comparison_dims.shape) == 2:
        pass
    else:
        raise ValueError("silly person, you failed")
    from joblib import Parallel, delayed

    dims = []
    for j in xargs:
        dims += [len(j)]
    myArray = np.empty(tuple(dims))
    results = Parallel(n_jobs=1)(
        delayed(foo)(index, comparison_dims, *xargs)
        for index in range(np.prod(dims))
    )
    # LOOK HERE, HELP HERE!
    for index_list, result in results:
        # I thought this would work, but oh golly was I wrong; index_list here is a list of ints, and result is a value
        # for example index, result = [0,3,7], 45.4
        # so in execution, that would yield: myArray[0,3,7] = 45.4
        # instead it yields SyntaxError because I don't know what I'm doing XD
        myArray[*index_list] = result
    return myArray
Any ideas how I can make that work? What do I need to do?
I'm not the sharpest tool in the shed, but I think with your help we might be able to figure this out!
A quick example to troubleshoot this problem:
compareDims = np.array([np.random.rand(1000), np.random.rand(1000)])
dim0 = np.arange(0,1,1./20)
dim1 = np.arange(0,1,1./30)
myArray = bar(compareDims, dim0, dim1)
To index a numpy array with an arbitrary list of multidimensional indices, you actually need to use a tuple:
for index_list, result in results:
    myArray[tuple(index_list)] = result
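As a standalone illustration of the difference (the array shape and indices here are made up):

import numpy as np

myArray = np.zeros((2, 4, 8))
index_list = [0, 3, 7]
myArray[tuple(index_list)] = 45.4  # same as myArray[0, 3, 7] = 45.4
# myArray[index_list] would instead be treated as fancy indexing along
# the first axis, which is not a single multidimensional index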
I am currently using Python and numpy to calculate correlations between two lists, data_0 and data_1. Each list contains respectively sorted times t0 and t1.
I want to find all the events where 0 < t1 - t0 < t_max.
for time_0 in np.nditer(data_0):
    delta_time = np.subtract(data_1, np.full(data_1.size, time_0))
    delta_time = delta_time[delta_time >= 0]
    delta_time = delta_time[delta_time < time_max]
Doing so, since the lists are sorted, I am selecting a subarray of data_1 of the form data_1[index_min:index_max].
So in fact I need to find two indices to get what I want.
What's interesting is that when I move on to the next time_0, since data_0 is also sorted, I only need to find the new index_min / index_max such that new_index_min >= index_min and new_index_max >= index_max.
That means I don't need to scan all of data_1 again from scratch.
I implemented such a solution without the numpy methods (just with a while loop), and it gives me the same results as before, but much slower (15 times longer!).
I think that, since it normally requires fewer calculations, there should be a way to make it faster using numpy methods, but I don't know how to do it.
Does anyone have an idea?
I am not sure I am being super clear, so if you have any questions, do not hesitate.
Thank you in advance,
Paul
Here is a vectorized approach using argsort. It uses a strategy similar to your avoid-full-scan idea:
import numpy as np

def find_gt(ref, data, incl=True):
    out = np.empty(len(ref) + len(data) + 1, int)
    total = (data, ref) if incl else (ref, data)
    out[1:] = np.argsort(np.concatenate(total), kind='mergesort')
    out[0] = -1
    split = (out < len(data)) if incl else (out >= len(ref))
    if incl:
        out[~split] -= len(data)
    split[0] = False
    return np.maximum.accumulate(np.where(split, -1, out))[split] + 1

def find_intervals(ref, data, span, incl=(True, True)):
    index_min = find_gt(ref, data, incl[0])
    index_max = len(ref) - find_gt(-ref[::-1], -span-data[::-1], incl[1])[::-1]
    return index_min, index_max

ref = np.sort(np.random.randint(0, 20000, (10000,)))
data = np.sort(np.random.randint(0, 20000, (10000,)))
span = 2

idmn, idmx = find_intervals(ref, data, span, (True, True))

print('checking')
for d, mn, mx in zip(data, idmn, idmx):
    assert mn == len(ref) or ref[mn] >= d
    assert mn == 0 or ref[mn-1] < d
    assert mx == len(ref) or ref[mx] > d+span
    assert mx == 0 or ref[mx-1] <= d+span
print('ok')
It works by:
- indirectly sorting both sets together
- finding, for each time in one set, the preceding time in the other (this is done using maximum.accumulate)
- applying the preceding steps twice, the second time with the times in one set shifted by span
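For comparison, a simpler way to get both indices per t0 at once is np.searchsorted, which directly implements the question's "find index_min and index_max without rescanning" idea (a sketch; the boundary conventions follow the question's code, which keeps t1 - t0 >= 0):

import numpy as np

data_0 = np.sort(np.random.randint(0, 20000, 10000))
data_1 = np.sort(np.random.randint(0, 20000, 10000))
time_max = 2

# For each t0: first index with t1 >= t0, and first index with t1 >= t0 + time_max
index_min = np.searchsorted(data_1, data_0, side='left')
index_max = np.searchsorted(data_1, data_0 + time_max, side='left')
counts = index_max - index_min  # number of matching t1 per t0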