Is there a constant time solution to the safe queens problem? (Python)

I'm working on the safe queens problem and I'm curious if there is a constant time solution to the question.
Safe Queens
In chess the queen is the most powerful piece, as it can move any number of unoccupied squares in a straight line along columns (called "files"), rows (called "ranks"), and diagonals. It is, however, possible to place 8 queens on a board such that none are threatening another.
In this challenge we read an input of 8 positions in standard algebraic notation (files "a" to "h" and ranks "1" to "8"). Our goal is to determine, using the safe_queens function, if all of the queens are safe from each other – that is, none of them should share a file, rank, or diagonal. If they are all safe, the function should print 'YES', otherwise it should print 'NO'.
Example:
Input:
a5
b3
c1
d7
e2
f8
g6
h4
(example input positions shown on board below)
Output:
YES
This example outputs 'YES' because no queens share a file, rank, or diagonal. If the a5 were an a4 it would output 'NO' because the leftmost two queens would be on the same diagonal.
I believe you could transform the solution into constant time by keeping track of visited squares and checking for interference on the diagonals, but I'm not 100% sure.
def safe_queens():
    positions = [input() for _ in range(8)]
    # if there isn't exactly one queen per file and per rank then return 'NO'
    if (len({i[0] for i in positions}) != 8) or (
            len({i[1] for i in positions}) != 8):
        return 'NO'
    # to check the diagonals, compare the difference between each pair of
    # queens' rank values and file values; abs() covers both diagonal directions
    for p in range(8):
        for q in range(p + 1, 8):
            if abs(ord(positions[p][0]) - ord(positions[q][0])) == (
                    abs(int(positions[p][1]) - int(positions[q][1]))):
                return 'NO'
    return 'YES'
Link to colab notebook with the problem description and my solution: https://colab.research.google.com/drive/1sJN6KnQ0yN-E_Uj6fAgQCNdOO7rcif8G
The solution above is O(n^2), but I'm looking for a constant time solution. Thanks!

Just do the same for the diagonals as you do for the files and ranks: make a set of all diagonal numbers and check for the length of the set. You can number the diagonals this way:
diagonals1 = {ord(position[0]) - int(position[1]) for position in positions}
if len(diagonals1) < 8:
    return 'NO'
diagonals2 = {ord(position[0]) + int(position[1]) for position in positions}
if len(diagonals2) < 8:
    return 'NO'
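Putting both checks together, the whole function becomes a handful of set constructions. A sketch (I've changed the function to take the positions as a parameter instead of reading input(), purely to make it easy to test):

```python
def safe_queens(positions):
    """Return 'YES' if the 8 queens are mutually safe, else 'NO'."""
    files = {p[0] for p in positions}                   # 'a'..'h'
    ranks = {p[1] for p in positions}                   # '1'..'8'
    diag1 = {ord(p[0]) - int(p[1]) for p in positions}  # one diagonal direction
    diag2 = {ord(p[0]) + int(p[1]) for p in positions}  # the other direction
    # all safe iff every set has one distinct entry per queen
    if len(files) == len(ranks) == len(diag1) == len(diag2) == 8:
        return 'YES'
    return 'NO'
```

This mirrors the example above: the eight positions a5..h4 give 'YES', and changing a5 to a4 gives 'NO'.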

Related

How to use random numbers to execute a one dimensional random walk in python?

Start with a one dimensional space of length m, where m = 2 * n + 1. Take a step either to the left or to the right at random, with equal probability. Continue taking random steps until you go off one edge of the space, for which I'm using while 0 <= position < m.
We have to write a program that executes the random walk. We have to create a 1D space using size n = 5 and place the marker in the middle. Every step, move it either to the left or to the right using the random number generator. There should be an equal probability that it moves in either direction.
I have an idea for the plan but do not know how to write it in python:
Initialize n = 1, m = 2n + 1, and j = n + 1.
Loop until j = 0 or j = m + 1 as shown. At each step:
Move j left or right at random.
Display the current state of the walk, as shown.
Make another variable to count the total number of steps.
Initialize this variable to zero before the loop.
However j moves always increase the step count.
After the loop ends, report the total steps.
1 - Start with a list initialized with 5 items (maybe None?)
2 - place the walker at index 2
3 - randomly choose a direction (-1 or +1)
4 - move the walker in the chosen direction
5 - maybe print the space and mark the location of the walker
6 - repeat at step 3 as many times as needed
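The plan above translates almost step for step into Python. A sketch (the seed parameter is my addition, just to make runs reproducible; it is not part of the assignment):

```python
import random

def random_walk(n=5, seed=None):
    """Walk on a 1-D space of m = 2*n + 1 cells, starting in the middle.

    Each step moves left or right with equal probability and prints the
    current state of the space, until the walker falls off either edge.
    Returns the total number of steps taken.
    """
    rng = random.Random(seed)
    m = 2 * n + 1
    position = n            # middle index of range(m)
    steps = 0
    while 0 <= position < m:
        position += rng.choice((-1, 1))   # equal probability either way
        steps += 1
        row = ['.'] * m                   # display the current state
        if 0 <= position < m:
            row[position] = '*'
        print(''.join(row))
    return steps
```

Note the loop condition is exactly the `while 0 <= position < m` from the problem statement, so the step that walks off the edge is still counted.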

Minimum number of iterations in matrix where cell value replaced by maximum of neighbour cell value in single iteration

I have a matrix with values in each cell (minimum value = 1), where the maximum value is 'max'.
At each iteration, every cell's value is replaced by the highest value among its 8 neighboring cells, and this happens for the whole matrix simultaneously. I want to find the minimum number of iterations after which all cells hold 'max'.
One brute force method of doing this is by padding the matrix by zeros, and
for i in range(1, x_max + 1):
    for j in range(1, y_max + 1):
        maximum = 0
        for k in range(-1, 2):
            for l in range(-1, 2):
                if matrix[i+k][j+l] > maximum:
                    maximum = matrix[i+k][j+l]
        # NB: this overwrites in place; for truly simultaneous updates,
        # the result should be written into a copy of the matrix
        matrix[i][j] = maximum
But is there an intelligent and faster way of doing this?
Thanks in advance.
I think this can be solved by BFS (Breadth-First Search).
Start BFS simultaneously from all the matrix cells with the 'max' value.
dis[][] = infinite   // min. distance of cell from nearest cell with 'max' value, initially infinite for all
Q                    // queue
M[][]                // matrix
for all i, j:        // traverse the matrix, enqueue all cells with 'max'
    if M[i][j] == 'max':
        dis[i][j] = 0, Q.push(cell(i, j))
while !Q.empty:
    cell Current = Q.front
    for all neighbours cell(p, q) of Current:
        if dis[p][q] == infinite:
            dis[p][q] = dis[Current.row][Current.column] + 1
            Q.push(cell(p, q))
    Q.pop()
The maximum of dis[i][j] over all i, j is the number of iterations needed.
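A minimal Python version of this multi-source BFS might look like the following (a sketch of the pseudocode above, using collections.deque as the queue and None for "infinite"):

```python
from collections import deque

def min_iterations(matrix):
    """Rounds of simultaneous neighbour-max updates until all cells hold max."""
    rows, cols = len(matrix), len(matrix[0])
    target = max(max(row) for row in matrix)
    dist = [[None] * cols for _ in range(rows)]    # None = not reached yet
    q = deque()
    for i in range(rows):                          # enqueue all 'max' cells
        for j in range(cols):
            if matrix[i][j] == target:
                dist[i][j] = 0
                q.append((i, j))
    while q:
        i, j = q.popleft()
        for di in (-1, 0, 1):                      # all 8 neighbours
            for dj in (-1, 0, 1):
                p, r = i + di, j + dj
                if 0 <= p < rows and 0 <= r < cols and dist[p][r] is None:
                    dist[p][r] = dist[i][j] + 1
                    q.append((p, r))
    return max(max(row) for row in dist)
```

Each cell is enqueued exactly once, so the running time is O(wh) as stated above.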
Use an array with a "border".
Testing the edge conditions is tedious and can be avoided by making the array 1-bigger around the edge, each element with the value of INT_MIN.
Additionally, consider 8 tests, rather than a double nested loop
// Data is in matrix[1...N][1...M], yet is size matrix[N+2][M+2]
for (i = 1; i <= N; i++) {
    for (j = 1; j <= M; j++) {
        maximum = matrix[i-1][j-1];
        if (matrix[i-1][j+0] > maximum) maximum = matrix[i-1][j+0];
        if (matrix[i-1][j+1] > maximum) maximum = matrix[i-1][j+1];
        if (matrix[i+0][j-1] > maximum) maximum = matrix[i+0][j-1];
        if (matrix[i+0][j+0] > maximum) maximum = matrix[i+0][j+0];
        if (matrix[i+0][j+1] > maximum) maximum = matrix[i+0][j+1];
        if (matrix[i+1][j-1] > maximum) maximum = matrix[i+1][j-1];
        if (matrix[i+1][j+0] > maximum) maximum = matrix[i+1][j+0];
        if (matrix[i+1][j+1] > maximum) maximum = matrix[i+1][j+1];
        newmatrix[i][j] = maximum;
    }
}
All existing answers require examining every cell in the matrix. If you don't already know what the locations of the maximum value are, this is unavoidable, and in that case, Amit Kumar's BFS algorithm has optimal time complexity: O(wh), if the matrix has width w and height h.
OTOH, perhaps you already know the locations of the k maximum values, and k is relatively small. In that case, the following algorithm will find the answer in just O(k^2*(log(k)+log(max(w, h)))) time, which is much faster when either w or h is large. It doesn't actually look at any matrix entries; instead, it runs a binary search to look for candidate stopping times (that is, answers). For each candidate stopping time it builds the set of rectangles that would be occupied by max by that time, and checks whether any matrix cell remains uncovered by a rectangle.
To explain the idea, we first need some terms. Call the top row of a rectangle a "starting vertical event", and the row below its bottom edge an "ending vertical event". A "basic interval" is the interval of rows spanned by any pair of vertical events that does not have a third vertical event anywhere between them (the event pairs defining these intervals can be from the same or different rectangles). Notice that with k rectangles, there can never be more than 2k+1 basic intervals -- there is no dependence here on h.
The basic idea is to walk left-to-right through the columns of the matrix that correspond to horizontal events: columns in which either a new rectangle "starts" (the left vertical edge of a rectangle), or an existing rectangle "finishes" (the column to the right of the right vertical edge of a rectangle), keeping track of how many rectangles are currently covering every basic interval. If we ever detect a basic interval covered by 0 rectangles, we can stop: we have found a column containing one or more cells that are not yet covered at time t. If we get to the right edge of the matrix without this happening, then all cells are covered at time t.
Here is pseudocode for a function that checks whether any matrix cell remains uncovered by time t, given a length-k array peak, where (peak[i].x, peak[i].y) is the location of the i-th max-containing cell in the original matrix, in increasing order of x co-ordinate (so the leftmost max-containing cell is at (peak[1].x, peak[1].y)).
Function IsMatrixCovered(t, peak[]) {
    # Discover all vertical events and basic intervals
    Let vertEvents[] be an empty array of integers.
    For i from 1 to k:
        top = max(1, peak[i].y - t)
        bot = min(h, peak[i].y + t)
        Append top to vertEvents[]
        Append bot+1 to vertEvents[]
    Sort vertEvents in increasing order, and remove duplicates.

    x = 1
    Let horizEvents[] be an empty array of { col, type, top, bot } structures.
    For i from 1 to k:
        # Calculate the (clipped) rectangle that peak[i] will cover at time t:
        lft = max(1, peak[i].x - t)
        rgt = min(w, peak[i].x + t)
        top = max(1, peak[i].y - t)
        bot = min(h, peak[i].y + t)
        # Convert vertical positions to vertical event indices
        top = LookupIndexUsingBinarySearch(top, vertEvents[])
        bot = LookupIndexUsingBinarySearch(bot+1, vertEvents[])
        # Record horizontal events
        Append (lft, START, top, bot) to horizEvents[]
        Append (rgt+1, STOP, top, bot) to horizEvents[]
    Sort horizEvents in increasing order by its first 2 fields, with START considered < STOP.

    # Walk through all horizontal events, from left to right.
    Let basicIntervals[] be an array of size(vertEvents[]) integers, initially all 0.
    nOccupiedBasicIntervalsFirstCol = 0
    For i from 1 to size(horizEvents[]):
        If horizEvents[i].type = START:
            d = 1
        Else (if it is STOP):
            d = -1
        If horizEvents[i].col <= w:
            For j from horizEvents[i].top to horizEvents[i].bot:
                If horizEvents[i].col = 1 and basicIntervals[j] = 0:
                    ++nOccupiedBasicIntervalsFirstCol  # Must be START
                basicIntervals[j] += d
                If basicIntervals[j] = 0:
                    return FALSE
    If nOccupiedBasicIntervalsFirstCol < size(basicIntervals):
        return FALSE  # Could have checked earlier, but the code is simpler this way
    return TRUE
}
The above function can simply be called inside a binary search on t, that looks for the smallest value of t for which the function returns TRUE.
A further factor of k/log(k) could be removed by exploiting the fact that the set of basic intervals affected by any rectangle starting or ending is always an interval, through the use of Fenwick trees.
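To make the outer binary search concrete, here is a small sketch in Python. I have substituted a brute-force O(whk) coverage test for the fast IsMatrixCovered above (so it is slow but easy to verify); cells are 0-indexed, and a peak's rectangle at time t is everything within Chebyshev distance t of it:

```python
def covered(t, peaks, w, h):
    """Brute-force stand-in for IsMatrixCovered: is every cell within
    Chebyshev distance t of some peak?"""
    return all(any(max(abs(x - px), abs(y - py)) <= t for px, py in peaks)
               for x in range(w) for y in range(h))

def smallest_covering_time(peaks, w, h):
    """Binary search for the smallest t at which the whole matrix is covered."""
    lo, hi = 0, max(w, h)           # t = max(w, h) always suffices
    while lo < hi:
        mid = (lo + hi) // 2
        if covered(mid, peaks, w, h):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

The binary search is valid because coverage is monotone in t: once the matrix is covered at some time, it stays covered at every later time.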

python codefights Count The Black Boxes, modeling the diagonal of a rectangle

Given the dimensions of a rectangle,(m,n), made up of unit squares, output the number of unit squares the diagonal of the rectangle touches- that includes borders and vertices.
My algorithm approaches this by cycling through all the unit squares (under the assumption that we can draw our diagonal from (0,0) to (m,n)).
My algorithm solves 9 of 10 tests, but is too inefficient to solve the tenth test in given time.
I'm open to all efficiency suggestions, but in the name of asking a specific question... I seem to have a disconnect in my own logic concerning adding a break statement to cut some steps out of the process. My thinking is this shouldn't matter, but it does affect the result, and I haven't been able to figure out why.
So, can someone help me understand how to insert a break that doesn't affect the output?
Or how to eliminate a loop? I'm currently using nested loops.
So, yeah, I think my problems are algorithmic rather than syntactic.
def countBlackCells(m, n):
    counter = 0
    y = [0, 0]
    testV = 0
    for i in xrange(n):  # loop over m/x first
        y[0] = float(m)/n * i
        y[1] = float(m)/n * (i+1)
        #print(str(y))
        for j in xrange(m):  # loop over every n/y for each x
            # is the min of the line in range inside the box? is the max?
            if (y[0] <= (j+1) and y[0] >= j) or (y[1] >= j and y[1] <= j+1):
                counter += 1
                #testV += 1
            else:
                pass  # break  # thinking that once you are beyond the line in
                # either direction, you're not coming back to it by ranging up
                # m any more. THAT DOESN'T SEEM TO BE THE CASE
    # tried adding a flag (testV) so the inner loop would only break if the
    # line was found and then lost again; still didn't count ALL boxes.
    # There's something I'm not understanding here.
    return counter
Some sample, input/output
Input:
n: 3
m: 4
Output:
6
Input:
n: 3
m: 3
Output:
7
Input:
n: 33
m: 44
Output:
86
Find G - the greatest common divisor of m and n.
If G > 1 then diagonal intersects G-1 inner vertices, touching (not intersecting) 2*(G-1) cells.
And between these inner vertices there are G sub-rectangles with mutually prime sides M x N (m/G x n/G)
Now consider case of mutually prime M and N. Diagonal of such rectangle does not intersect any vertex except for starting and ending. But it must intersect M vertical lines and N horizontal lines, and at every intersection diagonal enters into the new cell, so it intersects M + N - 1 cells (subtract 1 to account for the first corner where both vertical and horizontal lines are met together)
So use these clues to deduce the final solution: G sub-rectangles each contribute M + N - 1 intersected cells, plus the 2*(G - 1) touched cells, giving G*(M + N - 1) + 2*(G - 1) = (m + n - G) + 2*(G - 1) = m + n + gcd(m, n) - 2.
I used math.gcd() (available since Python 3.5) to solve the problem in Python.
import math

def countBlackCells(n, m):
    return m + n + math.gcd(m, n) - 2

k-greatest double selection

Imagine you have two sacks (A and B) with N and M balls respectively in it. Each ball with a known numeric value (profit). You are asked to extract (with replacement) the pair of balls with the maximum total profit (given by the multiplication of the selected balls).
The best extraction is obvious: Select the greatest valued ball from A as well as from B.
The problem comes when you are asked to give the 2nd or kth best selection. Following the previous approach you should select the greatest valued balls from A and B without repeating selections.
This can be clumsily solved by calculating the value of every possible selection and then sorting them (example in Python):
def solution(A, B, K):
    if K < 1:
        return 0
    pool = []
    for a in A:
        for b in B:
            pool.append(a*b)
    pool.sort(reverse=True)
    if K > len(pool):
        return 0
    return pool[K-1]
This works, but its worst-case time complexity is O(N*M*log(N*M)), and I bet there are better solutions.
I reached a solution based on a table where A and B elements are sorted from higher value to lower and each of these values has associated an index representing the next value to test from the other column. Initially this table would look like:
The first element from A is 25 and it has to be tested (index 2 select from b = 0) against 20 so 25*20=500 is the first greatest selection and, after increasing the indexes to check, the table changes to:
Using these indexes we have a swift way to get the best selection candidates:
25 * 20 = 500 #first from A and second from B
20 * 20 = 400 #second from A and first from B
I tried to code this solution:
def solution(A, B, K):
    if K < 1:
        return 0
    sa = sorted(A, reverse=True)
    sb = sorted(B, reverse=True)
    for k in xrange(K):
        i = xfrom
        j = yfrom
        if i >= n and j >= n:
            ret = 0
            break
        best = None
        while i < n and j < n:
            selected = False
            # From left
            nexti = i
            nextj = sa[i][1]
            a = sa[nexti][0]
            b = sb[nextj][0]
            if best is None or best[2] < a*b:
                selected = True
                best = [nexti, nextj, a*b, 'l']
            # From right
            nexti = sb[j][1]
            nextj = j
            a = sa[nexti][0]
            b = sb[nextj][0]
            if best is None or best[2] < a*b:
                selected = True
                best = [nexti, nextj, a*b, 'r']
            # Keep looking?
            if not selected or abs(best[0] - best[1]) < 2:
                break
            i = min(best[:2]) + 1
            j = i
            print("Continue with: ", best, selected, i, j)
        # go, go, go
        print(best)
        if best[3] == 'l':
            dx[best[0]][1] = best[1] + 1
            dy[best[1]][1] += 1
        else:
            dx[best[0]][1] += 1
            dy[best[1]][1] = best[0] + 1
        if dx[best[0]][1] >= n:
            xfrom = best[0] + 1
        if dy[best[1]][1] >= n:
            yfrom = best[1] + 1
        ret = best[2]
    return ret
But it did not work for the on-line Codility judge. (Did I mention this is part of the solution to an already-expired Codility challenge? Sillicium 2014.)
My questions are:
Is the second approach an unfinished good solution? If that is the case, any clue on what I may be missing?
Do you know any better approach for the problem?
You need to maintain a priority queue.
You start with (sa[0], sb[0]), then move onto (sa[0], sb[1]) and (sa[1], sb[0]). If (sa[0] * sb[1]) > (sa[1] * sb[0]), can we say anything about the comparative sizes of (sa[0], sb[2]) and (sa[1], sb[0])?
The answer is no. Thus we must maintain a priority queue, and after removing each (sa[i], sb[j]) (such that sa[i] * sb[j] is the biggest in the queue), we must add to the priority queue (sa[i + 1], sb[j]) and (sa[i], sb[j + 1]), and repeat this k times.
Incidentally, I gave this algorithm as an answer to a different question. The algorithm may seem to be different at first, but essentially it's solving the same problem.
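The priority-queue idea can be sketched in Python with heapq (a min-heap, so products are negated). Both arrays are sorted descending, so after popping the pair at (i, j) the candidates to push are (i+1, j) and (i, j+1); this sketch assumes non-negative values, so products decrease as either index grows:

```python
import heapq

def kth_largest_product(A, B, k):
    """k-th largest a*b over all pairs (a, b) from A x B, 1-indexed k."""
    sa = sorted(A, reverse=True)
    sb = sorted(B, reverse=True)
    heap = [(-sa[0] * sb[0], 0, 0)]     # max-heap via negated products
    seen = {(0, 0)}                     # never push the same index pair twice
    for _ in range(k - 1):
        _, i, j = heapq.heappop(heap)
        for ni, nj in ((i + 1, j), (i, j + 1)):
            if ni < len(sa) and nj < len(sb) and (ni, nj) not in seen:
                seen.add((ni, nj))
                heapq.heappush(heap, (-sa[ni] * sb[nj], ni, nj))
    return -heap[0][0]
```

Each of the k rounds pops once and pushes at most twice, so the heap holds O(k) entries and the whole run is O(k log k) after the initial sorts.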
I'm not sure I understand the "with replacement" bit...
...but assuming this is in fact the same as "How to find pair with kth largest sum?", then the key to the solution is to consider the matrix S of all the sums (or products, in your case), constructed from A and B (once they are sorted) -- this paper (referenced by #EvgenyKluev) gives this clue.
(You want A*B rather than A+B... but the answer is the same -- though negative numbers complicate but (I think) do not invalidate the approach.)
An example shows what is going on:
for A = (2, 3, 5, 8, 13)
and B = (4, 8, 12, 16)
we have the (notional) array S, where S[r, c] = A[r] + B[c], in this case:
6 ( 2+4), 10 ( 2+8), 14 ( 2+12), 18 ( 2+16)
7 ( 3+4), 11 ( 3+8), 15 ( 3+12), 19 ( 3+16)
9 ( 5+4), 13 ( 5+8), 17 ( 5+12), 21 ( 5+16)
12 ( 8+4), 16 ( 8+8), 20 ( 8+12), 24 ( 8+16)
17 (13+4), 21 (13+8), 25 (13+12), 29 (13+16)
(As the referenced paper points out, we don't need to construct the array S, we can generate the value of an item in S if or when we need it.)
The really interesting thing is that each column of S contains values in ascending order (of course), so we can extract the values from S in descending order by doing a merge of the columns (reading from the bottom).
Of course, merging the columns can be done using a priority queue (heap) -- hence the max-heap solution. The simplest approach being to start the heap with the bottom row of S, marking each heap item with the column it came from. Then pop the top of the heap, and push the next item from the same column as the one just popped, until you pop the kth item. (Since the bottom row is sorted, it is a trivial matter to seed the heap with it.)
The complexity of this is O(k log n) -- where 'n' is the number of columns. The procedure works equally well if you process the rows... so if there are 'm' rows and 'n' columns, you can choose the smaller of the two !
NB: the complexity is not O(k log k)... and since for a given pair of A and B the 'n' is constant, O(k log n) is really O(k) !!
If you want to do many probes for different 'k', then the trick might be to cache the state of the process every now and then, so that future 'k's can be done by restarting from the nearest check-point. In the limit, one would run the merge to completion and store all possible values, for O(1) lookup !
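The column-merge described above can be sketched with heapq: seed a max-heap with the bottom row of S, then each pop is replaced by the next entry up the same column. This sketch uses the A+B version from the worked example (swap the sum for a product as your problem requires):

```python
import heapq

def kth_largest_sum(A, B, k):
    """k-th largest A[r] + B[c], 1-indexed k, via merging columns of S."""
    A = sorted(A)
    B = sorted(B)
    # seed with the bottom row: the largest A paired with every B;
    # entries are (-sum, row, col) so heapq's min-heap acts as a max-heap
    heap = [(-(A[-1] + b), len(A) - 1, c) for c, b in enumerate(B)]
    heapq.heapify(heap)
    for _ in range(k - 1):
        _, r, c = heapq.heappop(heap)
        if r > 0:   # push the next item up the same column
            heapq.heappush(heap, (-(A[r - 1] + B[c]), r - 1, c))
    return -heap[0][0]
```

The heap never grows past n entries (one per column), giving the O(k log n) bound discussed above.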

Can we solve N Queens without backtracking? And what is the complexity of the backtracking solution?

I have tried solving this problem with backtracking and it prints all possible solutions.
Two questions came up:
1. Can I implement N queens using other techniques?
2. Is it possible to make the code below print only the first solution and then terminate?
My current code uses backtracking:
n = 8
x = [-1 for x in range(n)]

def safe(k, i):
    for j in xrange(k):
        if x[j] == i or abs(x[j] - i) == abs(k - j):
            return False
    return True

def nqueen(k):
    for i in xrange(n):
        if safe(k, i):
            x[k] = i
            if k == n-1:
                print "SOLUTION", x
            else:
                nqueen(k+1)

nqueen(0)
Note: I am interested in techniques that do not depend on a particular programming language.
According to Wikipedia, you can do using heuristics:
This heuristic solves N queens for any N ≥ 4. It forms the list of numbers for vertical positions (rows) of queens, with horizontal position (column) simply increasing. N is 8 for the eight queens puzzle.
If the remainder from dividing N by 6 is not 2 or 3 then the list is simply all even numbers followed by all odd numbers ≤ N
Otherwise, write separate lists of even and odd numbers (i.e. 2,4,6,8 - 1,3,5,7)
If the remainder is 2, swap 1 and 3 in odd list and move 5 to the end (i.e. 3,1,7,5)
If the remainder is 3, move 2 to the end of even list and 1,3 to the end of odd list (i.e. 4,6,8,2 - 5,7,9,1,3)
Append odd list to the even list and place queens in the rows given by these numbers, from left to right (i.e. a2, b4, c6, d8, e3, f1, g7, h5)
This heuristic is O(n), since it just prints the result after some if statements.
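The steps above translate almost line for line into Python. A sketch based on my reading of the Wikipedia description (rows are 1-based, and the returned list gives the row for each column from left to right):

```python
def n_queens_heuristic(n):
    """Row for each of columns 1..n, one queen per column, for any n >= 4."""
    evens = list(range(2, n + 1, 2))
    odds = list(range(1, n + 1, 2))
    r = n % 6
    if r == 2:
        odds[0], odds[1] = odds[1], odds[0]   # swap 1 and 3 in the odd list
        odds.remove(5)
        odds.append(5)                        # move 5 to the end
    elif r == 3:
        evens.remove(2)
        evens.append(2)                       # move 2 to the end of evens
        odds = [x for x in odds if x not in (1, 3)] + [1, 3]
    return evens + odds

print(n_queens_heuristic(8))   # [2, 4, 6, 8, 3, 1, 7, 5] -> a2 b4 c6 d8 e3 f1 g7 h5
```

For n = 8 (remainder 2) this reproduces the worked example above: evens 2,4,6,8 followed by the adjusted odds 3,1,7,5.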
Regarding your second question: "Is it possible to make code below to print only first solution and then terminate?"
You can just call sys.exit(0) after you print:
import sys

n = 8
x = [-1 for x in range(n)]

def safe(k, i):
    for j in xrange(k):
        if x[j] == i or abs(x[j] - i) == abs(k - j):
            return False
    return True

def nqueen(k):
    for i in xrange(n):
        if safe(k, i):
            x[k] = i
            if k == n-1:
                print "SOLUTION", x
                sys.exit(0)
            else:
                nqueen(k+1)

nqueen(0)
or, alternatively you can return a value and then propagate the value if it indicates termination:
n = 8
x = [-1 for x in range(n)]

def safe(k, i):
    for j in xrange(k):
        if x[j] == i or abs(x[j] - i) == abs(k - j):
            return False
    return True

def nqueen(k):
    for i in xrange(n):
        if safe(k, i):
            x[k] = i
            if k == n-1:
                print "SOLUTION", x
                return True  # Found a solution, send this good news!
            else:
                if nqueen(k+1):  # Good news!
                    return True  # Share the good news with the parent!
    return False  # Searched every possible combination without finding a solution.

nqueen(0)
As for the time complexity: since it is a complete search, it is O(n^n) in the raw search space. The pruning in safe(k, i) means no two queens can share a column, which bounds the full placements by n!, and in practice it runs a lot faster still.
The question of solving N-queens without backtracking has another interesting question attached to it. Are there almost perfect queen placing heuristics such that, in a backtracking framework, you nearly always find a valid configuration? This is equivalent to saying that the heuristic almost always tells you the correct square to place the next queen on the board. This heuristic is much more interesting than the known closed form solution that gives a valid configuration for all values of N (except N=2 and 3, obviously).
The analysis of almost perfect minimum backtracking heuristics is an issue which has been studied in literature. The most important references are [Kalé 90] and [San Segundo 2011]. The best heuristic to place the next queen in the backtracking framework seems to be the following:
Choose the most-restricted row to place the next queen on (i.e. the one with the fewest squares available).
Choose the square from the row in (1) which closes the least number of diagonals (a least-restricting strategy).
Here to close a diagonal refers to cancel all its available squares.
