If I specify a number, is there a way to assign random portions of that number to several groups so that the portions add up to the total?
e.g. Total 1:
Group 1 - 0.1
Group 2 - 0.3
Group 3 - 0.4
Group 4 - 0.2
It's very simple to do in java ...
You generate a random number from 1 to 100 using a function like this:
// min = 1, max = 100 in your case
public int getRandomNumber(int min, int max) {
    // +1 makes the upper bound inclusive, since Math.random() returns [0.0, 1.0)
    return (int) ((Math.random() * (max - min + 1)) + min);
}
Then, in the function that selects the group, you do this:
Group 1 - 0.1, Group 2 - 0.3, Group 3 - 0.4, Group 4 - 0.2
If the number is 1 to 10, select group 1.
If the number is 11 to 40, select group 2.
If the number is 41 to 80, select group 3.
If the number is 81 to 100, select group 4.
It's as easy as calculating percentages.
Does this solve your problem? Let me know in the comments.
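A minimal sketch of that threshold selection in Python (the group names and weights are just the example values from the question):

```python
import random

# Example weights from the question, expressed on a 1-100 scale:
# Group 1: 1-10, Group 2: 11-40, Group 3: 41-80, Group 4: 81-100
weights = [("Group 1", 10), ("Group 2", 30), ("Group 3", 40), ("Group 4", 20)]

def pick_group():
    roll = random.randint(1, 100)  # random number from 1 to 100
    threshold = 0
    for group, weight in weights:
        threshold += weight  # cumulative upper bound of this group's range
        if roll <= threshold:
            return group
```

Over many calls, each group comes up in proportion to its weight.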
Well, if you don't care about getting a roughly even distribution, you can just do:
void foo(){
    double total = 1.0;
    double[] group = new double[4];
    for(int i = 0; i < group.length - 1; i++){
        // get random 0.0-1.0
        double rand = getRandom(0.0, 1.0);
        double portion = rand * total;
        group[i] = portion;
        total -= portion;
    }
    group[group.length - 1] = total;
}
If you do care, you can set getRandom's bounds to your liking, e.g.
//get random 0.0-1.0
double rand = getRandom(1.0/group.length*0.7, 1.0/group.length * 1.3);
so it will be 70% to 130% of the average.
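Here's a rough Python sketch of that bounded variant (the function and variable names are mine, not from the original answer):

```python
import random

def distribute(total, n_groups):
    groups = []
    remaining = total
    for _ in range(n_groups - 1):
        # Draw between 70% and 130% of the average share, as suggested above
        frac = random.uniform(0.7 / n_groups, 1.3 / n_groups)
        portion = frac * remaining
        groups.append(portion)
        remaining -= portion
    groups.append(remaining)  # the last group absorbs whatever is left
    return groups
```

The last group takes the remainder, so the portions always sum exactly to the original total.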
This method randomly distributes a value across an array of "groups" of that value's type; you can switch out double for int or float as you see fit.
public void assignGroups(double[] groups, double number){
    Random rand = new Random();
    for(int i = 0; i < groups.length - 1; i++){
        double randomNum = rand.nextDouble() * number; // randomly picks a number between 0 and the current amount left
        groups[i] = randomNum;
        number -= randomNum; // subtracts the random number from the total
    }
    groups[groups.length - 1] = number; // sets the last group to the number left over
}
In Python:
from random import random
num_groups = 5 # Number of groups
total = 5 # The given number
base = [0.] + sorted(random() for _ in range(num_groups - 1)) + [1.]
portions = [(right - left) * total for left, right in zip(base[:-1], base[1:])]
Result (print(portions)): A list of length num_groups (number of groups) which contains the distributed total (given number):
[2.5749833618941995, 0.010389749273946869, 0.3718137712569358, 0.3725336641218424, 1.6702794534530752]
Using Java:
private static double roundValue(double value, int precision) {
    return (double) Math.round(value * Math.pow(10, precision)) /
            Math.pow(10, precision);
}

public static double[] generateGroups(double total, int groupsNumber, int precision) {
    double[] result = new double[groupsNumber];
    double sum = 0;
    for (int i = 0; i < groupsNumber - 1; i++) {
        result[i] = roundValue((total - sum) * Math.random(), precision);
        sum += result[i];
    }
    result[groupsNumber - 1] = roundValue((total - sum), precision);
    return result;
}

public static void main(String... args) {
    double[] result = generateGroups(1.0, 4, 1);
    System.out.println(Arrays.toString(result));
}
Try to calculate:
X = sum_{i=1..N}(1/N)
by storing 1/N and X as float variables. Which result do you get for N = 10000, 100000 and 1000000?
Now try to use double variables. Does it change the outcome?
In order to do this I wrote this code:
# TRUNCATION ERRORS
import numpy as np              # library for numerical calculations
import matplotlib.pyplot as plt # library for plotting purposes

x = 0
n = 10**6
X = []
N = []
for i in range(1, n+1):
    x = x + 1/n
    item = float(x)
    item2 = float(n)
    X.append(item)
    N.append(item2)

plt.figure() # block for plot purposes
plt.plot(N, X, marker=".")
plt.xlabel('N')
plt.ylabel('X')
plt.grid()
plt.show()
The output is:
This is wrong, because the output should look like the plot shown in the lecture:
First, you want to plot N on the x-axis, but you're actually plotting 1/N.
Second, you aren't calculating the expression you think you're calculating. It looks like you're calculating sum_{i=1..N}(1/i).
You need to calculate sum_{i=1..N}(1/N), which is 1/N + 1/N + ... + 1/N repeated N times. In other words, you want to calculate N * (1/N), which should be equal to 1. Your exercise is showing you that it won't be when you use floating-point math, because of accumulated rounding error.
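A quick illustration of that with N = 10, before building the general function:

```python
# Summing 1/10 ten times does not give exactly 1.0 in binary floating point,
# because 0.1 has no exact binary representation
total = sum(1 / 10 for _ in range(10))
print(total)         # 0.9999999999999999
print(total == 1.0)  # False
```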
To do this correctly, let's first define a list of values for N
Nvals = [1, 10, 100, 1000, 10000, 100000, 1000000]
Let's define a function that will calculate our summation for a single value of N:
def calc_sum(N):
    total = 0
    for i in range(N):
        total += 1 / N
    return total
Next, let's create an empty list of Xvals and fill it up with the calculated sum for each N
Xvals = []
for N in Nvals:
    Xvals.append(calc_sum(N))
or, as a list comprehension:
Xvals = [calc_sum(N) for N in Nvals]
Now we get this value of Xvals:
[1.0,
0.9999999999999999,
1.0000000000000007,
1.0000000000000007,
0.9999999999999062,
0.9999999999980838,
1.000000000007918]
Clearly, they are not all equal to 1.
You can increase the number of values in Nvals to get a denser plot, but the idea is the same.
Now pay attention to what #khelwood said in their comment:
"float variables" and "double variables" are not a thing in Python. Variables don't have types. And floats are 64 bit
Python floats are all 64-bit floating-point numbers, so you can't do this exercise with Python's built-in floats. If you used a language like C or C++ that has both a 32-bit float and a 64-bit double type, you'd get something like this:
#include <iostream>

float calc_sum_f(int N) {
    float total = 0.0;
    for (int i = 0; i < N; i++)
        total += ((float)1 / N);
    return total;
}

double calc_sum_d(int N) {
    double total = 0.0;
    for (int i = 0; i < N; i++)
        total += ((double)1 / N);
    return total;
}

int main()
{
    int Nvals[7] = { 1, 10, 100, 1000, 10000, 100000, 1000000 };
    std::cout << "N\tdouble\tfloat" << std::endl;
    for (int ni = 0; ni < 7; ni++) {
        int N = Nvals[ni];
        double x_d = calc_sum_d(N);
        float x_f = calc_sum_f(N);
        std::cout << N << "\t" << x_d << "\t" << x_f << std::endl;
    }
}
Output:
N double float
1 1 1
10 1 1
100 1 0.999999
1000 1 0.999991
10000 1 1.00005
100000 1 1.00099
1000000 1 1.00904
Here you can see that 32-bit floats don't have enough precision beyond a certain value of N to accurately calculate N * (1/N). There's no reason the plot should look like your hand-drawn plot: as the output shows, the error doesn't change consistently in one direction as N grows.
Using numpy
Thanks to the suggestion from #Kelly: using numpy to get 32-bit and 64-bit floating-point types in Python, we can similarly define two functions:
def calc_sum_64(N):
    c = np.float64(0)
    one_over_n = np.float64(1) / np.float64(N)
    for i in range(N):
        c += one_over_n
    return c

def calc_sum_32(N):
    c = np.float32(0)
    one_over_n = np.float32(1) / np.float32(N)
    for i in range(N):
        c += one_over_n
    return c
Then, we find Xvals_64 and Xvals_32
Nvals = [10**i for i in range(7)]
Xvals_32 = [calc_sum_32(N) for N in Nvals]
Xvals_64 = [calc_sum_64(N) for N in Nvals]
And we get:
Xvals_32 = [1.0, 1.0000001, 0.99999934, 0.9999907, 1.0000535, 1.0009902, 1.0090389]
Xvals_64 = [1.0,
0.9999999999999999,
1.0000000000000007,
1.0000000000000007,
0.9999999999999062,
0.9999999999980838,
1.000000000007918]
I haven't vectorized my numpy code to make it easier for you to understand what's going on, but Kelly shows a great way to vectorize it to speed up the calculation:
sum(1/N) from i = 1 to N is (1 / N) + (1 / N) + (1 / N) + ... {N times} , which is an array of N ones, divided by N and then summed. You could write the calc_sum_32 and calc_sum_64 functions like so:
def calc_sum_32(N):
    return (np.ones((N,), dtype=np.float32) / np.float32(N)).sum()

def calc_sum_64(N):
    return (np.ones((N,), dtype=np.float64) / np.float64(N)).sum()
You can then call these functions for every value of N you care about, and get a plot that looks like so, which shows the result oscillating about 1 for float32, but barely any oscillation for float64:
I have two functions, in C++ and Python, that determine how many times an event with a certain probability will occur over a number of rolls.
Python version:
import random

def get_loot(rolls):
    drops = 0
    for i in range(rolls):
        # getting a random float with 2 decimal places
        roll = random.randint(0, 10000) / 100
        if roll < 0.04:
            drops += 1
    return drops

for i in range(0, 10):
    print(get_loot(1000000))
Python output:
371
396
392
406
384
392
380
411
393
434
C++ version:
#include <iostream>
#include <cstdlib>
#include <ctime>
using namespace std;

int get_drops(int rolls){
    int drops = 0;
    for(int i = 0; i < rolls; i++){
        // getting a random float with 2 decimal places
        float roll = (rand() % 10000) / 100.0f;
        if (roll < 0.04){
            drops++;
        }
    }
    return drops;
}

int main()
{
    srand(time(NULL));
    for (int i = 0; i <= 10; i++){
        cout << get_drops(1000000) << "\n";
    }
}
C++ output:
602
626
579
589
567
620
603
608
594
610
626
The code looks identical (at least to me). Both functions simulate the occurrence of an event with a probability of 0.04% (a roll below 0.04) over 1,000,000 rolls. However, the output of the Python version is about 30% lower than that of the C++ version. How are these two versions different, and why do they have different outputs?
In C++ rand() "Returns a pseudo-random integral number in the range between 0 and RAND_MAX."
RAND_MAX "is library-dependent, but is guaranteed to be at least 32767 on any standard library implementation."
Let's set RAND_MAX at 32,767.
When calculating [0, 32767] % 10000, the random number generation is skewed.
The values 0-2,767 all occur 4 times in the range (% 10000):

Value    Calculation      Result
1        1 % 10000        1
10001    10001 % 10000    1
20001    20001 % 10000    1
30001    30001 % 10000    1
Whereas the values 2,768-9,999 occur only 3 times in the range (% 10000):

Value    Calculation      Result
2768     2768 % 10000     2768
12768    12768 % 10000    2768
22768    22768 % 10000    2768
This makes each of the values 0-2,767 about 33% more likely to occur than each of the values 2,768-9,999 (4 occurrences instead of 3, assuming rand() does, in fact, produce an even distribution between 0 and RAND_MAX).
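You can verify this skew directly by counting the residues (a quick sketch, assuming RAND_MAX == 32767):

```python
from collections import Counter

# Count how often each value of v % 10000 occurs for v in [0, RAND_MAX]
counts = Counter(v % 10000 for v in range(32768))
print(counts[0], counts[2767])    # residues 0-2767 each occur 4 times
print(counts[2768], counts[9999]) # residues 2768-9999 each occur 3 times
```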
Python, on the other hand, uses randint, which produces an even distribution between start and end, since randint is an "Alias for randrange(a, b+1)".
And randrange (in python 3.2 and newer) will produce evenly distributed values:
Changed in version 3.2: randrange() is more sophisticated about producing equally distributed values. Formerly it used a style like int(random()*n) which could produce slightly uneven distributions.
There are several approaches to generating random numbers in C++. The one perhaps most similar to Python's is a Mersenne Twister engine (the same family of generator Python uses, with some differences).
Via uniform_int_distribution with mt19937:
#include <iostream>
#include <random>
#include <chrono>
int get_drops(int rolls) {
    std::mt19937 e{
        static_cast<unsigned int>(
            std::chrono::steady_clock::now().time_since_epoch().count()
        )
    };
    std::uniform_int_distribution<int> d{0, 9999};
    int drops = 0;
    for (int i = 0; i < rolls; i++) {
        float roll = d(e) / 100.0f;
        if (roll < 0.04) {
            drops++;
        }
    }
    return drops;
}

int main() {
    for (int i = 0; i <= 10; i++) {
        std::cout << get_drops(1000000) << "\n";
    }
}
It is notable that the underlying implementations of the two engines, as well as the seeding and distributions, are all slightly different; however, this will be much closer to Python.
Alternatively, as Matthias Fripp suggests, you can scale up rand() and divide by RAND_MAX:
int get_drops(int rolls) {
    int drops = 0;
    for (int i = 0; i < rolls; i++) {
        float roll = (10000 * rand() / RAND_MAX) / 100.0f;
        if (roll < 0.04) {
            drops++;
        }
    }
    return drops;
}
This is also much closer to the Python output (again with some differences in how the underlying implementations generate random numbers).
The results are skewed because rand() % 10000 is not the correct way to achieve a uniform distribution. (See also rand() Considered Harmful by Stephan T. Lavavej.) In modern C++, prefer the pseudo-random number generation library provided in header <random>. For example:
#include <iostream>
#include <random>

int get_drops(int rolls)
{
    std::random_device rd;
    std::mt19937 gen{ rd() };
    std::uniform_real_distribution<> dis{ 0.0, 100.0 };
    int drops{ 0 };
    for (int roll{ 0 }; roll < rolls; ++roll)
    {
        if (dis(gen) < 0.04)
        {
            ++drops;
        }
    }
    return drops;
}

int main()
{
    for (int i{ 0 }; i <= 10; ++i)
    {
        std::cout << get_drops(1000000) << '\n';
    }
}
The two languages use different pseudo-random generators. If you would like to unify the behavior, you might want to generate your own pseudo-random values deterministically.
Here is how it could look in Python:
SEED = 101
TOP = 9999

class MyRandom:
    def __init__(self, a=SEED):
        """Seed a deterministic value that behaves the same regardless of the language"""
        self.seedval = a

    def random(self):
        """Generate and return the next pseudo-random number based on the seed"""
        self.seedval = (self.seedval * SEED) % TOP
        return self.seedval

    def seed(self):
        return self.seedval

instance = MyRandom(SEED)
read_seed = instance.seed()
read_random = instance.random()
However, in C++, it becomes:
const int SEED = 101;
const int TOP = 9999;

class MyRandom {
    int seedval;
public:
    explicit MyRandom(int a = SEED) {
        seedval = a;
    }
    int random() {
        seedval = (seedval * SEED) % TOP;
        return seedval;
    }
    int seed() {
        return seedval;
    }
};

MyRandom instance(SEED);
int readSeed = instance.seed();
int readRandom = instance.random();
I need your help, please: one test case failed due to a timeout. Can anyone help me reduce the execution time of this code? The problem is from the HackerRank website; if anyone needs more explanation, I will put the link to the problem in the comments below.
from itertools import combinations

def powerSum(X, N, n=1, poss=[]):
    if n**N <= X:
        poss.append(n)
        n += 1
        rslt = powerSum(X, N, n, poss)
    else:
        tmp = []
        for _ in range(len(poss)):
            oc = combinations(poss, _ + 1)
            for x in oc:
                ok = sum([num**N for num in x])
                if ok == X:
                    tmp.append(ok)
        return len(tmp)
    return rslt
I am not good at Python, but I hope the Java code below can be easily understood. This is indirectly a variation of the subset-sum problem, a dynamic-programming problem where you have to count the number of ways to reach a given sum from an array of values. Before applying the subset-sum solution, I build the list of numbers that can participate in the required sum: I stop at the first natural number whose Nth power exceeds X, because every later natural number has an even larger Nth power and can never be part of the sum, so there is no need to keep them in the list. After that, it is just the dynamic-programming problem mentioned above, where the list holds the Nth powers of the valid natural numbers and we count the different ways to reach the sum X using those values.
Below is the code, for clearer understanding:
import java.util.*;

public class Main {
    public static int find_it(int x, int n, List<Integer> a, int[][] dp) {
        for (int i = 0; i < n; ++i) {
            dp[i][0] = 1;
        }
        for (int i = 1; i <= n; ++i) {
            for (int j = 1; j <= x; ++j) {
                dp[i][j] += dp[i - 1][j];
                if (j - a.get(i - 1) >= 0) {
                    dp[i][j] += dp[i - 1][j - a.get(i - 1)];
                }
            }
        }
        return dp[n][x];
    }

    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        int x = input.nextInt(), k = input.nextInt();
        List<Integer> a = new ArrayList<>();
        for (int i = 1; ; ++i) {
            double value = Math.pow(i, k);
            if (value > x) {
                break;
            }
            a.add((int) value);
        }
        int n = a.size();
        int[][] dp = new int[n + 1][x + 1];
        int answer = find_it(x, n, a, dp);
        System.out.println(answer);
        input.close();
    }
}
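Since the question was asked in Python, here is a sketch of the same bottom-up DP in Python, using a 1-D table instead of the 2-D one above (the function name is mine):

```python
def power_sum(x, n):
    # Collect the n-th powers of natural numbers that do not exceed x
    powers = []
    i = 1
    while i ** n <= x:
        powers.append(i ** n)
        i += 1
    # dp[j] = number of ways to write j as a sum of distinct powers seen so far
    dp = [0] * (x + 1)
    dp[0] = 1
    for p in powers:
        for j in range(x, p - 1, -1):  # iterate backwards so each power is used at most once
            dp[j] += dp[j - p]
    return dp[x]
```

For example, power_sum(100, 2) counts 100, 36+64 and 1+9+16+25+49, giving 3.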
I'm running these two programs. They both perform the same mathematical procedure (summing a series out to a large number of terms), and, as expected, produce the same output.
But for some reason, the PyPy code is running significantly faster than the C code.
I cannot figure out why this is happening, as I expected the C code to run faster.
I'd be thankful if anyone could help me by clarifying that (maybe there is a better way to write the C code?)
C code:
#include <stdio.h>
#include <math.h>

int main()
{
    double Sum = 0.0;
    long n;
    for (n = 2; n < 1000000000; n = n + 1) {
        double Sign;
        Sign = pow(-1.0, n % 2);
        double N;
        N = (double) n;
        double Sqrt;
        Sqrt = sqrt(N);
        double InvSqrt;
        InvSqrt = 1.0 / Sqrt;
        double Ln;
        Ln = log(N);
        double LnSq;
        LnSq = pow(Ln, 2.0);
        double Term;
        Term = Sign * InvSqrt * LnSq;
        Sum = Sum + Term;
    }
    double Coeff;
    Coeff = Sum / 2.0;
    printf("%0.14f \n", Coeff);
    return 0;
}
PyPy code (faster implementation of Python):
from math import log, sqrt

Sum = 0
for n in range(2, 1000000000):
    Sum += ((-1)**(n % 2) * (log(n))**2) / sqrt(n)
print(Sum / 2)
This is far from surprising: PyPy performs a number of run-time optimizations by default, whereas C compilers by default do not perform any optimization. Dave Beazley's 2012 PyCon keynote covers this pretty explicitly and provides a deep explanation of why this happens.
Per the referenced talk, C should surpass PyPy when compiled with optimization level 2 or 3 (you can watch the full section on the performance of Fibonacci generation in CPython, PyPy and C starting here).
In addition to the compiler's optimisation level, you can improve the code itself:
int main()
{
    double Sum = 0.0;
    long n;
    for (n = 2; n < 1000000000; ++n)
    {
        double N = n;        // cast is implicit, only for code readability; no effect on runtime!
        double Sqrt = sqrt(N);
        //double InvSqrt;      // spare that:
        //InvSqrt = 1.0/Sqrt;  // you spare this division!
        double Ln = log(N);
        double LnSq;
        //LnSq = pow(Ln, 2.0);
        LnSq = Ln * Ln;      // more efficient
        double Term;
        //Term = Sign * InvSqrt * LnSq;
        Term = LnSq / Sqrt;
        if (n % 2)
            Term = -Term;    // just negating, no multiplication
                             // (IEEE provided: just one bit inverted)
        Sum = Sum + Term;
    }
    // ...
Now we can simplify the code a little more:
int main()
{
    double Sum = 0.0;
    for (long n = 2; n < 1000000000; ++n)
    //   ^^^^ possible since C99; better scope, no runtime effect
    {
        double N = n;
        double Ln = log(N);
        double Term = Ln * Ln / sqrt(N);
        if (n % 2)
            Sum -= Term;
        else
            Sum += Term;
    }
    // ...
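The branch-instead-of-power trick carries over to the PyPy version as well; here's a sketch that avoids computing (-1)**(n % 2) on every iteration (the function name is mine):

```python
from math import log, sqrt

def partial_sum(limit):
    # Same series as above, with the sign handled by a branch
    # instead of computing (-1) ** (n % 2) each iteration
    total = 0.0
    for n in range(2, limit):
        term = log(n) ** 2 / sqrt(n)
        if n % 2:
            total -= term  # odd n: negative term
        else:
            total += term  # even n: positive term
    return total / 2
```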
I need to know how to calculate the positions of the QR Code alignment patterns as defined in the table of ISO/IEC 18004:2000 Annex E.
I don't understand how it's calculated. If you take the Version 16, for example, the positions are calculated using {6,26,50,74} and distance between the points are {20,24,24}. Why isn't it {6,28,52,74}, if the distances between the points, {22,24,22}, is distributed more equally?
I would like to know how this can be generated procedurally.
While the specification does provide a table of the alignment, this is a reasonable question (and one I found myself with :-)) - the possibility of generating the positions procedurally has its merits (less typo-prone code, smaller code footprint, knowing pattern/properties of the positions).
I'm happy to report that, yes, a procedure exists (and it is even fairly simple).
The specification itself says most of it:
[The alignment patterns] are spaced as evenly as possible between the Timing Pattern and the opposite side of the symbol, any uneven spacing being accommodated between the timing pattern and the first alignment pattern in the symbol interior.
That is, only the interval between the first and second coordinate may differ from the rest of the intervals. The rest must be equal.
Another important bit is of course that, for the APs to agree with the timing patterns, the intervals must be even.
The remaining tricky bit is just getting the rounding right.
Anyway - here's code printing the alignment position table:
def size_for_version(version):
    return 17 + 4 * version

def alignment_coord_list(version):
    if version == 1:
        return []
    divs = 2 + version // 7
    size = size_for_version(version)
    total_dist = size - 7 - 6
    divisor = 2 * (divs - 1)
    # Step must be even, for alignment patterns to agree with timing patterns
    step = (total_dist + divisor // 2 + 1) // divisor * 2  # Get the rounding right
    coords = [6]
    for i in range(divs - 2, -1, -1):  # divs-2 down to 0, inclusive
        coords.append(size - 7 - i * step)
    return coords

for version in range(1, 40 + 1):  # 1 to 40 inclusive
    print("V%d: %s" % (version, alignment_coord_list(version)))
Here's a Python solution which is basically equivalent to the C# solution posted by #jgosar, except that it corrects a deviation from the thonky.com table for version 32 (that other solution reports 110 for the second last position, whereas the linked table says 112):
def get_alignment_positions(version):
    positions = []
    if version > 1:
        n_patterns = version // 7 + 2
        first_pos = 6
        positions.append(first_pos)
        matrix_width = 17 + 4 * version
        last_pos = matrix_width - 1 - first_pos
        second_last_pos = (
            (first_pos + last_pos * (n_patterns - 2)  # Interpolate end points to get point
             + (n_patterns - 1) // 2)                 # Round to nearest int by adding half
                                                      # of divisor before division
            // (n_patterns - 1)                       # Floor-divide by number of intervals
                                                      # to complete interpolation
        ) & -2                                        # Round down to even integer
        pos_step = last_pos - second_last_pos
        second_pos = last_pos - (n_patterns - 2) * pos_step
        positions.extend(range(second_pos, last_pos + 1, pos_step))
    return positions
The correction consists of first rounding the second last position (up or down) to the nearest integer and then rounding down to the nearest even integer (instead of directly rounding down to the nearest even integer).
Disclaimer: Like #jgosar, I don't know whether the thonky.com table is correct (I'm not going to buy the spec to find out). I've simply verified (by pasting the table into a suitable wrapper around the above function) that my solution matches that table in its current version.
Sorry about my English, and sorry for the late reply; I hope this helps.
First, the standard leaves one important thing implicit: the top-left module is defined as (0,0).
The list { 6, 26, 50, 74 } gives both the row and the column coordinates of the alignment pattern centers; I don't know why the standard stores it this way, maybe to save space. Combining all the values, for example from:
{ 6, 26, 50, 74 }
we get:
{ 6 , 6 } ---> (the x coordinate is 6 and the y is 6, from the top left)
{ 6 , 26 }
{ 6 , 50 }
{ 6 , 74 }
{ 26, 26 }
{ 26, 50 }
{ 26, 74 }
{ 50, 50 }
{ 50, 74 }
{ 74, 74 }
plus the mirrored pairs such as { 26, 6 }. These points are the actual center coordinates of the alignment patterns.
P.S.: if a position would overlap one of the position-detection (finder) patterns, that alignment pattern is omitted, as happens for (6, 6).
I had this question before too, but I solved it, so I hope you can solve it as well.
Good luck~
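To make that concrete, here is a small Python sketch that expands a coordinate list into the full set of alignment-pattern centers, dropping the three corners occupied by the position-detection patterns (note that this includes the mirrored pairs such as (26, 6)):

```python
from itertools import product

def alignment_centers(coords):
    # All (row, col) combinations of the coordinate list, minus the three
    # corners that would overlap the position-detection (finder) patterns
    last = max(coords)
    skip = {(6, 6), (6, last), (last, 6)}
    return [(r, c) for r, c in product(coords, repeat=2) if (r, c) not in skip]

print(len(alignment_centers([6, 26, 50, 74])))  # 13 alignment patterns for version 16
```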
There are some comments on the top-rated answer suggesting it isn't 100% accurate, so I'm contributing my solution as well.
My solution is written in C#. It should be easy to translate it into a language of your choice.
private static int[] getAlignmentCoords(int version)
{
    if (version <= 1)
    {
        return new int[0];
    }
    int num = (version / 7) + 2; // number of coordinates to return
    int[] result = new int[num];
    result[0] = 6;
    if (num == 1)
    {
        return result;
    }
    result[num - 1] = 4 * version + 10;
    if (num == 2)
    {
        return result;
    }
    result[num - 2] = 2 * ((result[0] + result[num - 1] * (num - 2)) / ((num - 1) * 2)); // leave these brackets alone: because of integer division they ensure you get a number divisible by 2
    if (num == 3)
    {
        return result;
    }
    int step = result[num - 1] - result[num - 2];
    for (int i = num - 3; i > 0; i--)
    {
        result[i] = result[i + 1] - step;
    }
    return result;
}
The values I get with it are the same as those shown here: http://www.thonky.com/qr-code-tutorial/alignment-pattern-locations/
To sum it up, the first coordinate is always 6.
The last coordinate is always 7 less than the image size. The image size is calculated as 4*version+17, therefore the last coordinate is 4*version+10.
If the coordinates were precisely evenly spaced, the position of the coordinate before the last would be (first_coordinate + (num-2) * last_coordinate) / (num-1), where num is the number of coordinates.
But the coordinates are not always evenly spaced, so this position has to be reduced to an even number.
Each of the remaining coordinates is spaced the same distance from the next one as the last two are from each other.
Disclaimer: I didn't read any of the documentation; I just wrote some code that generates the same sequence of numbers as the table I linked to.
Starting from #ericsoe's answer, and noting that it's incorrect for v36 and v39 (thanks to #Ana's remarks), I've developed a function that returns the correct sequences. Pardon the JavaScript (it's fairly easy to translate to other languages, though):
function getAlignmentCoordinates(version) {
    if (version === 1) {
        return [];
    }
    const intervals = Math.floor(version / 7) + 1;
    const distance = 4 * version + 4; // between first and last alignment pattern
    const step = Math.ceil(distance / intervals / 2) * 2; // to get the next even number
    return [6].concat(Array.from(
        { length: intervals },
        (_, index) => distance + 6 - (intervals - 1 - index) * step
    ));
}
I don't know if this is a useful question to ask. It just is the way it is, and it wouldn't matter much if it were {22,24,22}. Why are you asking?
My guess is that the spacing is meant to be a multiple of 4 modules.
It seems like most answers aren't correct for all versions (especially v32, v36 and v39) and/or are quite convoluted.
Based on #MaxArt's great solution (which produces wrong coordinates for v32), here's a C function which calculates the correct coordinates for all versions:
#include <math.h>

int getAlignmentCoordinates(int version, int *coordinates) {
    if (version <= 1) return 0;
    int intervals = (version / 7) + 1; // number of gaps between alignment patterns
    int distance = 4 * version + 4;    // distance between first and last alignment pattern
    int step = lround((double)distance / (double)intervals); // round equal spacing to nearest integer
    step += step & 1;                  // round step up to the next even number
    coordinates[0] = 6;                // first coordinate is always 6 (can't be calculated with step)
    for (int i = 1; i <= intervals; i++) {
        coordinates[i] = 6 + distance - step * (intervals - i); // start at right/bottom and go left/up by step
    }
    return intervals + 1;
}
The key is to first round the division to the nearest integer (instead of up) and then round it to the next largest even number.
The C program below uses this function to generate the same values as in the table of ISO/IEC 18004:2000 Annex E linked by OP and the (updated) list found on thonky.com:
#include <stdio.h>

int main(void) {
    for (int version = 2; version <= 40; version++) {
        int coordinates[7];
        int n = getAlignmentCoordinates(version, coordinates);
        printf("%d:", version);
        for (int i = 0; i < n; i++) {
            printf(" %d", coordinates[i]);
        }
        printf("\n");
    }
    return 0;
}