TensorFlow Go - how to port a Python query

Pretty new to the whole world of TF and co. I managed to create/train/predict a model in a Jupyter notebook using Python.
For production, I'd like to use Golang, but I am unable to find a "simple" sample of how to do the prediction inside Go.
I'd like to have this piece of Python, for Go:
sample = {
    'b': 200,
    'c': 10,
    'd': 1,
}
input_dict = {name: tf.convert_to_tensor([value]) for name, value in sample.items()}
predictions = reloaded_model.predict(input_dict)
prob = tf.nn.sigmoid(predictions[0])
Does anyone have a good tutorial for model.Session.Run using github.com/galeone/tensorflow/tensorflow/go?
Regards,
Helmut

OK, so I managed to do it, so I'll answer it myself for other folks!
Using tfgo:
package main

import (
	"fmt"
	"math"
	"time"

	tf "github.com/galeone/tensorflow/tensorflow/go"
	tg "github.com/galeone/tfgo"
)

func main() {
	model := tg.LoadModel("../krot-classifier", []string{"serve"}, nil)
	start := time.Now()
	fakeInput, _ := tf.NewTensor([][]float32{{1}})
	// fakeString, _ := tf.NewTensor([][]string{{"127.0.0"}})
	results := model.Exec([]tf.Output{
		model.Op("StatefulPartitionedCall", 0),
	}, map[tf.Output]*tf.Tensor{
		model.Op("serving_default_b", 0): fakeInput,
		model.Op("serving_default_c", 0): fakeInput,
		model.Op("serving_default_d", 0): fakeInput,
	})
	predictions := results[0]
	pred := predictions.Value().([][]float32)
	prob := sigmoid(float64(pred[0][0]))
	fmt.Printf("%.1f percent probability (in %s)\n", 100*prob, time.Since(start))
}

func sigmoid(x float64) float64 {
	return 1 / (1 + math.Exp(-x))
}
For string inputs, see the commented-out fakeString line.

Related

How to change the code from Scala to Python

def computeTotalVariationDistance(p: Distribution, q: Distribution): Double = {
  val pSum = p.sum
  val qSum = q.sum
  val l1Distance = p.zip(q)
    .map { case (_, pVal, qVal) =>
      math.abs((pVal / pSum) - (qVal / qSum))
    }
    .sum
  0.5 * l1Distance
}
Can someone help me change this code into Python?
It's actually relatively straightforward. Instead of map, you can use a list comprehension:
l1Distance = sum(
    [abs((pVal / pSum) - (qVal / qSum))
     for pVal, qVal in zip(p, q)]
)
I have not tested this, but it should work, or something very similar.
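Putting it together, a complete Python translation of the Scala function might look like this. It's a sketch, assuming p and q are plain equal-length sequences of numbers, which is what the comprehension above already assumes:

```python
def compute_total_variation_distance(p, q):
    """Total variation distance between two unnormalized distributions,
    given as equal-length sequences of non-negative numbers."""
    p_sum = sum(p)
    q_sum = sum(q)
    # Normalize each value by its distribution's total, sum the
    # absolute differences, and halve the result.
    l1_distance = sum(
        abs((p_val / p_sum) - (q_val / q_sum))
        for p_val, q_val in zip(p, q)
    )
    return 0.5 * l1_distance
```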

TradingView Pine Script's RMA (the moving average used in RSI; an exponentially weighted moving average with alpha = 1/length) in Python/pandas

I've been trying to get the same results as TradingView's RMA method, but I don't know how to accomplish it.
On their page, RMA is computed as:
plot(rma(close, 15))

// the same on pine
pine_rma(src, length) =>
    alpha = 1/length
    sum = 0.0
    sum := na(sum[1]) ? sma(src, length) : alpha * src + (1 - alpha) * nz(sum[1])
plot(pine_rma(close, 15))
To test, I used their input and result; this is the input column and the same input after applying TradingView's rma(input, 14):
data = [[588.0,519.9035093599585],
[361.98999999999984,508.62397297710436],
[412.52000000000055,501.7594034787397],
[197.60000000000042,480.0337318016869],
[208.71999999999932,460.6541795301378],
[380.1100000000006,454.90102384941366],
[537.6599999999999,460.8123792887413],
[323.5600000000013,451.0086379109742],
[431.78000000000077,449.6351637744761],
[299.6299999999992,438.9205092191563],
[225.1900000000005,423.65404427493087],
[292.42000000000013,414.28018396957873],
[357.64999999999964,410.23517082889435],
[692.5100000000003,430.3976586268306],
[219.70999999999916,415.34854015348543],
[400.32999999999987,414.2757872853794],
[604.3099999999995,427.849659622138],
[204.29000000000087,411.8811125062711],
[176.26000000000022,395.0510330415374],
[204.1800000000003,381.41738782428473],
[324.0,377.3161458368358],
[231.67000000000007,366.91284970563316],
[184.21000000000092,353.8626461552309],
[483.0,363.08674285842864],
[290.6399999999994,357.911975511398],
[107.10000000000036,339.996834403441],
[179.0,328.49706051748086],
[182.36000000000058,318.05869905194663],
[275.0,314.98307769109323],
[135.70000000000073,302.17714357030087],
[419.59000000000015,310.56377617242225],
[275.6399999999994,308.06922073153487],
[440.48999999999984,317.5278478221396],
[224.0,310.8472872634153],
[548.0100000000001,327.78748103031415],
[257.0,322.73123238529183],
[267.97999999999956,318.82043007205664],
[366.51000000000016,322.2268279240526],
[341.14999999999964,323.57848307233456],
[147.4200000000001,310.9957342814536],
[158.78000000000063,300.12318183277836],
[416.03000000000077,308.4022402732943],
[360.78999999999917,312.14422311091613],
[1330.7299999999996,384.90035003156487],
[506.92000000000013,393.61603931502464],
[307.6100000000006,387.4727507925229],
[296.7299999999996,380.991125735914],
[462.0,386.7774738976345],
[473.8099999999995,392.9940829049463],
[769.4200000000002,419.88164841173585],
[971.4799999999997,459.2815306680404],
[722.1399999999994,478.0571356203232],
[554.9799999999996,483.5516259331572],
[688.5,498.19079550936027],
[292.0,483.462881544406],
[634.9500000000007,494.2833900055199]]
# Create the pandas DataFrame
dfRMA = pd.DataFrame(data, columns=['input', 'wantedrma'])
dfRMA['try1'] = dfRMA['input'].ewm(alpha=1/14, adjust=False).mean()
dfRMA['try2'] = numpy_ewma_vectorized(dfRMA['input'], 14)
dfRMA
ewm does not give me the same results, so I searched and found this, but I just replicated ewma:
import numpy as np

def numpy_ewma_vectorized(data, window):
    alpha = 1/window
    alpha_rev = 1-alpha
    scale = 1/alpha_rev
    n = data.shape[0]
    r = np.arange(n)
    scale_arr = scale**r
    offset = data[0]*alpha_rev**(r+1)
    pw0 = alpha*alpha_rev**(n-1)
    mult = data*pw0*scale_arr
    cumsums = mult.cumsum()
    out = offset + cumsums*scale_arr[::-1]
    return out
I am getting these results.
Do you know how to translate the Pine Script rma method into pandas?
I realized that using pandas ewm it seems to converge; the last rows are closer and closer to the expected value. Is this correct?
...
As far as I know, Pine Script uses data that is not exported, so the weighted mean takes into account previous records that are not in your table, meaning that you can't reproduce the results without more information.
What you need to do is load around 50-100 points (depending on alpha) of data further into the past than what you will actually use, and use a threshold when comparing the data. You need both Python and Pine Script to work on data with the same, or at least a similar, "history".
So you make the calculations using the whole dataframe and then skip the first rows. You can see the effect of the historical data in your own example: the difference between your calculation and the Pine Script one quickly vanishes after about the 55th point, but of course the difference will also depend on alpha.
So your code may already be well written. In any case, you can use the pandas implementation directly; it will be easier and probably faster.
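To illustrate the point about history, the recursive Pine Script definition can also be reproduced literally in pandas. This is a sketch (seeding with the SMA of the first length values, as pine_rma does), not TradingView's own code:

```python
import pandas as pd

def pine_rma(src: pd.Series, length: int) -> pd.Series:
    """Recursive RMA as defined in Pine Script: seed with the SMA of the
    first `length` values, then rma[i] = alpha*src[i] + (1-alpha)*rma[i-1]."""
    alpha = 1.0 / length
    out = pd.Series(float("nan"), index=src.index)
    out.iloc[length - 1] = src.iloc[:length].mean()  # SMA seed
    for i in range(length, len(src)):
        out.iloc[i] = alpha * src.iloc[i] + (1 - alpha) * out.iloc[i - 1]
    return out
```

As in Pine Script, the first length - 1 values are NaN; given enough historical rows before the window you compare, the values approach the exported TradingView ones.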
const cloneArray = (input) => [...input]
const pluck = (input, key) => input.map(element => element[key])

const pineSma = (source, length) => {
  let sum = 0.0
  for (let i = 0; i < length; ++i) {
    sum += source[i] / length
  }
  return sum
}

const pineRma = (src, length, last) => {
  const alpha = 1.0 / length
  return alpha * src[0] + (1.0 - alpha) * last
}

const calculatePineRma = (candles, sourceKey, length) => {
  const results = []
  for (let i = 0; i <= candles.length - length; ++i) {
    const sourceCandles = cloneArray(candles.slice(i, i + length)).reverse()
    const closes = pluck(sourceCandles, sourceKey)
    if (i === 0) {
      results.push(pineSma(closes, length))
      continue
    }
    results.push(pineRma(closes, length, results[results.length - 1]))
  }
  return results
}

Does C# have something equivalent to Python's random.choices()?

I'm trying to make choices based on their weight/probability. This is what I had in Python:
import random
myChoiceList = ["Attack", "Heal", "Amplify", "Defense"]
myWeights = [70, 0, 15, 15]  # % probability = 100%; e.g. Attack has a 70% chance of selection
print(random.choices(myChoiceList, weights=myWeights, k=1))
I want to do the same thing in C#; how does one do that? Does C# have any methods similar to random.choices()? All I know is random.Next().
*This Python code works fine. random.choices takes (sequence, weights, k):
sequence: the values;
weights: a list where you can weight the probability of each value;
k: the length of the returned list.
I'm looking to do the same in C#: choose values based on their probability.
There is nothing built into C# like this, however, it's not that hard to add an extension method to recreate the same basic behavior:
static class RandomUtils
{
    public static string Choice(this Random rnd, IEnumerable<string> choices, IEnumerable<int> weights)
    {
        var cumulativeWeight = new List<int>();
        int last = 0;
        foreach (var cur in weights)
        {
            last += cur;
            cumulativeWeight.Add(last);
        }
        int choice = rnd.Next(last);
        int i = 0;
        foreach (var cur in choices)
        {
            if (choice < cumulativeWeight[i])
            {
                return cur;
            }
            i++;
        }
        return null;
    }
}
Then you can call it in a similar way as the Python version:
string[] choices = { "Attack", "Heal", "Amplify", "Defense" };
int[] weights = { 70, 0, 15, 15 };
Random rnd = new Random();
Console.WriteLine(rnd.Choice(choices, weights));
You can get random.Next(0, 100), then choose the relevant item with a simple switch case or similar. Your ranges would be [0-70), [70-85), [85-100). Let me know if you need the full code.
Random ran = new Random();
int probability = ran.Next(0, 100); // 0..99
string s;
if (probability < 70)
    s = "Attack";       // 70%
else if (probability < 85)
    s = "Amplify";      // 15%
else
    s = "Defense";      // 15%
// "Heal" has a weight of 0, so it is never selected
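For comparison, the cumulative-weight technique used in the C# extension method above can be sketched back in Python (the helper name here is illustrative; the standard library already provides this via random.choices):

```python
import random
from itertools import accumulate

def weighted_choice(choices, weights, rnd=random):
    """Draw one element: pick a number in [0, total_weight) and return
    the first choice whose cumulative weight exceeds it."""
    cumulative = list(accumulate(weights))  # e.g. [70, 70, 85, 100]
    draw = rnd.randrange(cumulative[-1])    # 0 .. total_weight - 1
    for choice, bound in zip(choices, cumulative):
        if draw < bound:
            return choice
```

A zero-weight entry such as "Heal" gets a zero-width interval and is never returned.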

Percentiles: pandas vs. Scala - where is the bug?

For a list of numbers:
val numbers = Seq(0.0817381355303346, 0.08907955219917718, 0.10581384008994665, 0.10970915785902469, 0.1530743353025532, 0.16728932033107657, 0.181932212814931, 0.23200826752868853, 0.2339654613723784, 0.2581657775305527, 0.3481071101229365, 0.5010850992326521, 0.6153244818101578, 0.6233250409474894, 0.6797744231690304, 0.6923891392381571, 0.7440316016776881, 0.7593186414698002, 0.8028091068764153, 0.8780699052482807, 0.8966649331194205)
Python/pandas computes the following percentiles:
25% 0.167289
50% 0.348107
75% 0.692389
However, Scala returns:
calcPercentiles(Seq(.25, .5, .75), sortedNumber.toArray)
25% 0.1601818278168149
50% 0.3481071101229365
75% 0.7182103704579226
The numbers are almost matching, but different. How can I get rid of the difference (and most likely fix a bug in my Scala code)?
val sortedNumber = numbers.sorted

import scala.collection.mutable

case class PercentileResult(percentile: Double, value: Double)

// https://github.com/scalanlp/breeze/blob/master/math/src/main/scala/breeze/stats/DescriptiveStats.scala#L537
def calculatePercentile(arr: Array[Double], p: Double) = {
  // +1 so that the .5 == mean for even number of elements.
  val f = (arr.length + 1) * p
  val i = f.toInt
  if (i == 0) arr.head
  else if (i >= arr.length) arr.last
  else {
    arr(i - 1) + (f - i) * (arr(i) - arr(i - 1))
  }
}

def calcPercentiles(percentiles: Seq[Double], arr: Array[Double]): Array[PercentileResult] = {
  val results = new mutable.ListBuffer[PercentileResult]
  percentiles.foreach(p => {
    val r = PercentileResult(percentile = p, value = calculatePercentile(arr, p))
    results.append(r)
  })
  results.toArray
}
Python:
import pandas as pd
df = pd.DataFrame({'foo':[0.0817381355303346, 0.08907955219917718, 0.10581384008994665, 0.10970915785902469, 0.1530743353025532, 0.16728932033107657, 0.181932212814931, 0.23200826752868853, 0.2339654613723784, 0.2581657775305527, 0.3481071101229365, 0.5010850992326521, 0.6153244818101578, 0.6233250409474894, 0.6797744231690304, 0.6923891392381571, 0.7440316016776881, 0.7593186414698002, 0.8028091068764153, 0.8780699052482807, 0.8966649331194205]})
display(df.head())
df.describe()
After a bit of trial and error, I wrote this code, which returns the same results as pandas (using linear interpolation, as that is the pandas default):
def calculatePercentile(numbers: Seq[Double], p: Double): Double = {
  // interpolate only - no special handling of the case when rank is an integer
  val rank = (numbers.size - 1) * p
  val i = numbers(math.floor(rank).toInt)
  val j = numbers(math.ceil(rank).toInt)
  val fraction = rank - math.floor(rank)
  i + (j - i) * fraction
}
From that I would say that the error was here:
(arr.length + 1) * p
The 0th percentile should map to index 0, and the 100th percentile to the maximal index. So for numbers (.size == 21) that would be indices 0 and 20. However, for 100% you would get an index value of 22 - bigger than the size of the array! If not for these guard clauses:
else if (i >= arr.length) arr.last
you would get an error, and you could suspect that something is wrong. Perhaps the authors of the code:
https://github.com/scalanlp/breeze/blob/master/math/src/main/scala/breeze/stats/DescriptiveStats.scala#L537
used a different definition of percentile (?), or they might simply have a bug. I cannot tell.
BTW, this:
def calcPercentiles(percentiles: Seq[Double], arr: Array[Double]): Array[PercentileResult]
could be written much more easily like this:
def calcPercentiles(percentiles: Seq[Double], numbers: Seq[Double]): Seq[PercentileResult] =
  percentiles.map { p =>
    PercentileResult(p, calculatePercentile(numbers, p))
  }

Splitting a string of letters into 3s in swift

Swift newbie here. I am trying to convert some of my Python code to Swift, and I'm stuck at the point where I need to split a string of letters into an array with each item being 3 letters.
For example, my Python code is as follows:
name = "ATAGASSTSSGASTA"
threes = []
for start in range(0, len(name), 3):
    threes.append(name[start : start + 3])
print threes
For Swift, I've come as far as this:
var name = "ATAGASSTSSGASTA"
let namearr = Array(name)
let threes = []
threes.append(namearr[0...3])
This gives me an error.
I realize there may be a much easier way to do this, but I have not been able to find anything in my research. Any help is appreciated!
An easy and Swifty way to do this is to map an array of chars using the stride and advance functions:
let name = Array("ATAGASSTSSGASTA")
let splitName = map(stride(from: 0, to: name.count, by: 3)) {
    String(name[$0..<advance($0, 3, name.count)])
}
This is pretty verbose, but it does the job:
let name = "ATAGASSTSSGASTA"
let array = reduce(name, [String]()) {
    switch $0.last {
    case .Some(let last) where countElements(last) < 3:
        var array = $0
        array[array.endIndex - 1].append($1)
        return array
    case .Some(_), .None:
        return $0 + [String($1)]
    }
}
Edit: In Swift 1.2, I think countElements has changed to just count. Not sure, I don't have it yet, but the documents make it look that way.
var nucString = "aatttatatatattgctgatctgatctEOS"
let nucArrayChar = Array(nucString)
var nucArray: [String] = []
var counter: Int = nucArrayChar.count
if counter % 3 == 0 {
    for var startNo = 0; startNo < counter; startNo += 3 {
        println("\(nucArray)\(startNo)")
        nucArray.append(String(nucArrayChar[startNo...(startNo + 2)]))
    }
}
