I have a pandas data frame with columns of data type float64.
I would like to increase the floating point precision to 200. I know you can do this with the BigFloat library.
I'm not sure what the best way is to increase the precision of floating point numbers in pandas.
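For context, a minimal sketch of one common workaround, using the standard decimal module rather than BigFloat, since pandas itself has no arbitrary-precision dtype; the column names and values here are made up for illustration:

from decimal import Decimal, getcontext
import pandas as pd

getcontext().prec = 200  # 200 significant digits for Decimal arithmetic

df = pd.DataFrame({"x": [1.1, 2.2, 3.3]})  # ordinary float64 column

# pandas has no 200-digit float dtype, so the values live in an object column;
# each element is a Decimal that carries up to 200 significant digits.
df["x_hp"] = df["x"].apply(lambda v: Decimal(str(v)))

print(df["x_hp"].iloc[0] / 3)  # arithmetic now runs at 200-digit precision

The trade-off is that object columns give up the vectorized float64 performance pandas normally provides.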
Related
When I use csv.write_csv(table, "animals.csv"), the resulting floating point values look like
1.12999999999
Is there an option similar to pandas' float_format parameter (data.to_csv(target, index=False, float_format='%g'))?
The reason I don't use pandas directly is that pyarrow is faster at exporting CSV.
I'm looking for a way to export CSV that is faster than pandas and also solves the floating point precision problem.
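For reference, a minimal sketch of one possible workaround, assuming a reasonably recent pyarrow where pyarrow.compute.round is available; as far as I know write_csv itself has no float_format-style option, so the values are rounded before export (the table contents and the 6-digit choice are made up):

import pyarrow as pa
import pyarrow.compute as pc
import pyarrow.csv as csv

table = pa.table({"name": ["cat", "dog"], "weight": [1.12999999999, 2.5000000001]})

# Round every float column to 6 digits before export; other columns pass through.
rounded = pa.table({
    col: pc.round(table[col], ndigits=6)
    if pa.types.is_floating(table[col].type) else table[col]
    for col in table.column_names
})

csv.write_csv(rounded, "animals.csv")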
It seems that for values with a fixed range, it is recommended in machine learning to min-max scale them to between -1 and 1. What data type should be used for such values? It seems like float[n] or double are the only options, but that also seems like it would be memory inefficient as a large portion of the available bits would never be used. Is this a meaningful concern in practice?
but that also seems like it would be memory inefficient as a large portion of the available bits would never be used
That's not how floating point works. It might be a concern for a fixed-point representation, but not floating point.
About half of all floating-point representable values have magnitude less than 1. Effectively, you're only sacrificing 1 bit worth of representable information per value by scaling values this way. (You can think of it as sacrificing the most significant bit of the exponent in the representation.)
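A quick illustration of that claim (not part of the original answer): for positive floats, consecutive float32 bit patterns correspond to consecutive representable values, so counting bit patterns counts representable values.

import numpy as np

# Reinterpret float32 bit patterns as unsigned ints.
one     = np.array([1.0], dtype=np.float32).view(np.uint32)[0]
max_fin = np.array([np.finfo(np.float32).max], dtype=np.float32).view(np.uint32)[0]

below_one      = int(one) - 1   # positive finite values strictly below 1.0
total_positive = int(max_fin)   # all positive finite float32 values

print(below_one / total_positive)  # ~0.498: roughly half lie in (0, 1)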
I'm trying this notebook, but with float values:
https://github.com/erdogant/bnlearn/blob/master/notebooks/bnlearn.ipynb
Has anyone used "structure_learning.fit()" from bnlearn with float numbers?
My chart is blank. When I run a simple correlation on my DataFrame, I get results, so it is not a DataFrame problem.
Another hint supporting my hypothesis: when I transform my floats to binary values, it works.
bnlearn in Python only works with binary values, not with continuous ones. This library is an adaptation of an R library, so not everything has been ported. Currently, P(A|B) can be computed only for binary problems in this library. Please check the math of P(A|B) to understand why.
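For what it's worth, a sketch of the workaround this implies: discretize the float columns before calling structure_learning.fit. The DataFrame contents and the 3-bin choice are made up, and it is an assumption that the library accepts integer-coded categories this way:

import pandas as pd
import bnlearn

# Hypothetical continuous data
df = pd.DataFrame({
    "a": [0.10, 0.42, 0.35, 0.80, 0.65],
    "b": [1.20, 3.40, 2.20, 0.90, 2.80],
})

# Bin each float column into 3 integer-coded categories so the library sees
# discrete data instead of continuous values.
df_discrete = df.apply(lambda col: pd.cut(col, bins=3, labels=False))

model = bnlearn.structure_learning.fit(df_discrete)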
I am using a function that multiplies probabilities, thereby creating very small values. I am using the decimal.Decimal module to handle them, and when the computation is complete I convert the Decimal to a log of odds using the math.log function. But below a certain probability, Python cannot convert these very small probabilities to a log2 or log10 likelihood ratio.
I am getting ValueError: math domain error
So, I printed the value before the traceback started and it seems to be this number:
2.4876626750969332485460767406646530276378975654773588506772125620858727319570054153525540357327805722211631386444621446226193195409521079089382667946955357511114536197822067973513019098983691433561051610219726750413489309980667312714519374641433925197450250314924925500181809328656811236486523523785835600132361529950090E-366
Other small numbers like this one are handled by math.log in the same program, though:
5.0495856951184114023890172277484001329118412629157526209503867218204386939259819037402424581363918720565886924655927609161379229574865468595907661385853201472751861413845827437245978577896538019445515183910587509474989069747817303700894727201121392323641965506674606552182934813779310061601566189062725979740753305935661E-31
Is this expected? Is there any way to fix it? I know I could take the log of the probabilities and sum them along the way, but when I tried that, it seemed I would have to update several places in my program, which could take hours or days, and there would be another step to convert the result back to a Decimal.
Thanks,
If you want to take logarithms of Decimal objects, use the ln or log10 methods. Aside from a weird special case for huge ints, math.log casts inputs to float.
whatever_decimal.ln()
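A small demonstration of the difference (the shortened constant below just stands in for the long probability above):

from decimal import Decimal
import math

tiny = Decimal("2.48766e-366")  # far below the smallest positive float (~5e-324)

# math.log converts its argument to float first; this value underflows to 0.0,
# so math.log raises "ValueError: math domain error".
try:
    math.log(tiny)
except ValueError as exc:
    print("math.log failed:", exc)

# Decimal's own methods stay inside Decimal, so magnitude is not a problem.
print(tiny.ln())      # natural log, as a Decimal
print(tiny.log10())   # base-10 log, as a Decimal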
In Python 2.7, I need to record high precision floats (such as np.float64 from numpy or Decimal from the decimal module) to a binary file and later read them back. How could I do it? I would like to store only the bit image of a high precision float, without any overhead.
Thanks in advance!
The struct module can handle 64 bit floats. Decimals are another matter - their binary representation is a string of digits, which is probably not what you want. You could convert it to BCD to halve the amount of storage.
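A sketch of the struct route for np.float64 values (the filename is made up); each value is stored as exactly 8 bytes and read back bit-for-bit:

import struct
import numpy as np

value = np.float64(3.141592653589793)

# Pack the 64-bit float as a little-endian IEEE 754 double: exactly 8 bytes.
with open("value.bin", "wb") as f:
    f.write(struct.pack("<d", value))

with open("value.bin", "rb") as f:
    restored = struct.unpack("<d", f.read(8))[0]

assert restored == value  # bit-exact round trip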
Without further details, I'd just store a compressed pickled representation of the data. It will record the data and read it back exactly as it was and will not "waste" bits.
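And a sketch of the compressed-pickle route (filename and values are made up; protocol 2 keeps the file readable on Python 2.7):

import gzip
import pickle
from decimal import Decimal

data = [Decimal("1.12345678901234567890123456789"), Decimal("2.5e-400")]

# gzip-compressed pickle: Decimal objects round-trip exactly, digits and all.
with gzip.open("data.pkl.gz", "wb") as f:
    pickle.dump(data, f, protocol=2)

with gzip.open("data.pkl.gz", "rb") as f:
    restored = pickle.load(f)

assert restored == data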