Convert scientific notation string into int - python

How would I convert:
s='8.833167174e+11' (str)
==> 883316717400 (int)
I tried int(s) and some other 'casting' approaches, but none of them worked.

Since your string represents a float, you need to convert it to a float first:
>>> int(float(s))
883316717400
float([x])
Return a floating point number constructed from a number or string x.
If the argument is a string, it must contain a possibly signed decimal or floating point number, possibly embedded in whitespace. The argument may also be [+|-]nan or [+|-]inf. Otherwise, the argument may be a plain or long integer or a floating point number, and a floating point number with the same value (within Python’s floating point precision) is returned. If no argument is given, returns 0.0.
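If the exact integer value matters for inputs with more than float's ~15-17 significant digits, decimal.Decimal parses the string exactly; a minimal sketch:

```python
from decimal import Decimal

s = '8.833167174e+11'

# int(float(s)) is fine here, but float keeps only ~15-17 significant
# digits; Decimal parses the decimal string exactly, so it is safer
# for very large or very precise inputs.
print(int(float(s)))    # 883316717400
print(int(Decimal(s)))  # 883316717400
```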

Related

Clean way to convert string to floating point number with specific precision?

I'm trying to convert strings of numbers that come from the output of another program into floating point numbers with two forced decimal places (including trailing zeros).
Right now I'm converting the strings to floats, then separately specifying precision (two decimal places), then converting back to floats for numeric comparisons later.
# convert to float
float1 = float(output_string[6])
# this doesn't guarantee two decimal places in my output
# eg: -36.55, -36.55, -40.34, -36.55, -35.7 (no trailing zero on the last number)
nice_float = float('{0:.2f}'.format(float1))
# this works but then I later need to convert back into a float
# string->float->string->float is not super clean
nice_string = '{0:.2f}'.format(float1)
Edit for clarity:
I have a problem with the display in that I need that to show exactly two decimal places.
Is there a way to convert a string to a floating point number rounded to two decimal places that's cleaner than my implementation which involves converting a string to a float, then the float back into a formatted string?
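One way to do this in a single step (not from the thread itself) is decimal.Decimal.quantize, which rounds once and remembers trailing zeros when displayed; to_two_places is a hypothetical helper name:

```python
from decimal import Decimal, ROUND_HALF_UP

def to_two_places(s):
    # Round the parsed value to exactly two decimal places; Decimal
    # keeps trailing zeros, so str() shows '-35.70', not '-35.7'.
    return Decimal(s).quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)

print(to_two_places('-35.7'))  # -35.70
```

The result still supports numeric comparisons directly (Decimal('-35.70') < Decimal('-35.5') works), so no round-trip through strings is needed.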

Taking the exponent of a SHA3 hash function

I am trying to implement a protocol described in the paper Private Data Aggregation with Groups for Smart Grids in a Dynamic Setting using CRT in python.
In order to do this, I need to calculate the following value:
I know that since python 3.6, you can calculate a SHA3 value as follows:
import hashlib
hash_object = hashlib.sha3_512(b'value_to_encode')
hash_value = hash_object.hexdigest()
I was wondering how to solve this, since, as far as I know, a SHA-3 function returns a string and therefore cannot be raised to the power of n in a formula.
What am I overlooking?
If we define a hash function $H: \{0, 1\}^* \rightarrow \{0, 1\}^n$, that is, one that produces an $n$-bit output, we can always interpret the binary data $h$ that it outputs as an integer. The integer value of this digest is $\sum_{i=0}^{n-1} h_i 2^i$; in other words, the digest is a base-2 representation of the integer.
In your case, since python has a notion of types, we need to take the binary string and convert it to an integer type. The builtin int function can do this for us:
int(x=0) -> integer
int(x, base=10) -> integer
Convert a number or string to an integer, or return 0 if no arguments
are given. If x is a number, return x.__int__(). For floating point
numbers, this truncates towards zero.
If x is not a number or if base is given, then x must be a string,
bytes, or bytearray instance representing an integer literal in the
given base. The literal can be preceded by '+' or '-' and be surrounded
by whitespace. The base defaults to 10. Valid bases are 0 and 2-36.
Base 0 means to interpret the base from the string as an integer literal.
>>> int('0b100', base=0)
4
The hexdigest call will return a hex string which is base 16, so you would want to do something like int_value = int(hash_value, 16).
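Putting it together, a short sketch (the exponent and modulus below are arbitrary illustrations, not values from the paper):

```python
import hashlib

hash_object = hashlib.sha3_512(b'value_to_encode')

# Either parse the hex digest in base 16 ...
int_value = int(hash_object.hexdigest(), 16)

# ... or interpret the raw digest bytes directly; both agree.
assert int_value == int.from_bytes(hash_object.digest(), 'big')

# The integer can now be used in ordinary arithmetic, e.g. modular
# exponentiation via the three-argument pow().
result = pow(int_value, 2, 2**61 - 1)
```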

struct.unpack with precision after decimal points

I am reading data from a binary file; it contains floating-point data of which I want only the first 6 digits after the decimal point, but it's printing a pretty long string.
self.dataArray.append(struct.unpack("f", buf)[0])
I tried with this
self.dataArray.append(struct.unpack(".6f", buf)[0])
But it didn't work.
Thanks in advance
A float isn't a string and a string isn't a float.
All a float is, is a number of bytes interpreted as a whole-number part and a fractional part. struct.unpack has no precision modifier; limiting the digits is a formatting step, done when you turn the float into a string:
the_float = struct.unpack("f", buf)[0]
print("The Float String %0.6f" % the_float)
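A self-contained sketch of the same idea (packing a known value stands in for the file read from the question):

```python
import struct

# In the question `buf` comes from a binary file; pack a known value
# here so the example runs on its own.
buf = struct.pack("f", 3.14159265)

the_float = struct.unpack("f", buf)[0]

# Limit the digits only when displaying, or round the value itself
# if the 6-digit version is needed for later arithmetic.
print("%0.6f" % the_float)  # 3.141593
print(round(the_float, 6))  # 3.141593
```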

append integer with comma in python

I have a function that assigns a number to a variable and then appends this number as an integer to a list. The numbers assigned may or may not have a comma.
for number in values:
list_of_values.append(int(number))
#do a few calculations for some of the numbers in the list
But this just creates a list where each number is truncated to an integer. How can I append the number and still retain its "true" value, without losing the fractional part?
edit:
sample values:
"0", "2", "1.5", "0.5", ...
If you wanted to represent real numbers (numbers with decimals behind a decimal point, or, in some locales, after the comma), then you should not use int() to represent these.
Use float() or decimal.Decimal() to represent the numbers instead:
list_of_values.append(float(number))
int() represents the number as an integer number, which by definition do not have a decimal component. If you don't want rounded numbers, don't use integers.
Whether you pick float() or decimal.Decimal() depends on your precision and performance needs. float() arithmetic can be handled in CPU hardware but are less precise (decimals are approximated using binary fractions), decimal.Decimal() preserves precision but arithmetic is slower.
The integer datatype cannot hold fractional values, so you could use the float datatype instead:
list_of_values.append(float(number))
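With the sample values from the question, a quick sketch of the trade-off described above:

```python
from decimal import Decimal

values = ["0", "2", "1.5", "0.5"]

# float: fast, hardware-backed, but stores decimals as binary fractions.
as_floats = [float(v) for v in values]

# Decimal: exact decimal representation, slower arithmetic.
as_decimals = [Decimal(v) for v in values]

print(as_floats)         # [0.0, 2.0, 1.5, 0.5]
print(sum(as_decimals))  # 4.0
```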

Format python decimal as [-.0-9]*, no trailing zeros, value-preserving

I'm looking for a method which takes a decimal.Decimal object x and returns a string s, such that:
x == Decimal(s)
re.match(r"^-?(0|[1-9][0-9]*)(\.[0-9]*[1-9])?$", s) is not None
s != '-0'
In other words, the method doesn't change the value of the Decimal object; it returns a string representation which is never in scientific notation (e.g. 1e80), and never has any trailing zeros.
I would assume there is a standard library function which lets me do that, but I haven't found any. Do you know of any?
n.normalize() truncates decimals with more than 28 digits of precision, but you can use string formatting and a manual check for negative zero:
'{:f}'.format(abs(n) if n.is_zero() else n)
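Wrapped in a helper (plain_str is a hypothetical name), the suggestion above handles the scientific-notation and negative-zero cases; trailing zeros in the Decimal's own representation would still need separate stripping:

```python
from decimal import Decimal

def plain_str(n):
    # '{:f}' always produces fixed-point notation (never '1E+80'),
    # and abs() on a zero avoids the string '-0'.
    return '{:f}'.format(abs(n) if n.is_zero() else n)

print(plain_str(Decimal('1e3')))  # 1000
print(plain_str(Decimal('-0')))   # 0
```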
