I just need to know how to put a numerical value such as 1.5x10^15 into JSON. I assumed the same syntax as Python would work, but JSON doesn't like the *s, it seems.
1.5x10^15 isn't a "numerical value," it's an expression. You could put that numerical value in JSON ({"value":1500000000000000}, or {"value":1.5e15} also works), but JSON has no syntax for expressions.
You can use exponential notation in JSON. RFC 7159, section 6 (Numbers), says:
A number is represented in base 10 using decimal digits. It
contains an integer component that may be prefixed with an optional
minus sign, which may be followed by a fraction part and/or an
exponent part.
So you could use something like 1E400 in theory, although keep in mind that different implementations will have different limits.
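For example, a quick round-trip with Python's standard json module (just a sketch to show the notation is plain JSON, not an expression):

import json

doc = json.loads('{"value": 1.5e15}')
print(doc["value"])                    # 1500000000000000.0, parsed as an ordinary float
print(json.dumps({"value": 1.5e15}))   # {"value": 1500000000000000.0}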
I am trying to return a number with 6 decimal places, regardless of what the number is.
For example:
>>> a = 3/6
>>> a
0.5
How can I take a and make it 0.500000 while preserving its type as a float?
I've tried
'{0:.6f}'.format(a)
but that returns a string. I'd like something that accomplishes this same task, but returns a float.
In the computer's memory, the float is stored as an IEEE 754 value; that means it's just binary data laid out in a given format, nothing like the decimal string of the number as you write it.
So while you manipulate it, it's still a float and has no particular number of digits after the decimal point. It only gets one when you display it, and whatever you do, displaying it means converting it to a string.
It's at that conversion to a string that you can specify the number of decimals to show, and you do it using the string formatting you already wrote.
This question shows a slight misunderstanding on the nature of data types such as float and string.
A float in a computer has a binary representation, not a decimal one. The decimal rendering Python gives you in the console is a string produced when the value is printed, even though the conversion is implicit in the print function. There is no difference between how 0.5 and 0.500000 are stored as floats in their binary representation.
When you are writing application code, it is best not to worry about the presentation until it gets to the end user where it must, somehow, be converted to a string if only implicitly. At that point you can worry about decimal places, or even whether you want it shown in decimal at all.
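A small sketch of the point both answers make: the float itself never carries a number of decimal places; only its string rendering does.

a = 3 / 6
print(a)                                 # 0.5
print('{0:.6f}'.format(a))               # 0.500000 -- a string, for display only
print(float('{0:.6f}'.format(a)) == a)   # True: converting back yields the very same float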
I want to add and subtract this type of data: $12,587.30, and get the answer back in the same format. How can I do this?
Here is my code example:
print(int(col_ammount2.lstrip('$'))-int(col_ammount.lstrip('$')))
I removed the $ sign and converted the value to int, but it gives me a base-10 error.
You mentioned you want to do arithmetic operations on the numbers (addition/subtraction), so you probably want them as floats instead. The difference between an integer (int) and a float is that integers do not carry decimal points.
Additionally, as @officialaimm mentioned, you need to remove the commas too. For example,
float('$3,333.33'.replace('$', '').replace(',', ''))
will give you
3333.33
So putting it into your code
print(float(col_ammount2.lstrip('$').replace(',', ''))
- float(col_ammount.lstrip('$').replace(',', '')))
An additional note for when you parse a floating-point number (the same applies to integers too): you may want to watch out for empty values, i.e.
float('')
is bad. One thing you can do, in case col_ammount and col_ammount2 may be empty at some point, is default them to 0 when that happens:
float(col_ammount.lstrip('$').replace(',', '') or 0)
You may also want to read this to learn about workarounds for the problems you can face with floating-point arithmetic: https://docs.python.org/3/tutorial/floatingpoint.html
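If exact cents matter, another option worth knowing about is the standard decimal module, which avoids binary-float rounding entirely; a minimal sketch (parse_money is just an illustrative name):

from decimal import Decimal

def parse_money(text):
    # Strip the currency symbol and thousands separators; Decimal keeps cents exact.
    return Decimal(text.lstrip('$').replace(',', '') or '0')

print(parse_money('$12,587.30') - parse_money('$1,000.05'))   # 11587.25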
There are two things you are missing here. Firstly, Python's int(...) cannot parse numbers containing commas, so you will need to remove those as well, using .replace(',',''). Secondly, int() cannot parse floating-point values; you will have to use float(...) first and then convert to an integer with int(), math.ceil(), or math.floor(), whichever suits your needs.
Maybe something like this will solve your problem:
col_ammount2='$1,587.30'
col_ammount = '$2,567.67'
print(int(float(col_ammount2.lstrip('$').replace(',','')))-int(float(col_ammount.lstrip('$').replace(',',''))))
If you are doing this sort of thing often in your code, a small helper function might be handy:
integerify_currency = lambda x:int(float(x.lstrip('$').replace(',','')))
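Since the question also asks for the result back in the same $x,xxx.xx format, here is a hedged sketch of the reverse step (format_currency is just an illustrative name; the ',' in the format spec adds the thousands separators):

def parse_currency(text):
    # '$12,587.30' -> 12587.3
    return float(text.lstrip('$').replace(',', '') or 0)

def format_currency(amount):
    # 11587.25 -> '$11,587.25'
    return '${:,.2f}'.format(amount)

print(format_currency(parse_currency('$12,587.30') - parse_currency('$1,000.05')))   # $11,587.25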
I've noticed that the pyxb decimal datatype doesn't preserve trailing zeroes when it renders to XML. The culprit is a call to normalize() in the following line of the XsdLiteral function, in line 159 of binding/datatypes.py:
(sign, digits, exponent) = value.normalize().as_tuple()
(where value is an instance of Python's Decimal). This is a bit of a problem for me, because the web service I am trying to interact with requires version numbers of the form X.000, and pyxb truncates that to X.0.
Is this expected behavior? or required by some standard? Do other XML schema-generating libraries do this as well? My solution right now is to use string instead, but the code would be easy to change if it doesn't break anything.
The official PyXB response is here, but from the description of the canonical representation of an xs:decimal value:
Leading and trailing zeroes are prohibited subject to the following:
there must be at least one digit to the right and to the left of the
decimal point which may be a zero.
Also in the description of decimal itself:
Precision is not reflected in this value space; the number 2.0 is not
distinct from the number 2.00.
The service provider is nonconformant by requiring the trailing zeros be provided.
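For reference, a small sketch of the normalize() behaviour in question, plus one way to restore a fixed number of places with quantize() (plain decimal-module behaviour, not a PyXB API):

from decimal import Decimal

value = Decimal('2.000')
print(value.normalize())                  # 2 -- trailing zeros dropped
print(value.normalize().as_tuple())       # DecimalTuple(sign=0, digits=(2,), exponent=0)
print(value.quantize(Decimal('0.001')))   # 2.000 -- three decimal places back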
I have no experience with pyxb, but my guess is that in general one wants XML to be as compact as possible, so decimals are truncated to save bytes.
This does not seem like a typical use of a decimal. I gather a decimal is meant to hold a mathematical-numerical value, for which dropping trailing zeros is always harmless. Since your case is unusual, the module was probably not designed to do what you want.
I've got a string like x='0x08h, 0x0ah' in Python and want to convert it to [8, 10] (like unsigned ints). I could split and index it like [int(a[-3:-1],16) for a in x.split(', ')], but is there a better way to convert it to a list of ints?
Would it matter if I had y='080a'?
edit (for plus points :) ): what (sane) string-based hexadecimal notations does Python support, and which not?
You really have to know what the pattern you're trying to parse is, before you write a parser.
But it looks like your pattern is: optional 0x, then hex digits, then optional h. At least that's the most reasonable thing I can come up with that handles both '0x08h' and '080a'. So:
def parse_hex(s):
    return int(s.lstrip('0x').rstrip('h'), 16)
Then:
numbers = [parse_hex(s) for s in x.split(', ')]
Of course you don't actually need to remove the 0x prefix, because Python accepts that as part of a hex string, so you could write it as:
def parse_hex(s):
    return int(s.rstrip('h'), 16)
However, I think the intention is clearer if you're more explicit.
From your edit:
edit: what (sane) string-based hexadecimal notations does Python support, and which not?
See the documentation for int:
Base-2, -8, and -16 literals can be optionally prefixed with 0b/0B, 0o/0O, or 0x/0X, as with integer literals in code.
That's it. (If you read the rest of the paragraph: if you're guaranteed to have the 0x/0X prefix, you can pass base=0 and let Python infer the base instead of spelling out base=16, but that doesn't help you here, so that one sentence is really all you need.) The docs on Numeric Types and Numeric literals detail exactly what "as with integer literals in code" means; the only surprising things there are that negative numbers aren't literals, complex numbers aren't literals (but pure imaginary numbers are), and non-ASCII digits can be used but the documentation doesn't explain how.
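A few quick checks of that behaviour (assuming CPython 3; results in the comments):

print(int('ff', 16))     # 255
print(int('0xff', 16))   # 255 -- the 0x/0X prefix is allowed when base=16
print(int('0xff', 0))    # 255 -- base=0 infers the base from the literal's prefix
# int('ffh', 16) raises ValueError: the trailing 'h' notation is not supported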
You can also use map (wrap it in list() on Python 3, where map returns an iterator):
list(map(lambda s: int(s.lower().replace('0x', '').replace('h', ''), 16), x.split(', ')))
So I have a list of tuples of two floats each. Each tuple represents a range. I am going through another list of floats, which represent values to be fit into the ranges. All of these floats are < 1 but positive, so precision matters. One of my tests to determine whether a value fits into a range is failing when it should pass. If I print the value and the range that is causing problems, I can tell this much:
curValue = 0.00145000000671
range = (0.0014500000067055225, 0.0020968749796738849)
The conditional that is failing is:
if curValue > range[0] and ... blah :
# do some stuff
From the values given by curValue and range, the test should clearly pass (don't worry about what else is in the conditional). Now, if I explicitly print the value of range[0], I get:
range[0] = 0.00145000000671
Which would explain why the test is failing. So my question is: why does the float change when it is accessed? It appears to have one precision when it is part of a tuple and a different precision when accessed on its own. Why would that be? What can I do to ensure my data maintains a consistent amount of precision across my calculations?
The float doesn't change. The built-in numeric types are all immutable. The cause of what you're observing is that:
print range[0] uses str on the float, which (up until fairly recent versions of Python) printed fewer digits of a float.
Printing a tuple (be it with repr or str) uses repr on the individual items, which gives a much more accurate representation (again, this is no longer true in recent releases, which use a better algorithm for both).
As for why the condition doesn't work out the way you expect, it's probably the usual culprit, the limited precision of floats. Try print repr(curValue), repr(range[0]) to see what Python decided was the closest possible representation of your float literal.
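An illustrative sketch of that str()/repr() difference (the rounding described applies to Python 2; Python 3 prints the same shortest round-tripping form for both):

value = 0.0014500000067055225
print(str(value))    # Python 2: 0.00145000000671 (rounded to 12 significant digits)
print(repr(value))   # 0.0014500000067055225 (enough digits to round-trip)
# The stored float never changes; only how many digits each conversion shows.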
Floats on modern PCs aren't arbitrarily precise. Even if you enter pi as a constant to 100 decimals, only the first 15-17 significant digits survive. The same is happening to you. Python floats are IEEE 754 double precision, so you only get 53 bits of mantissa, which limits your precision (and in unexpected ways, because it's base 2).
Please note, 0.00145000000671 isn't the exact value as stored by Python. Python only displays some of the digits of the complete stored float when you use print. If you want to see exactly what Python stores, use repr.
If you want better precision, use the decimal module.
It isn't changing, per se. Python is doing its best to store the data as a float, but that number is too precise for a float, so Python rounds it before it is even accessed (in the very process of storing it). Funny how something so small is such a big pain.
You need to use an arbitrary-precision fixed-point module like Simple Python Fixed Point, or the decimal module.
I'm not sure it would work in this case, because I don't know whether Python is limiting the output or the stored value itself, but you could try doing:
if curValue - range[0] > 0 and...