I failed an exam because of one question. The task is:
"Design a program that converts any number from any system to decimal.
We confine ourselves to the systems in the range from 2 to 22."
So there I am. I know the binary[2], octal[8], decimal[10] and hexadecimal[16] systems. There's 1 point for each conversion system, so it has to be a converter:
2->10
3->10
...
22->10
I have no idea how that is possible. I asked my professor after the exam how to do it and he said: "Just x to the power of y, multiply, and there it is. There's the same rule for all of them."
I might be mistaken about what he said because I was in the post-exam state of consciousness. Do you guys have any idea how to solve it?
I see that there were a few questions like that on Stack Overflow already, but none of them solves the problem the way my professor described. Also, we started learning Python ~4 months ago and we haven't learned some of the features used in the replies.
"""IN
str/int, any base[2-22]
OUT
decimal int or float"""
The int() built-in function converts a string written in any base from 2 to 36 into a decimal integer. The string passed in must be a valid number in that base, or else it throws a ValueError.
Syntax: int('string', base) converts to decimal.
Example:
Conversion of the number 3334 from base 5 to decimal:
>>> int('3334',5)
469
Conversion of the number 3334 from base 9 to decimal:
>>> int('3334', 9)
2461
Conversion of the above result to a hexadecimal number:
>>> hex(int('3334', 9))
'0x99d'
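Since the original task only goes up to base 22, it is worth noting that int() accepts bases up to 36, with the letters a-l serving as the digits 10-21 needed for bases 11-22. A quick check with made-up example values:
>>> int('1a', 22)
32
>>> int('lk', 22)
482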
I just coded the answer but was too slow. This code follows exactly daTokenizer's solution:
def converter(number, base):
    # split the number into its figures (digits)
    figures = [int(i, base) for i in str(number)]
    # invert the order of figures (lowest place value first)
    figures = figures[::-1]
    result = 0
    # loop over all figures
    for i in range(len(figures)):
        # add the contribution of the i-th figure
        result += figures[i] * base**i
    return result
>>> converter(10, 22)
22
>>> converter(52, 16)
82
the basic stages are:
understand what base you are in (to my understanding this is given to you as a variable)
for each of the characters in the input number, multiply it by the base raised to the power of its position, so "654", base 17 -> 6*17^2 + 5*17^1 + 4*17^0 (worked out below)
the sum is your answer.
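Worked out: 6*17^2 + 5*17^1 + 4*17^0 = 1734 + 85 + 4 = 1823, so "654" read in base 17 is 1823 in decimal.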
If n is the number, to convert from base 'other' to decimal, try this:
>>> other2dec = lambda n, other: sum([(int(v) * other**i) for i, v in enumerate(list(str(n))[::-1])])
>>> other2dec(71,8)
57
>>> other2dec(1011,2)
11
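Note that this lambda calls int(v) on each character, so it only understands the digits 0-9; for bases 11-22 the letter digits (a-l) would raise a ValueError. A hedged variant of the same idea (my own tweak, not from the original answer) that delegates each character to int() with the base:
>>> other2dec = lambda n, base: sum(int(v, base) * base**i for i, v in enumerate(str(n)[::-1]))
>>> other2dec('1a', 22)
32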
I have a dataframe of floating point numbers, and I want to work with what I intuitively see as their precision, or number of digits past the decimal point:
dd = pd.DataFrame({'x':[12.123456,10.12345,9.1234]})
dd['digits'] = dd['x'].apply(lambda num: num - int(num))
dd['target'] = [6, 5, 4]
x           digits    target
12.123456   0.123456  6
10.123450   0.123450  5
9.123400    0.123400  4
My solution:
dd['precision'] = dd['x'].astype('str').str.split('.').str[1].str.len()
x           digits    target  precision
12.123456   0.123456  6       6
10.123450   0.123450  5       5
9.123400    0.123400  4       4
It works, but it's so ugly and will be difficult to recall in 3 months when I need it again. Is there a cleaner solution? If not, could someone share some insight the docs don't? What exactly is the data type output by each of these dotted steps? Some of them seem to operate on the series, whereas others operate on the individual values.
Perhaps this is a property of the series dtype? Or value metadata?
EDIT: I need this to run performantly as well. Is it possible to find a vectorized solution? I recall that "object" types in Pandas are pointers, in this case to string data, which sounds like it would make it very difficult to run calculations on more than one value at a time. Therefore, converting to string and accessing its values like:
.astype('str').str...
doesn't seem like the correct approach.
On the other hand, floating-point arithmetic used to count these digits without conversion sounds error-prone as well.
Here is another way to do it:
import pandas as pd
df = pd.DataFrame({"x": [12.123456, 10.12345, 9.1234]})
df["precision"] = df["x"].apply(
lambda x: [i for i in range(pd.options.display.precision + 1) if x == round(x, i)][0]
)
print(df)
# Output
           x  precision
0  12.123456          6
1  10.123450          5
2   9.123400          4
As per Pandas documentation, display.precision is an integer (6 by default) which represents the "floating point output precision in terms of number of places after the decimal, for regular formatting as well as scientific notation".
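To see the round-based check in isolation, here is a minimal sketch on a single value from the question's data (only pd.options.display.precision is read from pandas; the rest is plain Python):
import pandas as pd

x = 10.12345
# smallest number of places i such that rounding to i places leaves x unchanged
precision = next(i for i in range(pd.options.display.precision + 1) if x == round(x, i))
print(precision)
# Output: 5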
So, I have to create a Python script that, given two fractions and an operand, will print the result of the operation. This was intended to be solved by first asking for one fraction and saving it into a variable, then asking for the other fraction, and lastly asking for the operand. But out of curiosity I've tried to give this problem a different point of view.
My idea was to ask for the full operation and save the input string into a variable; then, with the function exec(), I could get the decimal result of the given operation. Finally, to deal with decimals, my idea was to multiply by 10 to the power of the number of decimal digits and put that over 10 to that same power; this way I could have a fraction as a result. So I went on to code and managed to program this, but my only issue is that the number of decimal digits is limited, so the result my script returns is usually a very big fraction that is very close to the real fraction. So I was wondering if there is any workaround for this. Here is my code and an example for further explanation:
op = input('Enter operation: ')
try:
    exec('a = ' + op)
except:
    print('Invalid operation')

def euclides(a, b):
    while a != 0 and b != 0:
        if a < b: b = b%a
        else: a = a%b
    if a == 0: return b
    elif b == 0: return a

print(f'{int(a*10**len(str(a).split(".")[1])/euclides(a*10**len(str(a).split(".")[1]),10**len(str(a).split(".")[1])))}/{int(10**len(str(a).split(".")[1])/euclides(a*10**len(str(a).split(".")[1]),10**len(str(a).split(".")[1])))}')
EXAMPLE:
op input => 4/3+5/7
Result of script => 5119047619047619/2500000000000000 = 2.04761904761
Result I'm looking for => 43/21 = 2.047619 repeating
Thank you for your help in advance
What are your constraints as to which standard or add-on modules you can use? Without taking into account constraints you haven't specified, there are much better ways to go about what you're doing. Your problem seems to be summed up by "the result that my script returns is a very big fraction" and your question seems to be "is there any workaround for this?". There are a number of workarounds, but it's pretty hard to guess which one is best for you, as you don't tell us what tools you can and can't use to accomplish your task.
As an example, here's an elegant solution if you can use regular expressions and the fractions module, and if you can assume that the input will always be in the very strict format of <int>/<int>+<int>/<int>:
import re
import fractions

op = input('Enter operation: ')
m = re.match(r"(\d+)/(\d+)\+(\d+)/(\d+)", op)
if not m:
    raise ValueError('Invalid operation')
gps = list(map(int, m.groups()))
f = fractions.Fraction(gps[0], gps[1]) + fractions.Fraction(gps[2], gps[3])
print(f)
print(float(f))
print(round(float(f), 6))
Result:
43/21
2.0476190476190474
2.047619
This answers your current question. I don't, however, know if this violates the terms of your assignment.
Could just turn all natural numbers into Fractions and evaluate:
>>> op = '4/3+5/7'
>>> import re, fractions
>>> print(eval(re.sub(r'(\d+)', r'fractions.Fraction(\1)', op)))
43/21
Works for other cases as well (unlike the accepted answer's solution, which only does the sum of exactly two fractions that must be positive and must not have spaces), for example:
>>> op = '-1/2 + 3/4 - 5/6'
>>> print(eval(re.sub(r'(\d+)', r'fractions.Fraction(\1)', op)))
-7/12
Checking:
>>> -7/12, -1/2 + 3/4 - 5/6
(-0.5833333333333334, -0.5833333333333334)
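Wrapped up as a small helper, this is a hedged sketch of the same substitution idea (the function name and test calls are my own; eval on arbitrary user input is only appropriate in a trusted setting such as this exercise):
import re
from fractions import Fraction

def eval_fraction_expr(op):
    # replace every integer literal with a Fraction so the arithmetic stays exact
    return eval(re.sub(r'(\d+)', r'Fraction(\1)', op))

print(eval_fraction_expr('4/3+5/7'))           # 43/21
print(eval_fraction_expr('-1/2 + 3/4 - 5/6'))  # -7/12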
I've been searching around for hours and I can't find a simple way of accomplishing the following.
Value 1 = 0.00531
Value 2 = 0.051959
Value 3 = 0.0067123
I want to increment each value at its smallest decimal place (however, the number must keep exactly as many decimal places as it started with, and the number of decimal places varies with each value, hence my trouble).
Value 1 should be: 0.00532
Value 2 should be: 0.051960
Value 3 should be: 0.0067124
Does anyone know of a simple way of accomplishing the above in a function that can still handle any number of decimals?
Thanks.
Have you looked at the standard module decimal?
It circumvents the floating point behaviour.
Just to illustrate what can be done.
import decimal
my_number = '0.00531'
mnd = decimal.Decimal(my_number)
print(mnd)
mnt = mnd.as_tuple()
print(mnt)
mnt_digit_new = mnt.digits[:-1] + (mnt.digits[-1]+1,)
dec_incr = decimal.DecimalTuple(mnt.sign, mnt_digit_new, mnt.exponent)
print(dec_incr)
incremented = decimal.Decimal(dec_incr)
print(incremented)
prints
0.00531
DecimalTuple(sign=0, digits=(5, 3, 1), exponent=-5)
DecimalTuple(sign=0, digits=(5, 3, 2), exponent=-5)
0.00532
or a full version (after the edit it also handles the carry, so it works on '0.199' as well)...
from decimal import Decimal, getcontext

def add_one_at_last_digit(input_string):
    dec = Decimal(input_string)
    getcontext().prec = len(dec.as_tuple().digits)
    return dec.next_plus()

for i in ('0.00531', '0.051959', '0.0067123', '1', '0.05199'):
    print(add_one_at_last_digit(i))
that prints
0.00532
0.051960
0.0067124
2
0.05200
As the other commenters have noted: you should not operate with floats, because a given number such as 0.1234 is converted into an internal binary representation and you cannot further process it the way you want. This is deliberately vaguely formulated; floating point is a subject in itself. This article explains the topic very well and is a good primer on the topic.
That said, what you could do instead is to keep the input as strings (i.e. do not convert it to float when reading the input). Then you could do this:
from decimal import Decimal

def add_one(v):
    after_comma = Decimal(v).as_tuple()[-1] * -1
    add = Decimal(1) / Decimal(10**after_comma)
    return Decimal(v) + add

if __name__ == '__main__':
    print(add_one("0.00531"))
    print(add_one("0.051959"))
    print(add_one("0.0067123"))
    print(add_one("1"))
This prints
0.00532
0.051960
0.0067124
2
Update:
If you need to operate on floats, you could try a fuzzy approach to get a close representation: decimal offers a normalize function which lets you reduce the precision of the decimal representation so that it matches the original number:
from decimal import Decimal, Context

def add_one_float(v):
    v_normalized = Decimal(v).normalize(Context(prec=16))
    after_comma = v_normalized.as_tuple()[-1] * -1
    add = Decimal(1) / Decimal(10**after_comma)
    return Decimal(v_normalized) + add
But please note that the precision of 16 is purely experimental, you need to play with it to see if it yields the desired results. If you need correct results, you cannot take this path.
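For example, a quick check (the calls and expected outputs below are my own illustration; with prec=16 the question's float inputs happen to come out as hoped, but as said above this is not guaranteed in general):
print(add_one_float(0.00531))   # 0.00532
print(add_one_float(0.051959))  # 0.051960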
I'm relatively new to Python programming, so excuse me if this question seems dumb, but I just can't find an answer for it.
How do I convert a number like, let's say, 1337 to 13,37€?
All I have for now is this code sample:
number = 1337
def price():
    Euro = 0
    if number >= 100:
        Euro = Euro + 1
But obviously I need to put this in some kind of for-loop, kinda like this:
number = 1337
def price():
    Euro = 0
    for 100 in number:
        Euro = Euro + 1
But this doesn't seem to work for me.
EDIT:
Also, how do I convert that number correctly? If I just divide by 100 I get 13.37, but what I want is 13,37€.
Because Python does not support currency symbols in numbers, you need to convert the number to a string and append the currency character ("€") to it, e.g. str(13.37) + "€":
def convertToCent(number):
    return '{:,.2f}€'.format(number / 100)
As @ozgur and @chepner pointed out, the so-called "true division" of two integers is only possible in Python 3:
x / y returns a reasonable approximation of the mathematical result of the division ("true division")
Python 2 only returns the actual floating-point quotient when one of the operands is a float.
The future division statement, spelled "from __future__ import division", will change the / operator to mean true division throughout the module.
For more information see PEP 238.
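One more hedged note: the format spec above still uses a dot as the decimal separator (13.37€). If the German-style "13,37€" from the question is wanted, a simple sketch (the helper below is my own illustration, not part of the answer) is to swap the separator afterwards:
def convert_to_euro(cents):
    # format cents as euros with two decimal places, then switch to a comma separator
    return '{:.2f}€'.format(cents / 100).replace('.', ',')

print(convert_to_euro(1337))  # 13,37€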
This shouldn't be hard to guess:
number = 1337

def price(number):
    return number / 100.0

print(price(number))
111111111111111111111111111111111111111111111111111111111111
When I take this as input, Python appends an L at the end, like this:
111111111111111111111111111111111111111111111111111111111111L
thus affecting my calculations on it. How can I remove it?
import math

t = raw_input()
l1 = []
a = 0
while (str(t) != "" and int(t) != 0):
    l = 1
    k = int(t)
    while (k != 1):
        l = l + 1
        a = (0.5 + 2.5*(k % 2))*k + k % 2
        k = a
    l1.append(l)
    t = raw_input()
    a = a + 1
for i in range(0, int(a)):
    print l1[i]
This is my code, and it works for every test case except 111111111111111111111111111111111111111111111111111111111111,
so I guess something goes wrong when Python handles such a huge number.
It looks like there are two distinct things happening here. First, as the other posters have noted, the L suffix simply indicates that Python has converted the input value to a long integer. The second issue is on this line:
a = (0.5 + 2.5*(k % 2))*k + k % 2
This implicitly computes (0.5 + 2.5*(k % 2))*k as a floating point number. Since floats only have 53 bits of precision, the result is incorrect due to rounding. Try refactoring the line to avoid floating point math, like this:
a = (1 + 5*(k % 2))*k//2 + k % 2
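To make the rounding concrete, here is a quick check with the problematic input from the question (the comparisons are my own illustration):
k = 111111111111111111111111111111111111111111111111111111111111
print(3.0 * k == 3 * k)                          # False: the float product is rounded to 53 bits
print((1 + 5*(k % 2))*k//2 + k % 2 == 3*k + 1)   # True: pure integer arithmetic stays exact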
It's being input as a Long Integer, which should behave just like any other number in terms of doing calculations. It's only when you display it using repr (or something that invokes repr, like printing a list) that it gets the 'L'.
What exactly is going wrong?
Edit: Thanks for the code. As far as I can see, giving it a long or short number makes no difference, but it's not really clear what it's supposed to do.
As RichieHindle noticed in his answer, it is being represented as a Long Integer. You can read about the different ways that numbers can be represented in Python at the following page.
When I use numbers that large in Python, I see the L at the end of the number as well. It shouldn't affect any of the computations done on the number. For example:
>>> a = 111111111111111111111111111111111111111
>>> a + 1
111111111111111111111111111111111111112L
>>> str(a)
'111111111111111111111111111111111111111'
>>> int(a)
111111111111111111111111111111111111111L
I did that at the Python interactive prompt. When the number is echoed there, it is shown with the long-integer suffix, but that shouldn't affect any of your computations. The link I reference above specifies that long integers have unlimited precision. So cool!
Another way to avoid numerical errors in Python is to use the Decimal type instead of the standard float.
Please refer to the official docs.
Are you sure that L is really part of it? When you print such large numbers, Python will append an L to indicate it's a long integer object.