In Python, the += operation is a right action, meaning that a += b is equivalent to a = a + b.
Since this operation is not commutative for strings, it raises the question of whether there is a similar operator for a left action, i.e. some operator (or other hack), say %=, such that a %= b does a = b + a?
Addendum
The solutions so far, except for the obvious a = b + a, involve overriding the str.__add__ method which, as pointed out by @BrianJoseph, is not quite what I had in mind, since it merely shifts the problem to the other extreme.
The following workaround, involving this amazing hack, illustrates the behaviour I was seeking.
Prelims
# -------------------------------------------------------------
# The following class can be found in Tomer Filiba's blog
# (link provided in the question)
from functools import partial

class Infix(object):
    def __init__(self, func):
        self.func = func
    def __or__(self, other):
        return self.func(other)
    def __ror__(self, other):
        return Infix(partial(self.func, other))
    def __call__(self, v1, v2):
        return self.func(v1, v2)
# -------------------------------------------------------------
# Custom Class
class my_str(str):
    def __init__(self, string):
        self.string = string
    def __str__(self):
        return self.string.__str__()
    def __repr__(self):
        return self.string.__repr__()
    # Infix
    def left_assign(self, string):
        self.string = string + self.string
Example
# Testing
a = my_str('World')
b = 'Hello'
print(a)
# World
a |my_str.left_assign| b
print(a)
# HelloWorld
Of course the line a |my_str.left_assign| b is not exactly easier to write than a = b + a, but this was just an example for illustration.
Finally, for those to whom my unedited question may have been unclear: I am (was) wondering whether a = b + a can be done writing a just once (analogously to a += b for a = a + b).
If you're asking if there's a single operator to prepend strings (instead of appending them like +=), I don't think there is one. Writing out:
b = a + b
is the most succinct way I know of to prepend a onto b.
(Interestingly, because string appending is non-commutative, Larry Wall (the creator of Perl) chose to use . as the string-appending operator, so as to leave + completely commutative and mathematical, in that a += b means both a = a+b and a = b+a. Unless you explicitly overload it, of course.)
Short answer is no.
The long answer:
You can create your own class based on str and override some operator action.
class A(str):
    def __add__(s, st):
        return st + s
This one will work as:
>>> A(50)
'50'
>>> A(50) + 'abc'
'abc50'
>>> a = A('aaa')
>>> a += 'ccc'
>>> a
'cccaaa'
But you will definitely need to study the documentation on overriding "magic" methods such as __add__ to be sure you implement the right behaviour, because there are many edge cases in which the current implementation does not work well. For example, it raises a RecursionError if both operands are instances of the A class.
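As a sketch of how to avoid that pitfall (the class name B is my own, not from the answer): converting both operands to plain str before concatenating keeps the result from dispatching back into the overridden __add__, so two instances can be added safely.

```python
class B(str):
    def __add__(self, other):
        # Swap the operands; returning a plain str avoids dispatching
        # back into this method when both sides are B instances.
        return str(other) + str(self)

print(B('aaa') + B('ccc'))  # cccaaa
```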
Honestly, all this stuff is not very good practice: it goes against the Zen of Python and may cause headaches for other programmers who have to work with this code. So all of this is nothing more than a fun experiment. For a really convenient solution, see the short answer.
P.S.: Of course, you can override some other operator instead of +. For example, __mul__ corresponds to *. You can even override bitwise operators such as << and &.
P.P.S.: The operator you mentioned, %=, really exists. Not many people know about it, but it is shorthand for a = a % b, which is very useful for formatting strings:
a = 'Some number: %d; some string: %s'
a %= 1, 'abc'
print(a)
Will give you Some number: 1; some string: abc
No, there is no such operation. The complete list of operators can be found in the Python language reference, starting at section 6.5.
You could define another class inheriting from str:
class myStr(str):
    # str is immutable, so no __init__ override is needed
    def __add__(self, other):
        return other + self
s = myStr("abc")
print(s) #prints 'abc'
s += "d"
print(s) #prints 'dabc'
I don't see any use case frankly, though.
Short answer is no. Read here. The += is called augmented assignment.
It is implemented for the common binary operators in Python: "+=" | "-=" | "*=" | "@=" | "/=" | "//=" | "%=" | "**=" | ">>=" | "<<=" | "&=" | "^=" | "|="
You can change the workings of the augmented assignment operation by changing the way the operation is computed at class level, e.g. in your example:
class Foo(str):
    def __add__(s, other):
        return other + s
Although I would not recommend it.
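Note that the augmented assignment itself can also be customized directly via __iadd__, which is arguably more targeted than redefining __add__ (the class name PrependStr is invented for this sketch):

```python
class PrependStr(str):
    def __iadd__(self, other):
        # a += b rebinds a to b + a instead of a + b
        return PrependStr(other + str(self))

a = PrependStr("World")
a += "Hello"
print(a)  # HelloWorld
```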
Related
Question
Is there any way to declare function arguments as non-strict (passed by-name)?
If this is not possible directly: are there any helper functions or decorators that help me achieve something similar?
Concrete example
Here is a little toy example to experiment with.
Suppose that I want to build a tiny parser-combinator library that can cope with the following classic grammar for arithmetic expressions with parentheses (numbers replaced by a single literal value 1 for simplicity):
num    = "1"
factor = num
       | "(" + expr + ")"
term   = factor + "*" + term
       | factor
expr   = term + "+" + expr
       | term
Suppose that I define a parser combinator as an object that has a method parse that takes a list of tokens and a current position, and either throws a parse error or returns a result and a new position. I can nicely define a ParserCombinator base class that provides + (concatenation) and | (alternative). Then I can define parser combinators that accept constant strings, and implement + and |:
# Two kinds of errors that can be thrown by a parser combinator
class UnexpectedEndOfInput(Exception): pass
class ParseError(Exception): pass

# Base class that provides methods for `+` and `|` syntax
class ParserCombinator:
    def __add__(self, other):
        return AddCombinator(self, other)
    def __or__(self, other):
        return OrCombinator(self, other)

# Literally taken string constants
class Lit(ParserCombinator):
    def __init__(self, string):
        self.string = string
    def parse(self, tokens, pos):
        if pos < len(tokens):
            t = tokens[pos]
            if t == self.string:
                return t, (pos + 1)
            else:
                raise ParseError
        else:
            raise UnexpectedEndOfInput

def lit(s):
    return Lit(s)
# Concatenation
class AddCombinator(ParserCombinator):
    def __init__(self, first, second):
        self.first = first
        self.second = second
    def parse(self, tokens, pos):
        x, p1 = self.first.parse(tokens, pos)
        y, p2 = self.second.parse(tokens, p1)
        return (x, y), p2

# Alternative
class OrCombinator(ParserCombinator):
    def __init__(self, first, second):
        self.first = first
        self.second = second
    def parse(self, tokens, pos):
        try:
            return self.first.parse(tokens, pos)
        except (ParseError, UnexpectedEndOfInput):
            return self.second.parse(tokens, pos)
So far, everything is fine. However, because the non-terminal symbols of the grammar are defined in a mutually recursive fashion, and I cannot eagerly unfold the tree of all possible parser combinations, I have to work with factories of parser combinators, and wrap them into something like this:
# Wrapper that prevents immediate stack overflow
class LazyParserCombinator(ParserCombinator):
    def __init__(self, parserFactory):
        self.parserFactory = parserFactory
    def parse(self, tokens, pos):
        return self.parserFactory().parse(tokens, pos)

def p(parserFactory):
    return LazyParserCombinator(parserFactory)
This indeed allows me to write down the grammar in a way that is very close to the EBNF:
num = p(lambda: lit("1"))
factor = p(lambda: num | (lit("(") + expr + lit(")")))
term = p(lambda: (factor + lit("*") + term) | factor)
expr = p(lambda: (term + lit("+") + expr) | term)
And it actually works:
tokens = [str(x) for x in "1+(1+1)*(1+1+1)+1*(1+1)"]
print(expr.parse(tokens, 0))
However, the p(lambda: ...) in every line is a bit annoying. Is there some idiomatic way to get rid of it? It would be nice if one could somehow pass the whole RHS of a rule "by-name", without triggering the eager evaluation of the infinite mutual recursion.
What I've tried
I've checked what's available in the core language: it seems that only conditional expressions, and, and or can "short-circuit"; please correct me if I'm wrong.
I've tried looking at how other non-toy-example libraries do this. For example, funcparserlib uses explicit forward declarations to avoid mutual recursion (look at the forward_decl and value.define parts in the README.md example code on GitHub). The parsec.py library uses special @generate decorators and seems to do something like monadic parsing using coroutines. That's all very nice, but my goal is to understand what options I have with regard to the basic evaluation strategies available in Python.
I've also found something like lazy_object_proxy.Proxy, but it didn't seem to help to instantiate such objects in a more concise way.
So, is there a nicer way to pass arguments by-name and avoid the blowup of mutually recursively defined values?
It's a nice idea, but it's not something that Python's syntax allows: Python expressions are always evaluated strictly (with the exception of conditional expressions and the short-circuiting and and or operators).
In particular, the problem is that in an expression like:
num = p(lit("1"))
The argument to p is always fully evaluated before the call. The object resulting from evaluating lit("1") is not bound to any name (until the formal parameter of p creates one), so there is no name there to defer. Conversely, there must be an object there, or otherwise p wouldn't be able to receive a value at all.
What you could do is add a new object to use instead of a lambda to defer evaluation of a name. For example, something like:
class DeferredNamespace(object):
    def __init__(self, namespace):
        self.__namespace = namespace

    def __getattr__(self, name):
        return DeferredLookup(self.__namespace, name)

class DeferredLookup(object):
    def __init__(self, namespace, name):
        self.__namespace = namespace
        self.__name = name

    def __getattr__(self, name):
        # the namespace is a dict (e.g. locals()), so the deferred
        # name is resolved with a key lookup, not getattr
        return getattr(self.__namespace[self.__name], name)

d = DeferredNamespace(locals())
num = p(d.lit("1"))
In this case, d.lit doesn't actually return lit: it returns a DeferredLookup object that resolves lit in the captured namespace only when its members are actually used. Note that this captures locals() eagerly, which you might not want; you can adapt it to use a lambda, or better yet just create all your entities in some other namespace anyway.
You still get the wart of the d. prefix in the syntax, which may or may not be a deal-breaker, depending on your goals with this API.
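For what it's worth, here is a minimal self-contained sketch of the deferred-lookup idea, using a plain dict as the namespace and deferring the call itself rather than attribute access (all names here are illustrative, not part of the answer's API):

```python
class DeferredNamespace(object):
    def __init__(self, namespace):
        self._namespace = namespace

    def __getattr__(self, name):
        return DeferredLookup(self._namespace, name)

class DeferredLookup(object):
    def __init__(self, namespace, name):
        self._namespace = namespace
        self._name = name

    def __call__(self, *args, **kwargs):
        # The name is resolved only at call time, so it may be
        # defined after this lookup object was created.
        return self._namespace[self._name](*args, **kwargs)

ns = {}
d = DeferredNamespace(ns)
greet = d.greet                       # 'greet' does not exist yet
ns['greet'] = lambda x: 'hello ' + x  # defined later
print(greet('world'))                 # hello world
```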
Special solution for functions that must accept exactly one by-name argument
If you want to define a function f that must take one single argument by-name, consider making f a @decorator. Instead of arguments littered with lambdas, the decorator can then directly receive the function definition.
The lambdas in the question appear because we need a way to make the execution of the right hand sides lazy. However, if we change the definitions of non-terminal symbols to be defs rather than local variables, the RHS is also not executed immediately. Then what we have to do is to convert these defs into ParserCombinators somehow. For this, we can use decorators.
We can define a decorator that wraps a function into a LazyParserCombinator as follows:
def rule(f):
    return LazyParserCombinator(f)
and then apply it to the functions that hold the definitions of each grammar rule:
@rule
def num(): return lit("1")

@rule
def factor(): return num | (lit("(") + expr + lit(")"))

@rule
def term(): return (factor + lit("*") + term) | factor

@rule
def expr(): return (term + lit("+") + expr) | term
The syntactic overhead within the right hand sides of the rules is minimal (no overhead for referencing other rules, no p(...)-wrappers or ruleName()-parentheses needed), and there is no counter-intuitive boilerplate with lambdas.
Explanation:
Given a higher-order function h, we can use it to decorate another function f as follows:
@h
def f():
    <body>
What this does is essentially:
def f():
    <body>
f = h(f)
and h is not constrained to return functions; it can also return other objects, like the ParserCombinators above.
I would like to define my own str.format() specification, e.g.
def earmuffs(x):
    return "*" + str(x) + "*"
to be used, e.g., like this:
def triple2str(triple, fmt="g"):
    return "[{first:{fmt}} & {second:+{fmt}} | {third}]".format(
        first=triple[0], second=triple[1], third=triple[2], fmt=fmt)
so that:
## this works:
>>> triple2str((1,-2,3))
'[1 & -2 | 3]'
>>> triple2str((10000,200000,"z"),fmt=",d")
'[10,000 & +200,000 | z]'
## this does NOT work (I get `ValueError: Invalid conversion specification`)
>>> triple2str(("a","b","z"),fmt=earmuffs)
'[*a* & *b* | z]'
The best I could come up with so far is
def triple2str(triple, fmt=str):
    return "[{first} & {second} | {third}]".format(
        first=fmt(triple[0]), second=fmt(triple[1]), third=triple[2])
which works like this:
>>> triple2str((1,-2,3))
'[1 & -2 | 3]'
>>> triple2str((10000,200000,"z"),fmt="{:,d}".format)
'[10,000 & 200,000 | z]' # no `+` before `2`!
>>> triple2str((10000,200000,"z"),fmt=earmuffs)
'[*10000* & *200000* | z]'
Is this really the best I can do?
What I am unhappy about is that it is unclear how to incorporate the modifiers (e.g., the + above).
Is str.format extensible?
str.format itself is not extensible. However, there are two ways around this:
1.
Use a custom string formatter: https://docs.python.org/2/library/string.html#custom-string-formatting
Override the format_field(obj, format_spec) method to catch a callable format_spec, then call that formatter directly.
This code snippet may help (it works with Python 3.5 & 2.7 at least):
import string

class MyFormatter(string.Formatter):
    def __init__(self, *args, **kwargs):
        super(MyFormatter, self).__init__(*args, **kwargs)
        self.fns = {}

    def format_field(self, value, format_spec):
        # print(repr(value), repr(format_spec))
        # intercept the {fmt} part when it is a function:
        if callable(value):
            result = super(MyFormatter, self).format_field(value, format_spec)
            self.fns[result] = value
            return result
        # intercept {var:{fmt}} with a remembered {fmt}:
        if format_spec in self.fns:
            return self.fns[format_spec](value)
        else:
            return super(MyFormatter, self).format_field(value, format_spec)

def fn(val):
    return '*{}*'.format(val)

f = MyFormatter()
print(f.format("{var:{fmt}}", var=1000, fmt='g'))
print(f.format("{var:{fmt}}", var=1000, fmt=fn))
2.
Define a per-object format method __format__(self, format_spec), where format_spec is whatever goes after the : in e.g. {var:g}. You can format the self-presentation of the object as you wish.
However, in your case the objects are ints/strs, not custom objects, so this method will not help much either.
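For illustration only, this is roughly what such a __format__ hook looks like on a custom object (the Angle class and the "arrows" spec are invented for this sketch):

```python
class Angle:
    def __init__(self, degrees):
        self.degrees = degrees

    def __format__(self, format_spec):
        # Handle a made-up spec; delegate everything else to the
        # ordinary numeric formatting machinery.
        if format_spec == "arrows":
            return "->{}<-".format(self.degrees)
        return format(self.degrees, format_spec)

print("{:arrows}".format(Angle(90)))  # ->90<-
print("{:.1f}".format(Angle(90)))     # 90.0
```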
As the conclusion:
Yes, your solution in the question is sufficient and probably the simplest one.
In Python 3 you can call functions in f-strings; perhaps this can help.
def earmuffs(val):
    return "*{}*".format(val)

form = lambda a: f"method {earmuffs(a[0])} and method {earmuffs(a[1])}"
b = ('one', 'two')
form(b)
>>>'method *one* and method *two*'
This question is an extension of a previous question (Python: defining my own operators?). I really liked the solution provided there for Infix operators, but for my expressions, I need a way to define custom unary operators in a similar fashion.
Do you have any recommendations? Reusing the existing Python operators doesn't help, as I've used them all up. Thank you very much for your help!
The main reason for doing this overloading is the absence of the following unary operators in Python:
(Is > 0)
(Is >= 0)
I need operators to distinguish between these two types of operations and my requirement is to match my interface as closely as possible with the interface provided by a pre-defined language which comes with its own set of operators. I could have chosen to replace the operators with > 0 and >= 0 but this did not go down very well with the user community. Is there a better way to do this?
Well you can use the same hack:
#! /usr/bin/python3.2

class Postfix:
    def __init__(self, f):
        self.f = f
    def __ror__(self, other):
        return self.f(other)

x = Postfix(lambda x: x * 2)

a = 'Hello'
print(a |x)
a = 23
print(a |x |x)
Nevertheless, I wouldn't advocate its use, as it is only confusing.
EDIT: Especially as your operators are unary, you can simply call a function, and anyone reading your code would understand immediately what it does.
def choose(t):
    pass  # magic happens here and returns nCr(t[0], t[1])

nCr = Postfix(choose)
# This is unintuitive:
print((3, 4) |nCr)

nCr = choose
# But this is obvious:
print(nCr((3, 4)))
Edit2: Dear people who are religious about PEP-8: This "operator"-hack is all about not complying with PEP-8, so please stop editing the answer. The idea is that |op is read like one entity, basically a postfix operator.
Edit 3: Thinking hard about a case where this hack could come in handy, maybe the following could be a halfway sensible use (if, and only if, this feature is well documented in the API):
#! /usr/bin/python3.2

class Language:
    def __init__(self, d):
        self.d = d
    def __ror__(self, string):
        try:
            return self.d[string]
        except KeyError:
            return string

enUS = Language({})
esMX = Language({'yes': 'sí', 'cancel': 'cancelar'})
deDE = Language({'yes': 'ja', 'no': 'nein', 'cancel': 'abbrechen'})

print('yes' |enUS)
print('no' |deDE)
print('cancel' |esMX)
This question already has answers here:
How do I pass a variable by reference?
(39 answers)
Closed 3 years ago.
I noticed that my code has many statements like this:
var = "some_string"
var = some_func(var)
var = another_func(var)
print(var) # outputs "modified_string"
It really annoys me; it just looks awful (quite the opposite of Python as a whole).
How can I avoid writing it that way and instead do something like this:
var = "some_string"
modify(var, some_func)
modify(var, another_func)
print(var) # outputs "modified_string"
That might not be the most "pythonic" thing to do, but you could "wrap" your string in a list, since lists are mutable in Python.
For example:
var = "string"
var_wrapper = [var]
Now you can pass that list to functions and access its only element. When changed, it will be visible outside of the function:
def change_str(lst):
    lst[0] = lst[0] + " changed!"
and you'll get:
>>> change_str(var_wrapper)
>>> var_wrapper[0]
"string changed!"
To make things a bit more readable, you could take it one step further and create a "wrapper" class:
class my_str:
    def __init__(self, my_string):
        self._str = my_string

    def change_str(self, new_str):
        self._str = new_str

    def __repr__(self):
        return self._str
Now let's run the same example:
>>> var = my_str("string")
>>> var
string
>>> var.change_str("new string!")
>>> var
new string!
* Thanks to @Error-SyntacticalRemorse for the remark about making a class.
The problem is that str, int and float (long too, if you're in Py 2.x; True and False are really ints, so them too) are what you call 'immutable types' in Python. That means you can't modify their internal state: all manipulations of a str (or int or float) result in a "new" instance of the str (or whatever), while the old value remains in Python's cache until the next garbage-collection cycle.
Basically, there's nothing you can do. Sorry.
In fact, there's been at least one attempt to add a compose function to functools. I guess I understand why they didn't... But hey, that doesn't mean we can't make one ourselves:
def compose(f1, f2):
    def composition(*args, **kwargs):
        return f1(f2(*args, **kwargs))
    return composition

def compose_many(*funcs):
    if len(funcs) == 1:
        return funcs[0]
    if len(funcs) == 2:
        return compose(funcs[0], funcs[1])
    else:
        return compose(funcs[0], compose_many(*funcs[1:]))
Tested:
>>> def append_foo(s):
... return s + ' foo'
...
>>> def append_bar(s):
... return s + ' bar'
...
>>> append_bar(append_foo('my'))
'my foo bar'
>>> compose(append_bar, append_foo)('my')
'my foo bar'
>>> def append_baz(s):
... return s + ' baz'
...
>>> compose_many(append_baz, append_bar, append_foo)('my')
'my foo bar baz'
Come to think of it, this probably isn't the best solution to your problem. But it was fun to write.
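For completeness, the recursive compose_many above can also be written iteratively with functools.reduce; this sketch behaves the same for one or more functions:

```python
from functools import reduce

def compose_many(*funcs):
    # Fold left to right: the first function in the list is
    # applied last, matching the recursive version above.
    return reduce(lambda f, g: lambda *a, **kw: f(g(*a, **kw)), funcs)

add_foo = lambda s: s + ' foo'
add_bar = lambda s: s + ' bar'
print(compose_many(add_bar, add_foo)('my'))  # my foo bar
```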
The others already explained why that's not possible, but you could:
for modify in some_func, other_func, yet_another_func:
    var = modify(var)
or as pst said:
var = yet_another_func(other_func(some_func(var)))
There is a way to modify an immutable variable by rebinding it in the calling symbol table; however, I think it's not very nice and should be avoided as much as possible.
def updatevar(obj, value, callingLocals=locals()):
    callingLocals[next(k for k, o in callingLocals.items() if o is obj)] = value
Another way, even less pythonic, is to use exec with a formatted instruction. It gets the variable name as a string thanks to this solution:
def getname(obj, callingLocals=locals()):
    """
    A quick function to get the name of a variable as a string.
    If not for the default-valued callingLocals, the function would
    always see the name as "obj", which is not what I want.
    """
    return next(k for k, v in callingLocals.items() if v is obj)

def updatevar2(k, v, callingLocals=locals()):
    n = getname(k, callingLocals)
    exec('global {};{}={}'.format(n, n, repr(v)))
The result is as expected:
var = "some_string"
updatevar(var, "modified_string")
print(var) # outputs "modified_string"
updatevar2(var, var + '2')
print(var) # outputs "modified_string2"
Strings are immutable in Python, so your second example can't work. In the first example you are binding the name var to a completely new object on each line.
Typically multiple assignments to a single name like that are a code smell. Perhaps if you posted a larger sample of code someone here could show you a better way?
I'm just gonna put this right here (since none of the answers seem to have addressed it yet)
If you're commonly repeating the same sequences of functions, consider wrapping them in a higher level function:
def metafunc(var):
    var = somefunc(var)
    var = otherfunc(var)
    var = thirdfunc(var)
    return lastfunc(var)
Then when you call the function metafunc you know exactly what's happening to your var: nothing. All you get out of the function call is whatever metafunc returns.
Additionally you can be certain that nothing is happening in parts of your program that you forgot about. This is really important especially in scripting languages where there's usually a lot going on behind the scenes that you don't know about/remember.
There are benefits and drawbacks to this; the theoretical discussion falls under the category of pure functional programming. Some real-world interactions (such as I/O operations) require non-pure functions, because they need real-world effects beyond the scope of your code's execution.
The principle behind this is defined briefly here:
http://en.wikipedia.org/wiki/Functional_programming#Pure_functions
This question already has answers here:
How can I represent an 'Enum' in Python?
(43 answers)
Closed 4 years ago.
I've been using a small class to emulate Enums in some Python projects. Is there a better way or does this make the most sense for some situations?
Class code here:
class Enum(object):
    '''Simple Enum Class
    Example Usage:
    >>> codes = Enum('FOO BAR BAZ') # codes.BAZ will be 2 and so on ...'''
    def __init__(self, names):
        for number, name in enumerate(names.split()):
            setattr(self, name, number)
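A quick sanity check of how the class behaves (repeating the definition so the snippet runs on its own):

```python
class Enum(object):
    '''Simple Enum class: assigns consecutive integers to names.'''
    def __init__(self, names):
        for number, name in enumerate(names.split()):
            setattr(self, name, number)

codes = Enum('FOO BAR BAZ')
print(codes.FOO, codes.BAR, codes.BAZ)  # 0 1 2
```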
Enums have been proposed for inclusion into the language before, but were rejected (see http://www.python.org/dev/peps/pep-0354/), though there are existing packages you could use instead of writing your own implementation:
enum: http://pypi.python.org/pypi/enum
SymbolType (not quite the same as enums, but still useful): http://pypi.python.org/pypi/SymbolType
Or just do a search
The most common enum case is enumerated values that are part of a State or Strategy design pattern. The enums are specific states or specific optional strategies to be used. In this case, they're almost always part and parcel of some class definition:
class DoTheNeedful(object):
    ONE_CHOICE = 1
    ANOTHER_CHOICE = 2
    YET_ANOTHER = 99

    def __init__(self, aSelection):
        assert aSelection in (self.ONE_CHOICE, self.ANOTHER_CHOICE, self.YET_ANOTHER)
        self.selection = aSelection
Then, in a client of this class.
dtn = DoTheNeedful(DoTheNeedful.ONE_CHOICE)
There's a lot of good discussion here.
What I see more often is this, in top-level module context:
FOO_BAR = 'FOO_BAR'
FOO_BAZ = 'FOO_BAZ'
FOO_QUX = 'FOO_QUX'
...and later...
if something is FOO_BAR: pass # do something here
elif something is FOO_BAZ: pass # do something else
elif something is FOO_QUX: pass # do something else
else: raise Exception('Invalid value for something')
Note that the use of is rather than == is taking a risk here -- it assumes that folks are using your_module.FOO_BAR rather than the string 'FOO_BAR' (which will normally be interned such that is will match, but that certainly can't be counted on), and so may not be appropriate depending on context.
One advantage of doing it this way is that by looking anywhere a reference to that string is being stored, it's immediately obvious where it came from; FOO_BAZ is much less ambiguous than 2.
Besides that, the other thing that offends my Pythonic sensibilities re the class you propose is the use of split(). Why not just pass in a tuple, list or other iterable to start with?
The builtin way to do enums is:
(FOO, BAR, BAZ) = range(3)
which works fine for small sets, but has some drawbacks:
you need to count the number of elements by hand
you can't skip values
if you add one name, you also need to update the range number
For a complete enum implementation in python, see:
http://code.activestate.com/recipes/67107/
I started with something that looks a lot like S.Lott's answer, but I only overloaded __str__ and __eq__ (instead of the whole object class) so I could print and compare the enum's value.
class enumSeason():
    Spring = 0
    Summer = 1
    Fall = 2
    Winter = 3

    def __init__(self, Type):
        self.value = Type

    def __str__(self):
        if self.value == enumSeason.Spring:
            return 'Spring'
        if self.value == enumSeason.Summer:
            return 'Summer'
        if self.value == enumSeason.Fall:
            return 'Fall'
        if self.value == enumSeason.Winter:
            return 'Winter'

    def __eq__(self, y):
        return self.value == y.value
print(x) will yield the name instead of the value, and two values holding Spring will be equal to one another.
>>> x = enumSeason(enumSeason.Spring)
>>> print(x)
Spring
>>> y = enumSeason(enumSeason.Spring)
>>> x == y
True