I am making a program to do some calculations for my Microeconomics class. Since there are different ways of working depending on the problem I am given, I have created a class. The class parses a utility function and a 'mode' from the command line and calls one function or another depending on the mode.
Since every function uses the same variables, I initialize them in __init__():
self.x = x = Symbol('x')  # Variables are initialized
self.y = y = Symbol('y')
self.Px, self.Py, self.m = Px, Py, m = Symbol('Px'), Symbol('Py'), Symbol('m')
I need a local definition to successfully process the function. Once the function has been created with sympify() I save it as an instance variable:
self.function = sympify(args.U)
Now I need to pass the variables x, y, Px, Py, m to the different functions. This is where I have the problem. Since I want a local definition, I could simply write x = self.x (and likewise for the other variables), but I would need to repeat this in every piece of code, which isn't really sustainable. Another option is to pass all the variables as arguments.
But since I'm using a dictionary to choose which function to call depending on the mode, this would mean I have to pass the same arguments to every function, whether I use them or not.
So I have decided to create a dictionary such as:
variables = {  # A dictionary of variables is initialized
    'x': self.x,
    'y': self.y,
    'Px': self.Px,
    'Py': self.Py,
    'm': self.m
}
This dictionary is initialized after I declare the variables as SymPy Symbols. What I would like is to pass this dictionary in unpacked form to every function. This way I would only need **kwargs as an argument and I could use the variables I want.
What I want is something like this:
a = 3
arggs = {'a': a}
def f(**kwargs): return a + 1
f(**arggs)
This returns 4. However, when I pass my dictionary as an argument I get an error saying that 'x' or 'y' is not defined. It can't be a scope issue because all the variables have been initialized for the whole instance.
Here is my code calling the function:
self.approaches[self.identification][0](**self.variables)
def default(self, **kwargs):
    solutions = dict()
    self.MRS = S(self.function.diff(x) / self.function.diff(y))  # This line provokes the exception
What's my error?
PS: Some information may be unclear. English is not my main language. Apologies in advance.
Unfortunately, Python doesn't quite work like that. When you use **kwargs, the only variable this assigns is the variable kwargs, which is a dictionary of the keyword arguments. In general, there's no easy way to inject names into a function's local namespace, because of the way locals namespaces work. There are ways to do it, but they are fairly hacky.
The easiest way to make the variables available without having to define them each time is to define them at the module level. Generally speaking, this is somewhat bad practice (it really does belong on the class), but since SymPy Symbols are immutable and defined entirely by their name (and assumptions if you set any), it's just fine to set
Px, Py, m = symbols("Px Py m")
at the module level (i.e., above your class definition), because even if some other function defines its own Symbol("Px"), SymPy will consider it equal to the Px you defined from before.
In general, you can play somewhat fast and loose with immutable objects in this way (and all SymPy objects are immutable) because it doesn't really matter if an immutable object gets replaced with a second, equal object. It would matter, if, say, you had a list (a mutable container) because it would make a big difference if it were defined on the module level vs. the class level vs. the instance level.
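For illustration, a minimal sketch of that suggestion (the class and method names here are placeholders, not your actual code):
from sympy import symbols, sympify

# Module-level symbols: safe to share because SymPy Symbols are immutable
# and compare equal by name.
x, y = symbols("x y")
Px, Py, m = symbols("Px Py m")

class Problem:
    def __init__(self, utility_string):
        # e.g. utility_string = "x**0.5 * y**0.5" coming from the command line
        self.function = sympify(utility_string)

    def default(self):
        # The module-level x and y are visible here without any passing around.
        self.MRS = self.function.diff(x) / self.function.diff(y)
        return self.MRS
Any Symbol('x') that sympify() creates inside the class compares equal to the module-level x, so the differentiation works as expected.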
Related
One of the things I find frustrating with python is that if I write a function like this:
def UnintentionalValueChangeOfGlobal(a):
    SomeDict['SomeKey'] = 100 + a
    b = 0.5 * SomeDict['SomeKey']
    return b
And then run it like so:
SomeDict = {}
SomeDict['SomeKey'] = 0
b = UnintentionalValueChangeOfGlobal(10)
print(SomeDict['SomeKey'])
Python will: 1) find and use SomeDict during the function call even though I have forgotten to provide it as an input to the function; 2) permanently change the value of SomeDict['SomeKey'] even though it is not included in the return statement of the function.
For me this often leads to variables unintentionally changing values - SomeDict['SomeKey'] in this case becomes 110 after the function is called when the intent was to only manipulate the function output b.
In this case I would have preferred that python: 1) crashes with an error inside the function saying that SomeDict is undefined; 2) under no circumstances permanently changes the value of any variable other than the output b after the function has been called.
I understand that it is not possible to disable the use of globals altogether in Python, but is there a simple method (a module or an IDE etc.) which can perform static analysis on my Python functions and warn me when a function is using and/or changing the value of variables which are not the function's output? I.e., warn me whenever variables are used or manipulated which are not local to the function?
One of the reasons Python doesn't provide any obvious and easy way to prevent accessing (undeclared) global names in a function is that in Python everything (well, everything that can be assigned to a name at least) is an object, including functions, classes and modules, so preventing a function to access undeclared global names would make for quite verbose code... And nested scopes (closures etc) don't help either.
And, of course, despite globals being evil, there ARE still legitimate reasons for mutating a global object sometimes. FWIW, even linters (well, pylint and pyflakes at least) don't seem to have any option to detect this AFAICT - but you'll have to double-check by yourself, as I might have overlooked it or it might exist as a pylint extension or in another linter.
OTOH, I have very seldom had bugs coming from such an issue in 20+ years (I can't remember a single occurrence actually). Routinely applying basic good practices - short functions avoiding side effects as much as possible, meaningful names and good naming conventions, unit testing at least the critical parts etc. - seems to be effective enough to prevent such issues.
One of the points here is that I have a rule that non-callable globals are to be considered (pseudo) constants, which is denoted by naming them ALL_UPPER. This makes it very obvious when you actually either mutate or rebind one...
As a more general rule: Python is by nature a very dynamic language (heck, you can even change the class of an object at runtime...) and with a "we're all consenting adults" philosophy, so it's indeed "lacking" most of the safety guards you'll find in more "B&D" languages like Java and relies instead on conventions, good practices and plain common sense.
Now, Python is not only very dynamic but also exposes much of its internals, so you can certainly (if this doesn't already exist) write a pylint extension that would at least detect global names in function code (hint: you can access the compiled code of a function object with yourfunc.func_code (py2) or yourfunc.__code__ (py3) and then inspect which names are used in the code). But unless you have to deal with a team of sloppy, undisciplined devs (in which case you have another issue - there's no technical solution to stupidity), my very humble opinion is that you're wasting your time.
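As a minimal sketch of that hint (not a full linter, and it will also pick up builtins and imported modules):
import dis

def global_names(func):
    # Collect the names that the function's bytecode loads or stores as globals.
    return sorted({ins.argval for ins in dis.get_instructions(func)
                   if ins.opname in ("LOAD_GLOBAL", "STORE_GLOBAL")})
For the example above, global_names(UnintentionalValueChangeOfGlobal) would report ['SomeDict'].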
Ideally I would have wanted the global-checking functionality I’m searching for to be implemented within an IDE and continuously used to assess the use of globals in functions. But since that does not appear to exist I threw together an ad hoc function which takes a python function as input and then looks at the bytecode instructions of the function to see if there are any LOAD_GLOBAL or STORE_GLOBAL instructions present. If it finds any, it tries to assess the type of the global and compare it to a list of user provided types (int, float, etc..). It then prints out the name of all global variables used by the function.
The solution is far from perfect and quite prone to false positives. For instance, if np.unique(x) is used in a function before numpy has been imported (import numpy as np) it will erroneously identify np as a global variable instead of a module. It will also not look into nested functions etc.
But for simple cases such as the example in this post it seems to work fine. I just used it to scan through all the functions in my codebase and it found another global usage that I was unaware of – so at least for me it is useful to have!
Here is the function:
def CheckAgainstGlobals(function, vartypes):
    """
    Function for checking if another function reads/writes data from/to global
    variables. Only variables of the types contained within 'vartypes' and
    unknown types are included in the output.

    Inputs:
        function - a python function
        vartypes - a list of variable types (int, float, dict, ...)

    Example:
        # Define a function
        def testfcn(a):
            a = 1 + b
            return a

        # Check if the function reads/writes global variables.
        CheckAgainstGlobals(testfcn, [int, float, dict, complex, str])

        # Should output:
        >> Global-check of fcn: testfcn
        >> Loaded global variable: b (of unknown type)
    """
    import dis

    globalsFound = []

    # Disassemble the function's bytecode in a human-readable form.
    bytecode = dis.Bytecode(function)

    # Step through each instruction in the function.
    for instr in bytecode:
        # Check whether the instruction loads or stores a global.
        if instr.opname in ('LOAD_GLOBAL', 'STORE_GLOBAL'):
            # Check whether it is possible to determine the type of the global.
            try:
                type(eval(instr.argval))
                TypeAvailable = True
            except Exception:
                TypeAvailable = False

            # Determine whether the global variable is being loaded or stored
            # and check if the value behind 'argval' matches any of the
            # vartypes provided as input.
            action = 'Loaded' if instr.opname == 'LOAD_GLOBAL' else 'Stored'
            if TypeAvailable:
                for t in vartypes:
                    if isinstance(eval(instr.argval), t):
                        s = ('%s global variable: %s (of type %s)'
                             % (action, instr.argval, t))
                        if s not in globalsFound:
                            globalsFound.append(s)
            else:
                s = ('%s global variable: %s (of unknown type)'
                     % (action, instr.argval))
                if s not in globalsFound:
                    globalsFound.append(s)

    # Print out a summary of detected global variable usage.
    if len(globalsFound) == 0:
        print('\nGlobal-check of fcn: %s. No read/writes of global variables were detected.'
              % (function.__code__.co_name))
    else:
        print('\nGlobal-check of fcn: %s' % (function.__code__.co_name))
        for s in globalsFound:
            print(s)
When used on the function in the example, directly after the function has been declared, it will warn about the usage of the global variable SomeDict but it will not be aware of its type:
def UnintentionalValueChangeOfGlobal(a):
    SomeDict['SomeKey'] = 100 + a
    b = 0.5 * SomeDict['SomeKey']
    return b
# Will find the global, but not know its type.
CheckAgainstGlobals(UnintentionalValueChangeOfGlobal,[int, float, dict, complex, str])
>> Global-check of fcn: UnintentionalValueChangeOfGlobal
>> Loaded global variable: SomeDict (of unknown type)
When used after SomeDict has been defined it also detects that the global is a dict:
SomeDict = {}
SomeDict['SomeKey'] = 0
b = UnintentionalValueChangeOfGlobal(10)
print(SomeDict['SomeKey'])
# Will find the global, and also see its type.
CheckAgainstGlobals(UnintentionalValueChangeOfGlobal,[int, float, dict, complex, str])
>> Global-check of fcn: UnintentionalValueChangeOfGlobal
>> Loaded global variable: SomeDict (of type <class 'dict'>)
Note: in its current state the function fails to detect that SomeDict['SomeKey'] changes value. I.e., it only detects the load instruction, not that the previous value of the global is manipulated. That is because the instruction STORE_SUBSCR seems to be used in this case instead of STORE_GLOBAL. But the use of the global is still detected (since it is being loaded) which is enough for me.
You can check the variable using globals():
def UnintentionalValueChangeOfGlobal(a):
    if 'SomeDict' in globals():
        raise Exception('Var in globals')
    SomeDict['SomeKey'] = 100 + a
    b = 0.5 * SomeDict['SomeKey']
    return b
SomeDict = {}
SomeDict['SomeKey'] = 0
b = UnintentionalValueChangeOfGlobal(10)
print(SomeDict['SomeKey'])
Let's assume we have an exposed function (Level 0). We call this function with various parameters. Internally, this function calls a second function (Level 1), which does not use any of the given parameters other than passing them on to a third function (Level 2) as arguments. It might do some other stuff, however.
My question is: how can we pass the arguments down without creating too much noise in the middle-layer function (Level 1)? I list some possible ways below. Be warned that some of them are rather ugly and only included for completeness. I'm looking for an established guideline rather than individual personal opinions on the topic.
# Transport all of them individually down the road.
# This is the most obvious way. However, the number of parameters increases
# the noise in A_1, since they are only passed along.
def A_0(msg, term_print):
    A_1(msg, term_print)

def A_1(msg, term_print):
    A_2(msg, term_print)

def A_2(msg, term_print):
    print(msg, end=term_print)

# Create a parameter object (in this case a dict) and pass it down.
# Reduces the number of parameters. However, when only reading the source of
# B_1 it is impossible to determine what par is.
def B_0(msg, term_print):
    B_1({'msg': msg, 'end': term_print})

def B_1(par):
    B_2(par)

def B_2(par):
    print(par['msg'], end=par['end'])

# Use global variables. We all know the pitfalls of global variables. However,
# in Python they are at least limited to their module.
def C_0(msg, term_print):
    global MSG, TERM_PRINT
    MSG = msg
    TERM_PRINT = term_print
    C_1()

def C_1():
    C_2()

def C_2():
    print(MSG, end=TERM_PRINT)

# Use the fact that Python creates function objects. We can attach attributes
# to those objects. This gives somewhat more 'localised' variables than the
# approaches shown before. However, it also makes the code harder to maintain:
# when we change D_2 we have to alter D_0 as well, even though D_0 never
# directly calls it.
def D_0(msg, term_print):
    D_2.msg = msg
    D_2.term_print = term_print
    D_1()

def D_1():
    D_2()

def D_2():
    print(D_2.msg, end=D_2.term_print)

# Create a class with the functions E_1, E_2 to enclose the variables.
class E(dict):
    def E_1(self):
        self.E_2()

    def E_2(self):
        print(self['msg'], end=self['end'])

def E_0(msg, term_print):
    E([('msg', msg), ('end', term_print)]).E_1()

# Create a nested scope. This makes the function very hard to read. Furthermore,
# F_1 cannot be called directly from outside (without abusing the construct).
def F_0(msg, term_print):
    def F_1():
        F_2()

    def F_2():
        print(msg, end=term_print)

    F_1()

A_0('What', ' ')
B_0('is', ' ')
C_0('the', ' ')
D_0('best', ' ')
E_0('way', '')
F_0('?', '\n')
It's hard to give a complete answer without knowing the full specifics of why there are so many parameters and so many levels of functions. But in general, passing too many parameters is considered a code smell.
Generally, if a group of functions all make use of the same parameters, it means they are closely related in some way, and may benefit from encapsulating the parameters within a Class, so that all the associated methods can share that data.
TooManyParameters is often a CodeSmell. If you have to pass that much
data together, it could indicate the data is related in some way and
wants to be encapsulated in its own class. Passing in a single
data structure that belongs apart doesn't solve the problem. Rather,
the idea is that things that belong together, keep together; things
that belong apart, keep apart; per the OneResponsibilityRule.
Indeed, you may find that entire functions are completely unnecessary if all they are doing is passing data along to some other function.
class A():
    def __init__(self, msg, term_print):
        self.msg = msg
        self.term_print = term_print

    def a_0(self):
        return self.a_1()

    def a_1(self):
        return self.a_2()

    def a_2(self):
        print(self.msg, end=self.term_print)
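Used, for instance, like this (a hypothetical call mirroring the question's print example):
A('What', ' ').a_0()   # prints 'What' followed by a space, no newline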
Depending on the meaning of your sets of parameters and of your function A0, using the *args notation may also be an option:
def A0(*args):
    A1(*args)
This allows any number of arguments to be passed to A0 and will pass them on to A1 unchanged. If the semantics of A0 is just that, then the * notation expresses the intention best. However, if you are going to pass on the arguments in a different order or do anything else with them besides just passing them on as an opaque sequence, this notation is not a good fit.
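A complete toy version of that idea, using the question's A_* naming (the bodies are just placeholders):
def A_2(msg, term_print):
    print(msg, end=term_print)

def A_1(*args):
    # Middle layer: forwards whatever it receives, untouched.
    A_2(*args)

def A_0(*args):
    A_1(*args)

A_0('What', '\n')   # prints 'What'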
The book "Code Complete 2" by Steve McConnell suggests to use globals, their words are:
Reasons to Use Global Data
Eliminating tramp data
Sometimes you pass data to a routine or class
merely so that it can be passed to another routine or class. For
example, you might have an error-processing object that's used in each
routine. When the routine in the middle of the call chain doesn't use
the object, the object is called "tramp data". Use of global variables can eliminate tramp data.
Use Global Data Only as a Last Resort
Before you resort to using global data
consider a few alternatives:
Begin by making each variable local and make variables global only as you need to
Make all variables local to individual routines initially. If you find
they're needed elsewhere, make them private or protected class
variables before you go so far as to make them global. If you finally
find that you have to make them global, do it, but only when you're
sure you have to. If you start by making a variable global, you'll
never make it local, whereas if you start by making it local, you
might never need to make it global.
Distinguish between global and class variables
Some variables are truly global in that they are accessed throughout
the whole program. Others are really class variables, used heavily
only within a certain set of routines. It's OK to access a class
variable any way you want to within the set of routines that use it
heavily. If routines outside the class need to use it, provide the
variable's value by means of an access routine. Don't access class
values directly - as if they were global variables - even if your
programming language allows you to. This advice is tantamount to
saying "Modularize! Modularize! Modularize"
Use access routines
Creating access routines is the workhorse approach to getting around
problems with global data...
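In Python, the "access routines" idea might look something like this sketch (the module and names are invented for illustration):
# settings.py - the only place that touches the underlying global
_error_handler = None

def get_error_handler():
    return _error_handler

def set_error_handler(handler):
    global _error_handler
    _error_handler = handler
Other modules then call settings.set_error_handler(...) and settings.get_error_handler() instead of reaching into the global directly.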
Link:
https://books.google.com/books/about/Code_Complete.html?hl=nl&id=LpVCAwAAQBAJ
I want to match my time-series data to metadata from a given file.
In my code, the main function calls a "create_match()" function every minute. Inside "create_match()", there is a "list_from_file()" function that reads data from a file and stores it in lists to perform the matching.
The problem is that my code is not efficient, since every minute it reads the file and rebuilds the same lists. I want to read the file only once (to initialize the lists only once), and after that skip the "list_from_file()" step. I do not want to just move this task to the main function and pass the lists through as function arguments.
Does Python have something like a static variable in C?
Python does not have a static variable declaration; however, there are several fairly standard programming patterns in python to do something similar. The easiest way is to just use a global variable.
Global Variables
Define a global variable and check it before running your initialize function. If the global variable has already been set (i.e. the lists you're reading in), just return them.
CACHE = None

def function():
    global CACHE
    if CACHE is None:
        CACHE = initialize_function()
    return CACHE
You can use a class:
class Match(object):
    def __init__(self):
        self.data = list_from_file()

    def create_match(self):
        # do something with `self.data` here
        pass
Make an instance:
match = Match()
This calls list_from_file().
Now, you can call create_match() repeatedly with access to self.data
import time

for x in range(10):
    match.create_match()
    time.sleep(60)
There are lots of ways.
You can make a variable part of a class - not a member of the object, but of the class itself. It is initialized when the class is defined.
Similarly you can put a variable at the outer level of a module. It will belong to the module, and will be initialized when the module is imported the first time.
Finally there's the hack of defining an object as a default parameter to a function. The variable will be initialized when the function is defined, and will belong to the function. You will only be able to access it with the parameter name, and it can be overridden by the caller.
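For illustration, minimal sketches of the class-level and default-parameter approaches, with a stand-in list_from_file() (your real function would read the metadata file):
def list_from_file():
    # stand-in for the real file-reading code
    return ['meta1', 'meta2']

# Class-level attribute: evaluated once, when the class body runs.
class Matcher:
    data = list_from_file()

    def create_match(self):
        # matching logic would use self.data here
        return self.data

# Default-parameter hack: the default is evaluated once, at definition time.
def create_match(data=list_from_file()):
    return data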
Let's say I have a code like this:
def read_from_file(filename):
    list = []
    for i in filename:
        value = i[0]
        list.append(value)
    return list

def other_function(other_filename):
    """
    That's where my question comes in. How can I get the list
    from the other function if I do not know the value "filename" will get?
    I would like to use the "list" in this function
    """
read_from_file("apples.txt")
other_function("pears.txt")
I'm aware that this code might not work or might not be perfect. But the only thing I need is the answer to my question in the code.
You have two general options. You can make your list a global variable that all functions can access (usually this is not the right way), or you can pass it to other_function (the right way). So
def other_function(other_filename, anylist):
    pass  # your code here

somelist = read_from_file("apples.txt")
other_function("pears.txt", somelist)
You need to "catch" the value return from the first function, and then pass that to the second function.
file_name = read_from_file('apples.txt')
other_function(file_name)
You need to store the returned value in a variable before you can pass it on to another function.
a = read_from_file("apples.txt")
There are at least three reasonable ways to achieve this and two which a beginner will probably never need:
Store the returned value of read_from_file and give it as a parameter to other_function (so adjust the signature to other_function(other_filename, whatever_list))
Make whatever_list a global variable.
Use an object and store whatever_list as a property of that object (see the sketch just after this list)
(Use nested functions)
(Search for the value via the garbage collector gc ;-))
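For the third option, a minimal sketch (the class name is invented; read_from_file is the function from the question):
class FileData:
    def __init__(self, filename):
        self.values = read_from_file(filename)

    def process(self, other_filename):
        # use self.values together with the other file here
        return self.values

data = FileData("apples.txt")
data.process("pears.txt")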
Nested functions
def foo():
    bla = "OK..."

    def bar():
        print(bla)

    bar()

foo()
Global variables
What are the rules for local and global variables in Python? (official docs)
Global and Local Variables
Very short example
Misc
You should not use list as a variable name, as you're shadowing the built-in list type.
You should use a descriptive name for your variables. What is the content of the list?
Using global variables can sometimes be avoided in a good way by creating objects. While I'm not always a fan of OOP, it sometimes is just what you need. Just have a look of one of the plenty tutorials (e.g. here), get familiar with it, figure out if it fits for your task. (And don't use it all the time just because you can. Python is not Java.)
Say I had a function in Python:
def createCube(no_of_cubes):
This function creates cubes and the number of cubes it creates is set by the parameter: no_of_cubes
If I then wanted to create another function:
def moveCubes():
and I wanted the parameter no_of_cubes to be used in this function as well, with the same integer that was passed to the first function. How would I do so? Or is it not possible?
A parameter is merely an argument to a function, much like you'd have in a maths function, e.g. f(x) = 2*x. Also like in maths, you can define infinitely many functions with the same arguments, e.g. g(x) = x^2.
The name of the parameter doesn't change anything, it's just how your function is gonna call that value. You could call it whatever you wanted, e.g. f(potato) = 2 * potato. However, there are a few broad naming conventions—in maths, you'd give preference to a lowercase roman letter for a real variable, for example. In programming, like you did, you want to give names that make sense—if it refers to the number of cubes, calling it no_of_cubes makes it easier to read your program than calling it oquhiaisnca, so kudos on that.
I'm not sure how that bit fits into your program. A few of the other answers suggested ways to do it. If it's just two loose functions (not part of a class), you can define a variable outside the functions to do what you want, like this:
1: n = 4  # number of cubes, bad variable name
2: def createCube(no_of_cubes):
3:     pass  # things
4: def moveCubes(no_of_cubes):
5:     pass  # different things
6: createCube(n)
7: moveCubes(n)
What happens here is that line 6 calls the function createCube and gives it n (which is 4) as a parameter. Then line 7 calls moveCubes giving it the same n as a parameter.
This is a very basic question, so I'm assuming you're new to programming. It might help a lot if you take some python tutorial. I recommend Codecademy, but there are several others you can choose from. Good luck!
It is possible, but each of the two definitions gets that parameter as its own. A parameter only exists within its definition's scope, so a parameter with the same name in a different definition does not interfere with it.
If you cannot, probably you shouldn't do it.
If they're two separate functions (not nested or so), they should not share parameters.
If they do have connection in some way, a better way is to define a class:
class Cube:
    def __init__(self, no_of_cubes):
        self.no_of_cubes = no_of_cubes

    def create_cube(self):
        # use self.no_of_cubes
        pass

    def move_cubes(self):
        # use self.no_of_cubes
        pass

c = Cube(no_of_cubes)
c.create_cube()
c.move_cubes()
Unless you know what you're doing, don't define global variables.
You can call the function moveCubes() from inside createCube(). For example:
def createCube(no_of_cubes):
    # Do stuff
    moveCubes(no_of_cubes)

def moveCubes(no_of_cubes):
    # Do more stuff
    pass
Or you could define no_of_cubes outside the functions so that it is accessible to both.
no_of_cubes = 5

def createCube():
    global no_of_cubes
    # Do stuff

def moveCubes():
    global no_of_cubes
    # Do stuff