I am attempting to use a module called interface.py, which defines a list of conditions and a few functions to check arguments against those conditions. There are many thousands of conditions, however, so I want to use a dictionary instead of a list to avoid scanning all of them on every check. To do this I'm using the following code:
def listToDictionary(list):
    """This function takes a list of conditions and converts it to a dictionary
    that uses the name of the condition as a key."""
    d = {}
    for condition in list:
        if condition.name.lower() not in d:
            d[condition.name.lower()] = []
        d[condition.name.lower()].append(condition)
    return d
conditionList = listToDictionary(conditions.list) #the condition list comes from another module
Further into the file are the actual interface functions that take arguments to compare with the list of conditions - these functions are written assuming that conditionList will be a dictionary.
Unfortunately this isn't working. Giving error details is difficult because this code is being imported by a Django page, and I am trying to avoid talking about Django so this question stays uncomplicated. Essentially, the pages that include this code will not load, and if I change it back to just using a list, everything works fine.
My suspicion is that the problem has to do with how Python treats import statements. I need the listToDictionary conversion to run as soon as interface.py is imported, otherwise the interface functions will expect a dictionary and get a list instead. Is there any way to ensure that this is happening?
An educated guess: the list in conditions.list is not yet fully constructed when your module is being imported. As a result, you get a dictionary that is missing some entries or even empty, which is causing problems later. Try deferring the construction of the dict, like this:
conditionTable = None  # shouldn't call it "list" if it's a dict

def get_cond_table():
    global conditionTable
    if conditionTable is None:
        conditionTable = listToDictionary(conditions.list)
    return conditionTable
Instead of referring to conditionList in your functions, refer to get_cond_table().
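For example, an interface function could then look up conditions by name in constant time. This is only a sketch: checkArgument and the condition objects' check method are hypothetical stand-ins for whatever your real interface functions do.
def checkArgument(name, value):
    # get_cond_table() builds the dictionary on first use and reuses it afterwards
    table = get_cond_table()
    for condition in table.get(name.lower(), []):
        condition.check(value)  # hypothetical per-condition test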
Alright, I found out that the problem was in another function that was still expecting the dictionary to be a list. The reason I couldn't see it right away is that Django left a very cryptic error message. I was able to get a better one using python manage.py shell and importing the module manually.
Thanks for your help everyone.
First of all, I am somewhat new to Python coding, so this might seem like a stupid question.
Problem: I am trying to create a script that allows me to insert a number into (variable? operator?) names, in order to run my Python script from a bash script.
Ideally I wanted to do the following (I know the syntax is wrong, but it is from my first try and captures what I would want it to do):
replica_number = 2  # 2 is only for testing; it will later be exchanged for a number imported from a bash script, over many different data sheets
t_r + replica_number = md.load(data_path + 'protein_' + str(replica_number) + '_centered.xtc',
                               top=data_path + 'protein_' + str(replica_number) + '_first.gro')[1:]
What I want this to do is automatically create a variable named t_r2 and load the files called protein_2_centered.xtc and protein_2_first.gro. However, when I do this I get: SyntaxError: can't assign to operator
Does anyone know how to get around this problem, or do I just have to make a separate script for every replica?
What you need is either a list or a dictionary.
You can keep all your results in a list (without keeping the replica_number):
t_r_list = []
t_r_list.append(md.load(...)[1:]) # Run this line for each t_r you want to load
or if you want to keep the replica_number, you can use a dict:
t_r_dict = {}
t_r_dict[replica_number] = md.load(...)[1:]
You might want to read a tutorial on these data structures and how to use them; it will greatly help you later on, since they are the foundation of working with data in Python.
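For instance, combining the dict approach with a loop over replica numbers could look roughly like this; md.load, data_path, and the file naming pattern are taken from the question, so treat the details as assumptions:
import mdtraj as md  # assumption: md is the MDTraj library, as in the question

data_path = './'     # assumption: wherever the data files live

t_r_dict = {}
for replica_number in (1, 2, 3):  # or a single number passed in from the bash script
    xtc = data_path + 'protein_' + str(replica_number) + '_centered.xtc'
    gro = data_path + 'protein_' + str(replica_number) + '_first.gro'
    t_r_dict[replica_number] = md.load(xtc, top=gro)[1:]

# later on, t_r_dict[2] is what you would have called t_r2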
When the name of an attribute or variable is dynamic, you can use, for example, Python's format strings (f'') to build the name and then the setattr function.
The setattr function is part of the builtins module; its stub looks like this:
def setattr(x, y, v):  # real signature unknown; restored from __doc__
    """Sets the named attribute on the given object to the specified value."""
Here is what you can do with variable names:
replica_number = 2
variable_name = f't_r{replica_number}'
and then check and set the attribute:
if not hasattr(YOUR_OBJECT, variable_name):
    raise ValueError
setattr(YOUR_OBJECT, variable_name, THE_VALUE)
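A self-contained sketch of the same idea, using a plain namespace object since the answer does not say what YOUR_OBJECT actually is:
from types import SimpleNamespace

results = SimpleNamespace()   # stands in for YOUR_OBJECT
replica_number = 2
variable_name = f't_r{replica_number}'

# setattr creates (or overwrites) the attribute with the dynamically built name
setattr(results, variable_name, 'the loaded trajectory would go here')

print(results.t_r2)                      # access it like any normal attribute
print(getattr(results, variable_name))   # or look it up by the same dynamic name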
Use a dictionary for this kind of operation:
trajectory = md.load(...)[1:]
your_dict = {replica_number: trajectory}
And access it through
your_dict[replica_number]
So I'm trying to write a Python program to integrate into a larger in-house developed Python application. The program I'm writing needs to generate an XML document and populate the fields with data stored in variables from another function in a different module.
After realizing I can't have both programs import each other (the main program needs to call xmlgen.py to generate the XML doc, while xmlgen.py needs to use variables in the main program to generate that doc), I'm a little bit at a loss as to what to do here.
In the example shown below, xmlgen.py needs to use variables from the function sendFax in Faxer.py. Faxer.py needs to call xmlgen.py to generate the document.
snippet from xmlgen.py:
from lxml import etree
from Faxer import coverPage, ourOrg, ourPhonenum, ourFaxnum, emailAddr, sendReceipt, webAddr, comments
from Faxer import sendFax

def generateXml():
    # xml file structure
    root = etree.Element('schedule_fax')
    ...
    # ~ a bunch of irrelevant xml stuff ~
    ...
    grandchild_recipient_name = etree.Element('name')
    grandchild_recipient_name.text = cliName
    child_recipient.append(grandchild_recipient_name)
Now, here is the piece of the main program whose "cliName" variable I need to use:
def sendFax(destOrg, destFax, cliName, casenum, attachments, errEAddr, comment, destName):
    creds = requests.auth.HTTPBasicAuth(user, password)
    allData = ''
    allData += '<schedule_fax>\n'
    allData += '<cover_page>\n'
    allData += '<url>' + prepXMLString(coverPage) + '</url>\n'
    allData += '<enabled>true</enabled>\n'
    allData += '<subject>' + prepXMLString(cliName) + ' - case # ' + str(casenum) + '</subject>\n'
Now when I try to import the sendFax function from Faxer.py, I'm unable to access any of the variables from the function; for example,
grandchild_recipient_name.text = sendFax.cliName
does not work. What am I doing wrong here? I'm not a Python guru and am in fact quite new to all of this, so I'm hoping it's something simple. Should I just dump everything into a new function in the main program?
As pointed out above, you are trying to reference cliName as if it were an attribute of the function. This would be closer to correct if sendFax were a class, but that's another subject. The snippet you have provided is simply a function definition. It doesn't guarantee that this function is ever actually used or give you any idea what cliName actually is; cliName is just the name used by the function internally to describe the third value supplied as input.
What you need to do is find where sendFax is actually used, rather than where it is defined, then look at what the variables passed into it are called. There are two ways to pass variables into a function: by position and by name. If the variables are being passed by position you will find something like:
sendFax(some_name, some_other_name, yet_another_name, ...
The third one of these will be the variable which becomes cliName inside the function.
If they are being passed by name, you will see something like:
sendFax(cliName=yet_another_name, ...
where once again yet_another_name is the thing that becomes cliName.
Depending on how the program is structured, you may be able to refer to yet_another_name from your program and get the value you need:
from Faxer import yet_another_name
But this will only work if Faxer runs and finishes with the one and only value of yet_another_name assigned. If Faxer iterates through lots of values of yet_another_name, or simply doesn't run sensibly when called as an import, you'll need a more sophisticated approach.
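One common way to break the circular import, sketched here with made-up element names and a trimmed-down signature, is to stop importing Faxer from xmlgen.py and instead have Faxer.py pass the values it already has as arguments when it calls the generator:
# xmlgen.py -- no "from Faxer import ..." needed any more
from lxml import etree

def generateXml(cliName, casenum):
    root = etree.Element('schedule_fax')
    subject = etree.SubElement(root, 'subject')
    subject.text = '{} - case # {}'.format(cliName, casenum)
    recipient_name = etree.SubElement(root, 'name')
    recipient_name.text = cliName
    return etree.tostring(root, pretty_print=True)

# Faxer.py -- calls the generator with the values it already has inside sendFax
# import xmlgen
# def sendFax(destOrg, destFax, cliName, casenum, ...):
#     xml_doc = xmlgen.generateXml(cliName, casenum)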
I am looking for a way in Python to skip certain parts of the code inside a function, but only when the output of the function is assigned to a variable. If the function is run without any assignment, then it should run everything inside it.
Something like this:
def function():
    print('a')
    return 'a'

function()
A = function()
The first time I call function() it should display a on the screen, while the second time nothing should be printed and the returned value should only be stored in A.
I have not tried anything since I am kind of new to Python, but I was imagining it would be something like the if __name__=='__main__': way of checking if a script is being used as a module or run directly.
I don't think such a behavior can be achieved in Python, because within the scope of the function call there is no indication of what you will do with the returned value.
You will have to give the function an argument that tells it what to skip, with a default value to ease the call:
def call_and_skip(skip_instructions=False):
    if not skip_instructions:
        call_stuff_or_not()
    call_everytime()

call_and_skip()
# will not skip the inside instructions

a_variable = call_and_skip(skip_instructions=True)
# will skip the inside instructions
As already mentioned in the comments, what you're asking for is not technically possible: a function has no knowledge (and cannot have any) of what the calling code will do with the return value.
For a simple case like your example snippet, the obvious solution is to just remove the print call from within the function and leave printing to the caller, i.e.:
def fun():
    return 'a'

print(fun())
Now I assume your real code is a bit more complex than this, so such a simple solution would not work. If that's the case, the solution is to split the original function into several distinct ones and let the caller choose which parts it wants to call. If you have complex state (local variables) that needs to be shared between the different parts, you can wrap the whole thing in a class, turning the sub-functions into methods and storing those variables as instance attributes.
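A minimal sketch of that class-based split, with made-up method names since the real code isn't shown:
class Worker:
    def __init__(self):
        self.value = 'a'   # shared state lives on the instance

    def compute(self):
        # the part every caller wants
        return self.value

    def report(self):
        # the part only some callers want (the old print call)
        print(self.value)

w = Worker()
w.compute()        # just computes, nothing is printed
w.report()         # explicitly asks for the printing behaviour
A = w.compute()    # stores the value without any side effect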
I am currently trying to create a program that learns commands through user input; however, what gets stored ends up as a string.
Here's the code. I use the shelve module to store the commands.
ok = {str(name): func}
asd.update(ok)
print(asd)
data["cmd"] = asd
data.close()
The 'asd' dict contains every command that has been extracted from the shelf. I want to update it and store it back, so that it is up to date the next time a command is called.
'func' is the variable that stores the name of the function I am trying to call, but string objects cannot be called.
How do I solve this?
EDIT:
This has been solved (I totally forgot about eval() )
Not sure what you're trying to achieve here, but from what I've understood you should have a look at eval().
The eval() function evaluates the specified expression: if the expression is legal Python, it is executed and its result returned.
More information here
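A minimal sketch of that idea, with a made-up command function since the original commands aren't shown (note that eval() will run whatever string it is given, so it is only safe with trusted input; looking the name up with globals()[name] is a stricter alternative):
def greet():
    print('hello')

commands = {'hi': 'greet'}   # the shelf maps a command word to the function's name (a string)

func_name = commands['hi']
func = eval(func_name)   # turns the stored name back into the function object
func()                   # prints: hello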
Question
We are getting a strange error** and we suspect it is because our script.py*** assigned to a variable name that already has a built-in meaning, e.g.:
str = 2
Is there a way we can check if this has happened?
So far
We're thinking it would involve the following (a sketch of how the two hypothetical functions might be realized follows the steps):
Assign a list at the beginning of the script, containing all built-in objects' names as strings:
builtin_names = get_builtin_var_names() # hypothetical function
Assign a list at the end of the script, containing all user-assigned objects' names as strings:
user_names = get_user_var_names() # hypothetical function
Find the intersection, and check if not empty:
overwritten_names = list(set(user_names) & set(builtin_names))
if overwritten_names:
print("Whoops")
Related
Is there a way to tell if a function in JavaScript is getting overwritten?
Is there a common way to check in Python if an object is any function type?
**Silent error: for those interested, it is silent in the sense that the script finishes without an error code, but the value it spits out differs between two implementations of the same code, call them A and B. Both versions require running two modules (separate files) that we've made (changes.py and dnds.py), but whereas:
Version A: involves running changes.py -> pickle intermediate data (into a .p file) -> dnds.py,
Version B: involves running changes.py -> return the data (a dict) as arguments to dnds.py -> dnds.py.
And for some reason only version A produces the correct final value (benchmarked against MATLAB's dnds function).
***script.py is actually dnds.py (which has imported changes.py). You can find all the code, but to test the two alternative versions mentioned in ** you need to look specifically at dnds.py, at the line found with CTRL+F: "##TODO:Urgent:debug:2016-11-28:". Once you find that line, the rest of that comment gives instructions for replicating version B and its resulting silent error**. For some reason I HAVE to pickle the data to get it to work; when I just return the dicts directly I get the wrong dN/dS values.
You can get the names (and values) of builtins via the dict __builtins__. You can get the names (and values) of global variables with globals() and of locals with locals(). So you could do something like:
import builtins

name, val = None, None
for name, val in locals().items():
    if hasattr(builtins, name) and getattr(builtins, name) != val:
        print("{} was overwritten!".format(name))
and then the same for globals(). This will check whether there is any object in the local namespace that has a different value in the builtins namespace. (Setting name and val to None is needed so that the variables exist before calling locals, or else you'll get a "dictionary changed size during iteration" error, because the names are added partway through the loop.)
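As a sketch, the same check run over the module's global names might look like this (skipping dunder entries such as __name__, which would otherwise show up as false positives):
import builtins

# list(...) snapshots the items so the loop's own bindings can't change the dict mid-iteration
for name, val in list(globals().items()):
    if name.startswith('__'):
        continue
    if hasattr(builtins, name) and getattr(builtins, name) != val:
        print("{} was overwritten!".format(name))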
You could also use a tool like pylint, which checks for this kind of error (redefining a built-in) among many others.