ArcMap Data Driven Pages Dynamic Feature Labels - python

I am trying to work out a way to change between two sets of labels on a map. I have a map with zip codes that are labeled and I want to be able to output two maps: one with the zip code label (ZIP) and one with a value from a field I have joined to the data (called chrlabel). The goal is to have one map showing data for each zip code, and a second map giving the zip code as reference.
My initial attempt which I can't get working looks like this:
1) I added a second data frame to my map and added a new layer that contains two polygons named "zip" and "chrlabel".
2) I use this frame to enable data driven pages and then I hide it behind the primary frame (I don't want to see those polygons, I just want to use them to control the data driven pages).
3) In the zip code labels I tried to write a VBScript expression like this pseudo-code:
test = "<page name from dynamic text>"
If test = "zip" Then
    label = ZIP
Else
    label = CHRLABEL
End If
This does not work because the dynamic text does not resolve to the page name in the VBScript.
Is there some way to call the page name in VBScript so that I can make this work?
If not, is there another way to do this?
My other thought is to add another field to the layer that gets filled with a one or a zero. Then I could replace the if-then test condition with if NewField = 1.
Then I would just need to write a script that updates all the NewFields for the zipcode features when the data driven page advances to the second page. Is there a way to trigger a script (python or other) when a data driven page changes?
Thanks

8 months too late, but for posterity...
You're making things hard on yourself - it would be much easier to set up duplicate layers and adjust layer visibility. I'm not familiar with VBScript for this sort of thing, but in Python (using ESRI's arcpy library) it would look something like this [Python 2.6, ArcMap 10 - sample only, I haven't debugged this, but I do similar things quite often]:
from arcpy import mapping

## Load the map from disk
mxdFilePath = "C:\\GIS_Maps_Folder\\MyMap.mxd"
mapDoc = mapping.MapDocument(mxdFilePath)

## Load map elements
dataFrame = mapping.ListDataFrames(mapDoc)[0]  # assumes you want the first data frame; you can also search by name
mxdLayers = mapping.ListLayers(mapDoc, "", dataFrame)

## Find the two layers by name
for layer in mxdLayers:
    if layer.name == 'zip':
        zip_lyr = layer
    elif layer.name == 'sample_units':  # i.e. whatever layer carries your chrlabel labels
        labels_lyr = layer

## Print zip code map
zip_lyr.visible = True
zip_lyr.showLabels = True
labels_lyr.visible = False
labels_lyr.showLabels = False
zip_path = "C:\\Output_Folder\\Zips.pdf"
mapping.ExportToPDF(mapDoc, zip_path, layers_attributes="NONE", resolution=150)

## Print labels map
zip_lyr.visible = False
zip_lyr.showLabels = False
labels_lyr.visible = True
labels_lyr.showLabels = True
labels_path = "C:\\Output_Folder\\Labels.pdf"
mapping.ExportToPDF(mapDoc, labels_path, layers_attributes="NONE", resolution=150)

## Combine files (if desired)
pdfDoc = mapping.PDFDocumentCreate("C:\\Output_Folder\\Output.pdf")
pdfDoc.appendPages(zip_path)
pdfDoc.appendPages(labels_path)
pdfDoc.saveAndClose()
As far as the Data Driven Pages go, you can export them all at once or in a loop, and adjust whatever you want along the way, although I'm not sure why you'd need to if you use something similar to the above. The ESRI documentation and examples are actually quite good on this, and you should be able to get to all the other Python documentation pretty easily from there.
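For reference, the per-page loop is short; here is a minimal sketch (assuming mapDoc from the code above has Data Driven Pages enabled):

## Hedged sketch: export each Data Driven Page to its own PDF
ddp = mapDoc.dataDrivenPages
for pageNum in range(1, ddp.pageCount + 1):
    ddp.currentPageID = pageNum  # advance to the page before exporting
    mapping.ExportToPDF(mapDoc, "C:\\Output_Folder\\Page_%d.pdf" % pageNum, resolution=150)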

Related

Biopython: return chain but with the new chain ID already

I have a script which can extract selected chains from a structure into a new file. I do it for 400+ structures. Because the chainIDs of my selected chains can differ between structures, I parse .yaml files where I store the corresponding chainIDs. This script is working, everything is fine, but the next step is to rename the chains to be the same in each file. I used edited code from an existing answer. Basically it worked as well; however, the problem is that e.g. my new chainID of chain1 is the same as the original chainID of chain2, and this error occurs: "Cannot change id from U to T. The id T is already used for a sibling of this entity." Actually, this happened for many variables and it'd be too complicated to fix manually.
I've got an idea that this could be solved by renaming the chainIDs right at the moment when I'm extracting them. Is it possible to use Biopython like that? Couldn't find anything similar to my problem.
Simplified code for one structure (the original has one more loop for iterating over the 400+ structures and their .yaml files):
import yaml
from Bio.PDB import MMCIFParser, Select
from Bio.PDB.mmcifio import MMCIFIO

parser = MMCIFParser()

with open(yaml_file, "r") as file:
    proteins = yaml.load(file, Loader=yaml.FullLoader)

chain1 = proteins["1_chain"].split(",")[0]  # just for illustration that I have to parse the original chainIDs
chain2 = proteins["2_chain"].split(",")[0]

structure = parser.get_structure("xxx", "xxx.cif")[0]

class ChainSelect(Select):
    def accept_chain(self, chain):
        if chain.get_id() == '{}'.format(chain1):
            return True  # I thought that somewhere in this part a command renaming the chain to "A" could be added
        if chain.get_id() == '{}'.format(chain2):
            return True  # here I'd rename it "B"
        else:
            return False

io = MMCIFIO()
io.set_structure(structure)
io.save("new.cif", ChainSelect())
Is it possible to somehow expand the "return" statement in a way that it would return the chain with the desired chainID (e.g. "A")? Note that the original chain IDs can differ between structures (thus I have to use .format(chainX)).
I don't have any other idea how to get rid of the error that my desired chainID is already used by a sibling entity.
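One possible approach (a hedged sketch, not from the original thread and not tested on these files): a Select subclass only filters what gets written, so instead of renaming inside accept_chain, rename the chains on the parsed model before saving, parking them on temporary IDs first so a desired ID can never collide with a sibling's current ID:

# Sketch only: rename-then-save, assuming `structure` is the model parsed
# above and chain1/chain2 hold the original IDs from the .yaml file.
rename = {chain1: "A", chain2: "B"}  # original ID -> desired ID

# Pass 1: move the kept chains to temporary IDs no sibling can have, which
# avoids "The id X is already used for a sibling of this entity".
for i, old_id in enumerate(rename):
    structure[old_id].id = "tmp{}".format(i)

# Pass 2: assign the final IDs, then drop every other chain.
for i, new_id in enumerate(rename.values()):
    structure["tmp{}".format(i)].id = new_id
for chain in list(structure):
    if chain.id not in rename.values():
        structure.detach_child(chain.id)

io = MMCIFIO()
io.set_structure(structure)
io.save("new.cif")  # no Select needed; only the renamed chains remain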

Python- Insert new values into 'nested' list?

What I'm trying to do isn't a huge problem in PHP, but I can't find much assistance for Python.
In simple terms, from a list which produces output as follows:
{"marketId":"1.130856098","totalAvailable":null,"isMarketDataDelayed":null,"lastMatchTime":null,"betDelay":0,"version":2576584033,"complete":true,"runnersVoidable":false,"totalMatched":null,"status":"OPEN","bspReconciled":false,"crossMatching":false,"inplay":false,"numberOfWinners":1,"numberOfRunners":10,"numberOfActiveRunners":8,"runners":[{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":2.8,"size":34.16},{"price":2.76,"size":200},{"price":2.5,"size":237.85}],"availableToLay":[{"price":2.94,"size":6.03},{"price":2.96,"size":10.82},{"price":3,"size":33.45}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832765}...
All I want to do is add an extra field, containing the 'runner name' from the data set below, into each of the 'runners' sub-lists of the initial data set, based on selection_id = selectionId.
So initially I iterate through the full dataset, and then create a separate list to get the runner name from the runner id (I should point out that runnerId === selectionId === selection_id; no idea why multiple names are used). This works fine and the code is shown below:
for market_book in market_books:
    market_catalogues = trading.betting.list_market_catalogue(
        market_projection=["RUNNER_DESCRIPTION", "RUNNER_METADATA", "COMPETITION", "EVENT", "EVENT_TYPE", "MARKET_DESCRIPTION", "MARKET_START_TIME"],
        filter=betfairlightweight.filters.market_filter(
            market_ids=[market_book.market_id],
        ),
        max_results=100)

    data = []
    for market_catalogue in market_catalogues:
        for runner in market_catalogue.runners:
            data.append(
                (runner.selection_id, runner.runner_name)
            )
So as you can see I have the data in data[], but what I need to do is add it to the initial data set, based on the selection_id.
I'm more comfortable with PHP or JavaScript, so apologies if this seems a bit simplistic, but the code snippets I've found online only seem to assist with very simple Python lists and nothing 'nested' (to me the structure seems similar to a nested array).
As per the request below, here is the full list:
{"marketId":"1.130856098","totalAvailable":null,"isMarketDataDelayed":null,"lastMatchTime":null,"betDelay":0,"version":2576584033,"complete":true,"runnersVoidable":false,"totalMatched":null,"status":"OPEN","bspReconciled":false,"crossMatching":false,"inplay":false,"numberOfWinners":1,"numberOfRunners":10,"numberOfActiveRunners":8,"runners":[{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":2.8,"size":34.16},{"price":2.76,"size":200},{"price":2.5,"size":237.85}],"availableToLay":[{"price":2.94,"size":6.03},{"price":2.96,"size":10.82},{"price":3,"size":33.45}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832765},{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":20,"size":3},{"price":19.5,"size":26.36},{"price":19,"size":2}],"availableToLay":[{"price":21,"size":13},{"price":22,"size":2},{"price":23,"size":2}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832767},{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":11,"size":9.75},{"price":10.5,"size":3},{"price":10,"size":28.18}],"availableToLay":[{"price":11.5,"size":12},{"price":13.5,"size":2},{"price":14,"size":7.75}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832766},{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":48,"size":2},{"price":46,"size":5},{"price":42,"size":5}],"availableToLay":[{"price":60,"size":7},{"price":70,"size":5},{"price":75,"size":10}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832769},{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":18.5,"size":28.94},{"price":18,"size":5},{"price":17.5,"size":3}],"availableToLay":[{"price":21,"size":20},{"price":23,"size":2},{"price":24,"size":2}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832768},{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":4.3,"size":9},{"price":4.2,"size":257.98},{"price":4.1,"size":51.1}],"availableToLay":[{"price":4.4,"size":20.97},{"price":4.5,"size":30},{"price":4.6,"size":16}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832771},{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"price":24,"size":6.75},{"price":23,"size":2},{"price":22,"size":2}],"availableToLay":[{"price":26,"size":2},{"price":27,"size":2},{"price":28,"size":2}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":12832770},{"status":"ACTIVE","ex":{"tradedVolume":[],"availableToBack":[{"pri
ce":5.7,"size":149.33},{"price":5.5,"size":29.41},{"price":5.4,"size":5}],"availableToLay":[{"price":6,"size":85},{"price":6.6,"size":5},{"price":6.8,"size":5}]},"sp":{"nearPrice":null,"farPrice":null,"backStakeTaken":[],"layLiabilityTaken":[],"actualSP":null},"adjustmentFactor":null,"removalDate":null,"lastPriceTraded":null,"handicap":0,"totalMatched":null,"selectionId":10064909}],"publishTime":1551612312125,"priceLadderDefinition":{"type":"CLASSIC"},"keyLineDescription":null,"marketDefinition":{"bspMarket":false,"turnInPlayEnabled":false,"persistenceEnabled":false,"marketBaseRate":5,"eventId":"28180290","eventTypeId":"2378961","numberOfWinners":1,"bettingType":"ODDS","marketType":"NONSPORT","marketTime":"2019-03-29T00:00:00.000Z","suspendTime":"2019-03-29T00:00:00.000Z","bspReconciled":false,"complete":true,"inPlay":false,"crossMatching":false,"runnersVoidable":false,"numberOfActiveRunners":8,"betDelay":0,"status":"OPEN","runners":[{"status":"ACTIVE","sortPriority":1,"id":10064909},{"status":"ACTIVE","sortPriority":2,"id":12832765},{"status":"ACTIVE","sortPriority":3,"id":12832766},{"status":"ACTIVE","sortPriority":4,"id":12832767},{"status":"ACTIVE","sortPriority":5,"id":12832768},{"status":"ACTIVE","sortPriority":6,"id":12832770},{"status":"ACTIVE","sortPriority":7,"id":12832769},{"status":"ACTIVE","sortPriority":8,"id":12832771},{"status":"LOSER","sortPriority":9,"id":10317013},{"status":"LOSER","sortPriority":10,"id":10317010}],"regulators":["MR_INT"],"countryCode":"GB","discountAllowed":true,"timezone":"Europe\/London","openDate":"2019-03-29T00:00:00.000Z","version":2576584033,"priceLadderDefinition":{"type":"CLASSIC"}}}
I think I understand what you are trying to do now.
First, hold your data as a Python object (you gave us a JSON object):
import json

my_data = json.loads(my_json_string)
for item in my_data['runners']:
    item['selectionId'] = [item['selectionId'], my_name_here]
The thing is that my_data['runners'][i]['selectionId'] is a string; unless you want to concat the name and the id together, you should turn it into a list or even a dictionary.
Each item is a dictionary, so you can always add new keys to it:
item['new_key'] = my_value
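For example, a small sketch that attaches the runner name as its own key, using the (selection_id, runner_name) pairs already collected in data in the question (the key name runnerName is a made-up assumption):

# Sketch: build an id -> name lookup, then tag each runner dictionary.
names = dict(data)  # data holds (selection_id, runner_name) tuples
for runner in my_data['runners']:
    runner['runnerName'] = names.get(runner['selectionId'])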
So, essentially this works... with one exception: I can see from the print(...) in the loop that the attribute is updated; however, what I can't seem to do is see this update outside the loop.
mkt_runners = []
for market_catalogue in market_catalogues:
    for r in market_catalogue.runners:
        mkt_runners.append((r.selection_id, r.runner_name))

for market_book in market_books:
    for runner in market_book.runners:
        for x in mkt_runners:
            if runner.selection_id in x:
                setattr(runner, 'x', x[1])
                print(market_book.market_id, runner.x, runner.selection_id)
print(market_book.json())
So the print(market_book.market_id, ...) displays as expected, but when I print the whole list it shows the un-updated version. I can't seem to find an obvious solution, which is odd, as it seems like a really simple thing. (I tried messing around with indents in case that was the problem, but it doesn't seem to be; it's like the market_book list is not refreshed after the update of the runners sub-list!)

Getting element density from abaqus output database using python scripting

I'm trying to get the element density from the abaqus output database. I know you can request a field output for the volume using 'EVOL', is something similar possible for the density?
I'm afraid it's not, because of this: Getting element mass in Abaqus postprocessor
What would be the most efficient way to get the density? Looking up, for every element, which section set it is in?
Found a solution; I don't know if it's the fastest, but it works:
odb_file_path = r'your_path\file.odb'
odb = session.openOdb(name=odb_file_path)
instance = odb.rootAssembly.instances['MY_PART']
material_name = instance.elements[0].sectionCategory.name[8:-2]
density = odb.materials[material_name].density.table[0][0]
Note: the 'name' attribute will give you a string like 'solid MATERIALNAME', so I just cut out the part of the string that gave me the real material name. So it's the sectionCategory attribute of an OdbElementObject that is the answer.
EDIT: This doesn't seem to work after all, it turns out that it gives all elements the same material name, being the name of the first material.
The properties are associated something like this:
sectionAssignment connects section to set
set is the container for elements
section connects sectionAssignment to material
instance is connected to part (could be a part from another model)
part is connected to model
model is connected to section
Use the .inp or .cae file if you can. The following gets it from an opened cae file. To thoroughly get elements from materials, you would do something like the following, assuming you're starting your search in rootAssembly.instances:
Find the parts which the instances were created from.
Find the models which contain these parts.
Look for all sections with material_name in these models, and store the sectionNames associated with these sections
Look for all sectionAssignments which reference these sectionNames
Under each of these sectionAssignments, there is an associated region object which has the name (as a string) of an elementSet and the name of a part. Get all the elements from this elementSet in this part.
Cleanup:
Use the Python set object to remove any multiple references to the same element.
Multiply the number of elements in this set by the number of identical part instances that refer to this material in rootAssembly.
E.g., for some cae model variable called model:
model_part_repeats = {}
model_part_elemLabels = {}
for instance in model.rootAssembly.instances.values():
    p = instance.part.name
    m = instance.part.modelName
    try:
        model_part_repeats[(m, p)] += 1
        continue
    except KeyError:
        model_part_repeats[(m, p)] = 1

    # Get all sections in model
    sectionNames = []
    for s in mdb.models[m].sections.values():
        if s.material == material_name:  # material_name is already known
            # This is a valid section - search for section assignments
            # in part for this section, and then the associated set
            sectionNames.append(s.name)

    if sectionNames:
        labels = []
        for sa in mdb.models[m].parts[p].sectionAssignments:
            if sa.sectionName in sectionNames:
                eset = sa.region[0]
                labels = labels + [e.label for e in mdb.models[m].parts[p].sets[eset].elements]
        labels = list(set(labels))
        model_part_elemLabels[(m, p)] = labels
    else:
        model_part_elemLabels[(m, p)] = []

num_elements_with_material = sum([model_part_repeats[k] * len(model_part_elemLabels[k]) for k in model_part_repeats])
Finally, grab the material density associated with material_name then multiply it by num_elements_with_material.
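That last step is short; a hedged sketch (assuming a constant, temperature-independent density entry in the cae model):

# Density as defined for material_name in the model; temperature-dependent
# definitions would have more rows in .table.
density = mdb.models[m].materials[material_name].density.table[0][0]
result = density * num_elements_with_material  # the product described above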
Of course, this method will be extremely slow for larger models, and it is more advisable to use string techniques on the .inp file for faster performance.

Python friends network visualization

I have hundreds of lists (each list corresponds to 1 person). Each list contains 100 strings, which are the 100 friends of that person.
I want to 3D-visualize this people network based on the number of common friends they have. Considering any 2 lists, the more strings they have in common, the closer they should appear together in this 3D graph. I wanted to show each list as a dot on the 3D graph, without connections drawn between the dots.
For brevity, I have included only 3 people here.
person1 = ['mike', 'alex', 'arker','locke','dave','david','ross','rachel','anna','ann','darl','carl','karle']
person2 = ['mika', 'adlex', 'parker','ocke','ave','david','rosse','rachel','anna','ann','darla','carla','karle']
person3 = ['mika', 'alex', 'parker','ocke','ave','david','rosse','ross','anna','ann','darla','carla','karle', 'sasha', 'daria']
Gephi Setup steps:
Install Gephi and then start it
You probably want to upgrade all the plugins now; see the button in the lower right corner.
Now create a new project.
Make sure the current workspace is Workspace1
Enable Graph Streaming plugin
In the Streaming tab that then appears, configure the server to use http and port 8080
Start the server (it will then have a green dot underneath it instead of a red dot).
Python steps:
Install the gephistreamer package (pip install gephistreamer)
Copy the following Python code to something like friends.py:
from gephistreamer import graph
from gephistreamer import streamer
import random as rn

stream = streamer.Streamer(streamer.GephiWS(hostname="localhost", port=8080, workspace="workspace1"))

szfak = 100   # this scales up everything - somehow it is needed
cdfak = 3000

nodedict = {}

def addfnode(fname):
    # grab the node out of the dictionary if it is there, otherwise make a new one
    if fname in nodedict:
        nnode = nodedict[fname]
    else:
        nnode = graph.Node(fname, size=szfak, x=cdfak*rn.random(), y=cdfak*rn.random(), color="#8080ff", type="f")
        nodedict[fname] = nnode  # new node into the dictionary
    return nnode

def addnodes(pname, fnodenamelist):
    pnode = graph.Node(pname, size=szfak, x=cdfak*rn.random(), y=cdfak*rn.random(), color="#ff8080", type="p")
    stream.add_node(pnode)
    for fname in fnodenamelist:
        print(pname + "-" + fname)
        fnode = addfnode(fname)
        stream.add_node(fnode)
        pfedge = graph.Edge(pnode, fnode, weight=rn.random())
        stream.add_edge(pfedge)

person1friends = ['mike','alex','arker','locke','dave','david','ross','rachel','anna','ann','darl','carl','karle']
person2friends = ['mika','adlex','parker','ocke','ave','david','rosse','rachel','anna','ann','darla','carla','karle']
person3friends = ['mika','alex','parker','ocke','ave','david','rosse','ross','anna','ann','darla','carla','karle','sasha','daria']

addnodes("p1", person1friends)
addnodes("p2", person2friends)
addnodes("p3", person3friends)
Run it with the command python friends.py
You should see all the nodes appear. There are then lots of ways you can lay it out to make it look better; I am using the Force Atlas layout here.
Some notes:
you can get the labels to show or disappear by clicking on the T on the bottom status/control bar.
View the data in the nodes and edges by opening Window/Data Table.
It is a very rich program, there are more options than you can shake a stick at.
You can set more properties on your nodes and edges in the python code and then they will show up in the data table view and can be used to filter, etc. (see the one-liner after these notes).
You want to pay attention to that update button in the bottom right corner of Gephi; the updates fix a lot of bugs.
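For instance (a hedged one-liner; "gender" is a made-up property name):

# Any extra keyword becomes a node property visible in Gephi's Data Table.
nnode = graph.Node(fname, size=szfak, color="#8080ff", type="f", gender="f")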
This will get you started (as you asked), but for your particular problem:
you will also need to calculate weights for your persons (the "p" nodes), and link them to each other with those weights
Then you need to find a layout and parameters that position those nodes the way you want them based on the new weights.
So you don't really need to show the type="f" nodes; you need just the "p" nodes.
The weight between two "p" nodes should be based on the intersection of the sets of the friend names, as in the sketch below.
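A minimal sketch of such a weight (a hedged example; it assumes addnodes() above is tweaked to return its pnode so the "p" nodes can be reused):

# Sketch: weight a p-p edge by the Jaccard overlap of two friend lists.
def friend_weight(f1, f2):
    s1, s2 = set(f1), set(f2)
    return len(s1 & s2) / float(len(s1 | s2))

p1 = addnodes("p1", person1friends)  # assumes addnodes ends with: return pnode
p2 = addnodes("p2", person2friends)
stream.add_edge(graph.Edge(p1, p2, weight=friend_weight(person1friends, person2friends)))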
There are also Gephi plugins that can then display this in 3D, but that is actually a completely separate issue, you probably want to get it working in 2D first.
This is running on Windows 10 using Anaconda 4.4.1 and Python 3.5.2 and Gephi 0.9.1.

Extracting information from unconventional text files? (Python)

I am trying to extract some information from a set of files sent to me by a collaborator. Each file contains some python code which names a sequence of lists. They look something like this:
#PHASE = 0
x = np.array(1,2,...)
y = np.array(3,4,...)
z = np.array(5,6,...)
#PHASE = 30
x = np.array(1,4,...)
y = np.array(2,5,...)
z = np.array(3,6,...)
#PHASE = 40
...
And so on. There are 12 files in total, each with 7 phase sets. My goal is to convert each phase into its own file which can then be read by ascii.read() as a Table object for manipulation in a different section of code.
My current method is extremely inefficient, both in terms of resources and time/energy required to assemble. It goes something like this: Start with a function
def makeTable(a, b, c):
    output = Table()
    output['x'] = a
    output['y'] = b
    output['z'] = c
    return output
Then for each phase, I have manually copy-pasted the relevant part of the text file into a cell and appended a line of code
fileName_phase = makeTable(a,b,c)
Repeat ad nauseam. It would take 84 iterations of this to process all the data, and naturally each would need some minor adjustments to match the specific fileName and phase.
Finally, at the end of my code, I have a few lines of code set up to ascii.write each of the tables into .dat files for later manipulation.
This entire method is extremely exhausting to set up. If it's the only way to handle the data, I'll do it. I'm hoping I can find a quicker way to set it up, however. Is there one you can suggest?
If efficiency and code reuse instead of copy-paste is the goal, I think that classes might provide a good way. I'm going to sleep now, but I'll edit later. Here are my thoughts: create a class called FileWithArrays and use a parser to read the lines and put them inside the FileWithArrays object you create. Once that's done, you can then add a method to transform the object into a table. A rough sketch is below.
P.S. A good idea for the parser is to store all the lines in a list and parse them one by one, using list.pop() to auto-shrink the list. Hope it helps; tomorrow I'll look more into it if this doesn't help a lot. Try to rewrite/reformat the question if I misunderstood anything, it's not very easy to read.
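Here is a rough sketch of that idea (hedged: the class name, file layout, and the np.array(...) line format are assumptions taken from the question):

import numpy as np

class FileWithArrays:
    """Rough sketch: parse a file of '#PHASE = N' blocks like those in the question."""
    def __init__(self, path):
        self.phases = {}  # phase number -> {'x': array, 'y': array, 'z': array}
        with open(path) as f:
            lines = [ln.strip() for ln in f if ln.strip()]
        current = None
        while lines:
            line = lines.pop(0)  # parse one by one, shrinking the list
            if line.startswith('#PHASE'):
                current = int(line.split('=')[1])
                self.phases[current] = {}
            elif current is not None and '=' in line:
                name, expr = [s.strip() for s in line.split('=', 1)]
                # pull the comma-separated numbers out of 'np.array(...)'
                nums = expr[expr.find('(') + 1 : expr.rfind(')')].strip('[]')
                self.phases[current][name] = np.fromstring(nums, sep=',')

Each phase dictionary can then be fed to the question's makeTable, e.g. makeTable(fwa.phases[0]['x'], fwa.phases[0]['y'], fwa.phases[0]['z']) for fwa = FileWithArrays('file1.txt').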
I will suggest a way which will be scorned by many but will get your work done.
So apologies to everyone.
The prerequisite for this method is that you absolutely trust the correctness of the input files, which I guess you do. (After all, he is your collaborator.)
So the key point here is that the text in the file is code, which means it can be executed.
So you can do something like this:
import re
import numpy as np  # this is for the actual code in the files. You might have to install the numpy library for this to work.

with open("xyz.txt") as file:
    content = file.read()
Now that you have all the content, you have to separate it by phase.
For this we will use the re.split function.
phase_data = re.split("#PHASE = .*\n", content)
Now we have the content of each phase in a list.
Now comes the part of executing it.
for phase in phase_data:
    if len(phase.strip()) == 0:
        continue
    exec(phase)
    table = makeTable(x, y, z)  # the x, y and z are defined by the exec.
    # do whatever you want with the table.
I will reiterate that you have to absolutely trust the contents of the file. Since you are executing it as code.
But your work seems like a scripting one and I believe this will get your work done.
PS: The other "safer" alternative to exec is to use a sandboxing library which takes the string and executes it without affecting the parent scope.
To avoid the safety issue of using exec as suggested by @Ajay Brahmakshatriya, but keeping his first processing step, you can create your own minimal 'phase parser', something like:
VARS = 'xyz'

def makeTable(phase):
    assert len(phase) >= 3
    output = Table()
    for i in range(3):
        line = [s.strip() for s in phase[i].split('=')]
        assert len(line) == 2
        var, arr = line
        assert var == VARS[i]
        assert arr[:10] == 'np.array([' and arr[-2:] == '])'
        output[var] = np.fromstring(arr[10:-2], sep=',')
    return output
and then call
table = makeTable(phase.strip().splitlines())  # makeTable expects the phase as a list of lines
instead of
exec(phase)
table = makeTable(x, y, z)
You could also skip all these assert statements without compromising safety, but if the file is corrupted or not formatted as expected, the error that will be thrown might just be harder to understand...
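Putting the two answers together, a hedged end-to-end sketch (the split step from above, the safe parser, and astropy's ascii for the writing step mentioned in the question):

from astropy.io import ascii  # the question's ascii.read()/ascii.write()

# Sketch: one .dat file per phase; the phase numbers from the '#PHASE'
# comments are discarded by re.split, so files are numbered by position.
phases = [p for p in re.split(r"#PHASE = .*\n", content) if p.strip()]
for n, phase in enumerate(phases):
    table = makeTable(phase.strip().splitlines())
    ascii.write(table, "xyz_phase{}.dat".format(n))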
