I would like to be able to call a Python script that checks whether the variables passed to it have already been passed on a previous call, and if not, writes out a KML file for Google Earth to read. I've looked at environment variables to no avail. Essentially I need to store a string so that the next time the script is called I can reference it. I'll post what I have below. Thanks, and any help is greatly appreciated.
EDIT: I suppose I didn't clearly state the issue. I'm calling a Python script on an Apache server, with KML passing URL variables to the script. One of the URL variables contains a time string. I would like to store that time and compare it to the next "time" that is passed to the script: if the times don't match, print out certain KML; if they do match, print an empty document so that Google Earth doesn't duplicate a placemark. In essence I am filtering the KML so that I can avoid duplicates. I've also updated the code below.
import cgi
import os

url = cgi.FieldStorage()
bbox = url['test'].value
bbox = bbox.split(',')
lat = float(bbox[0])
lon = float(bbox[1])
alt = float(bbox[2])
when = str(bbox[3])

# os.environ.get avoids a KeyError on the first call, when 'TEMP' is unset
if when == os.environ.get('TEMP'):
    kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
           '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
           '</kml>')
else:
    kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
           '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
           '<NetworkLinkControl>\n'
           '<Update>\n'
           '<targetHref>http://localhost/GE/Placemark.kml</targetHref>\n'
           '<Create>\n'
           '<Folder targetId="fld1">\n'
           '<Placemark>\n'
           '<name>View-centered placemark</name>\n'
           '<TimeStamp>\n'
           '<when>%s</when>\n'
           '</TimeStamp>\n'
           '<Point>\n'
           '<coordinates>%.6f,%.6f,%.6f</coordinates>\n'
           '</Point>\n'
           '</Placemark>\n'
           '</Folder>\n'
           '</Create>\n'
           '</Update>\n'
           '</NetworkLinkControl>\n'
           '</kml>'
           ) % (when, lat, lon, alt)
    os.environ['TEMP'] = when

print('Content-Type: application/vnd.google-earth.kml+xml\n')
print(kml)
It seems like you have a few options here to share state:

Use a database.
Use file-system-based persistence.
Run a separate daemon process that you can connect to via sockets.
Use memcached or some other service to store the value in memory.

You can also share state between processes in Python: https://docs.python.org/2/library/multiprocessing.html#sharing-state-between-processes
With that approach you can create a Manager and share proxy objects between processes.
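Of these, file-system-based persistence is the simplest to bolt onto a CGI script: each request runs in a fresh process, so os.environ will not survive between calls, but a small state file will. A minimal sketch, with an arbitrary file name and location:

```python
import os
import tempfile

# Hypothetical location for the shared state; any path the web server
# user can write to will do.
STATE_FILE = os.path.join(tempfile.gettempdir(), "last_when.txt")

def is_duplicate(when, state_file=STATE_FILE):
    """Return True if `when` matches the value stored by the previous
    call, then record `when` for the next call."""
    last = None
    if os.path.exists(state_file):
        with open(state_file) as f:
            last = f.read().strip()
    with open(state_file, "w") as f:
        f.write(when)
    return when == last
```

The script would then print the empty KML document when is_duplicate(when) is true, and the full placemark otherwise.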
I am trying to solve a problem:
I receive an auto-generated email from the government with no tags in the HTML. It's one table nested inside another, an abomination of a template. I get it every few days and I want to extract some fields from it. My idea was this:
Use the HTML in the email as a template. Remove all the fields that change with every mail, like the name of my client, their unique ID, and the issue explained in the mail.
Diff this template (with its fields removed) against each new email. That will give me all the new info in one shot without having to parse the email.
The problem is, I can't find any way of extracting only these additions. I am trying to use difflib in Python, and it returns byte streams of additions and subtractions on each line that I am not able to process properly. I want a way to return only the additions and nothing else. I am open to using other libraries or methods; I do not want to write a huge regex over tons of HTML.
When I called diff via Popen, the stdout it returned was also bytes. You can convert the bytes to characters and then continue with your processing.
You could do something similar to what I do below to convert your bytes to a string.
The script calls diff on two files and prints only the lines beginning with the '>' symbol (lines that are new in the right-hand file):
#!/usr/bin/env python
import os
import sys
import subprocess

file1 = 'test1'
file2 = 'test2'
if len(sys.argv) == 3:
    file1 = sys.argv[1]
    file2 = sys.argv[2]

if not os.access(file1, os.R_OK):
    print(f"Unable to read: '{file1}'")
    sys.exit(1)
if not os.access(file2, os.R_OK):
    print(f"Unable to read: '{file2}'")
    sys.exit(1)

argv = ['diff', file1, file2]
runproc = subprocess.Popen(args=argv, stdout=subprocess.PIPE)
out, err = runproc.communicate()

# out is a bytes object; decode it to a str before splitting into lines
outstr = out.decode()

for line in outstr.split('\n'):
    if len(line) == 0:
        continue
    if line[0] == '>':
        print(line)
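If you would rather not spawn diff at all, the standard difflib module can produce the additions directly, provided you give it lists of str lines rather than bytes. A minimal sketch:

```python
import difflib

def additions(old_lines, new_lines):
    """Return only the lines that are new in new_lines relative to old_lines."""
    # ndiff prefixes added lines with '+ '; strip the prefix and keep them
    return [line[2:] for line in difflib.ndiff(old_lines, new_lines)
            if line.startswith('+ ')]
```

If your input arrives as bytes (e.g. from Popen or an email payload), call .decode() on it first so difflib compares strings.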
I am trying to understand HTTP queries and was successfully getting the data from GET requests through the environment variables, by first looking through the keys of the environment variables and then accessing 'QUERY_STRING' to get the actual data, like this:
#!/usr/bin/python3
import sys
import cgi
import os

inputVars = cgi.FieldStorage()
f = open('test', 'w')
f.write(str(os.environ['QUERY_STRING']) + "\n")
f.close()
Is there a way to get the POST data as well (the equivalent of 'QUERY_STRING' for POST, so to speak), or is it not accessible because the POST data is sent in its own package? The keys of the environment variables did not give me any hint so far.
The possible-duplicate link solved it, as syntonym pointed out in the comments and as user Schien explains in one of the answers to the linked question:
the raw HTTP POST data (the body that follows the headers) can be read through stdin,
so the sys.stdin.read() method can be used.
My code now works, looking like this:
#!/usr/bin/python3
import sys

f = open('test', 'w')
f.write(str(sys.stdin.read()))
f.close()
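One caveat worth noting: for POST requests the server also sets the CONTENT_LENGTH environment variable, and reading exactly that many bytes avoids hanging on servers that do not close stdin after the body. A hedged sketch:

```python
import os
import sys

def read_post_body():
    """Read the raw POST body from stdin, honoring CONTENT_LENGTH if set."""
    try:
        length = int(os.environ.get("CONTENT_LENGTH", ""))
    except ValueError:
        length = 0
    # With no (or zero) CONTENT_LENGTH there is no body to read
    return sys.stdin.read(length) if length > 0 else ""
```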
So I'm not really sure of the issue here... Basically, as long as I use the live API I'm working with to get the data, everything works fine. But if I try to open an existing file with the same exact data in it (from a previous API call), I get an internal server error. Here is the code block that is causing issues:
thisFile = os.path.join(__location__, '2014/' + npi + '_individual.json')
# if the local cached copy exists, load that; otherwise connect to the API
if os.path.isfile(thisFile):
    target2 = open(thisFile)
    content = json.loads(target2.read())
else:
    req = urllib2.Request(url + '?database=nppes&npi=' + npi + '&apikey=' + apiKey)
    r = urllib2.urlopen(req)
    content = json.loads(r.read())
I believe I'm using web.py or web2py (I'm not sure which; they are separate projects), executing the script via WSGI on Apache 2.4.
It turned out to be an unrelated matter. On the first call, the script was refining the data it got back from the API and then storing that refined data in the JSON file.
So on the next load, the JSON file exists, but some things were renamed during the refining process, so the rest of the script didn't know how to handle it. Silly issue.
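One way to avoid this class of bug is to cache the raw API response before any refinement, so the cached and live paths always feed identical data into the rest of the script. A sketch, with illustrative function and file names:

```python
import json
import os

def load_or_fetch(path, fetch):
    """Return cached raw JSON if `path` exists; otherwise call `fetch()`,
    cache the raw (unrefined) result, and return it."""
    if os.path.isfile(path):
        with open(path) as f:
            return json.load(f)
    content = fetch()          # e.g. a function wrapping the urllib2 call
    with open(path, "w") as f:
        json.dump(content, f)  # cache BEFORE refining
    return content
```

Any renaming or refinement then happens after this call, identically on both paths.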
The problem:
I have a .sav file containing a lot of data from a questionnaire. This data is transferred to an Access database through an ODBC connection using VBA. The VBA macro is used by a lot of not-so-tech-savvy coworkers, and each time they need to use a new .sav file they have to create or alter a connection. It can be a hassle to explain how every time, so I would like to create this connection programmatically (ideally contained in the VBA macro, but if that's not possible, something like a Python script will do).
The solution I thought of:
I'm not very familiar with VBA, so to begin with I tried doing it with Python. I found this example, which seemed (to me) like a solution:
http://code.activestate.com/recipes/414879-create-an-odbc-data-source/
However, I'm not sure what arguments to feed SQLConfigDataSource. I tried:
create_sys_dsn(
    'IBM SPSS Statistics 20 Data File Driver - Standalone',
    SDSN='SAVDB',
    HST=r'C:\ProgramFiles\IBM\SPSS\StatisticsDataFileDriver\20\Standalone\cfg\oadm.ini',
    PRT='StatisticsSAVDriverStandalone',
    CP_CONNECT_STRING='data/Dustin_w44-47_2011.sav',
    CP_UserMissingIsNull=1)
But to no avail.
Being green in this field, I realise that my proposed solution might not even be the right one. Does Stack Overflow have any ideas?
Thanks in advance.
Once you have the filename, from a prompt such as:
Dim fd As Object
Dim CSVFilename As String

Set fd = Application.FileDialog(3)
'Use a With...End With block to reference the FileDialog object.
With fd
    .AllowMultiSelect = False
    .Filters.Add "Data Files", "*.csv", 1
    .ButtonName = "Forecast"
    If .Show = -1 Then
        'SelectedItems is a collection, even with multiselect off
        CSVFilename = .SelectedItems(1)
    Else
        Exit Sub
    End If
End With
'Set the object variable to Nothing.
Set fd = Nothing
then you should be able to construct the connection string, which will allow you to open the file:
Dim MyConnect As New Connection
With MyConnect
    .Provider = "IBM SPSS Statistics 20 Data File Driver - Standalone"
    .ConnectionString = "data/" & CSVFilename
    .Open
End With
I would also point out that if you can import the CSV file using External Data -> Text File (or the equivalent), then a command more like

DoCmd.TransferText(TransferType, SpecificationName, TableName, FileName, _
    HasFieldNames, HTMLTableName, CodePage)

would probably work out easier for you.
For anyone who happens to need a solution for this: what I ended up doing was creating the system DSN by editing the registry directly.
To see which keys and values need to be modified, create a system DSN manually and inspect the values under 'HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\[DSN]' and 'HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\ODBC Data Sources'. It should then be fairly clear how to create a DSN through VBA.
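In Python, the same registry edits can be scripted with the standard winreg module. The key/value layout below is an assumption based on the registry paths above; copy the real value names from a DSN you created manually with the ODBC administrator before relying on it:

```python
import sys

# Hypothetical DSN settings; verify the names against a manually created DSN.
DSN = "SAVDB"
DRIVER = "IBM SPSS Statistics 20 Data File Driver - Standalone"

def dsn_registry_values(dsn, driver, sav_path):
    """Map of registry subkeys (under HKEY_LOCAL_MACHINE) to the values
    a system DSN appears to need."""
    return {
        "SOFTWARE\\ODBC\\ODBC.INI\\" + dsn: {
            "Driver": driver,
            "CP_CONNECT_STRING": sav_path,
        },
        # Registers the DSN name so the ODBC administrator lists it
        "SOFTWARE\\ODBC\\ODBC.INI\\ODBC Data Sources": {dsn: driver},
    }

if sys.platform == "win32":  # the registry only exists on Windows
    import winreg
    for subkey, values in dsn_registry_values(DSN, DRIVER, "data/survey.sav").items():
        key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, subkey)
        for name, value in values.items():
            winreg.SetValueEx(key, name, 0, winreg.REG_SZ, value)
        winreg.CloseKey(key)
```

Writing under HKEY_LOCAL_MACHINE requires administrator rights; per-user DSNs live under HKEY_CURRENT_USER instead.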
I'm using the Plone CMS and am having trouble with a Python script. I get a NameError: "global name 'open' is not defined". When I put the code in a separate Python script it works fine, and the information is being passed to the script, because I can print the query. Code is below:
# Import a standard function, and get the HTML request and response objects.
from Products.PythonScripts.standard import html_quote

request = container.REQUEST
RESPONSE = request.RESPONSE

# Insert data that was passed from the form
query = request.query
#print query

f = open("blast_query.txt", "w")
for i in query:
    f.write(i)
return printed
I also have a second question: can I tell Python to open a file in a certain directory? For example, if the script is in a certain location, i.e. the home folder, but I want the script to open a file at home/some_directory/some_directory, can it be done?
Python Scripts in Plone are restricted and have no access to the filesystem, so the open call is not available there. You'll have to use an External Method or a full Python module to get full filesystem access.
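A minimal External Method might look like the module below (the paths and names are illustrative; the module itself goes in the Extensions/ directory of the Zope instance, where it runs unrestricted). It also answers the second question: os.path.join builds a path to whatever directory you like, so the output file need not sit next to the script.

```python
import os

def write_query(self, query, base_dir="/home/user/some_directory"):
    """Write each item of `query` to blast_query.txt under base_dir.

    `self` is the object the External Method is called on in Zope; it is
    unused here. The base_dir default is a made-up example path.
    """
    path = os.path.join(base_dir, "blast_query.txt")
    with open(path, "w") as f:
        for item in query:
            f.write(item)
    return path
```

From the restricted Python Script you would then call this method instead of open.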