I'm trying to use couchdb.py to create and update databases. I'd like to implement notification changes, preferably in continuous mode. Running the test code posted below, I don't see how the changes scheme works within python.
import time
import couchdb
from couchdb.mapping import Document, IntegerField, TextField

class SomeDocument(Document):
    # def __init__(self):
    intField = IntegerField()  # for now - this should be an integer
    textField = TextField()

couch = couchdb.Server('http://127.0.0.1:5984')
databasename = 'testnotifications'

if databasename in couch:
    print 'Deleting then creating database ' + databasename + ' from server'
    del couch[databasename]
    db = couch.create(databasename)
else:
    print 'Creating database ' + databasename + ' on server'
    db = couch.create(databasename)

for iii in range(5):
    doc = SomeDocument(intField=iii, textField='somestring' + str(iii))
    doc.store(db)
    print doc.id + '\t' + doc.rev

something = db.changes(feed='continuous', since=4, heartbeat=1000)

for iii in range(5, 10):
    doc = SomeDocument(intField=iii, textField='somestring' + str(iii))
    doc.store(db)
    time.sleep(1)
    print something
    print db.changes(since=iii - 1)
The call
db.changes(since=iii-1)
returns information that is of interest, but I haven't worked out how to extract the sequence numbers, revision numbers, or document information from the format it comes back in:
{u'last_seq': 6, u'results': [{u'changes': [{u'rev': u'1-9c1e4df5ceacada059512a8180ead70e'}], u'id': u'7d0cb1ccbfd9675b4b6c1076f40049a8', u'seq': 5}, {u'changes': [{u'rev': u'1-bbe2953a5ef9835a0f8d548fa4c33b42'}], u'id': u'7d0cb1ccbfd9675b4b6c1076f400560d', u'seq': 6}]}
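That dict can be unpacked with plain loops; a quick sketch (the helper name is mine, not part of couchdb-python):

```python
def extract_changes(changes):
    # flatten the changes dict into (seq, doc id, rev) tuples
    rows = []
    for result in changes['results']:
        for change in result['changes']:
            rows.append((result['seq'], result['id'], change['rev']))
    return rows

sample = {
    'last_seq': 6,
    'results': [
        {'changes': [{'rev': '1-9c1e4df5ceacada059512a8180ead70e'}],
         'id': '7d0cb1ccbfd9675b4b6c1076f40049a8', 'seq': 5},
    ],
}
print(extract_changes(sample))
# → [(5, '7d0cb1ccbfd9675b4b6c1076f40049a8', '1-9c1e4df5ceacada059512a8180ead70e')]
```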
Meanwhile, the code I'm really interested in using:
db.changes(feed='continuous',since=4,heartbeat=1000)
returns a generator object and doesn't appear to provide notifications as they come in, as the CouchDB guide suggests...
Has anyone used changes in couchdb-python successfully?
I use long polling rather than continuous, and that works OK for me. In long-polling mode db.changes blocks until at least one change has happened, and then returns all the changes in a generator object.
Here is the code I use to handle changes. settings.db is my CouchDB Database object.
since = 1
while True:
    changes = settings.db.changes(since=since)
    since = changes["last_seq"]
    for changeset in changes["results"]:
        try:
            doc = settings.db[changeset["id"]]
        except couchdb.http.ResourceNotFound:
            continue
        else:
            pass  # process doc here
As you can see it's an infinite loop where we call changes on each iteration. The call to changes returns a dictionary with two elements, the sequence number of the most recent update and the objects that were modified. I then loop through each result loading the appropriate object and processing it.
For a continuous feed, instead of the while True: line use for changes in settings.db.changes(feed="continuous", since=since).
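Sketched out, the continuous variant looks like this; docs_from_feed and load_doc are illustrative names of mine (not couchdb-python API), and the feed is assumed to yield dicts shaped like the entries in the results list above:

```python
def docs_from_feed(feed, load_doc):
    """Yield loaded documents for each change in a changes feed.

    feed     -- an iterable of change dicts, e.g. db.changes(feed="continuous", since=since)
    load_doc -- callable mapping a doc id to the document, or None if it's gone
    """
    for change in feed:
        if 'id' not in change:
            # skip bookkeeping entries (e.g. the trailing last_seq dict)
            continue
        doc = load_doc(change['id'])
        if doc is not None:
            yield doc
```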
I set up a mail spooler using something similar to this. You'll also need to load couchdb.Session(). I also use a filter so the spooler's changes feed only receives unsent emails.
from couchdb import Server

s = Server('http://localhost:5984/')
db = s['testnotifications']

# the since parameter defaults to 'last_seq' when using a continuous feed
ch = db.changes(feed='continuous', heartbeat='1000', include_docs=True)
for line in ch:
    doc = line['doc']
    # process doc here
    doc['priority'] = 'high'
    doc['recipient'] = 'Joe User'
    # doc['state'] = 'sent'
    db.save(doc)
This lets you access your doc directly from the changes feed, manipulate your data as you see fit, and finally update your document. I use a try/except block around the actual db.save(doc) so I can catch when a document has been updated while I was editing, and reload the doc before saving.
I have a Jython 2.7 script that receives a URL and uses the parameters/values in the URL to create or update records.
Example URL: http://server:host/maximo/oslc/script/CREATEWO?&wonum=WO0001&description=Legacy&classstructureid=1666&wopriority=1&worktype=CM
Details:
Receive the URL and put the parameters/values in variables:
from psdi.server import MXServer
from psdi.mbo import MboSet
resp = {}
wonum = request.getQueryParam("wonum")
description = request.getQueryParam("description")
classstructureid = request.getQueryParam("classstructureid")
wopriority = request.getQueryParam("wopriority")
worktype = request.getQueryParam("worktype")
Some lines that aren't relevant to the question:
woset = MXServer.getMXServer().getMboSet("workorder",request.getUserInfo())
whereClause = "wonum= '" + wonum + "'"
woset.setWhere(whereClause)
woset.reset()
woMbo = woset.moveFirst()
Then use the values to either create a new record or update an existing record:
# If workorder already exists, update it:
if woMbo is not None:
    woMbo.setValue("description", description)
    woMbo.setValue("classstructureid", classstructureid)
    woMbo.setValue("wopriority", wopriority)
    woMbo.setValue("worktype", worktype)
    woset.save()
    woset.clear()
    woset.close()
    resp[0] = 'Updated workorder ' + wonum
# Else, create a new workorder
else:
    woMbo = woset.add()
    woMbo.setValue("wonum", wonum)
    woMbo.setValue("description", description)
    woMbo.setValue("classstructureid", classstructureid)
    woMbo.setValue("wopriority", wopriority)
    woMbo.setValue("worktype", worktype)
    woset.save()
    woset.clear()
    woset.close()
    resp[0] = 'Created workorder ' + wonum
responseBody = resp[0]
Question:
Unfortunately, the field names/values are hardcoded in 3 different places in the script.
I would like to enhance the script so that it is dynamic -- not hardcoded.
In other words, it would be great if the script could accept a list of parameters/values and simply loop through them to update or create a record in the respective fields.
Is it possible to do this?
You're using the Maximo Next Gen REST API to execute an automation script that accepts an HTTP request with parameters and creates or updates a Work Order in the system. You want to make your script more generic, presumably to accept more parameters for the created/updated work order and/or to handle other MBOs.
This can be achieved without developing automation scripts and just using the Next Gen. API you're already using to execute the script. The API already accepts create & update requests on the mxwo object structure with the ability to use all the fields, child objects, etc.
https://developer.ibm.com/static/site-id/155/maximodev/restguide/Maximo_Nextgen_REST_API.html#_creating_and_updating_resources
Assuming you are always working with the same query parameters, rather than defining individual variables, loop through a list of strings and store the values as key-value pairs.
To populate:
items = ["wonum", "description"]
resp = {k: request.getQueryParam(k) for k in items}
Then to set:
for i in items:
    woMbo.setValue(i, resp[i])
Otherwise, you are looking at URL parsing with the getQuery method, followed by splitting on "&" and then "=", giving you ["wonum", "WO0001", "description", "Legacy"], for example, and you can loop over every other element to get your dynamic entries:
l = ["wonum", "WO0001", "description", "Legacy"]
for i in range(0, len(l) - 1, 2):
    # f-strings aren't available on Jython 2.7, so use % formatting
    print('key:%s\tvalue:%s' % (l[i], l[i + 1]))
which prints:
key:wonum	value:WO0001
key:description	value:Legacy
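The same idea as a small helper that takes the raw query string and returns a dict, written to stay Python 2.7 compatible so it also runs under Jython 2.7 (the function name is illustrative, not Maximo API):

```python
def parse_query(qs):
    # turn '?&wonum=WO0001&description=Legacy' into a dict of params
    params = {}
    for part in qs.lstrip('?&').split('&'):
        if '=' in part:
            key, value = part.split('=', 1)  # split only on the first '='
            params[key] = value
    return params

print(parse_query('?&wonum=WO0001&description=Legacy'))
# → {'wonum': 'WO0001', 'description': 'Legacy'} (on Python 3.7+)
```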
Note: This is subject to SQL injection attacks, and should be fixed
whereClause = "wonum= '" + wonum + "'"
This question already has answers here:
Are global variables thread-safe in Flask? How do I share data between requests?
I'm working on a small web app to create some diagrams and I need to create a variable to hold a unique file name for each web app session so that users don't end up getting the wrong file when they save the diagram as a pdf. To do this I've wrapped the related views in a class using flask_classful and created an instance variable to hold the file name.
class PiperView(FlaskView):
    route_base = '/'

    def __init__(self):
        self.piper_name = '_init_.pdf'
        self.tst_index = 0
        self.tst_plot = 0
        self.tst_download = 0
        self.tst_master = 0

    @route('/', methods=['GET', 'POST'])
    @route('/index/', methods=['GET', 'POST'], endpoint='index')
    @nocache
    def index(self):
        self.piper_name = '_piper.pdf'
        # test code
        # =======================================================================
        file = open(fpath + 'index.txt', 'a')
        self.tst_index += 1
        self.tst_master += 1
        file.write(str(self.tst_index) + "." + str(self.tst_master) + ") " + str(self.piper_name) + ', ')
        file.close()
        # =======================================================================
        plot_data = np.loadtxt('piper_data.csv', delimiter=',', skiprows=1)
        html_plot = Markup(piper(plot_data, ' ', alphalevel=1.0, color=False, file_nam=self.piper_name))
        return render_template('plot.html', title='The Plot', figure=html_plot)

    @route('/plot', methods=['GET', 'POST'], endpoint='plot')
    @nocache
    def plot(self):
        self.piper_name = str(random.randint(0, 10000001)) + '_piper.pdf'
        # test code
        # =======================================================================
        file = open(fpath + 'plot.txt', 'a')
        self.tst_plot += 1
        self.tst_master += 1
        file.write(str(self.tst_plot) + "." + str(self.tst_master) + " ) " + str(self.piper_name) + ', ')
        file.close()
        # =======================================================================
        try:
            f = request.files['data_file']
            plot_data = np.loadtxt(f, delimiter=',', skiprows=1)
            html_plot = Markup(piper(plot_data, ' ', alphalevel=1.0, color=False, file_nam=self.piper_name))
            return render_template('plot.html', title='The Plot', figure=html_plot)
        except:
            return render_template('plot.html', title='The Plot', figure="There Seems To Be A Problem With Your Data")

    @route('/download', methods=['GET', 'POST'], endpoint='download')
    @nocache
    def download(self):
        # test code
        # =======================================================================
        file = open(fpath + 'download.txt', 'a')
        self.tst_download += 1
        self.tst_master += 1
        file.write(str(self.tst_download) + "." + str(self.tst_master) + ") " + str(self.piper_name) + ', ')
        file.close()
        # =======================================================================
        return send_from_directory(directory=fpath, filename=self.piper_name)
The problem is that the instance variable that holds the file name doesn't get shared between methods. I added some test code to try and figure out what was happening. The 'tst_index', 'tst_plot' and 'tst_download' each behave as expected in that they get incremented but the 'tst_master' does not get incremented between method calls.
The output from the test code is:
index.txt
1.1) _piper.pdf,
plot.txt
1.1 ) 7930484_piper.pdf, 2.2 ) 9579691_piper.pdf,
download.txt
1.1) _init_.pdf, 2.2) _init_.pdf,
when I call the index view one (1) time, the plot view two (2) times and the download view two (2) times. As you can see, the 'tst_master' instance variable is not getting updated between method calls.
I know this would work in plain python as I tested it but what am I missing about flask and flask_classful that is causing this?
You are overcomplicating your task. You probably don't need to use flask-classful for it.
You can use ordinary flask sessions. Session is unique for each user. The only thing you need is to use some unique ID for each file. This file id can be user id if your users log in into your web app and their credentials are stored in the db. Or you can randomly generate this file id. Then you can store the filename in the flask session like this:
from flask import session
...

def plot(...):
    session['user_file_name'] = user_file_name

def download(...):
    user_file_name = session['user_file_name']
Hope this helps.
Embedding state like that in your application is generally a bad idea. There is no guarantee that the view instance that generates a response will persist for more than that one request. Store your data outside of Flask: server-side in a database, a key-value store, or even a file on disk somewhere, or client-side in the browser. Flask has a number of plugins that make that easier (flask-sqlalchemy, flask-session, flask-redis, etc.).
Flask natively offers the flask.session object, which stores information in cookies on the client side. flask-session would probably do what you want without much additional overhead if you were concerned with storing things server side. Just configure it with the File System session interface and you get a flask.session variable that handles all the magic of linking user requests to data stored on the filesystem.
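For the unique file name itself, a uuid4 hex string is a safer pick than random.randint, since it needs no shared state and is collision-resistant. A stdlib-only sketch (the function name is mine):

```python
import uuid

def unique_pdf_name(suffix='piper'):
    # uuid4 gives a practically collision-free name per session/request
    return uuid.uuid4().hex + '_' + suffix + '.pdf'
```

Store the result in flask.session (as in the answer above) so each user's downloads always point at their own file.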
This is a duplicate to this question:
How to convert suds object to xml
But the question has not been answered: "totxt" is not an attribute on the Client class.
Unfortunately I lack the reputation to add comments. So I ask again:
Is there a way to convert a suds object to its xml?
I ask this because I already have a system that consumes wsdl files and sends data to a webservice. But now the customers want to alternatively store the XML as files (to import them later manually). So all I need are 2 methods for writing data: One writes to a webservice (implemented and tested), the other (not implemented yet) writes to files.
If only I could make something like this:
xml_as_string = My_suds_object.to_xml()
The following code is just an example and does not run. And it's not elegant. Doesn't matter. I hope you get the idea what I want to achieve:
I have the function "write_customer_obj_webservice" that works. Now I want to write the function "write_customer_obj_xml_file".
import suds

def get_customer_obj():
    wsdl_url = r'file:C:/somepathhere/Customer.wsdl'
    service_url = r'http://someiphere/Customer'
    c = suds.client.Client(wsdl_url, location=service_url)
    customer = c.factory.create("ns0:CustomerType")
    return customer

def write_customer_obj_webservice(customer):
    wsdl_url = r'file:C:/somepathhere/Customer.wsdl'
    service_url = r'http://someiphere/Customer'
    c = suds.client.Client(wsdl_url, location=service_url)
    response = c.service.save(someparameters, None, None, customer)
    return response

def write_customer_obj_xml_file(customer):
    output_filename = r'C\temp\testxml'
    # The following line is the problem. "to_xml" does not exist and I can't find a way to do it.
    xml = customer.to_xml()
    fo = open(output_filename, 'a')
    try:
        fo.write(xml)
    except:
        raise
    else:
        response = 'All ok'
    finally:
        fo.close()
    return response

# Get the customer object always from the wsdl.
customer = get_customer_obj()

# Since customer is an object, setting its attributes is very easy. There are very complex objects in this system.
customer.name = "Doe J."
customer.age = 42

# Write the new customer to a webservice or store it in a file for later processing
if later_processing:
    response = write_customer_obj_xml_file(customer)
else:
    response = write_customer_obj_webservice(customer)
# Get the customer object always from the wsdl.
customer = get_customer_obj()
# Since customer is an object, setting it's attributes is very easy. There are very complex objects in this system.
customer.name = "Doe J."
customer.age = 42
# Write the new customer to a webservice or store it in a file for later proccessing
if later_processing:
response = write_customer_obj_xml_file(customer)
else:
response = write_customer_obj_webservice(customer)
I found a way that works for me. The trick is to create the Client with the option "nosend=True".
In the documentation it says:
nosend - Create the soap envelope but don't send. When specified, method invocation returns a RequestContext instead of sending it.
The RequestContext object has the attribute envelope. This is the XML as string.
Some pseudo code to illustrate:
c = suds.client.Client(url, nosend=True)
customer = c.factory.create("ns0:CustomerType")
customer.name = "Doe J."
customer.age = 42
response = c.service.save(someparameters, None, None, customer)
print response.envelope # This prints the XML string that would have been sent.
You have some issues in the write_customer_obj_xml_file function:
Fix the bad path:
output_filename = r'C:\temp\test.xml'
The following line is the problem. "to_xml" does not exist and I can't find a way to do it.
What's the type of customer? type(customer)?
xml = customer.to_xml() # to be continued...
Why mode='a'? ('a' => append, 'w' => create + write)
Use a with statement (file context manager):
with open(output_filename, 'w') as fo:
    fo.write(xml)
You don't need to return a response string: use an exception manager. The exception to catch can be EnvironmentError.
Analyse
The following call:
customer = c.factory.create("ns0:CustomerType")
constructs a CustomerType on the fly, and returns a CustomerType instance, customer.
I think you can introspect your customer object, try the following:
vars(customer) # display the object attributes
help(customer) # display an extensive help about your instance
Another way is to try the WSDL URLs by hands, and see the XML results.
You may obtain the full description of your CustomerType object.
And then?
Then, with the attributes list, you can create your own XML. Use an XML template and fill it with the object attributes.
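For instance, here is a stdlib-only way to do the "template" idea, assuming you can get the attributes as a dict (e.g. via vars(customer)); the element names are taken from the question and the helper name is mine, so adjust to match the real schema:

```python
import xml.etree.ElementTree as ET

def customer_to_xml(attrs):
    # build a <CustomerType> element with one child per attribute
    root = ET.Element('CustomerType')
    for key, value in sorted(attrs.items()):  # sorted for a stable layout
        ET.SubElement(root, key).text = str(value)
    return ET.tostring(root, encoding='unicode')

print(customer_to_xml({'name': 'Doe J.', 'age': 42}))
# → <CustomerType><age>42</age><name>Doe J.</name></CustomerType>
```

Note this produces generic XML, not necessarily the SOAP envelope the web service expects; the nosend trick above is the way to get the exact envelope.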
You may also find a magic function (to_xml) which does the job for you. But I'm not sure the XML format will match your needs.
client = Client(url)
client.factory.create('somename')
# The last XML request by client
client.last_sent()
# The last XML response from Web Service
client.last_received()
I am trying to use endpoints to update some JSON values in my datastore. I have the following Datastore in GAE...
class UsersList(ndb.Model):
    UserID = ndb.StringProperty(required=True)
    ArticlesRead = ndb.JsonProperty()
    ArticlesPush = ndb.JsonProperty()
In general what I am trying to do with the API is have the method take in a UserID and a list of articles read (with an article being represented by a dictionary holding an ID and a boolean field saying whether or not the user liked the article). My messages (centered on this logic) are the following...
class UserID(messages.Message):
    id = messages.StringField(1, required=True)

class Articles(messages.Message):
    id = messages.StringField(1, required=True)
    userLiked = messages.BooleanField(2, required=True)

class UserIDAndArticles(messages.Message):
    id = messages.StringField(1, required=True)
    items = messages.MessageField(Articles, 2, repeated=True)

class ArticleList(messages.Message):
    items = messages.MessageField(Articles, 1, repeated=True)
And my API/Endpoint method that is trying to do this update is the following...
@endpoints.method(UserIDAndArticles, ArticleList,
                  name='user.update',
                  path='update',
                  http_method='GET')
def get_update(self, request):
    userID = request.id
    articleList = request.items
    queryResult = UsersList.query(UsersList.UserID == userID)
    currentList = []
    # This query always returns only one result back, and this for loop is the only way
    # I could figure out how to access the query results.
    for thing in queryResult:
        currentList = json.loads(thing.ArticlesRead)
    for item in articleList:
        currentList.append(item)
    for blah in queryResult:
        blah.ArticlesRead = json.dumps(currentList)
        blah.put()
    for thisThing in queryResult:
        pushList = json.loads(thisThing.ArticlesPush)
    return ArticleList(items=pushList)
I am having two problems with this code. The first is that I can't seem to figure out (using the localhost Google APIs Explorer) how to send a list of articles to the endpoints method using my UserIDAndArticles class. Is it possible to have a messages.MessageField() as an input to an endpoint method?
The other problem is that I am getting an error on the 'blah.ArticlesRead = json.dumps(currentList)' line. When I try to run this method with some random inputs, I get the following error...
TypeError: <Articles
id: u'hi'
userLiked: False> is not JSON serializable
I know that I have to make my own JSON encoder to get around this, but I'm not sure what the format of the incoming request.items is like and how I should encode it.
I am new to GAE and endpoints (as well as this kind of server side programming in general), so please bear with me. And thanks so much in advance for the help.
A couple things:
http_method should definitely be POST, or better yet PATCH because you're not overwriting all existing values but only modifying a list, i.e. patching.
you don't need json.loads and json.dumps, NDB does it automatically for you.
you're mixing Endpoints messages and NDB model properties.
Here's the method body I came up with:
# get the UsersList entity and raise an exception if none found.
uid = request.id
userlist = UsersList.query(UsersList.UserID == uid).get()
if userlist is None:
    raise endpoints.NotFoundException('List for user ID %s not found' % uid)

# update user's read articles list, which is actually a dict.
for item in request.items:
    userlist.ArticlesRead[item.id] = item.userLiked
userlist.put()

# assuming userlist.ArticlesPush is actually a list of article IDs.
# (note: userLiked is a required field on Articles, so you may need to set it too)
pushItems = [Articles(id=id) for id in userlist.ArticlesPush]
return ArticleList(items=pushItems)
Also, you should probably wrap this method in a transaction.
I have a small script that checks a large list of domains for their MX records, everything works fine but when the script finds a domain with no record, it takes quite a long time to skip to the next one.
I have tried adding:
query.lifetime = 1.0
or
query.timeout = 1.0
but this doesn't seem to do anything. Does anyone know how this setting is configured?
My script is below, thanks for your time.
import dns.resolver
from dns.exception import DNSException
import dns.query
import csv

domains = csv.reader(open('domains.csv', 'rU'))
output = open('output.txt', 'w')

for row in domains:
    try:
        domain = row[0]
        query = dns.resolver.query(domain, 'MX')
        query.lifetime = 1.0
    except DNSException:
        print "nothing here"

    for rdata in query:
        print domain, " ", rdata.exchange, 'has preference', rdata.preference
        output.writelines(domain)
        output.writelines(",")
        output.writelines(rdata.exchange.to_text())
        output.writelines("\n")
You're setting the timeout after you've already performed the query. So that's not gonna do anything!
What you want to do instead is create a Resolver object, set its timeout, and then call its query() method (renamed resolve() in dnspython 2.x). dns.resolver.query() is just a convenience function that instantiates a default Resolver object and invokes its query() method, so you need to do that manually if you don't want a default Resolver.
resolver = dns.resolver.Resolver()
resolver.timeout = 1
resolver.lifetime = 1
Then use this in your loop:
try:
    domain = row[0]
    query = resolver.resolve(domain, 'MX')
except DNSException:
    # etc.
You should be able to use the same Resolver object for all queries.
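If it helps, the per-domain lookup can be wrapped so one dead domain just yields an empty result instead of stopping the run. The helper name and the injected resolver argument are my own (not dnspython API); in your script you'd pass the configured dns.resolver.Resolver from above:

```python
def mx_records(domain, resolver):
    """Return a list of (preference, exchange) pairs, or [] on any failure.

    With resolver.timeout/lifetime set to 1, a dead domain costs at most
    about one second instead of the much longer default.
    """
    try:
        answer = resolver.resolve(domain, 'MX')
    except Exception:  # NXDOMAIN, NoAnswer, timeout, etc.
        return []
    return [(r.preference, r.exchange.to_text()) for r in answer]
```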