How to open a document from a Notes View with python noteslib?

I have an established connection with a Notes database and I am able to loop through all the records in a view. What I am curious about is whether it is possible to open a document and get the data from it using Python (like double-clicking on a record in an HCL Notes client).
Here is my code simplified:
import noteslib

db = noteslib.Database('my-domino-server', 'my-db.nsf', 'mypassword')
view = db.GetView('my_view')
doc = view.GetFirstDocument()
while doc:
    print(doc.ColumnValues)
    # After printing the column values, I want to open the document
    # and store its values in a variable.
    doc = view.GetNextDocument(doc)
I tried googling LotusScript and found the Open() method, but doc.Open() did not work.

Just use the LotusScript documentation to find examples for everything you need.
In your case you start with the NotesDatabase class, then get an object of type NotesView, and finally get a NotesDocument object.
This doc object does not need to be opened. You can directly access all items in that document, either by name or, if you don't know the name, by cycling through all items.
If you know the name of an item (it can be found on the second tab of the document properties box, opened with Alt+Enter), then you can read the value like this:
# Subject of a given mail
subject = doc.GetItemValue("Subject")[0]
# Start date of a calendar entry
startdate = doc.GetItemValue("StartDate")[0]

# Cycle through all items
for item in doc.Items:
    print(item.Name)
    value = item.Values
Take care: item values are always arrays, even if they contain only a single value. To get the value of a single-value item, always access the element at position 0.
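Putting that together with the loop from the question, a minimal sketch (item names such as "Subject" depend on the form design of your documents, so treat them as placeholders):

import noteslib

db = noteslib.Database('my-domino-server', 'my-db.nsf', 'mypassword')
view = db.GetView('my_view')
doc = view.GetFirstDocument()
while doc:
    print(doc.ColumnValues)
    # Single-value item: take element 0 of the returned array
    subject = doc.GetItemValue('Subject')[0]
    # Or capture every item on the document at once
    values = {item.Name: item.Values for item in doc.Items}
    doc = view.GetNextDocument(doc)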

Related

How to search through a paragraph of text for a list of words, and once found, add to a dictionary

I am using a GET request to receive XML information back from a website. I have parsed the information and have gotten to a point where I want to set up a new scenario.
The scenario is: when I run a for loop through the XML (now turned into text in a list), how can I:
search the whole text for a variety of words? For example, say I know I'm looking for "value", "Lastupdatedate", and "Createdby" and want to grab that information every time I see it
if I find one of those words, grab the word and everything in between the tags (this is XML, so if I find "value" I want to grab the information in between, like <value>5</value>)? So the final information would be value: 5
finalvalues = []
MASTER = ["This is the text I'm searching through"]
variables = ["value", "Lastupdatedate", "Createdby"]
for i in MASTER:
    # need something here to grab the actual name that it found
    tst = re.findall(variables(.*?)variables, i)  # pseudocode - not valid Python
    finalvalues.append(tst)
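One way to keep track of which word matched is a single alternation pattern with a backreference. A sketch, assuming the text really contains tag pairs like <value>5</value> (the exact markup and sample data are guesses from the question):

import re

MASTER = ['<value>5</value><Lastupdatedate>2013-05-01</Lastupdatedate>']  # hypothetical sample
variables = ['value', 'Lastupdatedate', 'Createdby']
# Group 1 captures which word matched; \1 requires the same word in the closing tag
pattern = re.compile(r'<(%s)>(.*?)</\1>' % '|'.join(variables))

finalvalues = []
for i in MASTER:
    # findall returns (name, value) pairs, e.g. ('value', '5')
    finalvalues.extend(pattern.findall(i))
print(finalvalues)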

Getting element density from abaqus output database using python scripting

I'm trying to get the element density from the Abaqus output database. I know you can request a field output for the volume using 'EVOL'; is something similar possible for the density?
I'm afraid it's not, because of this: Getting element mass in Abaqus postprocessor
What would be the most efficient way to get the density? Look up, for every element, which section set it is in?
Found a solution; I don't know if it's the fastest, but it works:
odb_file_path = r'your_path\file.odb'
odb = session.openOdb(name=odb_file_path)
instance = odb.rootAssembly.instances['MY_PART']
material_name = instance.elements[0].sectionCategory.name[8:-2]
density = odb.materials[material_name].density.table[0][0]
Note: the 'name' attribute will give you a string like 'solid MATERIALNAME', so I just cut out the part of the string that gave me the real material name. So it's the sectionCategory attribute of an OdbElementObject that is the answer.
EDIT: This doesn't seem to work after all, it turns out that it gives all elements the same material name, being the name of the first material.
The properties are associated something like this (see the sketch after the list):
sectionAssignment connects section to set
set is the container for element
section connects sectionAssignment to material
instance is connected to part (could be from a part from another model)
part is connected to model
model is connected to section
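A short sketch of walking part of that chain from inside an open Abaqus/CAE session, where the mdb object is available (the model, part, and index names here are hypothetical):

# sectionAssignment -> section name + element set; section -> material
model = mdb.models['Model-1']
part = model.parts['Part-1']
sa = part.sectionAssignments[0]
section = model.sections[sa.sectionName]
print(section.material)  # the material name
print(sa.region[0])      # name of the set holding the elements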
Use the .inp or .cae file if you can. The following gets it from an opened cae file. To thoroughly get elements from materials, you would do something like the following, assuming you're starting your search in rootAssembly.instances:
Find the parts which the instances were created from.
Find the models which contain these parts.
Look for all sections with material_name in these models, and store all the sectionNames associated with these sections
Look for all sectionAssignments which reference these sectionNames
Under each of these sectionAssignments, there is an associated region object which has the name (as a string) of an elementSet and the name of a part. Get all the elements from this elementSet in this part.
Cleanup:
Use the Python set object to remove any multiple references to the same element.
Multiply the number of elements in this set by the number of identical part instances that refer to this material in rootAssembly.
E.g., for some cae model variable called model:
model_part_repeats = {}
model_part_elemLabels = {}
for instance in model.rootAssembly.instances.values():
    p = instance.part.name
    m = instance.part.modelName
    try:
        model_part_repeats[(m, p)] += 1
        continue
    except KeyError:
        model_part_repeats[(m, p)] = 1

    # Get all sections in model
    sectionNames = []
    for s in mdb.models[m].sections.values():
        if s.material == material_name:  # material_name is already known
            # This is a valid section - search for section assignments
            # in part for this section, and then the associated set
            sectionNames.append(s.name)

    if sectionNames:
        labels = []
        for sa in mdb.models[m].parts[p].sectionAssignments:
            if sa.sectionName in sectionNames:
                eset = sa.region[0]
                labels = labels + [e.label for e in mdb.models[m].parts[p].sets[eset].elements]
        labels = list(set(labels))
        model_part_elemLabels[(m, p)] = labels
    else:
        model_part_elemLabels[(m, p)] = []

num_elements_with_material = sum([model_part_repeats[k] * len(model_part_elemLabels[k]) for k in model_part_repeats])
Finally, grab the material density associated with material_name then multiply it by num_elements_with_material.
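In cae terms that might look like the following sketch; it assumes the material actually defines a density, and reuses m and material_name from the loop above:

density = mdb.models[m].materials[material_name].density.table[0][0]
total = density * num_elements_with_material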
Of course, this method will be extremely slow for larger models, and it is more advisable to use string techniques on the .inp file for faster performance.
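For the .inp route, a rough sketch; it assumes standard single-line section keywords such as '*Solid Section, elset=Set-1, material=Steel', and the file and material names here are hypothetical:

import re

# Matches e.g. '*Solid Section, elset=Set-1, material=Steel'
section_re = re.compile(r'^\*\w+ section.*?elset=([^,\s]+).*?material=([^,\s]+)',
                        re.IGNORECASE)
elsets = []
with open('job.inp') as f:
    for line in f:
        match = section_re.match(line)
        if match and match.group(2).upper() == 'STEEL':
            elsets.append(match.group(1))
print(elsets)  # element sets whose section uses the material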

How to use dictionaries to store a list of data from a file and do a check if data occurs in multiple keys?

I am fairly new to Python and what I am trying to achieve is to read JSON and store it in a dictionary. I need to store a common variable index, a change variable index, and a list of changes for the change variable.
So far I get the JSON message and set the common variable index and append it to a list. I then store the change variable in the same list before appending all the changes for it.
For example: if I had two authors and each author wrote three books, for each book I need to store the changes they made (store page numbers changed). Once all that data is stored, I would need to loop over all the changes for each book and see if they appear in the other books, then print out the author and the book that has the same page number changed. (I know it is not the best example, but you get the point.)
Reading in from a file:
import re

list = []
item = []
with open("tst.file", 'r') as f:
    for line in f:
        authorRegEx = re.search(r"\([A-Z0-9-]+\)", line)
        author = authorRegEx.group(0).strip("()")
        list.append(author)
        bookItemRegEx = re.search(r"\*[A-Z0-9-]+\*", line)
        book = bookItemRegEx.group(0).strip("*")
        list.append(book)
        # *** Then call a REST API to get the JSON message ***
        modifiedPages = api_call  # placeholder for the REST call
        for i in range(len(modifiedPages['values'])):
            pageNumber = modifiedPages['values'][i]['page']
            item.append(pageNumber)
        list.append(item)
        item = []
print(list)
Which outputs
['Author1', 'book1', [u'page1', u'page2', u'page3'], 'Author1', 'book2', [u'page2', u'page4'], 'Author2', 'book1', [u'page2', u'page3']]
At this point I am at a loss as to what to do next. I need to loop over the page(x) values for each author and book and check if they appear in the others, then print out which author, book, and page contain the same page(x) number.
Thanks in advance.
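A minimal sketch of that cross-check, using a nested dictionary keyed by (author, book) instead of a flat list (the data here is hypothetical and mirrors the output above):

from collections import defaultdict

# (author, book) -> set of changed pages
changes = {
    ('Author1', 'book1'): {'page1', 'page2', 'page3'},
    ('Author1', 'book2'): {'page2', 'page4'},
    ('Author2', 'book1'): {'page2', 'page3'},
}

# Invert the mapping: page -> list of (author, book) pairs that changed it
pages = defaultdict(list)
for key, pageset in changes.items():
    for page in pageset:
        pages[page].append(key)

# Report any page that was changed in more than one book
for page, keys in pages.items():
    if len(keys) > 1:
        for author, book in keys:
            print('%s %s %s' % (author, book, page))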

Add a field to existing document in CouchDB

I have a database with a bunch of regular documents that look something like this (example from wiki):
{
    "_id": "some_doc_id",
    "_rev": "D1C946B7",
    "Subject": "I like Plankton",
    "Author": "Rusty",
    "PostedDate": "2006-08-15T17:30:12-04:00",
    "Tags": ["plankton", "baseball", "decisions"],
    "Body": "I decided today that I don't like baseball. I like plankton."
}
I'm working in Python with couchdb-python and I want to know if it's possible to add a field to each document. For example, if I wanted to have a "Location" field or something like that.
Thanks!
Regarding IDs
Every document in couchdb has an id, whether you set it or not. Once the document is stored you can access it through the doc._id field.
If you want to set your own ids you'll have to assign the id value to doc._id. If you don't set it, then couchdb will assign a uuid.
If you want to update a document, then you need to make sure you have the same id and a valid revision. If, say, you are working from a blog post and the user adds the Location, then the URL of the post may be a good id to use. You'd be able to instantly access the document in this case.
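For example, a minimal sketch with couchdb-python (the server URL, database name, and document id here are assumptions):

import couchdb

couch = couchdb.Server('http://localhost:5984/')  # assumed local server
db = couch['blog']                                # hypothetical database name
# Using the post URL as the id makes the doc instantly retrievable later
doc_id, rev = db.save({'_id': 'posts/i-like-plankton', 'Author': 'Rusty'})
doc = db['posts/i-like-plankton']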
So what's a revision
In your code snippet above you have the doc._rev element. This is the identifier of the revision. If you save a document with an id that already exists, couchdb requires you to prove that the document is still the valid doc and that you are not trying to overwrite someone else's document.
So how do I update a document
If you have the id of your document, you can just access each document by using the db.get(id) function. You can then update the document like this:
doc = db.get(id)
doc['Location'] = "On a couch"
db.save(doc)
I have an example where I store weather forecast data. I update the forecasts approximately every 2 hours. A separate process looks up data that I get from a different provider, based on characteristics of tweets on the day.
This looks something like this.
doc = db.get(id)
doc_with_loc = GetLocationInformationFromOtherProvider(doc) # takes about 40 seconds.
doc_with_loc["_rev"] = doc["_rev"]
db.save(doc_with_loc) # This will fail if weather update has also updated the file.
If you have concurrent processes, then the _rev will become invalid, so you have to have a failsafe, e.g. this could do:
doc = db.get(id)
doc_with_loc = GetLocationInformationFromAltProvider(doc)
update_outstanding = True
while update_outstanding:
    doc = db.get(id)  # re-retrieve to pick up the current _rev
    doc_with_loc["_rev"] = doc["_rev"]
    try:
        db.save(doc_with_loc)
        update_outstanding = False
    except couchdb.http.ResourceConflict:
        pass  # another process updated first; retry with the fresh _rev
So how do I get the Ids?
One option suggested above is that you actively set the id so you can retrieve it, i.e. if a user sets a given location that is attached to a URL, use the URL. But you may not know which document you want to update, or you may even have a process that finds all the documents that don't have a location and assigns one.
You'll most likely be using a view for this. Views have a mapper and a reducer. You'll use the first one, forget about the last one. A view with a mapper does the following:
It returns a simplified/transformed way of looking at your data. You can return multiple values per document or skip some. It gives the data you emit a key, and if you use the include_docs option it will give you the document (with _id and _rev alongside).
The simplest view is the default view db.view('_all_docs'); this will return all documents, and you may not want to update all of them. (Views, for example, will be stored as documents as well when you define them.)
The next simple way is to have a view that only returns items that are of the type of the document. I tend to have a _type="article" field in my database. Think of this as marking that a document belongs to a certain table, if you had stored them in a relational database.
Finally you can filter elements that have a location, so you'd have a view where you can iterate over all those docs that still need a location and identify them in a separate process. The best documentation on writing views can be found here.
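A sketch of that last idea with couchdb-python, using a temporary view via db.query() (fine for small databases; a permanent design document is better in production). The _type value and the default location are assumptions:

# JavaScript map function: emit only articles that still lack a Location
map_fun = '''function(doc) {
    if (doc._type == "article" && !doc.Location) {
        emit(doc._id, null);
    }
}'''
for row in db.query(map_fun):
    doc = db[row.id]
    doc['Location'] = 'Unknown'  # placeholder for the separate process to fill in
    db.save(doc)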

How do I get the full content of a Splunk search result when using the Python SDK?

I can get the results from a one_shot query, but I can't get the full content of the _raw field.
import splunklib.client as client
import splunklib.results as results
def splunk_oneshot(search_string, **CARGS):
# Run a oneshot search and display the results using the results reader
service = client.connect(**CARGS)
oneshotsearch_results = service.jobs.oneshot(search_string)
# Get the results and display them using the ResultsReader
reader = results.ResultsReader(oneshotsearch_results)
for item in reader:
for key in item.keys():
print(key, len(item[key]), item[key])
This gives me the following for _raw:
('_raw', 120, '2013-05-03 22:17:18,497+0000 [SWF Activity attNsgActivitiesTaskList1 19] INFO c.s.r.h.s.a.n.AttNsgSmsRequestAdapter - ')
So this content is truncated at 120 characters. I need the entire value of the search result, because I need to run some string comparisons thereupon. I have not found any documentation on the ResultsReader fields or their size restrictions.
My best guess is that this is caused by the insertion of special tags in the event's raw data to highlight matched search terms in the Splunk UI front-end. In all likelihood, your search string specifies a literal term present in the raw data right at the point of truncation. This is not an appropriate default behavior for the SDK's result-fetching method, and there is currently a bug open to fix this (internal reference DVPL-1519).
Fortunately, avoiding this problem is fairly trivial: One simply needs to pass segmentation='none' as an argument to the job.results() method:
(...)
oneshotsearch_results = service.jobs.oneshot(search_string, segmentation='none')
(...)
Do note that the 'segmentation' argument for the service.jobs() methods is only available in Splunk 5.0 and onwards.
