I am trying to get a specific document from a Domino view.
The view has 3 columns: Name, Surname, Age.
The problem is that Name is not unique, so I need to get the document that matches 'John' in the Name column (1st column) as well as 'Doe' in the second column (Surname).
So obviously the following won't work: doc = view.GetDocumentByKey('John')
There is a NotesView COM class with a .GetDocumentByKey() method that accepts a key array, but I have not been able to pass a key array from Python.
I have tried the following:
doc = view.GetDocumentByKey('John Doe')
doc = view.GetDocumentByKey('John, Doe')
doc = view.GetDocumentByKey(('John', 'Doe'))
doc = view.GetDocumentByKey(['John', 'Doe'])
But none of them are able to get the needed document.
What is the correct way to pass a key array?
EDIT:
Solution found. There was a sorted hidden column with unique values that I ended up using.
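For what it's worth, the usual obstacle here is that win32com does not always marshal a plain Python list or tuple into the COM array the method expects. A minimal, untested sketch of one workaround, assuming pywin32 and that view already holds the NotesView object:
import pythoncom
from win32com.client import VARIANT

# Wrap the keys in an explicit VARIANT array so COM receives a real key array.
keys = VARIANT(pythoncom.VT_ARRAY | pythoncom.VT_VARIANT, ['John', 'Doe'])

# The second argument asks for an exact match on the sorted columns.
doc = view.GetDocumentByKey(keys, True)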
Greetings everyone. I'm a new user of Stack Overflow. I have code I'm developing that requires me to capture some faculty data into a list, update that list, then merge it with another list and FTP a CSV file. First things first.
I created an empty list
records: List[EmptyRecord] = []
and using
records.extend(faculty_records)
I now have a list of faculty data. The email address is at index 3.
I have a SQL statement GET_MAIL, stored in a docstring, that returns the email address I need in order to update the value at index 3 in faculty_records. I think I need some sort of
records.insert(3, '{email address}')
inside a while loop for all the values in faculty_records.
I have the username at index 2 and the ID at index 4 in the list to match which address to update. It's PeopleSoft data, so the ID in the list has to match the emplid from the SQL results.
Can someone assist in getting my pseudocode into Python?
Once I get the values updated, I need to merge with my student data list, which should be as easy as:
records.extend(student_records)
and send both student and faculty data to a vendor.
insert() adds a new element at that index, shifting all the following elements over to make room; it doesn't replace. Just use ordinary assignment to replace an element.
Loop through the records with a for loop to find the record with the username you want to update. Then assign to element 3 to update the email.
for record in records:
    if record[2] == username_to_update:
        record[3] = new_email
        break
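If every faculty record needs its email refreshed, a lookup table saves rescanning the SQL results for each record. A hypothetical sketch, assuming get_mail_rows holds the (emplid, email) tuples fetched with the GET_MAIL query and that each record is a mutable list:
# Build a one-shot lookup from emplid to email.
email_by_emplid = {emplid: email for emplid, email in get_mail_rows}

for record in records:
    # ID is at index 4; replace the email at index 3 when it matches.
    new_email = email_by_emplid.get(record[4])
    if new_email is not None:
        record[3] = new_email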
I have a data frame like
I have a dictionary with the ec2 instance details
Now I want to add a new column, 'Instance Name', and populate it on the condition that the instance ID from the dictionary appears in the 'ResourceId' column; then, depending on what is in the dictionary's Name field for that instance ID, I want to populate the new column's value for each matching entry.
Finally, I want to create separate data frames for my specific use cases, e.g. to get only Box-Usage results. Something like this:
box_usage = df[df['lineItem/UsageType'].str.contains('BoxUsage')]
print(box_usage.groupby('Instance Name')['lineItem/BlendedCost'].sum())
The new column value is not lining up against the respective ResourceId as I want; instead it is being filled in sequentially.
I have tried a bunch of things, including what I mentioned in the code above, but no result yet. Any help?
After struggling through several options, I used the .apply() approach and it did the trick:
# Add the new column with a default value.
df.insert(loc=17, column='Instance_Name', value='Other')

def update_col(x):
    # Look the ResourceId up in ec2info and map the instance's
    # Name to a friendly label.
    for key, val in ec2info.items():
        if x == key:
            if ('MyAgg' in val['Name']) | ('MyAgg-AutoScalingGroup' in val['Name']):
                return 'SharkAggregator'
            if ('MyColl AS Group' in val['Name']) | ('MyCollector-AutoScalingGroup' in val['Name']):
                return 'SharkCollector'
            if ('MyMetric AS Group' in val['Name']) | ('MyMetric-AutoScalingGroup' in val['Name']):
                return 'Metric'

df['Instance_Name'] = df.ResourceId.apply(update_col)
df.Instance_Name.fillna(value='Other', inplace=True)
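A vectorized alternative, sketched under the same assumptions (ec2info maps instance IDs to dicts with a 'Name' field): precompute the ID-to-label mapping once and let Series.map do the lookup instead of calling a Python loop per row.
def label_for(val):
    # Same matching rules as update_col above.
    name = val['Name']
    if 'MyAgg' in name or 'MyAgg-AutoScalingGroup' in name:
        return 'SharkAggregator'
    if 'MyColl AS Group' in name or 'MyCollector-AutoScalingGroup' in name:
        return 'SharkCollector'
    if 'MyMetric AS Group' in name or 'MyMetric-AutoScalingGroup' in name:
        return 'Metric'
    return 'Other'

id_to_label = {key: label_for(val) for key, val in ec2info.items()}
df['Instance_Name'] = df['ResourceId'].map(id_to_label).fillna('Other')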
I have a table where each row has 10 different fields, but for a certain changefeed I'm only interested in watching for a change in one of the fields while returning two of them.
For instance, if I have a table with the fields [id, name, age, height], I'd want to see only the changes made to age (I don't care about height changes), and I want the cursor to return both name and age.
I know that to look at one field I can do:
r.table('Customers').get_all(*list_of_customers).get_field('age').changes().run()
but this wouldn't return the ID or the Name.
Is there a way to do this?
Thanks
You can use pluck, which accepts several fields, instead of get_field.
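A minimal sketch, assuming conn is an open connection and list_of_customers holds the keys passed to get_all:
# Watch the plucked fields and receive name alongside age in each change.
feed = (r.table('Customers')
        .get_all(*list_of_customers)
        .pluck('id', 'name', 'age')
        .changes()
        .run(conn))

for change in feed:
    print(change['old_val'], change['new_val'])
Note that the feed fires when any plucked field changes, so name changes will show up too; if only age should trigger events, filter the cursor results client-side.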
If you have a look at this website: http://gbgfotboll.se/serier/?scr=table&ftid=57109
The information in the second table is what I need.
What I am doing right now:
I am going through every cell in the column Tid to match a specific date. If it matches, the code goes on to extract other relevant data from that row. The code for that looks like this:
rows_xpath = XPath("//*[#id='content-primary']/table[2]/tbody/tr[td[1]/span/span//text()='%s']" % (date))
time_xpath = XPath("td[1]/span/span//text()[2]")
team_xpath = XPath("td[2]/a/text()")
html = lxml.html.parse(url)
league_xpath = XPath("//*[#id='content-primary']/h1//text()")
divName = league_xpath(html)[0]
trash, divisionName = divName.rsplit("- ")
dict[divisionName] = {}
for i,row in enumerate(rows_xpath(html)):
.... doing some stuff here
Problem:
As time goes on, another table will be inserted into the webpage, meaning rows_xpath will become invalid, since it would need to change to this:
rows_xpath = XPath("//*[@id='content-primary']/table[3]/tbody/tr[td[1]/span/span//text()='%s']" % (date))
What changes is table[x], where x is the index being changed.
Is there a smart solution to this, or even a better, more robust way of getting the information I need that does not depend on the XPath? I appreciate all the help I can get!
You do not necessarily have to specify the table element number if you are simply looking for unique date values across all tables:
rows_xpath = XPath("//*[@id='content-primary']/table/tbody/tr[td[1]/span/span//text()='%s']" % (date))
This would return a collection of matching rows from however many tables there may be. If, however, you are looking for dates in a specific table, you may have to select the table with an XPath first (assuming there is something unique you can key on), then use an XPath within that table.
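A sketch of that idea, assuming (hypothetically) that the target table can be recognized by a header cell containing 'Tid'; adjust the predicate to whatever is actually unique about that table:
import lxml.html
from lxml.etree import XPath

# Select the table by content instead of by position, so newly
# inserted tables do not shift the index.
rows_xpath = XPath(
    "//*[@id='content-primary']/table[.//th[contains(., 'Tid')]]"
    "/tbody/tr[td[1]/span/span//text()='%s']" % date
)

html = lxml.html.parse(url)
rows = rows_xpath(html)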
I have written a simple script that prints out and adds the name of a table and its associated column headings to a Python list:
import arcpy

b = []
for table in arcpy.ListTables():
    for field in arcpy.ListFields(table):
        # ListTables returns table names as strings.
        b.append(field.name + "," + table)
print b
In each table there are a number of column headings, and there are many instances where multiple tables contain the same column headings. I want a sort of reverse Python dictionary instead of a list, where the keys are the column headings and the values are the table names. The idea is to find all the tables that each column heading appears in.
I've been playing around all afternoon and I think I'm overthinking this, so I came here for some help. If anyone can suggest how I can accomplish this, I would appreciate it.
Thanks,
Mike
Try this:
result = {}
for table in arcpy.ListTables():
    for field in arcpy.ListFields(table):
        result.setdefault(field.name, []).append(table)
If I understand correctly, you want to map from a column name to a list of the tables that have columns with that name. That should be easy enough to do with a defaultdict:
from collections import defaultdict

header_to_table_dict = defaultdict(list)
for table in arcpy.ListTables():
    for field in arcpy.ListFields(table):
        header_to_table_dict[field.name].append(table.name)
I'm not sure if table.name is what you want to save, exactly, but this should get you on the right track.
You want to create a dictionary in which each key is a field name, and each value is a list of table names:
# initialize the dictionary
col_index = {}

for table in arcpy.ListTables():
    for field in arcpy.ListFields(table):
        if field.name not in col_index:
            # this is a field name we haven't seen before,
            # so initialize a dictionary entry with an empty list
            # as the corresponding value
            col_index[field.name] = []
        # add the table name to the list of tables for this field name
        col_index[field.name].append(table.name)
And then, if you want a list of tables that contain the field LastName:
list_of_tables = col_index['LastName']
If you're using a database that is case-insensitive with respect to column names, you might want to convert field.name to upper case before testing the dictionary.
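A sketch of that case-insensitive variant, normalizing on both insert and lookup:
# Build the index with upper-cased field names...
col_index = {}
for table in arcpy.ListTables():
    for field in arcpy.ListFields(table):
        # ListTables returns name strings, so table is already the name.
        col_index.setdefault(field.name.upper(), []).append(table)

# ...and upper-case the name when querying it.
list_of_tables = col_index.get('LastName'.upper(), [])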